
Papers/Review (12)

Sequence to Sequence Learning with Neural Networks. Table of Contents: 0. Abstract 1. Introduction 2. The Model 3. Experiments 3.1 Dataset details 3.2 Decoding and Rescoring 3.3 Reversing the Source Sentence 3.4 Training details 3.5 Parallelization 3.6 Experimental Results 3.7 Performance on long sentences 3.8 Model Analysis 4. Related work 5. Conclusion 6. Acknowledgments. 0. Abstract: large labeled training sets → DNNs work well; sequence-to-sequence mapping → DNNs do not work well.. 2024. 8. 28.
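The preview above only states the problem (DNNs handle large labeled training sets well but not sequence-to-sequence mapping). As a minimal sketch of the encoder-decoder LSTM idea the paper proposes, here is an illustrative PyTorch module; it is not the post's code, and all names and sizes (Seq2Seq, emb, hidden) are assumptions.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    # Hypothetical sketch: one LSTM encodes the source sentence into a fixed
    # (h, c) state, and a second LSTM generates the target sequence from it.
    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))            # encode the (optionally reversed) source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # teacher-forced decoding from the encoder state
        return self.out(dec_out)                                  # logits over the target vocabulary

Reversing the source sentence (Section 3.3 in the table of contents) only changes the order of src_ids before the encoder sees them.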
ImageNet Classification with Deep Convolutional Neural Networks. Table of Contents: 0. Abstract 1. Introduction 2. The Dataset 3. The Architecture 3.1 ReLU Nonlinearity 3.2 Training on Multiple GPUs 3.3 Local Response Normalization 3.4 Overlapping Pooling 3.5 Overall Architecture 4. Reducing Overfitting 4.1 Data Augmentation 4.2 Dropout 5. Details of learning 6. Results 6.1 Qualitative Evaluations 7. Discussion. 0. Abstract: neural network - 60M parameters - 650,000 neurons - 5 convolutional lay.. 2024. 8. 7.
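To make the "60M parameters, 650,000 neurons, 5 convolutional layers" summary above concrete, here is a rough single-GPU sketch of that 5-conv + 3-FC layout in PyTorch. It is not the post's code: it omits Local Response Normalization and the two-GPU split, and the layer sizes follow the paper from memory, so treat them as assumptions.

import torch.nn as nn

alexnet = nn.Sequential(
    # five convolutional layers with ReLU, three of them followed by max pooling
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    # three fully connected layers; dropout here is the overfitting remedy from Section 4.2
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),
)

Most of the roughly 60M parameters sit in the first fully connected layer (9216 × 4096 weights), which is why the paper leans on dropout and data augmentation to control overfitting.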
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Table of Contents: 0. Abstract 1. Introduction 2. Related Work 2.1 Unsupervised Feature-based Approaches 2.2 Unsupervised Fine-tuning Approaches 2.3 Transfer Learning from Supervised Data 3. BERT 3.1 Pre-training BERT 3.2 Fine-tuning BERT 4. Experiments 4.1 GLUE 4.2 SQuAD v1.1 4.3 SQuAD v2.0 4.4 SWAG 5. Ablation Studies 5.1 Effect of Pre-training Tasks 5.2 Effect of Model Size 5.3 Feature-based Approaches with BERT 6. Conclusio.. 2024. 7. 17.
Very Deep Convolutional Networks For Large-Scale Image Recognition. Table of Contents: 0. Abstract 1. Introduction 2. ConvNet Configuration 2.1 Architecture 2.2 Configuration 2.3 Discussion 3. Classification Framework 3.1 Training 3.2 Testing 3.3 Implementation Details 4. Classification Experiments 4.1 Single Scale Evaluation 4.2 Multi-Scale Evaluation 4.3 Multi-Crop Evaluation 4.4 ConvNet Fusion 4.5 Comparison with the State of the Art 5. Conclusion. 0. Abstract: in large-scale image recognition, the depth of the convolutional network .. 2024. 7. 3.
Attention Is All You Need Review. Table of Contents: 0. Abstract 1. Introduction 2. Background 3. Model Architecture 3.1 Encoder and Decoder Stacks 3.2 Attention 3.2.1 Scaled Dot-Product Attention 3.2.2 Multi-Head Attention 3.2.3 Applications of Attention in our Model 3.3 Position-wise Feed-Forward Networks 3.4 Embeddings and Softmax 3.5 Positional Encoding 4. Why Self-Attention 5. Training 5.1 Training Data and Batching 5.2 Hardware and Schedule 5.3 Optimizer 5.. 2024. 6. 22.
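The table of contents above lists Scaled Dot-Product Attention (3.2.1) but the preview cuts off before showing it. The paper's formula is Attention(Q, K, V) = softmax(QK^T / √d_k) V; the small PyTorch function below is an illustrative sketch of that single formula, not the post's code, and the tensor shapes are assumptions.

import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k: (..., seq_len, d_k), v: (..., seq_len, d_v); shapes are illustrative
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)          # query-key similarities, scaled by sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # block disallowed positions before softmax
    weights = torch.softmax(scores, dim=-1)                    # attention distribution over keys
    return weights @ v                                         # weighted sum of values

Multi-Head Attention (3.2.2) runs this function in parallel on several learned projections of Q, K, and V and concatenates the results.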
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation Review. Table of Contents: 0. Abstract 1. Introduction 2. RNN Encoder-Decoder 2.1 Preliminary: Recurrent Neural Networks 2.2 RNN Encoder-Decoder 2.3 Hidden Unit that Adaptively Remembers and Forgets 3. Statistical Machine Translation 3.1 Scoring Phrase Pairs with RNN Encoder-Decoder 3.2 Related Approaches: Neural Networks in Machine Translation 4. Experiments 4.1 Data and Baseline System 4.1.1 RNN Encoder-Decoder 4.1.2.. 2022. 8. 19.