All posts (76)

[Paper Review] Robust Speech Recognition via Large-Scale Weak Supervision
Original paper: https://arxiv.org/abs/2212.04356
2025. 4. 19.

[Paper Review] AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
Original paper: https://arxiv.org/abs/2208.08084
2025. 3. 16.

[Paper Review] A comprehensive review of Binary Neural Network
Original paper: https://arxiv.org/abs/2110.06804
2025. 3. 13.

[X:AI] BART Paper Review
Original paper: https://arxiv.org/abs/1910.13461v1 (BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension)
2025. 2. 11.

[X:AI] RoBERTa Paper Review
Original paper: https://arxiv.org/abs/1907.11692 (RoBERTa: A Robustly Optimized BERT Pretraining Approach)
2025. 2. 4.

[X:AI] RetinaNet Paper Review
Original paper: https://arxiv.org/abs/1708.02002 (Focal Loss for Dense Object Detection)
2025. 2. 3.