Paper Reviews / Quantization (9 posts)

[Paper Review] AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
Paper: https://arxiv.org/abs/2208.08084
Preview: 1. Introduction — Quantization…
Posted: 2025. 3. 16.

[Paper Review] A comprehensive review of Binary Neural Network
Paper: https://arxiv.org/abs/2110.06804
Preview: 2. Binary Neural Network — A BNN keeps the same structure as a conventional CNN…
Posted: 2025. 3. 13.

[Paper Review] A Comprehensive Overhaul of Feature Distillation
Paper: https://arxiv.org/abs/1904.01866
Preview: 3. Approach — 3.1. Distillation position…
Posted: 2025. 2. 1.

[Paper Review] Quantized Feature Distillation for Network Quantization
Paper: https://arxiv.org/abs/2307.10638
Preview: Abstract & Introduction — Neural network quantization…
Posted: 2024. 12. 29.
[Paper Review] Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
Paper: https://arxiv.org/abs/2306.16788
Preview: Abstract — Neural networks can be significantly compressed by pruning…
Posted: 2024. 12. 22.

[Paper Review] Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
Paper: https://arxiv.org/abs/2203.05482
Posted: 2024. 12. 22.