
quantization (7)

[Paper Review] AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets. Original paper: https://arxiv.org/abs/2208.08084 ("This paper studies the Binary Neural Networks (BNNs) in which weights and activations are both binarized into 1-bit values, thus greatly reducing the memory usage and computational complexity. Since the modern deep neural networks are of sophisticated desi.." arxiv.org). 1. Introduction: Quantization (Quan.. 2025. 3. 16.
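AdaBin's central idea is replacing the fixed binary set {-1, +1} of classic BNNs with an adaptive two-value set {β - α, β + α} per layer. Below is a minimal sketch of that idea for convolution weights, assuming the center β is the per-channel mean and the scale α the per-channel standard deviation; the paper's exact statistics and its handling of activations may differ, and the function name is illustrative.

```python
import torch

def binarize_weights_adaptive(w: torch.Tensor) -> torch.Tensor:
    """Map full-precision conv weights onto a two-value set {beta - alpha, beta + alpha}.

    Simplified sketch: beta recenters and alpha rescales the binary set instead
    of using the fixed {-1, +1}. Statistics are taken per output channel.
    """
    beta = w.mean(dim=(1, 2, 3), keepdim=True)    # per-channel center
    alpha = w.std(dim=(1, 2, 3), keepdim=True)    # per-channel scale
    return beta + alpha * torch.sign(w - beta)    # values in {beta - alpha, beta + alpha}

# Toy usage: a 16x3x3x3 conv weight tensor; each output channel ends up with two values.
w_b = binarize_weights_adaptive(torch.randn(16, 3, 3, 3))
print(torch.unique(w_b[0]).numel())  # typically 2
```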
[Paper Review] A comprehensive review of Binary Neural Network. Original paper: https://arxiv.org/abs/2110.06804 ("Deep learning (DL) has recently changed the development of intelligent systems and is widely adopted in many real-life applications. Despite their various benefits and potentials, there is a high demand for DL processing in different computationally limite.." arxiv.org). 2. Binary Neural Network: A BNN keeps the same structure as a conventional CNN.. 2025. 3. 13.
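One reason reviews like this emphasize BNNs is that 1-bit weights and activations let the multiply-accumulate of a convolution collapse into bitwise operations. The plain-Python sketch below (helper names are made up, integers stand in for machine words) demonstrates the standard identity dot(a, b) = n - 2 * popcount(a XOR b) for {-1, +1} vectors.

```python
def pack_bits(v):
    """Pack a {-1, +1} vector into an int (bit i set when v[i] == +1)."""
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def binary_dot(a, b):
    """Dot product of two {-1, +1} vectors via XNOR-style bit tricks."""
    xor = pack_bits(a) ^ pack_bits(b)        # bit set where the signs differ
    return len(a) - 2 * bin(xor).count("1")  # agreements minus disagreements

a = [1, -1, 1, 1, -1]
b = [1, 1, -1, 1, -1]
assert binary_dot(a, b) == sum(x * y for x, y in zip(a, b))
```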
[Paper Review] A Comprehensive Overhaul of Feature Distillation. Original paper: https://arxiv.org/abs/1904.01866 ("We investigate the design aspects of feature distillation methods achieving network compression and propose a novel feature distillation method in which the distillation loss is designed to make a synergy among various aspects: teacher transform, student t.." arxiv.org). 3. Approach / 3.1. Distillation position: N.. 2025. 2. 1.
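For orientation, a bare-bones feature-distillation loss looks like the sketch below: a 1x1-convolution student transform followed by an L2 distance to the detached teacher feature map. The paper's actual design adds a margin-ReLU teacher transform, a partial L2 distance, and a specific pre-ReLU distillation position, none of which appear in this simplified version; class and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    """Generic feature distillation: project student features to the teacher's
    channel count with a 1x1 conv, then pull them toward the teacher with MSE."""

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(self.regressor(f_student), f_teacher.detach())

# Toy usage with random feature maps (batch 2, 8 -> 16 channels, 4x4 spatial).
loss = FeatureDistillLoss(8, 16)(torch.randn(2, 8, 4, 4), torch.randn(2, 16, 4, 4))
```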
[Paper Review] Quantized Feature Distillation for Network Quantization. Original paper: https://arxiv.org/abs/2307.10638 ("Neural network quantization aims to accelerate and trim full-precision neural network models by using low bit approximations. Methods adopting the quantization aware training (QAT) paradigm have recently seen a rapid growth, but are often conceptually comp.." arxiv.org). Abstract & Introduction: Neural network quantization (Neura.. 2024. 12. 29.
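The idea suggested by the title, quantizing the teacher's features before using them as distillation targets so the student is not chasing values it cannot represent at low bit-width, can be sketched with a plain uniform fake quantizer and an MSE loss as below; the paper's actual quantizer, bit-width, and loss formulation may differ, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def uniform_quantize(x: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Per-tensor asymmetric uniform fake quantization of a feature map."""
    qmax = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    return torch.round((x - lo) / scale).clamp(0, qmax) * scale + lo

def qfd_style_loss(f_student: torch.Tensor, f_teacher: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Distill toward a quantized copy of the teacher's features."""
    return F.mse_loss(f_student, uniform_quantize(f_teacher.detach(), num_bits))

# Toy usage: 4-bit quantized teacher features as the distillation target.
loss = qfd_style_loss(torch.randn(2, 16, 4, 4), torch.randn(2, 16, 4, 4))
```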
[Paper Review] Self-Supervised Quantization-Aware Knowledge Distillation. Original paper: https://arxiv.org/abs/2403.11106 ("Quantization-aware training (QAT) and Knowledge Distillation (KD) are combined to achieve competitive performance in creating low-bit deep learning models. However, existing works applying KD to QAT require tedious hyper-parameter tuning to balance the wei.." arxiv.org). Abstract: Quantization-aware trainin.. 2024. 12. 1.
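A self-supervised flavor of combining QAT and KD can be sketched as follows: the low-bit model is trained only to match the softened outputs of its own full-precision counterpart, which removes both the need for labels and the cross-entropy/KD balancing weight that the abstract calls tedious to tune. This is a hedged illustration of the general recipe rather than the paper's exact objective; the temperature value and function name are made up.

```python
import torch
import torch.nn.functional as F

def label_free_kd_loss(logits_quant: torch.Tensor, logits_fp: torch.Tensor,
                       temperature: float = 4.0) -> torch.Tensor:
    """KL divergence between the quantized model's predictions and the softened
    predictions of its full-precision counterpart; no ground-truth labels used."""
    log_p_q = F.log_softmax(logits_quant / temperature, dim=-1)
    p_fp = F.softmax(logits_fp.detach() / temperature, dim=-1)
    return F.kl_div(log_p_q, p_fp, reduction="batchmean") * temperature ** 2

# Toy usage: batch of 8 samples over 10 classes.
loss = label_free_kd_loss(torch.randn(8, 10), torch.randn(8, 10))
```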
[Paper Review] Learned Step Size Quantization. Original paper: https://arxiv.org/abs/1902.08153 ("Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for train.." arxiv.org). Abstract: To reduce computation cost and improve efficiency at inference, deep learning networks use low precision (ex... 2024. 11. 30.
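LSQ's central move is treating the quantization step size as a trainable parameter, updated through a straight-through estimator with a gradient scale of 1/sqrt(N * Q_P). The sketch below follows that recipe for a single tensor; the bit-width, signedness, and constant initial step size are illustrative defaults rather than the paper's settings (LSQ initializes the step from the data).

```python
import torch

class LSQQuantizer(torch.nn.Module):
    """Learned Step Size Quantization sketch: step size s is a learnable parameter."""

    def __init__(self, num_bits: int = 4, signed: bool = True):
        super().__init__()
        self.qn = 2 ** (num_bits - 1) if signed else 0                        # negative clip level Q_N
        self.qp = 2 ** (num_bits - 1) - 1 if signed else 2 ** num_bits - 1    # positive clip level Q_P
        self.s = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = 1.0 / (x.numel() * self.qp) ** 0.5
        s = self.s * g + (self.s - self.s * g).detach()   # same value, gradient scaled by g
        v = torch.clamp(x / s, -self.qn, self.qp)
        v_bar = v + (torch.round(v) - v).detach()         # straight-through estimator for round
        return v_bar * s

# Toy usage: quantize a weight tensor to 4 bits with a learnable step size.
w_hat = LSQQuantizer(num_bits=4)(torch.randn(64, 32))
```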