All posts in this category (71)

[X:AI] RetinaNet Paper Review (2025. 2. 3.)
Original paper: https://arxiv.org/abs/1708.02002 (Focal Loss for Dense Object Detection)
Preview: 1. Abstract & Introduction: As of this paper's publication, object detection …

[Paper Review] A Comprehensive Overhaul of Feature Distillation (2025. 2. 1.)
Original paper: https://arxiv.org/abs/1904.01866 (A Comprehensive Overhaul of Feature Distillation)
Preview: 3. Approach, 3.1. Distillation position …

[X:AI] YOLO Paper Review (2025. 1. 28.)
Original paper: https://arxiv.org/abs/1506.02640 (You Only Look Once: Unified, Real-Time Object Detection)
Preview: Abstract: Prior work adapted classification models to perform object detection …

[X:AI] Faster-RCNN Paper Review (2025. 1. 21.)
Original paper: https://arxiv.org/abs/1506.01497 (Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks)
Preview: 1. Abstract …

[X:AI] MobileNet Paper Review (2025. 1. 12.)
Original paper: https://arxiv.org/abs/1704.04861 (MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications)
Preview: Abstract …

[Paper Review] Quantized Feature Distillation for Network Quantization (2024. 12. 29.)
Original paper: https://arxiv.org/abs/2307.10638 (Quantized Feature Distillation for Network Quantization)
Preview: Abstract & Introduction: Neural network quantization …