All Posts (69)

[X:AI] YOLO Paper Review
Original paper: https://arxiv.org/abs/1506.02640 (You Only Look Once: Unified, Real-Time Object Detection)
2025. 1. 28.

[X:AI] Faster R-CNN Paper Review
Original paper: https://arxiv.org/abs/1506.01497 (Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks)
2025. 1. 21.

[X:AI] MobileNet Paper Review
Original paper: https://arxiv.org/abs/1704.04861 (MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications)
2025. 1. 12.

[Paper Review] Quantized Feature Distillation for Network Quantization
Original paper: https://arxiv.org/abs/2307.10638
2024. 12. 29.

[Paper Review] Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging
Original paper: https://arxiv.org/abs/2306.16788
2024. 12. 22.

[Paper Review] Model Soups: Averaging Weights of Multiple Fine-tuned Models Improves Accuracy Without Increasing Inference Time
Original paper: https://arxiv.org/abs/2203.05482
2024. 12. 22.