[X:AI] DINO Paper Review
Original paper: https://arxiv.org/abs/2104.14294
Emerging Properties in Self-Supervised Vision Transformers
"In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works partic.."
1. Introduction: ViT is competitive with CNNs, but..
2025. 7. 28.

[X:AI] Faster R-CNN Paper Review
Original paper: https://arxiv.org/abs/1506.01497
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottle.."
1. Abstract ..
2025. 1. 21.

[X:AI] DDPM Paper Review
Denoising Diffusion Probabilistic Models
Original paper: https://arxiv.org/abs/2006.11239
Presentation video: https://www.youtube.com/watch?v=ASwSuJmMtts
"We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training .."
2024. 8. 11.

[X:AI] BYOL Paper Review
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Original paper: https://arxiv.org/abs/2006.07733
"We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from.."
2024. 8. 5.
[X:AI] NeRF Paper Review
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Original paper: https://arxiv.org/abs/2003.08934
"We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorit.."
2024. 7. 28.

[X:AI] VAE Paper Review
Auto-Encoding Variational Bayes
Original paper: https://arxiv.org/abs/1312.6114
"How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning.."
1. Abstract & Introduction: "intr.."
2024. 7. 26.