Study

Study/Literature Review

[Paper Review] Attention Is All You Need

Paper link: https://arxiv.org/abs/1706.03762  1. Introduction  1.1 Reason for selecting the paper: "Attention Is All You Need" is the landmark paper that proposed the Transformer mo..

Study/Literature Review

[Paper Review] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

Paper link: https://arxiv.org/abs/2501.12948  1. Introduction  1.1 ..

Study/Literature Review

[Paper Review] AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head

Paper link: https://arxiv.org/abs/2304.12995  1. Introduction  1.1 Reason for selecting ..

Study/Literature Review

[Paper Review] FastSpeech 2: Fast and High-Quality End-to-End Text to Speech

Paper link: https://arxiv.org/abs/2006.04558  1. Introduction  1.1 Reason for selecting the paper: FastSpeech 2 is ..

Study/Literature Review

[Paper Review] FastSpeech: Fast, Robust and Controllable Text to Speech

Paper link: https://arxiv.org/abs/1905.09263  1. Introduction  1.1 Reason for selecting the paper: FastSpeech is a deep-learning-based ..

Study/Coding Test

[Python] Baekjoon 27961 - The More Cats, the Better

Problem: https://www.acmicpc.net/problem/27961  Solution: Scanning upward by doubling solves it easily; just be careful about the condition under which the answer can be 0.  Python: from sys import stdin def magic(n: int): if n == 0: return 0 cnt = 1 cat = 1 while cat
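The preview above cuts off mid-function; a minimal sketch of the doubling approach it describes, assuming the problem asks for the minimum number of spells (summon one cat, or double the current cats) needed to reach at least n cats:

```python
def magic(n: int) -> int:
    """Minimum spells to reach at least n cats: one spell summons the
    first cat, then each further spell doubles the count."""
    if n == 0:
        return 0  # zero cats wanted: no spell is ever cast
    cnt = 1       # first spell: summon one cat
    cat = 1
    while cat < n:
        cat *= 2  # double the cats
        cnt += 1
    return cnt

# Examples: magic(1) -> 1, magic(8) -> 4, magic(9) -> 5
```

This mirrors the truncated snippet in the preview (including its `n == 0` special case); the exact problem statement on acmicpc.net should be checked before relying on the "at least n" assumption.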

AlienCoder
List of posts in the 'Study' category