Literature Review


[Paper Review] Attention Is All You Need

Paper link: https://arxiv.org/abs/1706.03762
Abstract (excerpt): "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new…"
Post excerpt: 1. Introduction / 1.1 Why this paper was chosen — "Attention Is All You Need" is the landmark paper that proposed the Transformer model…


[Paper Review] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

Paper link: https://arxiv.org/abs/2501.12948
Abstract (excerpt): "We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning…"
Post excerpt: 1. Introduction / 1.1 …


[Paper Review] AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head

Paper link: https://arxiv.org/abs/2304.12995
Abstract (excerpt): "Large language models (LLMs) have exhibited remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. Despite the recent success, current LLMs are not capable of processing complex audio information…"
Post excerpt: 1. Introduction / 1.1 Why this paper was chosen …


[Paper Review] FastSpeech 2: Fast and High-Quality End-to-End Text to Speech

Paper link: https://arxiv.org/abs/2006.04558
Abstract (excerpt): "Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration…"
Post excerpt: 1. Introduction / 1.1 Why this paper was chosen — FastSpeech 2 is …


[Paper Review] FastSpeech: Fast, Robust and Controllable Text to Speech

Paper link: https://arxiv.org/abs/1905.09263
Abstract (excerpt): "Neural network based end-to-end text to speech (TTS) has significantly improved the quality of synthesized speech. Prominent methods (e.g., Tacotron 2) usually first generate mel-spectrogram from text, and then synthesize speech from the mel-spectrogram using…"
Post excerpt: 1. Introduction / 1.1 Why this paper was chosen — FastSpeech is a deep-learning-based …


[Paper Review] Sequence to Sequence Learning with Neural Networks

Paper link: https://arxiv.org/abs/1409.3215
Abstract (excerpt): "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper…"
Post excerpt: 1. Introduction / 1.1 Why this paper was chosen — regarded as the most powerful model in the field of natural language processing…

AlienCoder — list of posts in the 'Literature Review' category