Kyunghyun Cho's research focuses mainly on artificial intelligence, machine translation, recurrent neural networks, speech recognition, and artificial neural networks. Within artificial intelligence, he combines machine learning and natural language processing. His machine translation studies touch on encoders, images, translation, rule-based machine translation, and phrases.
His work on phrases deals with sentences, intersecting with representation and byte pair encoding. His work on TIMIT connects sequence modeling with the broader field of speech recognition, and his work on gradient descent within neural networks links to random matrix theory, bridging diverse disciplines.
His scientific interests lie mostly in artificial intelligence, machine translation, natural language processing, machine learning, and artificial neural networks. Much of his artificial intelligence work relates to pattern recognition. His machine translation research includes decoding methods, speech recognition, translation, and rule-based machine translation.
His speech recognition work is largely devoted to encoder architectures. In natural language processing, he connects context, embeddings, and words. His research also spans a wide range of topics, including algorithms, breast cancer screening, and phrases.
His main research concerns artificial intelligence, machine learning, language models, algorithms, and natural language processing. His artificial intelligence research integrates breast cancer screening and pattern recognition. His work on algorithms incorporates normalization, parameter spaces, covariance, autoregressive models, and machine translation.
His machine translation research extends to punctuation and latent variables. His natural language processing studies combine combinatorial explosion and inflection, while his neural network research draws on both context and biological networks.
His primary areas of investigation include artificial intelligence, machine learning, deep learning, language models, and pattern recognition; he also studies classifiers, a branch of artificial intelligence. His machine learning work is interdisciplinary, drawing on similarity measures, selection, and fluency.
His deep learning research spans several fields, including artificial neural networks, graphs, inference, and breast cancer screening, and integrates covariance and hyperparameters in the study of neural networks. His language model research incorporates blocking, small numbers, natural language, and Transformers.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau;Kyunghyun Cho;Yoshua Bengio.
International Conference on Learning Representations (2015)
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
Kyunghyun Cho;Bart van Merrienboer;Caglar Gulcehre;Dzmitry Bahdanau.
Empirical Methods in Natural Language Processing (2014)
Empirical evaluation of gated recurrent neural networks on sequence modeling
Junyoung Chung;Çaglar Gülçehre;KyungHyun Cho;Yoshua Bengio.
arXiv: Neural and Evolutionary Computing (2014)
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Kelvin Xu;Jimmy Ba;Ryan Kiros;Kyunghyun Cho.
International Conference on Machine Learning (2015)
On the Properties of Neural Machine Translation: Encoder–Decoder Approaches
Kyunghyun Cho;Bart van Merrienboer;Dzmitry Bahdanau;Yoshua Bengio.
Empirical Methods in Natural Language Processing (2014)
Theano: A Python framework for fast computation of mathematical expressions
Rami Al-Rfou;Guillaume Alain;Amjad Almahairi.
arXiv: Symbolic Computation (2016)
Attention-based models for speech recognition
Jan Chorowski;Dzmitry Bahdanau;Dmitriy Serdyuk;Kyunghyun Cho.
Neural Information Processing Systems (2015)
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Yann N Dauphin;Razvan Pascanu;Caglar Gulcehre;Kyunghyun Cho.
Neural Information Processing Systems (2014)
University of Montreal
Facebook (United States)
DeepMind (United Kingdom)
New York University
University of Hong Kong
Google (United States)
University of Waterloo