2022 - Research.com Rising Star of Science Award
Caglar Gulcehre mainly investigates Artificial intelligence, Recurrent neural networks, Artificial neural networks, Speech recognition and Machine translation. His studies in Artificial intelligence integrate themes from Computation and Natural language processing. Recurrent neural networks and Newton's method are two areas in which he pursues interdisciplinary work.
His work on Gradient descent, within his broader study of Artificial neural networks, frequently connects to Random matrices, Maxima and minima, and Saddle points, bridging different branches of science. His research on Speech recognition often links to adjacent areas such as Convolutional neural networks. Within the same scientific family, he usually concentrates on Machine translation, focusing on Phrase and intersecting with Feature, Rule-based machine translation and Evaluation of machine translation.
Caglar Gulcehre mainly focuses on Artificial intelligence, Reinforcement learning, Recurrent neural networks, Artificial neural networks and Machine learning. His Artificial intelligence study frequently draws connections to adjacent fields such as Natural language processing. His study of Reinforcement learning is interdisciplinary, drawing from Domain, Translation and Human–computer interaction.
His Recurrent neural network research includes themes of Algorithms and Pattern recognition. His Artificial neural network research incorporates elements of Speech recognition and Mathematical optimization. He focuses mostly on Machine translation, narrowing it down to topics relating to Phrase and, in certain cases, Feature.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
Kyunghyun Cho;Bart van Merrienboer;Caglar Gulcehre;Dzmitry Bahdanau.
empirical methods in natural language processing (2014)
Empirical evaluation of gated recurrent neural networks on sequence modeling
Junyoung Chung;Çaglar Gülçehre;KyungHyun Cho;Yoshua Bengio.
arXiv: Neural and Evolutionary Computing (2014)
Theano: A Python framework for fast computation of mathematical expressions
Rami Al-Rfou;Guillaume Alain;Amjad Almahairi.
arXiv: Symbolic Computation (2016)
Relational inductive biases, deep learning, and graph networks
Peter W. Battaglia;Jessica B. Hamrick;Victor Bapst;Alvaro Sanchez-Gonzalez.
arXiv: Learning (2018)
Grandmaster level in StarCraft II using multi-agent reinforcement learning.
Oriol Vinyals;Igor Babuschkin;Wojciech M. Czarnecki;Michaël Mathieu.
nature (2019)
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
Ramesh Nallapati;Bowen Zhou;Cicero Nogueira dos Santos;Caglar Gulcehre.
conference on computational natural language learning (2016)
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Yann N. Dauphin;Razvan Pascanu;Caglar Gulcehre;Kyunghyun Cho.
neural information processing systems (2014)
Gated Feedback Recurrent Neural Networks
Junyoung Chung;Caglar Gulcehre;Kyunghyun Cho;Yoshua Bengio.
international conference on machine learning (2015)
On using monolingual corpora in neural machine translation
Çaglar Gülçehre;Orhan Firat;Kelvin Xu;Kyunghyun Cho.
arXiv: Computation and Language (2015)
How to Construct Deep Recurrent Neural Networks
Razvan Pascanu;Caglar Gulcehre;Kyunghyun Cho;Yoshua Bengio.
international conference on learning representations (2014)