His main research concerns Artificial intelligence, Natural language processing, Machine translation, Word and Translation. His study brings together the fields of Machine learning and Artificial intelligence. His Natural language processing research includes elements of Syntax, Inflection and Benchmark.
His work on BLEU, as part of his general Machine translation study, is frequently connected to Simple, bridging diverse disciplines of science and establishing new relationships between them. NIST, Preprocessor and Heuristic are closely connected to Lexicon in his research, which is encompassed under the umbrella topic of Word. His Translation study integrates concerns from other disciplines, such as Domain, Process and Component.
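BLEU, one of the recurring topics above, scores machine translation output by modified n-gram precision against a reference, with a brevity penalty for short candidates. A minimal sentence-level sketch in plain Python (simplified and illustrative: real BLEU is computed at corpus level and typically smoothed; the function name and defaults here are not from any specific implementation):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: uniform n-gram weights, brevity penalty,
    no smoothing. Returns 0.0 if any n-gram precision is zero."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped overlap: each candidate n-gram counts at most as often as in the reference.
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An identical candidate and reference score 1.0; a candidate sharing no words with the reference scores 0.0.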
Artificial intelligence, Natural language processing, Machine translation, Speech recognition and Translation are his primary areas of study. His Artificial intelligence study frequently intersects with other fields, such as Machine learning. His Natural language processing study frequently draws connections between related disciplines such as Benchmark.
His Machine translation research is multidisciplinary, incorporating elements of Domain, Robustness and Adaptation. His Speech recognition study combines topics in areas such as Mixture model, Speech translation and Decoding methods. His study spans a wide range of topics, including Python, Source code and Code generation.
His primary scientific interests are in Artificial intelligence, Natural language processing, Machine translation, Machine learning and Benchmark. Language model, Sentence, Leverage, BLEU and Task are among the areas of Artificial intelligence where he concentrates his study. His studies in Natural language processing integrate themes in fields like Word, Representation and Focus.
His work in Machine translation covers topics such as Training set, which are related to areas like Control. His Machine learning study incorporates disciplines such as Construct, Initialization, Heuristics and Upload. His studies deal with areas such as Semantic role labeling, Joint and Meaning, as well as Benchmark.
The scientist’s investigation covers issues in Artificial intelligence, Natural language processing, Machine translation, Benchmark and Machine learning. His Artificial intelligence research focuses on Sentence, Task, BLEU, Knowledge base and Automatic summarization. His Natural language processing research is multidisciplinary, incorporating perspectives in Representation and Control.
His Language translation study in the realm of Machine translation interacts with subjects such as Distillation, Nat, Autoregressive model and Contextual image classification. His research in Benchmark focuses on subjects like Joint, which is connected to Feature, Data model, Parsing, Representation and Allophone. His Machine learning study combines topics from a wide range of disciplines, such as Embedding, Construct, Initialization and Upload.
This overview was generated by a machine learning system which analysed the scientist’s body of work.
DyNet: The Dynamic Neural Network Toolkit
Graham Neubig;Chris Dyer;Yoav Goldberg;Austin Matthews.
arXiv: Machine Learning (2017)
Are Sixteen Heads Really Better than One?
Paul Michel;Omer Levy;Graham Neubig.
neural information processing systems (2019)
A Syntactic Neural Model for General-Purpose Code Generation
Pengcheng Yin;Graham Neubig.
meeting of the association for computational linguistics (2017)
Pointwise Prediction for Robust, Adaptable Japanese Morphological Analysis
Graham Neubig;Yosuke Nakata;Shinsuke Mori.
meeting of the association for computational linguistics (2011)
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation
Junjie Hu;Sebastian Ruder;Aditya Siddhant;Graham Neubig.
international conference on machine learning (2020)
When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?
Ye Qi;Devendra Singh Sachan;Matthieu Felix;Sarguna Janani Padmanabhan.
north american chapter of the association for computational linguistics (2018)
Learning to Generate Pseudo-Code from Source Code Using Statistical Machine Translation (T)
Yusuke Oda;Hiroyuki Fudaba;Graham Neubig;Hideaki Hata.
automated software engineering (2015)
Stress Test Evaluation for Natural Language Inference
Aakanksha Naik;Abhilasha Ravichander;Norman M. Sadeh;Carolyn Penstein Rosé.
international conference on computational linguistics (2018)
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
Junjie Hu;Sebastian Ruder;Aditya Siddhant;Graham Neubig.
arXiv: Computation and Language (2020)
Incorporating Discrete Translation Lexicons into Neural Machine Translation
Philip Arthur;Graham Neubig;Satoshi Nakamura.
empirical methods in natural language processing (2016)
Nara Institute of Science and Technology
Nagoya University
Carnegie Mellon University
Google (United States)
National Institute of Information and Communications Technology
Kyoto University
Association for Computational Linguistics