Mohit Bansal mainly focuses on Artificial intelligence, Natural language processing, Word, Sentence and Machine learning. His work on Artificial intelligence deals in particular with Recurrent neural network, Natural language, Benchmark, Test set and Question answering. His studies of Parsing and Paraphrase are also linked to topics such as Simple and Fraction.
His research spans a wide range of topics, including S-attributed grammar, Context, Speech recognition and Dependency grammar. His work on Sentence links it with Inference, intersecting with problems in Encoding, Classifier and Generative model. His Machine learning study combines topics from disciplines such as Graph and Heuristics.
His primary areas of investigation include Artificial intelligence, Natural language processing, Machine learning, Natural language and Question answering. As part of his studies on Artificial intelligence, Mohit Bansal often connects relevant subjects such as Context. His work explores how Natural language processing and Logical consequence are connected with Closed captioning and other disciplines.
His Machine learning research is multidisciplinary, relying on Adversarial system, Inference and Benchmark. His study of Natural language is interdisciplinary in nature, drawing from Comprehension, Robot, Theoretical computer science and Human–computer interaction. His studies in Question answering integrate themes from fields such as Modality, Image and Training set.
Artificial intelligence, Machine learning, Natural language processing, Code and Language model are his primary areas of study. His Artificial intelligence study frequently involves adjacent topics like Context. His Machine learning research is multidisciplinary, incorporating elements of Domain, Variety and Inference.
His Natural language processing studies also deal with areas such as Modality, Conversation and Coreference. The areas Mohit Bansal examines in his Language model study include Code, Semantics, Control and Predicate. In his research on Sentence, Semi-supervised learning is strongly related to Endangered language.
Mohit Bansal spends much of his time researching Artificial intelligence, Natural language processing, Machine learning, Closed captioning and Code. His Artificial intelligence research incorporates themes from Context and Orthogonality. His work deals with themes such as Set and Robustness, which intersect with Natural language processing.
His Machine learning study also incorporates disciplines such as Domain, Variety, Debiasing and Embedding. Mohit Bansal interconnects Paragraph, Transformer, Window, Sentence and Coreference in the investigation of issues within Closed captioning. The concepts of his Question answering study are interwoven with issues in Classifier and Intelligent decision support system.
End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures
Makoto Miwa;Mohit Bansal.
Meeting of the Association for Computational Linguistics (2016)
LXMERT: Learning Cross-Modality Encoder Representations from Transformers
Hao Tan;Mohit Bansal.
Empirical Methods in Natural Language Processing (2019)
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
Yen-Chun Chen;Mohit Bansal.
Meeting of the Association for Computational Linguistics (2018)
Towards Universal Paraphrastic Sentence Embeddings
John Wieting;Mohit Bansal;Kevin Gimpel;Karen Livescu.
International Conference on Learning Representations (2016)
MAttNet: Modular Attention Network for Referring Expression Comprehension
Licheng Yu;Zhe Lin;Xiaohui Shen;Jimei Yang.
Computer Vision and Pattern Recognition (2018)
Tailoring Continuous Word Representations for Dependency Parsing
Mohit Bansal;Kevin Gimpel;Karen Livescu.
Meeting of the Association for Computational Linguistics (2014)
Adversarial NLI: A New Benchmark for Natural Language Understanding
Yixin Nie;Adina Williams;Emily Dinan;Mohit Bansal.
Meeting of the Association for Computational Linguistics (2020)
From Paraphrase Database to Compositional Paraphrase Model and Back
John Wieting;Mohit Bansal;Kevin Gimpel;Karen Livescu.
Transactions of the Association for Computational Linguistics (2015)
TVQA: Localized, Compositional Video Question Answering
Jie Lei;Licheng Yu;Mohit Bansal;Tamara L. Berg.
Empirical Methods in Natural Language Processing (2018)
What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment
Hongyuan Mei;Mohit Bansal;Matthew R. Walter.
North American Chapter of the Association for Computational Linguistics (2016)