Jonathan Berant mainly investigates artificial intelligence, natural language processing, parsing, reading comprehension and machine learning. His work on recurrent neural networks and top-down parsing overlaps with areas such as the research community, speedup and database schemas. His studies of natural language understanding, predicates and language models connect to process modeling within natural language processing, linking several disciplines.
His parsing research integrates the principle of compositionality, knowledge bases, structured prediction, Lisp and programmer-facing concerns. His reading comprehension work incorporates question answering, selection and focus. His machine learning research frequently draws parallels with interpreters.
Jonathan Berant mostly deals with artificial intelligence, natural language processing, parsing, machine learning and question answering. Within artificial intelligence, he focuses on crowdsourcing and, on occasion, formal languages. His natural language processing work is interwoven with issues of range, logical consequence, reading comprehension and benchmarks.
His parsing research intersects with representation, structured prediction, SQL and logical forms. His work on feature engineering, interpreters and Lisp connects with artificial neural networks, and his feature engineering studies also address stability, pruning and programmer-oriented concerns.
His scientific interests lie mostly in artificial intelligence, natural language processing, language models, parsing and question answering. His artificial intelligence research frequently intersects with machine learning. His natural language processing work, such as his study of transliteration, touches on areas such as glyphs.
Jonathan Berant combines transformers with his study of language models. His parsing work integrates concerns from other disciplines, such as trees and test sets. His question answering research focuses on crowdsourcing and how it connects with formal languages, metrics and information retrieval.
His primary areas of investigation include artificial intelligence, natural language processing, language models, parsing and theoretical computer science. His artificial intelligence research is interdisciplinary, drawing on both crowdsourcing and machine learning. His question answering work within natural language processing frequently connects to learning curves, bridging otherwise distant disciplines.
His language model research is multidisciplinary, incorporating perspectives on range, conjunction, protocols and composition. His parsing studies integrate themes such as contrast sets, decision boundaries, test data, supervised learning and decision rules. His theoretical computer science work combines topics such as artificial neural networks, the principle of compositionality, reading comprehension and forcing.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Semantic Parsing on Freebase from Question-Answer Pairs
Jonathan Berant;Andrew Chou;Roy Frostig;Percy Liang.
Empirical Methods in Natural Language Processing (2013)
Semantic Parsing via Paraphrasing
Jonathan Berant;Percy Liang.
Meeting of the Association for Computational Linguistics (2014)
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge
Alon Talmor;Jonathan Herzig;Nicholas Lourie;Jonathan Berant.
North American Chapter of the Association for Computational Linguistics (2019)
Building a Semantic Parser Overnight
Yushi Wang;Jonathan Berant;Percy Liang.
International Joint Conference on Natural Language Processing (2015)
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Chen Liang;Jonathan Berant;Quoc V. Le;Kenneth D. Forbus.
Meeting of the Association for Computational Linguistics (2017)
The Web as a Knowledge-Base for Answering Complex Questions
Alon Talmor;Jonathan Berant.
North American Chapter of the Association for Computational Linguistics (2018)
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Mor Geva;Yoav Goldberg;Jonathan Berant.
Empirical Methods in Natural Language Processing (2019)
Modeling Biological Processes for Reading Comprehension
Jonathan Berant;Vivek Srikumar;Pei-Chun Chen;Abby Vander Linden.
Empirical Methods in Natural Language Processing (2014)
Learning Recurrent Span Representations for Extractive Question Answering
Kenton Lee;Shimi Salant;Tom Kwiatkowski;Ankur Parikh.
arXiv: Computation and Language (2016)
Evaluating Models’ Local Decision Boundaries via Contrast Sets
Matt Gardner;Yoav Artzi;Victoria Basmov;Jonathan Berant.
Empirical Methods in Natural Language Processing (2020)