Yejin Choi mainly investigates Artificial intelligence, Natural language processing, Context, Natural language and Machine learning. Her research combines Quality and Artificial intelligence, and integrates issues of Checklist, Image and Internet privacy into her study of Natural language processing.
Her Context research is multidisciplinary, incorporating elements of Question answering, News media, Dialog box and Reading comprehension. Her study in Machine learning is interdisciplinary, drawing from both Context-free grammar and Stylometry. Her Language model study incorporates themes from Counterfactual conditional, Commonsense knowledge and Explicit knowledge.
Her scientific interests lie mostly in Artificial intelligence, Natural language processing, Language model, Commonsense reasoning and Machine learning. Her Artificial intelligence research is multidisciplinary, incorporating perspectives from Context and Set. Her research in Natural language processing intersects with topics in Image and Reading comprehension.
Her work in Language model tackles topics such as Commonsense knowledge, which relate to areas like Knowledge graph. The areas that Yejin Choi examines in her Commonsense reasoning study include Crowdsourcing, Frame, Cognitive science and Human–computer interaction. Her work also deals with themes such as Adversarial system and Benchmark, which intersect with Machine learning.
Artificial intelligence, Language model, Natural language processing, Commonsense reasoning and Machine learning are her primary areas of study. Her work in Artificial intelligence, such as Natural language, overlaps with other areas such as Transformer. Her Language model research includes elements of Transfer of learning, Decoding methods, Commonsense knowledge, Range and Natural language generation.
Her Natural language processing research is multidisciplinary, relying on Narrative, Interpretability, Image and Scripting language. Her Commonsense reasoning study combines topics such as Inference, Question answering, Frame, Cognitive science and Generative grammar. Within general Machine learning research, her work on Overfitting and Leverage is frequently linked to Data mapping and Degeneration, connecting diverse disciplines of study.
Yejin Choi spends much of her time researching Artificial intelligence, Commonsense reasoning, Language model, Machine learning and Natural language processing. Statistical model and Word are her primary interests within Artificial intelligence. Her Commonsense reasoning study combines topics such as Inference, Question answering, Crowdsourcing, Generative grammar and Natural language.
Her Language model study incorporates disciplines such as Text corpus, Commonsense knowledge, Winograd Schema Challenge, Benchmark and Transfer of learning. Her work on Overfitting and Leverage is typically connected to Data mapping and Degeneration as part of general Machine learning study. Her studies in Natural language processing also deal with areas such as Object, Semantics, Image and Set.
This overview was generated by a machine learning system that analysed the scientist's body of work.
BabyTalk: Understanding and Generating Simple Image Descriptions
Girish Kulkarni;Visruth Premraj;Vicente Ordonez;Sagnik Dhar.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2013)
OpinionFinder: A System for Subjectivity Analysis
Theresa Wilson;Paul Hoffmann;Swapna Somasundaran;Jason Kessler.
Empirical Methods in Natural Language Processing (2005)
Baby talk: Understanding and generating simple image descriptions
Girish Kulkarni;Visruth Premraj;Sagnik Dhar;Siming Li.
Computer Vision and Pattern Recognition (2011)
Finding Deceptive Opinion Spam by Any Stretch of the Imagination
Myle Ott;Yejin Choi;Claire Cardie;Jeffrey T. Hancock.
Meeting of the Association for Computational Linguistics (2011)
Identifying Sources of Opinions with Conditional Random Fields and Extraction Patterns
Yejin Choi;Claire Cardie;Ellen Riloff;Siddharth Patwardhan.
Empirical Methods in Natural Language Processing (2005)
Syntactic Stylometry for Deception Detection
Song Feng;Ritwik Banerjee;Yejin Choi.
Meeting of the Association for Computational Linguistics (2012)
Learning with Compositional Semantics as Structural Inference for Subsentential Sentiment Analysis
Yejin Choi;Claire Cardie.
Empirical Methods in Natural Language Processing (2008)
Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking
Hannah Rashkin;Eunsol Choi;Jin Yea Jang;Svitlana Volkova.
Empirical Methods in Natural Language Processing (2017)
Composing Simple Image Descriptions using Web-scale N-grams
Siming Li;Girish Kulkarni;Tamara L. Berg;Alexander C. Berg.
Conference on Computational Natural Language Learning (2011)
The Curious Case of Neural Text Degeneration
Ari Holtzman;Jan Buys;Li Du;Maxwell Forbes.
International Conference on Learning Representations (2020)
Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking d-index is inferred from the publications judged to belong to the discipline in question.