His primary scientific interests are Artificial intelligence, Natural language processing, Topic model, Machine learning and Inference. His work combines Artificial intelligence with Matching. Within Natural language processing, he focuses largely on Artificial neural networks, connecting his research to questions of Principle of compositionality, Question answering and Interpersonal communication.
His Topic model study integrates concerns from other disciplines, such as Discrete mathematics, Class, Formalism, Parallel corpora and Statistical model. His Machine learning work on Document clustering is often linked to Yield, Depression and Conceptual clustering, connecting many areas of study. In his research, Markov chain methods are closely tied to Latent Dirichlet allocation, which falls under the overarching field of Inference.
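As a small illustration of the topic-modeling theme running through this work, here is a minimal LDA sketch using scikit-learn's `LatentDirichletAllocation` (which fits the model by variational inference); the tiny corpus and parameter choices are invented purely for illustration and are not drawn from his publications:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny invented corpus, purely for illustration.
docs = [
    "topic models uncover latent themes in text",
    "neural networks answer factoid questions",
    "variational inference scales topic models",
    "question answering with neural networks",
]

# Bag-of-words counts, the standard input to LDA.
counts = CountVectorizer().fit_transform(docs)

# Fit a 2-topic LDA model via variational inference.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each document gets a distribution over topics (rows sum to 1).
print(doc_topics.shape)  # (4, 2)
```

Each row of `doc_topics` is one document's inferred topic mixture, which is the basic object his interactive and supervised topic-modeling papers build on.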
His primary areas of study are Artificial intelligence, Natural language processing, Topic model, Machine learning and Question answering. His work in Artificial intelligence and Pattern recognition intersects with Feature. His research brings together Annotation and Natural language processing.
He interconnects Quality, Prior probability and Data science in investigating issues within Topic model. His Question answering research includes themes of Adversarial system, Context, Natural language and Ambiguity. His work on Word tackles topics such as Embedding, which is related to Similarity.
His main research concerns Artificial intelligence, Natural language processing, Question answering, Data science and Word. His Artificial intelligence study frequently links to other fields, such as SQL. His Machine translation study in the realm of Natural language processing interacts with subjects such as Lexico.
Jordan Boyd-Graber focuses mostly on Question answering, narrowing it down to Ambiguity and, in some cases, Variation and Subject. His study of Data science is interdisciplinary, drawing on both The Internet and Natural language. His work in the field of Word brings together Embedding, Document classification, Representation and Similarity.
Jordan Boyd-Graber mainly focuses on Natural language processing, Word, Artificial intelligence, Data science and Question answering. His study of Document classification within Natural language processing overlaps with other disciplines such as Knowledge transfer. He has researched Word in several fields, including Active learning, Similarity, Overfitting and Character.
His study ties his expertise in Space to Artificial intelligence. His studies in Data science integrate themes from The Internet and Natural language. His Factoid study within Question answering connects with subjects such as Text messaging.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Reading Tea Leaves: How Humans Interpret Topic Models
Jonathan Chang;Sean Gerrish;Chong Wang;Jordan L. Boyd-Graber.
neural information processing systems (2009)
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
Mohit Iyyer;Varun Manjunatha;Jordan Boyd-Graber;Hal Daumé III.
international joint conference on natural language processing (2015)
A Neural Network for Factoid Question Answering over Paragraphs
Mohit Iyyer;Jordan Boyd-Graber;Leonardo Claudino;Richard Socher.
empirical methods in natural language processing (2014)
Interactive Topic Modeling
Yuening Hu;Jordan Boyd-Graber;Brianna Satinoff.
meeting of the association for computational linguistics (2011)
A Topic Model for Word Sense Disambiguation
Jordan Boyd-Graber;David Blei;Xiaojin Zhu.
empirical methods in natural language processing (2007)
Political Ideology Detection Using Recursive Neural Networks
Mohit Iyyer;Peter Enns;Jordan Boyd-Graber;Philip Resnik.
meeting of the association for computational linguistics (2014)
Syntactic Topic Models
Jordan L. Boyd-Graber;David M. Blei.
neural information processing systems (2008)
Mr. LDA: a flexible large scale topic modeling package using variational inference in MapReduce
Ke Zhai;Jordan Boyd-Graber;Nima Asadi;Mohamad L. Alkhouja.
the web conference (2012)
Applications of Topic Models
Jordan L. Boyd-Graber;Yuening Hu;David M. Mimno.
Beyond LDA: Exploring Supervised Topic Modeling for Depression-Related Language in Twitter
Philip Resnik;William Armstrong;Leonardo Claudino;Thang Nguyen.
north american chapter of the association for computational linguistics (2015)