H-Index & Metrics

Discipline: Computer Science
H-index: 78
Citations: 44,017
Publications: 187
World Ranking: 510
National Ranking: 309
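The h-index reported above is the largest number h such that at least h of the scientist's publications have at least h citations each. As a minimal illustration of how that metric is computed (using hypothetical citation counts, not the profile's data):

```python
# Sketch: computing an h-index from per-publication citation counts.
# The h-index is the largest h such that at least h publications have
# at least h citations each. The citation counts here are hypothetical.

def h_index(citations):
    """Return the h-index for a list of citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
```

Sorting in descending order lets the loop stop at the first rank where the citation count falls below the rank, which is exactly where the h-index threshold is crossed.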

Overview

What is he best known for?

The fields of study he is best known for:

  • Artificial intelligence
  • Programming language
  • Natural language processing

His primary areas of study are Artificial intelligence, Natural language processing, Parsing, Natural language and Question answering. Within Artificial intelligence, his main interests are Language model, Probabilistic logic, Logical form, Semantics and Deep learning. His Natural language processing research incorporates elements of Context, Meaning and Reading comprehension.

His Parsing research draws on a wide range of topics, such as Lexical item, Lift, Rule-based machine translation and Benchmark. In his Natural language work, Luke Zettlemoyer connects Machine learning with Knowledge base, and his study of Question answering combines subjects such as Scheme and Ontology alignment.

His most cited works include:

  • Deep contextualized word representations (4682 citations)
  • RoBERTa: A Robustly Optimized BERT Pretraining Approach (3373 citations)
  • Deep contextualized word representations (1446 citations)

What are the main themes of his work throughout his whole career to date?

His primary areas of study are Artificial intelligence, Natural language processing, Parsing, Language model and Natural language. His Artificial intelligence study combines topics such as Context and Machine learning, and his Machine learning research focuses on Question answering, which is in turn linked to Scheme.

His Natural language processing work incorporates Semantics and Meaning. His Language model research tackles Machine translation, which relates to Decoding methods and Transformer, while his Natural language research integrates issues from Robot and Human–computer interaction.

He most often published in these fields:

  • Artificial intelligence (69.14%)
  • Natural language processing (50.39%)
  • Parsing (21.09%)

What were the highlights of his more recent work (between 2019 and 2021)?

  • Artificial intelligence (69.14%)
  • Natural language processing (50.39%)
  • Machine translation (10.16%)

In recent papers he focused on the following fields of study:

His recent investigations cover Artificial intelligence, Natural language processing, Machine translation, Language model and Benchmark. Within Artificial intelligence, his work links Sentence to Machine learning; under Sentence, Luke Zettlemoyer focuses on Textual entailment and sometimes addresses Question answering.

Luke Zettlemoyer has researched Natural language processing across several fields, including Embedding, Word embedding, Word-sense disambiguation and Protocol. His Machine translation research incorporates elements of Speech recognition and Decoding methods, and his Language model research integrates Algorithm, Cross lingual, Automatic summarization and Pattern recognition.

Between 2019 and 2021, his most popular works were:

  • Unsupervised Cross-lingual Representation Learning at Scale (395 citations)
  • SpanBERT: Improving Pre-training by Representing and Predicting Spans (376 citations)

In his most recent research, the most cited papers focused on:

  • Artificial intelligence
  • Programming language
  • Machine learning

Luke Zettlemoyer mostly deals with Artificial intelligence, Speech recognition, Language model, Machine translation and Natural language processing. His work on Rewriting extends into Artificial intelligence, and his Speech recognition research connects to topics such as BLEU, which intersect with problems in Automatic summarization.

His Language model studies integrate themes such as Urdu, Word, Swahili and Similarity. Many of his research projects under Natural language processing are closely connected to Supervised learning, tying these disciplines together. His research spans a wide range of topics, including Machine learning, Representation and Noise.

This overview was generated by a machine learning system which analysed the scientist’s body of work. If you have any feedback, you can contact us.

Top Publications

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Yinhan Liu;Myle Ott;Naman Goyal;Jingfei Du.
arXiv: Computation and Language (2019)

5500 Citations

Deep contextualized word representations

Matthew E. Peters;Mark Neumann;Mohit Iyyer;Matt Gardner.
North American Chapter of the Association for Computational Linguistics (2018)

4913 Citations

Learning to map sentences to logical form: structured classification with probabilistic categorial grammars

Luke S. Zettlemoyer;Michael Collins.
Uncertainty in Artificial Intelligence (2005)

1083 Citations

Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

Raphael Hoffmann;Congle Zhang;Xiao Ling;Luke Zettlemoyer.
Meeting of the Association for Computational Linguistics (2011)

781 Citations

TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

Mandar Joshi;Eunsol Choi;Daniel S. Weld;Luke Zettlemoyer.
Meeting of the Association for Computational Linguistics (2017)

525 Citations

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension.

Mike Lewis;Yinhan Liu;Naman Goyal;Marjan Ghazvininejad.
arXiv: Computation and Language (2019)

445 Citations

Online Learning of Relaxed CCG Grammars for Parsing to Logical Form

Luke Zettlemoyer;Michael Collins.
Empirical Methods in Natural Language Processing (2007)

439 Citations

AllenNLP: A Deep Semantic Natural Language Processing Platform

Matt Gardner;Joel Grus;Mark Neumann;Oyvind Tafjord.
arXiv: Computation and Language (2018)

422 Citations

End-to-end Neural Coreference Resolution

Kenton Lee;Luheng He;Mike Lewis;Luke Zettlemoyer.
Empirical Methods in Natural Language Processing (2017)

397 Citations

Unsupervised Cross-lingual Representation Learning at Scale

Alexis Conneau;Kartikay Khandelwal;Naman Goyal;Vishrav Chaudhary.
Meeting of the Association for Computational Linguistics (2020)

395 Citations

Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking h-index is inferred from publications deemed to belong to the considered discipline.

If you think any of the details on this page are incorrect, let us know.

Top Scientists Citing Luke Zettlemoyer

Dan Roth

University of Pennsylvania

Publications: 110

Noah A. Smith

University of Washington

Publications: 100

Jonathan Berant

Tel Aviv University

Publications: 95

Mohit Bansal

University of North Carolina at Chapel Hill

Publications: 89

Graham Neubig

Carnegie Mellon University

Publications: 84

Hai Zhao

Shanghai Jiao Tong University

Publications: 84

Percy Liang

Stanford University

Publications: 81

Yejin Choi

Allen Institute for Artificial Intelligence

Publications: 80

Iryna Gurevych

University of Paderborn

Publications: 80

Hannaneh Hajishirzi

University of Washington

Publications: 77

Jianfeng Gao

Microsoft (United States)

Publications: 77

Caiming Xiong

Salesforce (United States)

Publications: 76

Maosong Sun

Tsinghua University

Publications: 76

Nan Duan

Microsoft (United States)

Publications: 73

Benjamin Van Durme

Johns Hopkins University

Publications: 71
