Jaime G. Carbonell's research centers on Artificial intelligence, Machine learning, Natural language processing, Information retrieval and Active learning. His Artificial intelligence work draws on Context, Analogy and Pattern recognition. Within Machine learning he concentrates on Sampling, touching in certain cases on Range, Reduction and Density estimation.
His work on Natural language, as part of his broader Natural language processing research, is frequently linked to Tree transducers, connecting otherwise distinct disciplines. His Information retrieval study combines Baseline and Set, and his work on Ranking addresses Sentiment analysis and, in some cases, Language model.
Much of Jaime G. Carbonell's research falls within Artificial intelligence, Natural language processing, Machine learning, Information retrieval and Machine translation. His Artificial intelligence work incorporates Task, Data mining and Pattern recognition, while his Natural language processing research integrates Speech recognition and Word.
His Machine learning study combines Multi-task learning and Active learning. His Information retrieval work spans Automatic summarization, Relevance, Multi-document summarization, Query expansion and Document retrieval, and his Machine translation research draws on both Translation and Rule-based machine translation.
Jaime G. Carbonell also focuses on Artificial intelligence, Natural language processing, Machine learning, Transfer of learning and Cross lingual topics. His Artificial intelligence study incorporates Named-entity recognition, Task and Set, and his Natural language processing work on Automatic summarization and Hindi is frequently linked to Structure, connecting diverse fields of study.
His Machine learning research is interwoven with Question answering and Interval estimation. His Transfer of learning work incorporates Sampling, Active learning and Benchmark, and his Cross lingual research centers on Knowledge base, with links to Information retrieval, Bridge, Zero and Scripting language.
His primary scientific interests are Artificial intelligence, Natural language processing, Language model, Cross lingual and Named-entity recognition. His Artificial intelligence research incorporates Machine learning and Task, with his Machine learning work drawing on Bootstrapping and Autoregressive model.
His Natural language processing research incorporates Semi-supervised learning and Component. His Language model study includes Transformer, Question answering, Dependency, Ranking and Noise reduction, and his Cross lingual research intersects with Entity linking and Knowledge base.
This overview was generated by a machine learning system that analysed the scientist's body of work.
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang;Zihang Dai;Yiming Yang;Jaime G. Carbonell.
neural information processing systems (2019)
Machine Learning: An Artificial Intelligence Approach
R. S. Michalski;J. G. Carbonell;T. M. Mitchell.
(2013)
The Use of MMR and Diversity-Based Reranking for Reordering Documents and Producing Summaries
Jaime G. Carbonell;Jade Goldstein.
international acm sigir conference on research and development in information retrieval (1998)
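The MMR (Maximal Marginal Relevance) criterion introduced in the paper above greedily selects documents that are relevant to the query yet non-redundant with documents already chosen. A minimal sketch follows, assuming a simple bag-of-words cosine similarity; the `mmr_rank` function name and the similarity choice are illustrative, not from the paper.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def mmr_rank(query, docs, lam=0.7, k=3):
    """Greedy MMR reranking: each step picks the document maximizing
    lam * sim(doc, query) - (1 - lam) * max sim(doc, already-selected)."""
    q = Counter(query.lower().split())
    vecs = [Counter(d.lower().split()) for d in docs]
    selected, remaining = [], list(range(len(docs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(vecs[i], q)
            redundancy = max((cosine(vecs[i], vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [docs[i] for i in selected]
```

With `lam = 1.0` this reduces to pure relevance ranking; lowering `lam` trades relevance for diversity, which is what makes MMR useful for multi-document summarization.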
Transformer-XL: Attentive Language Models beyond a Fixed-Length Context.
Zihang Dai;Zhilin Yang;Yiming Yang;Jaime G. Carbonell.
meeting of the association for computational linguistics (2019)
Topic Detection and Tracking Pilot Study Final Report
James Allan;Jaime Carbonell;George Doddington;Jonathan Yamron.
Proceedings of the Broadcast News Transcription and Understanding Workshop (Sponsored by DARPA) (1998)
A study of retrospective and on-line event detection
Yiming Yang;Tom Pierce;Jaime Carbonell.
international acm sigir conference on research and development in information retrieval (1998)
Derivational analogy: a theory of reconstructive problem solving and expertise acquisition
Jaime G. Carbonell.
(1993)
Temporal Collaborative Filtering with Bayesian Probabilistic Tensor Factorization
Liang Xiong;Xi Chen;Tzu-Kuo Huang;Jeff G. Schneider.
siam international conference on data mining (2010)
Summarizing text documents: sentence selection and evaluation metrics
Jade Goldstein;Mark Kantrowitz;Vibhu Mittal;Jaime Carbonell.
international acm sigir conference on research and development in information retrieval (1999)
Learning by Analogy: Formulating and Generalizing Plans from Past Experience
Jaime G. Carbonell.
Machine Learning: An Artificial Intelligence Approach, Volume I (1983)
Carnegie Mellon University
Arizona State University
Carnegie Mellon University
Carnegie Mellon University
Carnegie Mellon University
University of Southern California
Columbia University
Carnegie Mellon University
University of Graz
Carnegie Mellon University
University of Liverpool
University of Leeds
Chinese Academy of Sciences
Tsinghua University
Children's Hospital of Eastern Ontario
University of Oxford
University of Greifswald
New York Hospital Queens
Aristotle University of Thessaloniki
Georgetown University
University of New South Wales
Seoul National University Hospital
Keio University
Stoke Mandeville Hospital
University of Missouri–Kansas City
University of Maryland, Baltimore County