Jianfeng Gao focuses mainly on Artificial intelligence, Natural language processing, Language model, Information retrieval and Artificial neural network. His Artificial intelligence research frequently connects to Machine learning. His work in Natural language processing draws on areas such as Context, Recurrent neural network, Speech recognition, Word and Generative grammar.
Much of his work centers on Language model, narrowing to topics such as Natural language understanding and, in some cases, Deep neural networks. His research also addresses Ranking and Probabilistic latent semantic analysis, which intersect with Information retrieval. His Artificial neural network study is interdisciplinary, drawing on both Conversation and Bilinear interpolation.
Jianfeng Gao mainly investigates Artificial intelligence, Natural language processing, Language model, Machine learning and Information retrieval. His work brings together Pattern recognition and Artificial intelligence. His Natural language processing research incorporates themes from Context, Speech recognition and Word.
His Language model research spans several areas, including Trigram, Set, Transformer and Word error rate. In his Information retrieval studies he regularly draws in related areas such as Ranking. His Reinforcement learning research combines topics from a range of disciplines, such as Task completion and Human–computer interaction.
His main research concerns are Artificial intelligence, Natural language processing, Language model, Transformer and Human–computer interaction. His work on Machine learning extends into the closely related field of Artificial intelligence. His Question answering research is also linked to topics such as Set.
His Language model research includes themes of Range, Embedding and Code. His Transformer research is multidisciplinary, incorporating elements of Object detection, Encoder, Inference, Residual and Transfer learning. His Human–computer interaction work combines topics such as Dialog box and Chatbot.
His primary areas of investigation include Artificial intelligence, Natural language processing, Language model, Transformer and Human–computer interaction. His Artificial intelligence research frequently links to adjacent areas such as Machine learning. The areas Jianfeng Gao examines in his Natural language processing work include Context, Generative grammar and Shot.
His Language model studies integrate themes from Information retrieval, Scale, Robustness and Product. His Transformer research also connects to Closed captioning, Vocabulary and Unsupervised learning. His Human–computer interaction research integrates issues from Representation, Dialog system, Dialog box, Task analysis and Visualization.
Learning deep structured semantic models for web search using clickthrough data
Po-Sen Huang;Xiaodong He;Jianfeng Gao;Li Deng.
conference on information and knowledge management (2013)
MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition
Yandong Guo;Lei Zhang;Yuxiao Hu;Xiaodong He.
european conference on computer vision (2016)
Stacked Attention Networks for Image Question Answering
Zichao Yang;Xiaodong He;Jianfeng Gao;Li Deng.
computer vision and pattern recognition (2016)
A Diversity-Promoting Objective Function for Neural Conversation Models
Jiwei Li;Michel Galley;Chris Brockett;Jianfeng Gao.
north american chapter of the association for computational linguistics (2016)
From captions to visual concepts and back
Hao Fang;Saurabh Gupta;Forrest Iandola;Rupesh K. Srivastava.
computer vision and pattern recognition (2015)
Deep Reinforcement Learning for Dialogue Generation
Jiwei Li;Will Monroe;Alan Ritter;Dan Jurafsky.
empirical methods in natural language processing (2016)
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Alessandro Sordoni;Michel Galley;Michael Auli;Chris Brockett.
north american chapter of the association for computational linguistics (2015)
Scalable training of L1-regularized log-linear models
Galen Andrew;Jianfeng Gao.
international conference on machine learning (2007)
Embedding Entities and Relations for Learning and Inference in Knowledge Bases
Bishan Yang;Wen-tau Yih;Xiaodong He;Jianfeng Gao.
international conference on learning representations (2015)