His primary areas of investigation include Artificial intelligence, Natural language processing, Embedding, Machine learning and Support vector machines. His Artificial intelligence research draws on both Coordinate descent and Pattern recognition, while his study of Embedding spans topics including Word representations and Robustness.
The areas Kai-Wei Chang examines in his Support vector machine work include Algorithms and the Dual problem. His Linear classifier study incorporates themes from Network representation learning and Data mining, and his Linear SVM research touches on Native-language identification, Polynomial degree, Sparse data sets and Open source software.
His primary areas of study are Artificial intelligence, Natural language processing, Machine learning, Word representations and Word embedding. Kai-Wei Chang integrates Artificial intelligence with the study of Gender bias, and his Natural language processing research extends to several fields, including Transfer learning and Leverage.
He combines his study of Machine learning with subjects such as Object and Image, and connects Similarity, Component and Contrast in his investigation of Word embedding. His Support vector machine study is interwoven with Data mining and Coordinate descent.
Kai-Wei Chang mostly deals with Artificial intelligence, Natural language processing, Transformer, Sentence and Machine learning. His Artificial intelligence study combines topics such as Correctness and Source document, and his Natural language processing work intersects with themes such as Event, Writing style and Word.
His Transformer study spans Sentiment analysis, Artificial neural networks, Data mining and Source code. His work on Sentence is linked to Syntax, Paraphrase, Natural language, Semantics and Image, while his Machine learning research brings together Object and Relationship extraction.
His primary areas of investigation further include Artificial intelligence, Transformer, Natural language processing, Sentiment analysis and Automatic summarization. His Artificial intelligence study is interdisciplinary, drawing from both Structure and Machine learning, and his Machine learning research incorporates elements of Relationship extraction, Edge, Graph and Knowledge base.
His Natural language processing study frequently connects to adjacent fields such as Image. His Sentiment analysis work deals with Artificial neural networks, Verification problems, Parse trees and Emotion classification, and his research in Automatic summarization intersects with Syntax, Data mining, Natural language and Source code.
This overview was generated by a machine learning system that analysed the scientist's body of work.
LIBLINEAR: A Library for Large Linear Classification
Rong-En Fan;Kai-Wei Chang;Cho-Jui Hsieh;Xiang-Rui Wang.
Journal of Machine Learning Research (2008)
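LIBLINEAR ships as a standalone C/C++ library with command-line tools, and it is also the solver behind scikit-learn's LinearSVC. Below is a minimal sketch of training a large sparse linear classifier through that wrapper; the synthetic data and the hyperparameters are purely illustrative.

```python
# A minimal sketch of large-scale linear classification via scikit-learn's
# LinearSVC, which is implemented on top of LIBLINEAR.  The sparse synthetic
# data and all hyperparameters below are illustrative placeholders.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = sparse_random(10_000, 5_000, density=0.001, format="csr", random_state=0)
w_true = rng.standard_normal(5_000)
y = (X @ w_true > 0).astype(int)          # toy labels from a random linear rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# dual=True selects the dual solver; LIBLINEAR consumes the sparse matrix
# directly, so no densification is needed.
clf = LinearSVC(C=1.0, loss="squared_hinge", dual=True, max_iter=2000)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```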
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
Tolga Bolukbasi;Kai-Wei Chang;James Zou;Venkatesh Saligrama.
Neural Information Processing Systems (2016)
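The central operation in that paper's hard-debiasing procedure is removing the component of each word vector that lies along an identified bias direction. The NumPy sketch below shows only that projection ("neutralize") step with toy random vectors; the full method also derives the direction from definitional word pairs via PCA and equalizes selected pairs.

```python
# A toy sketch of the "neutralize" step from hard debiasing: subtract the
# projection of each word vector onto a bias direction.  All vectors here are
# random placeholders, not trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
embeddings = {w: rng.standard_normal(dim) for w in ["doctor", "nurse", "engineer"]}

# In the paper the direction comes from a PCA over definitional pairs; here it
# is simply the normalised difference of two placeholder vectors.
he, she = rng.standard_normal(dim), rng.standard_normal(dim)
bias_dir = (he - she) / np.linalg.norm(he - she)

def neutralize(v, direction):
    """Remove the component of v along `direction` (assumed to be a unit vector)."""
    return v - np.dot(v, direction) * direction

debiased = {w: neutralize(v, bias_dir) for w, v in embeddings.items()}
# After neutralizing, every vector is orthogonal to the bias direction.
assert all(abs(np.dot(v, bias_dir)) < 1e-10 for v in debiased.values())
```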
A dual coordinate descent method for large-scale linear SVM
Cho-Jui Hsieh;Kai-Wei Chang;Chih-Jen Lin;S. Sathiya Keerthi.
International Conference on Machine Learning (2008)
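The method keeps the primal weight vector w = sum_i alpha_i * y_i * x_i in sync while sweeping over the dual variables one at a time, so each update costs time proportional to the number of non-zeros in x_i. The sketch below is a simplified dense-NumPy rendering of that idea for the L1-loss (hinge) case, with a fixed epoch budget in place of the paper's stopping criteria and shrinking heuristics.

```python
# A compact sketch of dual coordinate descent for an L1-loss (hinge) linear SVM,
# in the spirit of Hsieh et al. (2008).  Dense toy data, a random sweep order and
# a fixed number of epochs are simplifications over the full algorithm.
import numpy as np

def dual_cd_svm(X, y, C=1.0, epochs=10, seed=0):
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum("ij,ij->i", X, X)          # diagonal of the Gram matrix
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            if Qii[i] == 0.0:
                continue
            G = y[i] * (w @ X[i]) - 1.0        # partial derivative of the dual
            new_alpha = np.clip(alpha[i] - G / Qii[i], 0.0, C)
            w += (new_alpha - alpha[i]) * y[i] * X[i]
            alpha[i] = new_alpha
    return w

# Toy usage: labels in {-1, +1} generated from a random linear rule.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = np.sign(X @ rng.standard_normal(20))
w = dual_cd_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```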
VisualBERT: A Simple and Performant Baseline for Vision and Language.
Liunian Harold Li;Mark Yatskar;Da Yin;Cho-Jui Hsieh.
arXiv: Computer Vision and Pattern Recognition (2019)
Training and Testing Low-degree Polynomial Data Mappings via Linear SVM
Yin-Wen Chang;Cho-Jui Hsieh;Kai-Wei Chang;Michael Ringgaard.
Journal of Machine Learning Research (2010)
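The idea there is to expand inputs explicitly into a low-degree polynomial feature space and train a linear SVM on the expanded representation, instead of invoking a polynomial-kernel solver. A rough scikit-learn analogue is sketched below; the dataset, degree and regularisation value are illustrative.

```python
# A minimal sketch of explicit low-degree polynomial data mappings trained with
# a linear SVM.  Dataset and hyperparameters are illustrative placeholders.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, include_bias=False),  # explicit degree-2 mapping
    LinearSVC(C=1.0, max_iter=5000),                   # LIBLINEAR-backed linear SVM
)
model.fit(X_tr, y_tr)
print("degree-2 mapping + linear SVM accuracy:", model.score(X_te, y_te))
```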
Generating Natural Language Adversarial Examples
Moustafa Alzantot;Yash Sharma;Ahmed Elgohary;Bo-Jhang Ho.
Empirical Methods in Natural Language Processing (2018)
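That paper searches for semantics-preserving word substitutions that flip a model's prediction, using a black-box genetic algorithm. The sketch below is a far simpler greedy word-swap illustration of the same idea; the classifier, synonym table and sentence are hypothetical placeholders, and a real attack would query a trained model under a semantic-similarity constraint.

```python
# A toy sketch of black-box adversarial example generation by word substitution.
# Everything here (classifier, synonyms, input) is a hypothetical placeholder,
# and the greedy search stands in for the paper's genetic-algorithm search.
def toy_classifier(tokens):
    """Hypothetical sentiment scorer: higher when more 'positive' words appear."""
    positive = {"good", "great", "enjoyable", "fine"}
    return sum(t in positive for t in tokens) / max(len(tokens), 1)

SYNONYMS = {"good": ["fine", "decent", "passable"], "movie": ["film", "picture"]}

def greedy_word_swap_attack(tokens, target_below=0.1, max_swaps=3):
    """Greedily replace words with synonyms to push the classifier's score down."""
    tokens = list(tokens)
    for _ in range(max_swaps):
        best_score, best_i, best_syn = toy_classifier(tokens), None, None
        for i, tok in enumerate(tokens):
            for syn in SYNONYMS.get(tok, []):
                score = toy_classifier(tokens[:i] + [syn] + tokens[i + 1:])
                if score < best_score:
                    best_score, best_i, best_syn = score, i, syn
        if best_i is None:
            break                       # no substitution lowers the score further
        tokens[best_i] = best_syn
        if best_score < target_below:
            break
    return tokens, toy_classifier(tokens)

print(greedy_word_swap_attack(["a", "good", "movie"]))
```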
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
Jieyu Zhao;Tianlu Wang;Mark Yatskar;Vicente Ordonez.
Empirical Methods in Natural Language Processing (2017)
Large Linear Classification When Data Cannot Fit in Memory
Hsiang-Fu Yu;Cho-Jui Hsieh;Kai-Wei Chang;Chih-Jen Lin.
ACM Transactions on Knowledge Discovery From Data (2012)
Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
Jieyu Zhao;Tianlu Wang;Mark Yatskar;Vicente Ordonez.
North American Chapter of the Association for Computational Linguistics (2018)
Coordinate Descent Method for Large-scale L2-loss Linear Support Vector Machines
Kai-Wei Chang;Cho-Jui Hsieh;Chih-Jen Lin.
Journal of Machine Learning Research (2008)