His main research areas are Speech recognition, Artificial intelligence, Valence, Emotion classification and Natural language processing. His work bridges Speech recognition and Emotion perception, and his Artificial intelligence research is interdisciplinary, drawing on Metadata, Music information retrieval, Machine learning, Melody and Pattern recognition.
His Valence research focuses on Categorical variable and its connections to Support vector machine and Regression analysis. His Emotion classification research draws on Variation, Feeling, Class and Fuzzy logic. Within the umbrella of Natural language processing, his studies link Timbre and Musical to Arousal.
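To make the regression framing concrete, here is a minimal sketch, in the spirit of his "A Regression Approach to Music Emotion Recognition", of predicting continuous valence and arousal values with support vector regression; the features and annotations are random placeholders, not his actual pipeline.

# Illustrative sketch: music emotion recognition as regression of
# valence/arousal values. The feature matrix and annotations below are
# random placeholders, not real data or the authors' exact method.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))          # hypothetical per-clip acoustic features
valence = rng.uniform(-1, 1, size=200)  # hypothetical valence labels in [-1, 1]
arousal = rng.uniform(-1, 1, size=200)  # hypothetical arousal labels in [-1, 1]

# Train one support vector regressor per emotion dimension.
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
valence_model.fit(X, valence)
arousal_model.fit(X, arousal)

# Predict a (valence, arousal) point for a new clip's features.
new_clip = rng.normal(size=(1, 40))
print(valence_model.predict(new_clip)[0], arousal_model.predict(new_clip)[0])

Mapping each clip to a point in the valence-arousal plane is what allows categorical emotion labels to be replaced by continuous regression targets.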
His primary scientific interests lie in Artificial intelligence, Speech recognition, Machine learning, Music information retrieval and Natural language processing. His Artificial intelligence research spans Context and Pattern recognition, while his Spectrogram work within Speech recognition intersects with Emotion perception.
His Machine learning work, covering Recommender system, Feature learning and Support vector machine, intersects with areas such as TRECVID. Within Music information retrieval, Yi-Hsuan Yang focuses on Multimedia topics and at times addresses Pop music automation. His Natural language processing studies also take up Set and Feature.
Yi-Hsuan Yang mainly focuses on Artificial intelligence, Speech recognition, Machine learning, Musical and Deep learning. His work often combines Artificial intelligence with MIDI studies, and his Speech recognition research incorporates themes from Generative grammar and Set.
His Machine learning research incorporates perspectives from Embedding, Source separation, Generative model and Jazz. In his Musical studies he interconnects Web application, Interactivity and Rendering, and his Deep learning work integrates Musical composition, Music information retrieval and Natural language processing.
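To illustrate the generative-model thread, below is a minimal sketch of a GAN over multi-track piano-roll tensors, loosely in the spirit of MuseGAN; the tensor shapes, layer sizes and module names are illustrative assumptions, not the published architecture.

# Illustrative GAN sketch for symbolic music: a generator maps noise to
# multi-track piano-rolls and a discriminator scores them. All shapes and
# layer sizes are assumptions for demonstration, not the MuseGAN design.
import torch
import torch.nn as nn

N_TRACKS, N_STEPS, N_PITCHES = 4, 16, 84  # assumed piano-roll dimensions

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, N_TRACKS * N_STEPS * N_PITCHES), nn.Sigmoid(),
        )

    def forward(self, z):
        # Reshape the flat output into (batch, tracks, time steps, pitches).
        return self.net(z).view(-1, N_TRACKS, N_STEPS, N_PITCHES)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_TRACKS * N_STEPS * N_PITCHES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
z = torch.randn(8, 64)              # a batch of latent noise vectors
fake_rolls = G(z)                   # (8, 4, 16, 84) generated piano-rolls
scores = D(fake_rolls)              # (8, 1) logits for adversarial training
print(fake_rolls.shape, scores.shape)

A full system would alternate generator and discriminator updates on an adversarial loss and binarize the sigmoid outputs into note on/off events before rendering MIDI.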
Yi-Hsuan Yang also focuses on Artificial intelligence, Speech recognition, Musical, Task analysis and Polyphony. His Artificial intelligence study integrates concerns such as Frame and Machine learning, and his Machine learning research extends to Embedding, Graph and Bipartite graph.
His Speech recognition study frequently draws on Feature extraction. His work on Musical intersects with Pipeline, Generative grammar, Inference and Speech synthesis, and his Timbre study includes themes such as Code, Composition and Natural language processing.
This overview was generated by a machine learning system that analysed the scientist's body of work.
A Regression Approach to Music Emotion Recognition
Yi-Hsuan Yang;Yu-Ching Lin;Ya-Fan Su;Homer H. Chen.
IEEE Transactions on Audio, Speech, and Language Processing (2008)
Machine Recognition of Music Emotion: A Review
Yi-Hsuan Yang;Homer H. Chen.
ACM Transactions on Intelligent Systems and Technology (2012)
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
Hao-Wen Dong;Wen-Yi Hsiao;Li-Chia Yang;Yi-Hsuan Yang.
National Conference on Artificial Intelligence (2018)
Music emotion classification: a fuzzy approach
Yi-Hsuan Yang;Chia-Chu Liu;Homer H. Chen.
ACM Multimedia (2006)
Music Emotion Recognition
Yi-Hsuan Yang;Homer H. Chen.
CRC Press (2011)
1000 songs for emotional analysis of music
Mohammad Soleymani;Micheal N. Caro;Erik M. Schmidt;Cheng-Ya Sha.
ACM Multimedia (2013)
Ranking-Based Emotion Recognition for Music Organization and Retrieval
Yi-Hsuan Yang;Homer H. Chen.
IEEE Transactions on Audio, Speech, and Language Processing (2011)
Developing a benchmark for emotional analysis of music
Anna Aljanaki;Yi-Hsuan Yang;Mohammad Soleymani.
PLOS ONE (2017)
Vocal activity informed singing voice separation with the iKala dataset
Tak-Shing Chan;Tzu-Chun Yeh;Zhe-Cheng Fan;Hung-Wei Chen.
International Conference on Acoustics, Speech, and Signal Processing (2015)
Automatic chord recognition for music classification and retrieval
Heng-Tze Cheng;Yi-Hsuan Yang;Yu-Ching Lin;I-Bin Liao.
International Conference on Multimedia and Expo (2008)