
H-Index & Metrics

Discipline: Computer Science
H-index: 34
Citations: 5,606
Publications: 189
World Ranking: 6412
National Ranking: 45
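The h-index reported above follows the standard definition: the largest h such that the researcher has at least h publications with at least h citations each. The following is a minimal sketch in Python; the example citation counts are illustrative, not the full publication record.

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    h = 0
    # Sort citation counts in descending order, then walk down the list:
    # paper at rank i (1-based) contributes to h only if it has >= i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Illustrative subset of citation counts (six papers, each with >= 6 citations):
print(h_index([499, 320, 245, 219, 219, 199]))  # prints 6
```

Computing the full metric over all 189 publications with their citation counts would reproduce the h-index of 34 shown above.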

Overview

What is he best known for?

The fields of study he is best known for:

  • Artificial intelligence
  • Machine learning
  • Artificial neural network

His main research concerns Speech recognition, Artificial intelligence, Valence, Emotion classification and Natural language processing. He integrates several fields in his works, including Speech recognition and Emotion perception. His study in Artificial intelligence is interdisciplinary in nature, drawing on Metadata, Music information retrieval, Machine learning, Melody and Pattern recognition.

His Valence research focuses on Categorical variable and how it connects with Support vector machine and Regression analysis. His Emotion classification research is multidisciplinary, drawing on Variation, Feeling, Class and Fuzzy logic. In his study, which falls under the umbrella of Natural language processing, Timbre and Musical are strongly linked to Arousal.

His most cited works include:

  • A Regression Approach to Music Emotion Recognition (315 citations)
  • Machine Recognition of Music Emotion: A Review (237 citations)
  • Music emotion classification: a fuzzy approach (125 citations)

What are the main themes of his work throughout his whole career to date?

His primary scientific interests are in Artificial intelligence, Speech recognition, Machine learning, Music information retrieval and Natural language processing. The various areas that he examines in his Artificial intelligence study include Context and Pattern recognition. His Spectrogram study in the realm of Speech recognition interacts with subjects such as Emotion perception.

His work in the fields of Machine learning, such as Recommender system, Feature learning and Support vector machine, intersects with other areas such as TRECVID. Within one scientific family, Yi-Hsuan Yang focuses on topics pertaining to Multimedia under Music information retrieval, and may sometimes address concerns connected to Pop music automation. His Natural language processing studies also examine Set with regard to Feature.

He most often published in these fields:

  • Artificial intelligence (45.45%)
  • Speech recognition (42.42%)
  • Machine learning (19.05%)

What were the highlights of his more recent work (between 2018 and 2021)?


In recent papers he was focusing on the following fields of study:

Yi-Hsuan Yang mainly focuses on Artificial intelligence, Speech recognition, Machine learning, Musical and Deep learning. His work often combines Artificial intelligence and MIDI studies. His Speech recognition research incorporates themes from Generative grammar and Set.

His Machine learning research is multidisciplinary, incorporating perspectives in Embedding, Source separation, Generative model and Jazz. Yi-Hsuan Yang interconnects Web application, Interactivity and Rendering in the investigation of issues within Musical. His studies in Deep learning integrate themes in fields like Musical composition, Music information retrieval and Natural language processing.

Between 2018 and 2021, his most popular works were:

  • Collaborative Similarity Embedding for Recommender Systems (26 citations)
  • Dilated Convolution with Dilated GRU for Music Source Separation. (22 citations)
  • Learning to Match Transient Sound Events Using Attentional Similarity for Few-shot Sound Recognition (21 citations)

In his most recent research, the most cited papers focused on:

  • Artificial intelligence
  • Machine learning
  • Artificial neural network

Yi-Hsuan Yang focuses on Artificial intelligence, Speech recognition, Musical, Task analysis and Polyphony. His Artificial intelligence study integrates concerns from other disciplines, such as Frame and Machine learning. He has researched Machine learning in several fields, including Embedding, Graph and Bipartite graph.

His Speech recognition study frequently draws parallels with other fields, such as Feature extraction. His work deals with themes such as Pipeline, Generative grammar, Inference and Speech synthesis, which intersect with Musical. Yi-Hsuan Yang has included themes like Code, Composition and Natural language processing in his Timbre study.

This overview was generated by a machine learning system that analysed the scientist's body of work.

Top Publications

A Regression Approach to Music Emotion Recognition

Yi-Hsuan Yang;Yu-Ching Lin;Ya-Fan Su;H.H. Chen.
IEEE Transactions on Audio, Speech, and Language Processing (2008)

499 Citations

Machine Recognition of Music Emotion: A Review

Yi-Hsuan Yang;Homer H. Chen.
ACM Transactions on Intelligent Systems and Technology (2012)

320 Citations

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

Hao-Wen Dong;Wen-Yi Hsiao;Li-Chia Yang;Yi-Hsuan Yang.
national conference on artificial intelligence (2018)

245 Citations

MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation

Li-Chia Yang;Szu-Yu Chou;Yi-Hsuan Yang.
arXiv: Sound (2017)

219 Citations

Music emotion classification: a fuzzy approach

Yi-Hsuan Yang;Chia-Chu Liu;Homer H. Chen.
acm multimedia (2006)

219 Citations

Music Emotion Recognition

Yi-Hsuan Yang;Homer H. Chen.
(2011)

199 Citations

1000 songs for emotional analysis of music

Mohammad Soleymani;Micheal N. Caro;Erik M. Schmidt;Cheng-Ya Sha.
acm multimedia (2013)

173 Citations

Ranking-Based Emotion Recognition for Music Organization and Retrieval

Yi-Hsuan Yang;Homer H. Chen.
IEEE Transactions on Audio, Speech, and Language Processing (2011)

144 Citations

Vocal activity informed singing voice separation with the iKala dataset

Tak-Shing Chan;Tzu-Chun Yeh;Zhe-Cheng Fan;Hung-Wei Chen.
international conference on acoustics, speech, and signal processing (2015)

106 Citations

Developing a benchmark for emotional analysis of music

Anna Aljanaki;Yi-Hsuan Yang;Mohammad Soleymani.
PLOS ONE (2017)

103 Citations

Profile was last updated on December 6th, 2021.
Research.com Ranking is based on data retrieved from the Microsoft Academic Graph (MAG).
The ranking h-index is inferred from publications deemed to belong to the considered discipline.


Top Scientists Citing Yi-Hsuan Yang

Björn Schuller

Imperial College London

Publications: 30

Markus Schedl

Johannes Kepler University of Linz

Publications: 26

Mark Sandler

Google (United States)

Publications: 17

Wei Li

Chinese Academy of Sciences

Publications: 15

Wenwu Wang

University of Surrey

Publications: 15

Juan Pablo Bello

New York University

Publications: 15

Winston H. Hsu

National Taiwan University

Publications: 15

Mark D. Plumbley

University of Surrey

Publications: 15

Xiao Hu

University of Hong Kong

Publications: 14

Sicheng Zhao

University of California, Berkeley

Publications: 14

Gerhard Widmer

Johannes Kepler University of Linz

Publications: 13

Tuomas Eerola

Durham University

Publications: 13

Shrikanth S. Narayanan

University of Southern California

Publications: 12

Simon Dixon

Queen Mary University of London

Publications: 11

Hsin-Min Wang

Academia Sinica

Publications: 10
