The world of big data exhibits a rich and complex set of cross-media contents, such as text, images, video, audio, and graphics. Thus far, great research efforts have been separately dedicated to big data processing and cross-media mining, with solid theoretical underpinnings and great practical success. However, studies jointly considering cross-media big data analytics are relatively sparse. This research gap deserves more attention, since closing it would benefit many real-world applications. Despite its significance and value, analyzing cross-media big data is non-trivial due to its heterogeneity, large and ever-growing volume, unstructured nature, complex correlations, and noise. Multi-modal Information Learning, arguably the most significant breakthrough of the past decade, has greatly influenced the methodology of computer vision and achieved remarkable progress in both academia and industry. Since then, deep learning has been adopted in all kinds of computer vision applications, and many breakthroughs have been achieved in sub-areas, such as DeepFace for face verification on the LFW benchmark and GoogLeNet for object categorization in the ImageNet competition. It can be expected that more and more computer vision applications will benefit from Multi-modal Information Learning.
This special issue focuses on learning methods that achieve high-performance multi-modal information analysis and understanding at large scale in uncontrolled environments, a very challenging problem that attracts much attention from both academia and industry. We hope this special issue will aggregate top-level work on new advances in Multi-modal Information Learning from cross-media data. Its purpose is to provide a forum for researchers and practitioners to exchange ideas and report progress in related areas. Topics of interest include, but are not limited to:
Submitted papers must be original and must not be under consideration at any other venue. All submitted papers will be reviewed by at least three reviewers and selected based on their originality, significance, relevance, and clarity of presentation. The guest editors will make the final decisions on accepted papers. Manuscripts must be prepared according to the journal's Author Guidelines. Prospective authors should submit full manuscripts in MS Word or PDF format.
Guest Editors:
Xiaomeng Ma (Lead Guest Editor)
Associate Professor, Shenzhen Polytechnic, Shenzhen, China, [email protected], [email protected]

Sun
Associate Professor, Shanghai University, China, [email protected]
Important Dates:
Manuscript due: December 30, 2020
First round of reviews: February 28, 2021
Final decision: May 31, 2021
Peer Review Process:
All papers will go through peer review and will be evaluated by at least three reviewers. A thorough similarity check will be performed, and the guest editors will check for any significant overlap between the manuscript under consideration and any published papers or submitted manuscripts of which they are aware. In such cases, the article will be rejected without further consideration. The guest editors will make all reasonable efforts to receive the reviewers' comments and recommendations on time.
Submitted papers must present original research that has not been published and is not currently under review at any other venue. Previously published conference papers should be clearly identified at submission, together with an explanation of how they have been extended for this special issue (with at least 30% new material relative to the original work).
Submission Guidelines:
Paper submissions for the special issue should strictly follow the submission format and guidelines (https://www.springer.com/journal/521/submission-guidelines). Each manuscript should not exceed 16 pages in length (inclusive of figures and tables).
Manuscripts must be submitted through the journal's online system at https://www.editorialmanager.com/ncaa/default.aspx. Authors should select “TC: Multi-modal Information Learning and Analytics on Big Data” during the ‘Additional Information’ submission step.