ACM SIGIR 2022: 45th International ACM SIGIR Conference on Research and Development in Information Retrieval

Madrid, Spain

Conference Dates: Jul 11, 2022 - Jul 15, 2022



The Research Impact Score is a metric devised to rank conferences based on the number of contributing best scientists, together with an h-index estimated from the scientific papers published by those scientists. See more details on our methodology page.

  • Research Impact Score: 11.30
  • Contributing Best Scientists: 292
  • H5-index:
  • Papers Published by Best Scientists: 688
  • Research Ranking (Computer Science): 28

Conference Call for Papers

The annual SIGIR conference is the major international forum for the presentation of new research results, and the demonstration of new systems and techniques, in the broad field of information retrieval (IR). The 45th ACM SIGIR conference will be run as a hybrid conference from July 11th to 15th, 2022: in person in Madrid, Spain, with support for remote participation. We welcome contributions related to any aspect of information retrieval and access, including theories, foundations, algorithms, evaluation, analysis, and applications. The conference and program chairs invite those working in areas related to IR to submit high-impact original papers for review. Please note that calls for other paper tracks, as well as workshops, tutorials, the doctoral consortium, industry day, and other SIGIR 2022 venues, will be released separately.

Important Dates
Time zone: Anywhere on Earth (AoE)

Full paper abstracts due: January 21, 2022

Full papers due: January 28, 2022

Full paper notifications: March 31, 2022

Publication date: accepted papers will be published open access in the ACM Digital Library up to two weeks prior to the conference opening date (exact date TBA)

Full paper authors are required to submit an abstract by midnight January 21, 2022 AoE. Paper submission (deadline: midnight January 28, 2022 AoE) is not possible without a submitted abstract. Immediately after the abstract deadline, PC Chairs will desk reject submissions that lack informative titles and abstracts (“placeholder abstracts”).

AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.

Submission Guidelines
Authors and reviewers may wish to consult this brief checklist on how to strengthen an IR paper.

Full research papers must describe original work that has not been previously published, has not been accepted for publication elsewhere, and is not simultaneously submitted or currently under review at another journal or conference (including the short paper track of SIGIR 2022).

Submissions of full research papers must be in English, in PDF format, and be at most 9 pages (including figures, tables, proofs, appendixes, acknowledgments, and any content except references) in length, with unrestricted space for references, in the current ACM two-column conference format. Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use “sigconf” proceedings template for LaTeX and the Interim Template for Word). ACM’s CCS concepts and keywords are not required for review but may be required if accepted and published by the ACM.

For LaTeX, the following should be used:

\documentclass[sigconf,natbib=true,anonymous=true]{acmart}
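For orientation, a minimal skeleton that compiles with these class options might look as follows; the title, section text, and the references.bib file name are illustrative placeholders, not part of the official template:

\documentclass[sigconf,natbib=true,anonymous=true]{acmart}

\begin{document}

\title{Your Paper Title}

% With anonymous=true, acmart masks author information in the generated PDF;
% the real author blocks can remain in the source for the camera-ready version.
\author{Anonymous Author(s)}
\affiliation{\institution{Anonymous Institution}\country{}}

% In acmart, the abstract is declared before \maketitle.
\begin{abstract}
An informative abstract (placeholder abstracts are desk rejected).
\end{abstract}

\maketitle

\section{Introduction}
Body text, at most 9 pages for all content except references.

% "references" is a placeholder name for your own .bib file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}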
Submissions must be anonymous and should be submitted electronically via EasyChair:

https://easychair.org/conferences?conf=sigir22

At least one author of each accepted paper is required to register for the conference and present the work there.

Anonymity
The full paper review process is double-blind. Authors are required to take all reasonable steps to preserve the anonymity of their submission. The submission must not include author information and must not include citations or discussion of related work that would make the authorship apparent. However, it is acceptable to refer to companies or organizations that provided datasets, hosted experiments, or deployed solutions if there is no implication that the authors are currently affiliated with these organizations. While authors can upload to institutional or other preprint repositories such as arXiv.org before reviewing is complete, we generally discourage this since it places anonymity at risk. If the paper is already on arXiv, please change the title and abstract so that it is not immediately obvious they are the same. Do not upload the paper to a preprint site after submission to SIGIR; wait until a review decision has been made, to avoid reviewers seeing the paper in daily digests or other places. Breaking anonymity puts the submission at risk of being desk rejected.

Authors should carefully go through ACM’s authorship policy before submitting a paper. Please ensure that all authors are clearly identified in EasyChair before the submission deadline. To support the identification of reviewers with conflicts of interest, the full author list must be specified at submission time. No changes to authorship will be permitted after submissions close or for the camera-ready version under any circumstances, so please make sure the author list is correct when submissions close.

Desk Rejection Policy
Submissions that violate the preprint policy, anonymity, length, or formatting requirements, or are determined to violate ACM’s policies on academic dishonesty, including plagiarism, author misrepresentation, falsification, etc., are subject to desk rejection by the chairs. Any of the following may result in desk rejection:

Figures, tables, proofs, appendixes, acknowledgements, or any other content after page 9 of the submission.
Formatting not in line with the guidelines provided above.
Authors or authors’ institutional affiliations clearly named or easily discoverable.
Links to source code repositories or extended versions of the current paper that reveal author identities. It is recommended to hold these for the final published version and to submit source code for artifact review.
Addition of authors after abstract submission.
Content that has been determined to have been copied from other sources.
Any form of academic fraud or dishonesty.
Lack of topical fit for SIGIR.

Relevant Topics
Relevant topics include:

Search and ranking. Research on core IR algorithmic topics, including IR at scale, such as:

Queries and query analysis (e.g., query intent, query understanding, query suggestion and prediction, query representation and reformulation, spoken queries).
Web search (e.g., ranking at web scale, link analysis, sponsored search, search advertising, adversarial search and spam, vertical search).
Retrieval models and ranking (e.g., ranking algorithms, learning to rank, language models, retrieval models, combining searches, diversity, aggregated search, dealing with bias).
Efficiency and scalability (e.g., indexing, crawling, compression, search engine architecture, distributed search, metasearch, peer-to-peer search, search in the cloud).
Theoretical models and foundations of information retrieval and access (e.g., new theory, fundamental concepts, theoretical analysis).
Content recommendation, analysis and classification. Research focusing on recommender systems, rich content representations and content analysis, such as:

Filtering and recommendation (e.g., content-based filtering, collaborative filtering, recommender systems, recommendation algorithms, zero-query and implicit search, personalized recommendation).
Document representation and content analysis (e.g., summarization, text representation, linguistic analysis, readability, NLP for search, cross-lingual and multilingual search, information extraction, opinion mining and sentiment analysis, clustering, classification, topic models).
Knowledge acquisition (e.g. information extraction, relation extraction, event extraction, query understanding, human-in-the-loop knowledge acquisition).
Machine Learning and NLP for Search and Recommendation. Research bridging ML, NLP, and IR.

Core ML (e.g. deep learning for IR, embeddings, intelligent personal assistants and agents, unbiased learning).
Question answering (e.g., factoid and non-factoid question answering, interactive question answering, community-based question answering, question answering systems).
Conversational systems (e.g., conversational search interaction, dialog systems, spoken language interfaces, intelligent chat systems).
Explicit semantics (e.g. semantic search, named-entities, relation and event extraction).
Knowledge representation and reasoning (e.g., link prediction, knowledge graph completion, query understanding, knowledge-guided query and document representation, ontology modeling).
Humans and interfaces. Research into user-centric aspects of IR including user interfaces, behavior modeling, privacy, interactive systems, such as:

Mining and modeling users (e.g., user and task models, click models, log analysis, behavioral analysis, modeling and simulation of information interaction, attention modeling).
Interactive search (e.g., search interfaces, information access, exploratory search, search context, whole-session support, proactive search, personalized search).
Social search (e.g., social media search, social tagging, crowdsourcing).
Collaborative search (e.g., human-in-the-loop, knowledge acquisition).
Information security (e.g., privacy, surveillance, censorship, encryption, security).
User studies comparing theory to human behaviour for search and recommendation.
Evaluation. Research that focuses on the measurement and evaluation of IR systems, such as:

User-centered evaluation (e.g., user experience and performance, user engagement, search task design).
System-centered evaluation (e.g., evaluation metrics, test collections, experimental design, evaluation pipelines, crowdsourcing).
Beyond Cranfield (e.g., online evaluation, task-based, session-based, multi-turn, interactive search).
Beyond labels (e.g., simulation, implicit signals, eye-tracking and physiological signals).
Beyond effectiveness (e.g., value, utility, usefulness, diversity, novelty, urgency, freshness, credibility, authority).
Methodology (e.g., statistical methods, reproducibility, dealing with bias, new experimental approaches, metrics for metrics).
Fairness, Accountability, Transparency, Ethics, and Explainability (FATE) in IR. Research on aspects of fairness and bias in search and recommender systems.

Fairness, accountability, transparency (e.g. confidentiality, representativeness, discrimination and harmful bias).
Ethics, economics, and politics (e.g., studies on broader implications, norms and ethics, economic value, political impact, social good).
Two-sided search and recommendation scenarios (e.g. matching users and providers, marketplaces).
Domain-specific applications. Research focusing on domain-specific IR challenges, such as:

Local and mobile search (e.g., location-based search, mobile usage understanding, mobile result presentation, audio and touch interfaces, geographic search, location context in search).
Social search (e.g., social networks in search, social media in search, blog and microblog search, forum search).
Search in structured data (e.g., XML search, graph search, ranking in databases, desktop search, email search, entity-oriented search).
Multimedia search (e.g., image search, video search, speech and audio search, music search).
Education (e.g., search for educational support, peer matching, info seeking in online courses).
Legal (e.g., e-discovery, patents, other applications in law).
Health (e.g., medical, genomics, bioinformatics, other applications in health).
Knowledge graph applications (e.g. conversational search, semantic search, entity search, KB question answering, knowledge-guided NLP, search and recommendation).
Other applications and domains (e.g., digital libraries, enterprise, expert search, news search, app search, archival search, new retrieval problems including applications of search technology for social good).

Overview

What are the top research topics at the International ACM SIGIR Conference on Research and Development in Information Retrieval?

  • Information retrieval (51.23%)
  • Artificial intelligence (28.69%)
  • World Wide Web (16.15%)

The conference mainly deals with areas of study such as Information retrieval, Artificial intelligence, World Wide Web, Natural language processing and Relevance (information retrieval). Within Information retrieval, the main emphasis is on the interrelated topic of Ranking, with a particular focus on Learning to rank.

Topics in Artificial intelligence were tackled alongside various other fields such as Machine learning, Task (project management), Data mining and Pattern recognition. Within Machine learning, the conference concentrates on Recommender systems, while its World Wide Web research deals mostly with Web pages.

Specifically, studies on Question answering are prevalent in the Natural language processing works discussed. The research on Relevance (information retrieval) discussed in the conference draws on the closely related field of Document retrieval. The work on Query expansion tackled in the event brings together disciplines like Query language, Web query classification, Sargable, Query optimization and Concept search.

What are the most cited papers published at the conference?

  • Probabilistic latent semantic indexing (4179 citations)
  • An algorithmic framework for performing collaborative filtering (2554 citations)
  • A language modeling approach to information retrieval (2538 citations)

Research areas of the most cited articles at International ACM SIGIR Conference on Research and Development in Information Retrieval:

The published articles focus largely on the fields of Information retrieval, Artificial intelligence, Data mining, Machine learning and Natural language processing. While the primary focus of the most cited publications is Information retrieval, they also address topics surrounding Ranking and Quality (business). In addition to Artificial intelligence research, the conference articles explore topics under Recommender system and Pattern recognition.

What topics is the last edition of the conference best known for?

  • Artificial intelligence
  • Operating system
  • The Internet

The previous edition focused in particular on these issues:

The conference mostly deals with topics like Artificial intelligence, Information retrieval, Machine learning, Recommender system and Task (project management). Its Artificial intelligence research frequently extends into Natural language processing, and the concepts on Information retrieval presented at the conference also apply to other research fields, including Graph (abstract data type), Context (language use) and Representation (mathematics).

The event focuses on Machine learning, but the discussions also offer insight into other areas such as Generalization, Inference and Robustness (computer science). In addition to Recommender system research, the event also explores topics under Counterfactual thinking, Key (cryptography), Preference and Personalization. The concepts on Ranking (information retrieval) presented at the event also apply to other research fields, including Matching (statistics) and Transformer (machine learning model).

The most cited articles from the last conference are:

  • Deconfounded Video Moment Retrieval with Causal Intervention (16 citations)
  • Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method (16 citations)
  • CauseRec: Counterfactual User Sequence Synthesis for Sequential Recommendation (16 citations)

Paper citations over time

A key indicator for each conference is its effectiveness in reaching other researchers with the papers published at that venue.

The chart below presents the interquartile range (first quartile 25%, median 50% and third quartile 75%) of the number of citations of articles over time.

[Chart: interquartile range of paper citation counts over time]

The top authors publishing at International ACM SIGIR Conference on Research and Development in Information Retrieval (based on the number of publications) are:

  • Maarten de Rijke (121 papers) published 12 papers at the last edition, 1 less than at the previous edition,
  • W. Bruce Croft (113 papers) published 2 papers at the last edition, 3 less than at the previous edition,
  • ChengXiang Zhai (73 papers) published 3 papers at the last edition, the same number as at the previous edition,
  • Iadh Ounis (70 papers) published 2 papers at the last edition, 2 less than at the previous edition,
  • Ryen W. White (70 papers) was absent from the last edition.

The overall trend for top authors publishing at this conference is outlined below. The chart shows the number of publications at each edition of the conference for top authors.

[Chart: publications per edition for top authors]

Only papers with recognized affiliations are considered.

The top affiliations publishing at International ACM SIGIR Conference on Research and Development in Information Retrieval (based on the number of publications) are:

  • Microsoft (588 papers) published 26 papers at the last edition, 6 less than at the previous edition,
  • University of Massachusetts Amherst (247 papers) published 9 papers at the last edition, 1 less than at the previous edition,
  • University of Glasgow (232 papers) published 10 papers at the last edition, 1 less than at the previous edition,
  • Yahoo! (220 papers) published 1 paper at the last edition, 1 less than at the previous edition,
  • University of Amsterdam (212 papers) published 12 papers at the last edition, 4 less than at the previous edition.

The overall trend for top affiliations publishing at this conference is outlined below. The chart shows the number of publications at each edition of the conference for top affiliations.

[Chart: publications per edition for top affiliations]

Publication chance based on affiliation

The publication chance index shows the ratio of articles published by the best research institutions at a given conference edition to all articles published at that edition. The best research institutions were selected based on the largest number of articles published across all editions of the conference.

The chart below presents the percentage share of articles from top institutions (based on their ranking by total papers). Top affiliations were grouped by their rank into the following tiers: top 1-10, top 11-20, top 21-50, and top 51+. Only articles with a recognized affiliation are considered.
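Read literally, this corresponds to a per-tier share of the following form (a plausible formalization in our own notation, not necessarily the site's exact formula):

\[
\mathrm{share}(T) \;=\; \frac{\lvert \{\, p : p \text{ has at least one author from a tier-}T\text{ institution} \,\} \rvert}{\lvert \{\, p : p \text{ has a recognized affiliation} \,\} \rvert},
\]

where $T$ ranges over the tiers top 1-10, top 11-20, top 21-50, and top 51+.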

[Chart: share of publications by affiliation tier]

During the most recent 2021 edition, 5.21% of publications had an unrecognized affiliation. Out of the publications with recognized affiliations, 23.56% were posted by at least one author from the top 10 institutions publishing at the conference. Another 19.37% included authors affiliated with research institutions from the top 11-20 affiliations. Institutions in the 21-50 range accounted for 15.45% of these publications, and the remaining 41.62% came from other institutions.
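As a quick consistency check, these four tier shares are computed over the recognized-affiliation papers only, and they do sum to the whole:

\[
23.56\% + 19.37\% + 15.45\% + 41.62\% = 100.00\%.
\]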

Returning Authors Index

A common phenomenon among researchers publishing scientific articles is the deliberate selection of conferences they have already attended in the past. It is therefore worth analyzing how often authors return to the same conference from year to year.

The Returning Authors Index presented below illustrates the ratio of authors who participated in both a given edition and the previous edition of the conference, relative to all authors publishing in the given year.
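One plausible formalization of this ratio (our notation, not the site's published formula) is:

\[
\mathrm{RAI}_t \;=\; \frac{\lvert A_t \cap A_{t-1} \rvert}{\lvert A_t \rvert},
\]

where $A_t$ denotes the set of authors publishing at edition $t$.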

[Chart: Returning Authors Index by edition]

Returning Institution Index

The graph below shows the Returning Institution Index, illustrating the ratio of institutions that participated in both a given and the previous edition of the conference in relation to all affiliations present in a given year.
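Under the same reading, with $I_t$ denoting the set of affiliations present at edition $t$ (again our notation, not the site's published formula):

\[
\mathrm{RII}_t \;=\; \frac{\lvert I_t \cap I_{t-1} \rvert}{\lvert I_t \rvert}.
\]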

[Chart: Returning Institution Index by edition]

The experience to innovation index

Our experience to innovation index was created to show a cross-section of the experience level of authors publishing at a conference. The index covers the authors publishing at the last edition of a conference, grouped by the total number of publications throughout their academic career (P) and the total number of citations those publications have ever received (C).

The group intervals were selected empirically to best show the diversity of the authors' experience; the labels were chosen for convenience, not as a judgment. The authors were divided into the following groups (a worked example follows the list):

  • Novice - P < 5 or C < 25 (the number of publications less than 5 or the number of citations less than 25),
  • Competent - P < 10 or C < 100 (the number of publications less than 10 or the number of citations less than 100),
  • Experienced - P < 25 or C < 625 (the number of publications less than 25 or the number of citations less than 625),
  • Master - P < 50 or C < 2500 (the number of publications less than 50 or the number of citations less than 2500),
  • Star - P ≥ 50 and C ≥ 2500 (both the number of publications greater than 50 and the number of citations greater than 2500).
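Since the conditions overlap when read in isolation, the groups are presumably applied in order from Novice to Star, with each author assigned to the first matching group; the source does not state this explicitly. Under that assumption, an author with P = 30 publications and C = 400 citations is classified as Experienced:

\[
\underbrace{P \ge 5 \ \wedge\ C \ge 25}_{\text{not Novice}},\qquad
\underbrace{P \ge 10 \ \wedge\ C \ge 100}_{\text{not Competent}},\qquad
\underbrace{C = 400 < 625}_{\text{Experienced}}.
\]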

[Chart: distribution of last-edition authors across experience groups]

For publications with multiple authors, the chart below illustrates the experience level of the first author.

[Chart: experience levels of first authors]

