ISSTA 2021 : International Symposium on Software Testing and Analysis

Aarhus, Denmark

Submission Deadline: Friday 29 Jan 2021

Conference Dates: Jul 12, 2021 - Jul 12, 2021


The Ranking & Metrics Impact Score is a metric devised to rank conferences based on the number of top scientists contributing to them, in addition to an h-index estimated from the papers those scientists published at the venue. See more details on our methodology page.

  • Research Impact Score: 5.10
  • Contributing Best Scientists: 64
  • H5-index:
  • Papers published by Best Scientists: 90
  • Research Ranking (Computer Science): 87

Conference Call for Papers

Authors are invited to submit research papers describing original contributions to the testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, methods for emerging systems, in-depth case studies, infrastructure for testing and analysis, or tools are welcome.

Experience Papers
Authors are invited to submit experience papers describing significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Reproducibility Studies
ISSTA would like to encourage researchers to reproduce results from previous papers. A reproducibility study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. In particular, reproducibility studies are encouraged to target techniques that were previously evaluated only on proprietary subject programs or inputs.

A reproducibility study should clearly report on results that the authors were able to reproduce, as well as on aspects of the work that were irreproducible. In the latter case, authors are encouraged to communicate or collaborate with the original paper’s authors to determine the cause of any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors not to focus on a single paper/artifact, but instead to perform a comparative experiment across multiple related approaches.

Reproducibility studies should follow the ACM guidelines on reproducibility (different team, different experimental setup): the measurement can be obtained with stated precision by a different team, using a different measuring system, in a different location, on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently. It is therefore insufficient to focus on repeatability (i.e., same experiment) alone. Reproducibility studies will be evaluated according to the following standards:

Depth and breadth of experiments
Clarity of writing
Appropriateness of conclusions
Amount of useful, actionable insights
Availability of artifacts
We expect reproducibility studies to clearly point out the artifacts the study is built on, and to submit those artifacts to artifact evaluation (see below). Artifacts evaluated positively will be eligible for the prestigious Results Replicated or Results Reproduced badges.

Overview

Top Research Topics at International Symposium on Software Testing and Analysis

  • Programming language (24.92%)
  • Software engineering (16.83%)
  • Software (15.55%)

International Symposium on Software Testing and Analysis was organized to reinforce research efforts on Programming language, Software engineering, Software, Theoretical computer science and Test case. The conference concentrates on Programming language topics that focus on Java, Static analysis, Debugging, Model checking and Concurrency. It dives deep into the relationship between the study of Static analysis and Program analysis.

While International Symposium on Software Testing and Analysis focused on Software engineering, it was also able to explore topics like Software system, Software construction, Software testing and Code (cryptography). International Symposium on Software Testing and Analysis explores research in Software and the adjacent study of Real-time computing. In addition to Theoretical computer science research, the event aims to explore topics under Algorithm and Symbolic execution.

While Test case is another focus of the conference, it has also provided insights into the studies of Data mining and Code coverage. It focuses on Test (assessment) but sometimes tackles the closely related topic of Reliability engineering, which is concerned with White-box testing. The conference explores topics in White-box testing that can be helpful for research in disciplines like Regression testing and Non-regression testing.

What are the most cited papers published at the conference?

  • Defects4J: a database of existing faults to enable controlled testing studies for Java programs (610 citations)
  • Korat: automated testing based on Java predicates (596 citations)
  • Dytan: a generic dynamic taint analysis framework (456 citations)

Research areas of the most cited articles at International Symposium on Software Testing and Analysis:

The conference articles facilitate discussions on Programming language, Data mining, Reliability engineering, Theoretical computer science and Java. The most cited papers focus on Data mining but sometimes tackle the closely related topic of Cluster analysis which is concerned with Scalability. The published papers facilitate discussions on Reliability engineering that incorporate concepts from other fields like Test (assessment), Software, Test case, Code coverage and Regression testing.

Papers citation over time

A key indicator for each conference is its effectiveness in reaching other researchers with the papers published at that venue.

The chart below presents the interquartile range (first quartile 25%, median 50% and third quartile 75%) of the number of citations of articles over time.
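As a rough sketch of what the chart summarizes (the citation counts below are made-up illustrative values, and the interpolation method is an assumption), the three quartiles for one cohort of papers could be computed like this:

```python
# Interquartile summary of citation counts for papers from one edition.
# The counts are invented for illustration only.
citations = [3, 7, 12, 0, 25, 9, 41, 5, 18, 2]

def percentile(values, p):
    """Percentile via linear interpolation between closest ranks."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

q1 = percentile(citations, 25)      # first quartile (25%)
median = percentile(citations, 50)  # median (50%)
q3 = percentile(citations, 75)      # third quartile (75%)
```

The interquartile range (q3 - q1) then gives the spread of "typical" papers, which is more robust to a few highly cited outliers than the mean.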


The top authors publishing at International Symposium on Software Testing and Analysis (based on the number of publications) are:

  • Alessandro Orso (17 papers) published 2 papers at the last edition,
  • Michael D. Ernst (17 papers) published 3 papers at the last edition,
  • Darko Marinov (16 papers) published 1 paper at the last edition, the same number as at the previous edition,
  • Andreas Zeller (16 papers) absent at the last edition,
  • Tao Xie (13 papers) absent at the last edition.

The overall trend for top authors publishing at this conference is outlined below. The chart shows the number of publications at each edition of the conference for top authors.


Only papers with recognized affiliations are considered.

The top affiliations publishing at International Symposium on Software Testing and Analysis (based on the number of publications) are:

  • University of Illinois at Urbana–Champaign (27 papers) published 1 paper at the last edition, 2 less than at the previous edition,
  • IBM (25 papers) absent at the last edition,
  • Saarland University (24 papers) published 1 paper at the last edition, 3 less than at the previous edition,
  • Microsoft (23 papers) absent at the last edition,
  • Georgia Institute of Technology (22 papers) published 2 papers at the last edition.

The overall trend for top affiliations publishing at this conference is outlined below. The chart shows the number of publications at each edition of the conference for top affiliations.


Publication chance based on affiliation

The publication chance index shows the ratio of articles published by the best research institutions at the conference edition to all articles published within that conference. The best research institutions were selected based on the largest number of articles published during all editions of the conference.

The chart below presents the percentage ratio of articles from top institutions (based on their ranking of total papers). Top affiliations were grouped by their rank into the following tiers: top 1-10, top 11-20, top 21-50, and top 51+. Only articles with a recognized affiliation are considered.
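As an illustrative sketch (the tier boundaries come from the text above, but the ranks and paper counts are invented, and attributing each paper to its best-ranked affiliation is an assumption about the methodology):

```python
from collections import Counter

def tier_label(rank):
    """Map an affiliation's overall rank to the tiers used in the chart."""
    if rank <= 10:
        return "top 1-10"
    if rank <= 20:
        return "top 11-20"
    if rank <= 50:
        return "top 21-50"
    return "top 51+"

# Each paper represented by the best (lowest) rank among its authors' affiliations.
paper_best_ranks = [2, 15, 33, 7, 80, 12, 55, 4]

counts = Counter(tier_label(r) for r in paper_best_ranks)
ratios = {tier: n / len(paper_best_ranks) for tier, n in counts.items()}
```

The four ratios sum to 1.0 over the papers with a recognized affiliation, matching how the percentages in the chart are reported.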


During the most recent 2018 edition, 2.50% of publications had an unrecognized affiliation. Out of the publications with recognized affiliations, 30.77% were posted by at least one author from the top 10 institutions publishing at the conference. Another 12.82% included authors affiliated with research institutions from the top 11-20 affiliations. Institutions from the 21-50 range included 15.38% of all publications and 41.03% were from other institutions.

Returning Authors Index

A common phenomenon among researchers publishing scientific articles is the deliberate selection of conferences they have already attended in the past. It is therefore worth analyzing how often authors participate in the same conference from year to year.

The Returning Authors Index presented below illustrates the ratio of authors who participated in both a given edition and the previous edition of the conference, relative to all participants in a given year.
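Under a straightforward reading of that definition (treating author names as identifiers; this is an assumption about the methodology, and the names are hypothetical), the index could be computed as:

```python
def returning_authors_index(previous_edition, current_edition):
    """Share of the current edition's authors who also published at the previous one."""
    prev = set(previous_edition)
    curr = set(current_edition)
    if not curr:
        return 0.0
    return len(curr & prev) / len(curr)

# Hypothetical author lists for two consecutive editions.
prev_authors = {"A. Orso", "M. Ernst", "D. Marinov", "T. Xie"}
curr_authors = {"A. Orso", "M. Ernst", "A. Zeller", "N. Author"}

index = returning_authors_index(prev_authors, curr_authors)  # 2 of 4 returned
```

The Returning Institution Index below can be computed the same way, with affiliations in place of author names.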


Returning Institution Index

The graph below shows the Returning Institution Index, illustrating the ratio of institutions that participated in both a given and the previous edition of the conference in relation to all affiliations present in a given year.


Experience to Innovation Index

Our experience to innovation index was created to show a cross-section of the experience level of authors publishing at a conference. The index includes the authors publishing at the last edition of a conference, grouped by total number of publications throughout their academic career (P) and the total number of citations of these publications ever received (C).

The group intervals were selected empirically to best show the diversity of the authors' experience; the labels were chosen as a convenience, not as a judgment. The authors were divided into the following groups:

  • Novice - P < 5 or C < 25 (the number of publications less than 5 or the number of citations less than 25),
  • Competent - P < 10 or C < 100 (the number of publications less than 10 or the number of citations less than 100),
  • Experienced - P < 25 or C < 625 (the number of publications less than 25 or the number of citations less than 625),
  • Master - P < 50 or C < 2500 (the number of publications less than 50 or the number of citations less than 2500),
  • Star - P ≥ 50 and C ≥ 2500 (both the number of publications greater than 50 and the number of citations greater than 2500).
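As written, the tiers overlap (an author with P < 5 also satisfies P < 10), so a natural reading is to test them in order, from Novice upward, and assign the first tier that matches. A sketch under that assumption:

```python
def experience_group(p, c):
    """Classify an author by publications P and citations C, testing tiers in order."""
    if p < 5 or c < 25:
        return "Novice"
    if p < 10 or c < 100:
        return "Competent"
    if p < 25 or c < 625:
        return "Experienced"
    if p < 50 or c < 2500:
        return "Master"
    return "Star"  # P >= 50 and C >= 2500
```

Under this ordering, "Star" is exactly the complement of all earlier tiers, which is consistent with its definition requiring both thresholds to be met.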


The chart below illustrates experience levels of first authors in cases of publications with multiple authors.


Other Conferences in Denmark

17th International conference on Universal Access in Human-Computer Interaction

Jul 23, 2023 - Jul 28, 2023

Copenhagen, Denmark

Deadline: Friday 21 Oct 2022

25th International Conference on Human-Computer Interaction

Jul 23, 2023 - Jul 28, 2023

Copenhagen, Denmark

Deadline: Friday 21 Oct 2022

9th International Conference on Human Aspects of IT for the Aged Population

Jul 23, 2023 - Jul 28, 2023

Copenhagen, Denmark

Deadline: Friday 21 Oct 2022
