If you’re reading this page, you’re probably considering submitting to ICER. Congratulations! We’re excited to review your work. Whether your research is just starting or nearly finished, this guide is intended to help authors meet the expectations of the computing education research community. It reflects a community-wide perspective on what constitutes rigorous research on the teaching and learning of computing.
You can read on for more detail, but here is a summary of some of this guide’s high level points:
- All research, regardless of method, epistemology, or focus, is in scope at ICER, as long as it concerns some aspect of learning about computing.
- ICER is a venue for research papers, not experience reports.
- Relevant prior work from all venues, not just computing education venues, should be used to motivate the work, as well as addressed in discussion.
- Theory can be highly valuable for justifying work, but is not required, especially for phenomena for which there are no sufficient theories.
- The conference is global, and so learning contexts should be described for a global audience.
- Provide enough methods detail to support replication and review.
- It’s critical to report more than just statistical significance.
- It’s critical to report detailed arguments and rationale for qualitative methods.
- Abstracts should describe results, not an outline of the paper.
- Submissions should be original, in English, and anonymized.
What’s in scope at ICER?
ICER’s goal is to be an intellectually inclusive conference. Therefore, any research related to the teaching and learning of computing is in scope, using any definition of computing, and using any methods. We particularly encourage work that goes beyond the historical focus on introductory programming courses in post-secondary education, such as work on primary and secondary education, as well as informal learning in any setting or learning amongst adults. (Note that simply using computing technology to perform research in an educational setting is not in itself enough; the focus must be on the teaching or learning of computing topics.) If you haven’t seen a particular topic published at ICER, or you haven’t seen a particular method be used, that’s okay. Our program committee will be trained to value new topics, new methods, new perspectives, and new ideas, just as much as more broadly accepted ones.
That said, under the current review process, we cannot promise that we’ve recruited all the necessary expertise to our program committee to fairly review your work. Check who is on the program committee this year, and if you don’t see a lot of expertise on your methods or phenomena, make sure your submission spends a bit of extra time explaining the things our reviewers are unlikely to know.
Note that we used the word research above. Research is hard to define, but we can say that ICER is not a place to submit practical descriptions of courses, curriculum, or instruction materials you want to share. If you’re looking to share your experiences, consider submitting to the SIGCSE Technical Symposium’s Experience Report track, or simply blogging about it. Research, in contrast, should meet the criteria presented in the next question.
What makes a good computing education research paper?
It’s impossible to anticipate every kind of paper that might be submitted. The current ICER review criteria are listed on the Research Paper page. These will evolve over time as the community grows. There are many other criteria that reviews could discuss in relation to specific types of research contributions, but the criteria above are generally inclusive of many epistemologies and contribution types. These include empirical studies that answer research questions or replicate prior results, novel learning technologies, novel arguments about computing education phenomena, literature reviews, and other types of research contributions.
What prior work should be cited?
All significant publications that are relevant to your research questions. This includes not only work that has been published in ACM-affiliated venues like ICER, ITiCSE, SIGCSE, and Koli Calling, but also the wide range of conferences and journals in the learning sciences, education, educational psychology, HCI, and software engineering. If you are new to research, consider guides for conducting literature reviews and surveys of prior work like the 2019 Cambridge Handbook of Computing Education Research, which attempts to survey everything we know about computing education up to 2018.
Papers will be critiqued for not being adequately grounded in prior work published across academia. They will also be critiqued for not accurately citing work: read what you cite closely and ensure that the discoveries in published work actually support your claims; many of the authors of the works you’re citing are members of the computing education research community and may be your reviewers. Finally, papers will also be critiqued for not returning to prior work in a discussion of a paper’s contributions. All papers should explain how the paper’s contributions advance upon prior work, cause us to reinterpret prior work, or reveal conflicts with prior work.
How should theory be used?
Different disciplines across academia vary greatly on how they use and develop theory. And our community has discussed the role of theory multiple times:
- Malmi et al. (2019) found that while computing education researchers have widely cited many dozens of unique theoretical ideas about learning, behavior, beliefs, and other phenomena, the use of theory in the field remains somewhat shallow.
- Kafai et al. (2019) argued that there are many types of theories, and that we should more deeply leverage their explanatory potential, especially theories about the sociocultural and societal factors at play in computing education, not just the cognitive factors.
- Nelson and Ko (2018) argued that there are tensions between expectations of theory building and innovative exploration of design ideas, and that our field’s theory building should focus on theories specific to computing education.
These works suggest that theory can be a powerful tool for predicting and explaining computing education phenomena, and authors should use available theory to plan their research and explain their results. However, reviewers should be careful not to use theory as a form of gatekeeping against surprising results that current theories cannot explain. Theories are not required to motivate or justify an empirical study, especially when no adequate theory exists for a phenomenon.
In addition to using theories when appropriate, ICER encourages the contribution of new theories. There is not a community-level consensus on what constitutes a good theory contribution, but there are examples.
How should educational contexts be described?
If you’re reporting on empirical work in a specific education context or set of contexts, it is important to remember that our research community is global, and that education systems across the world are structured differently. Write for a reader who knows nothing about your system. Describe the structure of the educational system and define terminology specific to it. Characterize who is teaching, and what prior knowledge and preparation they have. When describing learners, at a minimum, describe their gender, race, ethnicity, age, level in school, and prior knowledge. Include information about other structural factors, such as whether courses are required or elective, what incentives students have to enroll in courses, and how students in a course vary. For authors in the United States, terms to avoid include “elementary school”, “middle school”, and “high school”, which do not have well-defined meanings elsewhere.
What details should we report about our methods?
ICER values a wide range of methods, including quantitative, qualitative, design-based, and beyond. It’s critical to describe your methods in detail, so that both reviewers and future readers can understand how you arrived at your conclusions.
Some contributions might benefit from following the Center for Open Science’s recommendations to ensure replicable, transparent science. These include practices such as:
- Posting data to a trusted repository.
- Citing that data properly in the paper.
- Posting any code used for analysis to a trusted repository.
- Having results independently reproduced.
- Posting the materials used for the study to a trusted repository.
- Pre-registering studies and their analysis plans before they are conducted.
Our community is quite far from adopting any of these standards, and pursuing many of these goals might impose significant barriers to conducting research ethically, as educational data can often not be sufficiently anonymized to prevent disclosing identity. Therefore, these supplementary materials are not required for review.
How should we report statistics?
The world has moved beyond p-values, but computing education, like most of academia, still relies on them. It’s time to adopt best practices from broader scientific communities. Therefore, when reporting the results of statistical hypothesis tests, it is critical to report:
- The test used
- The rationale for choosing the test
- The test statistic computed
- The actual p-value (not just whether it was greater than or less than an arbitrary threshold)
- An effect size and its confidence interval
Effect sizes are especially relevant, as they indicate the extent to which something impacts or explains some phenomenon in computing education; small effect sizes might have little practical significance for learning. The above data should be reported regardless of whether a hypothesis test was significant. Chapters that introduce statistical methods can be found in the Cambridge Handbook of Computing Education Research.
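As an illustration only (not a required tool or format), here is a minimal sketch of reporting all of the above for a comparison of two hypothetical course sections. It assumes SciPy is available for the Welch’s t-test; the exam scores, group names, and the normal-approximation confidence interval for Cohen’s d are illustrative choices, not recommendations.

```python
import math
from scipy import stats  # assumed available; any statistics package works

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation (independent samples)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def d_confidence_interval(d, na, nb, z=1.96):
    """Approximate 95% CI for Cohen's d (normal approximation)."""
    se = math.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    return d - z * se, d + z * se

# Hypothetical exam scores from two course sections
group_a = [72, 85, 78, 90, 66, 81, 77, 88]
group_b = [65, 70, 62, 74, 59, 68, 71, 64]

# Welch's t-test: chosen because it does not assume equal variances
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
d = cohens_d(group_a, group_b)
lo, hi = d_confidence_interval(d, len(group_a), len(group_b))

# Report the test, rationale, test statistic, exact p-value, and effect size
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A report built from this output would name the test (Welch’s t-test), its rationale (unequal variances), the statistic and exact p-value, and the effect size with its interval, regardless of whether the result crossed a significance threshold.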
Do not assume that reviewers or future readers have a deep understanding of statistical methods (although they might). If you’re using advanced techniques, justify them in detail, so that the reviewers and future readers understand your choice of methods. We recognize that page limits might prevent a detailed explanation of methods for entirely unfamiliar readers.
How should we report on the reliability of qualitative methods?
Best practices in other fields for addressing the reliability of qualitative methods suggest providing detailed arguments and rationale for qualitative approaches and analyses. There is no single best standard. When qualitative data is counted and used for quantitative methods, authors should report on the inter-rater reliability (IRR) of the qualitative judgements underlying those counts. There are many ways of calculating inter-rater reliability, each with tradeoffs. However, note that IRR analyses are quite rare in adjacent fields like HCI, and rare in computing education as well.
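As one example of such a calculation (Cohen’s kappa is only one of the many IRR measures mentioned above, and its suitability depends on your coding scheme), here is a minimal sketch for two raters assigning categorical codes. The code labels and excerpts are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items the raters coded identically
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal distribution
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two researchers to ten interview excerpts
rater1 = ["confusion", "strategy", "strategy", "affect", "confusion",
          "strategy", "affect", "confusion", "strategy", "affect"]
rater2 = ["confusion", "strategy", "affect", "affect", "confusion",
          "strategy", "affect", "strategy", "strategy", "affect"]

print(f"Cohen's kappa = {cohens_kappa(rater1, rater2):.2f}")
```

Whatever measure you choose, report which one you used and why, alongside the resulting value, so reviewers can judge whether it fits your data.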
What makes a good abstract?
A good abstract should summarize the question your paper asks and what answers it found. It’s not enough to just say “We discuss our results and their implications”; say what you actually discovered, so future readers can learn that from your summary.
If your paper is empirical in nature, ICER recommends (but does not require) using a structured abstract that contains the following sections:
- Motivation. What phenomena are you considering and why?
- Objectives. What research questions were you trying to answer?
- Method. What did you do to answer your research questions?
- Results. What did you discover? Both positive and negative results should be summarized.
- Discussion. What implications does your discovery have on prior and future research, and on the practice of computing education?
Not all papers will fit this structure, but if yours does, it will greatly help reviewers and future readers understand your paper’s research design and contribution.
What counts as plagiarism?
Read ACM’s policy on Plagiarism, Misrepresentation, and Falsification. Our reviewers are trained on its policies.
Must submissions be in English?
At the moment, yes. Our reviewing community’s only lingua franca is English, and any other language would greatly limit the pool of expert reviewers to evaluate your work. We recognize that this is a challenging barrier for many authors globally, and that it greatly limits the diversity of voices in global discourse on computing education. Therefore, we wish to express our support of other computing education conferences around the world that you might consider submitting papers to.
To mitigate this somewhat, we do instruct our reviewers to not penalize a paper for minor English spelling and grammar errors that can easily be corrected with minor revisions. However, because the conference does not currently have a revise and resubmit model, it is important that the paper’s use of English is reasonably correct upon submission.
American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33–40. http://edr.sagepub.com/content/35/6/33.full.pdf+html
Decker, A., McGill, M. M., & Settle, A. (2016). Towards a common framework for evaluating computing outreach activities. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE ’16) (pp. 627–632). ACM. https://doi.org/10.1145/2839509.2844567
Fincher, S. A., & Robins, A. V. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press.