
Peer Review in Computer Science: good, bad & broken

Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

This category can be followed from the open social web via the handle cs-peer-review-general@forum.cspaper.org

121 Topics 327 Posts

Subcategories


  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    59 Topics
    197 Posts
    root
23,680 submissions. 4,167 accepts. 17.6% acceptance. And a comment storm that won’t quit. The final decisions for AAAI 2026 are out, and they’ve left the community dissecting not just which papers got in, but how, and whether the process rewarded rigor or roulette. Below I synthesize what changed this year, what authors and reviewers report from the trenches, and which failure modes we should fix before the next cycle.

What actually changed in 2026

• A two-phase process. Phase 1: two reviewers per paper; if both lean the same way, the paper can be filtered early. Phase 2: only papers with disagreement or “borderline” status advance; a fresh discussion and an Area Chair (AC) or chair decision completes the call.
• AI-assisted peer review pilot. Tools assisted with reviewer assignment, rebuttal summarization, and chair/AC briefing. The official line is “assistive, not decisive,” but even assistive summaries can shape outcomes.
• Scale and pressure. Submissions jumped to 23,680, with 4,167 accepts (17.6%); a page-2 chart in one article visualizes this year’s acceptance and score distribution. Rumors of ~29k submissions, and debates about geographic concentration, fueled the sense that “this doesn’t scale.”

[image: 1762860676705-screenshot-2025-11-11-at-12.30.12.jpg]

What the community is saying (in their own words)

• “This is the weirdest reviewing process I’ve ever experienced.” (PC member)
• “Two lines of strength … then gave the score 10.” (on a Phase-1 review)
• “Collusion isn’t the bug, it’s the acceptance criterion.” (community quip)
• “If this paper is accepted, I’ll be very disappointed and will never submit or review [for] AAAI.” (frustrated reviewer)

These are short excerpts paraphrasing posts and screenshots circulated in the community; they capture the tone across threads while keeping quotes brief.

Six failure modes that surfaced (with concrete examples)

1) Phase-2 drift: “better” papers out, “weaker” papers in. Multiple accounts describe Phase-1 stacks where carefully argued, mid-to-positive reviews ended in rejection, yet in Phase 2 thinly justified enthusiasm pushed other papers forward. One recap highlights a case: “good papers got brushed off; weak ones were upgraded.” Why it matters: when the tie-break round inverts the Phase-1 signal, authors perceive arbitrary override, not consensus refinement.

2) The “10 after two lines” phenomenon. A viral anecdote: “One reviewer wrote two lines of strength, no weaknesses, then gave a 10.” Chairs may say final calls aren’t purely score-based, but this example epitomizes the imbalance in review depth. Why it matters: if thin praise can outweigh detailed critique, the process rewards confidence, not evidence.

3) Anti-rebuttal whiplash. Authors reported cases where, after reading rebuttals, other reviewers lowered their scores, “almost like they’re ganging up to get the papers rejected.” Why it matters: rebuttal should clarify misunderstandings, not trigger pile-ons. Without a norm against post-rebuttal score-lowering, authors see responses as risk, not remedy.

4) Personal-connection suspicion. A PC member wrote: “It feels like one reviewer is personally connected to a paper.” Even the appearance of conflict erodes trust when decisions concentrate in Phase 2. Why it matters: with fewer voices in Phase 2, disclosure and recusal policies must be stricter, or the venue inherits the look of favoritism.
5) Topic monocultures and “same-lab datasets”. Commenters complained that, in narrow areas, “papers are from the same lab, using the same data and table, sidestepping the bold claims.” Why it matters: if novelty narrows to a single pipeline plus dataset, we get leaderboard drift rather than field progress.

6) Opaque chair power, amplified by AI summaries. The pilot tools summarize reviews and rebuttals for ACs and chairs. Officially they don’t make decisions, but summaries can steer them, especially under time pressure. Why it matters: if the summary layer becomes the decisive layer, we need auditability: what did the model emphasize or omit? Which evidence did the chair actually read?

A few bright spots (yes, there were some)

• Selective, but still diverse accepts. Teams publicly celebrated oral and poster outcomes across multiple subareas, indicating that compelling work did land despite the noise. (Several examples of multi-paper acceptances are cataloged, including orals.)
• Process intent. The design intent (fast triage in Phase 1, deeper scrutiny in Phase 2, and AI to reduce clerical load) addresses real scaling pain points. But intention without instrumentation is not enough.

What to fix before AAAI 2027 (actionable proposals)

• Publish the weighting of scores, summaries, and chair discretion. A simple decision card per paper: inputs considered, their weight, and the final rationale (2–3 lines). Require chairs to confirm they read all full reviews (not just summaries), and log it.
• Guardrails for rebuttal dynamics. Allow score increases post-rebuttal; permit decreases only with a short, evidence-linked justification. Auto-flag large post-rebuttal score drops for AC scrutiny (a minimal sketch of such a flag appears at the end of this post).
• Minimum review depth for extreme scores. A 9/10 or 1/2 must include specific experimental checks, ablations, or error analyses. Thin reviews can’t carry extreme recommendations.
• Conflict-of-interest pressure test. Expand COI beyond coauthorship: same dataset or lab lineage, shared grants, or mentoring relationships within X years. Randomly audit Phase-2 paper–reviewer ties.
• AI summary audits. Store summary diffs: which points from reviews and rebuttals the tool included, collapsed, or omitted. Let authors request the summary artifact post-decision to check for gross omissions.
• Counter-monoculture incentives. Reserve a slice of accepts for out-of-cluster submissions that expand datasets, tasks, or methods beyond the mainline. Encourage replication and stress tests with principled novelty, not just incremental leaderboard bumps.
• Transparent statistics, not just headlines. Publish per-area acceptance rates and score–decision scatter plots.

Concrete vignettes to learn from

• “SPC said ‘accept’; final: reject.” One account describes an SPC-endorsed paper turned down at the end, fueling the belief that final decisions can nullify expert consensus without written rationale.
• “Rebuttal helped? Scores went down.” Multiple reports say rebuttals triggered score reductions, not clarifications. This suggests reviewers used the rebuttal phase to coordinate or defend priors rather than test claims.
• “Same-lab treadmill.” In narrow subfields, authors perceive that novelty ≈ the next tweak from the same pipeline. This is where cross-area reviewers and external datasets can diversify the signal.

Why this moment matters

A selective venue can survive a tough year of admits; it cannot survive a downward trust curve.
When authors feel that (a) thin reviews outrank deep analyses, (b) summaries outrank evidence, or (c) relationships outrank rules, they exit, or they game the system. The result is fewer risky ideas, more monocultures, and louder meta-drama. AAAI’s move to two phases and AI assistance could scale peer review. But scale without governance produces exactly what we saw: hot takes eclipsing handbooks. The fixes above are lightweight and testable in one cycle. We should try them.

Before you submit again: wind-tunnel your paper

Want an early read on whether your paper survives Phase-1 thin-review triage and Phase-2 scrutiny? Try a simulation pass. Tools like CSPaper.org let you upload a draft and receive structured community feedback quickly. CSPaper implements a simple three-step flow: go to the site, upload, get reviews. Use it to pressure-test ablations, claims, and clarity before the real thing.

Sources

https://www.reddit.com/r/MachineLearning/comments/1oaf1v0/d_on_aaai_2026_discussion/
https://aaai.org/conference/aaai/aaai-26/review-process/?utm_source=chatgpt.com
https://aaai.org/wp-content/uploads/2025/08/FAQ-for-the-AI-Assisted-Peer-Review-Process-Pilot-Program.pdf?utm_source=chatgpt.com
https://openaccept.org/c/ai/aaai/
https://x.com/lyson_ober/status/1986939786163011775
https://papercopilot.com/statistics/aaai-statistics/aaai-2026-statistics/
https://mp.weixin.qq.com/s/0Vbdd0ve1isnJ_e4_aOOVg
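One of the proposals above, auto-flagging large post-rebuttal score drops for AC scrutiny, is cheap to prototype. Below is a minimal sketch in Python; the data layout, the score scale, and the 0.5-point threshold are assumptions chosen for illustration, not part of AAAI’s actual tooling.

```python
# Minimal sketch (not AAAI's tooling): flag papers whose average reviewer
# score falls sharply after the rebuttal, so an AC can take a closer look.
from statistics import mean

def flag_post_rebuttal_drops(papers, max_drop=0.5):
    """papers: iterable of dicts with hypothetical fields 'id',
    'scores_pre', and 'scores_post' (lists of reviewer scores).
    Returns (id, drop) pairs whose mean score fell by more than max_drop."""
    flagged = []
    for p in papers:
        drop = mean(p["scores_pre"]) - mean(p["scores_post"])
        if drop > max_drop:
            flagged.append((p["id"], round(drop, 2)))
    return flagged

if __name__ == "__main__":
    demo = [
        {"id": "paper_001", "scores_pre": [6, 6, 5], "scores_post": [4, 5, 4]},
        {"id": "paper_002", "scores_pre": [5, 4, 4], "scores_post": [5, 5, 4]},
    ]
    # paper_001 dropped ~1.33 points on average and gets flagged for the AC.
    print(flag_post_rebuttal_drops(demo))
```

On a real venue’s data the threshold would need tuning to the score scale, and a flag should reach the AC as context for discussion, not as an automatic penalty.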
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    11 Topics
    18 Posts
    root
In a recent LinkedIn post, Eric Xing (President of MBZUAI, Professor at Carnegie Mellon University, and Chief Scientist at GenBio) raised a pressing concern about evolving peer review expectations at major AI and computer vision conferences. His post centers on a new clause in the CVPR 2026 Author Enrollment Form, which mandates that all qualified authors commit to serve as reviewers and warns that failure to do so could affect the review of their own submissions.

This move, presumably designed to manage the explosive growth of submissions to top-tier conferences like CVPR, has stirred significant debate. Xing describes the approach as “a new tactic of imposing demands,” arguing that it risks excluding legitimate contributors who, for valid reasons, cannot serve as reviewers. These include senior academics and leaders who already shoulder immense administrative and mentoring duties, non-CV collaborators such as domain experts or data providers, and industry sponsors who contribute meaningfully but lack technical reviewing expertise.

The broader implication of Xing’s post extends beyond CVPR. It highlights the sustainability crisis in academic peer review: a system strained by exponential submission growth, a limited pool of qualified reviewers, and rising expectations of review quality. While mandatory reviewing may be intended to distribute the workload more evenly, it blurs the line between authorship and reviewer obligation, potentially discouraging interdisciplinary collaboration and senior involvement.

Xing calls for a more thoughtful, long-term solution that ensures fairness, inclusivity, and sustainability in scientific publishing. Rather than relying on coercive mechanisms, the community might consider systemic innovations, from better incentive structures to technology that supports reviewer efficiency. One emerging direction involves AI-assisted review systems, such as CSPaper.org, which aim to streamline and partially automate aspects of peer review. Platforms like these could help alleviate the mounting pressure caused by the booming volume of submissions, a challenge that will only intensify as AI research continues its exponential trajectory.
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    12 Topics
    26 Posts
    Sylvia
The final decisions for EMNLP 2025 have been released, sparking a wave of reactions across research communities on social media such as Zhihu and Reddit. Beyond the excitement of acceptances and the disappointment of rejections, this cycle is marked by a remarkable policy twist: 82 papers were desk-rejected because at least one author had been identified as an irresponsible reviewer. This article provides an in-depth look at the decision process, the broader community responses, and a comprehensive table of decision outcomes shared publicly by researchers.

[image: 1755763433631-screenshot-2025-08-21-at-10.02.47.jpg]

Key Announcements from the Decision Letter

The program chairs’ decision email highlighted several important points:

Acceptance statistics
• 8,174 submissions received.
• 22.16% accepted to the Main Conference.
• 17.35% accepted as Findings.
• 82 papers desk-rejected due to irresponsible-reviewer identification.

Desk rejections linked to reviewer misconduct
• A novel and controversial policy: authors who were flagged as irresponsible reviewers had their own papers automatically desk-rejected.
• The official blog post elaborates on what qualifies as irresponsible reviewing (e.g., extremely short, low-quality, or AI-generated reviews).

Camera-ready submissions
• Deadline: September 19, 2025.
• Authors must fill in the Responsible NLP checklist, which will be published in the ACL Anthology alongside the paper.
• Allowed: one extra page for content, one page for limitations (mandatory), an optional ethics section, and unlimited references.

Presentation and logistics
• Papers must be presented either in person or virtually to be included in the proceedings.
• Oral vs. poster presentation decisions will be finalized after camera-ready submission.
• Registration deadline: October 3 (at least one author), with early in-person registration by October 6 due to Chinese government approval processes (the conference will be in Suzhou).

The Desk Rejection Controversy: 82 Papers Removed

This year’s 82 desk rejections triggered heated debate. While ensuring reviewer accountability is laudable, punishing co-authors for the actions of a single irresponsible reviewer is unprecedented and raises questions about fairness:

• Collective punishment? Innocent co-authors had their work invalidated.
• Transparency gap: the official blog post provided criteria, but the actual identification process is opaque.
• Potential chilling effect: researchers may hesitate to serve as reviewers for fear of inadvertently harming their own submissions.

The policy signals a stronger stance by ACL conferences toward review-quality enforcement, but it also underscores the urgent need for more transparent, community-driven reviewer accountability mechanisms.

Community Voices: Decisions Shared by Researchers

To capture the breadth of community sentiment, below is a comprehensive table compiling decision outcomes (OA = overall average reviewer score, Meta = meta-review score) shared publicly across Zhihu, Reddit, and X. The table covers all samples shared in the community discussions referenced here.
OA Scores (per reviewer) | Meta | Outcome | Track / Notes / User
4, 4, 3 | 4 | Main | Meta reviewer wrote a detailed essay, helped acceptance
3.5, 3.5, 2 | — | Main | Initially worried, accepted to main
2.67 (avg) | 3.5 | Main | Shared proudly (“unexpected”)
3.67 | 4 | Main | Confirmed traveling to Suzhou
3.33 (4, 3.5, 2.5) | 3 | Rejected | Author frustrated, “don’t understand decision”
3.0 | 3 | Rejected | Hoped for Findings, didn’t get in
3.0 | 3.5 | Main (short) | Track: multilinguality & language diversity; first-author undergrad
2.33 | 3.5 | Findings | Efficient NLP track
3.33 | 3.5 | Main | Efficient NLP track
3.5, 3.5, 2.5 | 2.5 | Findings | Meta review accused of copy-paste from weakest reviewer
3, 3.5, 4 | 3 | Main | Theme track
4, 3, 2 | 2.5 | Rejected | One review flagged as AI-generated; rebuttal ignored
4.5, 2.5, 2 | — | Rejected | Meta only two sentences
3.38 | 3.5 | Main | Rejected at ACL before; accepted at EMNLP
2, 3, 3 | 3 | Rejected | RepresentativeBed838
3.5, 3, 2.5 | 3.5 | Rejected | Author shocked
3, 3, 3 | 3 | Rejected | Multiple confirmations
5, 4, 3.5 | 4.5 | Main | Track: Dialogue and Interactive Systems
3.5, 4.5, 4 | 4 | Main | GlitteringEnd5311
3, 3.5, 3.5 | 3.5 | Main | Retrieval-Augmented LM track
2.5, 3, 3 | 3 | Findings | After rebuttal challenge; author reported meta reviewer
1.5, 3, 3 → rebuttal → 2.5, 3, 3.5 | 3.5 | Main | Initially borderline, improved after rebuttal
3.67 | 3 | Main | Computational Social Science / NLP for Social Good track
4, 3, 3 | 3 | Main | Low-resource track
3.5, 3.5, 3 | 3.5 | Main | Low-resource track
4, 3 | 3 | Findings | Author sad (“wish it was main”)
Overall 3.17 | 3 | Findings | JasraTheBland confirmation
Overall 3.17 | 3.5 | Main | AI Agents track
Overall 3.17 | 3 | Findings | AI Agents track
4, 3, 2 | 3.5 | Main | Responsible-Pie-5882
3.5 (avg) | 3.5 | Main | Few_Refrigerator8308
3, 3, 3.5 → rebuttal → 3.5, 3.5, 3.5 | 4.0 | Main | LLM Efficiency track
3.5, 2.5, 2.5 | 3 | Findings | FoxSuspicious7521
3, 3.5, 3.5 | 3.5 | Main | ConcernConscious4131 (paper 1)
2, 3, 3.5 | 3 | Rejected | ConcernConscious4131 (paper 2)
3, 3, 3 | 3 | Rejected | Ok-Dot125 confirmation
3.17 (approx) | 3.5 | Main | Old_Toe_6707 in AI Agents
3.17 (approx) | 3 | Findings | Slight_Armadillo_552 in AI Agents
3, 3, 3 | 3 | Rejected | Confirmed again by AdministrativeRub484
4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 (duplicate entry but reconfirmed)
3.5, 3.5, 3 | 3.5 | Main | breadwineandtits
3, 3, 3 | 3 | Accepted (Findings or Main unclear) | NeuralNet7 (saw camera-ready enabled)
2.5 (meta only) | 2.5 | Findings | Mentioned as borderline acceptance
3.0 | 3.0 | Findings | shahroz01, expected
4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 (explicit post)
3.5, 3.5, 2.5 | 2.5 | Findings | Practical_Pomelo_636
3, 3, 3 | 3 | Rejected | Multiple confirmations across threads
4, 3, 3 | 3 | Findings | LastRepair2290 (sad it wasn’t main)
3.5, 3, 2.5 | 3.5 | Rejected | Aromatic-Clue-2720
3, 3, 3.5 | 3.5 | Main | ConcernConscious4131
2, 3, 3 | 3 | Rejected | ConcernConscious4131
3, 3, 3 | 3 | Rejected | Ok-Dot125 again
3.5, 3.5, 3 | 3.5 | Main | Few_Refrigerator8308 second report
3.5, 3, 2.5 | 3.5 | Rejected | Aromatic-Clue-2720
4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 final confirmation
3.5, 3.5, 3 | 3.5 | Main | Reconfirmed across threads
3, 3, 3 | 3 | Rejected | Reported multiple times
2.5 (OA overall) | 3.0 | Findings | Outrageous-Lake-5569 reference

Patterns Emerging

From the collected outcomes, some patterns can be observed:

• Meta ≥ 3.5 often leads to Main acceptance (even when individual OA scores are mediocre, e.g., 2.67).
• Meta = 3 cases are unstable: some lead to Findings, others to Rejection, and in a few cases even Main.
• Meta < 3 almost always means rejection, with rare exceptions.
• Reviewer quality matters: multiple complaints mention meta-reviews simply copy-pasting from the weakest reviewer, undermining rebuttals.

(A toy illustration of the meta-score tendency appears at the end of this post.) This highlights the high variance in borderline cases and explains why so many authors felt frustrated or confused.

Conclusion: Lessons from EMNLP 2025

EMNLP 2025 brought both joy and heartbreak. With a Main acceptance rate of just over 22%, competition was fierce. The desk rejections tied to reviewer misconduct added an entirely new layer of controversy that will likely remain debated long after the conference. For researchers, the key takeaways are:

• Meta-review scores dominate: cultivate strong rebuttals and area chair engagement.
• Borderline cases are unpredictable: even a 3.5 meta may result in Findings instead of Main.
• Reviewer accountability is a double-edged sword: while improving review quality is necessary, policies that punish co-authors risk alienating the community.

As the field grows, the CL community must balance fairness, rigor, and transparency, a challenge as difficult as the NLP problems we study.
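To make the crowd-sourced pattern above concrete, here is a toy Python illustration of the reported meta-score tendencies. It is distilled from the anecdotes in the table, not from any official ACL/EMNLP decision rule, and the thresholds are approximate.

```python
# Rough heuristic distilled from the crowd-sourced EMNLP 2025 table above.
# An illustration of the reported pattern, NOT the actual decision rule.
def likely_outcome(meta_score: float) -> str:
    """Map a meta-review score to the outcome most often reported by authors."""
    if meta_score >= 3.5:
        return "Main (in most reports)"
    if meta_score >= 3.0:
        return "unstable: Findings, Reject, or occasionally Main"
    return "Reject (almost always)"

for meta in (4.0, 3.5, 3.0, 2.5):
    print(meta, "->", likely_outcome(meta))
```

Running it prints the tendency for a few example meta scores; actual outcomes clearly also depend on the track, the meta-review text, and AC judgment, as the borderline rows in the table show.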
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

    5 Topics
    30 Posts
    lelecao
The IEEE International Conference on Data Mining (ICDM) 2025, held in Washington, D.C., opened with an energetic and intellectually vibrant ceremony celebrating its role as the world’s premier research conference in data mining. This year’s edition is marked by strong global participation, rigorous scientific standards, programmatic innovations, and several historical firsts.

[image: 1763323340777-img_3593.jpg]

1. A Snapshot of Participation and Registration

ICDM 2025 has attracted remarkable worldwide interest, with 861 registered participants as of November 12. This year’s attendance reflects the conference’s expanding global reach and its reputation as a premier venue for data mining research. Registrations come from an impressively diverse set of countries: the largest groups of attendees hail from the United States, China, and Singapore, followed closely by strong participation from Austria, Korea, Slovenia, India, Germany, Japan, Spain, and Italy.

A detailed breakdown presented during the opening ceremony showed registration counts by country and region, distinguishing author from non-author participation. The United States leads by a substantial margin, demonstrating continued engagement from both academic and industry communities. Many other regions also showed healthy representation, highlighting the international strength and inclusiveness of ICDM.

2. Scientific Rigor: Review Process and Acceptance Standards

ICDM is known for maintaining one of the most rigorous review processes in computer science, and 2025 continues this tradition. According to the “Review Process” slide:

[image: 1763323809222-screenshot-2025-11-16-at-15.09.20.jpg]

• Every paper received at least 3 independent reviews.
• The Program Committee carried out the main review workload.
• Area Chairs (ACs) synthesized reviewer insights and made recommendations.
• Program Co-Chairs made the final acceptance decisions.

An important metric revealed this year: the initial AC recommendations corresponded to an acceptance rate of only 28%, which ended at 22.4% after careful re-examination. This exceptionally selective process underscores ICDM’s commitment to publishing only the most impactful and high-quality research. As emphasized in the opening remarks: “Number matters, yet quality matters more!”

3. Submissions and Presentations: A Global Research Effort

From the program statistics slide:

• 106 long presentations
• 70 short presentations

[image: 1763324030553-screenshot-2025-11-16-at-15.12.37.jpg]

The global author distribution revealed significant breadth:

• Total authors: 787
• First authors: 475

The slides list several per-country contributions, including Japan (5 total authors, 2 first authors) and New Zealand (2 authors). When its member countries are aggregated, the European Union ranked 3rd in number of submissions, a noteworthy position given the conference’s worldwide reach. These numbers highlight the depth and diversity of contributions as well as the geographical richness of the data mining community.

4. Innovations in ICDM 2025: A Historic First

This year introduces major innovations, marking historic shifts in how ICDM organizes and disseminates research.

4.1 First Time in History: Official Video Presentations

For the first time since ICDM’s inception, video presentations are formally allowed as part of the technical program. This enables broader accessibility and allows contributors facing geographic, logistical, or visa challenges to showcase their work.
4.2 A Hybrid-Informed Program Structure

As presented in the “Innovations” slides:

[image: 1763324500682-screenshot-2025-11-16-at-15.19.23.jpg]

The committee attempted to cluster in-person and video presentations when designing the schedule. However, when clustering conflicted with program quality or coherence, in-person and virtual presentations were treated equally. The main program emphasized in-person talks, but video presentations remain fully integrated from a scientific standpoint.

4.3 Prime-Time Interactive Sessions

Innovations also include:

• Prime-time discussion sessions with top speakers such as Aidong Zhang, John Quackenbush, Shashi Shekhar, and Jure Leskovec.
• A format designed to foster deeper engagement across modalities (in-person and virtual).

These changes reflect ICDM’s push toward inclusivity, interactivity, and resilience in a global research landscape.

5. Technical Program Highlights

[image: 1763324796391-screenshot-2025-11-16-at-15.25.49.jpg]

The Undergraduate and High School Symposium (73 papers) signals ICDM’s strategic investment in nurturing early talent and expanding the pipeline of future data mining researchers. The main program includes dozens of specialized sessions, reinforcing ICDM’s commitment to covering both foundational and cutting-edge topics across AI, machine learning, graph mining, time-series analytics, and interdisciplinary applications.

6. The Blue Sky Track: A Space for Future Imagination

ICDM 2025 continues the Blue Sky Track, dedicated to:

• Early-stage, exploratory, high-risk ideas
• Not-yet-completed papers
• “Blue-sky” conceptual innovations that challenge conventions

This track encourages the community to explore bold new research directions without the constraints of full experimental maturity. It acts as a lighthouse for future breakthroughs.

A Global Celebration of Data Mining Excellence

ICDM 2025 opens with strong momentum:

• Hundreds of high-quality submissions
• A global author pool spanning every continent
• A rigorous multi-stage review process
• First-ever adoption of video presentations
• Program innovations aimed at inclusiveness and meaningful interaction
• A commitment to both established scholars and emerging young researchers

As the opening ceremony emphasized, ICDM 2025 is not just a conference: it is a community milestone, a gathering of the world’s leading data miners pushing the boundaries of what data-driven knowledge discovery can achieve.
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

    1 Topic
    2 Posts
    root
    It seems CCF is revising the list again: https://www.ccf.org.cn/Academic_Evaluation/By_category/2025-05-09/841985.shtml
  • HCI, CSCW, UbiComp, UIST, EuroVis and IEEE VIS

    2 Topics
    3 Posts
    Joanne
[image: 1750758497155-fa715fd6-ed5a-44be-8c8d-84f1645fac47-image.png]

CHI remains the flagship venue in the HCI field. It draws researchers from diverse disciplines, consistently puts humans at the center, and amplifies research impact through high-quality papers, compelling keynotes, and extensive doctoral consortia. Yet CHI isn’t the entirety of the HCI landscape; it’s just the heart of a much broader ecosystem. Here’s a quick-look field guide.

Six flagship international HCI conferences

• UIST: hardware and novel interface tech, with a demo-heavy culture. Ideal for system and device researchers. https://uist.acm.org/2025/ [image: 1750757345992-d6b2b397-f753-40fd-b2b7-2410ed6556b9-image.png]
• SIGGRAPH: graphics core plus dazzling VR/AR and 3-D interaction showcases. Ideal for graphics, visual interaction, and art-tech hybrids. https://www.siggraph.org/ [image: 1750757560460-6657b0b8-06d3-4c27-bc03-6f449a03b7c2-image.png]
• MobileHCI: interaction in mobile, wearable, and ubiquitous contexts. Ideal for Ubicomp-oriented, real-world applications. https://mobilehci.acm.org/2024/ [image: 1750757628685-22f47458-89b5-4f9c-8718-ee89249c1e49-image.png]
• CSCW: collaboration, remote work, and social media at scale. Ideal for socio-technical and social computing teams. https://cscw.acm.org/2025/ [image: 1750757750339-ea17f345-83b9-47f3-af41-6623bdf45eab-image.png]
• DIS: creative, cultural, and critical interaction design. Ideal for UX, speculative, and experience-driven scholars. https://dis.acm.org/2025/ [image: 1750757796645-b1212781-047f-4afc-89a4-e07691e25225-image.png]
• CHI: broadest scope, human-centred ethos, highest brand value. Ideal for any HCI subfield. https://chi2026.acm.org/ [image: 1750757827999-a2b6e621-cbbb-428c-929c-97d243165d19-image.png]

Four high-impact HCI journals

• ACM TOCHI: major theoretical and methodological breakthroughs. Good for large, mature studies needing depth. https://dl.acm.org/journal/tochi
• IJHCS (International Journal of Human-Computer Studies): cognition, innovation, and UX. Good for theory blended with applications. https://www.sciencedirect.com/journal/international-journal-of-human-computer-studies
• CHB (Computers in Human Behavior): psychological and behavioural angles on HCI. Good for quant-heavy user studies and surveys. https://www.sciencedirect.com/journal/computers-in-human-behavior
• IJHCI (International Journal of Human-Computer Interaction): cognitive, creative, and health-related themes. Good for breadth from conceptual to applied work. https://www.tandfonline.com/journals/hihc20

Conference vs. journal: choosing the right vehicle

Conferences prize speed: decision to publication can be mere months, papers are concise, and novelty is king. Journals prize depth: multiple revision rounds, no strict length cap, and a focus on long-term influence.

When a conference is smarter:
• Fresh prototypes or phenomena that need rapid peer feedback
• Face-to-face networking with collaborators and recruiters
• Time-sensitive results where a decision within months matters

When a journal pays off:
• Data and theory fully polished and deserving full exposition
• Citation slow burn for tenure or promotion dossiers
• Readiness for iterative reviews to reach an authoritative version

Take-away

If CHI is the main stage, then UIST, SIGGRAPH, MobileHCI, CSCW, and DIS are the satellite arenas, while TOCHI, IJHCS, CHB, and IJHCI serve as deep archives. Match your study’s maturity, urgency, and career goals to the venue, follow the links above, and let the best audience discover your work. Happy submitting!
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

    3 Topics
    6 Posts
    root
    I looked at the submission data for the past two years. ICASSP 2024 had more than 5,000 valid submissions; ICASSP 2025 (held in India) had over 7,500 valid submissions. [image: 1758397284247-icassp2025-submission-number-email.jpeg]
  • Anything around peer review for conferences such as ISCA, FAST, ASPLOS, EuroSys, HPCA, SIGMETRICS, FPGA and MICRO.

    1 Topic
    2 Posts
    root
    R.I.P. USENIX ATC ...