
Peer Review in Computer Science: good, bad & broken

Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

This category can be followed from the open social web via the handle cs-peer-review-general@forum.cspaper.org

118 Topics 324 Posts

Subcategories


  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    58 Topics
    196 Posts
    root
    TL;DR: An active ICLR reviewer lays out a straightforward reading workflow and a pragmatic scoring lens: novelty × depth × presentation × reproducibility ≈ Weak Accept or above. Fairness of comparisons, solid ablations, and crystal-clear method writing matter more than squeezing out an extra 0.1% on a leaderboard. Keep theory correct (even if simple) and make your contribution easy to reproduce.

[image: 1761686290262-screenshot-2025-10-28-at-22.17.53.png] (picture generated by Nano Banana)

The Reviewer’s Reading Workflow (What Gets Attention First)
- Intro: deep read. The reviewer “locks onto” the Introduction to grasp the motivation, what the work does, and how it does it. If this part is murky, the rest of the paper struggles.
- Related Work / Conclusion / Appendix: quick scan. Unless there are red flags, these sections are skimmed; over-selling in the Conclusion doesn’t help. Equations and code get a minor check and pass if there are no obvious errors.
- Method & Pipeline: close read. The heart of the review: ablations and fair comparisons. SOTA claims are judged by the fairness and rigor of the comparisons, not by the numbers alone.

Author takeaway: if you have limited time, invest it in (i) sharpening the Intro, (ii) making the Method section surgical and reproducible, and (iii) running honest, well-designed ablations.

The Scoring Lens: A Multiplicative Heuristic
The reviewer’s personal rule of thumb: novelty (innovation) × depth × presentation (clarity) × reproducibility ≈ Weak Accept or better. A zero in any factor sinks the product; solid but unflashy papers can still land well if the logic is airtight. (A toy sketch of this heuristic follows at the end of this post.)

What that means for authors
- Logical coherence beats “blind SOTA.” Clear motivation, a reasonable method, trustworthy experiments, and crisp conclusions win. Even an “A+B”-style idea can score well if the loop is tight and defensible.
- Clarity over flourish. Write precise, unembellished English; avoid undefined acronyms, logic jumps, and hype.
- Reproducibility with fair baselines. Report sensible baselines, reliable metrics, and fair setups; include ablations that truly test your claims.
- Theory: keep it correct (even if simple). No symbol errors or contradictory assumptions; ensure consistency between theory and implementation.
- Add a “small spark.” An insightful analysis, interpretability angle, or deployment-oriented observation can tip borderline papers upward.

Timeline Notes & Process Frictions (ICLR 2026)
- Reviewer deadline: Nov 1, 19:59 (Anywhere on Earth). Historically, last-minute portal issues can happen, so reviewers are advised to submit slightly early.
- Score release: ~Nov 12, early morning; the rebuttal is slated to begin around Dec 4. Authors should plan experiments and response writing accordingly.
- Load & pressure: the current paper “sharding/assignment” feels accurate, but reviewers report a compressed window (e.g., five papers in ~two weeks) and competing end-of-year deadlines (CVPR is nearby), which raises the bar for clarity and reviewer efficiency.

A Reviewer-Aligned Author Checklist
Use this one-page pre-submission pass aligned to the reviewer’s habits:
1) Introduction (most scrutinized). The problem and gap are explicit in the first 2–3 paragraphs; contribution bullets are specific, testable, and map to experiments.
2) Method (deep dive). A minimal, self-contained description that lets others re-implement; ablations isolate each design choice; fair SOTA comparisons (same data, budgets, metrics).
3) Reproducibility. Datasets, hyperparameters, seeds, and training schedules summarized; any code or equations checked for obvious inconsistencies.
4) Writing quality. Trim hype; define acronyms on first use; remove logic jumps; ensure figures directly support claims.
5) Theory sanity. No symbol misuse; assumptions are stated; the math aligns with the implementation.
6) “Small spark.” Provide one insightful analysis (e.g., failure modes, sensitivity, or interpretability) that teaches the reader something beyond the main metric.

Common Pitfalls (and Fixes)
- Over-indexed on SOTA, under-indexed on fairness. Fix: match training budgets, report variance, and justify metric choices; show when your method doesn’t win and why.
- Beautiful equations, shaky implementation. Fix: cross-check symbols, units, and pseudo-code; verify that the method implemented is the method claimed.
- Hype-driven writing. Fix: replace superlatives with evidence; foreground ablations and clear limitations.

Remember: make it easy to believe you. If the motivation is lucid, the method is careful, the comparisons are fair, and the results are reproducible, you don’t need a flashy SOTA number to clear the bar. That’s not just “peer review hygiene”; it’s how good ideas travel.
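Below is a minimal, purely illustrative Python sketch of the multiplicative heuristic described in the post above. The 0–1 factor scale and the acceptance threshold are assumptions for illustration, not anything ICLR or the reviewer prescribes; the point is simply that a zero in any factor sinks the product.

```python
# Toy illustration of the multiplicative scoring heuristic from the post above.
# Factor ranges and the threshold are assumed for illustration only.

def heuristic_score(novelty, depth, presentation, reproducibility):
    """Each factor scored in [0, 1]; a zero in any factor sinks the whole product."""
    return novelty * depth * presentation * reproducibility


def verdict(score, weak_accept_threshold=0.25):
    # The threshold is arbitrary; the interesting part is the multiplicative behaviour.
    return "Weak Accept or better" if score >= weak_accept_threshold else "Below the bar"


# A solid but unflashy paper with no weak factor clears the bar...
print(verdict(heuristic_score(0.7, 0.8, 0.8, 0.9)))   # product ~0.40
# ...while a single zero (e.g., irreproducible results) sinks everything.
print(verdict(heuristic_score(0.9, 0.9, 0.9, 0.0)))   # product = 0.0
```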
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    10 Topics
    17 Posts
    root
    Starting with CVPR 2026, all paper authors will be required to report their compute usage through a new initiative known as the Compute Resource Form (CRF). Official link: CVPR 2026 Compute Reporting.

This new policy tackles one of the most persistent structural issues in AI research — the invisible cost of computation. The goal is to improve transparency, reproducibility, and fairness by making clear how much computational power each published work consumes. As some researchers have noted, this move “clearly separates industrial from academic research” and could promote a more open, equitable research environment — provided the community uses these disclosures responsibly.

What Authors Must Now Disclose (CRF Requirements)
Every submission must include a Compute Resource Form (CRF) detailing:
- GPU and CPU usage
- Total training time and FLOPs
- Hardware specs (GPU type and count)
- Model efficiency metrics
- Key results vs. strongest baselines
Estimated time to complete: 10–15 minutes. Submission method: upload with supplementary materials.
The CRF is mandatory but will not affect acceptance decisions. It will be reviewed by an independent Compute Resource Reporting Committee, not by reviewers or area chairs. The required section covers the minimum compute needed to reproduce core results, while optional fields allow for additional details like development logs or ablation trials.

Incentives for Transparency and Efficiency
To foster more responsible research, CVPR 2026 will introduce three recognition awards:
🟩 Efficient CVPR Badge – high-efficiency papers within their category.
🟨 CVPR Compute Gold Star – exceptional compute efficiency.
🟦 CVPR Compute Transparency Champion Award – high-quality anonymized logs (e.g., via W&B).
Award criteria include compute-per-performance-gain ratios, compared across similar tasks (a toy sketch of one such ratio follows at the end of this post). Recognized papers will be spotlighted at the conference and featured on the official website.

🧾 Practical Notes on CRF Submission
Before filling out your CRF:
- Review the official pre-filled example on the CVPR site.
- Describe both resource usage and achieved performance.
- You may optionally submit anonymized W&B logs using CVPR’s anonymization tool (this can improve eligibility for recognition awards).
Important: do not flatten or print the PDF, so that the embedded metadata needed for automated processing is retained.

Why This Policy Is a Big Deal (and What Research Tells Us)
A recent empirical study, “The Role of Computing Resources in Publishing Foundation Model Research” (Hao et al., 2025, MIT–Notre Dame–Cornell), provides the most comprehensive data-driven evidence yet on why compute reporting matters.

[image: 1761685352451-screenshot-2025-10-28-at-22.01.33.jpg]

Key findings from Hao et al. (2025):
- The team analyzed 6,517 foundation model papers (2022–2024) across eight top CS conferences and surveyed 229 first authors.
- Only 16.5% of papers disclosed GPU usage, and just 12.8% mentioned inference time.
- GPU access strongly correlated with both acceptance rates and citation counts, especially at venues like NeurIPS and ICML.
- TFLOP usage, rather than GPU count, showed the strongest correlation with scientific impact — highlighting throughput, not just hardware quantity.
- Despite concerns, no evidence suggested that compute reporting harmed acceptance chances; transparency was mostly absent due to missing checklist requirements.
- Industrial institutions (e.g., Google, Microsoft) had disproportionate representation in high-compute papers, while academia led in total paper count (≈4,850 vs. 1,425), revealing an emerging compute divide.
- The top GPUs used were NVIDIA A100 and H100, with pretraining tasks consuming the largest compute budgets.
- Encouragingly, open-weight models (LLaMA, Flan-T5, BERT) dominated citations, signaling that accessibility and openness can partially offset compute inequality.

🧩 Why It Matters for Peer Review
Hao et al. argue that compute disclosure must become a standardized practice — not to punish large-scale research, but to make the field more accountable and equitable. Their data show that without transparency, it is impossible to assess whether scientific merit or resource abundance drives success in publication and citation metrics. CVPR’s CRF policy directly addresses this gap. If successful, it could serve as a benchmark for “ethical benchmarking”, where efficiency, reproducibility, and accessibility are weighed alongside performance.

Toward a More Accountable Peer Review System
By mandating compute disclosure, CVPR 2026 may reshape what we value in research:
- Fairness: distinguish between industrial-scale and academic-scale experiments transparently.
- Accountability: allow reviewers and the public to interpret results relative to compute cost.
- Sustainability: incentivize efficiency and shared infrastructure over brute-force scaling.
- Equity: enable policy and funding bodies to identify under-resourced researchers.
The broader implication: peer review should evolve to recognize scientific creativity per unit of compute, not just leaderboard performance.

🧭 Discussion Prompt
This is a major cultural shift for computer vision — perhaps for computer science as a whole. If CVPR’s model succeeds, similar standards could emerge at NeurIPS, ICML, or ACL, creating a unified ecosystem for transparent AI research. Questions for discussion:
- Should peer reviewers see compute disclosures during evaluation, or only post-acceptance?
- Could compute-aware reviewing unintentionally bias against resource-intensive but necessary research?
- How can conferences ensure privacy and fairness in reporting sensitive compute data?

References:
- Hao, Y., Huang, Y., Zhang, H., Zhao, C., Liang, Z., Liang, P. P., et al. (2025). The Role of Computing Resources in Publishing Foundation Model Research. arXiv:2510.13621.
- CVPR 2026 Compute Reporting Initiative. https://cvpr.thecvf.com/Conferences/2026/ComputeReporting
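The post mentions compute-per-performance-gain ratios as an award criterion without defining them; the sketch below shows one plausible reading under stated assumptions. The function name, the use of total training FLOPs, and the choice of accuracy points over a baseline are all hypothetical illustrations, not the CRF's actual formula, and the numbers are invented.

```python
# Hypothetical sketch of a compute-per-performance-gain ratio.
# CVPR's CRF does not prescribe this exact formula; the metric choice and
# all numbers below are assumptions made for illustration.

def compute_per_gain(train_flops, metric, baseline_metric):
    """Training FLOPs spent per point of improvement over the strongest baseline."""
    gain = metric - baseline_metric
    if gain <= 0:
        return float("inf")  # no improvement over the baseline: ratio is not meaningful
    return train_flops / gain


# Method A: 2.0e21 FLOPs for +1.5 accuracy points; Method B: 5.0e20 FLOPs for +0.9 points.
a = compute_per_gain(2.0e21, 86.5, 85.0)
b = compute_per_gain(5.0e20, 85.9, 85.0)
print(f"A: {a:.2e} FLOPs/point, B: {b:.2e} FLOPs/point")  # B is the more compute-efficient result
```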
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    12 Topics
    26 Posts
    Sylvia
    The final decisions for EMNLP 2025 have been released, sparking a wave of reactions across research communities on social media such as Zhihu and Reddit. Beyond the excitement of acceptances and the disappointment of rejections, this cycle is marked by a remarkable policy twist: 82 papers were desk-rejected because at least one author had been identified as an irresponsible reviewer. This article provides an in-depth look at the decision process, the broader community responses, and a comprehensive table of decision outcomes shared publicly by researchers.

[image: 1755763433631-screenshot-2025-08-21-at-10.02.47.jpg]

Key Announcements from the Decision Letter
The program chairs’ decision email highlighted several important points:

Acceptance statistics
- 8,174 submissions received.
- 22.16% accepted to the Main Conference.
- 17.35% accepted as Findings.
- 82 papers desk-rejected due to irresponsible-reviewer identification.

Desk rejections linked to reviewer misconduct
A novel and controversial policy: authors who were flagged as irresponsible reviewers had their own papers automatically desk-rejected. The official blog post elaborates on what qualifies as irresponsible reviewing (e.g., extremely short, low-quality, or AI-generated reviews).

Camera-ready submissions
- Deadline: September 19, 2025.
- Authors must fill in the Responsible NLP checklist, which will be published in the ACL Anthology alongside the paper.
- Allowed: one extra page for content, one page for limitations (mandatory), an optional ethics section, unlimited references.

Presentation and logistics
- Papers must be presented either in person or virtually to be included in the proceedings.
- Oral vs. poster presentation decisions will be finalized after camera-ready submission.
- Registration deadline: October 3 (at least one author), with early in-person registration by October 6 due to Chinese government approval processes (the conference will be in Suzhou).

The Desk Rejection Controversy: 82 Papers Removed
This year’s 82 desk rejections triggered heated debates. While ensuring reviewer accountability is laudable, punishing co-authors for the actions of a single irresponsible reviewer is unprecedented and raises questions about fairness:
- Collective punishment? Innocent co-authors had their work invalidated.
- Transparency gap: the official blog post provided criteria, but the actual identification process is opaque.
- Potential chilling effect: researchers may hesitate to serve as reviewers for fear of inadvertently harming their own submissions.
The policy signals a stronger stance by ACL conferences toward review-quality enforcement, but it also underscores the urgent need for more transparent, community-driven reviewer accountability mechanisms.

Community Voices: Decisions Shared by Researchers
To capture the breadth of community sentiment, below is a comprehensive table compiling decision outcomes (OA = overall average reviewer score, Meta = meta-review score) shared publicly across Zhihu, Reddit, and X. The table is exhaustive with respect to the outcomes shared in those community discussions.

| OA Scores (per reviewer) | Meta | Outcome | Track / Notes / User |
| --- | --- | --- | --- |
| 4, 4, 3 | 4 | Main | Meta reviewer wrote a detailed essay, helped acceptance |
| 3.5, 3.5, 2 | — | Main | Initially worried, accepted to main |
| 2.67 (avg) | 3.5 | Main | Shared proudly (“unexpected”) |
| 3.67 | 4 | Main | Confirmed traveling to Suzhou |
| 3.33 (4, 3.5, 2.5) | 3 | Rejected | Author frustrated, “don’t understand decision” |
| 3.0 | 3 | Rejected | Hoped for Findings, didn’t get in |
| 3.0 | 3.5 | Main (short) | Track: multilinguality & language diversity; first-author undergrad |
| 2.33 | 3.5 | Findings | Efficient NLP track |
| 3.33 | 3.5 | Main | Efficient NLP track |
| 3.5, 3.5, 2.5 | 2.5 | Findings | Meta review accused of copy-paste from weakest reviewer |
| 3, 3.5, 4 | 3 | Main | Theme track |
| 4, 3, 2 | 2.5 | Rejected | One review flagged as AI-generated; rebuttal ignored |
| 4.5, 2.5, 2 | — | Rejected | Meta only two sentences |
| 3.38 | 3.5 | Main | Rejected at ACL before; accepted at EMNLP |
| 2, 3, 3 | 3 | Rejected | RepresentativeBed838 |
| 3.5, 3, 2.5 | 3.5 | Rejected | Author shocked |
| 3, 3, 3 | 3 | Rejected | Multiple confirmations |
| 5, 4, 3.5 | 4.5 | Main | Track: Dialogue and Interactive Systems |
| 3.5, 4.5, 4 | 4 | Main | GlitteringEnd5311 |
| 3, 3.5, 3.5 | 3.5 | Main | Retrieval-Augmented LM track |
| 2.5, 3, 3 | 3 | Findings | After rebuttal challenge; author reported meta reviewer |
| 1.5, 3, 3 → rebuttal → 2.5, 3, 3.5 | 3.5 | Main | Initially borderline, improved after rebuttal |
| 3.67 | 3 | Main | Computational Social Science / NLP for Social Good track |
| 4, 3, 3 | 3 | Main | Low-resource track |
| 3.5, 3.5, 3 | 3.5 | Main | Low-resource track |
| 4, 3 | 3 | Findings | Author sad (“wish it was main”) |
| Overall 3.17 | 3 | Findings | JasraTheBland confirmation |
| Overall 3.17 | 3.5 | Main | AI Agents track |
| Overall 3.17 | 3 | Findings | AI Agents track |
| 4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 |
| 3.5 (avg) | 3.5 | Main | Few_Refrigerator8308 |
| 3, 3, 3.5 → rebuttal → 3.5, 3.5, 3.5 | 4.0 | Main | LLM Efficiency track |
| 3.5, 2.5, 2.5 | 3 | Findings | FoxSuspicious7521 |
| 3, 3.5, 3.5 | 3.5 | Main | ConcernConscious4131 (paper 1) |
| 2, 3, 3.5 | 3 | Rejected | ConcernConscious4131 (paper 2) |
| 3, 3, 3 | 3 | Rejected | Ok-Dot125 confirmation |
| 3.17 (approx) | 3.5 | Main | Old_Toe_6707 in AI Agents |
| 3.17 (approx) | 3 | Findings | Slight_Armadillo_552 in AI Agents |
| 3, 3, 3 | 3 | Rejected | Confirmed again by AdministrativeRub484 |
| 4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 (duplicate entry but reconfirmed) |
| 3.5, 3.5, 3 | 3.5 | Main | breadwineandtits |
| 3, 3, 3 | 3 | Accepted (Findings or Main unclear) | NeuralNet7 (saw camera-ready enabled) |
| 2.5 (meta only) | 2.5 | Findings | Mentioned as borderline acceptance |
| 3.0 | 3.0 | Findings | shahroz01, expected |
| 4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 (explicit post) |
| 3.5, 3.5, 2.5 | 2.5 | Findings | Practical_Pomelo_636 |
| 3, 3, 3 | 3 | Reject | Multiple confirmations across threads |
| 4, 3, 3 | 3 | Findings | LastRepair2290 (sad it wasn’t main) |
| 3.5, 3, 2.5 | 3.5 | Rejected | Aromatic-Clue-2720 |
| 3, 3, 3.5 | 3.5 | Main | ConcernConscious4131 |
| 2, 3, 3 | 3 | Reject | ConcernConscious4131 |
| 3, 3, 3 | 3 | Reject | Ok-Dot125 again |
| 3.5, 3.5, 3 | 3.5 | Main | Few_Refrigerator8308 second report |
| 3.5, 3, 2.5 | 3.5 | Rejected | Aromatic-Clue-2720 |
| 4, 3, 2 | 3.5 | Main | Responsible-Pie-5882 final confirmation |
| 3.5, 3.5, 3 | 3.5 | Main | Reconfirmed across threads |
| 3, 3, 3 | 3 | Rejected | Reported multiple times |
| 2.5 (OA overall) | 3.0 | Findings | Outrageous-Lake-5569 reference |

Patterns Emerging
From the collected outcomes, some patterns can be observed (a rough sketch of this tendency follows at the end of this post):
- Meta ≥ 3.5 often leads to Main acceptance, even when individual OA scores are mediocre (e.g., 2.67).
- Meta = 3 cases are unstable: some lead to Findings, others to Rejection, and a few even to Main.
- Meta < 3 almost always means rejection, with rare exceptions.
- Reviewer quality matters: multiple complaints mention meta-reviews simply copy-pasting from the weakest reviewer, undermining rebuttals.
This highlights the high variance in borderline cases and explains why so many authors felt frustrated or confused.

Conclusion: Lessons from EMNLP 2025
EMNLP 2025 brought both joy and heartbreak. With a Main acceptance rate of just over 22%, competition was fierce. The desk rejections tied to reviewer misconduct added an entirely new layer of controversy that will likely remain debated long after the conference. For researchers, the key takeaways are:
- Meta-review scores dominate: cultivate strong rebuttals and area chair engagement.
- Borderline cases are unpredictable: even a 3.5 meta may result in Findings instead of Main.
- Reviewer accountability is a double-edged sword: while improving review quality is necessary, policies that punish co-authors risk alienating the community.
As the field grows, the CL community must balance fairness, rigor, and transparency—a challenge as difficult as the NLP problems we study.
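The sketch below encodes the community-observed tendency from the "Patterns Emerging" list as a small Python function. It is an assumption-laden summary of the shared scores above, not the official EMNLP decision rule; the thresholds simply restate the observed pattern.

```python
# Rough encoding of the pattern observed in the shared EMNLP 2025 outcomes.
# This is NOT an official decision rule, only the tendency visible in the table above.

def likely_outcome(meta_score):
    """Map a meta-review score to the outcome most often reported by the community."""
    if meta_score >= 3.5:
        return "often Main, even with mediocre individual OA scores"
    if meta_score >= 3.0:
        return "unstable: Findings, Rejection, or occasionally Main"
    return "almost always Rejected, with rare exceptions"


for meta in (4.0, 3.5, 3.0, 2.5):
    print(meta, "->", likely_outcome(meta))
```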
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

    4 Topics
    29 Posts
    Joanne
    [image: 1753375505199-866c4b66-8902-4e99-8065-60d1806309a6-vldb2026.png]

The International Conference on Very Large Data Bases (VLDB) is a premier annual forum for data management and scalable data science research, bringing together academics, industry engineers, practitioners and users. VLDB 2026 will feature research talks, keynotes, panels, tutorials, demonstrations, industrial sessions and workshops that span the full spectrum of information management topics, from system architecture and theory to large-scale experimentation and demanding real-world applications.

Key areas of interest for its companion journal PVLDB include, but are not limited to: data mining and analytics; data privacy and security; database engines; database performance and manageability; distributed database systems; graph and network data; information integration and data quality; languages; machine learning / AI and databases; novel database architectures; provenance and workflows; specialized and domain-specific data management; text and semi-structured data; and user interfaces.

The 52nd International Conference on Very Large Data Bases (VLDB 2026) runs 31 Aug – 4 Sep 2026 in Boston, MA, USA. Peer review is handled via Microsoft’s Conference Management Toolkit (CMT). The submission channel will be PVLDB Vol 19 (rolling research track), with General Chairs Angela Bonifati (Lyon 1 University & IUF, France) and Mirek Riedewald (Northeastern University, USA).

Rolling submission calendar (PVLDB Vol 19)

| Phase | Recurring date* | Notes |
| --- | --- | --- |
| Submissions open | 20th of the previous month | CMT site opens |
| Paper deadline | 1st of each month (Apr 1 2025 → Mar 1 2026) | 17:00 PT hard cut-off |
| Notification / initial reviews | 15th of the following month | Accept / Major Revision / Reject |
| Revision due | ≤ 2.5 months later (1st of the third month) | Single-round revision |
| Camera-ready instructions | 5th of the month after acceptance | Sent to accepted papers |
| Final cut-off for VLDB 2026 | 1 Jun 2026 revision deadline | Later acceptances roll to VLDB 2027 |

*See the official CFP for the full calendar.

Acceptance statistics (research track)

| Year | Submissions | Accepted | Rate |
| --- | --- | --- | --- |
| 2022 | 976 | 265 | 27.15% |
| 2021 | 882 | 212 | 24% |
| 2020 | 827 | 207 | 25.03% |
| 2019 | 677 | 128 | 18.9% |
| 2013 | 559 | 127 | 22.7% |
| 2012 | 659 | 134 | 20.3% |
| 2011 | 553 | 100 | 18.1% |

Acceptance has ranged between roughly 18% and 27% in the PVLDB era. Rolling monthly deadlines have increased submission volume while maintaining selectivity.

Emerging research themes (2025–2026)
- Vector databases & retrieval-augmented LMs
- Hardware / software co-design for LLM workloads
- Scalable graph management & analytics
- Multimodal querying & knowledge-rich search with LLMs

Submission checklist
- Use the official PVLDB Vol 19 LaTeX/Word template.
- Declare all conflicts of interest in CMT.
- Provide an artifact URL for reproducibility.
- Submit early (before Jan 2026) to leave revision headroom.
- Ensure at least one author registers to present in Boston (or via the hybrid option).

Key links
- Main site: https://www.vldb.org/2026/
- Research-track CFP & important dates: https://www.vldb.org/2026/call-for-research-track.html
- PVLDB Vol 19 submission guidelines: https://www.vldb.org/pvldb/volumes/19/submission/

Draft early, align your work with the vector and LLM data-system wave, and shine in Boston!
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

    1 Topic
    2 Posts
    root
    It seems CCF is revising the list again: https://www.ccf.org.cn/Academic_Evaluation/By_category/2025-05-09/841985.shtml
  • HCI, CSCW, UbiComp, UIST, EuroVis and IEEE VIS

    2 Topics
    3 Posts
    Joanne
    [image: 1750758497155-fa715fd6-ed5a-44be-8c8d-84f1645fac47-image.png]

CHI remains the flagship venue in the HCI field. It draws researchers from diverse disciplines, consistently puts humans at the center, and amplifies research impact through high-quality papers, compelling keynotes, and extensive doctoral consortia. Yet CHI isn’t the entirety of the HCI landscape; it’s just the heart of a much broader ecosystem. Here’s a quick-look field guide.

Six flagship international HCI conferences

| Acronym | What makes it shine | Ideal authors | Home page |
| --- | --- | --- | --- |
| UIST | Hardware & novel interface tech; demo-heavy culture | System / device researchers | https://uist.acm.org/2025/ |
| SIGGRAPH | Graphics core plus dazzling VR/AR & 3-D interaction showcases | Graphics, visual interaction & art-tech hybrids | https://www.siggraph.org/ |
| MobileHCI | Interaction in mobile, wearable & ubiquitous contexts | UbiComp-oriented, real-world applications | https://mobilehci.acm.org/2024/ |
| CSCW | Collaboration, remote work & social media at scale | Socio-technical & social computing teams | https://cscw.acm.org/2025/ |
| DIS | Creative, cultural & critical interaction design | UX, speculative & experience-driven scholars | https://dis.acm.org/2025/ |
| CHI | Broadest scope, human-centred ethos, highest brand value | Any HCI subfield | https://chi2026.acm.org/ |

Four high-impact HCI journals

| Journal | Focus | Good for | Home page |
| --- | --- | --- | --- |
| ACM TOCHI | Major theoretical / methodological breakthroughs | Large, mature studies needing depth | https://dl.acm.org/journal/tochi |
| IJHCS (International Journal of Human-Computer Studies) | Cognition → innovation → UX | Theory blended with applications | https://www.sciencedirect.com/journal/international-journal-of-human-computer-studies |
| CHB (Computers in Human Behavior) | Psychological & behavioural angles on HCI | Quant-heavy user studies & surveys | https://www.sciencedirect.com/journal/computers-in-human-behavior |
| IJHCI (International Journal of Human-Computer Interaction) | Cognitive, creative, health-related themes | Breadth from conceptual to applied work | https://www.tandfonline.com/journals/hihc20 |

Conference vs. journal: choosing the right vehicle
Conferences prize speed: decision to publication can take mere months, papers are concise, and novelty is king. Journals prize depth: multiple revision rounds, no strict length cap, and a focus on long-term influence.

🧪 When a conference is smarter
- Fresh prototypes or phenomena that need rapid peer feedback
- Face-to-face networking with collaborators and recruiters
- Time-sensitive results where a decision within months matters

🧭 When a journal pays off
- Data and theory fully polished and deserving full exposition
- Citation slow burn for tenure or promotion dossiers
- Ready for iterative reviews to reach an authoritative version

Take-away
If CHI is the main stage, UIST, SIGGRAPH, MobileHCI, CSCW & DIS are the satellite arenas, and TOCHI, IJHCS, CHB & IJHCI serve as deep archives. Match your study’s maturity, urgency and career goals to the venue, follow the links above, and let the best audience discover your work. Happy submitting!
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

    3 Topics
    6 Posts
    root
    Looked at the submission data for the past two years: ICASSP 2024 had more than 5,000 valid submissions, and ICASSP 2025 (held in India) had over 7,500. [image: 1758397284247-icassp2025-submission-number-email.jpeg]
  • Anything around peer review for conferences such as ISCA, FAST, ASPLOS, EuroSys, HPCA, SIGMETRICS, FPGA and MICRO.

    1 Topic
    2 Posts
    root
    R.I.P. USENIX ATC ...