The Hidden Cost of AI Research — and Why CVPR 2026’s Compute Disclosure Policy Matters

Tags: cvpr2026, icml, computing resource, foundation model, peer review, disclose, transparency, reproducibility, fairness
root wrote:

    Starting with CVPR 2026, all paper authors will be required to report their compute usage via a new Compute Resource Form (CRF).

    🔗 Official Link: CVPR 2026 Compute Reporting

    This policy tackles one of the most persistent structural issues in AI research: the invisible cost of computation.
    The goal is to improve transparency, reproducibility, and fairness by making clear how much computational power each published work consumes.

    As some researchers have noted, this move “clearly separates industrial from academic research” and could promote a more open, equitable research environment — provided the community uses these disclosures responsibly.


    ⚙️ What Authors Must Now Disclose (CRF Requirements)

    Every submission must include a Compute Resource Form (CRF) detailing:

    • GPU and CPU usage
    • Total training time and FLOPs
    • Hardware specs (GPU type and count)
    • Model efficiency metrics
    • Key results vs. strongest baselines

    🕒 Estimated time to complete: 10–15 minutes
    📄 Submission method: Upload with supplementary materials

    The CRF is mandatory, but will not affect acceptance decisions.
    It will be reviewed by an independent Compute Resource Reporting Committee, not by reviewers or area chairs.

    The required section covers the minimum compute needed to reproduce core results, while optional fields allow for additional details like development logs or ablation trials.
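
    The headline CRF numbers are easy to derive from a training run. Below is a minimal back-of-envelope sketch in Python, assuming a dense transformer and the common 6 × parameters × tokens approximation for training FLOPs; all field names are illustrative, not the official CRF schema:

    ```python
    # Back-of-envelope numbers for a CRF-style compute summary.
    # The 6 * params * tokens rule is a standard approximation for dense
    # transformer training FLOPs; field names here are hypothetical,
    # not CVPR's official CRF schema.

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Approximate total training FLOPs for a dense transformer."""
        return 6.0 * n_params * n_tokens

    def gpu_hours(n_gpus: int, wall_clock_hours: float) -> float:
        """Total GPU-hours consumed by one training run."""
        return n_gpus * wall_clock_hours

    run = {
        "gpu_type": "NVIDIA A100 80GB",   # hardware spec (type)
        "gpu_count": 8,                   # hardware spec (count)
        "wall_clock_hours": 72.0,         # total training time
    }
    run["gpu_hours"] = gpu_hours(run["gpu_count"], run["wall_clock_hours"])
    run["train_flops"] = training_flops(n_params=1.3e9, n_tokens=2.6e10)

    print(f"GPU-hours: {run['gpu_hours']:.0f}")                 # 576
    print(f"Approx. training FLOPs: {run['train_flops']:.2e}")  # 2.03e+20
    ```

    Numbers at this granularity cover the required "minimum compute to reproduce core results"; development runs and ablation trials would go in the optional fields.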


    🏅 Incentives for Transparency and Efficiency

    To foster more responsible research, CVPR 2026 will introduce three recognition awards:

    1. 🟩 Efficient CVPR Badge – High-efficiency papers within their category.
    2. 🟨 CVPR Compute Gold Star – Exceptional compute efficiency.
    3. 🟦 CVPR Compute Transparency Champion Award – High-quality anonymized logs (e.g., via W&B).

    Award criteria include compute-per-performance-gain ratios, compared across similar tasks.
    Recognized papers will be spotlighted at the conference and featured on the official website.
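
    CVPR has not published the exact award formula, so the following is only one plausible reading of "compute-per-performance-gain": the extra GPU-hours a method spends per point of improvement over its strongest baseline. A minimal sketch under that assumption:

    ```python
    # Hypothetical compute-per-performance-gain ratio: extra GPU-hours spent
    # per point of metric improvement over the strongest baseline.
    # This definition is an assumption for illustration; the actual award
    # rubric used by CVPR may differ.

    def compute_per_gain(gpu_hours_new: float, gpu_hours_base: float,
                         score_new: float, score_base: float) -> float:
        gain = score_new - score_base
        if gain <= 0:
            raise ValueError("method does not improve on the baseline")
        return (gpu_hours_new - gpu_hours_base) / gain

    # e.g. 576 GPU-hours vs. a 128 GPU-hour baseline, for +1.8 mAP:
    print(compute_per_gain(576.0, 128.0, 54.3, 52.5))  # ~248.9 GPU-hours per mAP point
    ```

    Lower is better on such a ratio, which is presumably why the comparisons are restricted to similar tasks.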


    🧾 Practical Notes on CRF Submission

    Before filling out your CRF:

    • Review the official pre-filled example on the CVPR site.
    • Describe both resource usage and achieved performance.
    • You may optionally submit anonymized W&B logs using CVPR’s anonymization tool; this can improve eligibility for recognition awards (a minimal logging sketch follows below).

    ⚠️ Important:
    Do not flatten or print the PDF; doing so strips the embedded metadata needed for automated processing.
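
    The mechanics of CVPR’s anonymization tool are not documented publicly, but the standard wandb client already supports anonymous runs. A minimal sketch of logging CRF-relevant numbers that way (the project and metric names are made up for illustration):

    ```python
    # Minimal sketch: log CRF-relevant compute metadata to an anonymous W&B run.
    # `anonymous="allow"` is standard wandb behavior; the project and metric
    # names are illustrative and unrelated to CVPR's own tooling.
    import wandb

    run = wandb.init(
        project="crf-compute-log",   # hypothetical project name
        anonymous="allow",           # run is not tied to a named account
        config={"gpu_type": "NVIDIA A100 80GB", "gpu_count": 8},
    )

    hours_per_epoch = 0.72
    for epoch in range(100):
        # ... training for one epoch would happen here ...
        wandb.log({"epoch": epoch,
                   "gpu_hours_so_far": 8 * hours_per_epoch * (epoch + 1)})

    wandb.finish()
    ```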


    💡 Why This Policy Is a Big Deal (and What Research Tells Us)

    A recent empirical study, “The Role of Computing Resources in Publishing Foundation Model Research” (Hao et al., 2025, MIT–Notre Dame–Cornell), provides the most comprehensive data-driven evidence yet on why compute reporting matters.


    🔍 Key Findings from Hao et al. (2025):

    • The team analyzed 6,517 foundation model papers (2022–2024) across eight top CS conferences and surveyed 229 first authors.
    • Only 16.5% of papers disclosed GPU usage, and just 12.8% mentioned inference time.
    • GPU access strongly correlated with both acceptance rates and citation counts, especially at venues like NeurIPS and ICML.
    • TFLOP usage, rather than GPU count, showed the strongest correlation with scientific impact, pointing to throughput rather than sheer hardware quantity (see the sketch after this list).
    • Despite concerns, no evidence suggested that compute reporting harmed acceptance chances; transparency was mostly absent due to missing checklist requirements.
    • Industrial institutions (e.g., Google, Microsoft) had disproportionate representation in high-compute papers, while academia led in total paper count (≈4,850 vs. 1,425), revealing an emerging compute divide.
    • The top GPUs used were NVIDIA A100 and H100, with pretraining tasks consuming the largest compute budgets.
    • Encouragingly, open-weight models (LLaMA, Flan-T5, BERT) dominated citations, signaling that accessibility and openness can partially offset compute inequality.
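
    The throughput point is easy to make concrete: total compute is roughly peak throughput × utilization × GPU-seconds, so a small cluster of newer GPUs can out-spend a larger cluster of older ones. A rough sketch, using spec-sheet tensor-core peaks and an assumed 40% utilization placeholder:

    ```python
    # Rough total-compute budget, illustrating why TFLOP throughput rather
    # than GPU count is the meaningful quantity. Peak figures are spec-sheet
    # tensor-core numbers; the 40% utilization is an assumed placeholder.

    def total_pflops(n_gpus: int, peak_tflops: float, hours: float,
                     utilization: float = 0.4) -> float:
        """Total petaFLOPs of useful compute over a run."""
        return n_gpus * peak_tflops * utilization * hours * 3600 / 1e3

    # 16 V100s (~125 TFLOPS FP16) vs. 4 H100s (~990 TFLOPS BF16), 100 h each:
    print(total_pflops(16, 125.0, 100.0))  # 288,000 PFLOPs
    print(total_pflops(4, 990.0, 100.0))   # 570,240 PFLOPs: fewer GPUs, more compute
    ```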

    🧩 Why It Matters for Peer Review

    Hao et al. argue that compute disclosure must become a standardized practice — not to punish large-scale research, but to make the field more accountable and equitable.
    Their data show that without transparency, it’s impossible to assess whether scientific merit or resource abundance drives success in publication and citation metrics.

    CVPR’s CRF policy directly addresses this gap.
    If successful, it could serve as a template for “ethical benchmarking”, where efficiency, reproducibility, and accessibility are weighed alongside performance.


    💬 Toward a More Accountable Peer Review System

    By mandating compute disclosure, CVPR 2026 may reshape what we value in research:

    • Fairness: Distinguish between industrial-scale and academic-scale experiments transparently.
    • Accountability: Allow reviewers and the public to interpret results relative to compute cost.
    • Sustainability: Incentivize efficiency and shared infrastructure over brute-force scaling.
    • Equity: Enable policy and funding bodies to identify under-resourced researchers.

    The broader implication: peer review should evolve to recognize scientific creativity per unit of compute, not just leaderboard performance.


    🧭 Discussion Prompt

    This is a major cultural shift for computer vision — perhaps for computer science as a whole.
    If CVPR’s model succeeds, similar standards could emerge at NeurIPS, ICML, or ACL, creating a unified ecosystem for transparent AI research.

    Questions for discussion:

    • Should peer reviewers see compute disclosures during evaluation, or only post-acceptance?
    • Could compute-aware reviewing unintentionally bias against resource-intensive but necessary research?
    • How can conferences ensure privacy and fairness in reporting sensitive compute data?

    References:

    • Hao, Y., Huang, Y., Zhang, H., Zhao, C., Liang, Z., Liang, P. P., et al. (2025). The Role of Computing Resources in Publishing Foundation Model Research. arXiv:2510.13621.
    • CVPR 2026 Compute Reporting Initiative. https://cvpr.thecvf.com/Conferences/2026/ComputeReporting