The False Dawn: Reevaluating Google's RL for Chip Macro Placement (arxiv.org)
71 points by oldgradstudent on June 28, 2023 | hide | past | favorite | 17 comments


This is a very harsh critique of Google's approach to peer-reviewed publishing of ideas. Because they refused to publish sufficient details, it was impossible to reproduce the results, and when people dug deeper, the method was performing WORSE than other mechanistic approaches to placement, not better. The author takes Nature to task for lack of rigour in accepting the publication. Well, I think it reflects badly on Google and Nature both. They chose Nature presumably for the prestige.

On the other hand, it might be that this guy's nose was out of joint because some co-published work was declined.

  It remains unclear why Google did not allow publishing [5] (coauthored by the author of this note), especially after its results and conclusions were corroborated by the published paper [7] written at UCSD with lengthy involvement from Google. Granted, [5] and [6, 7] found major flaws in [1], but “a commitment to open inquiry, intellectual rigor, integrity, and collaboration” must protect legitimate research, even if it is politically inconvenient.
On the whole, I don't see how, if the authors were academics with tenure, they'd survive this one. This is entirely NOT how it's meant to work.

Surely, it would be severely career limiting?

[1] Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan et al., "A Graph Placement Methodology for Fast Chip Design," Nature 594 (2021), pp. 207-212. arXiv:2004.10746

[5] Sungmin Bae, Amir Yazdanbakhsh, Satrajit Chatterjee, Mingyu Woo, Igor L. Markov, et al., "Stronger Baselines for Evaluating Deep Reinforcement Learning in Chip Placement", March 2022. https://statmodeling.stat.columbia.edu/wp-content/uploads/20...

[6] MacroPlacement Repo. https://github.com/TILOS-AI-Institute/MacroPlacement

[7] Chung-Kuan Cheng, Andrew B. Kahng, Sayak Kundu, Yucheng Wang, Zhiang Wang, "Assessment of Reinforcement Learning for Macro Placement", ISPD 2023, arXiv:2302.11014


This isn’t “harsh” IMO, it’s science.

If you do low quality science (incomplete methods and no reproducibility), other scientists are supposed to use robust evaluation to refute your claims.

Seems extremely straightforward; people just don't like being shown to be incompetent.


The links are broken. Remove extra spaces?

Your point about survival is well taken. Not only are these papers negative, but there is also a lawsuit [50] with accusations of fraud: https://regmedia.co.uk/2023/03/26/satrajit_vs_google.pdf


done. Thanks.


Works for me. Thank you


> On the whole, I don't see how, if the authors were academics with tenure, they'd survive this one. This is entirely NOT how it's meant to work.

WDYM?


If you submit a peer-reviewed paper to a frontline journal like Nature and it then turns out your work cannot be reproduced, and you issue neither a retraction (with an explanation) nor an explanation of how to make it work, along with your data, then you stand very likely to be accused of falsifying data.

If you had secured tenure at a university on the strength of the Nature paper, and this happened, your head of department would quietly suggest you look elsewhere for employment, because you won't be getting seniority in the department, ever.

You can publish shit in the minor outer planet regional journal of skeptical telekinesis, and nobody cares. If you publish a paper in Nature saying "we cured cancer" and it turns out you forgot to stir the lab reagents before adding them to the PCR machine, people care.


Not true; no idea where you got the idea that academics have such high standards. You might be referring to some long-lost past that no longer exists. I have spent time in ML academia, and standards are mostly a joke. Things do self-select in the sense that the papers that are cited well and generate buzz are often those that are easily reproducible and show a significant improvement (since researchers instantly try them and they work). But tuning baselines to look bad, creating your own benchmarks, and modifying experiment conditions to make your method look good is par for the course in ML; this paper is simply far more egregious than average.

I think it’s honestly worse in other fields like cancer research, where apparently >50% of the top-cited papers don’t replicate. Read about the “replication crisis”.

EDIT: some more thoughts on the issues. There really are two kinds of citations in research papers. The first is the good kind, which occurs when your method is so good that others use it and cite it. This is unfortunately very hard to achieve: even if your method is good, it needs to be packaged well, and researchers aren’t great SWEs. The second kind is when you get cited in the related-work section of another paper that simply states that so-and-so method achieved so-and-so results on so-and-so benchmark. This kind of citation creates bad incentives: your goal is to be on top of some benchmark, no matter what. So you start inventing benchmarks, tuning baselines to look bad, etc. This kind of citation is also easier to get, so a majority of papers go for it instead. Maybe a solution is to just ban related-work sections, but who knows if that would work.


> If you had secured tenure at a uni because of achieving the nature paper, and this happened, your head of department would quietly suggest you look elsewhere for employment

But if you have tenure then you can't be fired, no? Isn't that the role of tenure?

Presumably you're in the bad situation if you _don't_ have tenure?


You wish

Academics are as unethical as businessmen - and especially the overlap between tech and science research. As long as these researchers are bringing in grant money and getting press coverage (good or bad) they will keep their jobs or will be promoted.


Igor Markov, along with Sat Chatterjee, seems to be pursuing a bizarre vendetta (after Sat failed to take over their project) against the lead authors of the chip placement work, not some sort of intellectually honest critique.

This was covered previously in the press and on social media, with statements from a variety of prominent researchers (e.g. [1][2][3]).

The code is even available for the Nature paper's method, along with an FAQ: https://github.com/google-research/circuit_training#FAQ

[1] https://twitter.com/ZoubinGhahrama1/status/15122035096467415...

[2] https://twitter.com/JacobSteinhardt/status/15215993404137881...

[3] https://twitter.com/sguada/status/1521587406385807361


Why not try reading the paper written by Igor and try to find a single instance where he launches a personal attack on the researchers, calls them names, etc.?

Note just the difference in tone as you read Igor’s work versus the stuff of his detractors. One immediately goes personal, tries to figure out the motivations of the opposing counsel, talks about harassment, and sounds emotional, to say the least. The other has an extremely objective tone, focuses only on the subject matter, and in general reads more like a maths theorem than an activist essay.

I’ll leave you to guess who sounds like who.


One is basically an evidence-free ad hominem attack.

> [2] https://twitter.com/JacobSteinhardt/status/15215993404137881...

The other two sources make a concrete claim that in mid-2022 there was an independent, open-source replication of the Nature paper:

> [1] https://twitter.com/ZoubinGhahrama1/status/15122035096467415...

>> Google stands by this work published in Nature on ML for Chip Design, which has been independently replicated, open-sourced, and used in production at Google.

> [3] https://twitter.com/sguada/status/1521587406385807361

>> The results in the Nature paper were independently replicated and validated by my team, the results were used in actual chips and Sat and his collaborators know it.

>> Furthermore, the code was open-sourced.

>> It is sad that you are providing a platform for someone's resentments.

The claims about independent replication refer to Google's circuit_training repository[1]. The UCSD team has conclusively shown this claim was materially false (see section 3 of their paper[2]).

BTW, Prof. Andrew Kahng, who headed the UCSD effort, initially wrote an extremely favorable editorial about the Nature paper[3].

[1] https://github.com/google-research/circuit_training

[2] https://arxiv.org/pdf/2302.11014.pdf

[3] https://www.nature.com/articles/d41586-021-01515-9


The matter is way past superficial personal accusations. And the people at these Twitter links have no technical background in chip design (why would anyone listen to them?). Sergio's and Zoubin's tweets are obviously and verifiably wrong.

Nature confirmed to reporters that they are investigating the paper. https://www.theregister.com/2023/03/27/google_ai_chip_paper_...


For more context on the controversy here, see https://spectrum.ieee.org/chip-design-controversy (I don't know how unbiased _that_ article is). Also discussed in https://news.ycombinator.com/item?id=35441759.


https://www.npr.org/2023/06/26/1184289296/harvard-professor-...

Seems to be a season for uncovering flaws in scientific publications :)


That's always in season, sadly.



