Curious if anyone can point to some resources that summarize the pros/cons arguments of this legislation. Reading this article, my first thought is that I definitely agree it sounds impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.
At the same time,
> Computer scientists Geoffrey Hinton and Yoshua Bengio, who developed much of the technology on which the current generative-AI wave is based, were outspoken supporters. In addition, 119 current and former employees at the biggest AI companies signed a letter urging its passage.
These are obviously highly intelligent people (though I've definitely learned in my life that intelligence in one area, like AI and science, doesn't mean you should be trusted to give legal advice), so I'm curious to know why Hinton and Bengio supported the legislation so strongly.
> impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.
Nope, that's entirely standard legal stuff. Tort law deals exactly with those kinds of things, for instance. Yes it can certainly wind up in litigation, but the entire point is that if there's a gray area, a company should make sure it's operating entirely within the OK area -- or know it's taking a legal gamble if it tries to push the envelope.
But it's generally pretty easy to stay in the clear if you establish common-sense processes around these things, with a clear paper trail and decisions approved by lawyers.
Now the legislation can be bad for lots of other reasons, but "reasonable care" and "unreasonable risk" are not problematic.
> but "reasonable care" and "unreasonable risk" are not problematic.
Still strongly disagree, at least when it comes to AI legislation. Yes, I fully realize that "reasonableness" standards appear in lots of places in US jurisprudence. But with AI, the tech is so new, and perhaps more than any other recent technology it is largely a "black box": we don't really know how it works, and we aren't really sure what its capabilities will ultimately be. Given that, I don't think anybody really knows what "reasonableness" means in this context.
Exactly. It's about as meaningful as passing a law making it illegal to be a criminal. Right, so what does that actually mean apart from "we'll decide when it happens"?
The concern is that near-future systems will be much more capable than current systems, and that by the time they arrive, it may be too late to react. Many people at the large frontier AI companies believe that world-changing AGI is 5 years or less away; see Situational Awareness by Aschenbrenner, for example. There's also a parallel concern that AIs could make terrorism easier[1].
Yoshua Bengio has written in detail about his views on AI safety recently[2][3][4]. He seems to put less weight on human level AI being very soon, but says superhuman intelligence is plausible in 5-20 years and says:
> Faced with that uncertainty, the magnitude of the risk of catastrophes or worse, extinction, and the fact that we did not anticipate the rapid progress in AI capabilities of recent years, agnostic prudence seems to me to be a much wiser path.
Hinton also has a detailed lecture he's been giving recently about the loss of control risk.
In general, proponents see this as a narrowly tailored bill that somewhat addresses the worst-case worries about loss of control and misuse.
Thank you! Your post was really helpful for my understanding, so I greatly appreciate it.
Also, while trying to understand some terms in your article, I stumbled onto https://www.brookings.edu/articles/misrepresentations-of-cal... which also gave some really good info, e.g. the difference between the "reasonable assurance" language that was dropped from an earlier version of the bill and the "reasonable care" language that replaced it.
Here's a post by the computer scientist Scott Aaronson on his blog, in support: https://scottaaronson.blog/?p=8269 -- it links to some earlier explainers, has some pro-con arguments, and further discussion in the comments.
Oh, wow, thanks very much! Not only was that a very informative article, but it also has links to other detailed opinions on the topic (and some of those had links...), which left me feeling much better informed. Much appreciated!