SNEAK PEEK
- SB 1047 introduces a “critical harm” category for AI, focusing on catastrophic risks.
- Elon Musk supports SB 1047, consistent with his long-standing calls for AI regulation.
- Debate centers on the bill’s impact on open-source AI models and industry practices.
Elon Musk and Vitalik Buterin have expressed their support for California’s proposed SB 1047 AI safety bill, a move that has sparked widespread discussion in the tech community. The bill, which aims to regulate the development and deployment of artificial intelligence, has drawn attention for its introduction of a “critical harm” category.
Elon Musk and Vitalik both expressed support for the California SB 1047 AI safety bill. Vitalik said he liked that the bill introduced a "critical harm" category and explicitly separates between that and other bad things and the charitable read of the bill is that the (medium-…
— Wu Blockchain (@WuBlockchain) August 27, 2024
Vitalik Buterin highlighted this aspect of the bill, noting its importance in AI safety discourse. He appreciated that the bill clearly separates catastrophic harm from other less severe risks.
According to Buterin, the charitable interpretation of the bill’s medium-term goal is to mandate comprehensive safety testing. This would ensure that the model would not be released if world-threatening capabilities or behaviors are discovered during testing.
Elon Musk, who has been a vocal advocate for AI regulation for over two decades, echoed Buterin’s sentiments. In a public statement, Musk acknowledged that the bill might upset some people but ultimately supported its passage.
Not all reactions have been supportive, however, and sustained criticism over the bill's scope could undermine backing for it. Commenting on the impact of SB 1047, Danielle Fong remarked that the AI giants would be the ones largely affected, particularly Meta, which has been vocal about open-sourcing its models.
By that logic, Tunstall expected Musk's own xAI to remain relatively safe, as the company is not presently moving in that direction.
Buterin was equally concerned that the bill might undermine open-weight models in the future. He asked for concrete evidence that the bill could be used to go after such models, noting that earlier versions contained a full shutdown requirement incompatible with open weights, but that this has since been removed.
What's the best evidence that the bill is going to be used to go after open weights?
I know that earlier versions of the bill had a full shutdown req that's incompatible with open weights, but that's been removed. And I know that some AI safety people have expressed support for…
— vitalik.eth (@VitalikButerin) August 27, 2024
Buterin also questioned whether those who would decide what constitutes a "duty of reasonable care" under the bill, such as regulators and courts, would be influenced by AI safety advocates who support banning open weights.