Six months later, the call to slow AI development is more crucial than ever

I endorse this plan (with a minor caveat about liability, which I need to think more about).

The U.S. must immediately establish a detailed registry of giant AI experiments, maintained by a U.S. federal agency. This agency should also monitor the large clusters of specialized hardware used in these experiments, and work with the manufacturers of that hardware to include safety and verification features at the chip level. The U.S. government should, at a minimum, ensure that it has the capability to trigger a pause. It has become clear that corporations are not merely reluctant to hit the brakes: the brake pedal does not even exist.

If we are going to realize the revolutionary potential of AI, regulators must enforce standards that ensure safety and security during development. They must place the burden of proof on developers, requiring them to demonstrate that their new systems are safe before deployment, just as regulators already require for new drugs, cars, and airplanes. Lawmakers must also take proactive steps to ensure that developers are legally liable for the harm their products cause.

These efforts cannot stop at home. The large-scale risks of AI affect everyone everywhere, and the upcoming UK summit is an opportunity to start the crucial task of addressing them at a global level in a way that transcends national borders and geopolitical rivalries. This kind of international cooperation is possible. We coordinated on cloning. We banned bioweapons. We signed treaties about nuclear weapons even at the height of the Cold War. We can work together on AI.