This document commits the countries to identifying potential harms caused by AI and to "building respective risk-based policies across our countries to ensure safety in light of such risks," relying on "increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research." As Ben Werdmuller says, "The onus will be on AI developers to police themselves. We will see how that works out in practice." No. We know how that will work out in practice. See also: Reuters, the Guardian, BBC.