Stephen Downes

Knowledge, Learning, Community

The Chinese government has proposed a comprehensive set of principles for the oversight of generative AI (in Chinese, here (12 page PDF)). "A significant portion of the draft regulation focuses on the safety of the corpus, which is the data used to train AI models," including source safety, traceability, content security and data labeling. Also, "the draft specifies 31 security risks across 5 categories" including violations of socialist values, discriminatory content, commercial violations, infringing the rights of others, and insufficient safeguards for sensitive services. These might be more strict than we might want in the western world, but are certainly more credible than the 'anything goes' policy evidently supported by western corporations. See also Digital Policy Alert.




Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Feb 27, 2024 1:57 p.m.

Creative Commons License.
