A 'foundation model' is "an AI model that can be adapted to a wide range of downstream tasks." For example, you can build one foundation model for "runways" and then adapt it in different ways for different tasks, like crack detection or maintenance issues. This report (29 page PDF) takes a risk-based approach to the ethical issues involving foundation models, paying particular attention to the new risks inherent to foundation models specifically. The risks are about what you would expect: false reports, bias, privacy loss, etc. The risks to focus on are the ones labeled 'new' (in the far-right column). The biggest new type of risk is data being retained in the foundation model that might be exposed in the application model, a risk amplified by an inability to trace an output's source or provenance. Worth noting is that the only way a corporation knows something is ethically wrong is through "fines, reputational harms, disruption to operations, and other legal consequences." Image: Nvidia.