Google's Enhanced Bug Bounty Initiative Takes Aim at AI Security Risks
Google is expanding its Vulnerability Rewards Program (VRP) to reward researchers who uncover potential threats specific to generative artificial intelligence (AI) systems. This initiative aims to enhance AI safety and security by addressing concerns like unfair bias, model manipulation, and data misinterpretations associated with generative AI. The program covers various categories, including prompt injections, data leakage from training datasets, model manipulation, adversarial attacks, and model theft.
In addition to this, Google has established an AI Red Team to combat AI system threats under its Secure AI Framework (SAIF). They are also working to reinforce the AI supply chain by participating in open-source security initiatives like Supply Chain Levels for Software Artifacts (SLSA) and Sigstore, which provide tools for software verification and identifying vulnerabilities.
This announcement coincides with OpenAI's creation of a Preparedness team to address catastrophic risks related to generative AI, including cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats. Furthermore, Google, OpenAI, Anthropic, and Microsoft have jointly launched a $10 million AI Safety Fund to support research in the field of AI safety.
Source: https://thehackernews.com/2023/10/google-expands-its-bug-bounty-program.html