Thursday, December 12, 2024

Our commitments to advance safe, secure, and trustworthy AI


Today, Microsoft is announcing its support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. By endorsing all of the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals, Microsoft is expanding its safe and responsible AI practices, working alongside other industry leaders. 

By moving quickly, the White House's commitments create a foundation to help ensure that the promise of AI stays ahead of its risks. We welcome the President's leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public. 

Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices – such as red-team testing and the publication of transparency reports – that will propel the whole ecosystem forward. The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere. We look forward to their broad adoption by industry and their inclusion in the ongoing global discussions about what an effective international code of conduct might look like. 

Microsoft's additional commitments focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust. From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability. We have also committed to broad-scale implementation of the NIST AI Risk Management Framework, and to the adoption of cybersecurity practices that are attuned to unique AI risks. We know that this will lead to more trustworthy AI systems that benefit not only our customers, but the whole of society. 

You can view the detailed commitments Microsoft has made here. 

It takes a village to craft commitments such as these and put them into practice at Microsoft. I would like to take this opportunity to thank Kevin Scott, Microsoft's Chief Technology Officer, with whom I co-sponsor our responsible AI program, as well as Natasha Crampton, Sarah Bird, Eric Horvitz, Hanna Wallach, and Ece Kamar, who have played key leadership roles in our responsible AI ecosystem. 

As the White House's voluntary commitments reflect, people must remain at the center of our AI efforts, and I am grateful to have strong leadership in place at Microsoft to help us deliver on our commitments and continue to grow the program we have been building for the last seven years. Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security, and trustworthiness; it will also allow us to better unlock AI's positive impact for communities across the U.S. and around the world. 
