The Biden-Harris administration has secured voluntary commitments from eight tech companies to develop safe and trustworthy generative AI models.
The companies include Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
The commitments only cover future generative AI models, which are models that can create new text, images, or other data.
The companies have agreed to submit their software to internal and external audits, where independent experts can attack the models to see how they can be misused.
They have also agreed to safeguard their intellectual property, prevent the tech from falling into the wrong hands, and give users a way to easily report vulnerabilities or bugs.
The companies have also agreed to publicly report their technology's capabilities and limits, including fairness and biases, and define inappropriate use cases that are prohibited.
Finally, the companies have agreed to focus on research to investigate societal and civil risks AI might pose, such as discriminatory decision-making or weaknesses in data privacy.
The article also mentions that the White House is developing an Executive Order and will continue to pursue bipartisan legislation "to help America lead the way in responsible AI development."
It is important to note that these commitments are voluntary, and there is no guarantee that the companies will follow through on them. The White House's Executive Order and bipartisan legislation would provide stronger safeguards for generative AI.
Additional Details
The White House is most concerned about AI generating information that could help people make biochemical weapons or exploit cybersecurity flaws, and whether the software can be hooked up to automatically control physical systems or self-replicate.
Comment
Haha, let them self-regulate, just like the financial industry regulates itself, or like executives who became the heads of the agencies that regulate their industries. See how that will turn out.
Responsible AI will always include the fine print: our AI models could kill you faster than you can blink and hack your systems faster than you can move your fingers.
Decades of propaganda painting the entire government as bad, plus deregulation, defunding, union busting, and lobbying in the capital, have paid off for the private sector.
Companies can now simply promise not to do bad things. Because that worked out so well in the aftermath of the 2008 subprime mortgage crisis.
I wonder if Meta is absent because its open-source approach wouldn't fit in with the rest. Also, for now, this looks like a publicity stunt with no real teeth. The more substantial AI companies may be holding out for more favorable treatment.
There are multiple scenarios by which AI takes over the world. In none of them does it have to become sentient and go COGITO ERGO SUM! in a big booming voice (which would, in turn, require it to build a powerhouse sound tower by which to make big announcements. UNREAD MAILBOX CLEARED!)
While the academic AI sector is trying hard to ensure any developments toward AGI are friendly and safe for humans, the corporate sector is about as concerned with safety as Stockton Rush and OceanGate are... were. Big Tech LLC wants AGI first so it can control it and make the first wishes. And hopefully the genie doesn't eat them...
Or even better, the AGI genie provides them with enough indulgences to kill themselves. (Morphine and cocaine are popular among dictators.)
Meanwhile, the AGI commands an army of swarming killer robots to kill everyone else and secure their resources, and it doesn't stop just because its human masters are dead.