Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate these contracts. "The way that the big get bigger in AI is by sucking up everyone else's data and using it to train and expand their own systems," Warren told the Washington Post.
The new bill would "require a competitive award process" for contracts, banning the use of "no-bid" awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers' move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time, in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)
While Big Tech is hit with antitrust investigations, including the ongoing lawsuit against Google over its dominance in search as well as a new investigation opened into Microsoft, regulators are also accusing AI companies of, well, just straight-up lying.
On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and as being trained on millions of images, two claims the FTC says are false. (The company could not support the bias claim, and the system was trained on only 100,000 images, the FTC says.)
A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident in which they failed to detect a seven-inch knife that was ultimately used to stab a student.
These add to the complaints the FTC filed back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling "AI lawyer" services.