Without defining the standards, “the process can become political,” Kreps said. This would create a system where “whoever has the power can shape how vetting works.”
So far, neither Biden nor the Trump administration has figured out how to prevent this, Kreps said.
Microsoft’s blog stated that “CAISI, Microsoft and NIST will collaborate on the development of analytical methods,” suggesting the plan is to implement these standards quickly. According to Microsoft, “testing AI systems in ways that investigate unexpected behavior, misuse patterns, and failure modes” is like stress testing whether the airbags, seatbelts, and brakes of a highly complex vehicle work properly and reliably.
But Gregory Falco, a Cornell University assistant professor of mechanical and aerospace engineering and an expert in AI governance, insists there is a better way.
“Government oversight of AI cannot be a political assessment of a model’s results, nor should it be a way to determine whether a model says good or bad things about the president or the administration,” Falco said.
Instead of relying on a politicized government process to oversee the AI systems people use, the US could create “an alternative way of computing itself,” Falco said.
Imagine, Falco suggests, that AI companies knew their models could be audited at any time: how would that shape accountability and enforcement? Working much like the Internal Revenue Service (IRS), such an audit system could create “real consequences for careless filing,” Falco said. For AI companies facing that prospect, the pressure would be to improve their internal AI security testing, he said.
This appears to be “the only viable option,” said Falco, since “the government currently does not have the domestic technology, infrastructure, or day-to-day intelligence needed to directly evaluate these systems on its own.”