Biden administration wants your input on rules for AI models like ChatGPT
American officials are taking further steps to set rules for AI systems like ChatGPT. The National Telecommunications and Information Administration (NTIA) is asking for public comments on possible regulations that hold AI creators accountable. The measures will ideally help the Biden administration ensure that these models work as promised “without causing harm,” the NTIA says.
While the request is open-ended, the NTIA suggests input on areas like incentives for trustworthy AI, safety testing methods and the amount of data access needed to assess systems. The agency is also wondering if different strategies might be necessary for certain fields, such as healthcare.
Comments are open on the AI accountability measure until June 10th. The NTIA sees rulemaking as potentially vital, pointing to a “growing number of incidents” where AI has already done damage. Rules could not only prevent repeats of those incidents, but also minimize the risks from threats that are still only theoretical.
ChatGPT and similar generative AI models have already been tied to sensitive data leaks and copyright violations, and have prompted fears of automated disinformation and malware campaigns. There are also basic concerns about accuracy and bias. While developers are tackling these issues with more advanced systems, researchers and tech leaders have been worried enough to call for a six-month pause on AI development to improve safety and address ethical questions.
The Biden administration hasn’t taken a definitive stance on the risks associated with AI. President Biden discussed the topic with advisors last week, but said it was too soon to know if the technology was dangerous. With the NTIA move, the government is closer to a firm position — whether or not it believes AI is a major problem.