Tech giants are divided as they lobby regulators

Tech leaders have been vocal proponents of the need to regulate artificial intelligence, but they’re also lobbying hard to make sure the new rules work in their favor.
That’s not to say they all want the same thing.
Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance that’s advocating for an “open science” approach to AI development that puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI.
These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. Safety is at the heart of the debate, but so is who gets to profit from AI’s advances.
Open advocates favor an approach that is “not proprietary and closed,” said Darío Gil, a senior vice president at IBM who directs its research division. “So it’s not like a thing that is locked in a barrel and no one knows what they are.”
The term “open-source” comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.
Open-source AI involves more than just code, and computer scientists differ on how to define it depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use the term open science to describe the broader philosophy.
The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel and several universities and AI startups — is “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies,” Gil said in an interview with The Associated Press ahead of its unveiling.
Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed.
“To state the obvious, there are near-term and commercial incentives against open source,” said Ilya Sutskever, OpenAI’s chief scientist and co-founder, in a video interview hosted by Stanford University in April. But there’s also a longer-term worry involving the potential for an AI system with “mind-bendingly powerful” capabilities that would be too dangerous to make publicly accessible, he said.