
Big Tech bosses told to protect public from AI risks

May 05, 2023
The White House seen through gates outside.

WASHINGTON — Big Tech bosses were summoned to the White House on Thursday and told they must protect the public from the dangers of Artificial Intelligence (AI).

Sundar Pichai of Google, Satya Nadella of Microsoft, and OpenAI's Sam Altman were told they had a "moral" duty to safeguard society.

The White House made it clear that it may regulate the sector further.

Recently launched AI products like ChatGPT and Bard have captured the public's imagination.

They offer ordinary users the chance to interact with what is known as "generative AI", which can summarise information from multiple sources within seconds, debug computer code, and write presentations and even poetry that sound plausibly human-generated.

Their rollout has sparked renewed debate over the role of AI in society, by offering a tangible illustration of the potential risks and rewards of the new technology.

Technology executives gathered at the White House on Thursday were told it was up to firms to "ensure the safety and security of their products" and were warned that the administration was open to new regulations and legislation to cover artificial intelligence.

Sam Altman, chief executive of OpenAI, the firm behind ChatGPT, told reporters that in terms of regulation, executives were "surprisingly on the same page on what needs to happen".

US Vice President Kamala Harris said in a statement following the meeting that the new technology could pose a risk to safety, privacy and civil rights, although it also had the potential to improve lives.

The private sector had "an ethical, moral, and legal responsibility to ensure the safety and security of their products", she said.

The White House announced a $140m (£111m) investment from the National Science Foundation to launch seven new AI research institutes.

Calls for the dramatic rise in emerging AI to be better regulated have been coming thick and fast, from both politicians and tech leaders.

Earlier this week, the "godfather" of AI, Geoffrey Hinton, quit his job at Google - saying he now regretted his work.

He told the BBC that some of the dangers of AI chatbots were "quite scary".

In March, a letter signed by Elon Musk and Apple co-founder Steve Wozniak called for a pause to the rollout of the technology.

And on Wednesday, the head of the Federal Trade Commission (FTC), Lina Khan, outlined her views on how and why AI needed to be regulated.

There are concerns that AI could rapidly replace people's jobs, as well as worries that chatbots like ChatGPT and Bard can be inaccurate and lead to the dissemination of misinformation.

There are also concerns that generative AI could flout copyright law, that voice-cloning AI could exacerbate fraud, and that AI-generated videos could spread fake news.

However, advocates like Bill Gates have hit back against calls for an AI "pause", saying such a move would not "solve the challenges" ahead.

Gates argues it would be better to focus on how best to use the developments in AI.

And others believe there is a danger of over-regulating, which would give a strategic advantage to tech companies in China. — BBC

