Google chief trusts AI makers to regulate the technology

Google chief Sundar Pichai, in an interview, said fears about artificial intelligence are valid but that the tech industry is up to the challenge of regulating itself.

Tech companies building AI should factor in ethics early in the process to ensure that artificial intelligence with “agency of its own” does not harm people, Pichai said in an interview with the Washington Post.

“I think tech has to realise it just can’t build it, and then fix it,” Pichai said. “I think that doesn’t work.”

The California-based internet giant is a leader in the development of AI, competing in the smart software race with titans such as Amazon, Apple, Microsoft, IBM and Facebook.

Pichai said worries about harmful uses of AI are “very legitimate” but that the industry should be trusted to regulate its use.

“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said.

“This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

Google in June published a set of internal AI principles, the first being that AI should be socially beneficial.

“We recognise that such powerful technology raises equally powerful questions about its use,” Pichai said in a memo posted with the principles.

“As a leader in AI, we feel a deep responsibility to get this right.”

Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights.

The company noted that it would continue to work with the military or governments in areas such as cybersecurity, training, recruitment, healthcare, and search-and-rescue.

AI is already used to recognise people in photos, filter unwanted content from online platforms, and enable cars to drive themselves.

The increasing capabilities of AI have triggered debate about whether computers that could think for themselves would help cure the world’s ills or turn on humanity, as depicted in works of science fiction. – AFP
