Microsoft exec rejects rogue generative AI risk

(The Center Square) – A Microsoft policy executive told Pennsylvania lawmakers this week he’s “unaware” of the possibility that generative artificial intelligence could develop sentience and become exploitative – even dangerous.

“This is not new to Microsoft,” said Tyler Clark, Microsoft’s director of state and local government affairs. “Humans need to guide this technology and that’s what we are committed to doing safely and responsibly.”

Clark’s response came after lawmakers on the House Majority Policy Committee pressed him on the theory of technological singularity – which posits that artificial intelligence will outsmart human regulation and leave society at its whim.

Although it sounds like the plot of a dystopian novel, researchers and policymakers acknowledge the possibility, though they don’t consider it inevitable or even entirely negative.

“What I fear most is not AI or singularity but human frailty,” said Dr. Nivash Jeevanandam, senior researcher and author for the National AI Portal of India, in an article published by Emeritus.

Jeevanandam said that humans may not realize the singularity has arrived until machines reject human intervention in their processes.

“Such a state of AI singularity will be permanent once computers understand what we so often tend to forget: making mistakes is part of being human,” he said.

That’s why experts believe policymakers must step in with stringent regulation to prevent unintended ethical consequences.

Dr. Deeptankar DeMazumder, a physicist and cardiologist at the McGowan Institute for Regenerative Medicine in Pittsburgh, said although he uses AI responsibly to predict better health outcomes for patients, he agrees there’s a dark side – particularly in the area of social and political discourse – that’s growing unfettered, sometimes amplifying misinformation or creating dangerous echo chambers.

“I like it that Amazon knows what I want to buy … it’s very helpful, don’t get me wrong,” he told the committee. “At the same time, I don’t like it when I’m watching the news on YouTube that it tries to predict what I want to watch … this is the point where you need a lot of regulation.”

Clark, too, said human guidance can shape AI into a helpful tool, not an apocalyptic threat. He pointed to Microsoft’s Copilot program, which can help students learn to read and write, for example.

Copilot also creates images and learns a user’s speaking and writing style so it can return better search results and write emails and essays – all tools that can grow the workforce, not deplete it, Clark argued.

Citing Microsoft’s research, Clark said about 70% of workers want to offload as many tasks as possible to AI, but also fear its implications for job availability.

In November, research firm Forrester predicted that 2.4 million U.S. jobs – those it calls “white collar” positions – will be replaced by generative AI by 2030. Those with annual salaries in excess of $90,000 in the legal, scientific, and administrative professions face the most risk, according to the data.

“Generative AI has the power to be as impactful as some of the most transformative technologies of our time,” said Srividya Sridharan, VP and group research director at Forrester. “The mass adoption of generative AI has transformed customer and employee interactions and expectations.”

This shift means generative AI has transformed from a “nice-to-have” to “the basis for competitive roadmaps.”

Jeevanandam said AI’s possibilities aren’t all bad. In his article, he writes that the technology’s ability to process and analyze information could “solve problems that have stumped humans for generations.”

“Let’s just say we need AI singularity to evolve from homo sapiens to homo deus!” he said.

Still, he warns that “political gumption” at a global scale is necessary to outline the ethical principles of using AI that “governs across borders.”