AI: How 'freaked out' should we be?

Image caption: At SXSW, Amy Webb outlined her vision for where artificial intelligence could be headed in the next 10 years (Getty Images)

Artificial intelligence has the awesome power to change the way we live our lives, in both good and dangerous ways. Experts have little confidence that those in power are prepared for what's coming.

Back in 2019, a research group called OpenAI created a software program that could generate paragraphs of coherent text and perform rudimentary reading comprehension and analysis without specific instruction.

OpenAI initially decided not to make its creation, called GPT-2, fully available to the public out of fear that people with malicious intent could use it to generate massive amounts of disinformation and propaganda. In a press release announcing its decision, the group called the program "too dangerous".

Fast forward a few years, and artificial intelligence capabilities have increased dramatically.

In contrast to that earlier limited release, the next offering, GPT-3, was made readily available, and ChatGPT, the chatbot interface built on that line of models, launched in November 2022. It was the service that launched a thousand news articles and social media posts, as reporters and experts tested its capabilities - often with eye-popping results.

ChatGPT scripted stand-up routines in the style of the late comedian George Carlin about the Silicon Valley Bank failure. It opined on Christian theology. It wrote poetry. It explained quantum physics to a child as though it were rapper Snoop Dogg. Other AI models, like DALL-E, generated visuals so compelling they sparked controversy over their inclusion on art websites.

Machines, at least to the naked eye, have achieved creativity.

On Tuesday, OpenAI debuted the latest iteration of its program, GPT-4, which it says has robust limits on abusive uses. Early clients include Microsoft, Morgan Stanley and the government of Iceland. And at the South by Southwest Interactive conference in Austin, Texas, this week - a global gathering of tech policymakers, investors and executives - the hottest topic of conversation was the potential, and power, of artificial intelligence programs.

Arati Prabhakar, director of the White House's Office of Science and Technology Policy, says she is excited about the possibilities of AI, but she also has a warning.

"What we are all seeing is the emergence of this extremely powerful technology. This is an inflection point," she told a conference panel audience. "All of history shows that these kinds of powerful new technologies can and will be used for good and for ill."

Her co-panelist, Austin Carson, was a bit more blunt.

"If in six months you are not completely freaked the (expletive) out, then I will buy you dinner," the founder of SeedAI, an artificial intelligence policy advisory group, told the audience.

Media caption: Watch - Microsoft's Brad Smith says AI will affect generations to come

"Freaked out" is one way of putting it. Amy Webb, head of the Future Today Institute and a New York University business professor, tried to quantify the potential outcomes in her SXSW presentation. She said artificial intelligence could go in one of two directions over the next 10 years.

In an optimistic scenario, AI development is focused on the common good, with transparency in how AI systems are designed and the ability for individuals to opt in to having their publicly available information on the internet included in an AI's knowledge base. The technology serves as a tool that makes life easier and more seamless, as AI features on consumer products anticipate user needs and help accomplish virtually any task.

Ms Webb's catastrophic scenario involves less data privacy, more centralisation of power in a handful of companies and AI that anticipates user needs - and gets them wrong or, at least, stifles choices.

She gives the optimistic scenario only a 20% chance.

Which direction the technology goes, Ms Webb told the BBC, ultimately depends in large part on the responsibility with which companies develop it. Do they do so transparently, revealing and policing the sources from which their chatbots - built on what scientists call large language models - draw their information?

The other factor, she said, is whether government - federal regulators and Congress - can move quickly to establish legal guardrails to guide the technological developments and prevent their misuse.

In this regard, government's experience with social media companies - Facebook, Twitter, Google and the like - is illustrative. And the experience is not encouraging.

"What I heard in a lot of conversations was concern that there aren't any guardrails," Melanie Subin, managing director of the Future Today Institute, says of her time at South by Southwest. "There is a sense that something needs to be done. And I think that social media as a cautionary tale is what's in people's minds when they see how quickly generative AI is developing."


Federal oversight of social media companies rests largely on the Communications Decency Act, which Congress passed in 1996, and a short but powerful provision contained in Section 230 of the law. That language protects internet companies from being held liable for user-generated content on their websites. It is credited with creating a legal environment in which social media companies could thrive. But more recently, it has also been blamed for allowing these internet companies to gain too much power and influence.

Politicians on the right complain that it has allowed the Googles and Facebooks of the world to censor or diminish the visibility of conservative opinions. Those on the left accuse the companies of not doing enough to prevent the spread of hate speech and violent threats.

"We have an opportunity and responsibility to recognise that hateful rhetoric leads to hateful actions," said Jocelyn Benson, Michigan's secretary of state. In December 2020, her home was targeted for protest by armed Donald Trump supporters, organised on Facebook, who were challenging the results of the 2020 presidential election.

She has backed deceptive practices legislation in Michigan that would hold social media companies responsible for knowingly spreading harmful information. There have been similar proposals at both the federal level and in other states, along with legislation to require social media sites to provide more protection for underage users, be more open about their content moderation policies and take more active steps to reduce online harassment.

Image caption: Jocelyn Benson, Michigan's secretary of state, has spoken out in support of regulating big tech to combat hateful rhetoric (Getty Images)

Opinion is mixed, however, over the chances of success for such reform. Big tech companies have entire teams of lobbyists in Washington DC and state capitals as well as deep coffers with which to influence politicians through campaign donations.

"Despite copious evidence of problems at Facebook and other social media sites, it's been 25 years," says Kara Swisher, a tech journalist. "We've been waiting for any legislation from Congress to protect consumers, and they've abrogated their responsibility."

The danger, Ms Swisher says, is that many of the companies that have been major players in social media - Facebook, Google, Amazon, Apple and Microsoft - are now leaders in artificial intelligence. And if Congress has been unable to successfully regulate social media, it will be hard-pressed to move quickly to address concerns about what she calls the "arms race" of artificial intelligence.

The comparisons between artificial intelligence regulation and social media aren't just academic, either. New AI technology could take the already troubled waters of websites like Facebook, YouTube and Twitter and turn them into a boiling sea of disinformation, as it becomes increasingly difficult to separate posts by real humans from fake - but entirely believable - AI-generated accounts.

Even if government succeeds in enacting new social media regulations, they may be pointless in the face of a flood of pernicious AI-generated content.

Among the countless panels at South by Southwest, there was one titled "How Congress is building AI policy from the ground up". After roughly 15 minutes of waiting, the audience was told that the panel had been cancelled because the participants had gone to the wrong venue. It turned out there had been a miscommunication between South by Southwest and the panel's organisers.

For those at the conference hoping for signs of competence from humans in government, it was not an encouraging development.