The threat of AI is real. But there is a way to avoid it, this tech expert explains how

Gokula Nandhini K | August 22, 2023 | 12:30 PM | Technology

Many of the world’s leading voices in artificial intelligence (AI) have begun to express fears about the technology. They include two of the so-called ‘Godfathers of AI’: Dr Geoffrey Hinton and Prof Yoshua Bengio, both of whom played a significant role in its development.

Hinton shocked the AI world in May 2023 by quitting his role as one of Google’s vice-presidents and engineering fellows, citing concerns about the risks the tech could pose to humanity through the spread of misinformation. He even said he harbours some regret about his contributions to the field.

Similarly, Turing Award-winning computer scientist Bengio recently told the BBC that he has been surprised by the speed at which AI has evolved and that he felt ‘lost’ when looking back at his life’s work.

Both have called for international regulation to enable us to keep tabs on the development of AI. Unfortunately, given the fast pace at which the tech develops and the opaque ‘black box’ nature of much of its operation, that is much more difficult than it sounds.

Although the potential risks of generative AI, whether from bad actors using it for cybercrime or from the mass production of misinformation, have become increasingly obvious, what we should do about them has not. One idea seems to be gathering momentum, though: global AI governance.

In an essay published in The Economist on 18 April, Anka Reuel, a computer scientist at Stanford University, and I proposed the creation of an International Agency for AI. Since then, others have also expressed interest in the idea. When I raised it again during my testimony before the US Senate in May, both Sam Altman, CEO of OpenAI, and several senators seemed open to it.

Later, leaders of three top AI companies sat down with UK Prime Minister Rishi Sunak for a similar conversation. Reports from the meeting suggested that they, too, seemed aligned on the need for international governance. A forthcoming white paper from the United Nations also points in the same direction. Many other people I’ve spoken to also see the urgency of the situation. My hope is that we’ll be able to convert this enthusiasm into action.

At the same time, I want to call attention to a fundamental tension. We all agree on the need for transparency, fairness and accountability in AI, as emphasised by the White House, the Organisation for Economic Co-operation and Development (OECD), the Center for AI and Digital Policy (CAIDP) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). In May, Microsoft even went so far as to directly affirm its commitment to transparency.

But the reality that few people seem to be willing to face is that large language models – the technology underlying the likes of ChatGPT and GPT-4 – are not transparent and are unlikely to be fair.

What can we do?

Here are steps we can take now to make developing AI safer:
  • Governments should institute an approval process for the large-scale deployment of AI models, in the style of the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) or the US Food and Drug Administration (FDA), in which companies must satisfy regulators (ideally independent scientists) that their products are safe and that the benefits outweigh the risks.
  • Governments should compel AI companies to be transparent about their data and to cooperate with independent investigators.
  • AI companies should provide resources (for example processing time) to allow external audits.
  • We should find ways to incentivise companies to treat AI as a genuine public good, through both carrots and sticks.
  • We should create a global agency for AI, in which multiple stakeholders work together to ensure that the rules governing AI serve the public and not just the AI companies.
  • We should work towards something like a CERN (Conseil Européen pour la Recherche Nucléaire) for AI that’s focused on safety and emphasises: (a) developing new technologies that are better than current technologies at honouring human values, and (b) developing tools and metrics to audit AI, track the risks and help to directly mitigate those risks.[1]

Source: Science Focus

Cite this article:

Gokula Nandhini K (2023), The threat of AI is real. But there is a way to avoid it, this tech expert explains how, AnaTechmaz, p. 557.