

Australian MP demands white paper to investigate business risks of AI

Photo by Markus Winkler on Unsplash

The rise of Artificial Intelligence (AI) is a topic that attracts plenty of discussion and more than a little controversy. Naturally, our views are coloured by the seemingly endless parade of Hollywood blockbusters over the years, from The Matrix to The Terminator to I, Robot, in which the central premise involves AI becoming cleverer than humans and deciding we are obsolete.

Of course, the uncomfortable truth is that AI is already smarter than us in many ways. It can certainly “think” faster than us when it comes to basic data analytics, and as AI becomes better at reasoning and drawing conclusions, we can see that its future potential is exciting – and maybe a little frightening. That can make AI a dangerous political topic – technological advances are good, but ones that render the human race obsolete could be less popular. One Australian politician has stuck his head above the proverbial parapet.

AI and mass destruction

It is not in the Australian nature to hold back and, to borrow a gambling phrase, Federal Labor Member of Parliament Julian Hill went all in, as he voiced his concerns about the power of AI. The Member for Bruce, Victoria, warned the Australian government that AI could be used for “mass destruction” and cited examples of AI’s power and people’s inability to tell the difference between human and AI interaction and communication.

In a clever twist, he then revealed that part of his speech had been written by ChatGPT. He reasoned that if he could use the software to write a speech about that very topic and still nobody noticed, then his point was made.

Far-reaching questions

ChatGPT has already been banned in many of Australia’s educational institutions, but, of course, that does not prevent students from accessing it through personal channels. The difficulty is that teachers simply have no way of knowing whether coursework is written by the students or the software.

That, of course, is serious, but it falls short of mass destruction. However, Mr Hill argues that if AI has the power to give an opinion, then it has the power to influence. He dared his fellow Members of Parliament to ask ChatGPT a question like “Is climate change real?” or “What does my party stand for?” to see how a piece of software can perpetuate biases and ultimately spread disinformation.

He argues that the biggest danger, though, is that we have limited time to act because AI will soon be so smart and so powerful that it could be beyond our capabilities to influence it. To demonstrate his point, let’s look at a simple example of how AI can beat us at games.

Out-bluffed by a poker bot?

Once AI “learns” the basics of how to do something, it can keep improving as it draws on more data. It is why AI already rivals human doctors at diagnosing certain medical conditions: it has instant recall of thousands of relevant case files and can cross-reference them in a second, while a doctor might take an hour looking up three or four in medical journals.
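That “instant recall” is easy to picture in code. The sketch below is a hypothetical toy, not any real diagnostic system: it scores a handful of stored case records against a new patient’s symptoms by overlap and returns the closest match – the same one-pass lookup a real system would run over thousands of records.

```python
# Hypothetical case records; a real system would hold thousands.
cases = [
    {"symptoms": {"fever", "cough"}, "diagnosis": "flu"},
    {"symptoms": {"fever", "rash"}, "diagnosis": "measles"},
    {"symptoms": {"cough", "wheeze"}, "diagnosis": "asthma"},
]

def closest_case(symptoms):
    # Rank every record by how many symptoms it shares with the new
    # case, in a single pass over the whole database.
    return max(cases, key=lambda c: len(c["symptoms"] & symptoms))

print(closest_case({"fever", "cough", "ache"})["diagnosis"])  # flu
```

The point of the toy is the shape of the operation: comparing a new case against every record at once is trivial for software and impossible for a person with a shelf of journals.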

Pluribus, a poker bot developed at Carnegie Mellon University with Facebook AI, became so good at what it does that in 2019 it went up against five of the best players in the world, professionals who had each won more than a million dollars playing poker. Over thousands of hands, Pluribus came out ahead of the pros. The implications for casual card players who like to play real money online poker in Australia are significant, and poker sites are now having to add controls to ensure players are human. Software is easier to copy or clone than people, and poker bots are already a reality in the online poker world.
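Pluribus learned largely by playing against copies of itself, using a family of algorithms called counterfactual regret minimisation. The sketch below shows the core regret-matching idea on rock-paper-scissors rather than poker – a toy illustration, far simpler than Pluribus itself. Each round, the bot shifts probability toward the actions it regrets not having played, and its average strategy drifts toward the unexploitable one-third/one-third/one-third mix.

```python
import random

random.seed(0)
ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][theirs]

def strategy(regret):
    # Play each action in proportion to its positive accumulated regret;
    # with no positive regret yet, play uniformly at random.
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total else [1.0 / ACTIONS] * ACTIONS

regret = [0.0] * ACTIONS
avg = [0.0] * ACTIONS
ROUNDS = 20000
for _ in range(ROUNDS):
    strat = strategy(regret)
    for a in range(ACTIONS):
        avg[a] += strat[a] / ROUNDS
    mine = random.choices(range(ACTIONS), weights=strat)[0]
    theirs = random.choices(range(ACTIONS), weights=strat)[0]  # self-play
    for a in range(ACTIONS):
        # Regret for action a: what it would have earned minus what
        # the action we actually played earned.
        regret[a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]

print([round(p, 2) for p in avg])  # close to [0.33, 0.33, 0.33]
```

Pluribus applied a far more elaborate version of this self-play idea, plus real-time search during each hand, to the vastly larger game tree of six-player no-limit hold’em.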

It is only a matter of time before Pluribus is effectively unbeatable, and soon it, and whatever follows it, will perhaps be playing poker against one another. They will have left humans far behind because they never stop getting better. We have already seen this with chess computers. There was excitement in 1997 when IBM’s Deep Blue defeated world champion Garry Kasparov. Today’s chess engines are vastly superior. Chess is a game of vast but finite possibilities. A Grandmaster might calculate ten or twelve moves ahead in a sharp position; leading engines such as Stockfish can calculate far deeper along critical lines. A human player has no realistic chance of winning – it would be like getting into your family sedan and trying to beat a Formula 1 car in a quarter-mile sprint race.
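What “calculating N moves ahead” means can be sketched in a few lines. The toy minimax search below is a hypothetical illustration, not how Stockfish is actually implemented: it picks the move that maximises your score on the assumption that the opponent always replies with the move that minimises it, walking the game tree one level per move.

```python
def minimax(node, maximizing):
    # Leaves are numeric position scores; internal nodes are lists of
    # children, one per legal move. Each level down is one move "ahead".
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves available to us; for each, two possible opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the opponent always picks our worst reply
```

Real engines search in the same spirit but add pruning, caching and heavily tuned (or learned) evaluation, which is what lets them look so much further ahead than any human.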

The intelligence explosion

The point is that, through chess and poker, we have seen how AI starts slowly but, once it begins improving, never stops, and sooner or later surpasses the limitations of the human brain.

Now make no mistake, Julian Hill is not blind to the amazing benefits that AI can bring in terms of medical research and business efficiencies. However, many researchers predict that there will, in time, come an “intelligence explosion” when AI surpasses human intelligence. He warns that the implications for humanity could be serious “if AI’s goals and motivations are not aligned with our own.”

He said that many scientists rate AI as a greater risk to humanity than “asteroids, runaway climate change, super-volcanoes, nuclear devastation, solar flares or high-mortality pandemics.” It is worth remembering that almost a decade ago, Elon Musk said much the same thing, describing AI as the most serious existential threat to the human race.

Hill has urged his fellow politicians to come together for “a concerted, serious, urgent policy think,” starting this year, such as a white paper or a permanent commission on the risks associated with AI.
