From Prof Ioannis Pitas, President of the International AI Doctoral Academy (AIDA)
The positive impact of AI systems can far outweigh their negatives if proper regulatory measures are taken. Technophobia is neither justified nor a solution, writes Prof Ioannis Pitas.
Amidst growing fears that increasingly ubiquitous artificial intelligence may spiral out of control, it may help to begin with a parable in the style of the ancients.
Once upon a time, there was an immensely prosperous city, let's call it AIcity, growing at an astonishing rate.
Its AImasons built beautiful, sophisticated houses, low ones at first. Since this was a highly profitable venture, they began building ever more complicated skyscrapers, using much the same technologies.
Here and there a few cracks began to appear, but no one paid much attention. The AImasons were so enthralled by their success that they began building very tall skyscrapers, aptly named “AI Towers of Babel,” by simply scaling the same construction techniques at a frantic pace.
Their AI towers could house many thousands of inhabitants. However, no AImason could really understand why such complex buildings worked so well.
At the same time, cracks and mishaps continued to occur at an alarming rate.
No one knows what to do, everyone expects the worst
Now the AImasons started to worry in earnest: what is the origin of the technical problems? Could these AI towers collapse? Have we already exceeded the safe height limit?
The AI tower owners had more material concerns: what if the towers collapse? Who will reimburse the victims?
What regulations and legislation apply in such cases? What is the competition doing, and how can we outwit them?
At first, the people of the city were fascinated to live in these amazing AI towers and amazed at their sheer scale.
However, quite a few of them started worrying when they saw unexplainable problems here and there and projected them into the future.
They kept asking: are we really able to create such huge and complex constructions, and are we safe in such a city?
The AIcity government was too busy with other pressing matters and didn't bother addressing these concerns.
In short: no one knew what to do, but many began to fear the worst.
The parable ends there, and I promise it wasn't generated by an AI chatbot.
Enthusiasm for AI is steeped in technophobia
Yet this is the current state of affairs when it comes to generative AI and large language models like ChatGPT: enthusiasm for AI is, in fact, intertwined with technophobia.
This is natural for the general public: they like new and exciting things, but are afraid of the unknown.
What is new is that several prominent scientists have become technoskeptics, if not technophobes themselves.
The scientists and industrialists calling for a six-month pause in AI research, or the skepticism of top AI scientist Prof Geoffrey Hinton, are cases in point.
The only historical parallel I can recall is the criticism of atomic and nuclear weapons by part of the scientific community during the Cold War. Fortunately, humanity managed to address those concerns rather satisfactorily.
Of course, everyone has the right to question the current state of affairs in AI. For one thing, no one knows exactly why large language models work so well, or where their limits lie.
There is also a real danger that bad actors could create “AI bombs,” particularly if governments remain passive bystanders in terms of regulation.
These are legitimate concerns that fuel fear of the unknown, even among eminent scientists. After all, they are human themselves.
We must maximize the positive impact of AI
However, can AI research stop, even temporarily? In my view, no: AI is humanity's response to a global society and a physical world of ever-increasing complexity.
As physical and social complexity increases, the processes run very deep and seem relentless. Artificial intelligence and citizen morphosis are our only hope for a smooth transition from the current information society to a knowledge society.
Otherwise, we may be faced with a catastrophic social implosion.
The solution is to deepen our understanding of AI advances, accelerate its development, and adjust its use to maximize its positive impact while minimizing obvious and hidden negative effects.
AI research can and must become different: more open, democratic, scientific and ethical. And to that end, there are ways we could approach this issue constructively.
For one thing, the first say on important AI research issues that have far-reaching societal impact should be delegated to elected parliaments and governments rather than corporations or individual scientists.
Every effort should be made to facilitate the exploration of the positive aspects of AI in social and financial progress and to minimize its negative aspects.
The positive impact of AI systems can far outweigh their negatives if proper regulatory measures are taken. Technophobia is neither justified nor a solution.
There are dangers to democracy and progress, but they can be addressed
In my view, the greatest current threat comes from the fact that such AI systems can remotely deceive great numbers of citizens who have little or average education and/or poor investigative skills.
This can be extremely dangerous for democracy and any form of socio-economic progress.
In the near future, we should counter the great threat posed by the use of LLMs and/or GANs in illegal activities (cheating in university exams is a rather benign example of the related criminal possibilities).
On the other hand, their impact on jobs and markets will be very positive in the medium to long term.
To this end, in my view, AI systems should: a) be required by international law to be registered in an “AI system registry”; and b) notify their users that they are conversing with, or using the results of, an artificial intelligence system.
As AI systems have a huge impact on society, and in order to maximize socio-economic benefits and progress, the key advanced technologies behind them should become open.
AI data should be (at least partially) democratized, again to maximize socio-economic benefits and progress.
We can enable progress while also maintaining the regulatory mechanisms
Adequate and robust financial compensation schemes should be in place for AI technology champions, both to offset any loss of profit due to the aforementioned openness and to secure future heavy investment in AI R&D, for example through technology patents and compulsory licensing.
The balance of AI research between academia and industry should be reworked to maximize research outcomes while maintaining competitiveness and rewarding R&D risks undertaken.
Educational practices should be reviewed at all levels to maximize the benefits of AI technologies while creating a new generation of creative and adaptable citizens and (AI) scientists.
Finally, adequate AI regulation, supervision and funding mechanisms should be created and strengthened to ensure the above.
Perhaps then, the above allegory will be nothing more than a (moderately) funny fairy tale.
Dr Ioannis Pitas is a professor at the Aristotle University of Thessaloniki (AUTH) and president of the International AI Doctoral Academy (AIDA), one of the leading pan-European initiatives for AI studies.
At Euronews, we believe that all opinions matter. Contact us at firstname.lastname@example.org to send proposals or contributions and join the conversation.