
On March 22, 2023, thousands of researchers and technology leaders, including Elon Musk and Apple co-founder Steve Wozniak, released an open letter calling for a slowdown in the race toward artificial intelligence. Specifically, the letter recommended that labs pause training of technologies stronger than OpenAI's GPT-4, the most sophisticated generation of today's language-generating AI systems, for at least six months.
Sounding the alarm about the risks posed by AI is nothing new; academics have issued warnings about the dangers of superintelligent machines for decades. There is still no consensus on the likelihood of creating artificial general intelligence: autonomous AI systems that match or exceed humans at most economically valuable tasks. However, it is clear that current AI systems already pose many dangers, from racial bias in facial recognition technology to the growing threats of misinformation and student cheating.
While the letter calls on industry and policymakers to cooperate, there is currently no mechanism to enforce such a pause. As a philosopher who studies the ethics of technology, I've noticed that AI research exemplifies the free rider problem. I would argue that this should guide how companies respond to its risks, and that good intentions won't be enough.
Riding for free
Free riding is a common consequence of what philosophers call collective action problems. These are situations where, as a group, everyone would benefit from a particular action, but as individuals, each member would benefit from not contributing to it.
Such problems most commonly involve public goods. For example, suppose the residents of a city have a collective interest in funding a subway system, which would require each of them to pay a small amount in taxes or fares. Everyone would benefit, but it is in every individual’s best interest to save money and avoid paying their fair share. After all, they’ll still be able to enjoy the subway if most other people pay.
Hence the free rider problem: some individuals won't contribute their fair share but will still get a free ride, quite literally in the case of the subway. If no one paid, though, no one would benefit.
Philosophers tend to argue that free riding is unethical, since free riders fail to reciprocate others' contributions by paying their fair share. Many philosophers also argue that free riders fail in their responsibilities under the social contract, the collectively agreed-upon cooperative principles that govern a society. In other words, they fail to live up to their duty to be contributing members of society.
Pause or continue?
Like the subway, AI is a public good, given its potential to complete tasks far more efficiently than human operators can: everything from diagnosing patients by analyzing medical data to taking over high-risk jobs in the military or improving mining safety.
But both its benefits and its dangers will affect everyone, even people who don't personally use AI. To reduce AI's risks, everyone has an interest in the industry's research being conducted carefully and safely, with adequate oversight and transparency. For example, misinformation and fake news already pose serious threats to democracies, but AI has the potential to exacerbate the problem by spreading fake news faster and more effectively than people can.
Even if some tech companies voluntarily halted their experiments, however, other companies would have a financial interest in continuing their own AI research, allowing them to get ahead in the AI arms race. What's more, voluntarily halting AI experiments would allow other companies to get a free ride by ultimately reaping the benefits of safer, more transparent AI development, along with the rest of society.
Sam Altman, CEO of OpenAI, has acknowledged that he is afraid of the risks posed by the company's chatbot system, ChatGPT. "We've got to be careful here," he said in an interview with ABC News, mentioning the potential for AI to produce disinformation. "I think people should be happy that we are a little bit scared of this."
In a letter published on April 5, 2023, OpenAI said that the company believes powerful AI systems need regulation to ensure rigorous safety evaluations, and that it will actively engage with governments on the best form such regulation could take. Nevertheless, OpenAI is continuing the gradual rollout of GPT-4, and the rest of the industry likewise continues to develop and train advanced AIs.
Ripe for regulation
Decades of social science research on collective action problems has shown that where trust and goodwill are insufficient to prevent free riding, regulation is often the only alternative. Reliance on voluntary compliance is a key factor in creating free-rider scenarios, and government action is sometimes the only way to nip them in the bud.
Further, such regulations must be enforceable. After all, would-be subway passengers are unlikely to pay their fare unless there is a threat of punishment.
Take one of the most dramatic free rider problems in the world today: climate change. As a planet, we all share a vital interest in maintaining a habitable environment. In a system that allows free riders, however, any individual country has little incentive to actually follow greener policies.
The Paris Agreement, currently the most comprehensive global accord on climate change, is voluntary, and the United Nations has no means of enforcing it. Even if the European Union and China voluntarily limited their emissions, for example, the United States and India could forgo reducing their own carbon dioxide emissions while still reaping the benefits.
Global challenge
Likewise, the free rider problem grounds arguments for regulating the development of AI. Indeed, climate change is a particularly close parallel, since neither the risks posed by AI nor greenhouse gas emissions are confined to a program's country of origin.
Moreover, the race to develop more advanced AI is international. Even if the United States introduced federal regulation of AI research and development, China and Japan could free ride and continue their own domestic AI programs.
Effective regulation and enforcement of AI would require global collective action and cooperation, just as with climate change. In the United States, strict enforcement would require federal oversight of research and the ability to impose steep fines on, or shut down, noncompliant AI experiments to ensure responsible development, whether through regulatory oversight committees, whistleblower protections or, in extreme cases, laboratory or research lockdowns and criminal charges.
Without enforcement, however, there will be free riders, and free riders mean the AI threat won't diminish anytime soon.
Tim Juvshik is Visiting Assistant Professor of Philosophy at Clemson University. This article is republished from The Conversation under a Creative Commons license. Read the original article.