LONDON (AP) — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology have been caught off guard by AI's rapid rise.
The 27-nation bloc proposed the Western world's first AI rules two years ago, focusing on reining in risky but narrowly targeted applications. General-purpose AI systems like chatbots were hardly mentioned. Lawmakers working on the AI Act weighed whether to include them but weren't sure how, or even whether it was necessary.
"Then ChatGPT kind of boomed," said Dragos Tudorache, a Romanian member of the European Parliament who co-led the measure. "If there was still someone who doubted whether we really needed anything, I think the doubt quickly vanished."
The release of ChatGPT last year captured the world's attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online material. With those concerns emerging, European lawmakers have moved quickly in recent weeks to add language on general-purpose AI systems as they put the finishing touches on the legislation.
The EU's AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc's single market makes compliance easier than developing different products for different regions.
"Europe is the first regional bloc to make a meaningful attempt to regulate AI, which is a huge challenge considering the wide range of systems that AI can broadly cover," said Sarah Chander, senior policy adviser at the digital rights group EDRi.
Authorities around the world are trying to figure out how to control the rapidly evolving technology to ensure it improves people's lives without threatening their rights or safety. Regulators are concerned about new ethical and social risks posed by ChatGPT and other general-purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.
The White House recently called in the heads of tech companies working on artificial intelligence, including Microsoft, Google and ChatGPT creator OpenAI, to discuss the risks, while the Federal Trade Commission has warned it wouldn't hesitate to crack down.
China has issued draft regulations mandating security assessments for any product using generative AI systems like ChatGPT. Britain's competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.
Sweeping EU regulations covering any provider of AI services or products are expected to be approved Thursday by a committee of the European Parliament; negotiations will then begin among the 27 member countries, the Parliament and the EU's executive Commission.
European rules influencing the rest of the world, the so-called Brussels effect, have played out before, after the EU tightened data privacy rules and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.
Attitudes may be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks.
Geoffrey Hinton, a computer scientist known as the "Godfather of AI," and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.
Tudorache said those warnings show the EU’s move to start crafting AI rules in 2021 was the right call.
Google, which responded to ChatGPT with its own Bard chatbot and is rolling out artificial intelligence tools, declined to comment. The company has told the EU that artificial intelligence is too important not to regulate.
Microsoft, a backer of OpenAI, did not respond to a request for comment. But the company has welcomed the EU effort as an important step toward making trustworthy AI the norm in Europe and around the world.
Mira Murati, chief technology officer of OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology.
But asked whether some of OpenAI's tools should be classified as higher risk under the proposed European rules, she said the question is "very nuanced."
"It depends on where you apply the technology," she said, citing a high-risk medical or legal use case versus an accounting or advertising application as examples.
OpenAI CEO Sam Altman plans to stop in Brussels and other European cities this month on a world tour to talk about the technology with users and developers.
Newly added provisions to the EU's AI Act would require foundation AI models to disclose the copyrighted material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
Foundation models, which include large language models, are a subcategory of general-purpose AI that covers systems like ChatGPT. Their algorithms are trained on vast pools of online information, such as blog posts, digital books, scientific articles and pop songs.
Providers will have to make a significant effort to document the copyrighted material used in training their algorithms, Tudorache said, paving the way for artists, writers and other content creators to seek compensation.
Officials crafting AI regulations must balance the risks the technology poses with the transformative benefits it promises.
Big tech companies developing AI systems and European national ministries looking to deploy them are seeking to limit regulators' reach, while civil society groups are pushing for more accountability, said EDRi's Chander.
"We want more information about how these systems are developed, the levels of environmental and economic resources put into them, but also how and where these systems are used so that we can effectively challenge them," she said.
Under the EU’s risk-based approach, uses of AI that threaten people’s security or rights are subject to strict controls.
A ban on remote facial recognition is foreseen. So is one on government "social scoring" systems that judge people based on their behavior. Indiscriminate scraping of internet photos for biometric matching and facial recognition would also be prohibited.
Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out.
Violations could result in fines of up to 6% of a company’s global annual revenue.
Even after receiving final approval, expected by the end of the year or early 2024 at the latest, the AI Act will not take immediate effect. There will be a grace period for companies and organizations to figure out how to adopt the new rules.
It's possible the industry will push for a longer grace period, arguing that the final version of the AI Act goes further than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at the European consumer group BEUC.
"They might argue that instead of one and a half or two years, we need two or three years," he said.
He noted that ChatGPT launched only six months ago and has already raised a number of problems and benefits in that time.
"If the AI Act does not enter into full force for four years, what will happen in these four years?" Da Silva said. "That's really our concern, and that's why we're asking the authorities to be on top of it, to really focus on this technology."
AP Technology writer Matt O'Brien in Providence, Rhode Island, contributed.