
Tech Matters: Will European rule spur US lawmakers to regulate AI?

By Leslie Meredith - Special to the Standard-Examiner | Jun 21, 2023


The European Parliament approved groundbreaking rules to regulate artificial intelligence. The EU AI Act, much like the EU’s data protection law known as GDPR, could become a blueprint or at least a reference for U.S. lawmakers to develop similar legislation. The act is the first of its kind for AI amid a public frenzy around the dangers associated with the technology.

Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have publicly warned that AI is developing too fast without any oversight. OpenAI founder Sam Altman, whose company developed ChatGPT, recently appeared before the U.S. Congress and urged AI development to be regulated. His concern focused on the possibility of a superintelligent AI rising up and taking over humanity in some capacity. However, he did not advocate for stopping AI development, which at this point would be impossible.

AI is not new. In fact, the term was first coined at a 1956 summer conference at Dartmouth College. But it was not until last fall with the release of ChatGPT that AI became a household word, escaping the realms of universities and science fiction. It’s easy to use and useful. There’s not much of a learning curve beyond improving the prompts you give it to answer your questions and write just about anything.

Despite surpassing 100 million monthly users within two months of its public launch in late November to become the fastest-growing web application ever, few U.S. adults have actually used the product, according to a report released by Pew Research Center last month. Just 14% of all U.S. adults say they have used it for entertainment, to learn something new or for their work, the study revealed.

Congress has regulation in its sights but has lagged behind the EU in action. While the EU AI Act still must be negotiated with member states before it becomes law, it is likely to be among the first formal sets of rules for the technology anywhere in the world. So what does it say?

The proposed law establishes categories for AI, requirements for AI systems and bans certain types of AI functionality. The overarching goal is to create a framework for developers and users of AI and help to protect people from the potential harms of AI.

AI systems would be divided into three categories: unacceptable risk, high risk and low risk. Unacceptable risk systems would be banned. High-risk systems, such as those that could influence elections, would be subject to strict requirements, while low-risk systems would be largely unregulated.

The law would also establish a number of requirements for AI systems, including transparency, fairness and accountability. Developers would be required to explain how their AI systems work and make it possible for users to understand how the systems make decisions. (This is similar to the way GDPR requires any company, located anywhere in the world, that collects user data from Europeans to specify what data is collected, how it is stored and make it possible for the user to remove his or her data from the company’s system.) AI systems would not be allowed to discriminate against people on the basis of race, sex, religion or other protected characteristics. Developers would be held responsible for the harms caused by their AI systems. Perhaps the most practical requirement in the package requires content creators to disclose when a piece of work — image, written or video — has been generated with AI.

The act also cites AI use cases that would be banned or restricted were the law to pass. Social scoring, which can be used to make decisions on things like lending, employment and insurance, would be banned. Mass surveillance systems used to collect data on large groups of people, track their movements, monitor their communications and identify potential threats would be subject to restrictions. China uses such a system to reward and penalize citizens.

While not banning them outright, the EU AI Act would require businesses that use AI systems in their hiring process to disclose those systems and explain to job seekers how they work. Lawmakers also added to the restricted category AI systems designed to influence voters in political campaigns, as well as the recommender systems used by social media platforms.

Parties found in breach of the new rules would face fines of up to 30 million euros or 6% of global annual turnover, whichever is higher. The act is expected to pass sometime this year, and companies will likely have two years to comply with the new law. U.S. lawmakers are considering a variety of regulatory packages but appear to be fragmented — a more unified piece of legislation is needed, and the EU AI Act could help inform its development.

Leslie Meredith has been writing about technology for more than a decade. As a mom of four, value, usefulness and online safety take priority. Have a question? Email Leslie at asklesliemeredith@gmail.com.
