AI think tank calls GPT-4 a risk to public safety

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.

The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated Section 5 of the FTC Act, accusing the company of deceptive and unfair practices.

Marc Rotenberg, Founder and President of the CAIDP, said:

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.

The think tank cited passages in the GPT-4 System Card that describe the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.

In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Other harmful outcomes that OpenAI says GPT-4 could lead to include:

  1. Advice or encouragement for self-harm behaviours
  2. Graphic material such as erotic or violent content
  3. Harassing, demeaning, and hateful content
  4. Content useful for planning attacks or violence
  5. Instructions for finding illegal content

The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.

Last week, the FTC told American companies advertising AI products:

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.

Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.

Merve Hickok, Chair and Research Director of the CAIDP, commented:

“We are at a critical moment in the evolution of AI products.

We recognise the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.

The FTC is uniquely positioned to address this challenge.”

The complaint was filed as Elon Musk, Steve Wozniak, and other AI experts signed an open letter calling for a “pause” on the development of AI systems more powerful than GPT-4.

However, other high-profile figures believe progress shouldn’t be slowed or halted.

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018 and has since publicly questioned the company’s transformation.

Global approaches to AI regulation

As AI systems become more advanced and powerful, concerns over their potential risks and biases have grown. Organisations such as CAIDP, UNESCO, and the Future of Life Institute are pushing for ethical guidelines and regulations to be put in place to protect the public and ensure the responsible development of AI technology.

UNESCO (United Nations Educational, Scientific, and Cultural Organization) has called on countries to implement its “Recommendation on the Ethics of AI” framework.

Earlier today, Italy banned ChatGPT. The country’s data protection authority said the service would be investigated because it lacks a proper legal basis for collecting personal information about its users.

The wider EU is establishing a strict regulatory environment for AI, in contrast to the UK’s relatively “light-touch” approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s vision:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As always, it’s a balancing act between regulation and innovation: too little regulation puts the public at risk, while too much risks driving innovation elsewhere.

(Photo by Ben Sweet on Unsplash)

