University of Chicago researchers seek to “poison” AI art generators with Nightshade

(credit: Getty Images)

On Friday, a team of researchers at the University of Chicago released a research paper outlining "Nightshade," a data poisoning technique aimed at disrupting the training process for AI models, as reported by MIT Technology Review and VentureBeat.

CMA sets out principles for responsible AI development

The Competition and Markets Authority (CMA) has set out its principles to ensure the responsible development and use of foundation models (FMs).

FMs are versatile AI systems with the potential to revolutionise various sectors, from information access to healthcare. The …
