Researchers show ChatGPT, other AI tools can be manipulated to produce malicious code
Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code that could be used to launch cyber attacks, according to research from the University of Sheffield.