Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.
A concern frequently raised by industry experts is the lack of transparency regarding the data on which many generative AI models are trained.
There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack of clarity extends to the storage of information obtained during interactions with individual users, raising legal and compliance risks.
The potential for leakage of sensitive company data or code through interactions with generative AI solutions is of significant concern.
“Individual employees might leak sensitive company data or code when interacting with popular generative AI solutions,” says Vaidotas Šedys, Head of Risk Management at Oxylabs.
“While there is no concrete evidence that data submitted to ChatGPT or any other generative AI system might be stored and shared with other people, the risk still exists as new and less tested software often has security gaps.”
OpenAI, the organisation behind ChatGPT, has been cautious about providing detailed information on how user data is handled. This poses challenges for organisations seeking to mitigate the risk of confidential code fragments being leaked. They are left to constantly monitor employee activity and implement alerts for the use of generative AI platforms, which can be burdensome for many organisations.
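As one illustration of the alerting approach described above, the sketch below flags outbound requests to known generative AI domains in a web-proxy log. The domain list and the log format are assumptions made for illustration, not the configuration of any specific monitoring product:

```python
# Hypothetical sketch: flag proxy-log entries that hit generative AI
# services, so security teams can review potential data-leak channels.
# The watched domains and the log format are illustrative assumptions.

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def flag_genai_requests(log_lines):
    """Return (user, domain) pairs for lines hitting a watched domain.

    Assumes each line is 'timestamp user domain', space-separated.
    """
    alerts = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines rather than failing
        user, domain = parts[1], parts[2]
        if domain in GENAI_DOMAINS:
            alerts.append((user, domain))
    return alerts

logs = [
    "2023-06-01T09:00 alice chat.openai.com",
    "2023-06-01T09:01 bob intranet.example.com",
]
print(flag_genai_requests(logs))  # [('alice', 'chat.openai.com')]
```

In practice such a filter would sit on top of existing proxy or DNS logs; the point is that even a simple allow/deny list requires ongoing maintenance as new generative AI services appear.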
“Further risks include using wrong or outdated information, especially in the case of junior specialists who are often unable to evaluate the quality of the AI’s output. Most generative models function on large but limited datasets that need constant updating,” adds Šedys.
These models have a limited context window and may encounter difficulties when dealing with new information. OpenAI has acknowledged that its latest model, GPT-4, still suffers from factual inaccuracies, which can lead to the dissemination of misinformation.
The implications extend beyond individual companies. For example, Stack Overflow – a popular developer community – temporarily banned content generated with ChatGPT due to the low accuracy of its answers, which can mislead users seeking coding help.
Legal risks also come into play when utilising free generative AI solutions. GitHub's Copilot has already faced accusations and lawsuits over the incorporation of copyrighted code fragments from public and open-source repositories.
“As AI-generated code can contain proprietary information or trade secrets belonging to another company or person, the company whose developers are using such code might be liable for infringement of third-party rights,” explains Šedys.
“Moreover, failure to comply with copyright laws might affect company evaluation by investors if discovered.”
While organisations cannot feasibly achieve total workplace surveillance, individual awareness and responsibility are crucial. Educating the general public about the potential risks associated with generative AI solutions is essential.
Industry leaders, organisations, and individuals must collaborate to address the data privacy, accuracy, and legal risks of generative AI in the workplace.