“With great power comes great responsibility.” You don’t have to be a Marvel buff to recognize that quote, popularized by the Spider-Man franchise. And while the sentiment was originally in reference to superhuman speed, strength, agility, and resilience, it’s a helpful one to keep in mind when making sense of the rise of generative AI.
While the technology itself isn’t new, the launch of ChatGPT put it into the hands of 100 million people in the span of just two months, something that for many felt like gaining a superpower. But like all superpowers, what matters is what you use them for. Generative AI is no different: it holds potential for good and for evil alike.
The world’s biggest brands now stand at a critical juncture to decide how they will use this technology. At the same time, economic uncertainty and rising inflation have persisted — leaving consumers unsure of how to prioritize spending.
Considering both factors, generative AI can give brands a leg up in the battle for consumer attention. However, they need to take a balanced perspective: seeing the possibilities as well as the risks, and approaching both with an open mind.
What Generative AI means for insights work
The market research industry is no stranger to change – the tools and methodologies available to consumer insights professionals have evolved rapidly over the past few decades.
At this stage, the extent and speed of the changes that increasingly accessible generative AI will bring are something we can only speculate on. But there are certain foundations to have in place that will help decision makers figure out how to respond quickly as more information becomes available.
Ultimately, it all comes back to asking the right questions.
What are the opportunities?
Currently, the primary opportunity offered by generative AI is enhanced productivity. It can drastically speed up the processes of generating ideas, information, and written texts, like the first drafts of emails, reports, or articles. By creating efficiency in these areas, it allows for more time to be spent on tasks that require significant human expertise.
Faster time to insight
For insights work specifically, one area we see a lot of potential in is summarization of information. For example, the Stravito platform has already been using generative AI to create auto-summaries of individual market research reports, removing the need to manually write an original description for each report.
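To make the summarization idea concrete, here is a minimal extractive summarizer as a stand-in for the generative auto-summaries described above: it scores each sentence by the frequency of its words across the report and keeps the top-scoring sentence. This is a toy illustration, not Stravito’s actual implementation, and the scoring heuristic is an assumption chosen for simplicity.

```python
# Toy extractive summarizer: rank sentences by summed word frequency and
# keep the top N, preserving their original order. Illustrative only.
from collections import Counter
import re

def summarize(text: str, num_sentences: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    # Score each sentence by the total frequency of the words it contains.
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    chosen = set(ranked[:num_sentences])
    # Emit selected sentences in the order they appeared in the report.
    return " ".join(s for s in sentences if s in chosen)
```

Note that this heuristic favors longer, keyword-dense sentences; a generative model instead writes a new summary in its own words, which is what makes hand-written descriptions unnecessary.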
We also see potential to develop this use case further with the ability to summarize large volumes of information to answer business questions quickly, in an easy-to-consume format. For example, this could look like typing a question into the search bar and getting a succinct answer based on the company’s internal knowledge base.
For brands, this would mean being able to answer simple questions more quickly, and it could also help take care of a lot of the groundwork when digging into more complex problems.
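The retrieval step behind “type a question, get an answer from the knowledge base” can be sketched as follows. This is a deliberately simplified illustration: the report titles and contents are hypothetical, and a production system would use semantic embeddings and pass the retrieved passages to a generative model rather than ranking by keyword overlap.

```python
# Toy retrieval over an in-memory "knowledge base" of research reports,
# ranked by keyword overlap with the question. Illustrative names only.

KNOWLEDGE_BASE = {
    "2023 US Millennial Concept Test": (
        "Millennials in the US rated concept B highest on purchase intent."
    ),
    "EMEA Snacking Habits Study": (
        "Consumers in EMEA increasingly snack between meals, favoring savory options."
    ),
    "Brand Tracker Q1": (
        "Brand awareness held steady quarter over quarter across all tracked markets."
    ),
}

def tokenize(text: str) -> set[str]:
    """Lowercase and strip trailing punctuation from each word."""
    return {word.strip(".,?").lower() for word in text.split()}

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the titles of the reports most relevant to the question."""
    q_tokens = tokenize(question)
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda title: len(q_tokens & tokenize(KNOWLEDGE_BASE[title])),
        reverse=True,
    )
    return scored[:top_k]
```

In a full pipeline, the text of the top-ranked reports would be handed to a generative model along with the question, grounding the generated answer in the company’s own research.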
Insights democratization through better self-service
Generative AI could also make it easier for all business stakeholders to access insights without needing to directly involve an insights manager each time. By removing barriers to access, generative AI could help support organizations that are looking to more deeply integrate consumer insights into their daily operations.
It could also help to alleviate common concerns associated with all stakeholders accessing market research, like asking the wrong questions. In this use case, generative AI can help business stakeholders without research backgrounds to ask better questions by prompting them with relevant questions related to their search query.
Tailored communication to internal and external audiences
Another opportunity that comes with generative AI is the ability to tailor communication to both internal and external audiences.
In an insights context, there are several potential applications. It could help make knowledge sharing more impactful by making it easier to personalize insights communications to various business stakeholders throughout the organization. It could also be used to tailor briefs to research agencies as a way to streamline the research process and minimize the back and forth involved.
What are the risks?
Generative AI can be an effective tool for insights teams, but it also poses various risks that organizations should be aware of before implementation.
One fundamental risk is prompt dependency. Generative AI is statistical, not analytical: it works by predicting the most plausible next piece of text. If you give it the wrong prompt, you’re still likely to get a highly convincing, but potentially wrong, answer.
What becomes even trickier is the way that generative AI can blend correct information with incorrect information. In low stakes situations, this can be amusing. But in situations where million dollar business decisions are being made, the inputs for each decision need to be trustworthy.
Additionally, many questions surrounding consumer behavior are complex. While a question like “How did millennials living in the US respond to our most recent concept test?” might generate a clear-cut answer, deeper questions about human values or emotions often require a more nuanced perspective. Not all questions have a single right answer, and when aiming to synthesize large sets of research reports, key details could fall between the cracks.
Another key risk to pay attention to is a lack of transparency regarding how algorithms are trained. For example, ChatGPT cannot always tell you where it got its answers from, and even when it can, those sources may be impossible to verify, or may not actually exist.
And because AI algorithms, generative or otherwise, are trained by humans on existing information, they can be biased. This can lead to answers that are racist, sexist, or otherwise offensive. For organizations looking to challenge biases in their decision making and create a better world for consumers, this would be an instance of generative AI making work less productive.
Common use cases for ChatGPT include generating emails, meeting agendas, and reports. But supplying the details needed to generate those texts may put sensitive company information at risk.
In fact, an analysis conducted by security firm Cyberhaven found that of 1.6 million knowledge workers across industries, 5.6% had tried ChatGPT at least once at work, and 2.3% had put confidential company data into ChatGPT.
Companies like JP Morgan, Verizon, Accenture and Amazon have banned staff from using ChatGPT at work over security concerns. And just recently, Italy became the first Western country to ban ChatGPT while investigating privacy concerns, drawing attention from privacy regulators in other European countries.
For insights teams or anyone working with proprietary research and insights, it’s essential to be aware of the risks associated with inputting information into a tool like ChatGPT, and to stay up-to-date on both your organization’s internal data security policies and the policies of providers like OpenAI.
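One practical precaution that such policies often call for is scrubbing obvious identifiers before any text leaves the organization. The sketch below masks email addresses and monetary figures as a minimal illustration; the patterns and placeholders are assumptions for this example, and real data-loss-prevention tooling goes considerably further.

```python
# Toy pre-submission scrubber: mask emails and monetary amounts before text
# is pasted into an external tool. Patterns here are illustrative only.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?\s*(?:million|billion)?", re.I), "[AMOUNT]"),
]

def scrub(text: str) -> str:
    """Replace each matched identifier with a neutral placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

A scrubber like this cannot catch everything (project code names and respondent quotes, for instance, need human judgment), which is why it complements rather than replaces an internal data security policy.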
It’s our firm belief that the future of consumer understanding will still need to combine human expertise with powerful technology. The most powerful technology in the world will be useless if no one actually wants to use it.
Therefore, the focus for brands should be on responsible experimentation: finding the right problems to solve with the right tools, not simply implementing technology for its own sake. With great power comes great responsibility. Now is the time for brands to decide how they will use it.
The post Generative AI for Market Research: Opportunities and Risks appeared first on Unite.AI.