
Can AI unlock market insights? What are the benefits and risks? #AI #MarketResearch #Insights




Like many others, here at Pivotlab we’ve been reflecting on the opportunities and risks that Artificial Intelligence (AI), specifically Generative AI, might bring to our work in market and customer insight. In this blog we explore how AI is already changing the way we approach our work and reflect on some of the potential benefits and pitfalls we are considering as we adopt new ways of doing things.


For content creation, the most well-known of the Generative AI tools – ChatGPT (other tools are available) – offers users the chance to pose questions and to have generated content, whether written or image-based, returned within seconds. AI is also increasingly embedded in other commonly used research tools to improve their power. For example, SurveyMonkey now offers the opportunity to build a survey from just a few AI-driven prompts about what you want to know about your target respondents.


The appeal of these tools is obvious. Desk research – hours spent reviewing journals, trade and press articles and statistics to explore and test your hypothesis – takes time, as does developing customer personas, designing surveys and workshops, and writing up research findings. It also takes expertise, which can seemingly be replaced or supplemented (depending on your view) by these tools.


We are very much at the start of our journey in this respect but keen to understand more. For example, we are exploring how tools such as ChatGPT can help with some of the groundwork in terms of creating content or drafting surveys, how transcription tools such as otter.ai (others are available) can help with writing up interview findings, and how sophisticated AI-enabled analytics tools can help make sense of large data sets, making these more accessible to non-technical users.


We are committed to exploring how these can benefit our business and our clients by increasing our productivity and the quality of our outputs. However, we are also keen to understand some of the potential pitfalls.


Firstly, our initial research tells us that, as with any other digital tools, you need to understand how different tools have been developed, their limits, their terms & conditions, and how to get the best out of them. I used a simple prompt in ChatGPT to give me some ideas for this blog. Whilst it gave me a broad-brush view of the issues to consider and a framework to start with, there were obvious gaps in the content (although I also accept that my outputs should get better over time as my understanding of effective prompting improves). So, in creating new written content – whether a proposal or a blog – Generative AI tools are perhaps best considered as a way to get started with ideas rather than a shortcut to the final output. Moreover, with Google taking a stance in its search tools to reward ‘higher quality content’, i.e. content that is not solely AI generated, this is a serious consideration if we are creating insight content which may be published.


Secondly, there are privacy concerns to consider. Different tools have different privacy terms, which means that putting any sensitive information into a prompt would be a bad idea and risks a breach of GDPR.


Thirdly, studies have also shown that these tools can sometimes have in-built bias stemming from the quality of the data or research that has been input – whether related to gender, race or other protected characteristics. As users, we need to be mindful of this when interpreting results and to guard, as much as possible, against building bias into our findings or our initial searches.[1]


Fourthly, copyright of content that might form part of a research output is also an issue. The owners of some content in these systems, including artists, are unhappy with how it got there, and legislation has not kept pace, meaning that who owns the final output we might use is not always clear.[2] Using any outputs in our work means considering carefully whether we are likely to be breaching anyone else’s copyright with AI-generated content.


Finally, transparency is a major challenge, including understanding exactly which data or publication source has informed a statement and how robust that source is. A recent academic paper also raised concerns that reliance on AI tools across the research cycle could foster an ‘illusion of understanding’, where the tools are seen as more robust or reliable than they are.[3] So, we must always go the extra mile to ensure we understand the source of any information returned from our searches and check the original source for confirmation.


So, armed with this information where are we going to start?

1.     We will start small with one or two tools and evaluate how these help improve or speed up different aspects of our work.

2.     We will read the small print so we understand how any inputs might be saved, shared or re-used in future.

3.     We will make clients aware, as necessary, of how we are using these tools.

4.     We will start from first principles – we will take a collaborative approach to tools but still focus on principles of good research practice, follow Market Research Society guidelines and seek to combine any findings with as much diverse human insight as possible to get the best results.

