Focus on APAC

July 23, 2025

Are the Benefits of Insights Being Devalued by the Hype of Generative AI?

By 2025, GenAI reshapes data access—but human insight still matters. Learn 3 key differences between AI and traditional research, and how to combine both.

By the end of 2025, the world’s digital landscape is projected to consist of 181 zettabytes [1] of data. In the age of Generative AI (GenAI), this vast amount of data is being ingested, consolidated, and analysed to uncover new, thought-provoking insights. With the support of machine learning, GenAI tools and platforms are putting this wealth of information into the hands of organisations and individuals alike.

Since GenAI’s emergence, many insight professionals, including myself, have emphasized that this does not signify the end of research as we know it. While tools will need to evolve given the democratization of this emerging technology, the principles that drive insight frameworks remain steadfast. I support this democratization, and I’m not alone. BCG [2] recently reported that Asia Pacific is rapidly adopting GenAI tools, closing the gap in regional adoption with North America, long seen as the traditional technology leader.

The ability to access and process these newly available data sets also supports the development of more coherent and, with effort, less biased hypotheses that can be further evaluated and tested. Yet the ease of sourcing them may also lead corporations to over-rely on GenAI platforms, scouring what is already available for instant gratification rather than applying conventional research approaches that enable us to ideate and create new ones.

With a significant majority (90%) of companies in Asia looking to GenAI to boost efficiency and revenue, the real challenge lies in ensuring these tools are equipped to address the region’s unique complexities—especially in the absence of non-verbal cues and cultural context.

Let’s break down the nuances of an insights project that focuses solely on GenAI-powered findings versus one that is underpinned by conventional forms of information gathering and analysis. And while I’m sure there are many comparisons to be made, I’d like to highlight just three for now.

Data Sources

It starts with the basics, the foundation of any model or research report: the dataset. GenAI has made connecting to accessible datasets much more convenient, along with the ability to verify the sources on which a GPT has built its position. Some will argue that digital footnotes make it easier to identify sources, but the challenge remains in the verification process. How informed and disciplined are we, the human factor, in verifying those sources and determining whether the data universe we are referencing is truly comprehensive? A recent article featured in the Straits Times [3] highlighting the use of hidden GenAI prompts in research papers only demonstrates the rigor needed in verifying what is being published and reviewed. This step matters even more when the insights we identify are being used to make crucial decisions.

Data Accessibility

The inconsistent digitalization of Asia’s data continues to pose a challenge for marketers operating in our region. While Natural Language Processing (NLP) tools have made it easier to include data sources originally captured in various languages, Asia Pacific’s uneven adoption of digital transformation may still leave certain segments of the population, or certain findings, underrepresented. Coupled with the fact that most GPT models are engineered to provide a satisfactory answer to a given question, the chances of hallucination multiply. This inconsistency means that GenAI platforms ingesting this information may already carry biases due to the absence of certain data.

The Human Factor

Beyond data, the interpretation of insights hinges on another critical element: human judgement. While LLMs truly excel at information retrieval and pattern recognition, they continue to underperform humans in critical thinking. Yes, GPT models are certainly improving at mimicking human-like text and voice generation, and they can even produce various forms of creative text or stories by amalgamating the sources they sit on.

However, these models continue to be found lacking in their reasoning abilities, that is, in breaking a single problem down into smaller steps and then recombining the results in a contextually relevant manner. At the same time, there are ethical considerations that business professionals are tasked to weigh and evaluate in such projects, an area where GenAI tools on their own remain lacking.

Although GPT tools’ critical thinking continues to improve, and various companies are exploring how inherent bias and ethical considerations can be built into the scrubbing and training of AI models, this task is still best managed by the human researcher responsible for a given project. Certainly, in Asia Pacific, where contextual and subtle cultural differences continue to dominate, conventional insight generation will continue to lead GenAI-led projects.

A Complement, Not a Replacement

This is not to say GenAI has no place in insight generation. On the contrary, we’re already seeing professionals use it to streamline data collection and synthesis. Market research firms are experimenting with GPT-powered chat systems for qualitative moderation, and organizations are using GenAI to summarize large datasets more efficiently.

As organizations continue to roll out GenAI-powered tools for employees to learn from and become accustomed to, a new habit is forming: turning to a GPT assistant for answers. But this raises a critical question: can we still distinguish between a summary of what’s available and a true insight? In a recently published review of the impact of GenAI on humans’ cognitive skills [4], the effect of unfiltered ChatGPT usage on students’ learning and performance indicators is certainly thought-provoking. Will over-reliance on GenAI tools also fundamentally affect humans’ critical thinking skills?

As we look ahead, the future of insights lies in ensuring we remain cognizant of the benefits of both AI and human expertise, mastering the art of combining both: thoughtfully, ethically, and contextually.

This original writeup was completed with the support of Copilot’s prompts and suggestions. The reference points were sourced through my own Google searches.

Sources

1. Statista, “Volume of data/information created, with forecasts to 2028”.

2. BCG, “In the race to adopt AI, Asia Pacific is the region to watch”, March 2025.

3. Nikkei Asia, “‘Positive review only’: Researchers hide AI prompts in papers”, July 2025.

4. Dr. Philippa Hartmanv, “Systematic Review & Meta-analysis of ChatGPT’s Effects on Student Learning”, January 2025.

Tags: generative AI, ChatGPT, data analytics, artificial intelligence, Asia Pacific

