November 27, 2025
Forget flashy AI design. New global research shows users want substance, reliability, and trust from the AI tools they rely on.
What users like (and don’t like) about artificial intelligence: new research helps chart public and marketing acceptance of AI

As billions of people encounter artificial intelligence (AI) at work and in their daily routines, new research reveals which AI features irritate users and which earn their respect and loyalty.
Marketers take note: the outcome of this global pulse-taking upends deeply ingrained assumptions about flashy, friendly AI gimmicks. Boring might be better, says a team of academics from the United States, Oman, and Thailand.
Generative AI turnoffs include superficial charm and forced displays of emotion. In a digital world pulsating with hyped-up interfaces and gimmicks, it turns out that what truly matters to AI users is substance: competence, integrity, and trust.

Academics on three continents conducted new research on attitudes toward AI. The lead researcher was Dr. Khalid Hussain of Sultan Qaboos University in Oman, joined by his colleague Imran Khan, along with Dr. Muhammad Junaid of the Asian Institute of Technology in Thailand, Dr. Ghanem Elhersh of Stephen F. Austin State University in Texas, and Dr. Laeeq Khan of Ohio University.
Our research was built on three components: online surveys, behavioral experiments, and sophisticated statistical modeling. The study is currently under review for journal publication.
Tech marketing tends to assume that AI must have a friendly face, a clever name, and a charming conversational style to inspire loyalty. However, our new research finds that users are far more drawn to AI they perceive as capable, dependable, and principled. Reliability, consistent performance, and ethical behavior form the foundation of meaningful connection.
Charm — while appealing at first glance — cannot substitute for substance. Without trust and competence, attempts to win hearts with personality fall flat.
Once trust is established, AI’s ability to deliver helpful, intelligent responses strengthens the bond. Users are not merely looking for a conversational companion; they seek a partner who can understand complex queries, solve problems, offer actionable insights, and provide real-world utility. In practical terms, a well-functioning, intelligent AI becomes a tool users can rely on day after day, earning affection and advocacy not through superficiality but through measurable competence.
Simply put, brains beat charm.
Another surprising finding concerns the pitfalls of AI that tries to simulate human emotion.
Many products try to project AI as empathetic, sympathetic, or excited, assuming these traits would deepen the emotional bond with human users. Yet our study reveals that such efforts often backfire.
Users are remarkably sensitive to authenticity. Emotional displays that feel artificial, over-the-top, or mismatched to the context can reduce trust, create discomfort, and diminish users’ willingness to engage with the AI.
Forced emotionality can alienate users rather than endear the AI to them.
Our findings underscore a broader lesson for AI designers. The human brain is adept at detecting authenticity. Users respond positively to cues of competence, reliability, and ethical integrity, while superficial displays of emotion can undermine confidence. The takeaway is simple: focus on being genuinely helpful, consistent, and principled, and let authentic user experiences generate emotional attachment naturally.
The implications of this research are far-reaching for those building or promoting AI products. Key points include:
By identifying what truly drives users to care about and advocate for AI, our research provides a blueprint for building products that matter.
Ethical consistency, practical intelligence, and trustworthiness are pillars of lasting emotional connection. These qualities make AI more than a novelty; they can earn genuine loyalty.
Users of AI aren’t looking for a pretend friend. They’re looking for a partner they can trust and rely upon. Build that, and the love and advocacy will follow.
This commentary was submitted by Associate Professor Laeeq Khan, Ph.D., Director of the Social Media Analytics Research Team (SMART) lab at Ohio University, Athens, OH.