Being honest about using AI at work makes people trust you less, research finds

The transparency trap: Admitting you use AI on the job can come at a cost to your credibility. Thomas Fuller/SOPA Images/LightRocket via Getty Images

Whether you're using AI to write cover letters, grade papers or draft ad campaigns, you might want to think twice about telling others. That simple act of disclosure can make people trust you less, our new peer-reviewed article found.

As researchers who study trust, we see this as a paradox. After all, being honest and transparent usually makes people trust you more. But across 13 experiments involving more than 5,000 participants, we found a consistent pattern: Revealing that you relied on AI undermines how trustworthy you seem.

Oliver Schilke, Professor, Department of Management and Organizations

Participants in our study included students, legal analysts, hiring managers and investors, among others. Interestingly, we found that even evaluators who were tech-savvy were less trusting of people who said they used AI. While having a positive view of technology reduced the effect slightly, it didn't erase it.

Why would being open and transparent about using AI make people trust you less? One reason is that people still expect human effort in writing, thinking and innovating. When AI steps into that role and you highlight it, your work looks less legitimate.

But there's a caveat: If you're using AI on the job, the cover-up may be worse than the crime. We found that quietly using AI can trigger the steepest decline in trust if others uncover it later. So being upfront may ultimately be a better policy.

Being caught using AI by a third party has consequences, as one New York attorney can attest.

A global survey of 13,000 people found that about half had used AI at work, often for tasks such as writing emails or analyzing data. People typically assume that being open about using these tools is the right choice.

Martin Reimann, Associate Professor, Department of Marketing

Yet our research suggests doing so may backfire. This creates a dilemma for those who value honesty but also need to rely on trust to maintain strong relationships with clients and colleagues. In fields where credibility is essential – such as finance, health care and higher education – even a small loss of trust can damage a career or brand.

The consequences go beyond individual reputations. Trust is often called the social "glue" that holds society together. It drives collaboration, boosts morale and keeps customers loyal. When that trust is shaken, entire organizations can feel the effects through lower productivity, reduced motivation and weakened team cohesion.

If disclosing AI use sparks suspicion, users face a difficult choice: embrace transparency and risk a backlash, or stay silent and risk being exposed later – an outcome our findings suggest erodes trust even more.

That's why understanding the AI transparency dilemma is so important. Whether you're a manager rolling out new technology or an artist deciding whether to credit AI in your portfolio, the stakes are rising.

What still isn't known

It's unclear whether this transparency penalty will fade over time. As AI becomes more widespread – and potentially more reliable – disclosing its use may eventually seem less suspect.

There's also no consensus on how organizations should handle AI disclosure. One option is to make transparency completely voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy across the board. Our research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors.

A third approach is cultural: building a workplace where AI use is seen as normal, accepted and legitimate. We think this kind of environment could soften the trust penalty and support both transparency and credibility.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Other articles published on The Conversation in May:

 May 12, 2025

As US ramps up fossil fuels, communities will have to adapt to the consequences – yet climate adaptation funding is on the chopping block
The administration wants to cut funding for programs that help communities adapt to wildfire risk, sea-level rise and invasive species, among many other risks.

Jia Hu
Associate Professor, School of Natural Resources and the Environment


Interested in submitting an article? Go to the sign-up link on The Conversation website to create a username and password. Do a keyword search to see what has been written on the topic you have in mind. Then fill out the online pitch form. (Scholars who would like to talk through an idea before submitting a pitch can send an email to conversation@arizona.edu.)