Almost Everyone Has Tried AI. But Do Americans Trust AI?

New AI trust statistics reveal how Americans use AI at work and home and why trust in AI still lags behind adoption. Survey insights from Howdy.com.

WRITTEN BY

Howdy.com
We Help American Companies Go Global.

AI adoption is on the rise across work and leisure, from automating tasks to planning vacations and making holiday shopping lists. However, use and adoption do not signal trust: the final frontier of AI usage in the US.

We wanted to know how Americans feel about AI even as they use it widely: what are their major concerns? Are they doomers, convinced AI and AGI (artificial general intelligence) will doom humanity, or are they evangelists, hoping AI will usher in a post-work, post-scarcity future enabled by sophisticated algorithms? Many of these undercurrents form the philosophical backbone of the major players in AI, so let's see how they stack up against the views of the average American.

Key takeaways: Trust and use are not aligned

Infographic summarizing AI trust trends: large gaps between trust in AI output vs. platforms, widespread concern about surveillance and government use, ChatGPT as most used but not most trusted, and transparency in data use as the top trust builder.

We polled Americans both on how much they trust the consistency and quality of AI output and on whether they believe AI companies have their best interests in mind, and the difference was stark: a 26-point gap between the two numbers.

A possible explanation is deep concern about the broader application of AI: 85% fear AI surveillance, and 80% don't trust the U.S. government to use AI responsibly. 41% believe AI will usher in a mass surveillance state; respondents rated this outcome more likely than economic collapse or a post-work utopia.

More trusting users seem to enjoy the flattering tendencies of their platforms: 84% of those who said they loved how much their platforms flattered them also said they trusted those platforms completely.

ChatGPT reigns supreme as the most widely used platform, but it's not the most trusted. One thing that might help? 44% indicated that more transparency around data use would help them trust AI platforms more.

AI trust in the workplace: 62% aren’t comfortable giving AI sensitive tasks

Chart showing high workplace AI usage (72%) with a 26-point gap between trust in AI output and trust in AI companies, concerns about sensitive tasks, platform usage and trust rankings, and common vs. avoided AI tasks at work.

Among those we surveyed, 72% use AI at work. Those who use AI are more likely to trust its output (75%) than to believe AI platforms have their best interests in mind (49%). This huge gap reflects a sense of AI agnosticism that pervades how Americans relate to AI on both the micro and macro scale.

62% aren’t comfortable giving sensitive work tasks to AI, and 1 in 6 report that AI led to major problems in their workplaces, like bad code or deleted inboxes. This increases to 25% for tech workers, 90% of whom use AI on the job.

Top platforms used in the workplace are:

  • ChatGPT (60%)
  • Gemini (16%)
  • Copilot (11%)
  • Claude (6%)

The most trusted platform turns out to be Gemini, with 65% of Gemini users trusting Google's intentions, followed by Perplexity (60%) and ChatGPT (56%).

The tasks workers trust AI with most include writing, data analysis, and communications. On the other hand, they don't trust AI with hiring, payroll, or overall workplace strategy. Workers would trust AI more if it were more consistent (58%), based on analysis rather than probability (22%), or free of profit motive (15%). Over 1 in 10 don't think AI should be used on the job at all.

Roughly 1 in 6 pretend to use AI on the job; most say they prefer to do the work themselves because they enjoy it.

AI off the clock: 39% turn to AI for health advice

Graphic comparing higher personal AI use (87%) than work use (72%), with common uses like search and writing, lower trust in sensitive areas like mental health and finances, and transparency as the top factor for increasing trust.

AI adoption is rising perhaps even faster in life than at work: 87% of those surveyed use AI in their personal lives, compared to just under 3 in 4 on the job. However, trust issues remain.

61% think AI provides biased recommendations, yet 82% use it to search for information. 42% use it for writing, and a concerning 39% use it for health advice. Other top uses include shopping and, ironically, financial advice. 1 in 3 use it for meal planning, and 1 in 5 turn to AI for mental health recommendations. Speaking of mental health: 11% use AI for companionship, a signpost of just how lonely we are today.

The most trusted uses for AI include searching for information, writing, and creating content for social media, while users don't trust AI for companionship, mental health advice, or financial advice, which is interesting, given those are some of its most frequent uses. Speaking of social media, there's a double-edged sword to AI content creation: 75% don't trust AI-generated content on social media.

AI platforms could gain trust by increasing their transparency around data use (51%), providing better answers to simple questions (35%), and providing more usage guardrails for AI in-platform (35%).

Among the 13% who don't use AI, the majority (57%) don't use it because they simply don't trust it.

AI boosterism and doomerism: 85% worry about mass surveillance

Survey results on AI’s societal impact, showing mixed opinions on benefits and risks, strong concerns about government misuse and surveillance, and most workers reporting time savings from AI.

The divide between AI evangelists and AI doomers seems to be widening: while 50% believe AI is beneficial to society, 18% firmly disagree. AI saves time (92% say so) and carries credible expertise for some (24% would trust AI advice over human expertise); however, 35% also consider AI a threat to humanity.

One of the biggest points of contention around AI trust is potential use against US citizens; 80% don’t trust the US government to use AI responsibly, and 85% worry about AI being used for mass surveillance. In fact, when given the option between potential futures ranging from post-work utopia to ecological collapse from AI energy consumption, the most commonly chosen “likely future” was that of a dystopian surveillance state.

Trust lagging behind mass adoption speaks to a lingering sense that AI isn't quite as game-changing as some ads would have you believe, but the numbers also show real benefits to using AI at work and in life. Users want transparency, consistency, and for AI to stay out of the hands of the U.S. government until there are more guardrails.

Want workers who know how to use AI in ways that maximize trust? Reach out today.

Methodology & fair use

In March 2026, we surveyed 963 employed Americans nationwide on their use and trust of AI. 25% self-identified as tech workers. Ages ranged from 18 to 71, with an average age of 44. 51% were women, 48% men, and 1% either nonbinary or chose not to disclose.

For media inquiries, reach out to media@digitalthirdcoast.net

Fair use

When citing this data, please attribute by linking directly to this page or Howdy.com