👾 Trust Disconnect: When Employees Turn to AI Instead of Leaders

Why employees secretly turn to AI, what leaders miss, and how trust can bridge the growing workplace divide.

The following is a guest post by Forward Future community member Mike Hostetler — technologist-entrepreneur and AI agent architect, and the creator of Jido.

According to a study by KPMG, half of U.S. workers say they use AI tools at work without clear permission, and 44% admit to misusing these tools deliberately. The pattern points to a larger problem: employees are adopting AI faster than companies can govern it. While 62% of business leaders support using AI at work, only 52% of employees agree, a gap that creates real risk and can hurt a company's success. Nearly one in four employees believe their companies disregard their interests when deploying AI, and most workers (80%) say their companies have not provided clear rules for responsible AI use. This erosion of trust undermines the safe and effective adoption of AI in the workplace.

When Employees Go Rogue: The Rise of Unauthorized AI

Unauthorized AI use, often called shadow AI, is widespread. More than half (58%) of U.S. workers use AI tools without verifying the results, and 53% present AI-generated work as their own. Meanwhile, 43% of employees do not trust companies or the government to manage AI safely. Workers use AI covertly because they believe it helps them work faster, produce better results, and keep pace with competitors.

Shadow AI exposes businesses to real risk. Unapproved tools can cause security problems and data leaks, and companies may unknowingly violate regulations when employees use AI without oversight. The quality of AI-generated work varies, which can damage a company's brand and market position. And without proper governance, companies miss important opportunities to use AI strategically.

Bridging the Divide: Why Trust Goes Both Ways

The AI trust gap between leaders and employees is widening. Leaders worry about AI errors, security threats, and overreliance on machines; employees fear losing their jobs, being excluded from decisions, and working under unclear rules. Poor communication and a lack of transparency from companies make these problems worse.

Only 62% of leaders think their company manages AI responsibly, and 42% admit their company is unclear about when tasks should be automated versus handled by people. Although 70% of leaders want humans involved in AI processes, companies often fail to communicate this clearly or include employees in the process. A lack of proper training and clear AI policies erodes trust further.

Trust-First Leaders: How Forward-Thinking Companies Get It Right

Companies that prioritize openness and employee involvement successfully build trust around AI. EY, for example, invests heavily in employee AI education: more than 50,000 workers have completed specialized AI training, and over 100,000 have received basic AI education. EY shows that involving employees and communicating AI plans clearly builds trust.

Successful companies involve employees directly in AI planning, give AI training to everyone, set clear guidelines for AI use, and regularly talk about AI developments with employees. These companies view AI as a helpful tool, not a replacement for jobs. As a result, employees feel more confident, use AI properly, and trust their companies more.

The ROI of Getting Trust Right

Businesses with clear AI strategies succeed 80% of the time, compared to only 37% for those without clear plans. Companies that manage AI well see a 28% increase in employees actively using it. By contrast, companies that fail to build trust often face costly mistakes, security problems, and legal issues.

Trust-based approaches pay off. Strong trust increases employee productivity and job satisfaction while reducing risk. Companies with clear AI rules stand out from competitors and build lasting advantages, and those that prepare for future regulations attract skilled workers and keep them longer.

Five Steps to Close the Trust Gap

Companies should follow these five steps to build trust and manage AI successfully:

  1. Understand Employee AI Use: Ask employees how they use AI, what concerns them, and what could be improved. Use surveys and group discussions to collect information.

  2. Create AI Strategies Together: Form teams with people from different departments, including managers and frontline workers. These teams should create AI plans that reflect everyone’s input.

  3. Offer Clear and Practical Training: Provide training for all employees that covers basic AI skills, how to use AI safely, ethical considerations, and what to do if something goes wrong.

  4. Set Clear and Simple Guidelines: Write clear rules that explain what is allowed and not allowed when using AI, how to check AI-generated work, and how to handle sensitive data.

  5. Keep Communication Open: Hold regular meetings and discussions to update employees about AI, listen to their concerns, and quickly address any problems.

Every time an employee uses AI without permission, the trust gap widens. Building trust is not just about compliance; it is about creating a strong foundation for using AI safely and effectively.

Conclusion

Companies that put trust first in AI today will become leaders in the future. Trust is critical for effective teamwork between people and AI. Organizations that commit to building trust with AI will succeed in facing future challenges and create lasting success.

Mike Hostetler

Mike Hostetler is a technologist-entrepreneur and AI-agent architect, and the creator of Jido, an Elixir framework for operating scalable autonomous agent swarms.

His background spans enterprise architecture to executive leadership, where he designs software solutions that enhance operational efficiency, customer experience, and profitability.

He’s committed to bridging engineering with business strategy and always welcomes new professional connections.

👉️ Connect with Mike on LinkedIn or his website.
