
Why employees are afraid of AI-assisted work tools – and what to do about it

25 Nov 2024

As AI continues to transform industries, a significant challenge lies not just in its technical integration but in managing the human response to it. Our experience shows that change management is as crucial as the new technology itself, and AI is no miraculous remedy for organizational challenges, despite its potential to streamline operations and make daily tasks more efficient.

The matter is serious, especially because, according to recent studies, many organizations are struggling to fully realize the benefits of AI. Deloitte found that only 18% of companies have reduced costs thanks to AI, and just 27% report improved efficiency and productivity. Gartner reported that only 53% of AI projects successfully transition from pilot to production. This so-called "pilot paralysis" is often fueled by fear and uncertainty among employees, who may not fully understand AI or feel prepared to work with it.

Our experience as UX consultants implementing AI systems within large organizations has revealed a common thread: employees are often reluctant to adopt AI (or any new tool or technology), fear losing their jobs to it, or don't feel confident in their current skills.

Let's examine these fears in more detail and see how a user experience perspective can help overcome them, so that AI comes to be seen as a tool for collaboration rather than a threat.

Fear 1: “I don't know how AI works or how to use it.”

One of the most common fears employees have about AI is a lack of understanding – "I don't know how AI works or how to use it." This anxiety arises from the opacity of AI systems, often described as a "black box." Users (and sometimes even creators) do not know how specific outcomes are produced – and this knowledge gap creates a sense of uncertainty.

This information asymmetry can lead to unfair competition within a company: AI-literate individuals may manipulate AI systems to favor specific interests, leaving AI-illiterate employees feeling even more disconnected from the process. At the same time, less-knowledgeable employees worry about colleagues who gain an advantage in productivity and efficiency thanks to AI.

Remedies

1) Democratizing new technologies and managing skill levels

Addressing the challenge of AI literacy begins with recognizing employees' varying levels of technical skills and knowledge. Rather than segmenting users into silos based on their expertise, it's essential to adopt an inclusive approach that allows everyone to engage with AI comfortably, at their own pace.

In our recent project, we developed two parallel learning paths: an independent path for those confident in exploring AI autonomously and a guided path for users who need more assistance through step-by-step guidance. This flexible system lets users switch between paths based on their growing familiarity with the technology, encouraging continuous learning without feeling overwhelmed.

2) Transparency

We also find transparency and open communication key to fostering AI literacy and overcoming fear. By openly sharing the project roadmap, from experimentation to full rollout, organizations can ease concerns about unfair advantage.

Our recent strategy involved a gradual rollout of the AI tool to debug and stabilize the system before a broader release. All end users were informed about the various stages of the project: initial experiments, early versions, optimization, stabilization, and final rollout. We also explained how the AI actually works, what it does, and what it is built on.

3) Communicating change

We have also noticed that the key to success in any new tool implementation is communicating a change story. This helps employees understand where the organization is headed, why it is changing, and why the changes are essential. Our guest authors, Sarah Faict and Stijn Vercamer, wrote more about this in this article, describing interesting studies from Mediahuis's newsroom.

4) Building a knowledge system around a new tool

At some point, it's also valuable to introduce a knowledge system that keeps employees informed about the new tool. This is not necessarily a "place" on the intranet but rather a user-friendly framework for gathering and sharing knowledge.

We believe this is a critical factor organizations tend to overlook. When introducing a new tool, employees should receive proper training and support: not just an introductory session or workshop, but ongoing help that becomes a natural, daily activity woven into the company's DNA. Investing in knowledge sharing and effective knowledge-management frameworks benefits the system's users and creates a more flexible environment inside your organization.

Fear 2: “I am afraid that AI will make me obsolete and replace me.”

This is a common concern, and the data shows that 25% of employees have already witnessed layoffs caused by the introduction of artificial intelligence. According to the same study, the fear of being replaced by AI is most prevalent among three groups: low-skilled workers at risk of automation, younger employees witnessing the rapid adoption of new technologies, and high-status professionals who are keenly aware of market changes.

The fear of being replaced can feed another phenomenon, known as “Hollow Intelligence”: over-reliance on AI leaves the workforce without the skills or capacity to address complex issues independently.

Remedies

1) Role allocation between humans and AI

To address the fear of being replaced by AI, organizations need to adopt a balanced approach that clarifies the distinct roles humans and AI play in the value chain. One solution is role allocation: separating tasks best suited for AI (such as data processing, pattern recognition, or automation of repetitive tasks) from those where human expertise remains essential—creativity, strategic decision-making, and emotional intelligence. This clear division allows for productive human-AI collaboration and reassures employees that their roles remain indispensable for tasks requiring uniquely human skills.

One example is our recent concept of an AI-powered customer support assistant, Freddy. The AI's role was to reduce the workload on human agents and ensure that human intervention was timely and effective for more complex support tickets. This way, 70–90% of the work could be automated, with humans supervising the process when needed.
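
As a rough illustration of what such role allocation can look like in practice, here is a minimal sketch in TypeScript. It is not Freddy's actual implementation: the confidence threshold, ticket categories, and field names are all assumptions made for the example. The idea is simply that the AI answers only routine, low-risk tickets it is confident about, and everything else reaches a human agent.

```typescript
// Minimal sketch of human/AI role allocation for support tickets.
// Thresholds, categories, and field names are illustrative assumptions,
// not the actual logic behind the Freddy assistant.

interface Ticket {
  id: string;
  category: "password_reset" | "order_status" | "billing_dispute" | "other";
  aiConfidence: number; // 0..1: how sure the model is about its draft reply
}

type Route =
  | { handler: "ai"; reason: string }
  | { handler: "human"; reason: string };

// Routine, low-risk categories the AI may answer on its own.
const AUTOMATABLE = new Set<string>(["password_reset", "order_status"]);
const CONFIDENCE_THRESHOLD = 0.85;

function routeTicket(ticket: Ticket): Route {
  if (!AUTOMATABLE.has(ticket.category)) {
    return { handler: "human", reason: "category requires human judgment" };
  }
  if (ticket.aiConfidence < CONFIDENCE_THRESHOLD) {
    return { handler: "human", reason: "low AI confidence" };
  }
  return { handler: "ai", reason: "routine ticket, high confidence" };
}

// Example: a billing dispute always reaches a human agent.
console.log(routeTicket({ id: "T-1", category: "billing_dispute", aiConfidence: 0.95 }));
```

The exact split will differ from organization to organization; the point is that the boundary between AI work and human work is explicit and inspectable rather than implied.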

2) Upskilling and continuous education

It may seem obvious, but let us say it once again: every company today should focus on upskilling and continuous education to ensure employees grow alongside AI technologies. Providing opportunities to learn new skills, particularly how to work effectively with AI (rather than fearing it), encourages a mindset of adaptation rather than resistance. This could include structured learning paths, such as independent exploration for tech-savvy individuals and guided support for those less experienced with AI.

Fear 3: “I have to adopt AI just because I’m expected to.”

The pressure to adopt AI without clear reasons or adequate preparation is a growing source of anxiety in organizations, sometimes referred to as “AI fever.” This type of AI-phobia occurs when organizations feel compelled to implement AI not out of genuine need but because of external pressures or trends.

This phenomenon mirrors the concept of “reform fever,” where organizational changes are driven by a desire for change rather than a practical necessity.

When higher management is swept up in AI fever, adoption often becomes more about symbolism than substance, leading to superficial implementation. Teams may feel they must embrace AI to stay competitive or appear innovative, even if it doesn't align with their specific needs; managers may adopt it to showcase innovation or respond to competitors without thoroughly assessing whether it fits the organization's operations. This forced adoption can leave managers and staff dealing with underdeveloped AI systems, unclear objectives, and unpredictable consequences, and systems implemented hastily create more problems than they solve.

Remedies

1) Assessing real organizational needs

Before adopting AI, it is advisable to conduct thorough assessments to determine whether AI is genuinely needed and how it can support core objectives. If you need help assessing your current work tools and workflows and want to identify potential challenges and opportunities linked to AI adoption, we offer a solution called Process & Tools Review.

It is designed to uncover the truth about your organization's current tools and processes and identify needs, problems, money leaks, and opportunities for the future. In addition to an in-depth analysis of your work tools, you will receive a set of actionable recommendations tailored to your organization's unique needs and objectives.

Contact us for more information about Process & Tools Review.

2) Recruiting ambassadors

Our experience shows that every organization has proponents of the new solution – we call them ambassadors of change. They act as the link between the initial AI team and the broader user group and can help foster a positive reception of the new technology.

In a recent case of implementing an AI assistant in a large organization, ambassadors were users who had already independently experimented with AI tools. They were not only familiar with the specific needs of their teams but also understood the capabilities and limitations of existing AI technologies. They played a crucial role in defining solution requirements and delivering insights that deepened our understanding of the user perspective. They also communicated the value of the AI assistant to the broader user base, provided training, and gathered and relayed feedback to the development team.

When it was time to present the system to a broader audience, they shouldered much of the explanatory work, addressing detailed user questions and helping to demystify the new technology.

3) Turning resistance into support

In any transformative process, particularly with AI implementation, resistance is inevitable. Users may challenge the results produced by AI, escalate every issue, and express skepticism. They are likely to question the effectiveness of AI, dismissing its outputs as inferior or unreliable.

So, what should you do when faced with this resistance?

The first step is to be aware of this potential resistance. Recognizing that it exists and understanding the motivations behind it is crucial for managing it effectively.

Then, identify resisters early on (they have likely exhibited similar attitudes in other situations involving change). By recognizing them, you can better anticipate and address their concerns. You can also position your ambassadors to provide counterarguments and help neutralize resistance.

Support the entire process with clear and transparent communication (see the remedies for Fear 1). With some skill and a bit of luck, it's sometimes possible to convert a defender of the status quo into an AI ambassador: by addressing their concerns, involving them in the process, and demonstrating the value of AI, you may turn them into your strongest advocates for change. It's also worth remembering proper education and training, as described in the remedies for Fear 2.

Fear 4: “When something goes right or wrong, who or what is responsible: me or AI?”

AI makes mistakes. So, when AI-provided information is incorrect or leads to poor outcomes, the question arises: who is responsible – the programmer, the user, or the AI itself? Similarly, when human decisions are made based on AI-provided data, the issue of accountability remains murky, adding to the anxiety. The issue cuts both ways: human success or failure may wrongly be attributed to AI, or AI-generated success or errors may incorrectly be credited to or blamed on the user. Both scenarios create the risk of either over-reliance on AI or avoiding its use entirely for fear of liability.

Remedies

1) Clear accountability frameworks

It's best to define clear accountability boundaries for all stakeholders involved in the AI process: designers, developers, suppliers, and users. Everyone involved should know which tasks AI handles autonomously and where human oversight is required.

2) Human oversight

AI is a tool, not a decision-maker. It can be slow, has weaknesses, doesn't always follow prompts, hallucinates, and its ethics remain debatable. For now, AI models must be monitored and updated regularly – by us, humans – so human supervision and judgment are still needed. In the end, we must retain control of, and responsibility for, final outcomes.

This is why it's advisable to establish guidelines for when human intervention is necessary, especially in high-risk or sensitive areas.

3) AI error reporting, reviews & feedback

Every application must be thoroughly measured, especially one designed as a professional tool. This means implementing regular reviews of AI systems to ensure they remain accurate. It's also important that users can log and report AI errors or questionable outputs.
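
To make the reporting part concrete, here is a minimal sketch of what a user-facing error report might capture. The endpoint and field names are assumptions made for illustration, not a specific product's API: the key is recording the flagged output, the user's assessment, and enough context for periodic review.

```typescript
// Minimal sketch of an AI error-reporting payload and submission call.
// The endpoint and field names are illustrative assumptions.

interface AIErrorReport {
  sessionId: string;    // which conversation or task produced the output
  modelOutput: string;  // the answer the user is flagging
  issueType: "incorrect" | "hallucination" | "incomplete" | "other";
  userComment?: string; // optional free-text explanation
  reportedAt: string;   // ISO timestamp, useful for periodic reviews
}

async function reportAIError(report: AIErrorReport): Promise<void> {
  await fetch("/api/ai-feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}

// Example: a user flags a hallucinated statistic.
reportAIError({
  sessionId: "chat-42",
  modelOutput: "Our Q3 revenue grew by 300%.",
  issueType: "hallucination",
  userComment: "This figure does not appear in any source document.",
  reportedAt: new Date().toISOString(),
}).catch(console.error);
```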

Unexpected situations still happen, though. In a recent project, we introduced an "AI-assisted" label for articles in which AI generated at least one paragraph. After a week, we noticed that the number of such articles was lower than expected. It turned out that editors were bypassing the designated path: instead of using the provided button to insert AI-generated content, they manually copied and pasted the text, so our tracking system failed to recognize these articles.
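
As a hypothetical reconstruction of that gap (the names and structure below are our own for the example, not the real editorial system): if the "AI-assisted" flag is set only inside the insert button's handler, any other route into the document, such as manual copy-paste, silently bypasses it. One possible mitigation is to also compare the saved text against the AI outputs generated in the same session.

```typescript
// Hypothetical reconstruction of the tracking gap; names and structure
// are illustrative, not the real editorial system.

const aiOutputsThisSession: string[] = []; // everything the AI generated for this user
let aiAssisted = false; // the flag behind the "AI-assisted" label

// Called whenever the assistant generates a paragraph for the editor.
function onAIGenerated(text: string): void {
  aiOutputsThisSession.push(text);
}

// The designated path: only the "insert" button sets the flag...
function insertAIContent(editor: { append(text: string): void }, text: string): void {
  editor.append(text);
  aiAssisted = true;
}

// ...so manual copy-paste never trips it. A fallback check at save time
// catches pasted AI text as well.
function isLikelyAIAssisted(articleBody: string): boolean {
  return aiAssisted || aiOutputsThisSession.some((t) => articleBody.includes(t));
}
```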

Last but not least, we have found that "local feedback" is also very valuable when working with AI: collecting feedback in context, right when the user is actively engaging with the system. This approach captures fresh emotions, as the user is still in the moment of their experience.

For example, we know how to identify the exact point when a user finishes their current interaction with a chat. This is the perfect time to ask for quick feedback—a rating and a comment—while the experience is still fresh in their minds.
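
One simple way to approximate that "finished" moment (a sketch under assumed names and timings, not our actual instrumentation) is an inactivity timer: if the user sends nothing for a while after the assistant's last reply, treat the interaction as complete and show the feedback prompt.

```typescript
// Sketch: detect the end of a chat interaction with an inactivity timer,
// then ask for a quick rating and comment. The handler names and the
// timeout value are illustrative assumptions.

const IDLE_MS = 30_000; // 30 seconds of silence after the last reply = "done"
let idleTimer: ReturnType<typeof setTimeout> | undefined;

function showFeedbackPrompt(): void {
  // In a real UI this would open a small rating-and-comment widget.
  console.log("How was this answer? Rate it 1-5 and leave an optional comment.");
}

// Call this each time the assistant finishes a reply.
function onAssistantReply(): void {
  if (idleTimer !== undefined) clearTimeout(idleTimer);
  idleTimer = setTimeout(showFeedbackPrompt, IDLE_MS);
}

// Call this when the user sends another message: the conversation is
// still going, so cancel the pending prompt.
function onUserMessage(): void {
  if (idleTimer !== undefined) clearTimeout(idleTimer);
}
```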

Let's talk
If you feel we are a good fit, email us today or call Slawek at +48 603 440 039. You can also follow us on LinkedIn, Facebook, or Dribbble.