
A guide to preparing and engaging employees for AI assistant adoption. Part I: People

18 Sep 2024

In this series of articles, we share our insights and lessons learned from a recent case study on implementing AI assistants in large companies with 100 to 1,000 employees involved in content creation, as well as in production-focused companies managing thousands of daily service tickets. The AI solutions aim to streamline workflows, enhance efficiency, and improve overall responsiveness.

We believe our findings can be valuable for both media and production organizations, so if you work in either field, jump right in. You will find a short overview of the project and its context below.

This is the first part, focusing on the people aspect. Make sure to check out the other two parts: System and Relations. We chose elements that feel key to successful implementation, although this list is not exhaustive.

Overview and context

The project's goal was to leverage AI to reduce the time employees spend creating content and to introduce autonomous content creation processes. In this case study, we explore how we tackled the challenge of preparing and engaging employees throughout the entire AI implementation process, integrating it into existing work tools and workflows.

The key questions we aim to answer are: What UX design challenges arise when building relationships between humans and AI? How can we design a user experience that helps employees adopt AI easily and without fear? And how do we engage people in this transformative journey?

The answers are not straightforward, as AI adoption comes with significant challenges.

Research shows that 36% of employees fear losing their jobs to AI, and one in four has already witnessed job cuts due to AI in their workplace. Additionally, Gartner reports that a staggering 85% of AI projects fail to achieve their goals.

1) Ambassadors: Catalysts of change

In AI implementation, ambassadors are the link between the technical team and the broader user base, not only ensuring a smooth transition but also fostering a positive reception among employees. In our case, even before developers and legal teams began their work, we formed a cross-functional working group composed of business representatives, users, and our design team.

This group was tasked with defining objectives, identifying risks, and gathering requirements, expectations, and concerns about the AI assistant. We found that a flexible, iterative approach not only encouraged greater innovation but also ensured adaptability to the evolving needs of the project.

Ambassadors: experienced and trusted users

The ambassadors were users who had already independently experimented with AI tools. They were not only familiar with the specific needs of their teams but also understood the capabilities and limitations of existing AI technologies. Selected from within the user base, these ambassadors were known and respected within their teams, earning the trust of their colleagues and becoming invaluable in promoting the tool’s adoption.

Key responsibilities of ambassadors

Ambassadors played a crucial role in defining solution requirements and delivering insights that deepened our understanding of the user perspective—insights that would have been difficult to obtain through formal interviews alone. They also reviewed early test versions, providing valuable feedback that helped prioritize the most pressing user needs and informed the development process.

As our first reviewers, ambassadors offered weekly feedback on new iterations of the tool, helping to refine the AI assistant and better align it with user needs. They communicated the value of the AI assistant to employees, provided comprehensive training, and gathered and relayed feedback to the development team. When it was time to present the system to a broader audience, they shouldered much of the explanatory work, addressing detailed user questions and helping to demystify the new technology.

Indeed, they were allies in change, offering support, fostering trust, and ensuring a smooth, effective transition to the new technology.



2) Equal access and democratization of new technologies

Traditionally, IT projects are often developed in relative isolation within a working group, only to be introduced later to a select group of users. While this might seem logical, particularly for new technologies, this approach proved problematic when it came to our AI assistant project.

AI often evokes strong, polarized emotions. While some employees are fascinated by the potential, others have deep concerns — fear of the unknown, resistance to change, and anxiety over job security or alterations in their responsibilities.

The worry isn't just about the AI itself but also about the perceived imbalance it could create. Users are sometimes afraid of competing with AI or with colleagues who have early access to it, fearing they might be at a disadvantage, like competing against someone "on steroids." They recognize that AI could significantly enhance productivity and efficiency but are concerned that this could diminish their own roles within the organization.

Our initial approach, which involved a phased rollout of the AI assistant to a small group of users for testing, unintentionally fed into these fears. The limited access led to speculation, rumors, and a negative atmosphere within the broader team. To address this, we quickly pivoted and decided to present the entire project plan transparently to the organization.

Transparency: Addressing concerns

While it might seem logical to release only a polished, fully tested version of the AI tool, it’s crucial that all end users are aware of the planned stages of development—experimentation, early versions, optimization, stabilization, and eventual full rollout. Clear communication can help alleviate fears and foster a more transparent and positive environment around the project.

Fears of an unfair advantage and the need for clear communication

One of the most significant challenges we faced was the fear of a divide between those with early access to AI and those without. Users were concerned about competing with colleagues who had an advantage in productivity and efficiency due to AI, fearing it would create an uneven playing field. Our strategy involved a gradual rollout of the AI tool to debug and stabilize the system before a broader release. However, we quickly realized that without clear communication, this phased approach could heighten existing fears and concerns among the broader user base.

To mitigate this, we decided to openly communicate the entire project plan from the start. All end users were informed about the various stages of the project: initial experiments, early versions, optimization, stabilization, and final rollout. We emphasized that only a thoroughly tested and stable version of the AI tool would be released for widespread use, ensuring that users wouldn’t be burdened with a faulty or incomplete product.
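To make the staged release concrete, below is a minimal sketch of one common way to enforce such a rollout: a stage gate per user group. The stage names mirror the plan we communicated, but the groups, the accessFrom mapping, and the hasAccess helper are hypothetical illustrations, not the project's actual mechanism.

```typescript
// Illustrative sketch of stage-gated access (hypothetical, not the
// project's actual mechanism). The stage names mirror the communicated
// plan; each later stage widens the audience.

type Stage =
  | "experiments"
  | "earlyVersions"
  | "optimization"
  | "stabilization"
  | "fullRollout";

const stageOrder: Stage[] = [
  "experiments",
  "earlyVersions",
  "optimization",
  "stabilization",
  "fullRollout",
];

type Group = "workingGroup" | "ambassadors" | "pilotTeams" | "everyone";

// The earliest stage at which each (hypothetical) group gains access.
const accessFrom: Record<Group, Stage> = {
  workingGroup: "experiments",
  ambassadors: "earlyVersions",
  pilotTeams: "optimization",
  everyone: "fullRollout",
};

function hasAccess(group: Group, currentStage: Stage): boolean {
  return stageOrder.indexOf(currentStage) >= stageOrder.indexOf(accessFrom[group]);
}

// During stabilization, pilot teams are already in while everyone else
// still waits for the thoroughly tested final version:
console.log(hasAccess("pilotTeams", "stabilization")); // true
console.log(hasAccess("everyone", "stabilization"));   // false
```

The point of publishing the plan alongside a gate like this is that no one is left guessing: every user can see which stage is current and when their own access begins.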

At the same time, we wanted to avoid any secrecy around the project – the topic is too sensitive to keep under wraps, and any attempt to do so is likely to backfire. We made this mistake once, and it's a lesson worth sharing. Being transparent from the beginning prevents the spread of rumors and creates a more supportive and collaborative atmosphere around AI implementation.


3) Managing the level of skills and knowledge among users

One of the challenges in implementing AI across a large organization is the varying levels of technical skill and knowledge among users. Expertise in AI and prompt engineering is now among the most valuable skills, but it remains niche, and not everyone is proficient in it. In a large-scale rollout like the one we worked on, it's essential to acknowledge these differences without creating barriers or silos that could hinder inclusivity.

In the described case, the simplest approach might have been to segment users based on their skill levels and create separate paths for different types of users. However, we chose a different, more inclusive strategy.


Two learning paths

We developed two paths, independent and guided, that function side by side.

The independent path is designed for users who feel comfortable exploring AI on their own. This path allows them to experiment freely with advanced options, encouraging self-driven learning and discovery.

On the other hand, the guided path is tailored for less experienced users, providing support through a wizard or guide that assists them step-by-step. This path smooths the learning curve, making it easier for beginners to build their confidence and skills without feeling overwhelmed.

The key to this approach is flexibility. Users are not confined to a single path; they can switch between independent and guided experiences as they grow more comfortable with the technology. A beginner can start with the guided path and then gradually explore more advanced features independently. If they find the advanced options too challenging, they can easily return to the guided path for additional support.

This transition between the two paths ensures that users progress at their own pace. The journey from basic to advanced proficiency happens naturally, driven by the user's own choice and comfort level.
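To illustrate this flexibility, here is a minimal sketch that models the two paths as modes of a single shared session, so switching never discards work in progress. All names are hypothetical; this is an illustration of the idea, not the project's implementation.

```typescript
// An illustrative sketch, assuming the two paths are modes of one shared
// session rather than separate products. All names here are hypothetical.

type Mode = "guided" | "independent";

interface AssistantSession {
  mode: Mode;
  draft: string;      // content produced so far, kept across switches
  wizardStep: number; // position in the guided flow, resumed on return
}

// Switching paths changes only the mode; the draft and wizard progress
// are preserved, so retreating to the guided path never loses work.
function switchMode(session: AssistantSession, next: Mode): AssistantSession {
  return { ...session, mode: next };
}

// A beginner starts guided, tries the advanced options, then comes back:
let session: AssistantSession = { mode: "guided", draft: "First paragraph...", wizardStep: 2 };
session = switchMode(session, "independent");
session = switchMode(session, "guided"); // wizardStep is still 2
```

Treating the paths as modes of one session, rather than two separate tools, is what keeps the experience unified: there is nothing to migrate and no penalty for changing your mind.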

A unified learning experience

The result of this approach is a solution that doesn’t divide users or highlight their differences but instead promotes inclusion. By smoothing the path to learning and skill development, we’ve created an environment where every user, regardless of their starting point, can engage with AI confidently and effectively.

4) Resistance and defenders of the status quo

In any transformative process, particularly with AI implementation, resistance is inevitable. Users may challenge the results produced by AI, escalate every issue, and express skepticism. This resistance often stems from two primary sources:

  • Threatened employees: people who fear losing their jobs or seeing their roles fundamentally altered by AI. They may view AI as a direct threat to their livelihood and, as a result, become highly critical of its implementation.

  • Conservatives: more general opponents of progress, who typically emerge during times of significant scientific and technological advancement. Their resistance is less about personal threat and more about a deep-seated opposition to change.

Both groups are likely to question the effectiveness of AI, dismissing its outputs as inferior or unreliable. Comments like "This text sounds like it was written by a robot; it's not good enough, and fixing it is a waste of time" or "If this machine can't solve a simple problem, how can we trust it with more complex data?" are common expressions of their skepticism.


How to manage resistance

So, what should you do when faced with this resistance?

  • Awareness: The first step is to be aware of this potential resistance. Recognizing that it exists and understanding the motivations behind it is crucial for managing it effectively.

  • Identify resisters: Learn to identify these individuals early on. It's likely that they've exhibited similar attitudes in other situations involving change. By recognizing them, you can better anticipate and address their concerns.

  • Filter feedback: While their comments may be valuable, they can also be very emotional. It's important to extract the factual elements of their feedback while filtering out the biases.

  • Leverage ambassadors: Position your AI ambassadors against these defenders of the status quo. Ambassadors, being well-respected users with a deep understanding of the AI system, can provide counterarguments and help neutralize resistance.

  • Consistent communication: Support the change process with consistent messaging from the highest levels of the organization, particularly targeting those who feel most threatened. Clear, honest communication can help alleviate fears and reduce resistance.

Turning resistance into support

With some skill and a bit of luck, it's sometimes possible to convert a defender of the status quo into an AI ambassador. By addressing their concerns, involving them in the process, and demonstrating the value of AI, they may become some of your strongest advocates for change.

Keep reading!

This was the first part of our insights from the project on implementing an AI assistant. Continue with Part Two (System) and Part Three (Relations).

If you feel there's something missing and want to share a comment, reach out to Slawek, who will happily discuss the subject with you!
