29% of employees admit to actively sabotaging their company's AI strategy. That number rises to 44% among Gen Z workers. According to Fortune, this sabotage is more than quiet quitting. It's entering proprietary data into public tools, using unapproved apps, or intentionally generating low-quality work to make AI look ineffective.

It is easy to dismiss this as generational anxiety or an "AI" problem. But that misses the root cause: a lack of change management. When employees resort to sabotage, it's a glaring indicator that leadership has failed to build the most critical element of transformation: trust.

Trust is the primary driver of AI adoption. The vision for an organization's AI journey cannot remain locked in the C-suite. Employees need to understand not just the "what" of AI adoption, but the "why" and the "how." "FOBO"—fear of becoming obsolete—is a direct result of poor communication and a lack of transparency about how roles will evolve alongside AI.

To build that alignment, leaders must:

🔑 Articulate Augmentation: Replace vague promises with specific role-evolution roadmaps. If employees don't see where they sit in a post-AI workflow, they will naturally protect the status quo.

🔑 Demystify Governance: Give employees clear guidelines on how to use AI safely, including the risks and consequences of entering PII and proprietary data into unauthorized tools.

🔑 Invest in Enablement: Offer adequate training so people understand exactly how to incorporate AI into their daily workflows. When employees feel supported and enabled, they hit the ground running.

You cannot force AI on a workforce, announce layoffs, and expect enthusiasm. You cannot expect workers to consistently churn out more value than ever while they feel they are on the chopping block. Nurturing employees is part of business AND AI strategy.

When we prioritize change management, AI stops being a source of anxiety and starts being a tool for collective success.
Implementing Technology for Employee Experience
-
AI Adoption: My Lived Experience

The FOMO is real. You look around and see smart people vibe-coding their way through the day—whipping up Chrome extensions, Figma plugins, reminder apps, blog engines, even lightweight CRMs that mimic Salesforce, all in minutes. The velocity is dizzying.

But let's pause and ask—is that adoption? Or is it just experimentation? At best, it's trial. A beautiful sandbox moment where curiosity meets capability. People are willing to throw money at AI tools the way they once did with crypto or NFTs. This shiny toy, however, is different—because it actually is useful. But useful doesn't mean embedded. Not yet.

Adoption depends on use cases. For me, AI is a thought partner. It helps me think better, write more clearly, and design ideas faster. That's my daily use case. For others, it's code generation. That's not my lane.

And the AI influencers? They make it seem like every workflow is flawless and frictionless. It's not. Debugging still eats your time. The black box overwhelms you, especially when you don't understand how things work inside. Your sense of agency often drains in the wrong direction—trying to fix things you didn't even build.

So what's holding adoption back?
• The real TAM is still a mirage. GenAI helps some workflows. Not all.
• You're often a lone wolf. Most peer groups aren't AI-ready. The conversations are shallow or non-existent.
• Enterprise barriers are real. You can't even install half these tools on your work laptop. Security, IT, procurement—none of them are on your side yet.
• Retention comes with reps. Tools are sticky only when they remember context, when they feel like they "know" you. That memory is the real magic.

I test ruthlessly. Same prompt across tools. I pick what resonates with the human in me. No single tool wins it all. They all cost roughly the same, so switching doesn't hurt. But staying? Staying takes trust and time. We're still in the earliest innings.
In time, AI will have its own disciplines—AI Ops, AI for Devs, AI for Designers, AI in Martech, Sales, and beyond. Slowly, workflows will evolve. Teams will restructure. Products will rebuild. The market will mature. And in five years, we won’t recognize the world we’re in. This isn’t hype. This is history being written—with prompts, patterns, and patience. #ai
-
LLMs massively empower individuals. Used well, they augment thinking and intentions to an extraordinary degree. The impact is far more muted and delayed for large organizations, which have entrenched ways of working that will take years to shift through careful negotiation of culture and governance.

AI doyen Andrej Karpathy has neatly laid out how genAI results, quite simply, in: power to the people. Transformative technologies have usually been developed and used by governments and the military, and then diffused to companies and individuals. For LLMs, everyone has access to the same quality of AI, largely free, in every language, to be applied immediately to whatever users want to do.

In contrast, there are many reasons why it will be far slower for organizations to get value:
➡️ LLMs offer broad but shallow capabilities, which are less valuable to organizations already equipped with deep domain experts.
➡️ Organizations already consolidate specialized expertise, so LLMs typically enhance existing workflows rather than enabling entirely new capabilities.
➡️ The improvements LLMs provide are incremental, making organizations slightly more efficient at tasks they already perform well.
➡️ Integrating LLMs into complex legacy systems and existing processes is technically challenging and resource-intensive.
➡️ Strict security, privacy, and regulatory requirements limit how freely LLMs can be used in corporate and government environments.
➡️ The risk of errors or hallucinations from LLMs is unacceptable in high-stakes or legally sensitive organizational contexts.
➡️ Organizational culture can resist the adoption of new tools, especially when they disrupt established roles or processes.
➡️ Decision-making in large organizations is often slow, with multiple layers of approval and governance slowing experimentation.
➡️ Retraining employees to use LLMs effectively at scale is a significant undertaking with cost and coordination challenges.
➡️ Bureaucracy, turf wars, and political dynamics within organizations often create resistance to rapid technological adoption.

Take advantage of power flowing to the people!
-
Chief AI Officers and other tech leaders reveal challenges. I recently moderated roundtable discussions with over 125 Chief AI Officers and leaders responsible for AI across both regulated and unregulated industries. A few key themes surfaced around the barriers to successful AI adoption:
• Budget constraints and demonstrating clear ROI
• Executive buy-in: leadership alignment remains a major hurdle
• Setting realistic expectations: AI is not an overnight solution, but a long-term strategy
• Employee fear: concerns about AI's impact on jobs create resistance
• Data: access, quality, and governance issues continue to slow progress
• Governance and regulatory compliance: navigating the complex landscape of rules and regulations presents additional challenges
• Hype vs. reality: there is a lot of AI hype to combat, and managing expectations around what AI can truly deliver is essential

It's clear that the job for Chief AI Officers, CTOs, and others leading these efforts is extremely challenging, requiring a delicate balance of technical knowledge, leadership, and strategy. Despite these obstacles, the energy and innovation in the AI space are undeniable. What did we miss?

#AIAdoption #ChiefAIOfficer #ArtificialIntelligence #AILeadership #EthicalAI #TechLeadership #AIInBusiness #AIInnovation #AIRegulation #DataGovernance #ExecutiveBuyIn #FutureOfAI #AITransformation #AIChallenges #AIForGood
-
The biggest barrier to AI adoption in 2026 is not technology. It is human readiness and workforce confidence.

Organisations accelerating their AI strategy should pause, not to slow innovation, but to make sure their people are ready. Effective AI adoption is never just about rolling out new tools. It is about building the right support systems, investing in training, strengthening communication and helping employees understand how AI fits into their roles.

For HR leaders, this means addressing the real concerns that surface during digital transformation. Employees want clarity on AI's impact on skills, job design, autonomy and security. Without this foundation, even the best AI initiatives struggle to gain traction.

The most effective AI transformation combines ambition with empathy. A human-centred change plan that upskills, reassures and actively involves employees will turn AI into a long-term strategic advantage rather than a short-lived experiment.

Leaders also need a clear AI success framework. How will AI create value? How will teams evolve? How will people continue to grow in an AI-enabled workplace?

Successful AI integration is not a checkbox exercise. It is a cultural transformation. For anyone leading people, this is the call for 2026. Move with purpose, move with care and support teams to adopt and adapt. AI becomes powerful only when people feel ready to use it.

#DrJaclynLee #AI #FutureOfWork #HRLeadership
-
Your AI initiatives are failing. And it's not because of your technology.

70-85% of AI projects fail to deliver value. But here's the thing:
→ Your algorithms work fine
→ Your data is clean
→ Your APIs connect perfectly

So why are you still stuck? Because you're trying to solve a people problem with technology. The real blocker isn't your tech stack. It's your culture.

The 3 silent killers of AI adoption:

1. The existential threat: "If AI can do my job, what happens to me?" (Employees resist what they can't control.)
2. The middle manager squeeze: You're asking them to implement tech that threatens their role, while still judging them by old metrics.
3. The incentive mismatch: Your AI recommends preventative shutdowns. Your managers get rewarded for uptime. (Guess which one wins?)

What actually works:
• Elevate people, don't eliminate them
• Create safe-to-fail zones for experimentation
• Put domain experts in control of AI implementation
• Align incentives with AI-enhanced productivity
• Address career anxieties with concrete transition plans

The bottom line: technical advantages last weeks; cultural advantages last years. Your competitors can copy your algorithms. They can't copy your culture.

What's harder in your organization: building a chatbot or getting people to actually use it? Your answer says it all.

I just published a deep dive on this in The AI Journal: "The Hidden Barrier to AI Success: Organizational Culture". It breaks down exactly how to build a culture that makes AI adoption inevitable (not just possible). Read the full article → https://aijourn.com/the-hidden-barrier-to-ai-success-organizational-culture/

Want more insights on the human side of AI transformation?
🔔 Follow me for weekly posts on AI + organizational psychology
📧 Join other informed leaders getting my "AI + Human Edge" newsletter for frameworks like this

What's been your biggest barrier to AI adoption? Technology or people? Drop a comment below 👇
-
"AI in the Workplace: The Buzzword Everyone's Talking About (But Nobody's Ready For)"

AI is the shiny new toy in every workplace. Leaders rave about it, teams scramble to roll out tools, and everyone's racing to be "AI-ready." But let's be honest: behind the buzz, most people are thinking:
⇢ "Will this replace my job?"
⇢ "How much will this cost (and how hard will it be to learn)?"
⇢ "Why are we even doing this?"

Here's the problem: AI adoption isn't failing because the technology isn't ready; it's failing because the people aren't. It's like rolling out a self-driving car and expecting people to trust it without explaining how it works—or what's in it for them.

At its core, this isn't a tech problem. It's a change management problem. Organizations focus too much on the tool and too little on the people it's meant to help.
⇢ AI feels like that treadmill you bought during lockdown: amazing in theory, intimidating in practice.
⇢ No one wants to admit they're unsure how to use it—or worse, that they're scared of it.

The result? Resistance, skepticism, and a lot of wasted potential.

So what should we do? The secret to successful AI adoption isn't tech. It's trust. If you are the leader, here's what you can do:

➡️ Start with the "why" (for them)
⇢ Don't say: "This tool will transform operations."
⇢ Say: "This tool will save you hours on admin work so you can focus on strategy—or finish early."

➡️ Acknowledge the fear
⇢ "AI isn't here to replace you—it's here to handle the boring, repetitive stuff so you can do what humans do best: think, create, and lead."

➡️ Make it about people, not tech
⇢ Instead of just explaining what AI can do, show how it fits into their day-to-day.
⇢ Example: "Here's how you can use this tool to draft a report in minutes, saving you time for more meaningful work."

➡️ Celebrate the small wins
⇢ "The support team resolved 20% more tickets last week using AI, freeing up time for complex customer issues."

Small victories build momentum—and trust.
AI isn't just a new tool—it's a cultural shift. If your team doesn't see what's in it for them, they won't adopt it. Always ask yourself:
⇢ "Does this solve their real frustrations?"
⇢ "Have I shown them why it matters to them?"
⇢ "Are we building their confidence to succeed?"

Remember: AI doesn't fail because it's too advanced. It fails because change isn't managed well.

BTW: That's me—snapped at an AI camera booth, channelling my future avatar. Don't worry, I'm still human (for now). But seriously, if this version of me could help manage change better, I'd take it.

#changemanagement #artificialintelligence #ai #adoption
-
Employee engagement surveys are broken. There, I said it.

Companies spend thousands each year on surveys that promise insight into how their people feel. Yet the results are often inaccurate, incomplete, and unreliable. Why?

1. Mistrust of anonymity
Employees open up when surveys feel safe. But 45% think HR can track their answers, so they hold back. Reframe surveys as confidential and explain how the data is used.

2. Outdated survey design
Generic surveys miss the mark. Every company is different, and so should its questions be. Tailor surveys to your culture and goals to get useful insights.

3. Timing matters
Annual surveys? Outdated. Engagement shifts all year. Regular pulse checks give a clearer picture.

4. The trust gap
Nothing kills engagement like ignored feedback. If employees don't see change, they stop caring. Share results, communicate next steps, and follow through.

How do we fix it?
- Run shorter, more frequent pulse surveys.
- Focus on patterns, not individual responses.
- Follow up with action and communicate results.

Employee engagement is built on trust, not simply on collecting data. Are your surveys building it?
-
A really simple and effective tool we've been using with customers: Rose - Thorn - Bud. We use it to understand how employees experience performance management. Here's how it works and why it's so effective.

The exercise breaks down the experience into three categories:
🌹 Rose: What's working well
🌵 Thorn: Challenges - what's not working
🌱 Bud: Ideas - what could be explored for improvement

By categorizing feedback into these three areas, we quickly get a clear picture of:
1️⃣ What employees appreciate about the current system (Roses).
2️⃣ The obstacles or frustrations they face (Thorns).
3️⃣ Potential improvements or innovations they look forward to (Buds).

Doing it on a (virtual) whiteboard with sticky notes makes it safe for everyone to contribute. After the stickies have been added, we give everybody three votes per category, which they use to upvote the stickies that resonate most. Then we go deep and unpack the top-voted stickies.

The activity can be completed in less than an hour. The conversations are eye-opening, and the insights and ideas are phenomenal, every time! We love this for performance management, but there's no reason it couldn't be used in other PX initiatives.