
Top 10 Missteps in HR’s Use of AI—and How to Avoid Them


HR and AI

AI is rapidly transforming human resources, from automating recruitment to personalizing employee development. Used wisely, AI can save time, improve decision-making, and enhance employee experiences. However, if implemented carelessly, AI can lead to serious missteps that undermine HR’s effectiveness and erode trust. Below we outline the top 10 pitfalls HR teams face when leveraging AI, along with best practices to steer clear of these issues. This guide is written for HR professionals seeking to harness AI’s benefits responsibly and ethically.


1. Overreliance on AI Without Human Oversight


The Misstep: Enthralled by AI’s efficiency, some HR departments rely too heavily on algorithms to make decisions – from screening résumés to answering employee queries – with minimal human intervention. This overreliance assumes the AI is infallible and can replace human judgment entirely. In reality, AI often lacks the context, empathy, and nuanced understanding that experienced HR professionals bring. For example, a Harvard Business School study found that automated applicant tracking systems (ATS) were filtering out millions of qualified candidates due to rigid keyword criteria (goco.io). When AI is left to make such decisions unchecked, great candidates or important exceptions can fall through the cracks.


Best Practices: AI should be an assistant, not the final arbiter. To avoid this pitfall, keep humans in the loop for important HR decisions. For instance, use AI to shortlist applicants, but have a recruiter review the list for overlooked talent. Regularly audit AI-driven outcomes – if an algorithm consistently rejects candidates who later turn out to be successful in other companies, adjust your filters. Use AI for efficiency, but retain human judgment for qualities that algorithms can’t grasp (like potential, attitude, or growth capability). In short, treat AI as a tool to augment human decision-making, not a replacement for it.


2. Ignoring Bias and Fairness Issues in Algorithms


The Misstep: Assuming an AI system is “neutral” or automatically fair can be a critical mistake. AI models learn from historical data – and if that data contains human biases, the AI will likely reproduce or even amplify those biases. In HR, this can lead to discriminatory outcomes in hiring, promotions, or evaluations. One high-profile example is Amazon’s experimental hiring algorithm that began favoring male applicants for technical roles and downgrading résumés that included the word “women’s” (as in “women’s soccer team”) (reuters.com). The algorithm had trained on the company’s past hiring patterns, which reflected male bias in tech positions, resulting in a tool that systematically discriminated against women. Amazon eventually scrapped the tool, but it provided a stark lesson in AI bias. Beyond gender, algorithms can unfairly screen out candidates from certain schools, zip codes, or ethnic groups if those patterns existed in the training data. Additionally, regulators have warned that careless use of AI can violate equal opportunity laws – for instance, the U.S. Department of Justice cautioned that AI hiring tools may unlawfully discriminate against people with disabilities (justice.gov).


Best Practices: Proactively address bias at every stage of your AI project. Start by choosing AI tools that offer transparency and bias mitigation features (such as bias testing or adjustable algorithms). Conduct regular bias audits of AI decisions – for example, check if rejection rates in recruitment differ significantly by gender or race, and investigate the reasons. Use diverse, representative data to train your models, so they learn from a broad set of examples rather than reinforcing one narrow profile. It’s also wise to involve a diverse group of stakeholders in testing AI outputs. If an algorithm flags a concern (say, it keeps rejecting older candidates or those with non-traditional career paths), have human reviewers analyze those cases. Finally, provide a channel for candidates or employees to appeal or inquire about AI-driven decisions. This human feedback loop not only builds trust, but also helps catch biased outcomes. Remember: AI is only as fair as the data and design behind it, so vigilance is key (reuters.com).
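To make the bias-audit step concrete, here is a minimal sketch of a “four-fifths rule” check on screening outcomes. It assumes a simple export of applicant decisions with hypothetical column names (“gender”, and “selected” as a 0/1 flag); a real audit would cover every relevant protected category and be reviewed with legal and analytics specialists.

```python
# Minimal sketch: flag groups whose selection rate falls below 80% of the
# highest group's rate (the "four-fifths rule" used in adverse-impact analysis).
# The file and column names are illustrative placeholders, not a vendor schema.
import pandas as pd

applicants = pd.read_csv("screening_outcomes.csv")  # one row per applicant

# Selection rate per group: share of applicants the AI advanced to the next stage
rates = applicants.groupby("gender")["selected"].mean()
highest = rates.max()

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

The same pattern can be repeated for rejection rates at each stage of the hiring funnel, which often reveals exactly where a disparity is introduced.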


3. Lack of Transparency with Candidates and Employees


The Misstep: Failing to inform people how AI is being used in HR processes is a recipe for confusion and mistrust. Many organizations introduce AI into hiring or HR management behind the scenes. Candidates might be rejected by an algorithm without ever knowing an AI was involved, and employees may suspect a “black box” is deciding promotions or performance evaluations with no explanation. This opacity creates frustration and skepticism. In recruiting, for example, applicants often get boilerplate rejection emails and have no idea if a human or an AI made the call – leaving them feeling helpless about how to improve. Likewise, employees may hear that “an algorithm” determined their engagement survey scores or recommended them (or not) for training programs, breeding anxiety if they don’t understand the criteria. In fact, a recent global survey by HR solutions firm UKG found 54% of employees have “no idea” how their company uses AI, even though leadership is deploying it (greatplacetowork.com). The same study showed that transparency can dramatically improve acceptance: 75% of workers said they’d be excited to use AI if their employer was more open about how AI would improve their workflow and how exactly it was being used (greatplacetowork.com). The message is clear – hiding AI’s role creates distrust, whereas openness invites buy-in.


Best Practices: Make transparency a core principle of your AI strategy. Internally, communicate with your workforce about any new AI tool before it’s implemented. Explain: What decision or process is the AI helping with? How does it work (in understandable terms)? What data does it use? Crucially, clarify that the AI is there to assist, not make unilateral judgments. For external talent, if AI plays a role in hiring – say you use a chatbot for screening or video interview analytics – disclose that to candidates. Let applicants know, for example, “Your application will be reviewed by an AI system trained to look for X, and all AI recommendations are verified by our HR team.” Whenever possible, provide feedback. If a candidate is rejected by an AI-driven process, give a brief explanation or offer them the chance to request feedback. Some companies even allow candidates to opt out of AI screening and have a human review their application (which can be a good practice for compliance as well). Finally, foster an internal culture that views AI as a partner – encourage employees to ask questions about AI tools and provide input. The more people understand the “why” and “how” behind AI decisions, the more they will trust and support them.


4. Poor Data Quality and Preparation (“Garbage In, Garbage Out”)


The Misstep: Diving into AI without ensuring your HR data is accurate, consistent, and up-to-date is like building a house on sand. AI systems live on data – they learn from it and make decisions based on it. If that data is flawed, the outcomes will be flawed. HR data is often spread across multiple systems (HRIS, ATS, performance management, etc.), riddled with inconsistencies (e.g. different job titles for the same role, or missing performance records), and reflective of outdated practices. Implementing AI on top of such data can produce nonsensical or biased results. For instance, if an algorithm is fed years-old compensation data, it might recommend salaries that are no longer competitive. Or if an employee database has many blank fields (say, missing reason-for-leaving information for past employees), an AI attrition model might draw wrong correlations. The old adage “garbage in, garbage out” very much applies – one study even noted that HR executives themselves lost confidence in AI decisions when they realized the messy state of the underlying data. In short, neglecting data quality is a silent misstep that can undercut even the best AI initiatives.


Best Practices: Before rolling out AI, get your data house in order. Conduct a data audit of your HR systems: identify inaccuracies, duplicates, or biases in the data. Standardize data definitions (for example, ensure all departments use the same codes for employment status, etc.). Cleanse historical data – correct errors, and fill gaps where possible. If you’re using AI for something like predictive analytics or hiring recommendations, make sure the training data is relevant to current conditions (avoid training only on data from decades ago that no longer reflect your workforce or talent market). You should also establish ongoing data governance. This means assigning responsibility for maintaining data quality and periodically reviewing the datasets that AI systems draw from. Some companies set up cross-functional data councils to oversee data used in AI. Additionally, incorporate external benchmarks or data sources carefully – if you pull in market data, verify its accuracy and fairness. By investing the time in data preparation and governance, you dramatically improve the odds that your AI will yield reliable and unbiased insights. This foundational step isn’t glamorous, but it separates AI projects that deliver value from those that disappoint.
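As a starting point for the data audit described above, the sketch below runs a few basic checks on an HR export: duplicates, missing fields, inconsistent category labels, and stale records. The file and column names (“hris_export.csv”, “employee_id”, “employment_status”, “last_updated”) are illustrative assumptions about what such an export might contain, not a specific system’s schema.

```python
# Minimal sketch of a pre-AI data audit on an HR system export.
import pandas as pd

df = pd.read_csv("hris_export.csv")

# 1. Duplicate employee records
dupes = df[df.duplicated(subset="employee_id", keep=False)]
print(f"Duplicate employee rows: {len(dupes)}")

# 2. Missing values per column (e.g. blank reason-for-leaving fields)
print(df.isna().sum().sort_values(ascending=False).head(10))

# 3. Inconsistent labels for what should be the same category
print("Distinct employment-status labels:", sorted(df["employment_status"].dropna().unique()))

# 4. Stale records: not updated in over two years
df["last_updated"] = pd.to_datetime(df["last_updated"], errors="coerce")
stale = df[df["last_updated"] < pd.Timestamp.now() - pd.DateOffset(years=2)]
print(f"Records not updated in 2+ years: {len(stale)}")
```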


5. Neglecting Compliance and Ethical Responsibilities


The Misstep: Moving ahead with AI in HR without considering legal and ethical implications can land an organization in serious trouble. Employment laws and regulations apply to AI decisions just as they do to human decisions. If an AI inadvertently screens out people over 50 or certain minority groups, it could violate Equal Employment Opportunity (EEO) laws. If an algorithm makes promotion recommendations that systematically favor one gender, that’s a potential discrimination lawsuit waiting to happen. Beyond bias, privacy laws come into play when AI handles personal employee data. In the EU and other jurisdictions, laws like the GDPR grant individuals rights when decisions are made about them by algorithms (including the right to an explanation of an automated decision). Regulators are increasingly focusing on AI in hiring and HR: in 2023, New York City implemented a first-of-its-kind law requiring annual bias audits of AI hiring tools used by any employer operating in the city (goco.io). This means if you use a résumé screening AI or similar tool, you must have an independent auditor check it for discriminatory impact each year. Other localities and states are considering similar rules. If HR deploys AI without accounting for these requirements – or fails to follow existing guidelines (like the U.S. EEOC’s guidance on AI in employment) – the organization could face fines, litigation, and reputational damage. Ethically, there’s also the duty to treat employees and candidates fairly and to avoid overly intrusive AI practices (for instance, constant AI monitoring of employees could be seen as unethical even if technically legal in some places).


Best Practices: Bake compliance into your AI project plan from day one. Consult with legal experts or counsel who understand both HR and AI. When evaluating AI vendors for HR, inquire about how their tools meet EEO standards, and ask if they have been audited for bias. Establish a process for regular bias audits (even if not yet legally mandated, it’s a best practice). Document your AI decision processes – if you ever need to explain how a particular decision was made (to a regulator or in court), you’ll want to have records of how the model works and how you validate it. Ensure transparency and the option for human override, especially for high-stakes decisions like hiring or firing. For instance, if an AI recommends termination based on performance data, have a human review the case and consider extenuating factors to avoid wrongful dismissal. Regarding privacy: be cautious with employee data. If you’re analyzing things like emails or Slack messages to gauge engagement (some AI tools offer this), ensure it complies with privacy laws and get employee consent if required. Any AI that touches personal data should have robust security (encryption, access controls) and you should have agreements in place with vendors about data use and protection. In summary, treat AI in HR with the same rigor as any HR practice – follow the law, uphold ethical standards, and don’t let the “wow” factor of AI lure you into dropping your compliance guard (goco.io).


6. Inadequate Training and Change Management


The Misstep: Handing sophisticated AI tools to an HR team (or workforce) that isn’t trained in their use can backfire quickly. If HR professionals don’t understand how an AI tool works – its purpose, limitations, and the basics of its operation – they might misuse it or become over-reliant on it. Conversely, lack of training can breed mistrust: recruiters or managers might ignore valuable AI insights simply because they don’t trust or comprehend them. A knowledge gap is evident in many organizations. In one survey, only about 1 in 8 HR professionals felt fully knowledgeable about using AI in talent acquisition (goco.io). This gap means many HR staff are learning AI on the fly (or not at all), which increases the chances of errors. Additionally, organizations often underestimate the change management aspect of introducing AI. They roll out a new AI-driven system without adequately preparing the team or adjusting processes. The result can be confusion (“What do we do with this new chatbot?”), resistance (“This algorithm can’t tell me how to hire!”), or inconsistent usage that nullifies the tool’s benefits. A real-world illustration comes from IBM’s experience: when they first introduced an AI chatbot (“AskHR”) to assist managers, uptake was low because employees weren’t comfortable with it. Managers continued calling HR staff as usual, essentially bypassing the bot. IBM then tried to force adoption by requiring certain inquiries to go through the chatbot – and employee satisfaction with HR plummeted, because people felt they had lost a helpful human touch with no clear gain (inkl.com). The initial mistake was treating the implementation purely as a tech install rather than a change that needed behavior adoption and training.


Best Practices: Invest in comprehensive training for your HR team whenever you implement an AI solution. This training shouldn’t just cover which buttons to click – it should explain how the AI makes its decisions, what its strengths and weaknesses are, and how staff can interpret its outputs. For example, if you deploy an AI tool that ranks job applicants, train recruiters on what the ranking is based on, and emphasize that it’s a starting point, not an absolute truth. Encourage questions and even healthy skepticism in training sessions, so people feel comfortable challenging or double-checking the AI when needed. Alongside skills training, focus on change management for the organization. Communicate early and often about why the AI is being introduced – how it will make work easier or more effective – and acknowledge employees’ concerns (many fear AI could replace their jobs or make work impersonal). Solicit feedback and involve some end-users in the pilot phase to act as change champions. When rolling out the AI, do it gradually if possible. Perhaps allow a transition period where staff can use the traditional method or the AI, and gather feedback on issues. In IBM’s case, they learned to pair the chatbot with a gradual adoption strategy and to keep human support for complex issues, which eventually led to widespread acceptance. The lesson: Upskill your people and manage the human side of the rollout with as much care as the technology, and you’ll avoid chaos and get much better results (businesswire.com).


7. Over-Automating and Losing the Human Touch


The Misstep: In a rush to automate, some HR teams go too far and remove the human element from processes that truly require it. HR is, at its heart, about humans – their motivations, feelings, and unique situations. If every interaction becomes a bot or every decision is algorithmic, employees and candidates can feel alienated. We’ve seen companies attempt to handle employee relations purely through AI (e.g., an AI tool giving automated answers to sensitive HR questions, or chatbots handling employee grievances) and then scramble to reverse course when they realize morale is dropping. Even in recruitment, an overly automated experience – like only interacting with chatbots, automated interview schedulers, and form emails – can turn great candidates off because the process feels cold and impersonal. The misstep here is failing to identify the moments where empathetic, personal interaction is crucial. The result can be dissatisfaction, distrust, and a hit to company culture. Recall the earlier IBM example: by initially cutting off live HR support in favor of an AI assistant, they triggered a wave of frustration until they reintroduced the option of speaking with a human for complex issues (inkl.com). Similarly, imagine an employee going through a personal crisis trying to navigate leave or benefits – a chatbot might give correct policy info, but it can’t truly listen or convey compassion. Over-automation also can erode the sense of community and company values if employees feel they are just interacting with machines rather than people who care.


Best Practices: Find the right balance between automation and human touch. Map out your HR processes and flag the steps that benefit from human empathy or insight. For routine, transactional tasks (like updating an address, checking PTO balance, simple FAQs), AI and self-service can be fantastic – employees often appreciate quick, on-demand answers. But for consequential or emotional topics (performance reviews, conflict resolution, personal leave requests), ensure there’s a human in the loop or at least easily accessible. One best practice is the “AI augmentation” approach: use AI to handle the repetitive 80%, and free up HR staff to focus on the 20% of cases that truly need a personal touch. For instance, let the AI chatbot handle common Tier-1 questions, but have it automatically escalate to a human rep if the query is about harassment, a complex policy interpretation, or if the user is clearly upset (some AI can even detect sentiment). Make sure every automated message or email has an option like, “Need more help? Contact Jane in HR directly.” During hiring, consider injecting a human interaction early – maybe a brief personal note or call to high-potential candidates – so they feel seen as individuals, not just data points. Regularly seek feedback from employees and applicants about their experience. If people report that a process felt impersonal or frustrating, dial back the automation in that area. Remember, the goal of HR automation is to enhance the human experience (by removing drudgery and speeding up service), not to eliminate humanity from the workplace. By preserving the human touch where it matters most, you’ll maintain trust and engagement, while still reaping efficiency gains from AI.
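One way to picture the escalation rule described above is a simple routing check the chatbot runs before answering: hand the conversation to a human whenever a sensitive topic or clearly negative sentiment appears. The keyword list and the placeholder sentiment_score() helper below are illustrative assumptions, not features of any particular chatbot product.

```python
# Minimal sketch of an HR chatbot escalation rule.
SENSITIVE_KEYWORDS = {"harassment", "discrimination", "grievance", "bereavement", "medical leave"}

def sentiment_score(message: str) -> float:
    """Placeholder sentiment in [-1, 1]; a real deployment would call a licensed model."""
    negative_words = {"angry", "unfair", "upset", "frustrated"}
    return -1.0 if any(w in negative_words for w in message.lower().split()) else 0.0

def should_escalate(message: str) -> bool:
    text = message.lower()
    if any(keyword in text for keyword in SENSITIVE_KEYWORDS):
        return True                      # sensitive topic: always route to a human
    if sentiment_score(message) < -0.5:  # clearly upset user
        return True
    return False

print(should_escalate("How do I check my PTO balance?"))              # bot can answer
print(should_escalate("I want to report harassment by my manager"))   # human takes over
```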


8. Overlooking Soft Skills and Cultural Fit in Hiring


The Misstep: HR teams that lean heavily on AI for hiring assessments can inadvertently prioritize candidates who look best on paper, at the expense of those with the right soft skills or cultural fit. Many AI hiring tools focus on easily quantifiable data – resume keywords, years of experience, education, assessment scores, etc. These are important, but they don’t paint the full picture of a candidate. Qualities like leadership potential, teamwork, creativity, adaptability, and emotional intelligence are harder for AI to gauge. If HR becomes fixated on optimizing algorithm scores, they might end up with hires who check all the technical boxes but falter in real-world team dynamics. For example, an AI might rank a developer candidate highly because they have 10 years of experience and every certification under the sun; yet that candidate might lack collaboration skills or innovation – something a human interviewer would catch. There have been cases where companies realized their AI screening was weeding out people with unconventional but rich backgrounds (like entrepreneurs or veterans shifting careers) simply because their resumes didn’t fit the mold that the algorithm was trained on. This misstep is essentially a blind spot: by trusting AI’s quantitative measures too much, organizations can miss out on the very human qualities that drive long-term success. As one HR commentator quipped, “People hire people” – meaning that ultimately it’s the personal traits and cultural contributions that make someone a great hire, and those can’t be fully captured in an algorithm.


Best Practices: To avoid this pitfall, ensure your recruitment process blends AI assessment with human judgment focused on soft skills. Use AI for what it’s good at: initial screening for minimum qualifications, scheduling interviews, maybe even basic skills tests. But always include interviews or live assessments that evaluate communication, attitude, and culture fit. Train hiring managers to value these elements – sometimes a candidate might score slightly lower on an AI test but show exceptional leadership or creativity in person. Consider structured interviews or behavioral questions that allow candidates to demonstrate problem-solving, teamwork stories, and adaptability. Some companies use group interviews or collaborative exercises to see soft skills in action. If your AI includes video or language analysis (some claim to analyze tone or facial expressions), take those results with caution – they are often based on debatable science. It’s better to have a human panel assess a candidate’s interpersonal skills through conversation. You might also adjust your AI to account for diverse backgrounds: for instance, instruct it to flag non-traditional candidates for human review rather than simply discard them. Calibrate your hiring AI with human feedback – after hiring someone, compare their long-term performance with how the AI initially ranked them. This can highlight if your algorithm undervalued certain traits. In summary, design your hiring funnel such that AI handles the volume and initial filtering, but humans ultimately make the qualitative judgments. By doing so, you leverage AI’s speed while still selecting people who will thrive in your unique culture (goco.io).
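The calibration idea mentioned above – comparing how the AI ranked people with how they actually performed – can be as simple as a rank correlation plus a look at the “near misses.” The sketch below assumes a hypothetical export of hires with their original screening score and a 12-month performance rating; the file and column names are illustrative.

```python
# Minimal sketch: does the AI screening score actually predict later performance?
import pandas as pd

hires = pd.read_csv("hires_with_outcomes.csv")  # columns: ai_screen_score, perf_rating_12mo

# Rank correlation between the AI's initial score and the 12-month rating
corr = hires["ai_screen_score"].corr(hires["perf_rating_12mo"], method="spearman")
print(f"Spearman correlation between AI score and performance: {corr:.2f}")

# Strong performers the AI nearly rejected - a sign of traits the model undervalues
near_misses = hires[(hires["ai_screen_score"] < hires["ai_screen_score"].quantile(0.25))
                    & (hires["perf_rating_12mo"] >= 4)]
print(f"Strong performers ranked in the AI's bottom quartile: {len(near_misses)}")
```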


9. Implementing AI Without a Clear Strategy or Alignment to Values


The Misstep: Jumping on the AI bandwagon without a well-defined strategy is a common mistake. Some HR departments implement AI tools simply because “everyone’s doing it” or due to pressure from upper management to appear innovative – but without a clear problem to solve or a goal to achieve. This can lead to disjointed projects that don’t deliver real value. For example, adopting a fancy AI analytics platform without identifying key HR questions it should answer, or layering a chatbot into HR communications without thinking through which queries it should handle versus HR staff. Even if a tool is powerful, if it’s not solving a real pain point or aligning with the company’s strategy, it can become a costly distraction. Another aspect of this misstep is not aligning AI use with the organization’s core values and culture. A company that prides itself on a personal touch and employee-centric culture could undermine those values by deploying AI in a cold, non-transparent way (linking back to the missteps above). Employees will sense when an AI initiative conflicts with the company’s stated ethos, leading to cynicism. Essentially, the misstep is implementing AI as a shiny new toy rather than as an integral part of your HR mission. The results can include low adoption (because employees or HR staff don’t see the point), wasted budget, or even cultural damage.


Best Practices: Start with the “why” and the “what” before the “how” of AI. Ask: What HR challenge are we trying to address? Is it reducing time-to-fill in recruiting, improving employee engagement, identifying flight-risk employees before they leave? Once you have a clear objective, evaluate if AI is actually the right tool for it (sometimes process improvements or simpler analytics might suffice). If yes, define how you’ll measure success – e.g., “We aim to cut average hiring time by 30% within a year” or “Increase internal promotion rates by using AI to identify talent.” This creates a focused strategy where AI serves a purpose. Next, ensure leadership and stakeholders are aligned on the use of AI and that it fits your culture. Communicate how the AI initiative supports the company’s mission and values. For instance, “We value innovation, so we’re using AI to free our team from admin tasks to focus on creative, strategic work,” or “We put employees first, so we’re introducing an AI career coach to give employees more personalized growth suggestions.” Frame the narrative that AI is a tool to enhance your values, not replace them. Also, involve end-users in planning – get input from HR team members or employees on what they would find helpful or what concerns they have. This ensures the AI solution is grounded in reality and gains early buy-in. Lastly, avoid the big-bang approach. Pilot new AI solutions in a department or two, get feedback, and iterate. This strategic, values-driven approach will help AI initiatives gain traction and deliver meaningful results. As one HR leader put it, new technology should not change who you are as an organization – it should support who you are (greatplacetowork.com). By aligning AI with your core purpose and strategy, you turn it from a buzzword into a true enabler of HR excellence.


10. Neglecting Security and Privacy Risks


The Misstep: Implementing AI in HR without robust data security and privacy safeguards is a serious oversight. HR data is some of the most sensitive information in any organization – personal identifiers, salaries, performance reviews, even medical or background check information. When you bring AI into the mix, often this data is being processed in new ways or even leaving your company’s four walls (e.g., cloud-based AI services). A big misstep is assuming that vendors or tools have security covered, or not updating your policies around data use. We’ve seen what can go wrong: in 2023, Samsung engineers accidentally leaked confidential company information by inputting sensitive code into ChatGPT (a generative AI tool) (techcrunch.com). Because ChatGPT’s backend stored that data, it created a privacy breach – Samsung’s trade secrets were essentially on an external server. This prompted Samsung to urgently ban employees from using such AI tools until they could implement proper controls (techcrunch.com). It’s a cautionary tale that when using external AI, employees may unknowingly expose data. Even within internal systems, an AI that aggregates employee data could become a target for hackers if not well-protected. Neglecting to update access controls is another common issue – perhaps more people or systems can now access certain HR data via the AI than before, expanding the risk surface. Additionally, privacy expectations (and laws) require that personal data is used for specific purposes and kept only as long as needed. If AI models retain data or use it in ways not anticipated, you might be violating those principles. In short, treating AI in HR as “just another IT tool” without due diligence on security/privacy can lead to breaches of data and trust.


Best Practices: Approach AI in HR with a security-first mindset. When evaluating AI vendors, thoroughly vet their security protocols: encryption of data at rest and in transit, access controls, audit logs, compliance with standards like SOC 2 or ISO 27001, etc. Work closely with your IT and cybersecurity teams to assess any new AI system’s architecture. Set clear policies for employees on the use of external AI tools – for example, if you allow ChatGPT or similar at work, issue guidelines like “do not input any confidential or personally identifiable HR data into these tools.” Samsung’s response – banning external AI until safeguards were in place – is one approach, but you can also find middle ground by using versions of tools that run internally or have data privacy options. Also, consider anonymizing or masking personal data when using AI for analytical purposes. If you’re testing an AI on past HR data, strip out names or IDs if possible. Update your data privacy notices to employees to cover AI usage – transparency here is part of ethical practice. Additionally, ensure that any AI that makes decisions is not operating in a security vacuum. For example, an AI that detects “insider threats” by scanning messages should itself be secured so it doesn’t become an entry point for hackers. Keep data minimization in mind: don’t feed AI more data than it needs. If your engagement AI only needs survey results, don’t also give it payroll info unless necessary. Regularly review who (or what systems) have access to AI-processed HR data and adjust roles as people change jobs. Finally, have an incident response plan specifically for AI-related data issues. If a leak or breach occurs due to AI, be ready to communicate and remediate. By being vigilant about protecting sensitive HR data and respecting privacy, you’ll avoid turning your innovative AI project into the next headline-making security fiasco.
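For the anonymization point above, a small masking pass before any data leaves your HR systems can remove direct identifiers and replace the employee ID with a salted hash, so records stay linkable without exposing who they belong to. This is a minimal sketch with illustrative column and file names; production pseudonymization (salt storage, re-identification risk, retention rules) should be designed with your security and legal teams.

```python
# Minimal sketch: mask personal identifiers before sharing HR data with an AI tool.
import hashlib
import pandas as pd

SALT = "store-and-rotate-this-secret-outside-the-dataset"

def pseudonymize(value: str) -> str:
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

df = pd.read_csv("hr_extract.csv")

# Drop direct identifiers entirely
df = df.drop(columns=["name", "email", "home_address"], errors="ignore")

# Replace the employee ID with a stable pseudonym
df["employee_id"] = df["employee_id"].astype(str).map(pseudonymize)

df.to_csv("hr_extract_masked.csv", index=False)
print("Masked dataset written:", len(df), "rows")
```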


AI is a tool, not a replacement

Conclusion


Artificial intelligence holds enormous promise for HR – from eliminating drudge work to uncovering insights that help employees thrive. By automating routine tasks, AI lets HR professionals focus on strategic, human-centered work. It can improve consistency in decisions and even help flag issues that busy managers might miss. However, as we’ve detailed, the deployment of AI in HR must be handled with great care. Each misstep – whether it’s allowing bias to creep in, alienating employees through lack of transparency, or letting compliance and security lapse – can have serious repercussions. The good news is that all these pitfalls are avoidable with foresight and responsible practices. By combining the strengths of AI (speed, data processing, pattern recognition) with the strengths of human judgment (ethics, empathy, creativity), HR teams can achieve better outcomes than either alone. In essence, think of AI as a powerful ally – but one that needs guidance, oversight, and a human touch to reach its full potential.


HR leaders who follow the best practices outlined above will position their organizations to reap the benefits of AI while upholding fairness, privacy, and the core values that make their culture unique. In doing so, they turn AI from a risky experiment into a robust asset for their people strategy. The companies that succeed with HR AI will be those that never forget the “human” in Human Resources – using technology not as a panacea, but as a tool to support and enhance the workforce. As the industry saying goes, “AI won’t replace HR professionals, but HR professionals who use AI may replace those who don’t.” By learning from these common missteps and implementing AI thoughtfully, you can ensure that your HR team is on the winning side of that equation – leveraging innovation to create more efficient, effective, and human-centric HR practices.


Bibliography

  1. James Vincent, “Automated hiring software is mistakenly rejecting millions of viable job candidates,” The Verge, Sept. 6, 2021. (Summary of a Harvard Business School report on how ATS filters exclude many qualified candidates.)

  2. Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Oct. 11, 2018. (Case study of Amazon’s AI hiring experiment that developed gender bias.)

  3. U.S. Department of Justice, “Justice Department and EEOC Warn Against Disability Discrimination,” Press Release, May 12, 2022. (DOJ and EEOC guidance highlighting how AI tools can violate the ADA by screening out people with disabilities.)

  4. Phil Albinus, “What HR everywhere needs to know about NYC’s new AI bias law,” HR Executive, Oct. 3, 2022. (Overview of New York City’s law requiring annual bias audits for AI hiring tools, and its implications for employers.)

  5. Ted Kitterman, “5 Mistakes That Undermine Employee Trust in an AI-Powered Workplace,” Great Place To Work (blog), Dec. 18, 2023. (Discusses transparency, change management, and trust issues when implementing AI, based on a UKG survey of 4,000 employees.)

  6. Anna Coucke, “AI Recruitment Mistakes: Top Pitfalls and How to Avoid Them,” GoCo HR Blog, Feb. 28, 2025. (HR tech blog article covering common AI pitfalls in hiring, including overreliance on automation, bias, lack of training, and soft skills gaps.)

  7. General Assembly HR AI Survey – Press Release, Business Wire, July 15, 2025. (Reports that 82% of HR professionals use AI but only 30% have received job-specific AI training, emphasizing the need for better AI education in HR.)

  8. Emma Burleigh, “The CHRO of IBM details a huge mistake in getting its workforce onboard with AI,” Fortune (via inkl.com), July 2024. (Recounts how IBM’s initial roll-out of an HR chatbot caused employee dissatisfaction and the lessons learned to improve adoption.)

  9. Kate Park, “Samsung bans use of generative AI tools like ChatGPT after internal data leak,” TechCrunch, May 2, 2023. (Describes how Samsung temporarily banned AI tools after engineers accidentally leaked sensitive data to ChatGPT, citing security and privacy concerns.)

  10. Great Place To Work Editors, “Losing touch with your organization’s purpose (in the context of AI adoption),” in 5 Mistakes That Undermine Employee Trust in an AI-Powered Workplace, Dec. 2023. (Advice on aligning AI initiatives with company values and purpose to maintain trust and authenticity.)

 
 
 
