Littal Shemer Haim

People Analytics, HR Data Strategy, Organizational Research – Consultant, Mentor, Speaker, Influencer

AI for HR – Five themes that you must understand (Part 2)

In part 1 of this article, I called on HR leaders to start the journey to AI by understanding five themes: What is AI – and what isn't it? How accurate is AI? Why is AI prone to bias? How should people react to AI? How do legal frameworks deal with AI? In this part of the article, I discuss the last two themes.
Photography by Littal Shemer Haim ©
(Reading Time: 4 minutes)

The last module of “The People Analytics Journey” – the introductory course for HR professionals that I taught in Tel Aviv – was dedicated to the future of People Analytics. We discussed the question: will People Analysts always be human? I also offered some practical guidelines on Procurement and Ethics. As HR practitioners still lag in their understanding of analytics and AI, I think this module illustrated the path needed to close the gap, without the math and the coding, of course.

In part 1 of the article AI for HR – Five themes that you must understand, I emphasized that the realm of work is changing, as every stage of the employee lifecycle is affected by AI. To face the difficulties we encounter, I called on HR leaders to start by understanding five themes: 1) What is AI – and what isn’t it? 2) How accurate is AI? 3) Why is AI prone to bias? 4) How should people react to AI? 5) How do legal frameworks deal with AI? In this part of the article, I discuss the last two themes.

How should people react to AI?

There is no magic in AI. However, people who don’t understand algorithms and models may attribute magic to it, simply because they have no other explanation. But how can one discuss the fairness of a misperceived magical phenomenon? How can we, as a society, demand fairness and justice in implemented AI solutions, especially in the workplace, when we don’t understand how AI differs from everything else we know?

In the previous part of this article, I discussed the accuracy of a Machine Learning model. Unfortunately, people may assume that if a model is accurate (and remember that there is no such thing as 100% accuracy in ML), it is also good enough in terms of fairness. But accuracy is only one measure by which to evaluate a model. Accurate models might still lead to unfair decision-making in organizations, due to bias in the data, or due to the imperfect human decision-making that follows.
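A tiny numeric sketch makes this concrete. The records below are invented for illustration (the groups, predictions, and outcomes are assumptions, not real data): the model is 90% accurate overall, yet it selects candidates from group A three times as often as from group B.

```python
# Hypothetical screening decisions: (group, model_prediction, actual_outcome).
# All groups and numbers are illustrative, not real data.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1),
]

def accuracy(rows):
    # Share of predictions that match the actual outcome.
    return sum(pred == actual for _, pred, actual in rows) / len(rows)

def selection_rate(rows):
    # Share of candidates the model selects (predicts 1 for).
    return sum(pred for _, pred, _ in rows) / len(rows)

overall = accuracy(records)
rate_a = selection_rate([r for r in records if r[0] == "A"])
rate_b = selection_rate([r for r in records if r[0] == "B"])
print(overall, rate_a, rate_b)  # 0.9 0.6 0.2
```

High accuracy and a large gap in selection rates coexist without contradiction, which is exactly why accuracy alone cannot certify fairness.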

The challenge gets even larger and more complicated because it is not clear who is the subject of our fairness attributions. Is it the entire organization? The person who represents the organization? The organizational function that the AI performs? And if people perceive unfairness related to one of these entities, what should they do?

People don’t necessarily know their rights or how to fight for them. They also don’t necessarily know what data their employer uses and how. When presented with a technical solution at work, people may assume that it’s accurate, but they don’t necessarily know how it might weaken their position. New learning paths and education are needed, and people should start expecting that from their employers.

New learning paths and education are critical because fear sometimes emerges when AI apps replace some roles of humans at work. In future hybrid teams, where humans and AI will work side by side, people might not be able to keep up with the machines. Specifically, they won’t be able to know exactly what data the algorithms use and how that data is processed. The complexity is too much for a human to handle, and that may also contribute to fear.

Therefore, HR professionals who perceive their role as handling relationships between people and organizations should start exploring the domain of AI from that angle too. They should consider offering new learning paths that address responsibility for fairness in AI usage. As I stated when I previously discussed the new roles of HR leaders in the fourth industrial revolution, the discussion about employee experience is pointless without exercising data transparency and fairness. Hopefully, someday, organizations will be rated on this dimension too.

Useful sources for understanding issues in AI-based decision-making are volunteer organizations that strive to prevent injustice, like the Algorithmic Justice League and AlgorithmWatch, which study the effects of discrimination in AI and publish AI misuse incidents. I hope that the HR sector, and especially its educational institutions, will start collaborating with such organizations, or establish its own.

How do legal frameworks deal with AI?

Two years ago, the legal environment changed with the arrival of the GDPR – the European Union’s General Data Protection Regulation. Recently, the CCPA – the California Consumer Privacy Act – also went into effect. Other countries and states will adopt similar regulations sooner or later. A lot has been written about data privacy, following Article 9 of the GDPR. Less discussed, at least in HR circles as far as I know, is Article 22 of the GDPR, which establishes the right not to be subject to a decision based solely on automated processing, and the right to obtain human intervention and an explanation.
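One way to picture what Article 22 asks of a system is a routing guard: fully automated decisions with a significant effect on a person do not take effect directly but go to a human reviewer. The sketch below is my own simplification (the field names and the policy are assumptions, not a legal implementation):

```python
# Minimal sketch of an Article 22-style safeguard. Field names and the
# routing policy are assumptions for illustration, not a legal implementation:
# a decision that is fully automated and significantly affects a person
# is routed to a human reviewer instead of being applied directly.

def route_decision(decision: dict) -> str:
    fully_automated = decision["automated"] and not decision["human_involved"]
    if fully_automated and decision["significant_effect"]:
        return "pending_human_review"
    return "apply"

rejection = {"automated": True, "human_involved": False, "significant_effect": True}
print(route_decision(rejection))  # pending_human_review
```

The point of the sketch is that compliance is a design decision made before deployment: the system must know which of its decisions are "solely automated" and route them accordingly.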

Unfortunately, only some kinds of algorithms are easy to explain, e.g., models based on Logistic Regression. More complex algorithms, like Deep Learning, are far harder to interpret. So one may wonder whether organizations might violate the law simply by using algorithms that are hard to explain.
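To see why Logistic Regression is considered explainable, note that its score is just a weighted sum, so every feature's contribution can be read off directly. The model below is entirely invented (the feature names, weights, and candidate values are illustrative assumptions):

```python
import math

# Hypothetical logistic-regression model for a hiring score. The feature
# names and weights are invented for illustration only.
weights = {"years_experience": 0.8, "skills_match": 1.2, "employment_gap": -0.5}
bias = -2.0

def predict_with_explanation(candidate):
    # Each feature's contribution is its weight times its value, so the
    # final score decomposes into per-feature parts that can be explained.
    contributions = {f: w * candidate[f] for f, w in weights.items()}
    z = bias + sum(contributions.values())
    probability = 1 / (1 + math.exp(-z))
    return probability, contributions

p, why = predict_with_explanation(
    {"years_experience": 3, "skills_match": 1, "employment_gap": 2}
)
print(round(p, 3), why)
```

A Deep Learning model offers no such decomposition: its score emerges from millions of interacting weights, which is precisely the explanation gap that Article 22 exposes.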

The challenge is not limited to the complexity of algorithms. Sometimes AI developers rely on packages or libraries, i.e., code that someone else developed and that is added to a program instead of writing everything from scratch. But what if those libraries encode bias and were never checked for it? What if minorities’ data was ignored when those libraries were built? In such cases it is not clear who is accountable and who needs to fix the problem.

I’m not a jurist, and obviously, none of the opinions I share is legal advice. However, since we are in an era when the laws that govern AI are still developing, I think that those responsible for implementing AI solutions in the organization should follow developments not only on the technological side but also on the legal aspects of AI. That places a heavy burden on HR professionals.

Nevertheless, to maintain the relationship between employers and employees, the HR sector should not skip this topic. HR professionals must be aware of restrictions and regulations, and maintain close contact with legal departments and other professionals, such as CIOs and data security teams. Furthermore, they should keep exploring how other organizations have implemented AI successfully, including its legal aspects.

Littal Shemer Haim

Littal Shemer Haim brings Data Science into HR activities, to guide organizations to base decision-making about people on data. Her vast experience in applied research, keen usage of statistical modeling, constant exposure to new technologies, and genuine interest in people’s lives, all led her to focus nowadays on HR Data Strategy, People Analytics, and Organizational Research.




All images and texts on this website are copyrighted © Littal Shemer Haim ALL RIGHTS RESERVED
