Littal Shemer Haim

People Analytics, HR Data Strategy, Organizational Research – Consultant, Mentor, Speaker, Influencer

Ethics in People Analytics and AI at Work – Best Resources


Ethics in People Analytics and AI at Work
Best Resources Discovered Monthly
Edition #2 – July 2020

There is a severe knowledge gap. Business leaders' and HR practitioners' quantitative abilities are based on the descriptive or inferential statistics that we all learned. Machine learning is entirely different. To understand and evaluate it well enough to deal with potential risks, let alone to audit algorithms, a systematic approach and a practical methodology are needed.

Part of my continuous learning, collaboration, and contribution, which I hope will lead to an articulated solution for evaluating the ethics of workforce AI, is a comprehensive resource list that will be updated monthly. For now, I have decided to include four categories: strategic thinking, practical advice, product reviews, and a social context.

Why these categories? I hope that such a categorization will facilitate learning in the field. Particularly, leaders need to understand how to incorporate questions about values in their businesses, starting in their strategic planning. Then, they may need a helping hand to translate those values and plans into daily practices and procedures. Those practices can be demonstrated in discussions and reviews about specific products. But at the end of the day, business leaders influence the employees, their families, their communities, and society. Therefore, this resource list must include a social perspective too.


Workforce AI Ethics in strategic thinking

Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward

Samuele Lo Piano

More and more decisions related to the people side of the business are based on machine-learning algorithms. Ethical questions are raised from time to time, e.g., when "black box" algorithms produce controversial outcomes. However, as of this writing, I have not found a single standard or framework that guides the HR-Tech industry beyond regional regulations.

Until such a standard is established, any practitioner who deals with the subject needs a thorough literature review that leads to available tools and documentation. This article, published in a Nature journal, offers such solutions. Although it addresses ethical questions related to risk assessments in criminal justice systems and autonomous vehicles, I consider reading it a strategic step towards ethical considerations in the procurement of workforce AI. In particular, the article focuses on fairness, accuracy, accountability, and transparency, and offers guidelines and references for these issues.

The article lists research questions around the ethical principles in AI, offers guidelines and literature on the dimensions of AI ethics, and discusses actions towards the inclusion of these dimensions in the future of AI ethics. If you start the journey toward understanding the ethics of workforce AI, you should use this article as an intellectual hub for further exploration of academic and practical conversations.

(Thank you Andrew Neff for the tweet)

Workforce AI Ethics in practical advice

23 sources of data bias for #machinelearning and #deeplearning

ajit jaokar

This list includes 23 types of bias in data for machine learning. In fact, it quotes an entire section from a survey on bias and fairness in ML. Why did I put this content in the practical-advice section of this monthly review? Because although most business leaders may not be legally responsible for such biases in workforce AI, at least not directly, they do need to be ethically aware of them. After all, AI supports decision-making, but the final word still belongs to humans, who must take everything into account, including justice and fairness.

It’s good to have such a list. I advise you to come back to it from time to time, to refresh your memory and find inspiration. So, what kinds of biases can you find in this list? Plenty: Aggregation Bias, Population Bias, Simpson’s Paradox, Longitudinal Data Fallacy, Sampling Bias, Behavioral Bias, Content Production Bias, Linking Bias, Popularity Bias, Algorithmic Bias, User Interaction Bias, Presentation Bias, Social Bias, Emergent Bias, Self-Selection Bias, Omitted Variable Bias, Cause-Effect Bias, Funding Bias. Test yourself: how many of these biases do you already know?

Some of the biases listed here can be resolved by research methodology. That’s why I include examples of such biases in my introductory courses. So if you are a People Analytics practitioner, don’t hesitate to re-open your old notebooks. Here’s one of my favorites, one I enjoy presenting to students: Simpson’s Paradox! It arose during the gender-bias lawsuit over graduate admissions at UC Berkeley. Subgroups, in this case women, may behave quite differently from the aggregate. After analyzing graduate-school admissions data, it seemed there was a bias against women, a smaller fraction of whom were being admitted to graduate programs compared to their male counterparts. However, when the admissions data were analyzed separately by department, the findings revealed that more women had actually applied to departments with lower admission rates for both genders.
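The paradox is easy to reproduce with a few lines of code. The sketch below uses made-up numbers (not the actual Berkeley figures) chosen so that women have the higher admission rate within each department, yet the lower rate in the pooled data, because most women applied to the more selective department:

```python
# Illustrative (not actual Berkeley) numbers demonstrating Simpson's paradox.
# Each entry is (admitted, applied) for a gender within a department.
data = {
    "Dept A": {"men": (500, 800), "women": (70, 100)},   # less selective
    "Dept B": {"men": (20, 200),  "women": (100, 800)},  # more selective
}

totals = {"men": [0, 0], "women": [0, 0]}  # [admitted, applied]
for dept, groups in data.items():
    for gender, (admitted, applied) in groups.items():
        print(f"{dept} {gender}: {admitted / applied:.1%}")
        totals[gender][0] += admitted
        totals[gender][1] += applied

for gender, (admitted, applied) in totals.items():
    print(f"Overall {gender}: {admitted / applied:.1%}")
# Within each department women win (70.0% > 62.5%, 12.5% > 10.0%),
# yet overall women lose (18.9% < 52.0%).
```

The direction of the aggregate comparison flips once the confounding variable (department) is conditioned on, which is exactly why per-subgroup analysis belongs in any People Analytics workflow.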

Workforce AI Ethics in product reviews

Remote working: This company is using fitness trackers and AI to monitor workers’ lockdown stress

Owen Hughes 

PwC has been harnessing AI and fitness-tracking wearables to gain a deeper understanding of how work and external stressors affect employees’ state of mind. During the COVID-19 crisis, companies promote healthy working habits to ensure employees get the support they need while working from home. What can a company offer beyond catch-ups on Zoom? PwC’s approach is novel, yet, to me, controversial.

The company has been running a pilot scheme that combines ML with wearable devices to understand how lifestyle habits and external factors affect its staff. Employees volunteered to use fitness trackers that collect biometric data, combined with cognitive tests, to manage stress better. Obviously, factors such as sleep, exercise, and workload influence employee performance, and balancing work and home life benefits mental health and wellbeing.

Volunteering rates were higher than expected. Understanding human performance and human wellness is, clearly, in the interest of both employees and employers. However, in my opinion, it must initiate a discussion about the boundaries of organizational monitoring. Is it OK to collect employees’ biometric measures, e.g., pulse rate and sleeping patterns, and combine them with cognitive tests and deeper personality traits, in the organizational arena? If it is, how far is it OK to go with genetic information? How different are the answers when the employer also offers medical insurance as an employee benefit? Tracking mental and physical responses to understand work may be essential. Still, employers can provide education and tools without being directly involved in data collection and maintenance. Even among volunteers, there is always a self-selection bias (see the previous category in this review), and so the beneficial results are not equally distributed.

(Thank you David Green, for the tweet)

Workforce AI Ethics in a social context

Man is to Programmer as Woman is to Homemaker: Bias in Machine Learning

Emily Maxie

We often hear about gender inequities in the workplace. A lot of factors are at play: the persistence of traditional gender roles, unconscious bias, blatant sexism, and a lack of role models for girls who aspire to lead in STEM. However, technology is also to blame, because machine learning has the potential to reinforce cultural biases. This article is not new, but it offers a clear explanation, for non-techies, of how natural-language-processing programs exhibited gender stereotypes.

To capture the relationships between words, Google researchers created a neural-network algorithm (word2vec) in 2013, which enables computers to learn word meanings from text. To train this algorithm, they used the massive data set at their fingertips: Google News articles. The result was widely adopted and incorporated into all sorts of other software, including recommendation engines and job-search systems. However, the algorithm produced troubling correlations between words. It was working correctly, but it had learned the biases inherent in the text of Google News.
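The article's title is itself an analogy query against such an embedding. The sketch below shows the mechanics with toy 2-D vectors I invented for illustration (real word2vec vectors have hundreds of dimensions): vector arithmetic plus nearest-neighbor search surfaces the stereotype baked into the vectors.

```python
import numpy as np

# Toy 2-D "embeddings" (illustrative only -- not real word2vec vectors).
# Axis 0 acts like a gender direction, axis 1 like an occupation direction.
vecs = {
    "man":        np.array([ 1.0, 0.0]),
    "woman":      np.array([-1.0, 0.0]),
    "programmer": np.array([ 0.9, 1.0]),  # tilted toward "man"
    "engineer":   np.array([ 0.8, 0.9]),
    "homemaker":  np.array([-0.9, 1.0]),  # tilted toward "woman"
    "nurse":      np.array([-0.7, 0.8]),
}

def nearest(query, exclude):
    """Vocabulary word whose vector has the highest cosine similarity to query."""
    best, best_sim = None, -np.inf
    for word, vec in vecs.items():
        if word in exclude:
            continue
        sim = (query @ vec) / (np.linalg.norm(query) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "man is to programmer as woman is to ...?"
query = vecs["programmer"] - vecs["man"] + vecs["woman"]
print(nearest(query, exclude={"programmer", "man", "woman"}))  # homemaker
```

Because the occupation vectors carry a gender component absorbed from the training text, the analogy completes with the stereotyped answer rather than a neutral one.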

To solve the issue, researchers had to distinguish between a legitimate gender difference and a biased one. They set out to identify the problematic terms and neutralize them, while leaving the unbiased terms untouched. Bias in training data can be mitigated, but only if someone recognizes that it is there and knows how to correct it. Sadly, even after Google corrected the bias, it would be impossible to tell whether every use, in every piece of software, was fixed.
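The core correction step can be sketched in a few lines. This is a simplified, hypothetical version of the neutralization idea (project out the gender component from an occupation vector), again using toy 2-D vectors rather than real embeddings:

```python
import numpy as np

# Simplified neutralization sketch (toy vectors, not real embeddings):
# remove the component of an occupation vector that lies along a
# gender direction, while "man"/"woman" themselves stay untouched.
man = np.array([1.0, 0.0])
woman = np.array([-1.0, 0.0])
programmer = np.array([0.9, 1.0])                 # tilted toward "man"

g = (man - woman) / np.linalg.norm(man - woman)   # unit gender direction
neutralized = programmer - (programmer @ g) * g   # project out that component

print(neutralized)       # [0. 1.] -- occupation meaning kept
print(neutralized @ g)   # 0.0     -- no gender component left
```

The hard part, as the article notes, is not this projection but deciding which words are legitimately gendered and which should be neutralized.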

(Thank you Max Blumberg for highlighting this article)

Previous Editions

Edition #1 – June 2020

Workforce AI Ethics in strategic thinking

Ethics and the future of work

Erica Volini, Jeff Schwartz, Brad Denny

The way work is done is changing, as the integration of employees, alternative workforces, and technology, specifically automation, becomes more prevalent. Deloitte’s article Ethics and the future of work enumerates the increasing range of ethical challenges that managers face as a result. Based on a survey, it places four factors at the top of ethical challenges related to the future of work: legal and regulatory requirements, rapid adoption of AI in the workplace, changes in workforce composition, and pressure from external stakeholders. Organizations are not ready to manage these ethical challenges. Though relatively prepared to handle privacy and control of employee data, executives’ responses indicate that organizations are unprepared for automation and the use of algorithms in the workplace.

According to Deloitte, organizations should change their perspective when approaching new ethical questions, and shift from asking only “could we” to also asking “how should we.” The article demonstrates how to do so. For example, instead of asking “could we use surveillance technology?”, organizations may ask “how should we enhance both productivity and employee safety?”

Organizations can respond to ethical challenges in various ways. Some organizations create executive positions that focus on driving ethical decision-making. Other organizations use new technologies in ways that can have clear benefits for workers themselves. The point is that instead of reacting to ethical dilemmas as they arise, organizations should anticipate, plan for, and manage ethics as part of their strategy and mission, and focus on how these issues may affect different stakeholders.


Workforce AI Ethics in practical advice

Walking the tightrope of People Analytics – Balancing value and trust

Lucas Ruijs

The People Analytics domain will eventually transform into AI products. In the early days, most People Analytics practices were projects or internal tools developed in organizations. As the industry matures, more and more organizations automate, starting with their HR reporting. HR-tech products and platforms that offer solutions based on predictive analytics and natural language processing are no longer rare, although they are seen mostly in large organizations. However, the discussion about ethics in HR-tech is still in its infancy. In my opinion, conversations among the different disciplines – HR and OD, ML and AI, and Ethics – are the building blocks of the future People Analytics field. The article Walking the tightrope of People Analytics – Balancing value and trust is an excellent example of such a multidisciplinary conversation.

People Analytics projects might go wrong in many ways. To prevent the harmful consequences of lousy analysis, HR leaders must ask essential questions about the balance of interests between the employer and the employees, the value delivered to each party, the fairness and transparency of the analysis, and the risk of illegal or immoral application of the results. The HR sector needs an ethical framework to address these questions.

This article takes this need a step further. It defines ethics and reviews its three primary paradigms, i.e., deontology, consequentialism, and virtue ethics. It then derives a practical principle from each, respectively: transparency, function, and alignment. Each principle comes with three questions that should be raised before, during, and after an analytics project. This framework goes beyond regulation: it helps ensure that new analytics capabilities that improve decision-making do not sacrifice employee care.

(Thank you David Green, for the tweet)

Workforce AI Ethics in product reviews

This startup is using AI to give workers a “productivity score”

Will Douglas Heaven

In the last few months, the COVID-19 pandemic caused millions of people to stop going into offices and to do their jobs from home. A controversial consequence of remote work is the emerging use of surveillance software. Many new applications now enable employers to track their employees’ activities. Some record keyboard strokes, mouse movements, websites visited, and users’ screens. Others monitor interactions between employees to identify patterns of collaboration.

MIT Technology Review covered a startup that uses AI to give workers a productivity score, enabling managers to identify those most worth retaining and those who are not. The review raises an important question: do you owe it to your employer to be as productive as possible, above all else? Productivity has always been crucial from the organizational point of view. However, during the pandemic it has taken on additional dimensions. People must cope with multiple challenges, including health, child care, and balancing work at home with personal needs. But organizations struggle too, to survive. The potential conflicts of interest, and the surveillance now available, add weight to that question.

Running in the background all the time, and monitoring whatever data trail a company can provide about its employees, such an algorithm can learn the typical workflows of different workers. It can analyze triggers, tasks, and processes. Once it has discovered a regular pattern of employee behavior, it can calculate a productivity score that is agnostic to the employee’s role, though it works best with repetitive tasks. While such algorithms can contribute to productivity by identifying what could be made more efficient or automated, they might also encode hidden bias and make people feel untrusted.
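To make the "learn a pattern, then score against it" idea concrete, here is a purely hypothetical toy sketch; it is not the startup's actual algorithm, and the event names and scoring rule are invented for illustration. It learns a worker's typical mix of activity events from history, then scores a new day by how closely its mix matches that baseline:

```python
from collections import Counter

# Hypothetical sketch only -- not any vendor's real scoring algorithm.
# Learn the typical mix of activity events from past days.
history = [
    ["email", "email", "code", "meeting", "code"],
    ["email", "code", "code", "meeting", "email"],
]
counts = Counter(event for day in history for event in day)
total = sum(counts.values())
baseline = {event: n / total for event, n in counts.items()}  # e.g. email: 0.4

def productivity_score(day):
    """Overlap (0..1) between today's activity mix and the learned baseline."""
    mix = Counter(day)
    n = len(day)
    return sum(min(mix[event] / n, share) for event, share in baseline.items())

print(round(productivity_score(["email", "code", "code", "meeting"]), 2))  # 0.85
```

Even this toy version shows the ethical problem the review raises: a day spent on anything outside the learned pattern (mentoring, a crisis, care duties) scores low, regardless of its real value.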


Workforce AI Ethics in a social context

The Institute for Ethical AI & Machine Learning

The Institute for Ethical AI & Machine Learning is a UK-based research center that carries out technical research into processes and frameworks that support the responsible development, deployment, and operation of machine learning systems. The institute’s vision is to “minimize the risk of AI and unlock its full power through a framework that ensures the ethical and conscious development of AI projects.” My reading of this organization’s contribution is through the lens of workforce AI applications; however, the organization aims to influence all industries.

Domain experts volunteering at the institute articulated “The Responsible Machine Learning Principles” to guide technologists. There are eight principles: Human augmentation, Bias evaluation, Explainability by justification, Reproducible operations, Displacement strategy, Practical accuracy, Trust by privacy, and Security risks. Each principle includes a definition, a detailed description, examples, and resources. I think every workshop for AI developers, especially in the HR-Tech industry, should cover these principles.

The Institute for Ethical AI & ML also offers a valuable tool called AI-RFX: a set of templates that empowers the industry practitioners who oversee procurement to raise the bar for AI safety, quality, and performance. Practically, this open-source tool converts the eight principles for responsible ML into a checklist.

Littal Shemer Haim

Littal Shemer Haim brings Data Science into HR activities, to guide organizations to base decision-making about people on data. Her vast experience in applied research, keen usage of statistical modeling, constant exposure to new technologies, and genuine interest in people’s lives, all led her to focus nowadays on HR Data Strategy, People Analytics, and Organizational Research.

All images and texts on this website are copyrighted © Littal Shemer Haim ALL RIGHTS RESERVED
