Littal Shemer Haim

People Analytics, HR Data Strategy, Organizational Research – Consultant, Mentor, Speaker, Influencer

Ethics in People Analytics and AI at Work – Best Resources

Part of my continuous learning, collaboration, and contribution is a comprehensive resource list, updated monthly. It includes four categories: strategic thinking, practical advice, product reviews, and social context.

Ethics in People Analytics and AI at Work
Best Resources Discovered Monthly

Edition #4 – September 2020

There is a severe knowledge gap. Business leaders’ and HR practitioners’ quantitative abilities are based on the descriptive and inferential statistics that we all learned. Machine learning is entirely different. To understand it and evaluate it well enough to deal with potential risks, let alone audit algorithms, a systematic approach and a practical methodology are needed.

Part of my continuous learning, collaboration, and contribution, which I hope will lead to an articulated solution for evaluating the ethics of workforce AI, is a comprehensive resource list that will be updated monthly. For now, I decided to include four categories: strategic thinking, practical advice, product reviews, and social context.

Why these categories? I hope that such a categorization will facilitate learning in the field. In particular, leaders need to understand how to incorporate questions about values into their businesses, starting with strategic planning. Then, they may need a helping hand to translate those values and plans into daily practices and procedures. Those practices can be demonstrated in discussions and reviews of specific products. But at the end of the day, business leaders influence employees, their families, their communities, and society. Therefore, this resource list must include a social perspective too.


Workforce AI Ethics in strategic thinking

The Ethics of AI Ethics: An Evaluation of Guidelines

Thilo Hagendorff

The advanced application of AI in many fields has sparked discussion of AI ethics, and several ethics guidelines have already been published. Although overlapping, they are not identical. So, how can one evaluate ethics guidelines? This article compares 22 of them. Its analysis provides a detailed overview of AI ethics and examines the implementation of ethical principles in AI systems.

Unfortunately, according to this article, AI ethics is currently failing: ethics lacks an enforcement mechanism, so deviations from the various codes of ethics have no consequences. Ethics integrated into institutions serves mainly as a marketing strategy. Reading ethics guidelines has no significant influence on the decision-making of software developers, who lack a feeling of accountability or a view of the moral significance of their work. Furthermore, economic incentives easily override commitment to ethical principles and values.

In several areas, ethically motivated efforts are being undertaken to improve AI systems, particularly where specific problems can be technically fixed: privacy protection, anti-discrimination, safety, or explainability. However, some significant ethical aspects that I find relevant to workforce AI are still omitted from guidelines: the lack of diversity in the AI community, the weighting between algorithmic and human decision routines, the “hidden” social costs of AI, and the problem of public-private partnerships and industry-funded research.

In order to close the gap between ethical and technical discourses, a stronger focus on the technical details of AI and ML is required. At the same time, AI ethics should focus on genuinely social aspects, uncover blind spots in knowledge, and strive for individual self-responsibility.

Workforce AI Ethics in practical advice

Career Planning? Consider These HR Technology Roles of the Future

Dave Zielinski

Artificial intelligence technologies and other automation solutions are disrupting the HR profession. A crucial part of HR’s response is to consider new responsibilities within its roles. It is not surprising to find this topic in HR-related content. However, it is encouraging to see that this sector regards AI ethics as part of its future domain. While general predictions about future roles are not necessarily useful, the experts’ discussion of AI ethics offers practical points that can serve us today.

Although the AI Ethics Officer is mentioned as a future role, its description sheds some light on present necessities. As new technologies adopted by HR generate unprecedented amounts of data about employees and candidates, that data must be carefully assessed, used, and protected. Furthermore, since decisions to deploy AI and ML are often made in departments other than HR, HR leaders must have a voice in ensuring AI-generated talent data is used ethically and potential bias is prevented.

What does this mean for HR practitioners in organizations today? First, it is time to establish new practices in collaboration with the legal team to ensure the algorithms’ results are transparent, explainable, and bias-free. Moreover, it is time to start considering the balance between stakeholders in the organization. The HR department should ask how technologies serve both employers and employees, and not settle for discussing only which technologies it should be using.

(Thanks for sharing, Vijay Bankar)

Workforce AI Ethics in product reviews

Google Offers to Help Others With the Tricky Ethics of AI

Tom Simonite

This entry is not related solely to workforce AI. However, since all the tech giants are players in the HR-Tech industry one way or another, I find this article thought-provoking. Today, organizations receive cloud computing solutions from vendors like Amazon, Microsoft, and Google. Will they outsource the domain of AI ethics to those vendors too? It turns out that Google’s cloud division will soon invite customers to do so.

Google’s AI ethics services, which the company plans to launch before the end of the year, will include spotting racial bias in computer vision systems and developing ethical guidelines that govern AI projects. In the long run, it may offer AI auditing for ethical integrity and ethics advice. Will we see a new business category called EaaS, i.e., ethics as a service? And if so, would it be right to consider companies such as Google as suppliers of such services?

On the one hand, Google has learned some AI ethics lessons the hard way, e.g., accidentally labeling black people as gorillas, which is only the tip of the iceberg when considering how facial recognition systems are often less accurate for black people. Google can therefore leverage its experience and power to promote AI ethics. On the other hand, a company seeking to make money from AI may not be the best moral mentor on restraining technology; the conflict of interest is straightforward. Nevertheless, it is worthwhile to stay tuned for Google’s training courses on the topic.

Workforce AI Ethics in a social context

Employers are tracking us. Let’s track them back

Johanna Kinnock

Employee surveillance is growing, and most employers track their workers in one way or another. Research firm Gartner found that half of companies were already using “non-traditional” monitoring techniques, like email scraping and workspace tracking, in 2018, and estimates the figure has risen to around 80% by now. Should employees worry? Should they act to protect themselves? Workplace data expert Christina Colclough thinks they should. Colclough has created an app, WeClock, that enables employees to track their own data and share it with unions.

Employees and their unions need to push back to ensure that their whole online existence doesn’t become their employers’ property. Data from employee surveillance is used to boost productivity, gain competitive advantage, and grow profits, but it also cements the power employers hold over employees. Regulation of individual data rights does not yet offer a sufficient remedy. Decisions about employees and candidates grant or withhold opportunities based on past actions; algorithms may not show certain job offers or career opportunities. There is a vast gap between what companies know about employees and what employees know about themselves.

Digitization doesn’t necessarily mean that only employers should have control of and access to employee data. The app WeClock enables employees to track, and share with their unions, things like how far they must travel to work, whether they’re taking their allotted breaks, and how long they spend working out of hours. This provides a source of aggregate data about critical issues affecting employee wellbeing.

Previous Editions

Edition #3 – August 2020

Edition #2 – July 2020

Edition #1 – June 2020

Edition #3 – August 2020

Workforce AI Ethics in strategic thinking

Questions about your AI Ethics

John Sumser

Are words like bias, privacy, liability, design, and management raised in strategic discussions in your organization? And if so, are they followed by an exclamation mark or a question mark? I consider this article strategic, not merely because it covers 24 ethical questions that you should think about when implementing AI, but because it is actually an infinite list of questions: each question you raise may bring more questions instead of answers. As AI technology evolves and penetration rates in organizations sharply increase, this list will probably shape some of our routine discussions.

Some questions I find most important are: What are the limits of our intrusion into workers’ behavior and sentiments? What rights do employees have to information about themselves? How do we treat our workers who are not employees (gig workers, temps, subcontractors)? Is our machine-led learning system actually developing our organization in the direction we want? How, exactly, do you tell whether the machine is producing the results you actually want and need? But read through the entire list, and add your own.

The ethics of AI is more than a committee that produces hard rules. Implementation is not only technical; it is an obligation to have a clear sense of what the organization’s ethics are. It may raise many new questions. However, in a reality of rapidly evolving technologies, don’t be surprised if a reasonable answer is ‘I don’t know’. Simply follow it with ‘How do we find out?’

Workforce AI Ethics in practical advice

INSIGHT: Hiring Tests Need Revamp to End Legal Bias

Ron Edwards

Does artificial intelligence push recruitment practices toward less fairness? Pending legislation in New York City and California may suggest it does. Is it a first step toward ending legal hiring bias? This call to update legislation in the US, specifically to revamp hiring tests to end legal bias, is an eye-opening perspective for all prospects and clients of AI solutions. Although targeted at government institutions, its argument can be read as advice to everyone in this field: don’t wait for regulation to critique what vendors put on the shelves.

The article describes how hiring tools can negatively impact women, people of color, and those with disabilities, e.g., by analyzing facial expressions with AI software or collecting information unrelated to the job in question. Employers use cognitive ability assessments that enable significantly more white candidates to pass compared to minorities. A high-profile failure is also mentioned: Amazon built an AI hiring tool that filtered out women’s resumes for engineering positions.

For workforce diversity to improve, 20th-century laws should be updated in accordance with 21st-century technologies. California and New York City are considering legislation that would set standards for AI assessments in hiring. The requirements include pre-testing for bias, annual auditing to ensure no adverse impact on demographic groups, and notifying candidates about the characteristics assessed by AI tools. This is a positive direction that organizations should embrace even before the long legislative processes end, because all candidates deserve an equal chance to get hired, promoted, and rewarded consistently with their talents.
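As a concrete illustration of what auditing for adverse impact can look like, here is a minimal sketch in Python of the four-fifths rule from the US Uniform Guidelines on Employee Selection Procedures; the applicant numbers are made up for illustration. A selection rate for any group below 80% of the highest group’s rate is commonly treated as evidence of adverse impact.

```python
# A minimal sketch of a four-fifths-rule adverse impact check.
# The counts below are hypothetical, for illustration only.
hired = {"group_a": 48, "group_b": 12}     # candidates selected, per group
applied = {"group_a": 80, "group_b": 40}   # candidates assessed, per group

# Selection rate per group: 0.60 for group_a, 0.30 for group_b.
rates = {g: hired[g] / applied[g] for g in hired}

# Impact ratio: each group's rate relative to the highest rate.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Here group_b’s selection rate is only half of group_a’s, well below the four-fifths threshold, so an audit of the kind the proposed legislation requires would flag this assessment for review.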


(Thank you, Jouko van Aggelen, for sharing)

Workforce AI Ethics in product reviews

Why using technology to spy on home-working employees may be a bad idea

Gabriel Burdin, Simon D. Halliday, and Fabio Landini

I’ve already offered in this section dystopian descriptions of employee surveillance during work from home. Some remote employees are photographed, along with their desktop screenshots, every few minutes. Others are tracked while browsing the web, making online calls, posting on social media, and sending private messages. The purpose of such surveillance solutions is to give employees incentives to maintain their productivity, or in other words, to prevent them from slacking off or shirking during working hours. However, psychological experiments reveal that instead of boosting or maintaining productivity, these surveillance solutions might lead to the opposite outcome.

Research findings show that using technology to spy on home-working employees may be a bad idea after all. Standard economic theory would predict that intensive online workplace surveillance is effective, since employees are motivated purely by self-interest and care only about their material payoffs. However, empirical evidence suggests that people have more complex motives. Alongside material payoffs, people value autonomy and dislike external control. They are also motivated by reciprocity and by their beliefs about others’ intentions. Employees reward trusting employers who avoid control with their own effort. Employers may trigger employees’ positive reciprocity, and thus support their productivity, simply by refraining from tighter control.

Interestingly, the debate about remote workforce surveillance, which I covered in previous editions of this monthly review, has focused mainly on employee privacy and the blurred boundaries between work and non-work. These perspectives, important as they are, are not comprehensive enough to understand employment relations and conflicts. While employers would like to boost productivity for profit, surveillance technologies that monitor work from home might be the wrong solution, because they signal distrust and reduce intrinsic motivation to perform well. Ignoring the potential reactions to surveillance solutions may undermine the goal of increased productivity, let alone harm employees’ dignity.

Workforce AI Ethics in a social context

21 HR Jobs of the Future

Jeanne C. Meister, Robert H. Brown

Some writers perceive the Covid-19 era as a tremendous opportunity for the HR sector to lead organizations in navigating the future. A more realistic perspective would emphasize that in this turbulent time, even the best intentions to support people, guiding them to acquire new skill sets and embrace new career paths, won’t help if the business crashes due to Covid-19. In other words, it’s not just the employees who need to cope; the organizations that employ them need to survive the crisis. However, I do witness a mindset shift in the HR sector, which in my opinion represents a continuous development that Covid-19 may accelerate but certainly did not create. For that reason, I was happy to read about research that demonstrated such a shift and creatively described 21 HR jobs of the future.

Nearly 100 CHROs, CLOs, and VPs of talent and workforce transformation participated in brainstorming, considering economic, political, demographic, societal, cultural, business, and technology trends to envision how HR’s role might evolve over the next 10 years. The hypothetical future HR roles they created represent a growing understanding of crucial issues such as individual and organizational resilience, organizational trust and safety, creativity and innovation, data literacy, and human-machine partnerships. These issues and the roles derived from them are not necessarily in the HR domain. However, the perceptions of HR leaders represent a pivot in the organizational state of mind.

As questions are raised about the potential for bias, inaccuracy, and lack of transparency in workforce AI solutions, more senior HR leaders understand the need to systematically ensure fairness, explainability, and accountability. The writers believe this could lead to HR roles such as the Human Bias Officer, responsible for helping mitigate bias across all business functions. I believe it’s an encouraging direction in organizations’ agendas toward responsibility in the broad social context. And so, I’m happy to end this monthly edition with such a positive perspective.


Edition #2 – July 2020

Workforce AI Ethics in strategic thinking

Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward

Samuele Lo Piano

More and more decisions related to the people side of the business are based on machine-learning algorithms. Ethical questions are raised from time to time, e.g., when “black box” algorithms produce controversial outcomes. However, as of this writing, I have not found a single standard or framework that guides the HR-Tech industry beyond regional regulations.

Until such a standard is established, any practitioner who deals with the subject needs a thorough literature review that leads to available tools and documentation. This Nature article offers exactly that. Although it addresses ethical questions related to risk assessments in criminal justice systems and autonomous vehicles, I consider reading it a strategic step toward ethical considerations in the procurement of workforce AI. In particular, the article focuses on fairness, accuracy, accountability, and transparency, and offers guidelines and references for each.

The article lists research questions around ethical principles in AI, offers guidelines and literature on the dimensions of AI ethics, and discusses actions toward including these dimensions in the future of AI ethics. If you are starting the journey toward understanding the ethics of workforce AI, you should use this article as an intellectual hub for further exploration of academic and practical conversations.

(Thank you, Andrew Neff, for the tweet)

Workforce AI Ethics in practical advice

23 sources of data bias for #machinelearning and #deeplearning

ajit jaokar

This list includes 23 types of bias in data for machine learning. Actually, it quotes an entire paragraph of a survey on bias and fairness in ML. Why did I put this content in the practical advice section of this monthly review? Because although most business leaders may not be legally responsible for such biases in workforce AI, at least not directly, they do need to be aware of them, ethically. After all, AI supports decision-making, but the last word still belongs to humans, who must take everything into account, including justice and fairness.

It’s good to have such a list. I advise you to come back to it from time to time, to refresh your memory and be inspired. So, what kinds of biases can you find in this list? Plenty: Aggregation Bias, Population Bias, Simpson’s Paradox, Longitudinal Data Fallacy, Sampling Bias, Behavioral Bias, Content Production Bias, Linking Bias, Popularity Bias, Algorithmic Bias, User Interaction Bias, Presentation Bias, Social Bias, Emergent Bias, Self-Selection Bias, Omitted Variable Bias, Cause-Effect Bias, and Funding Bias. Test yourself: how many of these biases do you already know?

Some of the biases listed here can be resolved by research methodology. That’s why I include examples of such biases in my introductory courses. So if you are a People Analytics practitioner, don’t hesitate to re-open your old notebooks. Here’s one of my favorites, one I enjoy presenting to students: Simpson’s Paradox. It arose during the gender bias lawsuit over university admissions against UC Berkeley. Sometimes subgroups, in this case women, may be quite different. In the aggregate graduate school admissions data, there appeared to be a bias against women, a smaller fraction of whom were being admitted to graduate programs compared to their male counterparts. However, when the admissions data was analyzed separately across departments, the findings revealed that more women had actually applied to departments with lower admission rates for both genders.
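To make the paradox concrete, here is a minimal sketch in Python with made-up numbers (not the actual Berkeley figures): women are admitted at a higher rate within every department, yet the aggregate numbers appear biased against them, because more women applied to the department with the lower admission rate.

```python
# A minimal sketch of Simpson's Paradox with hypothetical admissions data.
import pandas as pd

applications = pd.DataFrame([
    # dept, gender, applied, admitted
    ("A", "men",   100, 80),   # dept A: high admission rate, mostly male applicants
    ("A", "women",  20, 18),
    ("B", "men",    20,  4),   # dept B: low admission rate, mostly female applicants
    ("B", "women", 100, 25),
], columns=["dept", "gender", "applied", "admitted"])

# Aggregate rates per gender: men ~0.70, women ~0.36 -- looks biased against women.
overall = applications.groupby("gender")[["applied", "admitted"]].sum()
print(overall["admitted"] / overall["applied"])

# Per-department rates: women's rate is HIGHER in both departments
# (0.90 vs. 0.80 in dept A, 0.25 vs. 0.20 in dept B).
print(applications.assign(rate=applications["admitted"] / applications["applied"]))
```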

Workforce AI Ethics in product reviews

Remote working: This company is using fitness trackers and AI to monitor workers’ lockdown stress

Owen Hughes 

PwC has been harnessing AI and fitness-tracking wearables to gain a deeper understanding of how work and external stressors affect employees’ state of mind. During the Covid-19 crisis, companies promote healthy working habits to ensure employees get the support they need while working from home. What can a company offer beyond catch-ups on Zoom? PwC’s approach is novel yet, to me, controversial.

The company has been running a pilot scheme that combines ML with wearable devices to understand how lifestyle habits and external factors affect its staff. Employees volunteered to use fitness trackers that collect biometric data and connect it to cognitive tests, in order to manage stress better. Obviously, factors such as sleep, exercise, and workload influence employee performance, and balancing work and home life benefits mental health and wellbeing.

Volunteering rates were higher than expected. Understanding human performance and wellness is clearly in the interest of both employees and employers. However, in my opinion, it must prompt a discussion about the boundaries of organizational monitoring. Is it OK to collect employee biometric measures, e.g., pulse rate and sleeping patterns, and combine them with cognitive tests and deeper personality traits, in the organizational arena? If so, how far is it OK to go, as far as genetic information? How different are the answers if the employer also offers medical insurance as an employee benefit? Tracking mental and physical responses to understand work may be essential. Still, employers can provide education and tools without being directly involved in data collection and maintenance. Even when participation is voluntary, there is always a self-selection bias among employees (see the previous category in this review), and so the benefits are not equally distributed.

(Thank you, David Green, for the tweet)

Workforce AI Ethics in a social context

Man is to Programmer as Woman is to Homemaker: Bias in Machine Learning

Emily Maxie

We often hear about gender inequities in the workplace. Many factors are at play: the persistence of traditional gender roles, unconscious bias, blatant sexism, and a lack of role models for girls who aspire to lead in STEM. However, technology is also to blame, because machine learning has the potential to reinforce cultural biases. This article is not new, but it offers a clear explanation for non-techies of how natural language processing programs have exhibited gender stereotypes.

To capture the relationships between words, Google researchers created, in 2013, a neural network algorithm (word2vec) that enables computers to work with human language. To train the algorithm, they used the massive data set at their fingertips: Google News articles. The result was widely adopted and incorporated into all sorts of other software, including recommendation engines and job-search systems. However, the algorithm produced troubling correlations between words. It was working correctly, but it had learned the biases inherent in the text of Google News.

In order to solve the issue, researchers had to distinguish between a legitimate gender difference and a biased gender difference. They set out to identify the problematic terms and exclude them while leaving the unbiased terms untouched. Bias in training data can be mitigated, but only if someone recognizes that it’s there and knows how to correct it. Sadly, even after Google corrected the bias, it would be impossible to tell whether all the uses, in all kinds of software, were fixed.
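For the curious, the analogy arithmetic that exposed these stereotypes is easy to reproduce. Here is a minimal sketch using the open-source gensim library and the pretrained Google News vectors (note the assumptions: gensim is installed, and the roughly 1.6 GB model is downloaded on first use):

```python
# A minimal sketch of the analogy arithmetic behind "man is to programmer
# as woman is to ...", using pretrained word2vec Google News embeddings.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # downloads ~1.6 GB on first use

# Vector arithmetic: programmer - man + woman ~= ?
print(model.most_similar(positive=["woman", "programmer"], negative=["man"], topn=3))
```

Whatever terms come back, they were learned from word co-occurrence patterns in Google News text, not from any explicit rule, which is exactly how the bias slipped in.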

(Thank you Max Blumberg for highlighting this article)


Edition #1 – June 2020

Workforce AI Ethics in strategic thinking

Ethics and the future of work

Erica Volini, Jeff Schwartz, Brad Denny

The way work is done is changing, as the integration of employees, alternative workforces, technology, and specifically automation becomes more prevalent. Deloitte’s article Ethics and the future of work enumerates the increasing range of ethical challenges that managers face as a result. Based on a survey, it identifies four factors at the top of the ethical challenges related to the future of work: legal and regulatory requirements, rapid adoption of AI in the workplace, changes in workforce composition, and pressure from external stakeholders. Organizations are not ready to manage these ethical challenges: though relatively prepared to handle privacy and control of employee data, executives’ responses indicate that organizations are unprepared for automation and the use of algorithms in the workplace.

According to Deloitte, organizations should change their perspective when approaching new ethical questions, shifting from asking only “could we” to also asking “how should we.” The article demonstrates how to do so. For example, instead of asking “could we use surveillance technology?”, organizations may ask “how should we enhance both productivity and employee safety?”

Organizations can respond to ethical challenges in various ways. Some organizations create executive positions that focus on driving ethical decision-making. Other organizations use new technologies in ways that can have clear benefits for workers themselves. The point is that instead of reacting to ethical dilemmas as they arise, organizations should anticipate, plan for, and manage ethics as part of their strategy and mission, and focus on how these issues may affect different stakeholders.


Workforce AI Ethics in practical advice

Walking the tightrope of People Analytics – Balancing value and trust

Lucas Ruijs

The People Analytics domain will eventually transform into AI products. In the early days, most People Analytics practices were projects or internal tools developed within organizations. As the industry matures, more and more organizations automate, starting with their HR reporting. HR-Tech products and platforms that offer solutions based on predictive analytics and natural language processing are no longer rare, although they are mostly seen in large organizations. However, the discussion about ethics in HR-Tech is still in its infancy. In my opinion, the conversations between the different disciplines – HR and OD, ML and AI, and ethics – are the building blocks of the future People Analytics field. The article Walking the tightrope of People Analytics – Balancing value and trust is an excellent example of such a multidisciplinary conversation.

People Analytics projects might go wrong in many ways. To prevent the harmful consequences of lousy analysis, HR leaders must ask essential questions about the balance of interests between the employer and the employees, the value delivered to each party, the fairness and transparency of the analysis, and the risk of illegal or immoral application of the results. The HR sector needs an ethical framework to address these questions.

This article takes this need a step further. It defines ethics and reviews its three primary paradigms, i.e., deontology, consequentialism, and virtue ethics. It then derives a practical principle from each paradigm, respectively: transparency, function, and alignment. Each principle comes with three questions that should be raised before, during, and after an analytics project. This framework goes beyond regulation. It helps to make sure that new analytics capabilities that improve decision-making do not sacrifice employee care.

(Thank you, David Green, for the tweet)

Workforce AI Ethics in product reviews

This startup is using AI to give workers a “productivity score”

Will Douglas Heaven

In the last few months, the Covid-19 pandemic caused millions of people to stop going into offices and to do their jobs from home. A controversial consequence of remote work was the emerging use of surveillance software. Many new applications now enable employers to track their employees’ activities. Some record keyboard strokes, mouse movements, websites visited, and users’ screens. Others monitor interactions between employees to identify patterns of collaboration.

MIT Technology Review covered a startup that uses AI to give workers a productivity score, which enables managers to identify those who are most worth retaining and those who are not. The review raises an important question: do you owe it to your employer to be as productive as possible, above all else? Productivity has always been crucial from the organizational point of view. However, in a time of pandemic, it gains additional perspectives. People must cope with multiple challenges, including health, child care, and balancing work at home with personal needs. But organizations, too, struggle to survive. The potential conflicts of interest, and the surveillance now available, put additional weight on that question.

Running in the background all the time and monitoring whatever data trail a company can provide about its employees, an algorithm can learn the typical workflows of different workers. It can analyze triggers, tasks, and processes. Once it has discovered a regular pattern of employee behavior, it can calculate a productivity score that is agnostic to the employee’s role, though it works best with repetitive tasks. While such algorithms contribute to productivity by identifying what could be made more efficient or automated, they also might encode hidden bias and make people feel untrusted.


Workforce AI Ethics in a social context

The Institute for Ethical AI & Machine Learning

The Institute for Ethical AI & Machine Learning is a UK-based research center that carries out technical research into processes and frameworks that support the responsible development, deployment, and operation of machine learning systems. The institute’s vision is to “minimize the risk of AI and unlock its full power through a framework that ensures the ethical and conscious development of AI projects.” My reading of this organization’s contribution is through the lens of workforce AI applications. However, the organization aims to influence all industries.

Volunteer domain experts in this institute articulated “The Responsible Machine Learning Principles” to guide technologists. There are eight principles: Human augmentation, Bias evaluation, Explainability by justification, Reproducible operations, Displacement strategy, Practical accuracy, Trust by privacy, and Security risks. Each principle includes a definition, a detailed description, examples, and resources. I think every workshop for AI developers should cover these principles, especially in the HR-Tech industry.

The Institute for Ethical AI & ML also offers a valuable tool called AI-RFX: a set of templates that empowers industry practitioners who oversee procurement to raise the bar for AI safety, quality, and performance. Practically, this open-source tool converts the eight principles for responsible ML into a checklist.

Littal Shemer Haim

Littal Shemer Haim brings data science into HR activities, guiding organizations to base decision-making about people on data. Her vast experience in applied research, keen use of statistical modeling, constant exposure to new technologies, and genuine interest in people’s lives have all led her to focus nowadays on HR Data Strategy, People Analytics, and Organizational Research.
