Dr. Yi Chu
Following on our previous post introducing Inclusion Advisor, the HR industry’s first AI-driven, in-the-moment DE&I coaching tool, today we’re taking you behind the scenes to hear from the natural language processing (NLP) team about how Inclusion Advisor went from an ambitious idea to an impactful product.
Inclusion Advisor was a natural next step for Workhuman®, as it builds on our goal of giving employees the opportunity to recognize each other in an authentic and meaningful way, making sure everyone feels seen, celebrated, and appreciated for who they are and what they do.
We saw the need for disruptive technology that would help ensure workplaces are safe, inclusive spaces for all. Diversity, equity, and inclusion (DE&I) are at the forefront of this mission: the Human Workplace Index (HWI) shows that DE&I is a major factor in the decision to stay at a company for nearly half of employees.
Ultimately, the idea for Inclusion Advisor was born from a desire to examine workplace language, unpack unconscious bias, and empower people to communicate in an inclusive way.
Dr. Yi Chu, senior director of the NLP team at Workhuman, explains:
"Developing technology for social good has always been my passion. I came to the U.S. to pursue a PhD in Computer Science with a focus on developing intelligent assistive technology for elderly people with cognitive disabilities. I see the potential of AI technology in changing societal norms and building unconventional solutions to help address the needs of marginalized populations in society. The more I worked on this project, the more I understood how prevalent the issue of unconscious bias is in the workplace, and the importance of leveraging AI and technology to detect it. I felt strongly motivated because I realized I had lived through many moments of unconscious bias in my own life."
She, along with the rest of the team, saw that there were three key factors that set Inclusion Advisor up for success.
Both the second and third factors refer to one key element, people, who are the most essential part of this work. We have been able to harness the power of this trifecta from the very beginning because we believed we could make it happen; to the team, it felt more like a mission than an option.
Unconscious bias is a relatively new area of study and an emerging topic in NLP. There wasn’t existing technology to build upon, so we conducted our own research and built this technology from scratch.
It took more than four years of effort, involving both internal and external subject matter experts, to build Inclusion Advisor. We manually analyzed written messages to create the most comprehensive and unique taxonomy of implicit bias based on real workplace communication data. This effort has resulted in the industry’s first knowledge database and guidelines to systematically catalogue implicit bias in workplace communication.
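The taxonomy itself is proprietary, but as a rough illustration, a knowledge base that systematically catalogues bias categories might be structured something like the following sketch. The category names, descriptions, and example phrases here are hypothetical, drawn only from the kinds of bias this post mentions (gender, rank, appearance), and are not Workhuman's actual taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class BiasCategory:
    """One entry in a (hypothetical) implicit-bias taxonomy."""
    name: str                 # short category label, e.g. "appearance"
    description: str          # what kind of language the category covers
    example_phrases: list = field(default_factory=list)

# A toy slice of what such a taxonomy could look like.
TAXONOMY = [
    BiasCategory("appearance", "praise for looks rather than work",
                 ["pretty", "smile on her face"]),
    BiasCategory("gender", "gendered framing of contributions"),
    BiasCategory("rank", "deference based on seniority rather than merit"),
]

def category_names() -> list:
    """List the labels catalogued in the taxonomy."""
    return [c.name for c in TAXONOMY]
```

A structure like this is what lets annotation guidelines and model labels stay consistent: annotators and classifiers both refer to the same fixed set of category names.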
In going from concept to computational modeling of bias, we have applied a multidisciplinary approach, tapping into diverse disciplines to develop our models and solutions. We are leveraging state-of-the-art NLP and machine learning techniques to capture the subtleties of natural language and incorporating academic and internal behavioral science research to maximize the tool’s impact.
Another important principle we’ve followed throughout the process of designing Inclusion Advisor is human-centered design. People have always been the focus of this project, from the earliest stages of development to production and beyond, ensuring Inclusion Advisor is delivering a positive, meaningful user experience.
Human-centered design has become an emerging theme in the past several years, as experts from both human-computer interaction and NLP are coming together to explore important design questions.
Human-centered design is critical for any AI product, but it’s especially important for us because of the subtlety and subjectivity of the problem we’re trying to address, and the fact that there is no precedent for this kind of tool.
We’ve made it a priority to integrate human insights: from running user surveys and interviews in the initial discovery phase, to iterative interface design, data collection and annotation, model evaluation, and live testing with our pilot users, humans have been involved every step of the way.
This effort extends far beyond the launch phase, as we continue to collect client and user feedback through both quantitative and qualitative methods. All of this feedback helps us continually refine the tool. This level of human involvement is tremendous and essential in addressing the challenges of our work. One example is how we’ve incorporated our clients’ educational materials on the topic of inclusive language into the tool.
The team took a data-driven approach when exploring the data and categorizing bias, meaning we developed bias categories based on actual recognition messages rather than trying to predict what biases might be present.
As the goal of a recognition message is to express gratitude and recognize colleagues for a job well done, we expected most messages to be positive in terms of tone and content. However, we found that a significant number of messages contained bias based on gender, rank, and other factors.
We were shocked by the number of messages that recognized women for their personal appearance and personality traits. One message that sticks out thanked the awardee for “always having a smile on her face” and referred to her “pretty red hair,” which is inappropriate language for a company-wide platform intended to recognize employees for their work contributions and achievements.
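The production models are statistical and capture far more subtlety, but the basic idea of flagging a message for appearance-focused language can be sketched with a toy keyword matcher. The phrase list and function below are purely illustrative, not Workhuman's actual approach:

```python
# Toy lexicon of appearance-focused phrases (illustrative only).
APPEARANCE_TERMS = [
    "pretty",
    "beautiful",
    "smile on her face",
    "smile on his face",
]

def flag_appearance_language(message: str) -> list:
    """Return the appearance-related phrases found in a message."""
    lowered = message.lower()
    return [term for term in APPEARANCE_TERMS if term in lowered]

msg = "Thanks for always having a smile on her face and her pretty red hair."
print(flag_appearance_language(msg))  # ['pretty', 'smile on her face']
```

A real system would need to go well beyond keyword matching, since bias often hinges on context (praising a colleague's "energy" can be neutral or loaded), which is why the team uses NLP models rather than fixed lexicons.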
Inclusion Advisor has been launched to hundreds of thousands of employees at several companies (so far), and we are planning to roll it out to our base of 6+ million users next. Clients and users have provided incredibly helpful feedback, as have our colleagues at Workhuman, who worked hard to gather feedback and develop solutions to optimize the user experience.
The team is especially proud of the impact Inclusion Advisor is having; when a message is flagged for bias, people choose to edit their message 65% of the time, and they remove the bias 55% of the time. This means that Inclusion Advisor is effective in helping remove unconscious bias more than half the time it is found.
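Those figures describe a simple funnel over flagged messages. Computed from hypothetical event counts (the numbers below are made up to match the reported 65% and 55% rates), the calculation would look something like:

```python
def funnel_rates(flagged: int, edited: int, bias_removed: int) -> dict:
    """Edit and bias-removal rates over messages flagged for bias."""
    return {
        "edit_rate": edited / flagged,        # share of flagged messages edited
        "removal_rate": bias_removed / flagged,  # share where bias was removed
    }

# Illustrative counts consistent with the reported rates.
rates = funnel_rates(flagged=1000, edited=650, bias_removed=550)
print(rates)  # {'edit_rate': 0.65, 'removal_rate': 0.55}
```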
Merck, one of our partners for the pilot version of Inclusion Advisor and recipient of the Brandon Hall Gold Award for "Best Advance in Rewards and Recognition Technology" in 2021, has found the tool an excellent complement to their DE&I education and programs.
Celeste Warren, Vice President of Global Diversity and Inclusion at Merck, said: “It helps us recognize people and ensure we’re using words that are inclusive and instill confidence, and it really helps people feel good about their reward.”
In a user feedback interview, one user said, “Not only did I change [my language] here, I’m more mindful when I’m writing emails today. I use the same words, and I understand how it can be perceived.”
Receiving feedback like this is not only personally rewarding for everyone working on the tool, but also shows that Inclusion Advisor is something people want to use. Users are copying and pasting emails, instant messages, and other forms of communication into Inclusion Advisor because they see the value of this kind of tool.
We’re looking forward to integrating the tool into the many different channels of communication employees use in the workplace, including Teams, Outlook, Slack, and more.
Talent development and performance feedback are additional areas we’re exploring in terms of future expansion. Finally, we’re working on rolling out Inclusion Advisor Advanced, which will offer improved coverage and accuracy in detecting bias and provide hyper-specific advice to help users make their language more inclusive.
About the author
Dr. Yi Chu