In my previous post, "Bias in AI", I gave an explanation of how bias can arise in AI.  I aimed to give the impression that bias, particularly prejudicial bias, can only be addressed through company-wide solutions.  In this blog post I want to offer some suggestions on how companies can address prejudicial biases.

Types of biases

To structure the discussion, I organize the types of bias into a triangle that views sources of bias from a company's perspective.  At the base are ingrained, societal biases, which can influence many areas of a business.  Next come business-related practices, and finally those related to machine learning.  The order suggests where the most effort is needed to address bias, so I will follow this order going forward. 


Societal bias 

A company should not dismiss extrinsic, societal bias.  Business leaders may be tempted to say:

“We hire fewer female developers because there are fewer female developers on the market”.

While this may be true, most of us recognize that it should not be this way. 

Institutions as well as individuals should take responsibility for making a positive impact on society.  For example, a tech firm can support its female developers in community outreach that brings tech to school-age girls.  I have been impressed with the outreach that I see at TheVentury.  

We have female developers and project leaders who engage with school-age girls to show that the tech field can be a place for them.  We also engage with organizations like Female Factor & Female Founders to support women who are founders or hold influential roles in their companies.  Supporting outreach not only helps to reduce biases, but also generates exposure for the company.    

 

Bias in business practices 

Existing biases can easily enter business practices unless they are given active consideration.  Hiring is an obvious example.  Certain groups may not be well represented in the employee pool, and unless a company puts effort into reaching out to such groups, they will continue to be underrepresented.  It is important to recognize that diversity brings value to the company. 

Many studies have shown that having a diverse team improves performance and results.  

In addition to hiring, businesses should think carefully about how societal biases relate to their products or services.  Project goals should be critiqued for potential biases.  Questions like "Who is excluded by this product?", "Which groups will have difficulty accessing this service?", and "How can we ensure that our product reaches a diverse audience?" can help to address biases. 

I don’t aim to give a complete prescription here because it depends on the business model, but consideration should be given in this manner to reduce bias.  


Bias in the development process 

Biases can be introduced at the development stage as well.  The most common causes are tight development schedules and hard deadlines.  When developers are under pressure to deliver, they focus on the immediate outcomes and are unable to look at the broader picture.

To address this, developers should be given sufficient time to consider broader implications of projects and how bias can enter. 

Starting with a diverse team will also help to identify potential biases.  From the project management perspective, addressing bias should be thought of in the same way as testing. 

It requires more resources up front, but ensures a more reliable and extensible product in the end. 

Identifying bias through data analytics

Since bias enters AI principally through the data, understanding the data is a key component of identifying and mitigating bias.  When projects are under pressure, there may not be sufficient time to explore and understand the data.  It is easy to shove data into a model, measure the performance, and move on. 

The real work should be to understand the data, the model, and the entire process in order to identify any issues, such as bias. 

Therefore, time and resources need to be dedicated to these important elements of data science.   
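As a sketch of the kind of exploration meant here, the following checks group representation and per-group outcome rates in a small dataset before any model is trained.  The records are entirely hypothetical and stand in for whatever attributes and labels your project actually uses:

```python
from collections import Counter

# Hypothetical hiring records: (group, hired) pairs standing in for real data.
records = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 0),
    ("male", 1), ("male", 0), ("male", 0), ("male", 1),
]

# Step 1: how well is each group represented in the data?
counts = Counter(g for g, _ in records)
total = len(records)
representation = {g: n / total for g, n in counts.items()}

# Step 2: what fraction of each group carries a positive label?
# A large gap here is a warning sign before any model is trained.
base_rates = {
    g: sum(h for gg, h in records if gg == g) / counts[g]
    for g in counts
}

print(representation)  # female ≈ 0.33, male ≈ 0.67
print(base_rates)      # female: 0.25, male: 0.5
```

Checks this simple will not catch every problem, but they make a skewed dataset visible before it is baked into a model.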

Tech solutions

When I started researching bias in AI, my aim was to understand and expand on the tech solutions.  However, it became clear to me that tech solutions are just a bandage over a deeper wound. 

Yes, tech solutions can help you bring a better product to market, but this alone cannot address the existing biases. 

To really solve bias means to address it at every level as I described above. 

Only then should you feel confident to apply a tech solution.   


Mathematical definitions

Biases are measured through formulas that mathematically capture our concept of bias.  For example, you can calculate the fraction of male or female hires relative to the pool of applicants. 

If they are equal, or nearly equal, then there is no bias by that measure.  There are many ways to mathematically define bias, and I could dedicate an entire blog post to the definitions, but fortunately someone already has.  If you are more interested in the mathematical definitions, check out this blog post.
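The hiring-fraction check described above can be sketched in a few lines of Python.  The outcomes here are invented, and the 0.8 threshold is the common "four-fifths rule" of thumb, not a universal standard:

```python
def selection_rate(hired, group, label):
    """Fraction of applicants from the given group who were hired."""
    members = [h for h, g in zip(hired, group) if g == label]
    return sum(members) / len(members)

# Hypothetical outcomes: 1 = hired, 0 = rejected.
hired = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
group = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

rate_m = selection_rate(hired, group, "m")  # 3/5 = 0.6
rate_f = selection_rate(hired, group, "f")  # 1/5 = 0.2

# Disparate impact: the ratio of the two rates.  The "four-fifths rule"
# of thumb flags values below 0.8 as potentially discriminatory.
disparate_impact = rate_f / rate_m
print(disparate_impact)  # ≈ 0.33, well below 0.8 -> flagged
```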

Each measurement of bias tells you something different about the model, so we normally check against multiple formulas.  It is very possible that some formulas will show bias while others will not. 

The data scientists along with project managers must decide if further action should be taken based on the full set of results.  
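To illustrate how formulas can disagree, here is a minimal sketch (with invented predictions) in which demographic parity flags a gap between groups while equal opportunity does not:

```python
# Hypothetical model outputs: each tuple is (group, true_label, predicted).
preds = [
    ("a", 1, 1), ("a", 1, 1), ("a", 0, 0), ("a", 0, 1),
    ("b", 1, 1), ("b", 0, 0), ("b", 0, 0), ("b", 0, 1),
]

def positive_rate(rows):
    """Fraction of the group receiving a positive prediction."""
    return sum(p for _, _, p in rows) / len(rows)

def true_positive_rate(rows):
    """Fraction of truly qualified members receiving a positive prediction."""
    pos = [p for _, y, p in rows if y == 1]
    return sum(pos) / len(pos)

a = [r for r in preds if r[0] == "a"]
b = [r for r in preds if r[0] == "b"]

# Demographic parity compares raw selection rates...
parity_gap = positive_rate(a) - positive_rate(b)       # 0.75 - 0.5 = 0.25
# ...while equal opportunity compares rates among qualified candidates.
tpr_gap = true_positive_rate(a) - true_positive_rate(b)  # 1.0 - 1.0 = 0.0
```

One metric reports a 25-point gap; the other reports none.  Which matters more depends on the application, which is exactly why humans must interpret the full set of results.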

It is important to note that the AI cannot discover our biases for us – it knows nothing about human values.  We therefore can only measure bias with respect to attributes that we have identified.  This is an important weakness in relying on a tech solution.  

Bias mitigation

If biases are identified, there are a few possible remedies.  The most obvious solution is to not use the data source or discard the model, but this is often not realistic given the needs for data and the demands of customers.  Furthermore, even if other data sources are available and other models can be developed, they may have the same, or different biases.   

There are essentially three places where we can attempt to mitigate bias.  The most direct method is to modify the data source.  

Since the data contain all of the information and correlations, correcting bias there addresses it at its source.   

In some cases, the data may not be modifiable, perhaps because it is provided by another source.  In this instance, the model itself can be modified instead.  Indeed, the model is just a large set of weights that approximates the data, and by adjusting those weights, the biases can be corrected.   

The last method is to modify the output of the model, in the case where one has no access to the data or the model creation.  This method corrects the output to remove bias, but has the least flexibility.   

Tools

There are a number of enterprise tools available to address bias.  I would like to highlight an open-source framework developed by IBM called AI Fairness 360.  This framework covers the full tech solution, including bias measurement using many different metrics and algorithms that apply the three types of mitigation methods.  You can find some demos on their website. 

The framework can be used through its API, allowing anybody with some tech experience to plug in their data, measure bias, and apply mitigation methods. 

I will emphasize that such a tool should not be used blindly.  It can help in understanding your data and models.  If you use a bias mitigation method, you still need to understand what modifications were made and ensure that performance is maintained in all aspects.

At TheVentury we have already explored using AI Fairness 360, so if you would like to evaluate and mitigate bias in your data, let us know!  We can help with not only the technical aspects, but also with how to address bias in the broader ways that I discussed above.

The Takeaways

Bias is a deeply ingrained problem that does not have simple solutions.  It results from historical norms, is reflected in human behavior, and is therefore baked into the data.  Indeed, as our understanding of bias has improved over time, we recognize our own biases and correct our behavior. 

This is unfortunately a slow process, so we need some solutions to overcome biases in data.  Importantly we should also work to speed up the process by addressing bias directly.   

Primarily this means that time and resources need to be dedicated to understanding and addressing biases. 

  • Data scientists should have time to understand the data and to develop models that are explainable. 
  • Development and project goals should be evaluated for possible bias.
  • Hiring a diverse team is a good step towards confronting these biases.  
  • Even overall business goals should be evaluated in the context of bias.  
  • Finally, we should all take responsibility for addressing bias.  One way companies can contribute is by supporting their employees in outreach programs.   

This makes economic sense as well.  As I pointed out in my first post on this topic, large AI projects have been abandoned because of unmanageable bias.  By dedicating resources upfront, you can avoid losses due to abandoned projects. 
