In my previous post “Bias in AI”, I gave an explanation of how bias can arise in AI. I aimed to convey that bias – particularly prejudicial bias – can only be addressed through company-wide solutions. In this blog post I want to offer some suggestions on how companies can address prejudicial biases.
Types of biases
To direct the discussion, I organize the types of bias into a triangle that views sources of bias from the perspective of a company. At the base are ingrained, societal biases which can influence many areas of a business. Next are the business-related practices and finally those related to machine learning. The order suggests where the most effort is needed to address bias, so I’ll follow this order going forward.
“We hire fewer female developers because there are fewer female developers on the market”.
While this may be true, everyone knows that it should not be this way.
Institutions as well as individuals should take responsibility for making a positive impact on society. For example, a tech firm can support its female developers in community outreach to bring tech to school-age girls. I have been impressed with the outreach that I see at TheVentury.
We have female developers and project leaders who engage with school-age girls to show that the tech field can be a place for them. We also engage with organizations like Female Factor & Female Founders to support women who are founders or hold influential roles in their companies. Supporting outreach not only helps to reduce biases, but also generates exposure for the company.
Bias in business practices
Many studies have shown that having a diverse team improves performance and results.
In addition to hiring, businesses should think carefully about how societal biases relate to their products or services. Project goals should be critiqued for potential biases. Questions like “Who is excluded by this product?”, “Which groups will have difficulty accessing this service?”, and “How can we ensure that our product reaches a diverse audience?” can help to address biases.
I don’t aim to give a complete prescription here, since the right questions depend on the business model, but this kind of scrutiny is what reduces bias.
Bias in the development process
To address this, developers should be given sufficient time to consider broader implications of projects and how bias can enter.
Starting with a diverse team will also help to identify potential biases. From the project management perspective, addressing bias should be thought of in the same way as testing.
It requires more resources up-front, but ensures a more reliable and extensible product in the end.
Identifying bias through data analytics
The real work should be to understand the data, the model, and the entire process in order to identify any issues, such as bias.
Therefore, time and resources need to be dedicated to these important elements of data science.
Yes, tech solutions can help you bring a better product to market, but this alone cannot address the existing biases.
To really solve bias means to address it at every level as I described above.
Only then should you feel confident to apply a tech solution.
If the outcomes for each group are equal or nearly equal, then there is no bias. There are many ways to mathematically define bias, and I could dedicate an entire blog post to the definitions, but fortunately someone already has. If you are more interested in the mathematical definitions, check out this blog post.
Each measurement of bias tells you something different about the model, so we normally check against multiple formulas. It is entirely possible that some formulas will show bias while others will not.
The data scientists along with project managers must decide if further action should be taken based on the full set of results.
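To make the formulas concrete, here is a minimal sketch of two common measurements, statistical parity difference and the disparate impact ratio, computed with NumPy on hypothetical model predictions. The data, the group encoding (1 = privileged, 0 = unprivileged), and the 0.8 threshold (the so-called “four-fifths rule”) are illustrative assumptions, not part of any specific toolkit.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    # P(favorable | unprivileged) - P(favorable | privileged); 0 means parity.
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact(y_pred, group):
    # Ratio of favorable-outcome rates: unprivileged / privileged; 1 means parity.
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Hypothetical predictions: 1 = favorable outcome.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

spd = statistical_parity_difference(y_pred, group)  # -0.20
di = disparate_impact(y_pred, group)                # ~0.67

print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact ratio: {di:.2f}")
# A disparate impact ratio below 0.8 is a common (illustrative) red flag.
```

Here both measures point the same way, but on real data a model can look acceptable under one formula and biased under another, which is exactly why checking several of them matters.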
It is important to note that the AI cannot discover our biases for us – it knows nothing about human values. We therefore can only measure bias with respect to attributes that we have identified. This is an important weakness in relying on a tech solution.
There are essentially three places where we can attempt to mitigate bias. The most direct method is to modify the data source: since the data contain all of the information and correlations, corrections made there carry through to everything trained on them.
In some cases, the data may not be modifiable, perhaps because they are provided by another source. In this instance, the model itself can be modified instead. Indeed, the model is just a large set of weights that approximate the data; by adjusting those weights, the biases can be corrected.
The last method is to modify the output of the model, in the case where one has no access to the data or the model creation. This method corrects the output to remove bias, but has the least flexibility.
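As an example of the first approach (modifying the data), here is a sketch of reweighing, a standard pre-processing technique: each training example gets the weight w(g, y) = P(g)·P(y) / P(g, y), which makes group membership and the favorable label statistically independent under the weighted distribution. The labels and group assignments below are hypothetical.

```python
import numpy as np

def reweighing_weights(labels, group):
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y).

    Under these weights the favorable label occurs at the same
    weighted rate in both groups, without changing any feature values."""
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            mask = (group == g) & (labels == y)
            p_joint = mask.sum() / n
            if p_joint == 0:
                continue  # no examples in this cell; nothing to weight
            weights[mask] = (group == g).mean() * (labels == y).mean() / p_joint
    return weights

# Hypothetical training set: favorable labels (1) are rarer
# in the unprivileged group (group == 0).
labels = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

w = reweighing_weights(labels, group)
for g in (0, 1):
    m = group == g
    rate = (w[m] * labels[m]).sum() / w[m].sum()
    print(f"group {g}: weighted favorable rate = {rate:.2f}")
```

After reweighing, both groups have the same weighted favorable rate, so a model trained with these sample weights no longer sees the group imbalance. The in-model and post-processing approaches follow the same spirit but intervene later in the pipeline.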
The framework can be used through its API, allowing anybody with some tech experience to plug in their data, measure bias, and apply mitigation methods.
I will emphasize that such a tool should not be used blindly. It can help in understanding your data and models. If you use a bias mitigation method then you still need to understand what modifications were made and ensure that the performance is maintained in all aspects.
At TheVentury we have already explored using AI Fairness 360, so if you would like to evaluate and mitigate bias in your data, let us know! We can help with not only the technical aspects, but also how to address bias in the broader ways that I discussed above.
Bias is a deeply ingrained problem that does not have simple solutions. It results from historical norms, is reflected in human behavior, and is therefore baked into the data. As our understanding of bias improves over time, we recognize our own biases and correct our behavior.
This is unfortunately a slow process, so we need some solutions to overcome biases in data. Importantly we should also work to speed up the process by addressing bias directly.
Primarily this means that time and resources need to be dedicated to understanding and addressing biases.
- Data scientists should have time to understand the data and to develop models that are explainable.
- Development and project goals should be evaluated for possible bias.
- Hiring a diverse team is a good step towards confronting these biases.
- Even overall business goals should be evaluated in the context of bias.
- Finally, we should all take responsibility for addressing bias. One way companies can contribute is by supporting their employees in outreach programs.
This makes economic sense as well. As I pointed out in my first post on this topic, large AI projects have been abandoned because of unmanageable bias. By dedicating resources upfront, you can avoid losses due to abandoned projects.