Addressing Bias in AI for Better Marketing
Artificial Intelligence (AI) appears to be going through a bit of a renaissance. With ChatGPT, DALL-E, and Midjourney, to name just a few, exploding into the public eye, it seems every tech company in Silicon Valley is becoming more public about its investments in AI.
This is by no means a new initiative. AI is, and has been, widely used in fields from healthcare to financial services to marketing, where businesses have long relied on it to work more efficiently. But with the boom in awareness of these new tools, it has become even more apparent that the likes of Google, Microsoft, Apple, and Amazon are placing big bets on the technology's future.
This also raises concerns. AI algorithms are created by humans, and we are well aware that humans are biased creatures. So how does this translate into an algorithm? Where does the human bias get filtered out of the system? Is it even being addressed?
AI algorithms, machine learning models, and even the datasets that feed them all run the risk of inheriting biases; in fact, they are very likely to. That risk is aggravated by the pressure to ship technology that produces results quickly. Bias was always a possibility, but when companies are racing to keep up with the boom, it becomes harder to prevent AI from inheriting the biases of its creators. Those biases can shape decisions about how a customer is marketed to, and first impressions and the overall customer experience correlate directly with customer loyalty.
Human biases are well documented, from association tests that reveal biases we may not even be aware of, to field experiments that show how much those biases can affect outcomes. The Decision Lab lists close to 100 cognitive biases that can influence results, and the list keeps growing: the "Google Effect," a more recent entry, describes the digital amnesia that comes from offloading memory to search engines. Over the past few years, society has started to wrestle with just how much these human biases can make their way into artificial intelligence systems. At a time when many companies are looking to deploy AI across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.
Amazon, for example, stopped using a hiring algorithm after finding it favored applicants based on words like "executed" or "captured," which were more commonly found on men's resumes.
Joy Buolamwini at MIT, working with Timnit Gebru, found that facial analysis technologies had higher error rates for minorities, particularly minority women, potentially due to unrepresentative training data (Harvard Business Review). Flawed data sampling, in which some groups are over- or underrepresented in the training data, is another source of bias.
To make sure your AI is interpreting data with as little bias as possible, you have to scrutinize its results. But be aware that you alone are not going to be the best judge. Here are a few steps to help begin removing bias from AI:
Diversify Your Team
One of the most effective ways to remove human bias from AI is to diversify your team. When people from different backgrounds work together, they bring different perspectives and experiences to the table, which can help identify and eliminate biases that may not be apparent to everyone. A diverse team can review your data for subtle biases that could otherwise seep into your messaging and campaigns.
Use Representative Data
Another way to remove human bias from AI is to use representative data. This means ensuring that the data used to train the AI algorithm is diverse and reflects the real world. For example, if you are creating a customer journey algorithm, you should ensure that the training data includes people of different ages, genders, races, and incomes, in realistic proportions. This helps prevent the algorithm from being biased against certain groups of people.
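One simple way to check whether training data reflects realistic proportions is to compare the share of each group in your dataset against a reference distribution (for example, census figures for your market). The sketch below illustrates the idea with hypothetical customer records and a made-up `age_band` attribute; the field names, reference shares, and tolerance are all assumptions for the example, not a prescribed standard.

```python
from collections import Counter

def representation_gaps(records, attribute, reference, tolerance=0.05):
    """Compare each group's share of the training data against a
    reference distribution and flag groups outside the tolerance.

    records   -- list of dicts, one per training example
    attribute -- key to audit, e.g. "age_band"
    reference -- dict mapping group -> expected share (sums to 1.0)
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Hypothetical customer records, heavily skewed toward younger customers
records = (
    [{"age_band": "18-34"}] * 70
    + [{"age_band": "35-54"}] * 20
    + [{"age_band": "55+"}] * 10
)
# Hypothetical census-style reference shares for the target market
reference = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

print(representation_gaps(records, "age_band", reference))
```

Here every age band is flagged, because the sample over-represents 18-34s and under-represents everyone else. A real audit would repeat this check for each sensitive attribute and rebalance or re-sample the data before training.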
Regularly Review and Audit Your Data
Finally, it is important to regularly review and audit your data to ensure that the conclusions your algorithms draw are accurate and deliver the intended outcomes. This can involve testing the algorithm with different data sets and scenarios to identify any biases. By regularly reviewing and auditing your AI algorithms against their results, you can ensure that you are not disadvantaging any given group.
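An audit like this often starts by comparing how often each group receives a favourable decision from the model, for example, being shown a premium offer. The sketch below computes per-group selection rates and applies the "four-fifths rule" (flagging groups whose rate falls below 80% of the best group's rate), a common first screening test. The logged `outcomes` data, field names, and group labels are hypothetical.

```python
def selection_rates(outcomes, group_key, decision_key):
    """Per-group rate of favourable decisions (e.g. shown a premium offer)."""
    totals, favourable = {}, {}
    for row in outcomes:
        g = row[group_key]
        totals[g] = totals.get(g, 0) + 1
        favourable[g] = favourable.get(g, 0) + (1 if row[decision_key] else 0)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best group's rate -- the 'four-fifths rule' screening test."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best and r / best < threshold]

# Hypothetical campaign decisions logged by a targeting model
outcomes = (
    [{"group": "A", "offered": True}] * 45
    + [{"group": "A", "offered": False}] * 55
    + [{"group": "B", "offered": True}] * 30
    + [{"group": "B", "offered": False}] * 70
)

rates = selection_rates(outcomes, "group", "offered")
print(rates)                          # group A: 0.45, group B: 0.30
print(disparate_impact_flags(rates))  # ['B'], since 0.30 / 0.45 < 0.8
```

Running a check like this on every campaign cycle, rather than once at launch, is what turns a one-off review into the ongoing audit described above.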
In conclusion, removing human bias from AI when reviewing data is essential to ensure that the algorithms are fair and accurate. By diversifying your team, using representative data, and regularly reviewing and auditing your algorithms against their results, you can ensure that your AI algorithms more closely reflect real-world scenarios.