At this moment in history it’s impossible not to see the problems that arise from human bias. Now magnify that bias by compute and you start to get a sense of just how dangerous human bias amplified by machine learning can be. The damage can be twofold:
- Influence. “If the AI said so, it must be true.” People trust the outputs of AI, so if human bias slips into the training data unnoticed, the model can compound the problem by spreading that bias to more people;
- Automation. Sometimes AI models are plugged into a programmatic function, which can turn a one-off bias into bias automated at scale.
But there is potentially a silver machine-learned lining. Because AI can help expose truth inside messy data sets, it’s possible for algorithms to help us better understand bias we haven’t already isolated, and spot ethically questionable ripples in human data so we can check ourselves.
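As a rough illustration of what spotting those ripples can look like in practice, here is a minimal sketch of a disparate-impact audit. The data, group labels, and threshold interpretation are all invented for illustration; real audits use richer fairness metrics and real decision logs.

```python
# Hypothetical audit: compare a model's approval rates across groups.
# All data below is invented for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group approval rate to the highest.
    Values well below 1.0 flag a disparity worth investigating."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Invented decision log: group A approved 8/10 times, group B 4/10 times.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 4 + [("B", False)] * 6)

rates = selection_rates(decisions)
print(rates)                    # {'A': 0.8, 'B': 0.4}
print(disparate_impact(rates))  # 0.5 -- a large gap, worth a closer look
```

The point isn’t the specific metric; it’s that once decisions flow through code, checking them for skew becomes a few lines of analysis rather than guesswork.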