We love our prediction machines. 

Modern statistics has helped us make more informed decisions. It is supposed to make our judgments more objective by moving us away from gut instinct and the claims of self-appointed sages. But we often forget how deeply flawed, and how harmful, statistical tools can be.

Take data as an example. How we collect data shapes the decisions we make with it. But what if we left something out?

Melinda Gates, my former employer, points out how deeply sexist data sets can be, because researchers have often failed to apply a gender lens to their work. In a number of countries I have worked in, the data was riddled with appalling holes because women had difficulty accessing banks, medical facilities and the internet. As she told Business Insider, “We think data is objective and that’s one of the things that surprised Bill and I the most”.
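To make this concrete, here is a minimal sketch in Python. The numbers are entirely invented; the point is only that when one group is badly underrepresented in a sample, a naive pooled estimate is dominated by the majority group and quietly mispredicts everyone else.

    # A minimal sketch (hypothetical numbers only) of how a data hole skews
    # what a model "learns". Women are underrepresented 30-to-1 here, so a
    # pooled estimate is dominated by the majority group.
    import random

    random.seed(0)

    # Hypothetical repayment rates for a loan product. The groups genuinely
    # differ, but the sample barely contains any women.
    men   = [random.gauss(0.80, 0.05) for _ in range(900)]
    women = [random.gauss(0.60, 0.05) for _ in range(30)]

    sample = men + women
    pooled_estimate = sum(sample) / len(sample)  # a naive one-number "model"
    women_mean = sum(women) / len(women)

    print(f"pooled estimate:       {pooled_estimate:.3f}")  # close to the male mean
    print(f"actual mean for women: {women_mean:.3f}")
    print(f"error for women:       {pooled_estimate - women_mean:+.3f}")

Any real model trained on such a sample inherits the same blind spot: it fits the people who made it into the data.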

How worried should we be?

AI is on a path similar to the one traditional statistical tools travelled when they were first adopted: we should expect widespread uptake in a short amount of time.

As the economists Ajay Agrawal, Joshua Gans and Avi Goldfarb argue in their book Prediction Machines: The Simple Economics of Artificial Intelligence, recent advances in AI and computing power mean the technology will become far more pervasive. “Not only are we going to start using a lot more of it, but we are going to see it emerge in surprising new places.”

But AI requires us to face the same challenges. Few understand how the technology works, and many have begun to disregard its limitations. Because of how quickly AI is being adopted, we have an even greater obligation to grapple with its failings; not doing so can lead to serious inequities. And there is no one better placed to explain the limitations of AI than data scientists themselves.

I have noticed that there is a profound disconnect between those who understand the technology and those who don’t. The former are deeply cautious, placing significant qualifications on any output. They repeatedly point out the problems of imperfect data sets and the limitations of each type of algorithm.

From those who don’t understand the technology, you rarely hear such nuance. This is dangerous, as executives will have a strong incentive to blame the algorithms for unpopular decisions.

The Financial Times columnist Andrew Hill writes, “Unchecked, one extreme outcome would be a sort of strategy singularity, in which slavish deference to the algorithm destroys managers’ ability to evaluate plans at all.”

Technology to advance all, not the few

These biases will have a disproportionate impact on marginalized groups. Recent research has demonstrated significant gender and racial biases in AI. Facebook’s ad algorithms, for instance, were found to be biased against minorities, who were more likely to be shown ads for jobs as taxi drivers and janitors.

Criminal justice is another area in which there are deep concerns about bias. The US COMPAS algorithm, used to inform sentencing and bail decisions, was found to be racially biased: black defendants were far more likely than white defendants to be wrongly flagged as high risk.
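What did that bias look like in practice? Auditors compared the tool’s error rates across racial groups, in particular how often defendants who did not reoffend had nevertheless been flagged as high risk. Here is a minimal sketch of that kind of check in Python; the records are invented for illustration, not COMPAS data.

    # Sketch of a disparate-error-rate audit: compare false positive rates
    # across groups. Each record is (group, flagged high risk, reoffended).
    # The data below is hypothetical.
    from collections import defaultdict

    records = [
        ("A", True,  False), ("A", True,  False), ("A", True,  True),
        ("A", False, False), ("B", True,  False), ("B", False, False),
        ("B", False, True),  ("B", False, False),
    ]

    false_pos = defaultdict(int)       # flagged high risk but did not reoffend
    non_reoffenders = defaultdict(int) # everyone who did not reoffend

    for group, high_risk, reoffended in records:
        if not reoffended:
            non_reoffenders[group] += 1
            if high_risk:
                false_pos[group] += 1

    for group in sorted(non_reoffenders):
        rate = false_pos[group] / non_reoffenders[group]
        print(f"group {group}: false positive rate = {rate:.2f}")

If the rates diverge sharply between groups, as they did for black and white defendants in the COMPAS analysis, the tool is punishing one group’s errors far more than the other’s.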

If we want to embrace the widespread use of AI, then we need to make an equal effort to understand and address these biases. I think we can start with a few basic steps.

  1. Recognize the problem and acknowledge that AI is both empowering and flawed. 
  2. Make efforts to give marginalized voices a greater say in the development and deployment of these tools. A recent report from NYU points out that the dominance of white males in the field risks reinforcing existing biases. Actively recruiting and supporting women, people of other gender identities and people of colour in the field is a must. 
  3. Be transparent. Developing platforms for dialogue and feedback is critical. Communicate how algorithms are deployed and invite feedback from a broad base of stakeholders on what could be done better. 

So much of the anxiety around AI tools stems from a lack of clarity on how these algorithms work and how they are used. Worse, there are few mechanisms in place for providing such feedback directly to decision makers.

These suggestions aren’t new. We have turned to them each time an organization has come under scrutiny for failing to address such biases. Take university admissions, for example: multiple Ivy League schools currently face lawsuits alleging discrimination in their admissions processes, despite all of the data applicants are required to submit. Firms deploying AI tools should pay attention, lest they be swept up in a similar controversy.
