AI is having a huge impact on our society. According to the McKinsey Global Institute, artificial intelligence has the potential to create an additional thirteen trillion dollars in worldwide economic activity by 2030, which adds up to an additional 1.2 percent of GDP growth per year. But as more and more tasks are performed by AI algorithms, we are discovering the degree to which AI can magnify both our most illuminating insights and our most pernicious prejudices. To achieve the potential inherent in AI, it’s essential that we understand that these technologies – particularly machine learning – are most effective and useful when they’re less artificial…and more intelligent.
The Problem with Focusing on Big Data
The rapid rise in AI is currently being driven largely by advances in machine learning, which, in turn, is being driven by the increasing availability of data. The basic concept is to teach a computer program to perform a task – such as translating text from one language to another, or recognizing people in photos – so the mantra for machine learning is “the more data, the better.” But this focus on the power of Big Data can foster the mistaken belief that data represents the unbiased truth.
Take, for example, the case of data dashboards, which can be valuable tools for executives and frontline managers to gain snapshots of key performance indicators, marketing metrics, and other business barometers that allow them to take swift action rather than having to wait for weekly or monthly reports.
As Harvard Business Review points out, data dashboards can mislead in several ways. One occurs when the IT specialist or consultant who designed the dashboard is not intimately familiar with the company’s business model and priorities, which will likely result in a dashboard full of information that isn’t particularly relevant. Other misunderstandings arise from the way data is presented in a dashboard. And, as commonly happens with data visualizations, people tend to see causality where it doesn’t exist.
This example highlights the fact that humans tend to jump to conclusions and make other logical errors when interpreting data, and this is a problem that more data and fancier tools won’t necessarily solve.
AI in Online Advertising
Deep learning can make good predictions of future events by studying huge volumes of data that describe the past and present. One area where this technology is generating huge profits is online advertising. Since many online ads are paid for only if someone clicks on them, companies directly benefit from matching ads more closely to what consumers want. They do this by tracking our online lives to learn our interests and desires, and Google, Microsoft, and Alibaba are all using deep learning to improve their click prediction, making targeted advertising the most lucrative application of AI in the tech industry.
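At its core, click prediction is a binary classification problem: given features describing a user and an ad, estimate the probability of a click. The sketch below illustrates that idea with a hand-rolled logistic regression on synthetic data; real ad systems use deep networks trained on billions of examples, and the single “interest match” feature here is purely illustrative.

```python
# Click prediction as binary classification: a minimal, illustrative
# sketch (not a production system). We predict P(click | features)
# with logistic regression trained by plain gradient descent.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data: the single feature is how closely the ad matches
# the user's interests (0..1); clicks are likelier for good matches.
data = []
for _ in range(1000):
    match = random.random()
    clicked = 1 if random.random() < 0.1 + 0.6 * match else 0
    data.append((match, clicked))

# Fit weight and bias by minimizing logistic loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# A well-matched ad should earn a higher predicted click probability.
p_low = sigmoid(w * 0.1 + b)
p_high = sigmoid(w * 0.9 + b)
print(p_low, p_high)
```

The model learns the positive relationship between interest match and clicking, which is exactly the signal ad platforms monetize: better-targeted ads get clicked more, and only clicked ads get paid for.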
Online advertising, however, also provides examples of tech run amok. One such case was the finding of Google search bias toward the company’s own services, for which European Union regulators fined Google $2.7 billion in June 2017.
Even more troubling was the case of big-name companies inadvertently funding terror. In February 2017, reports of companies such as Mercedes-Benz, Marie Curie, Honda, and Disney helping to support groups such as ISIS, neo-Nazis, and other violent extremists served as a wake-up call to the dangers of programmatic advertising. This disturbing situation shined a light on how, by automating ad placement, the profit motive for media agencies can easily overcome the aim of delivering better results for their clients.
AI Bias: Magnifying Prejudices
One of the biggest problems tech companies are currently grappling with is that their AI has to be trained with data supplied by humans, and that data can be biased. This is a big problem because biases don’t just get baked into the software through the machine learning process; they become amplified. Computer scientists have shown that several large collections of labeled photos used to train image-recognition programs ended up magnifying gender bias: the software studying the photos and their labels formed an even stronger association between women and cooking than the photos themselves originally reflected. And, much more embarrassingly, in 2015, Google’s photo service tagged images of black people as gorillas.
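To see how a learned model can end up more skewed than its training data, consider a toy version of the cooking example. The numbers below are illustrative, not from the actual studies: if 70 percent of “cooking” images show women, a model that falls back on the majority label whenever an image is visually ambiguous will predict “woman” more than 70 percent of the time.

```python
# Illustrative sketch of bias amplification: a model trained on
# skewed labels predicts the majority label for ambiguous inputs,
# producing an output distribution MORE skewed than the data.
# All numbers are made up for illustration.

# Training data: (activity, gender) pairs with a 70/30 skew.
train = [("cooking", "woman")] * 70 + [("cooking", "man")] * 30
data_rate = sum(1 for _, g in train if g == "woman") / len(train)

# A majority-class "model": when visual cues are ambiguous, it
# outputs the gender most often seen with this activity in training.
majority = "woman" if data_rate > 0.5 else "man"

def predict(visible_gender, ambiguous):
    return majority if ambiguous else visible_gender

# Test images: same 70/30 ground truth, but half are ambiguous.
test = [("woman", False)] * 35 + [("woman", True)] * 35 \
     + [("man", False)] * 15 + [("man", True)] * 15

pred_rate = sum(1 for g, a in test if predict(g, a) == "woman") / len(test)
print(data_rate, pred_rate)  # 0.7 in the data, 0.85 in the predictions
```

The model predicts “woman” for 85 percent of the test images even though only 70 percent actually show women: every ambiguous case gets resolved toward the majority, amplifying the original skew rather than merely reproducing it.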
AI Medical Diagnoses: Amplifying Insights
Despite the way machine learning may not just perpetuate but can end up magnifying our biases, AI is also capable of picking up on the best of our human insight and expertise, amplifying our knowledge to help save human lives.
The Human Diagnosis Project is one of several AI platforms that are crowdsourcing medical diagnoses by funneling information provided by primary care doctors to the appropriate specialists, using natural language processing technology that identifies keywords. Specialists who have been recruited into the system determine the most likely diagnoses and recommend treatment. The network’s machine learning algorithms then validate each specialist’s findings by checking them against all the previously stored case reports, weighing each according to confidence level, and ultimately coming up with a single suggested diagnosis. The result is that every solved case makes the whole system smarter, and patients who may not have the time or financial resources to follow up with a specialist on their own get the care they need.
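One step in that pipeline – combining several specialists’ candidate diagnoses, weighted by confidence, into a single suggestion – can be sketched as follows. The Human Diagnosis Project’s actual algorithms are not public, so the aggregation rule, diagnosis names, and confidence values below are assumptions for illustration only.

```python
# Hypothetical sketch of confidence-weighted aggregation of
# specialist findings into one suggested diagnosis. The real
# system's method is not public; this is a simple stand-in.
from collections import defaultdict

def aggregate(findings):
    """findings: list of (diagnosis, confidence in [0, 1]) pairs
    from different specialists. Returns diagnoses ranked by total
    confidence, normalized to a rough probability distribution."""
    scores = defaultdict(float)
    for diagnosis, confidence in findings:
        scores[diagnosis] += confidence
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(d, s / total) for d, s in ranked]

# Example: three specialists weigh in on one case (made-up values).
findings = [
    ("migraine", 0.9),
    ("tension headache", 0.6),
    ("migraine", 0.7),
    ("sinusitis", 0.3),
]
suggestion = aggregate(findings)[0][0]
print(suggestion)  # migraine
```

Two independent high-confidence votes for the same diagnosis outweigh any single competing finding, which is the intuition behind pooling specialist opinions: agreement across experts carries more weight than any one expert alone.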
Meeting AI’s Full Potential with Human Wisdom
As we’ve seen, AI can create ethical and legal troubles, but it can also increase profits, provide valuable services, and even save lives. In other words, much like us humans, AI can do dumb things as well as be super smart.
Because machine learning has a way of absorbing and internalizing the lessons – including unintended ones – of whoever is providing the training material, artificial intelligence at present is a double-edged sword. Thus, it seems that the key to meeting AI’s full potential takes us back to the wisdom of the Ancient Greeks: Know thyself.