When Artificial Intelligence knows something about you

“Know thyself.”

Those were the words of advice offered up by philosopher, historian, and best-selling author Yuval Noah Harari during an on-stage conversation at Stanford University last week. The prolific writer (and agitator) has long been a critic of artificial intelligence applications that track, aggregate, and learn from our every move, gleaning insights about us that we are sometimes oblivious to ourselves.

“Get to know yourself better,” Harari said, restating the ancient maxim in more modern English, “because now, you’ve got competition.”

Stanford’s Human-Centered AI Institute, which aims to develop technologies to benefit humanity, co-sponsored the event. The conversation also featured computer science professor Fei-Fei Li, a pioneer in AI research and co-director of the multidisciplinary institute. Both speakers focused on what the future holds for AI, and how it can be utilized to “support rather than subvert” human interests. Not surprisingly, Harari and Li did not always see eye to eye on the best path forward or on the scope and severity of the harm AI can unleash.

One of Li’s suggestions, for example, was to develop AI systems that can explain their processes and decisions. But Harari argued that these technologies have become too complex to be explainable, and that this level of complexity can undermine our autonomy and authority.

While the conversation was mostly fruitful and productive, there were a few friendly jabs.

“I’m very envious of philosophers, because they can propose questions and crises, but they don’t have to answer them,” said Li. (Even Harari chuckled at that one.)

Perhaps Harari’s laughter came from knowing that he was about to offer some solutions, though even his simplest takeaway—“know thyself”—is easier said than done. The challenge of knowing ourselves better than AI systems do is best illustrated by an anecdote the philosopher shared himself. Harari told the audience that he didn’t realize he was gay until he was 21 years old. “I’m with myself 24 hours a day,” he said, yet an AI system could have inferred his sexual orientation faster than he did.

“What does it mean to live in a world where you can learn something so important about yourself from an algorithm?” he asked the audience. “And what if that algorithm doesn’t share [this information] with you but with others—advertisers or an authoritarian regime?”

The risks of AI knowing too much about us are real and are starting to be addressed—both by outside critics like Harari and increasingly, by engineers, educators, and other insiders like Li. But what about the risks of the flip side, when AI systems know too little about us, or about entire demographics?

Also last week, I attended Women Transforming Technology, an event that took place at the Palo Alto campus of technology company VMware. There, Joy Buolamwini, a researcher at the Massachusetts Institute of Technology’s Media Lab, discussed problems of bias in AI applications. Much of Buolamwini’s work has centered on the inability of facial recognition systems to accurately identify the faces of women and, to a much greater extent, people of color. As you can probably guess, these systems tend to have the hardest time recognizing the faces of women of color.

“These are the under-sampled majority of the world—women and people of color,” Buolamwini told her audience.

The bias in many facial recognition applications starts with the data sets used to train these AI systems. According to Buolamwini, the vast majority of the pictures fed into these self-learning systems are of subjects who are male and white. The benchmarks used to assess the accuracy of these systems, therefore, are also geared toward white, male faces. This has vast and potentially dangerous implications: Just imagine a self-driving vehicle that doesn’t detect someone with dark skin as accurately as it can “see” someone with light skin.
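To see why aggregate accuracy figures can hide this problem, here is a minimal, hypothetical sketch of the kind of disaggregated audit Buolamwini’s research argues for: scoring a system separately for each demographic group instead of reporting one overall number. The detector outputs, labels, and group names below are invented for illustration and are not drawn from her work.

```python
# Hypothetical illustration (not Buolamwini's actual code or data):
# auditing a face detector's accuracy separately for each demographic group.
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Return per-group accuracy for a binary detection task."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy data: every image contains a face (label 1), but the detector
# misses most faces in one group.
preds  = [1, 1, 1, 1, 0, 1, 0, 0]
labels = [1, 1, 1, 1, 1, 1, 1, 1]
groups = ["lighter"] * 4 + ["darker"] * 4

print(subgroup_accuracy(preds, labels, groups))
# {'lighter': 1.0, 'darker': 0.25} -- the overall accuracy of 62.5%
# would mask the disparity that a per-group audit makes visible.
```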

It is these types of risks that led Buolamwini to start the Algorithmic Justice League, an organization aimed at highlighting and mitigating bias in AI systems. The “collective,” as the M.I.T. researcher calls it, brings together coders, activists, regulators, and others to raise awareness of these technological and societal issues.

Buolamwini’s work has likely led to improvements. During her talk, she pointed to recent gains in the accuracy with which facial recognition systems from IBM, Facebook, and other companies detect non-white and non-male subjects. But here’s the rub: While Buolamwini is clearly pushing for further improvements in these systems, she is also very worried about the applications of facial recognition technologies that do know enough about all of us.

“You can have accurate facial recognition and put it on some drones, but it might not be the world you want to live in,” Buolamwini told me during a sit-down interview after her talk.

Buolamwini gave another example: If a system is biased and it’s being deployed for law enforcement purposes, you can’t justify using that system. Now, let’s say you’ve fixed that bias. Then, the question becomes, in Buolamwini’s words, “Do we want to live in a mass surveillance state?”

That is one question I’m pretty sure that Buolamwini, Harari, and Li would answer the same way: No.