“This can help employees make more informed decisions faster, with Azure Government enabling them to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification,” said a blog post from January 2018. After many pointed out the conflict between the sale and Microsoft's critique of the Trump administration's decision, the company backtracked. It updated its statement to note: “Microsoft is not working with U.S. Immigration and Customs Enforcement or U.S. Customs and Border Protection on any projects related to separating children from their families at the border, and contrary to some speculation, we are not aware of Azure or Azure services being used for this purpose.”

The vague response left Microsoft with a lot of wiggle room. It didn't outright deny that its services were used for facial recognition, only that it wasn't aware of them being used to separate children.

In his book, as noted by The Guardian, Smith gives a concrete answer. He says the initial marketing didn't line up with what the tech was actually being used for. “A marketing statement made several months earlier now looked a good deal different,” he wrote. “As we dug to the bottom of the matter, we learned that the contract wasn't being used for facial recognition at all. Nor, thank goodness, was Microsoft working on any projects to separate children from their families at the border.”

In an interview with The Guardian about the book, Smith also posed the question of legality versus morality. He pushed for a world where big tech companies don't follow an “if it's legal, it's okay” mantra. The statement is particularly relevant today: Microsoft has been criticized by a group of employees over a recent partnership with oil giant Chevron, which may enable more profitable fossil fuel extraction.
A No Mass Surveillance Philosophy
Regardless, Smith says Microsoft won't be selling its facial recognition tech for use in mass surveillance. “If we thought that the US government or that Ice was going to deploy facial recognition for mass surveillance, we would object to that and I don't think we would do it,” he explains. “If an agency in any government wants to deploy facial recognition in a manner that we believe will result in unfair bias and discrimination, that's something that we won't do.”

Microsoft currently works with the US military on several contracts, including the controversial sale of its HoloLens 2 headsets for use in warfare. Though the company is in favor of regulation, Smith still believes an outright ban would do more harm than good. “I think whenever you want to ban a technology, you also have to ask, well, what are the potentials for it to do good as well? And so then the question is how do you strike the balance? I don't think that you strike that balance by banning all use. You strike that balance by banning the harmful use,” he says.

Senators have previously criticized Microsoft for working with Chinese researchers to advance facial recognition. They point to papers co-published by Microsoft Research Asia and the National University of Defense Technology that may be used to aid mass surveillance in the region. At the time, Microsoft noted that the research was “guided by our principles, fully complies with US and local laws, and the research is published to ensure transparency”.

As the debate rages, Microsoft maintains its stance: it will keep selling facial recognition tech, but not for malicious use. Of course, since facial recognition built for benign purposes can be repurposed elsewhere, enforcing that will be a difficult task.