Facial recognition technology, like so many other capabilities of the digital age, is improving at a blistering pace. Companies are racing to build smarter, more accurate systems based on deep learning algorithms that can identify people quickly in the name of security.

China set out several years ago to develop a facial recognition system capable of identifying any one of the country’s roughly 1.3 billion residents within three seconds. The network is expected to require up to 13 terabytes of storage and to single a person out with about 90 percent accuracy.
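A rough back-of-envelope calculation, assuming the reported 13 terabytes hold only compact face templates rather than raw photographs, puts the storage budget at about 10 kilobytes per person:

```python
# Back-of-envelope estimate only; assumes the reported 13 TB stores one
# compact face template per resident rather than full images.
total_storage_bytes = 13e12   # 13 terabytes (reported figure)
population = 1.3e9            # ~1.3 billion residents
bytes_per_person = total_storage_bytes / population
print(f"~{bytes_per_person / 1e3:.0f} KB per face template")  # prints "~10 KB per face template"
```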

This massive database is still in development, but with hard drives holding up to 10 terabytes now small enough to fit in a suitcase and be carried aboard a commercial or private jet, there are concerns the information could fall into the wrong hands and be put to dangerous use.

The problem is that technology always evolves faster than the laws that govern it.

Two multinational tech companies, Microsoft and Amazon, are trying to right the ship before we all sink.

Among the problems with facial recognition software, as both companies admit and as is widely acknowledged within the industry, is that it is an imperfect science with glaring race-based failures. Systems developed in the United States, France and Germany are prone to misidentifying people of African descent or failing to recognize them at all. The same is true of technology developed in China, Japan and South Korea, which has trouble identifying Caucasian faces.

One MIT student said she needed to “borrow” a white female classmate in order to complete an artificial intelligence assignment because the software didn’t recognize her as a person; on another occasion she had to wear an expressionless white plastic mask just to be detected as human.

The need for public and private cooperation

As the technology improves and programs identify facial patterns with ever greater consistency and accuracy, the possibility widens that these systems could be used for nefarious purposes.

The United States is moving quickly to adopt facial recognition software for all passengers on international flights, starting with the nation’s top 20 airports by 2021. But some question whether the Department of Homeland Security is charging ahead in the name of fighting terrorism without properly vetting the system and without instituting regulatory standards.

If implemented, the facial recognition system would scan more than 100 million airline travellers coming into or out of the U.S., including U.S. citizens.

“Facial recognition will require the public and private sectors alike to step up – and to act.”

Brad Smith, President – Microsoft
Facial recognition technology: The need for public regulation and corporate responsibility

Microsoft President Brad Smith wrote a blog post calling for government regulation and oversight of sensitive technology, namely facial recognition, to protect human rights. “In a democratic republic, there is no substitute for decision making by your elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms,” he wrote.

He is not demanding that his company and others slam the brakes on developing facial recognition; instead, he wants governments to establish parameters for acceptable uses while also providing protection and legal recourse for people wrongly implicated because such software misidentified them.

[Image: public security camera]

For every good the technology can serve, like quickly finding a lost child through surveillance cameras, there are grave dangers as well: someone could be followed or placed on a watch list simply for attending a political meeting. Governments need to recognize this responsibility and help companies develop their technologies properly, carefully and judiciously without violating civil rights, he says.

Similarly, Michael Punke, vice president of global public policy for Amazon Web Services, wrote that his company has been falsely accused of creating software that could be used in a racially biased way to violate civil rights. The company claims that any examples of such results came from improper use of the tool, and that in the two years Amazon has offered Amazon Rekognition there has not been a “single report of misuse by law enforcement.” The company supports calls for a national legislative framework that balances individuals’ rights with government applications for public safety.

There should be open, honest and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced.

Michael Punke, VP of Global Public Policy – AWS
Some Thoughts on Facial Recognition Legislation

Punke lists a series of recommendations, including the adoption of a 99 percent confidence threshold when law enforcement agencies use facial recognition technology to identify people suspected of crimes. In such circumstances, human verification and confirmation should also be required as a safeguard against misidentification.
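To make that recommendation concrete, here is a minimal, hypothetical sketch of what a 99 percent threshold plus mandatory human review might look like using the Amazon Rekognition CompareFaces API; the file names and the human-review step are illustrative assumptions, not details from Punke’s post:

```python
import boto3

# Illustrative only: treat a face match as a candidate when similarity is at
# least 99 percent, and even then route it to a human analyst for confirmation
# rather than acting on the automated result alone.
CONFIDENCE_THRESHOLD = 99.0

rekognition = boto3.client("rekognition")

with open("probe.jpg", "rb") as probe, open("candidate.jpg", "rb") as candidate:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": candidate.read()},
        SimilarityThreshold=CONFIDENCE_THRESHOLD,  # drop weaker matches up front
    )

matches = response["FaceMatches"]
if not matches:
    print("No match at or above the 99 percent threshold; take no action.")
else:
    for match in matches:
        # Automation stops here: the match is queued for human verification
        # and is never used as the sole basis for identifying a suspect.
        print(f"Candidate match at {match['Similarity']:.1f}% similarity; route to human review.")
```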

“New technology should not be banned or condemned because of its potential misuse,” he concludes. “Instead, there should be open, honest and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced… We will continue to work with partners across industry, government, academia, and community groups on this topic because we believe strongly that facial recognition is an important, even critical, tool for business, government and law enforcement use.”
