Smile for the Camera!

Rohit Deshpande
Sunday, 21 June 2020

On 15th September 2014, Steve Tully, a financial advisor from Denver, was beaten up by three men in black jackets who accused him of assaulting their brother in blue, a police officer. He was then arrested for a string of bank robberies and charged with assaulting a police officer. He was detained for two months and released only after evidence was produced confirming his alibi. Facial recognition run on surveillance footage from the robberies had misidentified Steve as the bank robber.

But irresponsible use of facial recognition isn't a problem of the past. As the technology has advanced, so has its misuse. Most recently, reports have surfaced claiming that police in the U.S. are using facial recognition on social media posts to identify and arrest people with outstanding warrants who attended the George Floyd protests.

Facial recognition has become the surveillance technology of choice for governments across the globe. The U.K. recently started using live facial recognition on London streets, fining anyone who covered their face in the designated zones. Australia has initiated its own facial recognition program, called "The Capability", and the most ironic example comes from China, whose facial recognition system is called Skynet.

Currently, the Chinese government can track every registered citizen in its major cities for up to a week into the past; it can link people to their cars, relatives, and friends, making it the most comprehensive surveillance system in existence.

While these systems raise deep philosophical questions about their long-term usage and ethics, in the short term the technology's imperfections pose a more significant threat. As it stands, facial recognition is a work in progress, with high rates of misidentification and false matches, especially in real-world conditions. When one system was put to the test in the U.K., the B.B.C. found that only 8 out of 42 matches made by the system were verifiably correct. Research on many of these systems has found that the algorithms, while adept at recognizing white male faces, showed higher error rates on black people and women. In many cases, black women's genders were misidentified, and in some cases their faces were not detected at all.

Another study found that Asian and African American people were almost 100 times more likely to be misidentified by facial recognition than white people. This raises the possibility of more wrongful arrests like Steve Tully's.

Even if facial recognition were to become a mature technology with a 0% error rate, we as a society need to ask ourselves whether we want it tracking our every move, and how, and by whom, it should be used. Some glimmers of hope have appeared in the past few months. In the wake of police using facial recognition against protesters, Amazon and Microsoft halted their programs and suspended police use of their facial recognition systems.

However, these are just band-aids on a deep wound. The goodwill of private companies cannot regulate the proper use of facial recognition; concrete legislation is needed before this technology can be deployed in society. Before it makes private lives obsolete, laws must be passed to ensure its proper use. If not, get used to smiling, because a camera will be watching.