Facial Recognition Systems, Bias and Surveillance: IBM Bows Out

Facial recognition technology has become more accurate and reliable in the past few years. It has great potential benefit in a number of venues and scenarios, mostly in the context of security: airports, shopping centres, other public venues, and law enforcement. In practice, however, significant legal and ethical concerns remain unanswered.

One significant issue is whether the technology is sufficiently accurate, and what the consequences are if it is not. As with all AI, it is important to ask whether these technologies harbour hidden biases.

Last week, IBM’s CEO announced that the tech firm will no longer offer, develop or research facial recognition software, stating that IBM firmly opposes the technology’s use in “mass surveillance, racial profiling, violations of basic human rights and freedom.”

Aperion’s Interview with Facial Recognition Expert, Dr. Abbas Bigdeli, Regarding Racial Bias

Aperion Law is proud to work with Aervision Technologies, an Australian tech company which has developed world-class biometrics and artificial intelligence solutions. We asked Dr. Abbas Bigdeli, CEO of Aervision, for his views and insights on the current state of the technology, specifically in light of IBM’s announcement that it will stop facial recognition research in the pursuit of racial justice:

Q. Do you have an angle on this? Why would IBM do this? Is it as clear as it sounds? Is there an ulterior motive?

To be honest, I think it’s more of a marketing and PR stunt by IBM than a genuine concern. I would probably think differently if IBM wasn’t lagging behind everyone else with FR.

Q. Do you have personal experience of the inherent bias?

I don't have any personal experience with the inherent bias, but I have reviewed the studies that demonstrate it, and I agree that, like any deep-learning-based classification engine, biased training data can skew the engine. In fact, given that the available training data is usually weighted towards white Caucasians, the engine will learn to distinguish faces of that ethnicity more reliably than others, making it more probable to misidentify the faces of non-Caucasians.
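The disparity Dr. Bigdeli describes is usually surfaced by reporting error rates per demographic group rather than as a single aggregate figure, which can hide group-level gaps. A minimal sketch, with entirely hypothetical group names and numbers, of how such an evaluation is tallied:

```python
# Toy illustration: compute the misidentification rate separately for each
# demographic group. The groups and outcome counts below are hypothetical;
# a real evaluation would use labelled test data for each group.

def misidentification_rate(results):
    """results: list of booleans, True where the engine misidentified a face."""
    return sum(results) / len(results)

# Hypothetical per-group evaluation outcomes (True = misidentified):
outcomes = {
    "group_a": [False] * 98 + [True] * 2,   # 2 errors in 100 trials
    "group_b": [False] * 90 + [True] * 10,  # 10 errors in 100 trials
}

rates = {group: misidentification_rate(r) for group, r in outcomes.items()}
# group_b's rate is five times group_a's despite one aggregate-looking dataset
```

An aggregate rate over both groups here would be 6%, masking the fact that one group is misidentified five times as often as the other.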

Q. Are there viable solutions?

I believe so. This is exactly what we do at Aervision. We fuse multiple FR engines that have been trained on much more diverse datasets.
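One common way to fuse multiple recognition engines is score-level fusion: each engine produces a similarity score for the same face comparison, and the scores are combined, optionally weighting engines by their measured reliability. The sketch below is a generic illustration of that idea, not Aervision's actual method; the engine names, scores and weights are hypothetical.

```python
# A minimal sketch of score-level fusion across multiple face recognition
# (FR) engines. Each engine is assumed to return a similarity score in
# [0, 1] for the same probe/gallery comparison; names are hypothetical.

def fuse_scores(scores, weights=None):
    """Combine per-engine similarity scores into a single fused score.

    scores:  dict mapping engine name -> similarity score in [0, 1]
    weights: optional dict mapping engine name -> weight (defaults to equal)
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Hypothetical scores from three engines trained on different datasets:
fused = fuse_scores({"engine_a": 0.91, "engine_b": 0.78, "engine_c": 0.85})
```

Because each engine is trained on a different dataset, a bias present in one engine tends to be diluted by the others, which is the intuition behind fusing engines trained on more diverse data.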

Q. How is society going to address the risks? Technologically… or with legal regulation? (Or both?)

I think there are far deeper discussions needed about this. What do we think about biometrics in general? What are the implications of passive vs. active biometrics? Who is to hold and store biometric templates, and who has access to them? These and many other questions are still unanswered.

Facial Recognition Software Privacy Concerns in Australia

So perhaps IBM saw a chance to be virtuous and ethical, which, of course, reads better than an admission that it was losing this particular technology race. In any event, Dr. Bigdeli reminds us of the bigger issues. According to research by Monash University, half of Australians believe their privacy is being invaded by the use of facial recognition in public places. With the Australian government already developing a national facial recognition database, two-thirds of respondents in the study expressed security concerns about its use.

The Australian Privacy Foundation says the proposal is highly invasive, because the system could be integrated with a number of other systems that collect facial data. The Australian Human Rights Commission says facial recognition technology remains unreliable, and wrongful identification by law enforcement can have disastrous consequences for the person involved.

We believe that improvements in the accuracy and reliability of the technology will soon remove most of the risk of false identification.

So, the larger question remains: Is it OK to permit mass (highly accurate) surveillance of the general public?