By Abhijit Bhaduri
Designed by behavioural scientists, the new algorithms being deployed by companies succeed in goading people to act in certain ways, besides eroding their privacy. Is the right of consumers and employees to choose being taken away – with our consent?
This started many years ago when we signed up for ‘free’ email services. The big providers like Microsoft, Google, Yahoo and many others offered free email. The paid email services did not stand a chance. Maybe they made the best of a bad bargain and sold our data to the highest bidder before they pulled down the shutters and went home.
Who gave them the right to sell our data? We did, when we agreed to the End User License Agreement for that company.
I do I do … oh yes, yes, yes
Most people simply press “I agree” (without reading) whenever that document comes up and start using the software. Even if you did read it, would you comprehend the legal jargon? For example, Apple users have agreed not to use its software to make nuclear weapons. Seriously.
“You also agree that you will not use these products for any purposes prohibited by United States law, including, without limitation, the development, design, manufacture or production of nuclear, missile, or chemical or biological weapons.”
As software eats the world, tech companies need to pause and think whether we need a common language around the ethics of algorithms. In 2012, Target was able to use its algorithms to know that a teen was pregnant even before her father did. While that gave the retailer a great advantage in acquiring the teen as a customer, human dignity and sensitivity were overlooked.
Algorithms are scarily accurate
Facebook’s face recognition algorithms can pick a face out of a crowd with 97.25 per cent accuracy. Imagine what that algorithm could do if it were put to use for something sinister. Self-driving cars will have to make decisions about whose life is more precious — the owner’s or that of the person on the road. These decisions will need to be thought out ahead of time.
There is cause to worry
When the Microsoft chatbot Tay was launched, the designers of the software never imagined Tay would turn racist within 24 hours of interacting with users. Software companies do not allow scrutiny of their proprietary algorithms; there is intellectual property to be protected. That currently leaves no mechanism for a knowledgeable outsider to step in, when needed, and go over the code to see if everything is kosher.
The apps of today are much more than a few lines of code thrown together to do something cute. They are capable of shaping our choices in ways that are invisible to the person being prodded. Dopamine Labs, a small startup in Los Angeles, creates tools that can hook users to an app. Their site says so.
“Keeping users engaged isn’t luck: It’s science. Give users the right * of dopamine at the right moment and they’ll stay longer, do more, and monetize better.”
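The mechanism described here is, at heart, what behavioural science calls a variable-ratio reward schedule: rewards arrive unpredictably, which makes the behaviour hard to stop. A minimal sketch of the idea follows — the function name and probability are hypothetical illustrations, not Dopamine Labs' actual API:

```python
import random

def should_reward(base_prob=0.3, rng=random.random):
    """Variable-ratio schedule: fire a reward (a badge, an
    animation, a streak) unpredictably, on average once every
    1/base_prob user actions. The unpredictability, not the
    reward itself, is what behavioural science links to
    habit formation."""
    return rng() < base_prob

# Simulate 1,000 user actions and count how many trigger a reward.
random.seed(42)
rewards = sum(should_reward() for _ in range(1000))
print(rewards)  # roughly 300 of the 1,000 actions are rewarded
```

The same few lines could just as easily tune `base_prob` per user to maximise time-on-app, which is precisely the ethical worry raised below.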
Dalton Combs and Ramsay Brown are the skilled neuroscientists who designed these tools. Combs has a doctorate in neuroeconomics — a branch that helps in identifying and understanding the chemistry and biology involved in decision-making. Brown, the COO, graduated with a doctorate in neuroinformatics, which involved developing tools to help neuroscientists better understand the brain.
The tools have so far been used for positive purposes. Root, a teaching tool for university students, drove a 9 per cent improvement in student attendance after integrating the Dopamine Labs code. Micro-lender Tala saw a 14 per cent improvement in micro-loan repayment.
What would happen if a tobacco company or alcohol maker or drug dealer were to buy the app to encourage addiction?
The fear is not far-fetched. E-commerce companies are using algorithmic nudges to encourage shoppers to buy more than they need or can afford. The UK government is using behavioural scientists to get people to pay their taxes on time. What is to stop a totalitarian government from using algorithms for something sinister?
These are not choices that can be left to individuals, companies or governments. We need to create a global language for these, like we have for human rights. Is that going too far? Is this fear too far-fetched? What do you think? Do leave your views in the comments. Thank you.
Abhijit Bhaduri works as the Chief Learning Officer for the Wipro group. He lives in Bangalore, India. Prior to this he led HR teams at Microsoft, PepsiCo, Colgate and Tata Steel, and worked in India, South-East Asia and the US.
He is on the Advisory Board of the prestigious program for Chief Learning Officers run by the University of Pennsylvania.