Ethics in Decision Sciences


  • THE MU SIGMA TIMES
  • August 6th, 2019

“Data is the new oil” has become a clichéd phrase by now, but few expected a whole new industry to grow out of it. Most people are unaware of what data organizations hold on them, and they rarely read the terms and conditions when they choose to interact with a new entity. With GDPR being enforced more strictly after the Cambridge Analytica scandal, people are becoming more aware of the “how”, “what” and “why” of data collection, and it is now an organization’s imperative to be more open about it.

If you look at the way humans have evolved over time, there has been a consistent need for some sort of an imagined reality – be it religious beliefs, national pride, or even organizations that we work for.

Financial, medical, and automotive industries all have regulatory bodies that govern the dos and don’ts and prescribe rules that organizations follow. Even within an organization, employees believe in the organization’s vision and mission.

You may be wondering what this has to do with the article. Let’s set some context here and take ‘Ethics in Decision Sciences’ as the subject.

Let’s consider these thought experiments:

  1. Person X has used Google since childhood, and Google has stored all possible information (data) about this person. Recently, Person X has been having suicidal thoughts: searching online for “How to kill yourself” or similar phrases, showing signs of depression and hate in text messages with friends, writing self-loathing digital notes, and buying things that could potentially be used to take a human life.

Now any simple algorithm will tell you that this person is about to commit suicide. If you were Google, what would you do? Would you call for help, or let it go, given that you never informed the person that you collect such private information?
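To make concrete how simple such an algorithm could be, here is a minimal sketch: a flag that fires when several independent signals co-occur. The signal names and the threshold are entirely hypothetical, invented for illustration; the article’s point is that the hard part is not computing the flag but deciding what to do when it fires.

```python
# Hypothetical sketch: flag a user as at-risk when several independent
# signals co-occur. Signal names and the threshold are illustrative only.
RISK_SIGNALS = {
    "self_harm_searches",   # e.g. searches like "how to kill yourself"
    "self_loathing_notes",
    "hateful_messages",
    "dangerous_purchases",
}

def is_at_risk(user_signals, threshold=3):
    """Return True when enough distinct risk signals are present."""
    observed = RISK_SIGNALS & set(user_signals)
    return len(observed) >= threshold

# One signal alone does not trip the flag; three distinct signals do.
is_at_risk(["self_harm_searches"])                                        # False
is_at_risk(["self_harm_searches", "self_loathing_notes",
            "dangerous_purchases"])                                       # True
```

The ethical question the scenario raises sits entirely outside this code: whether returning `True` should trigger a call for help at all.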

  2. Person Y has a gambling problem, and the casino has been storing information such as visits, amounts won and lost, and food and drink purchases, all signs indicating that this person is addicted to gambling.


The data is ethically stored, as it is collected by the casino under its terms and conditions. Now, the marketing function of any casino will have a targeting and personalization algorithm to identify highly loyal, high-value customers. Would it be ethical for the organization to send this customer more promotions and make more money, or should it call the gambling helpline and assist the customer?

  3. A retailer collects information about customers’ historic purchases at a store and assigns each customer a pregnancy probability score. Imagine a teenage girl in high school who shops regularly at this store. One day, she receives an email from the retailer with offers in the baby-needs and female-health sections, leaving her father perplexed.

This was a real incident: the retailer learned about her pregnancy before her father did. The ethical question here concerns data collection, but even more so parenting. While some may argue that the parents did not educate the girl well, others may say that the girl should have the independence to do whatever she wants in life.

While pondering the above, you can see that our rationale for each of these scenarios depends heavily on its possible outcome and will vary with the person’s perspective. Ethics are moral principles that govern a person’s behavior, and in the context of decision sciences, they need to be thought through at each stage of the data journey: from collection to processing to decision-making.

Data breaches happen regularly, often caused by intruders who break through firewalls and gain access to confidential data just to prove a point. Technological advances have been a boon: remembering our passwords (with prior permission to store them), booking flight tickets, ordering groceries, and so on. These are conscious decisions we make. But one day a news article reads, “Data breach at company X; user accounts compromised.” We panic and start to question our beliefs. This eventually becomes a vicious circle: to enjoy this so-called technical advancement, we consent to the storage of our personal information, and by doing so we put that information at risk because we are no longer in control of it. Instead of turning people against technology, the focus should be on strengthening the security standards and policies of organizations. Research is currently under way on making data storage better (stronger encryption, decentralized data storage and platforms). So the debate needs to be about the values instilled in the people and institutions working on this technology.

In the recent Facebook-Cambridge Analytica scandal, one might argue that what happened was completely unethical, while another might say that users are to blame when they do not read through pages that end with ‘I agree’ or ‘Allow Facebook to provide data to third-party apps or pages using the platform’. Although there is more to this story, and what happened was morally and legally wrong, the point of concern for us as users is something bigger.

Every organization wants to play the superior power that controls what you see, hear, and do. One might argue that something similar happened with religion: when you were brought up by your parents, you were taught to believe certain things, but no one could control your own consciousness; you formed your own perceptions and beliefs. In today’s technologically advanced age, why can’t the same hold true? Why can’t we be more aware of what we are seeing (or being tricked into seeing by a superior force) and choose to do only the things we truly believe in? Easier said than done?

You may be aware of the trolley problem: a trolley is rolling down a track towards five people. You can pull a lever to redirect it, but one person is stuck on the only alternate track. The scenario exposes the moral tension between actively doing harm and allowing it. Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?

The trolley problem is a bit far-fetched for reality, but imagine you are writing the algorithm for a self-driving car and need to code for a similar scenario: any action the car takes will put either its passenger or someone else in danger. For example, if there is a crash or an impediment ahead and the only options are to pull into a wall or let the car go off a cliff, how would you approach it? One way is to put yourself in the situation and think about what we as humans would do; ideally, we would leave it to that specific moment in time and react on the spot. The ethics encoded in such an algorithm are governed by the person or organization writing it. The organization that built the self-driving car, for example, might have created a priority order for the lives at risk and developed an algorithm that ensures the occupants of the car are protected first.
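The “priority order” idea above can be sketched in a few lines. This is a hypothetical illustration, not any manufacturer’s actual logic: the ranked list is where the coder’s ethics get baked in, and the rest is just a lookup.

```python
# Hypothetical sketch: the ethical choice lives entirely in this ranking.
# Earlier entries are more protected; the car avoids harming them.
PRIORITY = ["occupants", "pedestrians", "other_drivers", "property"]

def choose_action(options):
    """options maps each action to the party it puts at risk.

    Picks the action whose at-risk party ranks lowest in PRIORITY,
    i.e. the action that spares the most-protected parties.
    """
    return max(options, key=lambda action: PRIORITY.index(options[action]))

# Both swerving and going off the cliff endanger the occupants,
# so this rule picks the option that shifts the risk elsewhere.
choose_action({
    "swerve_into_wall": "occupants",
    "go_off_cliff": "occupants",
    "brake_hard": "other_drivers",
})  # "brake_hard"
```

Note how unremarkable the code is: every contested moral judgment has been compressed into the ordering of a four-element list, decided long before the car ever faces the situation.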

The more we dream about and experience the rise of artificial intelligence, the stronger the hypothesis ‘Are we in a computer simulation?’ grows. Consider the evolution of video games: over time they have let us create worlds of our own and do whatever we want with them. Remember the show ‘Big Brother’, which has now reached countries all over the world in some form or another; we believe in a superior power conducting some sort of surveillance on us, and some organizations today are exploring the use of AI to watch over their employees as well. Amazon has patented a wristband that tracks the hand movements of warehouse workers and uses vibrations to nudge them into being more efficient. Workday, a software firm, crunches around 60 factors to predict which employees will leave. Humanyze, a startup, sells smart ID badges that can track employees around the office and reveal how well they interact with colleagues. Very few laws govern how data is collected at work, and many employees blindly consent to surveillance when they sign their employment contracts. But the moment employees become more aware, they will feel that their managers are watching their every move in order to control them.

As in many situations we have seen before, the onset of AI in the workplace calls for trade-offs between privacy and intelligence. Striking a balance will require thought, a willingness from both employers and employees to adapt, and a strong dose of humanity.

Telling a person what data is collected, how it will be used, and why it benefits both parties will help an organization gain confidence, if it truly wants to make the relationship work. Otherwise, it is our own imperative to be more aware of what is possible as technology advances, so that we can make better decisions ourselves. Ethics should govern an intelligent discussion and not focus on answers or rules. It should be about having the tools and mental models to think carefully about real-world actions and their consequences, not about prescribing what must be done in every situation. Discussion leads to values, and values inform decision-making and action.

Highlights from the recent Google I/O conference were both progressive and scary. Most tech organizations today are competing to become our planet’s superpowers. Google, Amazon, Microsoft, Apple and Facebook are the major players in collecting data and harvesting it for multiple purposes. You cannot point to any one of these organizations and be sure that what they are doing is ethical. Our perception of what is real and what is unreal/imaginary would be tested, and we would need to develop a newer version of the ‘Turing test’ on ourselves.

The next world war may well be fought among these organizations on a digital battlefront, with the weapons being the data they have collected and the strength of the algorithms they run on it. That said, there is a growing need for a regulatory body to govern the roles and responsibilities of the organizations that play in this space.

I will leave you with this open-ended question to ponder: if you were in charge of setting up such a regulatory body, who do you think should be the primary owner of the data? Should it be the entity about whom the data is collected, or the entity that collected it?