10 Mar 2017

Algorithms should be made open ("transparent") to the public. I doubt, however, that we will live in a world where you can demand an explanation of what an algorithm did to you.

So, algorithms are a hot topic now. They change our world! You might remember how Facebook has an algorithm which curates users' news feeds, but it performed very poorly and kept trending fake news. Sad! Or consider the current discussions about self-driving vehicles being a danger to our safety - or even a danger to social peace, because millions of driving jobs are endangered.

Let me add the side note that in both of these cases, and many similar ones, the awareness might be new, but the fact that algorithms do crucial work in our news feeds and our car safety systems is actually not new at all. Take banks as a good example: they have been using data mining algorithms to decide who should get a mortgage for decades.

Lots of demands, but few incentives

However - the conversation is starting on what we should demand from entities (companies, governments) that employ algorithms with respect to the accountability and transparency of automated decision-making. The conversation is being led by many stakeholders:

  • Think tanks: For example, Pew Research just published a report in which expert opinions around this question were gathered: "Experts worry [algorithms] can (...) put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment."
  • NGOs: With Algorithm Watch we now have the first NGO that dedicates itself to "evaluate and shed light on algorithmic decision making processes that have a social relevance".
  • Governments: The EU General Data Protection Regulation (GDPR, which enters into force in 2018) has a section about citizens having "the right to question and fight decisions that affect them that have been made on a purely algorithmic basis".

There are many things being said, and I find this discussion more vague the longer I listen, but maybe the one specific demand all these stakeholders would get behind is this: Explain how the algorithm affects my life.

So it seems like a lot might be happening in this direction, but I believe very little will happen. Why?

  1. It is very costly to develop this feature (of explainable algorithms). I'll discuss why that is in more detail below. In fact, this feature is so costly that making it mandatory might actually drive smaller software development shops out of the market, as only the big players have enough manpower to pull it off.
  2. Customers will not demand it in everyday products, no matter what think tanks say. Open source software is not demanded by customers either - it is used because engineers love it.
  3. It will only become the norm in a specific type of algorithmic software if the product actually needs it. Consider medical diagnosis software - the doctor needs to explain the diagnosis to the patient. Or consider the mortgage decision example, where I believe lawmakers could step in and make banks tell you exactly why they will not give you a loan.
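For the mortgage case, the product-level "tell me why" is easy to picture when the decision logic is a hand-written rule list. Here is a minimal sketch of such reason codes (the function, field names, rules and thresholds are all invented for illustration - they are not any bank's actual criteria). The point is that reason codes come almost for free with explicit rules, while they have no obvious analogue for a learned model:

```python
def mortgage_decision(applicant):
    """Return (approved, reasons): every rejection is traceable to a rule.

    All thresholds below are hypothetical, chosen only to illustrate
    the idea of 'explainable by design'.
    """
    reasons = []
    if applicant["credit_score"] < 620:
        reasons.append("credit score below 620")
    if applicant["loan_amount"] > 4.5 * applicant["annual_income"]:
        reasons.append("loan amount exceeds 4.5x annual income")
    if applicant["months_employed"] < 24:
        reasons.append("less than 24 months of continuous employment")
    # Approved only if no rule fired; otherwise the reasons list *is*
    # the explanation a lawmaker could demand the bank to hand over.
    return len(reasons) == 0, reasons


approved, why = mortgage_decision({
    "credit_score": 580,
    "loan_amount": 300_000,
    "annual_income": 50_000,
    "months_employed": 36,
})
print(approved, why)
```

Two rules fire for this applicant (low score, loan too large relative to income), so the rejection arrives together with its full justification - exactly the property that, as I argue below, is hard to retrofit onto complex algorithms.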

Explanations: decisions versus design

Algorithms should be "explainable", they say. However, there is no consensus on what that actually means.

In light of complexity, I believe that "what is explainable" is not well-defined. While many people talk about explaining individual decisions, all we can really hope for in most cases is explaining the general design - which is much less helpful if our goal is to help people who have been wronged in specific situations. There can and should be transparency, but it might be less satisfying than many people hope for.

A new paper by Wachter et al. (2017) makes a good point here with which I agree. They note that many stakeholders to the EU General Data Protection Regulation (see above) have claimed that a "right to explanation of decisions" should be legally mandated, but is not (yet). Rather, what the law currently includes is a "right to be informed about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems". They also distinguish between explanations of "specific decisions" and of "system functionality", and they state that the former is probably not "technically feasible", citing a book that explains how Machine Learning algorithms are not easy to understand.

I agree with their doubts here. Let me briefly dive into the feasibility problems with explaining specific decisions, and also broaden the scope a bit. To me, it comes down to the complexity that is inherent in many IT systems being built these days:

  1. Complex data: Let's look at the Facebook news feed example: there is a lot of data going into this algorithm. The algorithm makes thousands of decisions for you while you surf, and to fully explain one of them at a later time, you'd need a complete snapshot of Facebook's database at the very second the decision was made. That is unrealistic.
  2. Complex algorithms: Many algorithms are hard for humans to understand. The prime examples here are machine learning algorithms. These algorithms are shown real-world data so they can build a model of the real world, and they use this model to make decisions in real-world situations. The model is often unintelligible to human onlookers. For instance, a neural network (the technique behind Deep Learning) reaches a decision in a way that the engineers who "created" it cannot explain to you: during training, error signals are propagated backwards through the network many times over (so-called "back propagation"), until the network's behaviour is encoded in thousands of numeric weights which carry no human-readable meaning.
  3. Complex systems: Finally, many algorithms might actually not live inside one computer. You might interact with a system of networked computers, which interact and all contribute their own part to what is happening to you. My favourite example here is a modern traffic control system that interacts with you while you travel through it, and at each intersection you encounter a different computer. I actually argued in a paper I published in 2010 that decentralised autonomic computing systems tend not to be "comprehensible".
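The "complex algorithms" point can be made concrete with a toy example. Below is a minimal neural network in pure Python (a pedagogical sketch, not any production system) that learns the XOR function via back propagation. Even at this tiny scale, the end result of training is just lists of floating-point weights - inspect them all you want, they contain no human-readable reason for any single decision:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: the classic function a single linear unit cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 6  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(ws[0] * x[0] + ws[1] * x[1] + b) for ws, b in zip(w1, b1)]
    out = sigmoid(sum(wi * hi for wi, hi in zip(w2, h)) + b2)
    return h, out

initial_error = sum((forward(x)[1] - t) ** 2 for x, t in data)

for _ in range(10000):
    for x, t in data:
        h, out = forward(x)
        # "Back propagation": push the output error backwards through
        # the network and nudge every weight a little.
        d_out = (out - t) * out * (1 - out)
        for i in range(H):
            d_h = d_out * w2[i] * h[i] * (1 - h[i])
            w2[i] -= lr * d_out * h[i]
            for j in range(2):
                w1[i][j] -= lr * d_h * x[j]
            b1[i] -= lr * d_h
        b2 -= lr * d_out

final_error = sum((forward(x)[1] - t) ** 2 for x, t in data)
predictions = [round(forward(x)[1]) for x, _ in data]
print(predictions)  # after training, this should reproduce XOR: 0, 1, 1, 0
print(w1[0])        # ...but the "explanation" is just opaque numbers
```

Scale this up from 18 weights to the millions found in a deep network trained on constantly changing data, and the gap between "the system works" and "we can explain why it decided this" becomes the feasibility problem described above.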

So I believe (and agree with Wachter et al.) that the demand to explain, for any given situation, exactly how the algorithm made a decision is hard to uphold. What can be explained is the design of the algorithm, the data that went into its creation and its decision-making, and so on. This is a general explanation, which might be useful to explain to the public why an algorithm treats problems in a certain way. Or it might be useful in a class action lawsuit. But it cannot give any particular person the satisfaction of understanding what happened to them. That is the new reality - it probably already exists, but it has to sink in.

The notion of complex, almost incomprehensible algorithms has been brought up before, by the way. I want to mention Dr. Phoebe Sengers, who noted that software agents tend to become incomprehensible in their behaviour as they grow more complex ("Schizophrenia and Narrative in Artificial Agents", 2002), and of course the great Isaac Asimov, who invented the profession of "robot psychologist".


# lastedited 11 Mar 2017