Science fiction writer William Gibson said, “The future is already here, it’s just not evenly distributed.” When you look around you can see the truth of that statement. Most of the technologies that will influence us over the next few decades already exist, and in many ways it feels like we’re living in parts of that future. We can 3D-print replacement jaws for people, and 3D printing was invented over 30 years ago. At NDRC, where I work, we have companies working on embedded sensors for post-operative bleed detection, and on helping kids with focus and ADHD problems through neurofeedback gameplay. [1] In many ways technology is enriching our lives. In reality the title of this piece is less ‘Our Algorithmic Future’ than ‘Our Algorithmic Present’.
As a technophile, I find that very exciting. I have a deep and abiding love of science and the wonderful possibilities of technology. I grew up reading Isaac Asimov (his science and his fiction), Arthur C. Clarke and Carl Sagan, and watching Star Trek, Tomorrow’s World and other optimistic visions of technology and the future.
At the same time there is a darker side to technology. Paul Ehrlich said, “To err is human, to really foul things up requires a computer.” It’s not hard to find examples. In 2011 California released 450 high-risk, violent prisoners onto an unsuspecting public because of a mistake in its computer programming. The We-Connect app for an internet-connected vibrator captures the date and time of each use and the selected vibration settings, and transmits that data, along with the user’s personal email address, to its servers in Canada “unbeknownst to its customers”, a number of whom are now suing the company.[2]
And darkest of all is the firing of elementary school teacher Sarah Wysocki by the Washington DC public school system. The school system used “VAR”, a value-added statistical tool, to measure a teacher’s direct contribution to students’ test results. Despite being highly regarded in classroom observations, she was fired on the strength of a low score from the algorithm. There was no recourse or appeal, and no way to really understand the workings of VAR, as the model is proprietary and cannot be examined.[3]
There is this abstract notion of what the computer said or what the data tells us. Much as the complex gibberish underlying the risk models of economists and financial services firms in the run-up to the crash wasn’t questioned (because maths), the issue here isn’t the algorithms so much as people and their magical thinking.
I came across this quote from IPPN Director Sean Cottrell, from his address to 1,000 primary school principals at the Citywest Hotel in 2011.[4] He commented:
‘Every calf, cow and bull in the State is registered by the Department of Agriculture & Food in the interests of food traceability. Why isn’t the same tracking technology in place to capture the health, education and care needs of every child?’
Well intentioned as it might be, this shows a poor understanding of cows, a worse understanding of technology and a dreadful misunderstanding of children and their needs. I find this thinking deeply disturbing and profoundly creepy, so I decided to unpack it a little.
This is how we track cows
And this is how we start that process by tracking calves
And I wondered: is this how he’d like to track children? (H/T to @Rowan_Manahan for that last image)
Then I realised that we are already tracking children.
Only it’s not the Irish Primary Principals’ Network that’s doing it; it is private companies doing the tracking and tagging. It is Google and Facebook and Snapchat, with some interesting results and some profound ethical questions. We now know that Instagram photos can reveal predictive markers of depression, and that Facebook can influence mood and people’s purchasing habits.[5]
Our algorithmic present is composed of both data and algorithms. We have had exponential growth in processing capability over the last number of years, which has enabled some really amazing developments in technology. Neural networks first emerged in the 1950s, dimmed in the late 1960s, re-emerged in the 1980s and have taken off like wildfire in the last few years. The neural network explosion is down to the power, cheapness and availability of GPUs, together with improvements in the algorithms themselves. And neural networks are really, really good at some kinds of pattern analysis. We are getting to the point where they are helping radiologists spot overlooked small breast cancers. [6]
There is also a very big problem with algorithms: the problem of the black box. The proprietary nature of many algorithms and data sets means that only certain people can look at them. Worse, we are building systems in a way where we don’t necessarily understand their internal workings and rules very well at all.
Black boxes look like this: in many systems we see some of the input and the output, but most of what happens in between is not only hidden, it is not understood. In a classic machine learning model we feed in data, apply certain initial algorithms, and then use the result for prediction or classification. But we need to be careful of the consequences. As Cathy O’Neil cleverly put it, Donald Trump is an object lesson in bad machine learning: iterate on how the crowd reacts to what he says and over-optimise for that output, the classic problem of a machine learning system trained on a bad data set. We need to think about what the systems we’re building are optimising for. [7]
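To make the shape of that pipeline concrete, here is a minimal sketch in Python using scikit-learn and a purely synthetic data set (both are my choices for illustration, not anything from the cases above). We can see the data going in and the predictions coming out; the hundreds of fitted decision trees in the middle are, for practical purposes, unreadable.

```python
# A minimal sketch of the classic pipeline: data in, model fitted, predictions out.
# The data set is entirely synthetic; the point is that the fitted internals are opaque.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                 # input data we can see
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # outcomes we can see

model = GradientBoostingClassifier().fit(X, y)  # the "black box" in the middle
predictions = model.predict(rng.normal(size=(5, 10)))
print(predictions)  # outputs we can see; the fitted trees inside, we cannot easily read
```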
George Box said that “All models are wrong, but some are useful.” Korzybski put it more simply: “The map is not the territory.” It is important to remember that an algorithm is a model, and much as the human mind creates fallible, biased models, we can also construct fallible computer models. Cathy O’Neil put it bluntly: “A model is no more than a formal opinion embedded in code.” The challenge is that these models are more often than not created by young white males from upper-middle-class or upper-class backgrounds. It is not that human brains are perfect model makers, but we have spent a long time building social processes to cope with these biases, and the scientific method itself is one of the most powerful tools we’ve invented to overcome them.
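To see what “a formal opinion embedded in code” can look like, here is a hypothetical scoring function of my own invention (it is not the real VAR model, whose workings are exactly what we cannot see). Every weight in it is a judgement call made by whoever typed it.

```python
# A hypothetical "value-added" style score, invented purely to illustrate O'Neil's point.
# Every number below is a choice made by whoever wrote the code; the weights are opinions.
def teacher_score(test_score_gain: float, attendance_rate: float, peer_review: float) -> float:
    return (
        0.70 * test_score_gain    # someone decided test gains are worth 70% of the score
        + 0.20 * attendance_rate  # ...and that attendance is worth 20%
        + 0.10 * peer_review      # ...and that human judgement is worth only 10%
    )

print(teacher_score(test_score_gain=0.3, attendance_rate=0.95, peer_review=0.9))
```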
As we unleash these models on education (Sarah), policing (pre-crime in Chicago), health and hiring, we need to be aware of the challenges they pose. Suman Deb Roy has pointed out:
Algorithmic systems are not a settled science, and fitting it blindly to human bias can leave inequality unchallenged and unexposed. Machines cannot avoid using data. But we cannot allow them to discriminate against consumers and citizens. We have to find a path where software biases and unfair impact is comprehended not just in hindsight. This is a new kind of bug. And this time, punting it as ‘an undocumented feature’ could ruin everything. [8]
Bernard Marr illustrates this with an example:
Hiring algorithms. More and more companies are turning to computerized learning systems to filter and hire job applicants, especially for lower wage, service sector jobs. These algorithms may be putting jobs out of reach for some applicants, even though they are qualified and want to work. For example, some of these algorithms have found that, statistically, people with shorter commutes are more likely to stay in a job longer, so the application asks, “How long is your commute?” Applicants who have longer commutes, less reliable transportation (using public transportation instead of their own car, for example) or who haven’t been at their address for very long will be scored lower for the job. Statistically, these considerations may all be accurate, but are they fair? [9]
There is an old saying in tech, “GIGO: garbage in, garbage out”. The risk now is that this becomes BIBO: “bias in, bias out”.
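Here is a toy sketch of how that happens, using invented data loosely modelled on the commute example above: if the historical records already penalise long commutes, a perfectly ordinary model will learn to do the same to new applicants.

```python
# A toy "bias in, bias out" example. The data is invented: past retention partly
# reflected commute length, and the model faithfully reproduces that pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
commute_minutes = rng.uniform(5, 90, size=n)
skill = rng.normal(size=n)

# Historical "kept the job a year" labels that partly reflect commute, not just skill.
kept_job = (skill - 0.03 * commute_minutes + rng.normal(scale=0.5, size=n) > -1.0).astype(int)

X = np.column_stack([skill, commute_minutes])
model = LogisticRegression(max_iter=1000).fit(X, kept_job)

# Two equally skilled applicants; the only difference is a long, bus-based commute.
short_commute = model.predict_proba([[0.5, 10]])[0, 1]
long_commute = model.predict_proba([[0.5, 75]])[0, 1]
print(f"short commute: {short_commute:.2f}, long commute: {long_commute:.2f}")
```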
As we gather vast amounts of data, the potential for problems increases. There can be unusual downstream consequences, and also the opportunity to create perverse incentives. We are embedding sensors in cars and exploring the idea that safer drivers will be given better rates. The challenge is that personalised insurance breaks the concept of shared risk pools and can drive dysfunctional behaviour. Goodhart said, “When a measure becomes a target, it ceases to be a good measure.” We had a significant recent Irish example with crime statistics, where the CSO pointed out problems with both the under-recording of crime by the police and the downgrading of a number of reported crimes. [10]
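A small illustration of Goodhart’s law at work, with made-up telematics numbers: once “hard braking events” becomes the target, drivers can improve the measured score without becoming any safer.

```python
# A made-up telematics example of Goodhart's law: the proxy improves once it
# becomes the target, while the thing we actually care about does not change.
import numpy as np

rng = np.random.default_rng(7)
hazards = rng.poisson(3, size=1000)              # hazards each driver meets in a month

hard_brakes_as_measure = hazards                 # before: braking tracks real hazard exposure
hard_brakes_as_target = (hazards * 0.5).round()  # after: drivers coast through hazards instead
true_crash_risk = 0.02 * hazards                 # unchanged either way

print("proxy as a measure:", hard_brakes_as_measure.mean())
print("proxy as a target: ", hard_brakes_as_target.mean())
print("true risk (same):  ", true_crash_risk.mean())
```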
At one level I see our future as a choice between Iron Man, technology that augments, and the Iron Maiden, technology controlled by a few that inflicts damage on the many. Technology to augment or to constrict. Technological changes that threaten the self also offer ways to strengthen the self, if used wisely and well.
It is clear that technology does not self-police. We could use technology to cut off the use of phones in cars, so that they cannot be used while driving, but the companies that could do this currently choose not to.
In Europe we have our own bill of rights, a Charter of Fundamental Rights enshrined in the Lisbon Treaty, which guarantees that “Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.” This right has been used to challenge the export of data from the EU to the US in the Schrems decision of the European Court of Justice. [11]
My belief is that we need to extend these rights into the algorithmic era. We need to create a “Charter of Algorithmic Rights” for our algorithmic age. Not a Magna Carta, which really just empowered the lords against the king without doing much for the peasants, but algorithmic rights of the people, by the people and for the people.
Simply put, we need airbags for the algorithmic age. For decades cars have been safer for men than for women, because the standard crash-test dummy is built to a male size standard and so biases the development of safety features towards the average male. As I said, technology is not self-policing. [12]
We are going to have to create better tools. We need to be able to detect and correct bias, and to audit for and ensure fairness, rather than simply chasing efficiency. Otherwise we are tying things together in unforeseeable ways that can have profound consequences at the individual and societal level. Tools such as Values in Design and thought experiments help, but we need to go much further.
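As one example of what such auditing tools could look like, here is a sketch of the “four-fifths rule” used in US employment law as a rough screen for disparate impact. The selection numbers are invented and the threshold is only a rule of thumb, but it shows the kind of check a hiring system could routinely be run through.

```python
# A sketch of the "four-fifths rule" as a simple fairness audit. The numbers below
# are invented; in practice you would feed in the real selection decisions of the system.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def disparate_impact_ratio(rate_group_a: float, rate_group_b: float) -> float:
    """Ratio of the lower selection rate to the higher; below 0.8 is a red flag."""
    low, high = sorted([rate_group_a, rate_group_b])
    return low / high

rate_a = selection_rate(selected=90, applicants=200)   # e.g. applicants with short commutes
rate_b = selection_rate(selected=30, applicants=150)   # e.g. applicants relying on buses
ratio = disparate_impact_ratio(rate_a, rate_b)
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, impact ratio {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "within the four-fifths rule of thumb")
```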
Kate Crawford, writing in Nature, says:
“A social-systems analysis could similarly ask whether and when people affected by AI systems get to ask questions about how such systems work. Financial advisers have been historically limited in the ways they can deploy machine learning because clients expect them to unpack and explain all decisions. Yet so far, individuals who are already subjected to determinations resulting from AI have no analogous power.” [13]
While this is necessary, I don’t believe it is sufficient. We need a “Charter of Algorithmic Rights”. While looking to the opportunities technology can afford, we need to recognise its biases and limitations. What appears to be augmentation may not really be augmentation at all; it may restrict and rule rather than enable.
We need to ensure that our tools are creative and reflect the diversity of human experience.

We are better off managing them than being managed by them in our algorithmic future.
Footnotes.
[1] The companies mentioned are Enterasense and Cortechs.
[2] The story of the computer errors that allowed violent California prisoners to be released unsupervised can be found here, and the story on the app-based vibrator is here.
[3] One link to the Sarah Wysocki story is here; for more details read Cathy O’Neil’s excellent book “Weapons of Math Destruction” or take a look at Cathy’s blog.
[4] Original link was tweeted by Simon McGarr. The piece is here: http://www.ippn.ie/index.php/advocacy/press-releases/5000-easier-to-trace-cattle-than-children
[5] How an Algorithm Learned to Identify Depressed Individuals by Studying Their Instagram Photos: https://www.technologyreview.com/s/602208/how-an-algorithm-learned-to-identify-depressed-individuals-by-studying-their-instagram/ and https://arxiv.org/pdf/1608.03282.pdf. Everything we know about Facebook’s mood manipulation: http://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/
[6] Computer technology helps radiologists spot overlooked small breast cancers: http://www.cancernetwork.com/articles/computer-technology-helps-radiologists-spot-overlooked-small-breast-cancers. Neural nets may be so good because they map onto some fundamental principles of physics: http://arxiv.org/abs/1608.08225
[7] Trump as a bad machine learning algorithm: https://mathbabe.org/2016/08/11/donald-trump-is-like-a-biased-machine-learning-algorithm/
[8] Genesis of the Data-Driven Bug: https://www.eiuperspectives.economist.com/technology-innovation/genesis-data-driven-bug
[9] Bernard Marr, The 5 Scariest Ways Big Data is Used Today: http://data-informed.com/the-5-scariest-ways-big-data-is-used-today/
[10] What is the new Central Statistics Office report on Garda data and why does it matter?
http://www.irishtimes.com/news/crime-and-law/q-a-crime-rates-and-the-underreporting-of-offences-1.2268154
and CSO (2016) http://www.cso.ie/en/media/csoie/releasespublications/documents/crimejustice/2016/reviewofcrime.pdf
[11] DRI welcomes landmark data privacy judgement: https://www.digitalrights.ie/dri-welcomes-landmark-data-privacy-judgement/ and Schrems v. Data Protection Commissioner: https://epic.org/privacy/intl/schrems/
[12] Why Carmakers Always Insisted on Male Crash-Test Dummies
https://www.bloomberg.com/view/articles/2012-08-22/why-carmakers-always-insisted-on-male-crash-test-dummies
[13] There is a blind spot in AI research, Kate Crawford & Ryan Calo
http://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805