A NEW DARK AGE - JAMES BRIDLE

Computation

In 1884, the art critic and social thinker John Ruskin gave a series of lectures at the London Institution entitled ‘The Storm-Cloud of the Nineteenth Century.’ Over the evenings of February 4 and 11, he presented an overview of descriptions of the sky and clouds drawn from Classical and European art, as well as the accounts of mountain climbers in his beloved Alps, together with his own observations of the skies of southern England in the last decades of the nineteenth century.

In these lectures he advanced his opinion that the sky contained a new kind of cloud. This cloud, which he called a ‘storm-cloud’, or sometimes ‘plague-cloud’,

never was seen but by now living, or lately living eyes … There is no description of it, so far as I have read,
by any ancient observer. Neither Homer nor Virgil, neither Aristophanes nor Horace, acknowledges any such clouds among those compelled by Jove. Chaucer has no word for them, nor Dante; Milton none, nor Thomson. In modern times, Scott, Wordsworth and Byron are alike unconscious of them; and the most observant and descriptive of scientific men, De Saussure, is utterly silent concerning them.(1) 

Ruskin’s ‘constant and close observation’ of the skies had led him to the belief that there was a new wind abroad in England and the Continent, a ‘plague-wind’ that brought a new weather with it. Quoting from his own diary of July 1, 1871, he relates that

the sky is covered with grey cloud; – not rain-cloud, but a dry black veil, which no ray of sunshine can pierce; partly diffused in mist, feeble mist, enough to make distant objects unintelligible, yet without any substance, or wreathing, or colour of its own … And it is a new thing to me, and a very dreadful one. I am fifty years old, and more; and since I was five, have gleaned the best hours of my life in the sun of spring and summer mornings; and I never saw such as these, till now. And the scientific men are busy as ants, examining the sun, and the moon, and the seven stars, and they can tell me all about them, I believe, by this time; and how they move, and what they are made of. And I do not care, for my part, two copper spangles how they move, nor what they are made of. I can’t move them any other way than they go, nor make of them anything else, better than they are made. But I would care much and give much, if I could be told where this bitter wind comes from, and what it is made of.(2)

He goes on to elucidate many similar observations: from strong winds out of nowhere, to dark clouds covering the sun at midday, and pitch-black rains that putrefied his garden. And while he acknowledges, in remarks that have been seized on by environmentalists in the years since, the presence of numerous and multiplying industrial chimneys in the region of his observations, his primary concern is with the moral character of such a cloud, and the ways it seemed to emanate from battlefields and sites of societal unrest.
   ‘What is best to be done, do you ask me? The answer is plain. Whether you can affect the signs of the sky or not, you can the signs of the times.’(3) The metaphors we use to describe the world, like Ruskin’s plague-cloud, form and shape our understanding of it. Today, other clouds, often still emanating from sites of protest and contest, provide the ways we have to think the world.
Ruskin dwelled at length upon the differing quality of light when affected by the storm-cloud, for light too has a moral quality. In his lectures, he argued that the ‘fiat lux of creation’ – the moment when the God of Genesis says, ‘Let there be light’ – is also fiat anima, the creation of life. Light, he insisted, is ‘as much the ordering of Intelligence as the ordering of Vision’. That which we see shapes not just what we think, but how we think.
Just a few years previously, in 1880, Alexander Graham Bell first demonstrated a device called the photophone. A companion invention to the telephone, the photophone enabled the first ‘wireless’ transmission of the human voice. It worked by bouncing a beam of light off a reflective surface, which was vibrated by the voice of a speaker, and received by a primitive photovoltaic cell, which turned the light waves back into sound. Across the rooftops of Washington, DC, Bell was able to make himself understood by light alone at a distance of some 200 metres. 
   Arriving several years before the promulgation of effective electrical lighting, the photophone was completely dependent on clear skies to provide bright light to the reflector. This meant that atmospheric conditions could affect the sound produced, altering the output. Bell wrote excitedly to his father, ‘I have heard articulate speech by sunlight! I have heard a ray of the sun laugh and cough and sing! I have been able to hear a shadow and I have even perceived by ear the passage of a cloud across the sun’s disk.’(4)
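The principle is easy to sketch in code. The fragment below is a toy model rather than a reconstruction of Bell’s apparatus: it treats the sunbeam as a carrier whose intensity is modulated by the voice and dimmed by passing cloud. Every value in it is invented for illustration.

```python
# A minimal sketch of the photophone's principle: the voice modulates
# the intensity of a light beam, and the receiver recovers the voice
# from the measured intensity. All parameters here are illustrative.
import numpy as np

rate = 8000                                 # samples per second
t = np.arange(0, 1.0, 1 / rate)             # one second of 'speech'
voice = 0.5 * np.sin(2 * np.pi * 440 * t)   # a pure tone standing in for a voice

# Transmitter: the vibrating mirror varies the reflected sunlight's
# brightness around its resting level.
sunlight = 1.0                              # steady carrier (clear sky)
beam = sunlight * (1.0 + voice)             # intensity-modulated beam

# A passing cloud dims the carrier itself, which is why Bell could
# 'hear' the weather: the received signal scales with the sunlight.
cloud = np.where((t > 0.4) & (t < 0.6), 0.3, 1.0)
received = beam * cloud

# Receiver: the photovoltaic cell converts intensity to a voltage;
# subtracting the slowly varying average recovers the voice signal,
# complete with the cloud's dip in volume.
window = rate // 100
baseline = np.convolve(received, np.ones(window) / window, mode="same")
recovered = received - baseline
```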
   The initial response to Bell’s invention was not promising. A commentator in the New York Times wondered sarcastically if ‘a line of sunbeams’ might be hung on telegraph posts, and whether it might be necessary to insulate them. ‘Until one sees a man going through the streets with a coil of No. 12 sunbeams on his shoulder, and suspending them from pole to pole, there will be a general feeling that there is something about Professor Bell’s photophone which places a tremendous strain on human credulity,’ they wrote.(5)
   That line of sunbeams, of course, is precisely what we can see today arrayed around the globe. Bell’s invention was the first to deploy light as a carrier of complex information – as the commentator noticed, unwittingly, it required only the insulation of the sunbeam in order to carry it over unimaginable distances. Today, Bell’s sunbeams order the data that passes beneath the ocean waves in the form of light-transmitting fibre-optic cables, and they order in turn the collective intelligence of the world. They make possible the yoking together of vast infrastructures of computation that organise and govern all of us. Ruskin’s fiat lux as fiat anima is reified in the network.
   Thinking through machines predates the machines themselves. The existence of calculus proves that some problems may be tractable before it is possible to solve them practically. History, viewed as such a problem, might thus be transformed into a mathematical equation that, when solved, would produce the future. This was the belief of the early computational thinkers of the twentieth century, and its persistence, largely unquestioned and even unconscious, into our own time is the subject of this book. Personified today as a digital cloud, the story of computational thinking begins with the weather.
   In 1916, the mathematician Lewis Fry Richardson was at work on the Western Front; as a Quaker, he was a committed pacifist, and so had enrolled in the Friends’ Ambulance Unit, a Quaker section that also included the artist Roland Penrose and the philosopher and science fiction writer Olaf Stapledon. Over several months, between sorties to the front line and rest periods in damp cottages in France and Belgium, Richardson performed the first full calculation of atmospheric weather conditions by numerical process: the first computerised daily forecast, without a computer.
   Before the war, Richardson had been superintendent of the Eskdalemuir Observatory, a remote meteorological station in western Scotland. Among the papers he took with him when he went off to war were the complete records of a single day of observations across Europe, compiled on May 20, 1910, by hundreds of observers across the continent. Richardson believed that, through the application of a range of complex mathematical operations derived from years of weather data, it should be possible to numerically advance the observations in order to predict how conditions would evolve over successive hours. In order to do so, he drew up a stack of computing forms, with a series of columns for temperature, wind speed, pressure, and other information, the preparation of which alone took him several weeks. He divided the continent into a series of evenly spaced observation points and performed his calculations with pen and paper, his office ‘a heap of hay in a cold rest billet’.(6) 
   When finally completed, Richardson tested his forecast against the actual observed data and found that his numbers were wildly exaggerated. Nevertheless, it proved the utility of the method: break the world down into a series of grid squares, and apply a series of mathematical techniques to solve the atmospheric equations for each square. What was missing was the technology required to implement such thinking at the scale and speed of the weather itself.
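Richardson’s method survives essentially unchanged in modern numerical models. The following is a minimal sketch of the idea, not his actual equations: a toy pressure field on a coarse grid, advanced step by step using only the values of neighbouring cells. The grid size, wind, and diffusion constant are all invented for illustration.

```python
# A minimal sketch of grid-based numerical forecasting: divide the
# region into cells, then repeatedly advance each cell's state from
# its neighbours. The 'physics' is a toy advection-diffusion step,
# not Richardson's primitive equations.
import numpy as np

rows, cols = 15, 18                          # a coarse grid over the region
rng = np.random.default_rng(0)
pressure = rng.normal(1013.0, 5.0, (rows, cols))   # initial observations, hPa
wind_u = 1.0                                 # constant eastward wind (toy value)
dt, dx, diffusion = 0.1, 1.0, 0.2

def step(p):
    """Advance the field one time step with finite differences."""
    east = np.roll(p, -1, axis=1)
    west = np.roll(p, 1, axis=1)
    north = np.roll(p, 1, axis=0)
    south = np.roll(p, -1, axis=0)
    advect = -wind_u * (east - west) / (2 * dx)        # transport by the wind
    diffuse = diffusion * (east + west + north + south - 4 * p) / dx**2
    return p + dt * (advect + diffuse)

for hour in range(24):                       # a twenty-four-hour 'forecast'
    pressure = step(pressure)

print(pressure.min(), pressure.max())
```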
   In Weather Prediction by Numerical Process, published in 1922, Richardson reviewed and summarised his calculations, and laid out a little thought experiment for achieving them more efficiently with the technology of the day. In this experiment, the ‘computers’ were still human beings, and the abstractions of what we would come to understand as digital computation were laid out at the scale of architecture:

Imagine a large hall like a theatre, except that the circles and galleries go right round through the space usually occupied by the stage. The walls of this chamber are painted to form a map of the globe … A myriad computers are at work upon the weather of the part of the map where each sits, but each computer attends only to one equation or part of an equation. The work of each region is coordinated by an official of higher rank. Numerous little ‘night signs’ display the instantaneous values so that neighbouring computers can read them … From the floor of the pit a tall pillar rises to half the height of the hall. It carries a large pulpit on its top. In this sits the man in charge of the whole theatre; he is surrounded by several assistants and messengers. One of his duties is to maintain a uniform speed of progress in all parts of the globe. In this respect he is like the conductor of an orchestra in which the instruments are slide-rules and calculating machines. But instead of waving a baton he turns a beam of rosy light upon any region that is running ahead of the rest, and a beam of blue light upon those who are behindhand.(7)
In a preface to the report, Richardson wrote,

Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream.(8)
It was to remain a dream for another fifty years, and would eventually be solved by the application of military technologies that Richardson himself would disavow. After the war, he joined the Meteorological Office, intending to continue his research, but he resigned in 1920 when it was taken over by the Air Ministry. Research on numerical weather forecasting stagnated for many years, until spurred forward by the explosion of computational power that emanated from another conflict, the Second World War. The war unleashed vast amounts of funding for research, and a sense of urgency for its application, but it also created knotty problems: a vast, overwhelming flow of information pouring from a newly networked world, and a rapidly expanding system of knowledge production.
   In an essay entitled ‘As We May Think’, published in the Atlantic in 1945, the engineer and inventor Vannevar Bush wrote,

There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers – conclusions which he cannot find time to grasp, much less to remember, as they appear … Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose.(9)
Bush had been employed during the war as director of the US Office of Scientific Research and Development (OSRD), the primary vehicle for military research and development. He was one of the progenitors of the Manhattan Project, the top secret wartime research project that led to the development of the American atomic bomb.
   Bush’s proposed solution to both these problems – the overwhelming information available to enquiring minds, and the increasingly destructive ends of scientific research – was a device that he called the ‘memex’:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, ‘memex’ will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.(10)
In essence, and with the advantage of hindsight, Bush was proposing the electronic, networked computer. His great insight was to combine, in exactly the way a memex would enable anyone to do, multiple discoveries across many disciplines – advances in telephony, machine tooling, photography, data storage, and stenography – into a single machine. The incorporation of time itself into this matrix produces what we would recognise today as hypertext: the ability to link together collective documents in multiple ways and create new associations between domains of networked knowledge: ‘Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.’(11)
   Such an encyclopaedia, readily accessible to the enquiring mind, would not merely amplify scientific thinking, but civilise it:

The applications of science have built man a well-supplied house, and are teaching him to live healthily in it. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome.(12)
One of Bush’s colleagues at the Manhattan Project was another scientist, John von Neumann, who shared similar concerns about the overwhelming volumes of information being produced – and required – by the scientific endeavours of the day. He was also captivated by the idea of predicting, and even controlling, the weather. In 1945, he came across a mimeograph entitled ‘Outline of Weather Proposal’, written by a researcher at RCA Laboratories named Vladimir Zworykin. Von Neumann had spent the war consulting for the Manhattan Project, making frequent trips to the secret laboratory at Los Alamos in New Mexico and witnessing the first atomic bomb blast, code-named Trinity, in July 1945. He was the main proponent of the implosion method used in the Trinity test and the Fat Man bomb dropped on Nagasaki, and helped design the critical lenses that focused the explosion. 
   Zworykin, like Vannevar Bush, had recognised that the information-gathering and retrieval abilities of new computing equipment, together with modern systems of electronic communication, allowed for the simultaneous analysis of vast amounts of data. But rather than focusing on human knowledge production, he anticipated its effects on meteorology. By combining the reports of multiple, widely distributed weather stations, it might be possible to build an exact model of the climatic conditions at any particular moment. A perfectly accurate machine of this kind would not merely be able to display this information, but would be capable of predicting, based on prior patterns, what would occur next. Intervention was the next logical step:

The eventual goal to be attained is the international organization of means to study weather phenomena as global phenomena and to channel the world’s weather, as far as possible, in such a way as to minimize the damage from catastrophic disturbances, and otherwise to benefit the world to the greatest extent by improved climatic conditions where possible.(13)
In October 1945, von Neumann wrote to Zworykin, stating, ‘I agree with you completely.’ The proposal was totally in line with what von Neumann had learned from the extensive research programme of the Manhattan Project, which relied on complex simulations of physical processes to predict real-world outcomes. In what could be taken as the founding statement of computational thought, he wrote: ‘All stable processes we shall predict. All unstable processes we shall control.’(14)
   In January 1947, von Neumann and Zworykin shared a stage in New York at a joint session of the American Meteorological Society and the Institute of the Aeronautical Sciences. Von Neumann’s talk on ‘Future Uses of High Speed Computing in Meteorology’ was followed by Zworykin’s ‘Discussion of the Possibility of Weather Control’. The next day, the New York Times reported on the conference under the headline ‘Weather to Order’, commenting that ‘if Dr Zworykin is right the weather-makers of the future are the inventors of calculating machines’.(15)
   In 1947, the inventor of calculating machines par excellence was von Neumann himself, having founded the Electronic Computer Project at Princeton two years previously. The project was to build upon both Vannevar Bush’s analogue computer – the Bush Differential Analyser, developed at MIT in the 1930s – and von Neumann’s own contributions to the first electronic general-purpose computer, the Electronic Numerical Integrator and Computer, or ENIAC. ENIAC was formally dedicated at the University of Pennsylvania on February 15, 1946, but its origins were military: designed to calculate artillery firing tables for the United States Army’s Ballistic Research Laboratory, it spent the majority of its first years of operation predicting ever-increasing yields for the first generation of thermonuclear atomic bombs.

(IMG missing / Source: US Army. The ENIAC (Electronic Numerical Integrator and Computer) in Philadelphia, Pennsylvania. Glen Beck (background) and Betty Snyder (foreground) programme the ENIAC in building 328 at the Ballistic Research Laboratory.)

Like Bush, von Neumann later became deeply concerned with the possibilities of nuclear warfare – and of weather control. In an essay for Fortune magazine in 1955, entitled ‘Can We Survive Technology?’, he wrote, ‘Present awful possibilities of nuclear war may give way to others even more awful. After global climate control becomes possible, perhaps all our present involvements will seem simple. We should not deceive ourselves: once such possibilities become actual, they will be exploited.’(16) 
The ENIAC turned out to be Richardson’s fantasy of mathematical calculation made solid, at the insistence of von Neumann. In 1948, the ENIAC was moved from Philadelphia to the Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland. By this time, it covered three of the four walls of the research lab, constructed from some 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, and 6,000 switches. The equipment was arranged into forty-two panels, each about two feet across and three feet deep, and stacked ten feet high. It consumed 140 kilowatts of power, and pumped out so much heat that special ceiling fans had to be installed. To reprogram it, it was necessary to turn hundreds of ten-pole rotary switches by hand, the operators moving between the stacks of equipment, connecting cables and checking hundreds of thousands of hand-soldered joints. Among the operators was Klára Dán von Neumann, John von Neumann’s wife, who wrote most of the meteorological code and checked the work of the others.
   In 1950, a team of meteorologists assembled at Aberdeen in order to perform the first automated twenty-four-hour weather forecast, along exactly the same lines as Richardson had proposed. For this project, the boundaries of the world were the edges of the continental United States; a grid separated it into fifteen rows and eighteen columns. The calculation programmed into the machine consisted of sixteen successive operations, each of which had to be carefully planned and punched into cards, and which in turn output a new deck of cards that had to be reproduced, collated, and sorted. The meteorologists worked in eight-hour shifts, supported by programmers, and the entire run absorbed almost five weeks, 100,000 IBM punch cards, and a million mathematical operations. But when the experimental logs were examined, von Neumann, the director of the experiment, discovered that the actual computational time was almost exactly twenty-four hours. ‘One has reason to hope’, he wrote, that ‘Richardson’s dream of advancing computation faster than the weather may soon be realised.’(17)
   Harry Reed, a mathematician who worked on the ENIAC at Aberdeen, would later recall the personal effect of working with such large-scale computation. ‘The ENIAC itself, strangely, was a very personal computer. Now we think of a personal computer as one you carry around with you. The ENIAC was actually one that you kind of lived inside.’(18) But in fact, today, we all live inside a version of the ENIAC: a vast machinery of computation that encircles the entirety of the globe and extends into outer space on a network of satellites. It is this machine, imagined by Lewis Fry Richardson and actualised by John von Neumann, that governs in one way or another every aspect of life today. And it is one of the most striking conditions of this computational regime that it has rendered itself almost invisible to us.
   It is almost possible to pinpoint the exact moment when militarised computation, and the belief in prediction and control that it embodies and produces, slid out of view. The ENIAC was, to the initiated, a legible machine. Different mathematical operations engaged different electromechanical processes: the operators on the meteorology experiment described how they could identify when it entered a particular phase by a distinctive three-note jig played by the card shuffler.(19) Even the casual observer could watch as the blinking lights picking out different operations progressed around the walls of the room.

(IMG missing / Source: Columbia University. Publicity photo of the IBM SSEC, 1948.)

By contrast, the IBM Selective Sequence Electronic Calculator (SSEC), installed in New York in 1948, refused such easy reading. It was called a calculator because in 1948 computers were still people, and the president of IBM, Thomas J. Watson, wanted to reassure the public that his products were not designed to replace them.(20) IBM built the machine as a rival to the ENIAC – but both were descendants of the earlier Harvard Mark I machine, which contributed to the Manhattan Project. The SSEC was installed in full view of the public inside a former ladies’ shoe shop next to IBM’s offices on East Fifty-Seventh Street, behind thick plate glass. (The building is now the corporate headquarters of the LVMH luxury goods group.) Further concerned about appearances, Watson ordered his engineers to remove the ugly supporting columns that dominated the space; when they were unable to do so, they airbrushed the publicity photos so that the newspapers carried the look Watson wanted.(21)
   To the crowds pressed up against the glass, even with the columns in place the SSEC radiated a sleek, modern appearance. It took its aesthetic cues from the Harvard Mark I, which was designed by Norman Bel Geddes, the architect of the celebrated Futurama exhibit at the 1939 New York World’s Fair. It was housed in the first computer room to utilise a raised floor, now standard in data centres, to hide unsightly cabling from its audience, and it was controlled from a large desk by chief operator Elizabeth ‘Betsy’ Stewart, of IBM’s Department of Pure Science.

(IMG missing / Source: IBM Archive. Elizabeth ‘Betsy’ Stewart with the SSEC.)

In order to fulfil Watson’s proclamation, printed and signed on the wall of the computer room – that the machine ‘assist the scientist in institutions of learning, in government, and in industry, to explore the consequences of man’s thought to the outermost reaches of time, space, and physical conditions’ – the SSEC’s first run was dedicated to calculating the positions of the moon, stars, and planets for proposed NASA flights. The resulting data, however, were never actually used. Instead, after the first couple of weeks, the machine was largely taken up by top secret calculations for a programme called Hippo, devised by John von Neumann’s team at Los Alamos to simulate the first hydrogen bomb.(22) 
   Programming Hippo took almost a year, and when it was ready it was run continuously on the SSEC, twenty-four hours a day, seven days a week, for several months. The result of the calculations was at least three full simulations of a hydrogen bomb explosion: calculations carried out in full view of the public, in a shopfront in New York City, without anyone on the street being even slightly aware of what was going on. The first full-scale American thermonuclear test based on the Hippo calculations was carried out in 1952; today, all the major nuclear powers possess hydrogen bombs. Computational thinking – violent, destructive, and unimaginably costly, in terms of both money and human cognitive activity – slipped out of view. It became unquestioned and unquestionable, and as such it has endured.
   As we shall see, technology’s increasing inability to predict the future – whether that’s the fluctuating markets of digital stock exchanges, the outcomes and applications of scientific research, or the accelerating instability of the global climate – stems directly from these misapprehensions about the neutrality and comprehensibility of computation. 
   The dream of Richardson and von Neumann – that of ‘advancing computation faster than the weather’ – was realised in April of 1951 when Whirlwind I, the first digital computer capable of real-time output, went online at MIT. Project Whirlwind had started as an attempt to build a general-purpose flight simulator for the air force: as it progressed, the problems of real-time data gathering and processing had drawn in interested parties concerned with everything from early computer networking to meteorology.
   In order to better reproduce actual conditions that might be faced by pilots, one of Whirlwind I’s main functions was to simulate aerodynamic and atmospheric fluctuations, in what amounted to a weather prediction system. This system was not only real-time but, of necessity, networked: connected to and fed data by a range of sensors and offices, from radar systems to weather stations. The young MIT techs who worked on it went on to form the core of the Defense Advanced Research Projects Agency (DARPA) – the progenitor of the internet – and the Digital Equipment Corporation (DEC), the first company to manufacture an affordable business computer. All contemporary computation stems from this nexus: military attempts to predict and control the weather, and thus to control the future.            
   Whirlwind’s design was heavily influenced by ENIAC; in turn, it laid the groundwork for the Semi-Automatic Ground Environment (SAGE), the vast computer system that ran the North American Air Defense Command (NORAD) from the 1950s until the 1980s. Four-storey ‘direction centres’ were installed in twenty-seven command-and-control stations across the United States, and their twin terminals – one for operation, one for backup – included a light gun for designating targets (resembling the Nintendo ‘Zapper’) and ashtrays integrated into the console. SAGE is best memorialised in the vast, paranoid aesthetic of Cold War computing systems, from Dr. Strangelove in 1964 to WarGames, the 1983 blockbuster that told the story of a computer intelligence unable to distinguish between reality and simulation, and famous for its concluding line: ‘the only winning move is not to play.’ 
   In order to make such a complex system work, 7,000 IBM engineers were employed to write the largest single computer programme ever created, and 25,000 dedicated phone lines were laid to connect the various locations.(23) Despite this, SAGE is best known for its bloopers: leaving training tapes running so that follow-on shifts mistook simulation data for actual missile attacks, or designating flocks of migrating birds as incoming Soviet bomber fleets. Histories of computation projects typically write off such efforts as anachronistic failures, comparing them to modern bloat-ridden software projects and government IT initiatives that fall short of their much-vaunted goals and are superseded by subsequent, better-engineered systems before they’re even completed, feeding a cycle of obsolescence and permanent revision. But what if these stories are the real history of computation: a litany of failures to distinguish between simulation and reality; a chronic failure to identify the conceptual chasm at the heart of computational thinking, of our construction of the world?
    We have been conditioned to believe that computers render the world clearer and more efficient, that they reduce complexity and facilitate better solutions to the problems that beset us, and that they expand our agency to address an ever-widening domain of experience. But what if this is not true at all? A close reading of computer history reveals an ever-increasing opacity allied to a concentration of power, and the retreat of that power into ever more narrow domains of experience. By reifying the concerns of the present in unquestionable architectures, computation freezes the problems of the immediate moment into abstract, intractable dilemmas; obsessing over the inherent limitations of a small class of mathematical and material conundrums rather than the broader questions of a truly democratic and egalitarian society.
   By conflating approximation with simulation, the high priests of computational thinking replace the world with flawed models of itself; and in doing so, as the modellers, they assume control of the world. Once it became obvious that SAGE was worse than useless at preventing a nuclear war, it shapeshifted, following an in-flight meeting between the president of American Airlines and an IBM salesman, into the Semi-Automated Business Research Environment (SABRE) – a multinational corporation for managing airline reservations.(24) All the pieces were in place: the phone lines, the weather radar, the increasingly privatised processing power, and the ability to manage real-time data flows in an era of mass tourism and mass consumer spending. A machine designed to prevent commercial airlines from being accidentally shot down – a necessary component of any air defence system – pivoted to managing those same flights, buoyed by billions of dollars of defence spending. Today, SABRE connects more than 57,000 travel agents and millions of travellers with more than 400 airlines, 90,000 hotels, 30 car rental companies, 200 tour operators, and dozens of railways, ferries and cruise lines. A kernel of computational Cold War paranoia sits at the heart of billions of journeys made every year. 
   Aviation will recur in this book as a site where technology, scientific research, defence and security interests, and computation converge in a nexus of transparency/opacity and visibility/invisibility. One of the most extraordinary visualisations on the internet is that provided by real-time plane-tracking websites. Anyone can log on and see, at any time, thousands upon thousands of planes in the air, tracking from city to city, mobbing the Atlantic, coursing in great rivers of metal along international flight paths. It’s possible to click on any one of the thousands of little plane icons and see its track, its make and model, the operator and flight number, its origin and destination, and its altitude, speed, and time of flight. Every plane broadcasts an ADS-B signal, which is picked up by a network of amateur flight trackers: more thousands of individuals who choose to set up local radio receivers and share their data online. The view of these flight trackers, like that of Google Earth and other satellite image services, is deeply seductive, to the point of eliciting an almost vertiginous thrill – a sublime for the digital age. The dream of every Cold War planner is now available to the general public on freely accessible websites. But this God’s-eye view is illusory, as it also serves to block out and erase other private and state activities, from the private jets of oligarchs and politicians to covert surveillance flights and military manoeuvres.(25) For everything that is shown, something is hidden.
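On the receiving side, the mechanics of this God’s-eye view are modest. Below is a minimal sketch of the amateur tracker’s side of the bargain, assuming a dump1090-style ADS-B decoder running locally and serving its aircraft list as JSON; the endpoint and field names follow that tool’s convention and may differ between forks.

```python
# A minimal sketch of polling a local ADS-B receiver and listing the
# aircraft it can currently hear. Assumes a dump1090-style decoder on
# its customary local port; adjust URL and fields for your setup.
import json
import urllib.request

URL = "http://localhost:8080/data/aircraft.json"  # typical dump1090 endpoint

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

for ac in data.get("aircraft", []):
    if "lat" in ac and "lon" in ac:               # only positioned aircraft
        print(ac.get("flight", "??????").strip(), # callsign, if broadcast
              ac["hex"],                          # ICAO transponder address
              ac["lat"], ac["lon"],
              ac.get("alt_baro", "n/a"))          # barometric altitude, ft
```

Sites like Flightradar24 aggregate thousands of such feeds; each receiver sees only the aircraft within radio range, and the global picture is stitched together from the overlap.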

(IMG missing / Source: Flightradar24.com. Screenshot of Flightradar24.com, showing 1,500 of 12,151 tracked flights, October 2017. Note Google ‘Project Loon’ balloons over Puerto Rico, following Hurricane Maria.)

In 1983, Ronald Reagan ordered that the then-encrypted Global Positioning System (GPS) be made available to civilians, following the shooting down of a Korean airliner that strayed into Soviet airspace. Over time, GPS has come to anchor a huge number of contemporary applications and become another of the invisible, unquestioned signals that modulate everyday life – another of those things that, more or less, ‘just works’. GPS enables the blue dot in the centre of the map that folds the entire planet around the individual. Its data directs car and truck journeys, locates ships, prevents planes flying into one another, dispatches taxis, tracks logistics inventories and calls in drone strikes. Essentially a vast, space-based clock, the time signal from GPS satellites regulates power grids and stock markets. But our growing reliance on the system masks the fact that it can still be manipulated by those in control of its signals, including the United States government, which retains the ability to selectively deny positioning signals to any region it chooses.(26) In the summer of 2017, a series of reports from the Black Sea showed deliberate interference with GPS occurring across a wide area, with ships’ navigation systems showing them tens of kilometres off their actual position. Many were relocated onshore, finding themselves virtually marooned in a Russian airbase – the suspected source of the spoofing effort.(27) The Kremlin is surrounded by a similar field, as first discovered by players of Pokémon GO, who found their in-game characters teleported blocks away while trying to play the location-based game in the centre of Moscow.(28) (Particularly enterprising players later turned this realisation to their advantage, using electromagnetic shielding and signal generators to collect points without leaving the house.)(29) In other cases, workers whose labour is remotely monitored by GPS, such as long-distance lorry drivers, have simply jammed the signal to enable them to take breaks and unauthorised routes – throwing off other users along their paths. Each of these examples illustrates how crucial computation is to contemporary life, while also revealing its blind spots, structural dangers, and engineered weaknesses.
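The fix itself is pure timing, which is also why spoofed timing moves it. Here is a minimal sketch of the principle, reduced to two dimensions, with invented satellite positions: the receiver converts signal delays into distances and solves for the one point consistent with all of them. Real receivers also solve for their own clock error, a term omitted here for brevity.

```python
# A minimal 2-D sketch of positioning from timing: each satellite says
# where it is and when it spoke; the receiver turns delays into ranges
# and finds the point that matches them. All coordinates are invented.
import numpy as np

C = 299_792.458                          # speed of light, km/s
sats = np.array([[15000.0, 8000.0],      # illustrative satellite positions, km
                 [-12000.0, 10000.0],
                 [2000.0, -14000.0]])
truth = np.array([1500.0, 2200.0])       # actual receiver position, km

delays = np.linalg.norm(sats - truth, axis=1) / C    # signal travel times, s

def solve(t, iters=20):
    """Gauss-Newton: find the point whose distances match the timings."""
    pos = np.zeros(2)
    for _ in range(iters):
        diff = pos - sats                    # vectors satellite -> receiver
        dist = np.linalg.norm(diff, axis=1)
        residual = dist - C * t              # range errors at current guess
        jacobian = diff / dist[:, None]      # d(distance)/d(position)
        pos -= np.linalg.lstsq(jacobian, residual, rcond=None)[0]
    return pos

print(solve(delays))           # recovers the true position
print(solve(delays + 1e-4))    # a uniform 0.1 ms timing skew drags the fix away
```

The second call shows the spoofer’s leverage: shift the broadcast timing and every receiver in range is moved to a position of the attacker’s choosing.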
   To take another example from aviation, consider the experience of being in an airport. An airport is a canonical example of what geographers call ‘code/space’.(30) Code/spaces describe the interweaving of computation with the built environment and daily experience to a very specific extent: rather than merely overlaying and augmenting them, computation becomes a crucial component of them, such that the environment and the experience of it actually ceases to function in the absence of code.
   In the case of the airport, code both facilitates and coproduces the environment. Prior to visiting an airport, passengers engage with an electronic booking system – such as SABRE – that registers their data, identifies them, and makes them visible to other systems, such as check-in desks and passport control. If, when they find themselves at the airport, the system becomes unavailable, it is not a mere inconvenience. Modern security procedures have removed the possibility of paper identification or processing: software is the only accepted arbiter of the process. Nothing can be done; nobody can move. As a result, a software crash revokes the building’s status as an airport, transforming it into a huge shed filled with angry people. This is how largely invisible computation coproduces our environment – its critical necessity revealed only in moments of failure, like a kind of brain injury.
   Code/spaces increasingly describe more than just smart buildings. Thanks to the pervasive availability of network access and the self-replicating nature of corporate and centralising code, more and more daily activities become dependent on their accompanying software. Daily, even private, travel is reliant on satellite routing, traffic information, and increasingly ‘autonomous’ vehicles – which, of course, are not autonomous at all, requiring constant updates and input to proceed. Labour is increasingly coded, whether by end-to-end logistics systems or email servers, which in turn require constant attention and monitoring by workers who are dependent upon them. Our social lives are mediated through connectivity and algorithmic revision. As smartphones become powerful general-purpose computers and computation disappears into every device around us, from smart home appliances to vehicle navigation systems, the entire world becomes a code/space. Far from rendering the idea of a code/space obsolete, this ubiquity underscores our failure to understand the impact of computation on the very ways in which we think.
   When an e-book is purchased from an online service, it remains the property of the seller, its loan subject to revocation at any time – as happened when Amazon remotely deleted thousands of copies of 1984 and Animal Farm from customers’ Kindles in 2009.(31) Streaming music and video services filter the media available by legal jurisdiction and algorithmically determine ‘personal’ preferences. Academic journals determine access to knowledge by institutional affiliation and financial contribution as physical, open-access libraries close down. The ongoing functionality of Wikipedia relies on an army of software agents – bots – to enforce and maintain correct formatting, build connections between articles, and moderate conflicts and incidences of vandalism. At the last survey, bots accounted for seventeen of the top twenty most prolific editors and collectively make about 16 per cent of all edits to the encyclopaedia project: a concrete and measurable contribution to knowledge production by code itself.(32) Reading a book, listening to music, researching and learning: these and many other activities are increasingly governed by algorithmic logics and policed by opaque and hidden computational processes. Culture is itself a code/space.
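The bot share is directly observable. A minimal sketch using the public MediaWiki recent-changes API follows; it samples only the latest few hundred edits, so the figure it prints will wobble around the long-run share cited above.

```python
# A minimal sketch of measuring the bot share of Wikipedia's live edit
# stream via the public MediaWiki API. Edits are flagged as bot or not;
# we count flags over the most recent sample.
import json
import urllib.request

API = "https://en.wikipedia.org/w/api.php"
url = (API + "?action=query&list=recentchanges&rctype=edit"
       "&rcprop=flags&rclimit=500&format=json&formatversion=2")

req = urllib.request.Request(url, headers={"User-Agent": "bot-share-sketch/0.1"})
with urllib.request.urlopen(req) as resp:
    changes = json.load(resp)["query"]["recentchanges"]

bots = sum(1 for c in changes if c.get("bot"))
print(f"bots made {bots} of the last {len(changes)} edits "
      f"({bots / len(changes):.0%})")
```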
   The danger of this emphasis on the coproduction of physical and cultural space by computation is that it in turn occludes the vast inequalities of power that it both relies upon and reproduces. Computation does not merely augment, frame, and shape culture; by operating beneath our everyday, casual awareness of it, it actually becomes culture.
   That which computation sets out to map and model it eventually takes over. Google set out to index all human knowledge and became the source and arbiter of that knowledge: it became what people actually think. Facebook set out to map the connections between people – the social graph – and became the platform for those connections, irrevocably reshaping societal relationships. Like an air control system mistaking a flock of birds for a fleet of bombers, software is unable to distinguish between its model of the world and reality – and, once conditioned, neither are we.
   This conditioning occurs for two reasons: because the combination of opacity and complexity renders much of the computational process illegible; and because computation itself is perceived to be politically and emotionally neutral. Computation is opaque: it takes place inside the machine, behind the screen, in remote buildings – within, as it were, a cloud. Even when this opacity is penetrated, by direct apprehension of code and data, it remains beyond the comprehension of most. The aggregation of complex systems in contemporary networked applications means that no single person ever sees the whole picture. Faith in the machine is a prerequisite for its employment, and this backs up other cognitive biases that see automated responses as inherently more trustworthy than nonautomated ones.
   This phenomenon is known as automation bias, and it has been observed in every computational domain from spell-checking software to autopilots, and in every type of person. Automation bias ensures that we value automated information more highly than our own experiences, even when it conflicts with other observations – particularly when those observations are ambiguous. Automated information is clear and direct, and confounds the grey areas that muddle cognition. Another associated phenomenon, confirmation bias, reshapes our awareness of the world to bring it better into line with automated information, further affirming the validity of computational solutions, to the point where we may discard entirely observations inconsistent with the machine’s viewpoint.(33)
   Studies of pilots in high-tech aircraft cockpits have produced multiple examples of automation bias. The pilots of the Korean Air Lines flight whose destruction led to the emancipation of GPS were victims of the most common kind. Shortly after takeoff from Anchorage, Alaska, on August 31, 1983, the flight crew programmed their autopilot with the heading given to them by air traffic control and handed over control of the plane. The autopilot was preprogrammed with a series of waymarks that would take it through the jetways over the Pacific to Seoul, but due either to a mistake in the settings, or an imperfect understanding of the mechanisms of the system, the autopilot did not continue to follow its preassigned route; rather, it stayed fixed on its initial heading, which took it further and further north of its intended route. By the time it left Alaskan airspace, fifty minutes into the flight, it was twelve miles north of its expected position; as it flew on, its divergence increased to fifty, then a hundred miles from its intended course. Over several hours, investigators related, there were several cues that might have alerted the crew to what was occurring. They noticed, but disregarded, the slowly increasing travel time between beacons. They complained about the poor radio reception as they drifted further from the normal air routes. But none of these effects caused the pilots to question the system, or to double-check their position. They continued to trust in the autopilot even as they entered Soviet military airspace over the Kamchatka Peninsula. As fighter jets were scrambled to intercept them, they flew on. Three hours later, still completely unaware of the situation, they were fired upon by a Sukhoi Su-15 armed with two air-to-air missiles, which detonated close enough to wreck their hydraulic systems. The cockpit transcript of the last few minutes of flight shows multiple failed attempts to re-engage the autopilot, as an automated announcement warns of an emergency descent.(34)
   Such events have been repeated, and their implications confirmed, in multiple simulator experiments. Worse, such biases are not limited to errors of omission, but include those of commission. When the Korean Air Lines pilots blindly followed the directions of an autopilot, they were taking the road of least resistance. But it has been shown that even experienced pilots will take drastic actions in the face of automated warnings, including against the evidence of their own observations. Oversensitive fire warnings in early Airbus A330 aircraft became notorious for causing numerous flights to divert, often at some risk, even when pilots visually checked for signs of fire multiple times. In a study in the NASA Ames Advanced Concepts Flight Simulator, crews were given contradictory fire warnings during preparation for takeoff. The study found that 75 per cent of the crews following the guidance of an automated system shut down the wrong engine, whereas when following a traditional paper checklist only 25 per cent did likewise, despite both having access to additional information that should have influenced their decision. The tapes of the simulations showed that those following the automated system made their decisions faster and with less discussion, suggesting that the availability of an immediate suggested action prevented them looking deeper into the problem.(35) 
   Automation bias means that technology doesn’t even have to malfunction for it to be a threat to our lives – and GPS is again a familiar culprit. In their attempt to reach an island in Australia, a group of Japanese tourists drove their car down onto a beach and directly into the sea because their satellite navigation system assured them there was a viable road. They had to be rescued as the tide rose around them, some fifty feet from the shoreline.(36) Another group in Washington state drove their car into a lake when they were directed off the main road and down a boat ramp. When emergency services responded, they found the car floating in deep water, with only its roof rack visible.(37) For rangers in Death Valley National Park, such occurrences have become so common that they have a term for it: ‘Death by GPS’, which describes what happens when travellers, unfamiliar with the area, follow the instructions and not their senses.(38) In a region where many marked roads may be impassable to regular vehicles, and daytime temperatures can reach fifty degrees Celsius with no water available, getting lost will kill you. In these cases, the GPS signal wasn’t spoofed, and it didn’t drift. The computer was simply asked a question, and it answered – and humans followed that answer to their deaths.
   At the foundation of automation bias is a deeper bias, firmly rooted not in technology, but in the brain itself. Confronted with complex problems, particularly under time pressure – and who among us is not under time pressure, all the time? – people try to engage in the least amount of cognitive work they can get away with, preferring strategies that are both easy to follow and easy to justify.(39) Given the option of relinquishing decision making, the brain takes the road of least cognitive effort, the shortest cut, which is presented near-instantaneously by automated assistants. Computation, at every scale, is a cognitive hack, offloading both the decision process and the responsibility onto the machine. As life accelerates, the machine steps in to handle more and more cognitive tasks, reinforcing its authority – regardless of the consequences. We refashion our understanding of the world to better accommodate the constant alerts and cognitive shortcuts provided by automated systems. Computation replaces conscious thought. We think more and more like the machine, or we do not think at all.
   In the lineage of the mainframe, the personal computer, the smartphone and the global cloud network, we see how we have come to live inside computation. But computation is no mere architecture: it has become the very foundation of our thought. Computation has evolved into something so pervasive and so seductive that we have come to prefer to use it even when simpler mechanical, physical, or social processes will do instead. Why speak when you can text? Why use a key when you can use your phone? As computation and its products increasingly surround us, are assigned power and the ability to generate truth, and step in to take over more and more cognitive tasks, so reality itself takes on the appearance of a computer; and our modes of thought follow suit.
   Just as global telecommunications have collapsed time and space, computation conflates past and future. That which is gathered as data is modelled as the way things are, and then projected forward – with the implicit assumption that things will not radically change or diverge from previous experiences. In this way, computation does not merely govern our actions in the present, but constructs a future that best fits its parameters. That which is possible becomes that which is computable. That which is hard to quantify and difficult to model, that which has not been seen before or which does not map onto established patterns, that which is uncertain or ambiguous, is excluded from the field of possible futures. Computation projects a future that is like the past – which makes it, in turn, incapable of dealing with the reality of the present, which is never stable. 
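The mechanism can be stated in a few lines of code. A toy sketch with entirely synthetic data: fit a model to a stable past, extrapolate, and watch the projection keep confidently describing a world that ceased to exist the moment the underlying process changed.

```python
# A minimal sketch of 'the future as a projection of the past': a model
# fitted to historical data extrapolates its trend, and fails silently
# when the regime changes. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
past_t = np.arange(100.0)
past = 2.0 * past_t + rng.normal(0.0, 5.0, 100)   # a steady trend, plus noise

slope, intercept = np.polyfit(past_t, past, 1)    # the model of 'how things are'

future_t = np.arange(100.0, 130.0)
projection = slope * future_t + intercept         # the computable future

actual = 200.0 - 0.5 * (future_t - 100.0) ** 2    # the regime changes at t=100
print(f"projected at t=129: {projection[-1]:.0f}")
print(f"actual at t=129:    {actual[-1]:.0f}")    # the error grows without bound
```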
   Computational thinking underlies many of the most divisive issues of our times; indeed, division, being a computational operation, is its primary characteristic. Computational thinking insists on the easy answer, which requires the least amount of cognitive effort to arrive at. Moreover, it insists that there is an answer – one, inviolable answer that can be arrived at – at all. The ‘debate’ on climate change, where it is not a simple conspiracy of petrocapitalism, is characterised by this computational inability to deal with uncertainty. Uncertainty, mathematically and scientifically understood, is not the same as unknowing. Uncertainty, in scientific, climatological terms, is a measure of precisely what we do know. And as our computational systems expand, they show us ever more clearly how much we do not know.
   Computational thinking has triumphed because it has first seduced us with its power, then befuddled us with its complexity, and finally settled into our cortexes as self-evident. Its effects and outcomes, its very way of thinking, are now so much a part of everyday life that it appears as vast and futile to oppose as the weather itself. But admitting the myriad ways computational thinking is the product of oversimplification, bad data, and deliberate obfuscation also allows us to recognise the ways in which it fails, and reveals its own limitations. As we shall see, the chaos of the weather itself ultimately lies beyond its reach.
   In the margins of his revision copy of Numerical Prediction, Lewis Fry Richardson wrote,


   It took him forty years to formulate, but in the 1960s, Richardson finally found a model for this uncertainty; a paradox that neatly summarises the existential problem of computational thinking. While working on the ‘Statistics of Deadly Quarrels’, an early attempt at the scientific analysis of conflict, he set out to find a correlation between the probability of two nations going to war and the length of their shared border. But he discovered that many of these lengths appeared as wildly different estimates in various sources. The reason, as he came to understand, was that the length of the border depended upon the tools used to measure it: as these became more accurate, the length actually increased, as smaller and smaller variations in the line were taken into account.(41) Coastlines were even worse, leading to the realisation that it is in fact impossible to give a completely accurate account of the length of a nation’s borders. This ‘coastline paradox’ came to be known as the Richardson effect, and formed the basis for Benoît Mandelbrot’s work on fractals. It demonstrates, with radical clarity, the counterintuitive premise of the new dark age: the more obsessively we attempt to compute the world, the more unknowably complex it appears.
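The effect is easy to reproduce. Below is a minimal sketch with a synthetic jagged curve standing in for real survey data: walk a ruler of fixed length along the ‘coast’ and total up the steps. As the ruler shrinks, the measured length grows without converging.

```python
# A minimal sketch of the Richardson effect: the measured length of a
# wiggly curve depends on the ruler used to measure it. The 'coast' is
# synthetic, built from noise at several scales so that detail exists
# at every resolution.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 100.0, 4001)
y = sum(a * np.cumsum(rng.normal(size=x.size)) for a in (0.5, 0.1, 0.02))
coast = np.column_stack([x, y])

def measured_length(points, ruler):
    """Walk a ruler of fixed length along the curve, totalling the steps."""
    total, i = 0.0, 0
    while i < len(points) - 1:
        j = i + 1
        # Step forward until the straight-line gap reaches one ruler length.
        while j < len(points) and np.linalg.norm(points[j] - points[i]) < ruler:
            j += 1
        end = points[min(j, len(points) - 1)]
        total += min(ruler, np.linalg.norm(end - points[i]))
        i = j
    return total

for ruler in (50.0, 20.0, 5.0, 1.0):   # ever finer measuring sticks
    print(f"ruler {ruler:>5}: length {measured_length(coast, ruler):>8.1f}")
```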



