Automating the Professions
Unique among the recent outpouring of articles and books about automation, The Future of the Professions: How Technology Will Transform the Work of Human Experts (OUP, 2015) hopes that machines will dispense the professional knowledge of fields like medicine, law, and education. Richard and Daniel Susskind argue that automating professional knowledge would spread the benefits of practical expertise, empowering those who do not now have access.
When we think about automation, we usually imagine factory jobs being lost to robots. Instead, the Susskinds imagine jobs like teaching being unbundled and decomposed into tasks that machines can carry out, turning teachers into something like babysitters: low-skilled and low-paid. The implication, then, is that education should no longer be in the business of supplying students with the kinds of professional knowledge that machines will take over.
On the one hand, the Susskinds’ model is supposed to provide access to professional knowledge for those who can’t afford the current system. Think about how we have all, at some point, used the medical knowledge on WebMD to avoid a doctor’s visit. But I’m willing to bet that we have all also felt the frustration of being trapped in automated phone menus while hoping to speak to a real person.
Beyond the frustrating brittleness and inflexibility of machines lies a new kind of systematic inequality that comes from algorithmic biases and the commodification of our personal data. Try to imagine coping with an inflexible machine that has misidentified your face, which is a real problem for African Americans, according to The Atlantic: “Facial-recognition systems are more likely either to misidentify or fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes.”
While the threat of automation, much like the threat of offshoring, is often used to erode the bargaining power of less skilled labor, the Susskinds take aim squarely at the professions and argue that automation would be “the preferred direction of travel” (377). I question whether their reliance on the market to provide more efficient solutions in the form of automation would really lead to a more equal future. I argue that their approach will lead to the consolidation of the monopoly power of Big Data, mass standardization in the delivery of professional services, and the erosion of the skills and knowledge needed to deal with a changing world.
Free Market Fantasies in the Age of Internet Centrism
Why should the professions be automated? The Susskinds argue that the professions are a bad solution to the problem of “limited understanding” because “they act as gatekeepers who maintain, interpret, and apply the practical expertise from which we wish to benefit.” (69) They envision a shift: “as the industrialization and digitization of the professions; as the routinization and commoditization of professional work; as the disintermediation and demystification of professionals.” (427)
The language of disintermediation, industrialization, and routinization is congenial to corporate power. The Susskinds seem to think that capitalism works just fine, much as Erik Brynjolfsson and Andrew McAfee are “skeptical of efforts to come up with fundamental alternatives” (208), and it shows when they identify the three main drivers towards more automation: “market forces, technological advances, and human ingenuity.” (286) But unlike Brynjolfsson and McAfee, they don’t foresee a future in which humans and machines complement each other, writing that “it is not at all clear why professionals will be able to secure their place indefinitely in these joint ventures.” (410)
Instead of jobs, they argue, we will have tasks, each of which should be carried out in the “most efficient way.” (349) Just as “market forces and technological advances have eliminated” previous generations of craftspeople, we should expect the same of the professions. (290) Indeed, we should hope for it, since it is “widely recognized that there is insufficient funding available to run high-quality schools and universities if teachers and professors operate in the traditional way” (300). They offer no source, and their “widely recognized” conclusion suggests that budgets, more than equity, may be driving their argument.
They imagine 12 future roles for people, which fits with the neoliberal skills agenda that’s emerging as the prime challenger to the conservative standardized testing movement. As one example of a role: “In the future there will be a need for wise and empathetic, discipline-independent individuals —empathizers—who can provide the reassurance to recipients of their work that is often as important as the correct answer. Empathizing of itself will be a decomposed part of some professional services.” (376)
However, I doubt that we want an ‘empathizer’ to comfort us at school or in the hospital. Rather, we want our teacher or doctor to empathize with us, to understand our frustrations and hopes, and to use that empathy to shape their interactions with us. I can’t see how I could divorce my empathy for students from my professional knowledge as a teacher, as when I reduce how much a struggling writer needs to put on the page so that they can begin to enjoy writing and build their confidence.
The Susskinds are well aware that many will object to their proposal, but they seem more concerned with systematically delineating and refuting objections than with truly taking on board the concerns that others have articulated about automation. For example, the Susskinds dispatch Michael Sandel’s worries about how “market norms are increasingly replacing non-market norms” in just four paragraphs. (342) They argue that the “consequences of liberalization will be greater, not less, access to affordable expertise”, and that in the United Kingdom’s National Health Service, “marketization has not undermined access” because private individuals do not need to pay for service. Yet one study estimates that “at least £5 billion of the NHS’s recurrent i.e. continuing, year-on-year running costs relate to the market.”
There is no reason to think that market forces and technological advances won’t simply carry us farther down the path of polarization and inequality. Algorithms “tend to punish the poor,” as Cathy O’Neil argues. “The privileged, we’ll see time and again, are processed more by people, the masses by machines.” (15) TurboTax for the masses, Mossack Fonseca for the wealthy. As The Guardian reports, “According to the US economist Gabriel Zucman, 8% of the world’s wealth – a vast $7.6tn (£5.3tn) – was stashed in tax havens.”
If the Susskinds are concerned with equitable access to professional expertise, why not pursue other parallel paths to making professional knowledge widely available? For even if it is possible to automate expertise, there is no certainty that access will be cheaper, more democratic, and of a high quality. Rather than wait for automated doctors, why not allow registered nurses to prescribe medication?
While I don’t doubt the Susskinds’ stated motivation to make expertise more accessible, Internet Centrism shines through their argument more vibrantly than any sustained analysis of social inequality. Evgeny Morozov sharply criticizes such essentializing and epochal terms, in which “rupture talk and revolutionary rhetoric tend to displace all other forms of analysis.” (48) Central to the Susskinds’ argument is the idea that we have left behind a “print-based industrial society” in which “it did seem to be the case that the most effective way of sharing practical expertise was through face-to-face interaction.” Now that we live in a “technology-based Internet society” we should let go of the “veneration for tradition” and embrace “more effective ways to produce and distribute practical expertise that make less use of personal interaction.” (379)
It’s odd to contrast print and technology; surely print is a kind of technology, not its categorical opposite. And despite the rhetoric of the New Economy, material industry has not been left behind for the immaterial Internet. Supply chains, fossil fuel extraction, and waste dumps create what Naomi Klein calls sacrifice zones, a kind of inequality we need to account for just as much as access to professional knowledge.
The Consolidation of Big Data
The kind of smart machines that the Susskinds envision presupposes big data, and, more worryingly, they argue that the boundaries between different services should collapse, since there are “fundamental benefits for clients of having one provider looking after many or all of their professional interests.” (178) It would certainly give those providers more power over our lives. Try imagining the same provider having access to your financial, medical, educational, and legal information. Mark Zuckerberg is probably thinking such thoughts as we speak.
Even though the Susskinds say that any consolidation of services should be “consistent with settled principles of privacy and data protection”, it isn’t clear that those principles are strong enough to keep corporate powers in check. (175) For example, Facebook has already filed a patent to use the data they collect from our social networks to influence our credit rating: “When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual […]. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.”
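The decision rule described in the patent filing is simple enough to sketch in a few lines. This is only an illustration of the logic as quoted above; all function and variable names here are invented, not Facebook’s.

```python
# A minimal sketch of the loan-screening rule described in the patent filing.
# All names are hypothetical; the patent describes the logic, not an API.

def average_network_credit_score(connection_scores: list[int]) -> float:
    """Average the credit scores of the applicant's social-network connections."""
    return sum(connection_scores) / len(connection_scores)

def screen_loan_application(connection_scores: list[int], minimum_score: int) -> bool:
    """Return True if the application proceeds to underwriting, False if rejected."""
    return average_network_credit_score(connection_scores) >= minimum_score

# An applicant whose friends average below the threshold is rejected outright,
# before any individual assessment of the applicant takes place.
print(screen_loan_application([640, 580, 610], minimum_score=650))  # False
print(screen_loan_application([700, 720, 680], minimum_score=650))  # True
```

Note that in this scheme nothing about the applicant themselves is consulted before rejection: your friends’ scores are the gate.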
In contrast to the Susskinds’ irrationally optimistic argument, Frank Pasquale offers the kind of sober tech criticism that we need if we hope for a more democratic and fair future. In The Black Box Society: The Secret Algorithms that Control Money and Information, Pasquale argues that “Despite the promises of freedom and self-determination held out by the lords of the information age, black box methods are just as likely to entrench a digital aristocracy as to empower experts.” (218)
Pasquale writes that opacity already “prevails in many critical transactions in order to give privileged insiders an advantage over their clients, regulators, and risk managers.” (3) Google not only protects its algorithms that determine search results; its search engine’s power also comes from the vast quantities of information it has accumulated from and about us. Thus the actual trend in automated knowledge has not been toward more open and transparent systems, but toward monopoly power built on opacity and our data. “What Thomas Piketty said of unlimited capital accumulation applies as well to untrammeled tech giants: ‘the past devours the future.'” (162)
The Susskinds make arguments about specific professions in Chapter 2, titled ‘From the Vanguard’. Since education is my area of expertise, I will focus on their suggestions for the future of teaching. They argue that we should apply the Big Data approach to education:
“In Learning with Big Data: The Future of Education, Viktor Mayer-Schönberger and Kenneth Cukier describe how the handful of data points traditionally used in education—test scores, report cards, attendance records, and so on—are likely to be dwarfed by far larger, and far more diverse, data sets. A rich range of data is captured, from where students click on the screen to how long they take to answer a question. And the data can be collected and stored in respect of hundreds of thousands of students.”
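To make concrete what “far larger, and far more diverse” data sets look like in practice, here is a sketch of a single clickstream record of the sort Mayer-Schönberger and Cukier describe. The field names are invented for illustration; real systems will differ.

```python
# A hypothetical illustration of the fine-grained event data such systems
# collect; all field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class LearningEvent:
    student_id: str
    item_id: str            # which question or screen was shown
    click_x: int            # where on the screen the student clicked
    click_y: int
    seconds_to_answer: float
    correct: bool

# Traditional records amount to a handful of data points per student per term;
# clickstream logging produces thousands of rows like this per student per day.
event = LearningEvent("s-102", "q-17", click_x=412, click_y=228,
                      seconds_to_answer=8.4, correct=False)
print(event.seconds_to_answer)  # 8.4
```

Multiply one such row by every click, every student, and every day, and the scale of the resulting behavioral profile becomes clear.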
But who would those “far larger, and far more diverse” data sets really benefit? Should one provider have access to a student’s disciplinary records, data about how quickly they process information, attendance, and library records?
How would we guarantee that education providers would work like a good librarian, helping students expand their intellectual horizons, rather than like Amazon or Netflix, locking them into what we already ‘like’ and profiting from it?
Overall, the Susskinds’ section on education exemplifies a larger pattern where they list various technological changes that have happened, but offer no critical assessment of whether or not they help bring about a more equitable world. In their catalog of changes that excite them, they briefly mention personalized learning systems, Edutopia, Moodle, Khan Academy, Edmodo, MOOCs, and learning management systems, and compare online intelligent tutoring systems to the tutorial system at Oxford.
Sure, Khan Academy videos and adaptive software can offer a ‘personalized’ learning experience, but they fall far short of the experience of discussing a book with a teacher or tutor. In fact, those ‘personalized’ experiences are a kind of mass standardization, more like McDonald’s than like a chef at a restaurant who customizes meals based on what customers want to eat. And while adaptive software might quickly tell students whether they have the right or wrong answer, it won’t be able to have a conversation about how a novel inspires their dreams for the future.
In the unbundled view of education, the constructivist dream of the teacher as facilitator would come true. Perhaps these facilitators would simply shuffle kids from Khan Academy, to the lunch room, to an adaptive test. Throughout the book, the Susskinds lean heavily on the cases they present in this chapter, arguing that it is “simply not the experience of those who are working at the vanguard of the professions” that, as David Autor argues, “many of the tasks currently bundled into these jobs cannot readily be unbundled … without a substantial drop in quality.”
David Autor is right. No one has shown that replacing a teacher (or any other professional) with a bundle of software, systems, and videos comes anywhere close to matching the quality of education that a discerning teacher can provide. We can also learn lots from people like Audrey Watters who have charted the long history of attempts to automate education.
Deskilling the Planet
As the Susskinds tell the story of IBM’s Watson, it is one full of optimism and hope, yet short on factual details. Perhaps Watson is the future of flexible machine learning, a “landmark development in artificial intelligence.” (243) The Susskinds take Watson as evidence that “the technologies already exist to support the development of powerful systems in other professions. The day will come, for most professional problems, when users will be able to describe their difficulties in natural language to a computer system on the Internet, and receive a reasoned response, useful advice, and polished supporting documents, all to the standard of an expert professional practitioner.” (245)
First, note the Susskinds’ discursive bias, which construes knowledge as something that primarily takes linguistic form: description, reason, advice, documents. As Barry Allen argues, that discursive bias infects nearly all Western philosophy. However, the knowledge involved in performing a complicated surgery or managing a classroom cannot be reduced to descriptions and instructions. Even at the point of diagnosis, we ought not to expect that patients simply describe their difficulties. A good doctor has an educated perception; she can notice something that’s off, detect an odd bump, or suspect that beneath the reported difficulties lies an issue that the patient doesn’t know how to talk about.
Much the same is true of the pedagogical knowledge that teachers possess: you can’t fully articulate how you keep a classroom of 30 kids going on their individual projects without the whole class descending into chaos. And no set of explicit instructions automatically translates into skilled practice. When should you intervene when a student is struggling? When should you let some misbehavior go without comment because you know the student is having a bad day?
It’s also unlikely that we will be able to dispense with the human expertise that makes a judgment about whether Watson has provided a good diagnosis, especially when we take into account Watson’s failures, which the Susskinds studiously avoid talking about. David Autor explains that machines, even impressive ones like Watson, exhibit a kind of brittleness in contrast to the flexibility of human cognition. (57:00) In Final Jeopardy, Watson made a mistake that no human Jeopardy champion would make when it responded to this final answer in the category of ‘U.S. Cities’: “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.” Watson guessed “What is Toronto?”. I don’t see how the Susskinds can hold up Watson as the future of automated intelligence without discussing the continuing role of humans in spotting such obviously wrong answers.
Much like their brief treatment of other important topics, their section on Watson runs to fewer than 850 words and three sources. They fail to describe the contexts in which Watson operates most effectively, note its limitations, or provide the necessary evidence for their assertions.
But suppose that Watson were to replace doctors; what then? Would we somehow miss the craft of doctors? The Susskinds devote a section of their book to this objection. While they begin by considering teachers and surgeons, they illustrate their idea with a rather trivial case: the rise of Nespresso. Apparently Nespresso and Lavazza machines are used in “fifteen Michelin-starred restaurants in the United Kingdom, by over 100 in France, and in Italy, arguably the home of coffee, by more than twenty.” Nespresso even wins “blind tests.” (348)
Far from an in-depth study of technological change, their only reference is an article by Julian Baggini, and his one blind test featured four people. Not the most convincing example, given that bespoke coffee does thrive in places and many people do feel that Nespresso’s product is not as good. Here the Susskinds really need a solid case to convince us, and they don’t produce one.
Instead of the rather trivial example of Nespresso, we ought to consider knowledge in what philosopher Barry Allen calls the “ultimate context” of human flourishing and survival as the global population grows, becomes increasingly urban, and wreaks increasing ecological destruction.
Allen discusses how too many of our artifacts demand less and less of users, which may seem to be a boon, “But being indifferent to differences among users, such artifacts blunt any effort to cultivate knowledge through their use.” (269) That is, when it comes to making Nespresso, you will be as good as you will ever be by the second or third cup. But if our whole culture shifts away from cultivating knowledge across fields like medicine and agriculture, we will lose the capacity to cope with the truly unexpected.
A Nespresso machine operates only within a narrow range of inputs (the standardized capsules) and outputs (a cup of black coffee). This is one way to achieve automation: by controlling the environment. As David Autor puts it, we simplify the world so inflexible machinery can do useful things in a carefully curated environment. (49:00) Much automation aims to eliminate variation, but if we are thinking about knowledge in the “ultimate context,” we cannot assume that our world won’t change and that variation won’t be important. We need to acknowledge that our global problems are going to radically alter our daily lives and require the continuing cultivation of knowledge to cope with them.
Suppose that through standardization, we automate large parts of medicine, engineering, and teaching. What happens when we are hit with a novel strain of flu? Or need to design a new kind of building? Or find that students want to learn something we haven’t anticipated?
Will we simply turn to the machines and the human empathizers?
Allen writes that “The most ‘productive’ tools and systems ask increasingly less of their operators and increasingly more of society, which gets stuck with the rising cost of management error, insurance, and diluted competence… It is good, we think, to be able to pick up a tool and within minutes be doing something ‘productive’ with it. But good for whom? And productive of what?” (270)
So instead of Nespresso, let’s imagine a consequential case of deskilling in detail. Industrial agriculture would seem to pass the Susskinds’ test of producing ‘better outcomes’, so what difference could the process of production make? Suppose ecological devastation reached the point where we could no longer use massive amounts of fertilizer to support monocultures that we ship across the continent. Would a farmer who has been forced into the methods of industrial agriculture have the knowledge and ability to return to more sustainable methods? It’s doubtful.
As the anthropologist Glenn Davis Stone argues, industrial agriculture has already “disrupted” this “ongoing process of skilling” that should take place during farming where farmers are supposed to “learn how practices and technologies perform together under variable conditions.” Moreover, since we have forgone healthy soil in favor of fertilizers, it would take years of work and care to repair the land.
From the perspective of the market and efficiency, all of the problems with industrial agriculture are externalities. That’s the real danger that the Susskinds fall into when they dismiss non-market logics, as I argued earlier. Among those externalities, Barry Allen points out that “farmers traditionally did not produce only crops; they produced farms, farmers, farming communities, and fertile soil. Far from a presumably more efficient way to farm, agribusiness is another giddy adventure of the unhinged greed we whimsically call the free market.” (280)
We cook less and less, and we lose our knowledge of food, which makes it difficult to build a movement to resist industrial agriculture: as consumers, we too have been standardized by agribusiness and food chemists. Both our palates and our life routines have been rendered less knowledgeable despite the apparent abundance in our stores. It’s hard for us even to realize what we have lost, because it seems normal. Shannyn Kornelsen argues that “By gaining experiential knowledge of food, food preparation, appreciation of taste and quality, and increasing food literacy, one renders the range of products and services offered by the industrial food system as both useless and undesirable.”
But it takes knowledge and skilled perception to know that we have lost appreciation and taste, which is precisely what we risk losing by giving expertise over to automation.
Economic Planning for Everyone
It’s frustrating that the Susskinds spend so little time on what the future of employment will look like and how we will ensure that we all profit from automation. Arguably, this ought to be the most important part of their argument. Suppose that corporations automate professional expertise and offer a better service than humans can. If that service becomes another monopoly power or gatekeeper, then there would be little reason to prefer automated services.
The crucial question is not whether services can be automated but whether we can ensure automated services lead to more equality. Even if services seem to be ‘free’, like a Facebook account or Google search, that would not be enough to guarantee a more equal future since those companies make incredible amounts of money from our data.
The Susskinds would clearly like to avoid another monopoly or gatekeeper, and they invoke the egalitarian John Rawls’s famous thought experiment asking us to choose the future direction of society from behind a ‘veil of ignorance’. That is, if we didn’t know whether we would be born rich or poor, male or female, healthy or sick, would we choose a society where we have professions or one where we automate that knowledge? Would Mark Zuckerberg make different choices behind a ‘veil of ignorance’? I’m not sure that I give a shit, since what we have to deal with in fact are the Mark Zuckerbergs of the world who act in their corporate interest.
There are, I think, two broad lines of argument and resistance that we need to pursue.
First, we need (as a public, as a government, as educators) to change the narrative about innovation that makes it the province of rugged individuals, and which views character traits, such as self-reliance and the capacity to be a ‘lifelong learner’, as the primary determinants of an individual’s success. Instead, as Mariana Mazzucato argues in The Entrepreneurial State, we need to stop socializing risk and privatizing rewards. We are already way behind on this front.
Second, beyond a universal basic income (UBI), we should, as Robert McChesney and John Nichols argue, socialize those services that are essential, such as medicine, education, and legal representation. Thus, rather than people having a small fixed budget that forces them to prioritize between buying food and going to a doctor, we should ensure that everyone has the basic goods. This would prevent all of the risk from being shoved onto the individual.
As McChesney and Nichols argue, in direct opposition to the Susskinds, we should “simply remove certain functions from the market altogether”. (249) Much like Mazzucato, they point out that “we have plenty of economic planning”, but “the problem is that it is done by and for the elites.” (269) That’s the real problem we have to solve.