
“A real intelligence doesn’t break when you slightly change the requirements of the problem it’s trying to solve.” James Somers

“The solution to our newfangled monopoly problem lies in … taking back control over the internet and our digital infrastructure, instead of allowing them to be run in the pursuit of profit and power. … If we don’t take over today’s platform monopolies, we risk letting them own and control the basic infrastructure of 21st-century society.” Nick Srnicek

 

Despite my best intentions to have a screen-free Saturday, I checked Twitter while brewing my coffee and came across “Is AI Riding a One-Trick Pony?” by James Somers in the MIT Technology Review. It turned out to be an engrossing read that addresses the technological challenge of AI: Will we ever have a ‘real intelligence’ that lives up to our hopes for AI?

I used to care a lot about this question back in the early 2000s while I was studying philosophy. At that time, it seemed unlikely that AI would move beyond dominating humans at games with well-described rules, like chess, and gain a foothold in the more flexible kinds of human cognition. I still think we are a fairly long way off from that, but I’ve come to care less about the technological challenge. Instead, I think the political challenge, which Somers inadvertently raises, is far more important: Will we ever have a ‘real enough’ artificial intelligence that will drive corporate profits at the expense of everyone else?

At the heart of Somers’s article is AI pioneer Geoffrey Hinton, who in the 1980s developed the backpropagation method for training so-called neural networks. In essence, a network of nodes is organized in layers. Training information, say a photograph, is fed into the first layer, and a response, ‘recognizing’ something in the photo, emerges from the highest layer. When the network produces an incorrect response, the error is propagated backwards from the output layer toward the input layer, and the weights connecting the nodes are adjusted to reduce it (a toy version is sketched after the quote below). Somers rightly points out how this one trick of AI may have reached its limits:

“A computer that sees a picture of a pile of doughnuts piled up on a table and captions it, automatically, as “a pile of doughnuts piled on a table” seems to understand the world; but when that same program sees a picture of a girl brushing her teeth and says “The boy is holding a baseball bat,” you realize how thin that understanding really is, if ever it was there at all. Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled. … Machines have trouble parsing sentences that demand common-sense understanding of how the world works.”
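To make the mechanics concrete, here is a minimal sketch of backpropagation in Python. It trains a tiny two-layer network on XOR, a classic toy problem; the task, layer sizes, learning rate, and training length are my own illustrative choices, drawn from the textbook algorithm rather than from Somers’s article or any system Hinton built:

```python
# A minimal sketch of backpropagation, assuming a toy setup: a two-layer
# network learning XOR. The task, layer sizes, learning rate, and number of
# training steps are illustrative choices, not details from Somers's article.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four input pairs and their XOR targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights and biases for each layer.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

lr = 1.0  # learning rate
for step in range(10_000):
    # Forward pass: feed the inputs through each layer in turn.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # the network's response

    # Error at the output layer: how wrong was the response?
    err_out = (out - y) * out * (1 - out)

    # Backward pass: propagate the error from the output layer toward the
    # input layer, adjusting each layer's weights to reduce it.
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

# After training, the responses should approach the targets [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

Even at this toy scale, the pattern Somers describes is visible: the network only ‘learns’ by grinding through the same handful of examples thousands of times, which is why the method’s appetite for data grows with the difficulty of the task.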

Now, the Vector Institute in Toronto has attracted Hinton and plenty of cash. Somers connects the details of backpropagation to the corporate hunger for our data:

“Backprop is remarkably simple, though it works best with huge amounts of data. That’s why big data is so important in AI—why Facebook and Google are so hungry for it, and why the Vector Institute decided to set up shop down the street from four of Canada’s largest hospitals and develop data partnerships with them.”

Yet Somers virtually ignores what I’m calling the political challenge. After doing some quick research, I found that the Munk Foundation donated $100 million to a hospital, specifically the Munk Cardiac Centre, with the aim of partnering with Vector:

“Most of the money will be used to develop a digital cardiovascular health platform — a digitized compilation of health information from patients, ranging from blood-test and imaging results to genetic sequencing. The centre has partnered with the Vector Institute, a Toronto-based company that specializes in artificial intelligence, to develop the platform.”

The Toronto Star calls Vector an “unusual hybridization of pure research and business-minded commercial goals.” On the one hand, the Canadian government is donating $100 million. On the other, Vector has also attracted large corporate donations. Accenture, a global consulting company that actively develops automation, is a ‘founding sponsor’ of Vector, which will “actively seek ways to enable and sustain AI-based economic growth in Canada.” Yet Accenture strategically relocated its headquarters from Bermuda to Ireland to reduce the amount of tax it pays. According to Wikipedia, “In 2017, the company reported net revenues of $34.9 billion, with more than 425,000 employees serving clients in more than 200 cities in 120 countries. In 2015, the company had about 130,000 employees in India, about 48,000 in the US, and about 50,000 in the Philippines.” In Canada, the company employs 3,425 people.

Clearly, Accenture already makes use of strategies – offshoring jobs, using tax havens – to maximize profits. As the government invests in Vector, Canadian citizens ought to ask what returns will flow back to them and what returns will flow to corporations. While we are told that American corporations love the neoliberal ‘free market’, in fact the US government heavily subsidizes the riskiest parts of research, only to allow corporations to profit from what is in fact a welfare system for the wealthy. Noam Chomsky (1996) describes this as a decision to “transfer public funds to make sure that high-tech industry keeps moving”.

Now, the neoliberal model of extracting profits finds synergy with platforms in what Shoshana Zuboff calls surveillance capitalism: “The game is selling access to the real-time flow of your daily life – your reality – in order to directly influence and modify your behavior for profit. This is the gateway to a new universe of monetization opportunities: restaurants who want to be your destination. Service vendors who want to fix your brake pads. Shops who will lure you like the fabled Sirens.”

In the case of Vector, the National Post quotes Dr. Barry Rubin, medical director of the Munk Cardiac Centre, as saying AI will benefit patients through exactly the kind of monitoring and surveillance that Zuboff warns us of: “The idea is we remotely monitor patients, we keep them out of the hospital, and if we see something that’s wrong, be able to predict it before it happens and treat them before they have a lethal arrhythmia or heart attack.” He also speculates on the possibility that AI will uncover genetic markers for cardiovascular disease: “That gives you the opportunity to say: ‘Hey, maybe that gene is related to the development of this aortic narrowing. And if you then figured out what that gene does, you might be able to treat patients with that gene before they ever got a narrowed aortic valve.’”

It’s not that I’m against research that might save lives, but we need to mount a political challenge to how research resulting from government investment becomes privatized and profits platforms. We should always imagine what our data would look like in the hands of Google and Facebook, who already buy as much of our data as they can. Robert McChesney defines neoliberalism as “policies and processes whereby a relative handful of private interests are permitted to control as much as possible of social life in order to maximize their personal profit.” As platforms come to control ever more of our social life, we must take seriously Nick Srnicek’s argument to nationalize them. We’ve already missed the chance to nationalize so much technology that emerged from government funding, and now that we know better, we should mount resistance. The big tech companies don’t need AI to do things as well as humans can, just well enough to extract profits from monetizing our social lives.

 

Header image by Osman Rana
