
Have We Lost Our Minds?

By Christian Madsbjerg

Since the mid-nineties, the story about IT has been that the “New Information Economy” would give rise to vast gains in productivity. We’ve been told that if we simply implement ERP, CRM, and God knows what other kinds of systems, our companies, public services, cities, and infrastructure would become smarter and more efficient. That we humans would be supercharged by technology and become vastly more productive as a result. After 20+ years, one would think there would be indications of this productivity boost at all levels of society, beyond just the valuations of the companies selling us the message. Yet that is not the case.

Let’s look at education. According to the Organization for Economic Co-operation and Development (OECD), which tracked the relationship between math performance and access to information and communication technology in schools from 2001 to 2012, there is actually an inverse relationship between how well our kids learn math and how many computers we put in our classrooms. In every single country studied, the more computers schools introduced, the worse children performed. In fact, kids who used pen and paper to solve math problems had higher test scores than those who used computers. As the OECD puts it, “Impact on education delivery remains sub-optimal.”

In our economy, the benefits of massive IT implementation and big data are similarly elusive. Economic growth, by definition, is the result of more people producing plus people producing more efficiently, and the latter half of that equation has been disappointingly slow. We have experienced some growth, sure, but far less than IT promised us. We are certainly nowhere near the hyper-productive, data-fueled society we were told we would become.

Take marketing, for example. Amazon has had more than twenty years to learn our preferences and fine-tune its recommendation engine. Yet it still recommends products we have just bought and hence no longer need, and even suggests items that are already in our shopping carts. We’ve been told that the more data a company amasses, the better it will be at targeting consumers. But is this really the case? Think about the old story of the disruptor, Netflix, and the fallen giant, Blockbuster: wasn’t that really a story of supply and demand and a better distribution model? Or did we all really watch Netflix because of suggestions generated by its AI, big data, and machine learning? (I’m guessing not.) Amazon may be an excellent distribution company, but from any human consumer’s perspective, its recommendation algorithms are deeply flawed.
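To make that blind spot concrete, here is a minimal sketch of a naive co-purchase recommender, in Python with entirely hypothetical data (an illustration of the failure mode, not Amazon’s actual system). Unless items the customer already owns are explicitly filtered out, the highest-scoring suggestions tend to be exactly the things she just bought:

```python
from collections import Counter

# Toy purchase histories: user -> set of items bought (hypothetical data).
purchases = {
    "ann":   {"kettle", "teapot", "mug"},
    "ben":   {"kettle", "mug", "tea"},
    "carol": {"teapot", "tea", "mug"},
}

def co_purchase_counts(baskets):
    """Count how often each ordered pair of items appears in the same basket."""
    counts = Counter()
    for basket in baskets.values():
        for a in basket:
            for b in basket:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def recommend(user, baskets, exclude_owned=True, k=3):
    """Score items by how often they are co-purchased with what the user owns."""
    owned = baskets[user]
    counts = co_purchase_counts(baskets)
    scores = Counter()
    for item in owned:
        for (a, b), n in counts.items():
            # Skip items the user already owns -- unless we "forget" the filter.
            if a == item and (not exclude_owned or b not in owned):
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

# Without the filter, Ann's top suggestions include the mug and kettle
# she already owns; with it, she gets something she doesn't have yet.
print(recommend("ann", purchases, exclude_owned=False))  # e.g. ['tea', 'mug', 'kettle']
print(recommend("ann", purchases, exclude_owned=True))   # ['tea']
```

In this toy model the fix is a one-line filter; real recommenders are vastly more complex, but the sketch shows why simply amassing more data does not, by itself, remove this kind of error.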

I can’t help but wonder if this is because big data analytics aren’t generating the insights needed to target consumers any better than TV advertising and newspapers always have. Billions of dollars have moved from offline to online advertising, but in the offices of big advertisers there is a sinking feeling that the data-fueled Facebook or Google advertising model isn’t going to help them as much as they had hoped and, quite frankly, were sold.

Part of the marketing model of information technology is that the next big thing is always around the corner. These days, that means deep learning and AI. We’re being told that machine learning is going to revolutionize and disrupt everything as we know it and spur amazing productivity… the way IoT and big data were meant to just a few years ago. But before this hype cycle takes off again, maybe it’s time to take a step back and ask ourselves whether we should really buy what they’re selling. What makes these technologies so different from the failed ones of the past? Will they truly stimulate unprecedented productivity? Or is this just another round of technology-sector marketing that will distract us from real matters of human importance?

What if, instead of simply spending billions of dollars putting computers in schools, we also invested in the materials and pedagogies, and in the teachers who work with the computers, to make this technology of actual use to students? What if, rather than mindlessly stampeding into a new world of promises, we thought critically about the distractions these screens, ads, and recommendations create? What if we invested in the critical skills of our children, who will one day build the computers that think for us, and treated technology as a meaningful extension of ourselves rather than pretending it can take over everything?

Critical thinking feels almost revolutionary in the context of deep learning and AI. But perhaps it’s just what we need.

This piece was first published on EPIC.org in connection with a panel Christian Madsbjerg is part of at the conference in Montreal on the 22nd of October 2017.

[Banner image by Daniel Falcão on Unsplash]