
Dedicated to all the humans who will never read this book and are therefore doomed to be replaced by smart machines.

Good luck.

CHRISTOPH BURKHARDT

DON’T BE
A ROBOT

SEVEN SURVIVAL STRATEGIES
IN THE AGE OF ARTIFICIAL INTELLIGENCE


Don’t Be A Robot

Seven Survival Strategies in the Age of Artificial Intelligence

©2018 Midas Management Verlag AG

ISBN 978-3-03876-511-0

Editing: Raj Hayer, London

Cover Design: Frank Höger @RATdesign

Layout: Ulrich Borstelmann, www.borstelmann.de

Printer: CPI Print

Printed in Germany

This book is also available in these formats:

Printed Book (English):

978-3-03876-511-0

E-Book (English):

978-3-03876-521-9

Printed Book (German):

978-3-03876-512-7

E-Book (German):

978-3-03876-522-6

All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner. Requests for permission should be addressed to the publisher.

Midas Management Verlag AG

Dunantstrasse 3, CH 8044 Zürich

Mail: kontakt@midas.ch, Website: www.midas.ch,

Social Media: @midasverlag

CONTENTS

Introduction

1 HOW ROBOTS BECAME HUMAN

It’s Evolution, Stupid

Your Next Big Idea

You Can’t Stop the Robots

How They Think

2 HOW HUMANS BECAME ROBOTS

We are Obsessed with Learning

We are Obsessed with Tools

How We Think

In Love with Predictability

Standardized and Normalized

3 WHAT ROBOTS WILL DO NEXT

Inevitable Technologies

Cognification

Human Interfaces

4 WHAT HUMANS WILL DO NEXT

Automation

Connection

Four Directions

How Do You Know You’re a Robot?

5 SUCCEED IN THE AGE OF ARTIFICIAL INTELLIGENCE

Don’t be a Robot

Forget Occupation: It’s About Tasks

Forget Creativity: It’s About Change

6 SEVEN SURVIVAL STRATEGIES IN THE AGE OF ARTIFICIAL INTELLIGENCE

IDENTITY

PURPOSE

CURIOSITY

AMBIGUITY

ATTENTION

CONNECTION

TRUST

Conclusion

Introduction

May 2015. I have just invested in a smart scale to go with my fitness tracker. The tracker gives me my heart rate and number of steps, as well as an indicator of how well I sleep; the scale gives me my weight and body fat percentage. All data points are automatically synchronized and show up in the app on my phone that I use to analyze my status quo. After several months, the data set shows some interesting correlations between workout patterns and sleep patterns, and I start learning what is good for me based on my own data. I love it; it is amazing, and it works.

June 2017. My internet is finally upgraded to the fastest speed available in San Francisco. I receive a new router and everything works. Except for my smart scale, which does not even tell me my weight anymore. It has not just stopped being smart; it has stopped being a scale. After searching online, I find other people with similar issues and learn that the recommendation is to call the hotline. I hate hotlines. But I want my scale to work. So I call and, as expected, the call goes somewhere far outside the United States. I speak with Jenny, and I highly doubt that this is her real name. Jenny is following a script; I can hear her reading from her screen as she asks me questions to determine the cause of the problem. A lot of the questions sound like she expects me to be the problem.

“Have you checked the batteries of the scale?” Jenny asks. “They might be empty.” Slightly annoyed, I reply that the app her company developed shows that the batteries are full and that I replaced them just a few days earlier.

“Okay, batteries are fine…” Jenny continues, without any emotion, and I can hear her click her mouse before she moves on to ask whether the scale has been underwater or has fallen from a height. Not only am I rolling my eyes at how ridiculous this question seems, but I am also secretly wondering how Jenny’s script came together. Why would they ask this if it had not happened to some other customer before? Now I’m rolling my eyes at the weirdness of human beings in general.

After thirty minutes of more or less diagnostic questioning, Jenny determined that she would send me a new scale. To keep it short, the new scale did not respond either, just like the old one. But this time, I knew that calling the hotline would probably not solve the issue. After Googling a little more beyond the developer’s official FAQs, I found the source of the problem. The company had stopped developing the scale, and it only worked with an older Wi-Fi standard that my new router did not support. I fixed the problem by buying an old router online for very little money and installing a second Wi-Fi network just for my scale, and it worked. We might laugh about how ridiculous it is to set up an outdated network to make an outdated piece of technology work, but this is exactly what so many of my clients, and the corporations they work for, face all the time when dealing with new technologies.

I want to talk about a different takeaway from this story though. I want to talk about Jenny. There is a big problem with Jenny: Jenny is a robot. And yet, she is a human being. Jenny followed a script like a robot would, she showed no emotions like a robot would, and she did not connect with me outside the problem she tried to solve—just like a robot would. But Jenny is no robot; she is a human being. And I know she is a human being. Yet she acts like a robot.

The reason behind this book, and to me the most fascinating paradox of our times, has to do with Jenny and her many colleagues: humans who act like robots. How did we end up in a world in which humans behave like robots while robots become more and more like humans?

The title of this book does not hide my message to Jenny and to anybody else working, acting, behaving, or thinking like a robot: if you want to survive in the age of artificial intelligence, stop being like a robot. But that cannot be our only focus. We also need to understand how to be more human, and for that we need to know what it means to be human. That is what this book is about.

CHAPTER 1

HOW ROBOTS BECAME HUMAN

It’s Evolution, Stupid

Computers make excellent and efficient servants, but I have no wish to serve under them.

Mr. Spock

Something happened. Something big. Over thousands of years, we became the humans we are today: Homo sapiens, to our current knowledge the most intelligent species on this planet. But are we the most intelligent species? We are standing at the brink of a massive paradigm shift. A shift so fundamental, so far-reaching, and so transformative that we cannot even begin to understand what is going to happen to us and our intelligence.

We have already developed artificial intelligence; smart robots with surprisingly human traits are running through many homes and even more factories. Industrialized manufacturing and the use of machines to replace physical labor were once a breakthrough of historic proportions; now that breakthrough fades into just another stepping stone in human development. It fades because of the unfolding advent of machine intelligence, which will transform how we work in factories and businesses around the world. For the first time, we have created a tool that might surpass our own intelligence. What some researchers refer to as the singularity might happen in our lifetime. To some, this development is scary; to others, it is fascinating. And most humans do not yet realize the extent of the consequences of this dramatic shift. If you think the last fifty years of technological development were revolutionary, wait for the next fifty years to turn your world upside down. And if you think the pace of change we see today is already fast, be prepared. We have not seen anything yet. We are facing the most transformative change in about 10,000 years. Industrialization and globalization, the connectedness of minds and machines in the World Wide Web, and the use of data as a new currency are mere precursors of what is going to happen next. We will no longer be the only species using reason, experience, and intelligence to make sense of our world. Maybe we should rethink calling it our world anyway.

I am asking you, for now, to think big. Let’s see the bigger picture and gain a good understanding of the driving forces behind the curtain before we look at what is actually happening in this data-powered paradigm shift in intelligence we are currently facing. Here is an interesting fact that we are eager to forget, or at least ignore most of the time: humans have not always been humans.

If we go back just a few tens of thousands of years, to the point when we started occupying most of the landmass on this planet, we were a very different species. (Actually, there was more than one human species.) We looked different, we relied on our hunting and gathering skills, we formed small groups to ensure protection and survival, and we communicated very differently than we do today. The way we live has changed so drastically that we have a hard time imagining how life might have been at the time. The way we connect with each other, and the incredible number of connections we have learned to handle, have turned our social lives upside down. And ultimately, we have changed the way we think, over and over again. Being busy thinking about what to eat, and how to protect ourselves from adverse weather and hostile animals while looking for food, gave us little time to go on vacation or travel at all. Our minds were busy figuring out how to help us survive. We are no longer forced to spend time thinking about how to find food. It is exactly this last change, the way we think, that I am most interested in. All that we do follows what we think, so it seems worth looking at how we actually think today in an effort to understand how we got to the point of inventing and developing robots with artificial intelligence that may ultimately surpass our own minds’ capabilities.

How often do you think about the fact that we were all fish at some point in the past? Well, not us directly, but our ancestors. Isn’t this a strange thought? Life evolved out of the water, and before mammals could occupy land, their ancestors were living under water. It is very hard to imagine that fish at some point turned into birds and humans, isn’t it? What would people have thought about this crazy idea of leaving the ocean and occupying land? Maybe it is good that there was nobody around to comment on this development at the time. It happened without a plan, without a goal, and without any idea of an end result.

The reason thinking about fish becoming birds is so strange is the fact that we treat fish and birds as very different categories, each with a unique set of features. Humans love categories: they simplify our lives, they organize our environment, and they make abstract reasoning possible. Categorical thinking becomes very apparent when some of the features in a category don’t really match. For example, we think of a penguin as a bird even though it cannot fly, while we would not think of penguins as fish, even though they certainly spend a lot of time underwater. I have always found categories a particularly interesting field to explore. They organize our world and they change very slowly. They are such an essential cognitive mechanism that if we want to understand how robots think, we need to explore how we use categories that are very stable versus the ones that are changing. And when categories change, big things happen.

Now, humans think in categories because they are very useful. It is simply very adaptive to think in categories, so we learned to do it everywhere, with everything. If we did not have categories, we would have to identify every bird we see as a new species and we would not be able to call them “birds” as a group. We would also not know what a human is or what a robot is. If we put a robot next to a bird and another robot next to another robot, we would not know how to group the robots together; we would not know that they are different from the bird. Categories help us to think in abstract terms rather than in concrete examples of a category. So yes, categories are absolutely necessary and an inevitable part of human thinking. Despite some misguided attempts to fight stereotypes (pretty much another word for categories) by avoiding categorical thinking altogether, stereotypical categories exist and persist because we cannot switch off the mechanism of abstraction, even when it leads to false conclusions. These mechanisms are part of who we are. They constitute how we operate. We cannot change them by thinking differently. We can only change the categories, not the categorical thinking behind them.

If we want to understand what it means to be human, we need to understand where the fundamental differences between the two categories of humans and machines lie. How are they different? How do we know a robot is a robot? What exactly is a machine? And how are we so sure we are not machines? To investigate these questions, we need to understand how we come to define ourselves as a category. What is it really that makes a human being different from a machine? Is there really that much of a difference? Or is this again another mind trick we use to protect our existing categories? Let’s see.

Let’s examine how categories are formed. When we compare a fish to a human, a human to a bird, and then a bird to a fish, we find very different features of each category that we use to explain the differences. Fish live under water and humans on the ground; birds fly and humans walk. Birds eat fish, humans eat fish, and humans eat birds. What we see as crucial characteristics for a member of the category bird or fish does not threaten the category we know as “human.” Even though the fish, the bird, and the human are part of the same evolutionary chain, they are distinct enough from each other for us to group them in very different categories. Here is the analogy to robots: we can no longer easily differentiate them from humans based on some of the features we have used for thousands of years. They walk like us, they talk like us, and they look like us.

When we see a bird next to a fish, we can pinpoint all the obvious differences. When we compare a human and a robot, we see quite a number of similarities. For many people, these similarities often outweigh the differences, which naturally makes the category of “robots” a threatening, destructive force to the category of “humans.” Since we do not accept robots as equals (yet), we are under pressure to define the differences between us in the most obvious way possible. As we struggle to do so, robots become more and more human. Nearly every month we see new skills emerge, from understanding and using language to communicate, to deep learning applied to playing games and planning behavior. Robots come closer and closer to being human at an incredibly fast pace.

Imagine the most human-like robot you have ever seen, maybe one of the almost perfect robotic replicas of humans; robots that try to imitate a particular human in every move. Now combine this robot with the smartest chatbot we have today to simulate natural language in human conversations. Finally, add language production that does not sound like a machine but like a human, and boom, we are very close to passing the Turing test. In this test, you converse with an artificial being (our robot) and try to tell whether you are talking to a human or a machine. If you cannot tell the difference, the robot has passed the Turing test. If it were indistinguishable from a human, would we call it human? Would we grant the robot the same rights? Probably not. But on what basis?

To understand this shift in detail, we have to ask why robots became human at all. Why did we build them as copies of ourselves? And once we understand that, we will be able to see what is going to happen next in our evolution. So how did we get to where we are?

We are born into the status quo of a world that bombards us with questions we cannot answer.

Christoph Burkhardt

The Evolution Paradox

One of the most dangerous, and yet most powerful, ideas the human mind has created in its 70,000-year history on this planet is the idea of magical interference by outside forces. Every time we cannot explain the sources and reasons behind a phenomenon around us, we apply magical thinking. We do this for no other reason than sheer desperation. We believe our abilities to investigate are limited. Sometimes we simply cannot know the answer to a question. For example, we do not know why we are here. Since we cannot answer this question (yet), we make use of belief systems to justify why we are here. The human mind has evolved to answer all questions; why this is the best thing that could ever have happened to us, we will explore later. For now, whatever belief system you apply to answer such a question, we have to be very aware of the fact that as humans we are not made to leave any question unanswered. We cannot accept that there are questions without any possible answer (right now). When we face the limits of what we know, rather than accepting those limits, we make up a story that serves as a temporary fix to answer the question. These temporary fixes take the shape of religious beliefs, supernatural explanations, and paranormal activities, but also appear in more mundane forms, such as urban myths, beliefs about nutrition and exercise, or the idea that Oprah would make a great president of the United States.

Here is the problem with this type of thinking:

• Over time, beliefs are reinforced merely by the fact that the questions behind them still cannot be answered by us (such as the question of why we are here), and the made-up stories become convictions. At that point it becomes virtually impossible to break them. Even the most convincing evidence is then not enough to make us give up a belief system.

• What initially starts as a lack of knowledge and an inability to answer a question turns into a pseudo-answer that satisfies our need to know just enough to end all investigation into the truth, or into different versions of what an answer might look like. We stop exploring and start justifying. We enter the post-factual world.

• The great paradox of evolution lies in the fact that it is evolution that got us to reject the theory of evolution. Hardly any other theory has received more resistance than the powerful explanation of how we became who we are today. Because our belief systems have become convictions over time, many people still struggle to accept evolution as a fact, despite overwhelming evidence. In fact, there is so much evidence for evolution that it should not be called a theory. Intelligent design, on the other hand, has yet to deliver any scientifically sound evidence.

How can this type of thinking be a good thing at all? Why would it be beneficial for Homo sapiens to have this kind of belief system? Why can we not simply move on and accept facts as facts? The answer is quite tricky. But there is one that does not require you to believe. So hold on—we are getting there.

If you take magical thinking to explain the unexplained and combine it with the simplifying logic of categorical thinking, you get a powerful mix that is responsible for most of what makes us human. That is why it is so crucial that we understand why we do what we do, in our thoughts as well as in our actions. Now more than ever, we need to acknowledge the workings of the human mind as they are, and not be blinded by wishful thinking about being more rational than we actually are. Any false and oversimplified explanation of what makes us human will only drag us further down into living with machines that outperform us on every level.

Now is the time to rethink what it means to be human, rethink the skills that make us stronger, deepen the capabilities that make us different, and understand our minds; to extend our minds’ powers rather than limit ourselves to irrational fights against technological changes and developments that can no longer be stopped or avoided. It is beyond question that we will live in a world run in large part by artificial intelligence. The question is not how we avoid such a world; the question is how we want to live in this world—how we want to be human in this world.

Your Next Big Idea

Evolve solutions; when you find a good one, don’t stop.

David Eagleman

Take these two very fundamental human processes, categorical and magical thinking. An evolutionary process got us, first, to develop these ways of thinking as a way of dealing with the world, because it was adaptive to do so, and second, to use our cognitive tools, including categorical and magical thinking, to create new ideas, come up with innovations, and ultimately make progress on a global scale. Once we see this, we begin to understand that we need to look at evolution as the process that made us human. Evolution created the way we think, and the way we think created robots that can think. And now evolution will be responsible for the next leap in intelligence: the evolution of non-human intelligence.

Now we need to look at how evolution does this in order to know where this is going. How will evolution shape artificial intelligence? How will evolution force us to adapt in the age of smart machines?

The Evolution of our Ideas

Evolution is not only about species and organic systems developing biologically from one generation to the next; it is a process that we can see in action everywhere. From individuals and companies to societies, from pop music to business models, from technologies to preferences, everything is evolving.

Many people do not realize that we are still evolving biologically. We are not done. We have not reached the state of the ultimate human being with no further need to adapt. Yet our biological evolution is too slow for us to observe. We simply do not see the tiny changes that happen from one generation to the next. What we do see, though, is our social evolution: how our organizations and institutions change, the way our political systems operate, and the way we see the roles of state, government, and citizens. We see how the music of the sixties is different from what we listen to today, but at the same time we can hear the connection. Music obviously relies on existing material to create new pieces. Our ideas evolve.

Within the corporate world, this evolution happens in the shape of new products and services, but also in terms of economic shifts, new business models, new platforms, and cultural movements. The link between the evolution of ideas and the evolution of humans lies in the social realm. We need to realize that outside the evolution of ideas, which so many people contribute to, there is no other process of creation. Everything new, everything innovative, every paradigm shift in our cultural lives is based on the evolution of ideas. The individual mind in this game is at once the driver of ideas and yet hardly necessary to make the process happen. This surprises many innovation strategists, because it means that the individual with the greatest ideas is not really necessary, and certainly not sufficient, for breakthrough changes.

Ideas exist outside human minds. It does not feel this way, but we do not really have ideas; rather, we work with ideas. They are not ours. This point is crucial to keep in mind if we want to understand how robots became robots.

Ideas are independent of their human hosts. We do not create, own, or store ideas. We share them. This is a very important difference that, for many organizations, makes all the difference between being innovative and being stuck. Look at your parents’ generation and your generation; you are exemplars of the same species. While you will certainly find differences between the two generations, you will probably not assume that there was a categorical shift between them, making one human while the other is something other than human. If you go back in time, far back, even further, the first mammals did not have much in common with us humans today. Yet if you go back generation by generation, there will never be a point at which you can say that there was a jump between two generations big enough to justify calling one generation of mammals Homo sapiens and the next Homo sapiens sapiens. This clear-cut difference between the categories we apply is only possible because we do not zoom in and compare two generations, but instead compare hundreds of generations at the same time. We see the difference between early mammals and us very clearly, but we do so only because we ignore all the connecting mammals in the chain.

And here is the point: the same is true for all our ideas. Between an idea that a human shares and the next generation of this idea, there will be hardly any difference. You know how this feels when it happens. Someone shares an idea and, if you listen, your mind will immediately start to come up with variations and mutations of this idea. Your mind creates the next generation. This process is of course much faster than biological mutation, but it is not fundamentally different from it. So the idea someone shared with you and your idea are only different to a very small degree. Yet is the first generation of this idea supposed to be owned by the first human and the next generation by you?

Before your mind starts telling you that this can’t be true, since your idea is obviously different from the first, just imagine this process rippling through thousands of evolutionary generations within a few hours, with a group of over a hundred people adding their mutations. I go through this process with my clients regularly, and within just a few (quite exhausting) hours, the hundreds of useful, innovative ideas that make it to the actual project kick-off phase are not at all similar to the original ideas we started with. Yet nobody will be able to say who came up with the final ideas. And that is because the ideas themselves went through thousands of evolutionary steps to get where they are. They did that by utilizing all the brains and minds in the room. We do not have great ideas; great ideas have us.

I am fully aware that this is quite a strange view of how we innovate, but it is crucial if we want to understand how we are different from the robots we create, and ultimately to recognize that we can be so much more if we become better hosts for ideas and let go of being robots. The real task is not to become a better creator of ideas. Our job will be to become a better platform for ideas. Organizations need to make sure they are platforms on which the evolution of ideas can take place quickly and with many minds in the mix.

If we accept the separation of ideas from the humans who have them, we can move on to look at how robots became human.

Nobody Invents Anything New

Whenever we adopt an idea or feel we have invented something new, we walk through a door. Every evolutionary step, every new generation of ideas, provides these doors to us. When we actually walk through a door, we have accepted the underlying idea behind it. After going through that door, though, we see a new set of doors. Maybe we see three or four doors in front of us to open now—doors that we had not seen before. We did not even know they were options we could end up with. Now that we have walked through the first door, we see the next set of doors and pick one to walk through. The concept of “the adjacent possible”1 describes this map of doors: the set of possibilities that the current reality can lead to if we walk through the first door. Steven Johnson notes that the adjacent possible “captures both the limits and the creative potential of change and innovation”.2 The limits are set by what we can see from the status quo. We simply do not know what the next door will hold for us. We can only know by opening it. The creative potential, on the other hand, is set by our capability to open the doors and keep exploring what lies behind the next one. Indeed, we are afraid at times, since a closed door could hide danger. Yet our drive for progress leads us to open another door at every turn. And the further we walk through door after door, the further we move away from where we started. This is how some of the robots in our lives have become what they are today, and who knows what they might become after we open the next door.

Of course, there is a long list of evolutionary steps leading up to the personal computer, but let’s start there. Do you remember the first time you opened Excel on your computer? How would you describe the experience of using the software from today’s perspective? Did it feel natural? Was it easy to use? I did not think so when I first used it. There was a lot to learn.

• Most technological systems around us (for the sake of categorical simplicity, we’ll call them all “robots”) have, for quite a long time now, forced us humans to adjust to whatever the logic of the technology required. We literally had to learn to use the software, meaning we had to form new neural pathways in our brains to adapt to the software’s requirements.

• After the first generation of interfaces became more visually appealing, they started mimicking inanimate objects in our environment. The desktop looked like a desk, files looked like paper files, and folders would hold files, just like in the real world. The process of duplicating a file would even be called “copying” it.

• Some time later, interfaces would start animating and anthropomorphizing inanimate objects. Now paperclips started talking (though Microsoft’s Clippy was hardly of any help when working on a document), and they started bouncing around. When you threw something into the trash folder, an invisible hand would crumple the paper and throw it into the basket (which resembled the basket next to your real-world desk).

• Then we changed how we interact with the interface by adding new input sources alongside the mouse and keyboard, such as voice commands and touchscreens. As we went mobile with our personal computers (sort of robots, too), software had to adjust to smaller screens and different functionality. For the first time, interfaces started adjusting to humans, rather than humans having to adjust to the requirements of software.

• Next, our robots learned to talk to us and to take commands in natural language, which is, even now, transforming how we interact with our computers and devices. Chatbots learned to mimic us and to hold conversations the way a human being naturally would.

• In the near future, robots will connect and exchange information to serve us without us doing anything. Think of a robot sitting in every conference room, displaying information that it thinks might be relevant to the current conversation in the room. When the conversation is about a sales report, the report will be right there, ready for you to look at. If the conversation is about a marketing campaign, including a YouTube clip your competitor made, the clip will be ready to play without your doing anything.

The doors we walk through open up opportunities to us and challenge us. We cannot know for sure what is behind the next door. We also cannot know how many doors there are. But without going through them, we will not be part of creating our future. It will simply happen to us and that is not a good idea.

Robots became human because we as humans like to interact with humans. We designed them step-by-step to serve our needs. The ideas behind them went through hundreds of evolutionary generations before they became what they are today.

How to Become a Platform for Ideas

Whether you want to turn yourself, your team, or an organization into an effective hub for innovation and progress, you will need to invest in the same strategies. And many of them don’t exactly come naturally to us:

1. Stop wondering whether an idea is good, not so good, or downright bad. Evaluating ideas while you are creating an evolution around them is a waste of time. Your impulsive judgment will probably perform some sort of evaluation no matter what. You might not actually be able to stop this judgmental inner voice, but you can certainly choose not to listen to it.

2. Take whatever you like and ignore the rest. Whatever the reason for your liking an idea or concept someone came up with, take it, reuse it, rethink it, work with it, but resist the temptation to immediately discuss the ideas you did not like. In a room full of people who do not really know why they like or dislike what they hear, a discussion about the “why” is a waste of energy.

3. Make other people’s ideas your own and let the same happen to your ideas. Birds did not invent wings. Evolution did. You do not invent ideas. Evolution does. Let go of ownership. Play your part, contribute, improve, adjust, change, turn things upside down, but do not fight for your ideas just because you think they are yours. Fight for the ones that are good, particularly when they are not your own.

4. Enjoy the process rather than the outcome. Of course it is satisfying to see an idea become a success, but you will last much longer and become a much better platform if you care about the process and not the outcome. Work on the ideas around you because working on them speeds up the evolution, not because you are looking for that one big hit. Companies, in many cases, need to change incentives so that they do not rely heavily on the outcome but rather on the process. Yes, when it comes to progress, having tried is more important than having succeeded. Unsuccessful ideas inspire successful ones. That is why they need to be allowed to get out there.

5. Connect people and their ideas. Be strategic about whom you want to meet to create new ideas. The more people realize that you are a connector, the more people will want to use your platform to get connected.

You Can’t Stop the Robots

It is curious how often you humans manage to obtain that which you don’t want.

Mr. Spock

Do you remember when the first cordless phones were made for the mass market? Some people celebrated their new freedom, while others were very skeptical about potential health risks. I grew up in a household of the latter kind. I remember very well anticipating when my family would get our first handheld phone without a cable. My mom strongly opposed the idea of having invisible waves travel through the house that could potentially harm our bodies and brains. I don’t think the opposition was driven by a concrete, well-defined fear, but rather by a technology that we, frankly, did not understand. But this is how humans operate. We are afraid, we make up our minds against something, we resist for a while, and then we give up under the pressure of convenience. So, as we have done every time since, we jumped on the bandwagon (despite being late to the party) and got our handheld phones, and so far nobody has gotten sick.

A couple of years later, we had the same discussion when the first Wi-Fi stations came into homes and replaced those long and annoying LAN cables. At the time, my parents were building a house in Germany. Concerned about the effects of even more invisible information pathways through the air, and the obvious yet invisible exposure we would all have to confront, they went through the enormous effort of running Ethernet cables to every room in the house, only to get Wi-Fi a few years later. Again, the potential risk at first stopped progress, before we inevitably surrendered to convenience.

I find it fascinating that resistance to technology, even when it is for legitimate reasons, is in most cases overcome not by good reasoning and convincing arguments from the technology side, but from the side of the adopters. The more adopters make use of a technology, the higher the pressure on the rest of us to go along with it; and, equally important, the more convenience a technology delivers compared to the status quo, the faster the adoption. In other words, it is not important whether a technology really proves that it is not harmful. What matters is how many people a technology can convince to adopt it. The critical mass of adopters ultimately determines whether we all adopt it too.

A lesson I learned from my family’s resistance to new technology, as well as from many of my clients trying to transform their businesses to meet digital standards, concerns the way we use our energy to make technology less harmful and more useful. Because here is the fact: by resisting technological change, none of us did anything to stop its implementation, nor did we make it less harmful, nor did we implement it in a way that added more value than it had originally offered. And that is a real issue when it comes to intelligent machines and smart robots. We resist them right now in many areas of life. Many people do not want companies like Amazon or Apple listening to their conversations around the clock. Yet it is likely that we will not stop technologies that are already doing this. We will not stop technology with artificial intelligence from entering our homes and offices, our schools, and our government institutions. We will not—and here is where things get real—we will not make these technologies less harmful or less intrusive by resisting them, and ultimately, we will not make them better, apply them to more important problems, or build better cases for their use, because we are too busy fighting them. With my corporate clients this leads to real problems, problems that ultimately put the survival of companies at risk. Being late to the game is not a problem; not contributing to better use cases is.