Soon AI will take over the world. There will be no jobs left for human beings. As a species, we will slowly realize that our existence is meaningless, staring off into the distance while an AI robot cooks our food and folds our laundry. We won’t manage our own finances, won’t build our own products—we won’t even fly our own planes.
Or at least that’s the reality some people envision when they think of the rise of AI. As general partner of a deep tech venture capital firm, I believe that vision is far off base from what actually lies ahead.
Sure, the buzz over AI has become much louder with the proliferation of terms such as “big data” and “deep learning.” But unbeknownst to most people, the scientific community still has not found a road map to achieving that vision.
This was true 50 years ago, just as it is true today.
The irony of AI is that it is a master of complexity and a novice at generality.
AI can best be described as narrowly intelligent. It has mastered well-defined tasks that are commercially useful, such as speech recognition for call centers and autonomous driving for warehouse robots.
But generally speaking, it’s still not very intelligent.
Imagine a photo of a young man and woman hugging each other at the departure gate of an international airport. Who are they to one another? Perhaps brother and sister or friends, but based on context clues, they seem more likely to be lovers. How do they feel? Perhaps very sad. Why? Because they must be parting for a long time, enough to make them miss each other. What will they do after they hug? One will walk into the gate and the other will leave the airport.
These are instances of general common sense that most human beings are capable of deducing—and that AI would have absolutely no clue how to discern.
AI is meant to master complexities human beings cannot.
In my experience, the fundamental difference between AI and software as a service (SaaS) is that AI steps outside the boundary of human capabilities, while SaaS still operates within that boundary. SaaS can make human beings highly productive, but it cannot create a superhuman.
AI can and has already done so.
So while AI is not yet generally intelligent, its narrow intelligence alone is enough to create a type of productivity that SaaS cannot achieve. For example, a company in our portfolio can compute over 6 million construction schedules for a given schematic in 30 minutes and recommend the optimal one. A human operator using project management software would likely take at least one week to create a single schedule.
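To make the idea concrete, here is a deliberately tiny, hypothetical sketch of what "search the space of schedules and recommend the best one" means. The tasks, durations, and dependencies below are invented for illustration; a real system would search millions of candidates with far more sophisticated methods than brute-force enumeration.

```python
from itertools import permutations

# Hypothetical toy example: a single crew must perform four construction
# tasks, some of which depend on others. We enumerate every ordering that
# respects the dependencies and pick the one minimizing the sum of task
# completion times, so that work finishes as early as possible overall.
durations = {"excavate": 3, "foundation": 5, "framing": 7, "wiring": 4}
depends_on = {"foundation": {"excavate"},
              "framing": {"foundation"},
              "wiring": {"foundation"}}

def is_valid(order):
    """True if every task appears after all of its prerequisites."""
    seen = set()
    for task in order:
        if not depends_on.get(task, set()) <= seen:
            return False
        seen.add(task)
    return True

def total_completion_time(order):
    """Sum of completion times when one crew performs tasks back to back."""
    clock, total = 0, 0
    for task in order:
        clock += durations[task]
        total += clock
    return total

best = min(filter(is_valid, permutations(durations)),
           key=total_completion_time)
print(best, total_completion_time(best))
# → ('excavate', 'foundation', 'wiring', 'framing') 42
```

Even this toy version shows why the capability matters: the number of candidate schedules grows factorially with the number of tasks, which is why a machine can evaluate millions of options in the time a human evaluates one.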
In the world of SaaS, one would look for ways to enhance human productivity. In the world of AI, it is about imagining things beyond human capabilities.
AI can transcribe and operate on new forms of inputs.
The interaction between human users and SaaS requires that the data is understandable and processable by human beings. But as AI takes over and automates certain tasks, the modality of data need not be understood by humans.
For example, consider a scenario where medical professionals assess ultrasound scans, then provide a diagnosis. Sometimes these scans are of poor quality, resulting in an inconclusive diagnosis. Rather than relying on the images the ultrasound scanners render, AI could go directly to the raw sensor data and run its algorithm on that source. This sidesteps the image-quality problem entirely, because the AI operates in a paradigm that no longer requires human-readable images.
AI prototypes take more than a weekend to create.
Still, AI is not without its limitations. It takes time to build and train AI. The models are complex, often comprising many layers and millions of parameters.
Because AI is narrowly intelligent, each model must be trained separately for each task. For example, AlphaGo, a computer program that can defeat humans in the board game Go, would need to be trained rigorously all over again to play checkers decently. A human chess master who has never played checkers would probably take 10 minutes to learn the rules and need just a few games to become a decent player.
It is common for startups to prototype their software product over a weekend, test the product with customers and iterate on the product quickly. This is not possible for AI products, which require a lot more programming and training.
This complexity means AI companies must have a very good sense of the market opportunity from day one, or risk sinking months of development time into a product nobody wants.
Will we see the Google of AI?
As a venture capitalist, a question I often get asked is whether AI is a winner-takes-all market and could lead to the next Google.
It is helpful to look at Google from the lens of the pre-internet era. We used yellow pages to search for phone numbers and a travel guide to research our holiday destination. Enter Google, and the modality of search changed forever.
The Google of AI would not, and could not, exist within the confines of how things are done today. It must operate outside those confines, outside the ways human beings currently do things. If it simply automates the mundane or makes our work more productive, it is not sufficiently game-changing.
The dominant player of the new digital age must imagine and build toward operating procedures that do not yet exist and ways of living that have not yet been lived.