Some call it “strong” AI, others “real” AI, “true” AI or artificial “general” intelligence (AGI)… whatever the term (and the important nuances between them), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human — possibly even at a superhuman level of intelligence, with unpredictable, uncontrollable consequences.
This has been a recurring theme of science fiction for many decades, but given the dramatic progress of AI over the last few years, the debate has been flaring anew with particular intensity. An increasingly vocal stream of media coverage and conversations warns us that AGI (of the nefarious kind) is coming, and much sooner than we’d think. The latest example: the new documentary Do You Trust This Computer?, which streamed for free last weekend courtesy of Elon Musk and features a number of respected AI experts from both academia and industry. The documentary paints an alarming picture of artificial intelligence, a “new life form” on planet earth that is about to “wrap its tentacles” around us. There is also an accelerating flow of stories pointing to ever scarier aspects of AI: reports of alternate-reality creation (a fake celebrity face generator and deepfakes, with full video generation and speech synthesis likely in the near future), the ever-so-spooky Boston Dynamics videos (the latest one: robots cooperating to open a door) and reports about Google’s AI getting “highly aggressive”.
However, as an investor who spends a lot of time in the “trenches” of AI, I have been experiencing a fair amount of cognitive dissonance on this topic. I interact daily with a number of AI entrepreneurs (both in my portfolio and outside of it), and the reality I see is quite different: it is still very difficult to build an AI product for the real world, even if you tackle one specific problem, hire great machine learning engineers and raise millions of dollars of venture capital. Evidently, even “narrow” AI in the wild is nowhere near working reliably in scenarios where it needs to perform accurately 100% of the time, as most tragically evidenced by the recent self-driving-related deaths.
So which is it? The defining characteristic of exponential technology accelerations is that they look like they’re in the distant future — until suddenly they’re not. Are we about to hit an inflection point?
A lot of my blog posts on AI have been about how to build AI applications and startups. In this post, I look a bit further upstream at the world of AI research, to try to understand who’s doing what work, and what may be coming down the pipe from the AI research labs. In particular, I was privileged to attend an incredible small-group workshop ahead of the Canonical Computation in Brains and Machines event held at NYU a few weeks ago, which was particularly enlightening and informs some of the content in this post.