Elon Musk’s recent announcement of his new startup Neuralink (together with an extensive 40,000-word backgrounder from WaitButWhy’s Tim Urban), and Bryan Johnson’s investment in Kernel point to a near-term future where humans are enabled to communicate “telepathically” – in a rich manner at least as expressive as spoken and written language – direct brain-to-brain, using either a Neuralink Brain-Computer Interface (BCI)…
…or a Kernel BCI…
…or, a few years down the line, a mass-market consumer brand BCI:
This scenario poses an immediate problem – how do we get these different BCI products to talk to each other? If I’ve installed an Apple BCI and I want to communicate telepathically with your Android one, how do they interoperate? I certainly don’t want to get locked away in a walled (mind)garden! So, let’s assume we can implement a BCI translator…
Pretty soon, however, we're going to have to support a combinatorially growing number of point-to-point translations – one dedicated translator for every pair of brands, n(n−1)/2 of them for n brands…
…this will quickly get pretty expensive to create and maintain. Looks like we'll need some kind of common language for all of the BCIs to talk to each other – say (to be glib) "Human Intelligence Markup Language" (HIML):
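To put numbers on that scaling argument, here's a minimal sketch (the function names and brand counts are illustrative, not from any real spec) comparing direct pairwise translators with a single shared language:

```python
def pairwise_translators(n: int) -> int:
    """Point-to-point: every pair of mutually incompatible
    BCI brands needs its own dedicated translator."""
    return n * (n - 1) // 2

def himl_translators(n: int) -> int:
    """Shared language: each brand only needs one translator,
    to and from the common HIML representation."""
    return n

# Watch the gap widen as the BCI market grows:
for n in (2, 5, 10, 50):
    print(f"{n} brands: {pairwise_translators(n)} pairwise vs {himl_translators(n)} via HIML")
```

With 50 brands that's 1,225 pairwise translators to build and maintain, versus 50 with a common language – quadratic versus linear growth.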
…and now we can send this easily across the internet too, right?
So… to get this straight: in order for me to get any useful network utility from my BCI device, there needs to be a common language which translates my brain's activity into machine-readable form, sends it across the internet, and then re-translates it into a form that your brain will understand? Basically an abstraction of the whole range of the human mind's functionality and content? Hmmm… sounds like… English? Mandarin? Spanish? Japanese? Or that common meta-language that Google's AI researchers recently observed…? Just less lossy, with more expression. (Instead of just saying "I love you", somehow our BCIs translate the emotional state into a more directly experienceable message.)
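Purely as a thought experiment, an HIML message might look something like the sketch below. Every name here (HimlMessage, the affect channels, the sender address scheme) is invented for illustration – there is no such specification – but it shows the idea of pairing the lossy symbolic channel with a denser affective payload:

```python
from dataclasses import dataclass, field

@dataclass
class HimlMessage:
    """Hypothetical 'Human Intelligence Markup Language' packet.

    Carries both symbolic content ("I love you") and a less lossy
    affective payload that the receiving BCI could render as a
    directly experienceable state rather than mere words.
    """
    text: str                                               # lossy symbolic channel
    affect: dict[str, float] = field(default_factory=dict)  # e.g. valence/arousal
    sender_id: str = "unknown"

# "I love you", plus the emotional state the words can't carry:
msg = HimlMessage(
    text="I love you",
    affect={"valence": 0.95, "arousal": 0.7, "warmth": 0.9},
    sender_id="bci://apple/alice",
)
print(msg)
```

The interesting design question is the `affect` dictionary: what would a complete, standardised vocabulary of human mental states even look like? That is exactly the abstraction problem HIML names.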
However engaging this thought experiment is, (un)fortunately this scenario is unlikely to play out any time soon – it’s more likely that Neuralink and Kernel are going after the right problem – but at the wrong time.
To explain, let’s go a bit further into our thought experiment. So… could a computer speak HIML?
…Pretty much yes, at least with a limited vocabulary initially. But computers will get smarter, probably Turing-test-smart – to the point that most people, most of the time, won't be able to tell the difference between communicating telepathically with a computer or a human.
At this stage, computers (AIs) and humans would make up a new society of [telepathic] “human intelligences”, some biological, some (probably the vast majority) physically distributed across the datacentres of this world. Whether or not there’s a ghost inside, “humanity” will become a hybrid machine.
Let’s deconstruct the word “telepathically” for a moment. When we use that word – when Tim Urban talks about Elon’s “magic wizard hats” – what we’re actually implying is two things:
1. It will be wireless, soundless and movement-less – just like thinking or listening.
2. It will be faster (higher outbound bandwidth) than speaking, writing, typing or any other current mode of outbound communication.
Solving problem 1 – cool. Pretty much clear you’ll need a direct BCI (either invasive or non-invasive) for that. Can’t wait.
But solving problem 2 – I'm not so sure. Could we ever get enough bandwidth, with enough accuracy and resolution, out of a direct neural interface to compete with richly expressive speaking, writing or typing? More importantly, could we get it any time soon – and faster than the alternatives that might happen instead? I love Ramez Naam's Nexus trilogy – but "Neural Dust" is pure sci-fi conjecture.
As Tim Urban points out, Stevenson's Law suggests that the number of neurons we can simultaneously record doubles roughly every 7.4 years – and if this continues, it will take until the end of this century to reach a million neurons, and until around 2225 to record every neuron in the brain. Even then, our brain's neurons are biologically constrained to process information at a theoretical maximum of around 30 bits per second.
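Running the arithmetic behind that projection makes the timescale vivid. The baseline figures below are assumptions for illustration – roughly 500 simultaneously recorded neurons circa 2017, and the commonly cited estimate of ~86 billion neurons in a human brain:

```python
import math

DOUBLING_YEARS = 7.4       # Stevenson's Law doubling period
BASELINE_YEAR = 2017
BASELINE_NEURONS = 500     # assumed simultaneous-recording count circa 2017
BRAIN_NEURONS = 86e9       # commonly cited estimate for a whole human brain

def year_to_reach(target_neurons: float) -> float:
    """Year when simultaneous recording reaches the target,
    if Stevenson's Law doubling continues unbroken."""
    doublings = math.log2(target_neurons / BASELINE_NEURONS)
    return BASELINE_YEAR + doublings * DOUBLING_YEARS

print(round(year_to_reach(1e6)))           # around the end of this century
print(round(year_to_reach(BRAIN_NEURONS))) # deep into the 2200s
```

Eleven doublings to a million neurons, twenty-seven to the whole brain – which is why the "end of this century" and "~2225" figures above fall where they do.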
Let’s look at this on a timeline:
Giving us telepathic brain-computer interfaces isn't going to solve the outbound-bandwidth problem any time soon, especially since computers and AIs are just going to continue evolving on without us. Before long the communications and computational power of non-biological humanity will have eclipsed that of the entirety of human brains on the planet – if we're not there already. Humanity will start to look like this:
…and then the AIs will continue to evolve at an accelerating pace relative to biological evolution, a moment later humanity will look like this…(look very carefully, you can still see the brains…).
What we have here is a hardware problem – our biological brains are not evolving fast enough to keep up with non-biological intelligence. By the time it becomes practical to wire a human brain into the net, that brain will be such an insignificant part of the overall intelligence infrastructure, it won’t even figure.
The problem we really need to be solving is replacing the physical brain itself – achieving whole-brain emulation, learning how to port or emulate human intelligence onto non-biological hardware so that we have a chance to keep up with the AIs. Science fiction as it may sound now, given the exponential divergence in capacity, this seems a far more pressing problem to solve than learning how to jack our biological brains in.
(Incidentally, whole-brain emulation will give us richly expressive telepathy as a by-product. There are many other implications too; Robin Hanson's book The Age of Em goes into these in much detail.)
And the key to solving whole-brain emulation? It’s the “HIML” discussed above – essentially a richly expressive machine-readable vocabulary of the full range of the human brain’s functionality and content. That is why I’d argue that Neuralink and Kernel are accidentally trying to solve the right problem (HIML), but at the wrong time (after BCIs are workable). Arguably working on a portable model of human intelligence – rather than assuming that it will stay locked inside a skull – would be a better use of the millions of dollars being spent on BCIs in 2017.
Here’s a snap from last year’s SingularityU NZ conference in my home town of Christchurch, New Zealand, which gets the point across quite well, I think – keynote speaker David Roberts, speaking on technological disruption, was saying at this point: “The next thing on this chart isn’t a bigger, fatter skull. Our future is so unlike our past…” Humanity itself is about to be disrupted.
I’ll leave off with one of my favourite quotes in modern science fiction – from writer David Brin in his book Existence – which speaks hopefully about the dwindling – but still crucially important – role that humans may play in a future dominated by non-biological AIs.
“Wanting. Yearning. Desire…Wanting is what we do best. And machines have no facility for it. But with us, by joining us, they’ll find more vivid longing than any striving could ever satisfy. Moreover, if that is the job they assign us – to be in charge of wanting – how could we object?”