
Memia Labs Monthly Digest – May 2017

This month:
//The humans behind the chatbots
//Total transition to autonomous electric vehicles
//(Stop) talking about false meat
//The end of forecasting
//...and space junk.

New Zealand’s Prime Minister Bill English saw the future at Christchurch’s EPIC building (which also houses Memia) as part of this month’s NZ Techweek programme. Nice work by AR/VR specialist Corvecto, even attracting the unforgiving attention of John Oliver. (Note to self: make sure there are no cameras around next time I put on a VR headset!).

Sign up for our regular monthly updates at http://memia.com/labs/

AR 25%, VR 75%

“Our base case software scenario is driven 75 percent by VR use cases vs 25 percent for AR use cases,” says a Goldman Sachs research report that appears to be driving Microsoft’s strategic shift away from enterprise AR towards consumer VR, at least for a few years yet: What Happened to the Amazing HoloLens Future We Were Promised?

AI 100%

Idealab’s CEO Bill Gross writes up his takeaways from this year’s TED conference – in particular, AI expert Noriko Arai’s talk about how she’s building an AI that can take (and pass) the University of Tokyo entrance exam, including reading questions and writing essay answers.

Having used Clara for over a year now, I can vouch that HI+AI services continue to improve. But how much is HI and how much AI? BloombergTech goes behind the scenes and investigates The Humans Hiding Behind The Chatbots.

More on the BMI debate kicked off by Elon Musk’s Neuralink announcement: Wait but actually why: Brain-Machine Interfaces and Unit Economics of Human Output

And my own contribution from this month: Why Neuralink and Kernel are trying to solve the right problem at the wrong time

Huge fan of Jeff Hawkins and his team’s work at Numenta decoding how the neocortex works and reverse engineering it into software – here’s a rough early preview video of their latest work introducing a new concept in Hierarchical Temporal Memory (HTM): The Neuroscience Behind HTM Sensory Inference

Meanwhile… for those who want to learn more Machine Learning, FreeCodeCamp’s David Venturi published Every single Machine Learning course on the internet, ranked by your reviews. Background: “A year and a half ago, I dropped out of one of the best computer science programs in Canada. I started creating my own data science master’s program using online resources. I realized that I could learn everything I needed through edX, Coursera, and Udacity instead. And I could learn it faster, more efficiently, and for a fraction of the cost.” A superb resource; traditional universities should feel very afraid.


Roads, Roads and Less Roads

Recently the government here in New Zealand announced a big pre-election spendup on “infrastructure” – a euphemism for “more roads”.

The NZ Ministry of Transport is admirably transparent on its website about the investment and costs associated with roading: NZ$4Bn per year on the “Land Transport System”. Four. Billion. Dollars.  (NZ GDP is NZ$260Bn).

This has set me thinking about how advances in transportation technology could be applied now, not only to significantly reduce the amount being spent but also to deliver better outcomes for everyone. Auckland’s traffic woes are an example where more roads are not going to solve the problem, even today. And flying cars ain’t going to cut it any time soon either.

StartupGrind’s Geoff Nesnow wrote a neat summary last year of 50 implications of driverless cars (and trucks).

Implication no. 21: “Roads will be much emptier and smaller since self-driving cars need much less space between them (major cause of traffic today), people will share vehicles more than today (carpooling), traffic flow will be better regulated and algorithmic timing (i.e. leave at 10 versus 9:30) will optimize infrastructure utilization”.
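To see why following distance matters so much, here’s a rough back-of-envelope sketch of single-lane throughput under a simple car-following model – the speeds, gaps and vehicle length are my own illustrative assumptions, not figures from Nesnow’s article:

```python
# Rough back-of-envelope sketch: lane throughput as a function of headway.
# Assumes a simple car-following model (vehicle length + time-gap headway);
# all numbers are illustrative assumptions.

def lane_capacity(speed_kmh: float, time_gap_s: float, car_length_m: float = 4.5) -> float:
    """Vehicles per hour a single lane can carry at a steady speed."""
    speed_ms = speed_kmh / 3.6
    spacing_m = car_length_m + speed_ms * time_gap_s  # nose-to-nose spacing
    return 3600 * speed_ms / spacing_m

human_gap = 2.0   # ~2 s following distance for human drivers
av_gap = 0.5      # hypothetical platooning gap for autonomous vehicles

print(f"Human-driven lane: {lane_capacity(100, human_gap):.0f} veh/h")
print(f"Autonomous lane:   {lane_capacity(100, av_gap):.0f} veh/h")
```

Even this crude model suggests a several-fold jump in effective capacity from existing tarmac – before carpooling or algorithmic timing are counted.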

A recent analysis from thinktank RethinkX predicts an extremely disruptive, total transition to EV / autonomous vehicles in 13 years.

Meanwhile, India unveiled an ambitious plan to sell only electric cars by 2030.

Are any government transport agencies around the world modeling a decline in road usage in the future?

How about borrowing the concept of Negawatts – the amount of power saved through improved energy efficiency – and applying it to road usage? “NegaKm” would be the car and truck journey kilometres (and journey time) saved through more efficient and better-timed road use.
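As a purely illustrative sketch of how a NegaKm ledger might be tallied – the interventions and numbers below are made up for the example, not real data:

```python
# Minimal sketch of a hypothetical "NegaKm" tally: journey-km avoided through
# more efficient road use, by analogy with Negawatts. Names and figures are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    trips_avoided_per_day: int   # trips removed or shifted off-peak
    avg_trip_km: float           # average length of those trips
    avg_trip_minutes: float      # average duration of those trips

interventions = [
    Intervention("carpool matching", 12_000, 18.0, 25.0),
    Intervention("off-peak departure nudges", 8_000, 15.0, 22.0),
]

nega_km = sum(i.trips_avoided_per_day * i.avg_trip_km for i in interventions)
nega_minutes = sum(i.trips_avoided_per_day * i.avg_trip_minutes for i in interventions)

print(f"NegaKm per day: {nega_km:,.0f} km")
print(f"Journey time saved per day: {nega_minutes / 60:,.0f} hours")
```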

One key tool could be reverse road pricing: rather than spend tens of millions of dollars widening a highway, how about holding that money and rewarding people to stay off that road during peak hours? Surely it’s simple enough to trial for a year – registration through a mobile app and you’re away… Surely…?
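Some entirely hypothetical arithmetic on what that trade-off could look like – the payout per avoided trip, the budget and the trial horizon are all assumptions:

```python
# Quick illustrative comparison (all figures are assumptions): instead of a
# NZ$50M highway widening, spend the same budget rewarding drivers who stay
# off that road during peak hours.

widening_cost_nzd = 50_000_000
reward_per_avoided_peak_trip_nzd = 5   # hypothetical payout per verified avoided trip
horizon_years = 10                     # spread the budget over the asset's horizon

trips_bought_off = widening_cost_nzd / reward_per_avoided_peak_trip_nzd
per_weekday = trips_bought_off / (horizon_years * 250)  # ~250 weekdays per year

print(f"Peak trips that could be rewarded away: {trips_bought_off:,.0f}")
print(f"Roughly {per_weekday:,.0f} avoided peak trips per weekday, for {horizon_years} years")
```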

Maybe we need 777-size cargo-carrying flying drones or giant cargo-carrying blimps to take freight off the roads instead.


Future of Food

More agri agitprop from provocateur-in-chief Rosie Bosworth: an interview on an NBR radio podcast on Why New Zealand is becoming the Detroit of Agriculture. (Nice turn of phrase – wonder who came up with that :-)). Synthetic biology (synbio) will disrupt the traditional (and, let’s face it, hugely inefficient, polluting and fundamentally unsustainable) pastoral agricultural model, and maybe even allow countries such as NZ to meet our Paris Agreement commitment of net-zero greenhouse gas emissions by 2050.

Similar themes at last week’s TechWeekNZ update from SingularityUNZ founder Kaila Colbin – here she is mapping the cost per kg of synthetic meat.

(Incidentally, I was in a meeting recently when the conversation turned to synbio – one of the bankers in the room – maybe starting to feel a bit exposed – said “let’s stop talking about false meat“. Unlikely.)

More future food:

Startup Nutrient Rescue launched their plant-based wholefood powder shots – 5-10 serves of fruit and veges for $2, taking less than a minute to prepare. I’m using them.

Functional food leader Soylent raised a $50M Series B round led by GV (Google Ventures). “Soylent is addressing one of the biggest issues we face today: access to complete, affordable nutrition”.


Turning Facebook data into money

“…is harder than it sounds, mostly because the vast bulk of your user data is worthless. Turns out your blotto-drunk party pics and flirty co-worker messages have no commercial value whatsoever.” – I’m an ex-Facebook exec: don’t believe what they tell you about ads

The Half-Life Of Forecasting?

The World Economic Forum published an article on The End Of Forecasting? Given the increasing mainstream acceptance of accelerationism and predictions like “The next double-century (2000-2200) promises no fewer than 150 breakthrough innovations on par with the steam engine, antibiotics and the airplane” – the article argues that

long-term forecasting is simply becoming obsolete and we need to adapt to a post-forecasting era.

Alternatively… the meaning of the phrase “long term” has a half-life attached to it: as technology-driven change accelerates, our view out to the future shortens. But we can still forecast effectively for the same order of magnitude of change as previously – it’s just that this change will take exponentially less time to happen.
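To make that half-life idea concrete, here’s a toy model (every parameter is an assumption, chosen only for illustration): if the overall pace of change doubles on a fixed cycle, the time needed to accumulate one fixed, steam-engine-sized “amount of change” shrinks with every successive starting date:

```python
# Toy model of the "half-life of 'long term'": if the pace of change doubles
# every T years, how long does a fixed amount of change take to accumulate,
# starting from successive dates? All parameters are illustrative assumptions.

import math

T = 15.0          # assumed doubling time of the overall pace of change, in years
DELTA = 50.0      # a fixed "amount of change" (say, one steam-engine-scale shift)
RATE_2000 = 1.0   # define the year-2000 pace of change as 1 unit per year

K = math.log(2) / T

def years_for_fixed_change(start_year: float) -> float:
    """Years needed to accumulate DELTA units of change, starting at start_year."""
    r0 = RATE_2000 * 2 ** ((start_year - 2000) / T)
    # Solve the integral of r0 * 2**(t/T) dt from 0 to x equal to DELTA for x.
    return math.log(1 + DELTA * K / r0) / K

for year in (2000, 2020, 2040, 2060):
    print(f"Starting in {year}: ~{years_for_fixed_change(year):.0f} years for the same magnitude of change")
```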


Dystopian Futures

Sometimes the future doesn’t seem so bright:

There’s a link in the WEF article above to a thought-provoking tweet quoting Alibaba founder Jack Ma – as we all live longer, the need may emerge to legislate for a maximum human lifespan.

And, in 5 years, most Americans won’t be able to afford clean, safe water?


Space Junk

And finally, here’s a hypnotic 12-minute video from the European Space Agency showing man-made objects journeying from the outer solar system back to Earth.

More again next month – Comments, feedback, suggestions? Email labs@memia.com


Why Neuralink and Kernel are trying to solve the right problem at the wrong time

Elon Musk’s recent announcement of his new startup Neuralink (together with an extensive 40,000-word backgrounder from WaitButWhy’s Tim Urban), and Bryan Johnson’s investment in Kernel point to a near-term future where humans are enabled to communicate “telepathically” – in a rich manner at least as expressive as spoken and written language – direct brain-to-brain, using either a Neuralink Brain-Computer Interface (BCI)…           

…or a Kernel BCI…

…or, a few years down the line, a mass-market consumer brand BCI:

This scenario poses an immediate problem – how do we get these different BCI products to talk to each other? If I’ve installed an Apple BCI and I want to communicate telepathically with your Android one, how do they interoperate? I certainly don’t want to get locked away in a walled (mind)garden! So, let’s assume we can implement a BCI translator…

Pretty soon, however, we’re going to have to support a combinatorial explosion of point-to-point translations – one translator for every pair of BCI products, or n(n−1)/2 of them for n products…

…this will quickly get pretty expensive to create and maintain. Looks like we’ll need some kind of common language for all of the BCIs to talk to each other – say (to be glib) a “Human Intelligence Markup Language” (HIML):

…and now we can send this easily across the internet too, right?
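For what it’s worth, the scaling argument is easy to make concrete. Here’s a minimal Python sketch (purely illustrative) counting the translators needed pairwise versus via a single shared interchange language:

```python
# Back-of-envelope count of translators needed for N mutually incompatible BCI
# products: one per pair of devices vs one per device via a shared interchange
# language (the hypothetical "HIML"). Pure illustration of the scaling argument.

def pairwise_translators(n: int) -> int:
    """One translator per unordered pair of BCI products: n(n-1)/2."""
    return n * (n - 1) // 2

def hub_translators(n: int) -> int:
    """One translator per BCI product, to and from the common language."""
    return n

for n in (2, 5, 10, 50):
    print(f"{n:>3} BCI products: {pairwise_translators(n):>5} pairwise translators vs {hub_translators(n):>3} via HIML")
```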

So… to get this straight: in order for me to get any useful network utility from my BCI device, there needs to be a common language which translates my brain’s activity into a machine-readable format, sends it across the internet, and then re-translates it into a form that your brain will understand? Basically an abstraction of the entire space of the human mind’s functionality and content? Hmmm… sounds like… English? Mandarin? Spanish? Japanese? Or that common meta-language that Google’s AI researchers recently observed…? Just less lossy, with more expression. (Instead of just saying “I love you”, somehow our BCIs translate the emotional state into a more directly experienceable message.)
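To make the thought experiment a little more tangible, here’s a purely hypothetical sketch of the kind of structure a HIML message might carry – none of these field names come from Neuralink, Kernel or anyone else; they’re illustrative assumptions about what “less lossy, with more expression” could mean in machine-readable form:

```python
# Purely hypothetical sketch of what a "HIML" message might carry: the surface
# proposition travels together with the concepts and emotional state that
# today get lost in translation. All fields are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class HIMLMessage:
    sender_id: str                               # which brain (or AI) originated the thought
    proposition: str                             # the lossy surface form, e.g. "I love you"
    concepts: list = field(default_factory=list) # referenced concepts
    affect: dict = field(default_factory=dict)   # emotional state, 0..1 intensities

msg = HIMLMessage(
    sender_id="brain://alice",
    proposition="I love you",
    concepts=["attachment", "partner"],
    affect={"warmth": 0.9, "longing": 0.6},
)
print(msg)
```

The point isn’t these particular fields – it’s that the message abstracts both content and feeling into something any compliant BCI (or AI) could parse.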

However engaging this thought experiment is, (un)fortunately this scenario is unlikely to play out any time soon – it’s more likely that Neuralink and Kernel are going after the right problem – but at the wrong time.

To explain, let’s go a bit further into our thought experiment. So… could a computer speak HIML?

…Pretty much yes, at least with a limited vocabulary initially. But computers will get smarter – probably Turing-test smart – to the point that most people, most of the time, won’t be able to tell the difference between communicating telepathically with a computer or with a human.

At this stage, computers (AIs) and humans would make up a new society of [telepathic] “human intelligences”, some biological, some (probably the vast majority) physically distributed across the datacentres of this world. Whether or not there’s a ghost inside, “humanity” will become a hybrid machine.

Let’s deconstruct the word “telepathically” for a moment. When we use that word – when Tim Urban talks about Elon’s “magic wizard hats” – what we’re actually implying is two things:

  1. It will be wireless, soundless, movement-less – just like thinking or listening.
  2. It will be faster (higher outbound bandwidth) than speaking, writing, typing or any other current mode of outbound communication.

Solving problem 1 – cool. Pretty much clear you’ll need a direct BCI (either invasive or non-invasive) for that. Can’t wait.

But solving problem 2 – I’m not so sure. Could we ever get enough bandwidth, with enough accuracy and resolution, out of a direct neural interface to our brains to compete with richly expressive speaking, writing or typing? More importantly, could we get it any time soon – and sooner than the other alternatives that might happen? I love Ramez Naam’s Nexus trilogy – but “Neural Dust” is pure sci-fi conjecture.

As Tim Urban points out, at current projected rates of progress Stevenson’s Law suggests that the number of neurons we can simultaneously record doubles roughly every 7.4 years – and if this continues, it will take until the end of this century to reach a million neurons, and until 2225 to record every neuron in the brain. And even then, our brain’s neurons are biologically constrained to process information at a theoretical maximum of 30 bits per second.
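Projecting that doubling curve forward makes the timescales concrete. The 2017 starting point below (a few hundred simultaneously recorded neurons) is my own assumption, which is why the projection lands a few years earlier than the 2225 figure quoted above:

```python
# Projection of "Stevenson's Law" as cited above: simultaneously recordable
# neurons doubling every 7.4 years. The 2017 baseline is an assumed order of
# magnitude, for illustration only.

import math

NEURONS_2017 = 500        # assumed neurons recordable simultaneously in 2017
DOUBLING_YEARS = 7.4      # doubling time cited above
BRAIN_NEURONS = 86e9      # ~86 billion neurons in a human brain

def year_to_reach(target_neurons: float) -> float:
    """Year at which the doubling curve hits target_neurons."""
    return 2017 + DOUBLING_YEARS * math.log2(target_neurons / NEURONS_2017)

print(f"One million neurons recorded: ~{year_to_reach(1e6):.0f}")
print(f"Every neuron in the brain:    ~{year_to_reach(BRAIN_NEURONS):.0f}")
```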

Let’s look at this on a timeline:

Giving us telepathic brain-computer interfaces isn’t going to solve the outbound-bandwidth problem any time soon, especially since computers and AIs are just going to continue evolving off without us. Before long the communications and computational power of non-biological humanity will have eclipsed that of the entirety of human brains on the planet – if we’re not there already. Humanity will start to look like this:

…and then the AIs will continue to evolve at an accelerating pace relative to biological evolution, a moment later humanity will look like this…(look very carefully, you can still see the brains…).

What we have here is a hardware problem – our biological brains are not evolving fast enough to keep up with non-biological intelligence. By the time it becomes practical to wire a human brain into the net, that brain will be such an insignificant part of the overall intelligence infrastructure, it won’t even figure.

The problem we really need to be solving is replacing the physical brain itself – achieving whole-brain emulation, learning how to port / emulate human intelligence onto non-biological hardware so that we have a chance to keep up with the AIs. Science-fiction as it may sound now, given the exponential divergence in capacity this seems to be a far more pressing problem to solve than learning how to jack our biological brains in.

(Incidentally, whole-brain emulation will give us richly expressive telepathy as a by-product. There are many other implications too – Robin Hanson’s book The Age of Em goes into these in much detail.)

And the key to solving whole-brain emulation? It’s the “HIML” discussed above – essentially a richly expressive machine-readable vocabulary of the full range of the human brain’s functionality and content. That is why I’d argue that Neuralink and Kernel are accidentally trying to solve the right problem (HIML), but at the wrong time (after BCIs are workable). Arguably working on a portable model of human intelligence – rather than assuming that it will stay locked inside a skull – would be a better use of the millions of dollars being spent on BCIs in 2017.

Here’s a snap from last year’s SingularityUNZ conference in my home town of Christchurch, New Zealand, which gets the point across quite well, I think. Keynote speaker David Roberts, speaking on technological disruption, was saying at this point: “The next thing on this chart isn’t a bigger, fatter skull. Our future is so unlike our past…” Humanity itself is about to be disrupted.

I’ll leave off with one of my favourite quotes in modern science fiction – from writer David Brin in his book Existence – which speaks hopefully about the dwindling – but still crucially important – role that humans may play in a future dominated by non-biological AIs.


“Wanting. Yearning. Desire…Wanting is what we do best. And machines have no facility for it. But with us, by joining us, they’ll find more vivid longing than any striving could ever satisfy. Moreover, if that is the job they assign us – to be in charge of wanting – how could we object?”