I came across this video by MIT CSAIL.

Here is the article they are talking about: https://www.science.org/doi/10.1126/scirobotics.adc8892

This team claims to have achieved driving tasks that previously required 10,000 neurons while using only 19, thanks to "liquid neural networks" inspired by worm neurology.

They say this innovation brings massive improvements in performance, especially in embedded systems, but also in interpretability, since the reduced number of neurons makes the system much more human-readable. In particular, the attention of the system would be much more easily tracked; this would open the door to safety certifications for high-stakes applications.

Having tried driving and flying tasks in different conditions and environments, they also claim that their system is vastly better at out-of-distribution zero-shot tasks.

So basically, they believe they have made very substantial steps in pretty much every dimension that matters, both for performance and for safety.

As far as I can tell these are very serious researchers, but doesn't that sound a bit too good to be true? I have no expertise in machine learning and I haven't seen any third-party opinions on this yet, so I'm having a hard time making up my mind.

I'd be curious to hear your takes!


Dave Orr


I think this is real, in the sense that they got the results they are reporting and this is a meaningful advance. Too early to say if this will scale to real world problems but it seems super promising, and I would hope and expect that Waymo and competitors are seriously investigating this, or will be soon. 

Having said that, it's totally unclear how you might apply this to LLMs, the AI du jour. One of the main innovations in liquid networks is that they are continuous rather than discrete, which is good for very high bandwidth exercises like vision. Our eyes are technically discrete in that retina cells fire discretely, but I think the best interpretation of them at scale is much more like a continuous system. Similar to hearing, the AI analog being speech recognition.
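To make that concrete: as I understand the liquid time-constant (LTC) idea, each neuron's state follows an ODE whose effective time constant is gated by the current input, so the dynamics live in continuous time rather than in fixed discrete steps. Here's a toy Euler-integrated sketch of that idea (my own simplification, not the authors' code; names like `tau`, `W_in` and `A` are just illustrative):

```python
import numpy as np

def ltc_step(x, u, W, W_in, b, A, tau, dt=0.01):
    """One forward-Euler step of a toy liquid time-constant (LTC) cell.

    x   : hidden state, shape (n,)
    u   : input at this instant, shape (m,)
    W, W_in, b : recurrent weights, input weights, bias
    A   : per-neuron target state the dynamics are pulled toward
    tau : base time constants, shape (n,)
    """
    # Input-dependent gate: this is what makes the effective
    # time constant "liquid" (it changes with the input).
    f = 1.0 / (1.0 + np.exp(-(W @ x + W_in @ u + b)))
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy usage: 19 neurons driven by a 3-dimensional input stream.
rng = np.random.default_rng(0)
n, m = 19, 3
x = np.zeros(n)
params = dict(
    W=rng.normal(0, 0.1, (n, n)),
    W_in=rng.normal(0, 0.1, (n, m)),
    b=np.zeros(n),
    A=rng.normal(0, 1.0, n),
    tau=np.ones(n),
)
for t in range(1000):
    u = np.array([np.sin(0.01 * t), 0.0, 1.0])
    x = ltc_step(x, u, **params)
print(x[:5])
```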

But language is not really like that. Words are mostly discrete -- you generally want to process things at the token level (~= words), or sometimes wordpieces or even letters, but it's not that sensible to think of text as being continuous. So it's not obvious how to apply liquid NNs to text understanding/generation.

Research opportunity!

But it'll be a while, if ever, before continuous networks work for language.

Thanks for your answer! Very interesting.

I didn't know about the continuous nature of LNNs; I would have thought that you needed different hardware (maybe an analog computer?) to handle continuous values.

Maybe it could work for generative networks for images or music, which seem less discrete than written language.

Dave Orr
I mean, computers aren't technically continuous and neither are neural networks, but if your time step is small enough they are continuous-ish. It's interesting that that's enough. I agree music would be a good application for this approach.
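As a made-up toy example of that point (nothing specific to liquid networks): forward-Euler updates of dx/dt = -x converge to the exact continuous solution as the step shrinks, which is the sense in which a small enough time step is "continuous-ish":

```python
import math

# Solve dx/dt = -x with x(0) = 1 out to t = 1 using forward Euler
# at several step sizes. The exact answer is exp(-1) ~ 0.3679; the
# discrete updates approach it as the step shrinks.
for dt in (0.5, 0.1, 0.01, 0.001):
    x = 1.0
    for _ in range(round(1.0 / dt)):
        x += dt * (-x)  # one discrete update of the continuous ODE
    print(f"dt={dt:<6} x(1)={x:.4f}")
print(f"exact  x(1)={math.exp(-1.0):.4f}")
```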
awg

Then again... the output of an LLM is a stream of tokens (yeah?). I wonder what applications LTCs could have as a post-processor for LLM output? No idea what I'm really talking about, though.

mishka
Not quite. The actual output is the map from tokens to probabilities, and only then one samples a token from that distribution. So, LLMs are more continuous in this sense than is apparent at first, but time is discrete in LLMs (a discrete step produces the next map from tokens to probabilities, and then samples from that). Of course, when one thinks about spoken language, time is continuous for audio, so there is still some temptation to use continuous models in connection with language :-) who knows... :-)
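In toy form (my own illustration with made-up numbers, just to show where the continuous and discrete parts sit):

```python
import numpy as np

# At each discrete step a language model produces logits over its
# vocabulary; softmax turns them into a probability map (the continuous
# part), and only then is a single token sampled (the discrete part).
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.3])  # made-up model output

probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax

rng = np.random.default_rng(0)
token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", token)
```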
awg
Ah aha! Thank you for that clarification!

This is pure capabilities, and yes, it's a big deal.

If it works out-of-distribution, that's a huge deal for alignment! Especially if alignment generalizes farther than capabilities. Then you can just throw something like imitative amplification at it and it is probably aligned (assuming that "does well out-of-distribution" implies that the mesa-optimizers are tamed).

red75prime
I have low confidence in this, but my guess is that it (OOD generalization by "liquid" networks) works well in differentiable continuous domains (like low-level motion planning) by exploiting the natural smoothness of the system. So I wouldn't get my hopes up about its universal applicability.
the gears to ascension
It's built out of an optimizer, so why would that tame inner optimizers? Perhaps it makes them explicit, because now the whole thing is a loss function, but the iterative inference can't be shut off and still get functional behavior.
Christopher King
That's just part of the definition of "works out of distribution". Scenarios where inner optimizers become AGI or something are out of distribution from training.
3 comments

I have to dispute the idea that "fewer neurons" = "more human-readable". If the fewer neurons are performing a more complex task, it won't necessarily be easier to interpret.

Definitely. The lower the neuron-to-'concepts' ratio is, the more superposition is required to represent everything. That said, given the continuous-function nature of LNNs, these seem to be the wrong abstraction for language. Image models? Maybe. Audio models? Definitely. Tokens and/or semantic data? That doesn't seem practical.
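A toy way to see the superposition point (my own sketch, numbers made up): with far more concepts than neurons, the concept directions can't all be orthogonal, so reading out one concept picks up interference from the rest, which is part of why fewer neurons doesn't automatically mean easier interpretation.

```python
import numpy as np

# 19 "neurons", 200 "concepts": each concept gets a random unit
# direction in the 19-dimensional neuron space.
rng = np.random.default_rng(0)
n_neurons, n_concepts = 19, 200
directions = rng.normal(size=(n_concepts, n_neurons))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Activate concept 0 only, then try to read every concept back out.
state = directions[0]
readout = directions @ state
print("target concept readout:", round(float(readout[0]), 3))
print("worst interference:", round(float(np.abs(readout[1:]).max()), 3))
```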

 

I just skimmed the video, but it seems like there's more salesmanship than there is explanation of what the network is doing, how its capabilities would compare to using e.g. a small RNN, and how far it actually generalizes.

Remember that self-driving cars first appeared in the 1980s - lane-keeping is actually a very simple task if you only need 99% reliability. I don't think their demos are super informative about the utility of this architecture for complicated tasks.

So I'd be interested if you looked into it more and think that my first impression is unfair.