This is fun, but I would call it an example of self organization / self organized complexity not intelligence.
Cell membranes assemble themselves, and so do micelles (little spherical baubles of lipid molecules), or, to take a non-living example, lipid bilayers in a test tube.
We would not call such a system intelligent.
One of Levin's main points is to describe agency/intelligence as a continuous spectrum rather than an on/off thing, so within the framework of thought this paper exists in, it's no longer meaningful to treat the 'is it intelligent?' question as having a boolean answer.
I completely agree with this myself (and did for a long time before I even read any of Levin's frankly amazing work), and I think of the answer as more like a float/real-numbered thing - maybe the amount of consciousness/intelligence/agency as a fraction of overall energy usage? Which probably leads to one constantly having to work out where the heck zero and one are, eh? heheh : )
I think it's fun and fascinating too, and I think that even something as simple as a reaction-diffusion simulation can contain some tiny elements of agency (just as this paper does with its self-sorting cells!). Who cares what the scale is, right? In my opinion it's the same phenomenon, at the tiniest scales, that led to life and then to humans.
Building something that isn't Turing-complete is surprisingly-hard once it's complex enough.
If basal intelligence is present in diverse computational structures, then weak intelligence is everywhere.
If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
Personally, I blame game theory. Too many agents too smart in one place, you get conflicts, and eventually someone breaks an atom apart in your direction.
Or do you need emotions to have conflict? Are there basal emotions?
I'm usually not worried about AI uprisings, but I do believe in the possibility of conflict.
>Building something that isn't Turing-complete is surprisingly-hard once it's complex enough
The most basic computational device that is studied is the (deterministic) finite automaton, which corresponds to regular languages (regex, although actual implementations are usually far more powerful). If you add a stack (basically to count parentheses) you have context-free (CF) languages, which correspond to the syntax of most programming languages. Add a second stack and you're already Turing-complete (TC).
Knowing that, you can add any extra power to your machine that is strictly less than a second unbounded stack, and you get a new language class! For example, a second n-bounded stack. If you do so you will easily get infinitely many language classes. The point is: are they interesting? In particular, the language classes we focus on have good properties that most arbitrary classes tend to lack.
The Chomsky hierarchy has context-sensitive languages in between CF and TC, but it is already not a very natural class, so I've never seen it discussed anywhere, even in complexity theory research -- which focuses a lot more on links to computability theory or on subtle distinctions between deterministic and non-deterministic classes (most famously P vs NP). For the latter, studying analogues of the complexity classes on restricted models of computation is an interesting approach, since Turing machines are difficult to work with.
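A minimal sketch (my own toy construction, not anything from the paper) of why a second stack buys you Turing completeness: keep the tape cells left of the head on one stack and the cell under the head plus everything to its right on another. Moving the head is just popping from one stack and pushing onto the other.

```python
# Toy illustration: a Turing-machine tape simulated with two stacks.
# 'left' holds cells to the left of the head (top = nearest the head);
# 'right' holds the cell under the head and everything to its right
# (top = the cell currently under the head).

BLANK = "_"

class TwoStackTape:
    def __init__(self, contents):
        self.left = []
        self.right = list(reversed(contents)) or [BLANK]

    def read(self):
        return self.right[-1] if self.right else BLANK

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):
        self.left.append(self.right.pop() if self.right else BLANK)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else BLANK)

tape = TwoStackTape(list("abc"))
tape.write("X")       # overwrite 'a'
tape.move_right()
tape.move_right()
tape.write("Y")       # overwrite 'c'
tape.move_left()      # head now back over 'b'
# Reconstruct tape contents left-to-right:
print("".join(tape.left) + "".join(reversed(tape.right)))  # -> XbY
```

Wrap this tape in a finite-state transition table and you have a full Turing machine, which is the sense in which "two stacks" already suffices.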
> If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
Most certainly outside of our light cone.
It took 4 billion years for this planet to produce intelligent life that can send out radio signals. If we were to wipe ourselves out, it would probably take another half a billion years for another intelligent species to appear on this planet (using the Cambrian explosion as a benchmark, FWIW).
We've been emitting radio signals for a century so far, and mayyyyybe we'll last another 1000 years before we blow ourselves up? This is something we can only conjecture about at this point.
But just for the sake of argument, let's say that a post-radio-emissions intelligent species lasts 10,000 years. This means that our light cone must line up with a 10,000-year window in a planet's 4-billion-year history (or its 500-million-year repeat cycle) TODAY in order for us to detect anything at all. The chances of that are vanishingly small. And they're certainly not going to visit us a mere 100 years after we began emitting detectable signals.
It's not just a problem of space; it's a problem of time (and timing).
Are you wondering why Conway’s Game of Life or the C++ type system isn’t trying to communicate with us from beyond the stars?
Beyond the stars, a static void. Ions, but no aliens.
Till one day SETI finds a 5k line template compilation error!
Rejoice! We are not alone! Aliens have to deal with C++ too!
> If weak intelligence is everywhere, Earth-like planets are everywhere, ... where are the aliens?
Someone has to be first (in our speed-of-causality bubble), maybe it's us?
Doesn't that seem even less likely? Not only do we exist but we're the first?
Not without more information.
We don't know how long it takes to evolve our level and kind of intelligence, nor if intelligence like ours implies successful expansion such that it could eventually be noticed from the kinds of distances we can sense with our tech, nor how fast it would actually expand.
If the first in any light cone dominates that light cone, expanding at a high fraction of c, then almost everyone starts off thinking they're the first.
We may be the first in our own light cone, and that light cone may be just about to start intersecting with that of a galaxy where every star has been completely Dyson'd by a Kardashev 3 civilisation.
If the civilisation is roughly two and a half million years older than us (Andromeda is about 2.5 million light years away), that galaxy could even be the Andromeda galaxy.
No, it actually seems to be the most likely explanation. The universe is still so young; it's just a cosmic blip of time since the current generation of stars began forming.
The Fermi paradox can be answered in so many ways, and is tied to questions like what is the purpose of life and the universe.
Beyond the existence of a single person (such as myself, or you) what do we exist to do?
Is it to learn the universe? (Curiosity) Is it to decrease entropy locally in order to increase it globally? (Spend energy) Is it to increase complexity? (Do interesting things, foster maximum diversity?)
For example, if the purpose is indeed curiosity, maybe all we will need is one Dyson sphere in order to understand the universe. We could have a dozen superintelligent life forms in our galaxy alone and probably wouldn't notice them. It would basically just look like a quiet black hole the size of a star.
In my opinion, life is just self-replicating tumbleweeds of matter that drift towards local spaces with high energy. The ideal "shape" of these tumbleweeds is gradually approximated via the algorithm of evolution, filtering out the tumbleweeds that fly too close to the sun and so on. Intelligence becomes an emergent property of these optimal shapes, but intelligence doesn't change the outcome, broadly speaking, they still drift towards local spaces with high energy.
Individual organisms live their lives pursuing energy, with every breath and every meal. Even superorganisms, such as nations, will (attempt to) pursue energy in the form of a thriving economy, which influences the energy allocation of the organisms that make them up.
Even absent these tumbleweeds, high-density matter (high energy) will literally bend space and attract other matter to itself through gravitational force. It's entirely different from what I've already discussed, yet intuitively similar?
How does this apply to the Fermi paradox? Maybe the idea that the algorithm of evolution will eventually lead to life self-propagating across the universe is flawed. Maybe the spirit of exploration is not universal. Maybe the simple fact that interstellar travel and communication is energy-inefficient is enough to explain the aggregate effect we are seeing?
It sounds like your take is the entropy one, but with a caveat that dark energy prevents indefinite growth.
I thought a Dyson sphere would emit enormous radiation anyway? How do you convert photons into electricity and then use that electricity with 100% efficiency? Is that even theoretically possible? There should be lots of heat emitted as infrared light.
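The infrared intuition checks out with a quick black-body estimate (standard Stefan-Boltzmann physics; the 1 AU radius and full-capture shell are just assumed for illustration): a sphere that absorbs the Sun's whole output must re-radiate the same power, and at that size it sits near 400 K, glowing in the mid-infrared.

```python
import math

# Equilibrium temperature of a Dyson shell re-radiating the Sun's luminosity.
# Assumptions: full capture, uniform shell at 1 AU, radiating outward as a black body.
L_SUN = 3.828e26   # W, solar luminosity
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
R = 1.496e11       # m, shell radius (1 AU, assumed)

# Energy balance: L = sigma * A * T^4 over the outward area A = 4*pi*R^2
area = 4 * math.pi * R**2
T = (L_SUN / (SIGMA * area)) ** 0.25

# Wien's displacement law gives the peak emission wavelength (in micrometres):
peak_wavelength_um = 2.898e-3 / T * 1e6

print(f"shell temperature ~ {T:.0f} K, peak emission ~ {peak_wavelength_um:.1f} um")
```

That works out to roughly 390 K with a peak near 7 um, which is why waste-heat infrared surveys are the standard proposed way to hunt for such structures.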
It's hard to fathom what engineering a civilisation like that might be capable of; maybe it would emit radio noise that is extremely hard to detect.
Where is it written that intelligent beings must create the means for interstellar communication, or any technology at all?
Imagine a planet with highly intelligent whales who have no way to manipulate their environment (hands) and no need to.
Experience on Earth suggests that they would eventually evolve hands.
That is incorrect: dolphins are unlikely to evolve hands, and our primate ancestors evolved hands before they became intelligent (probably to grab branches). It was very lucky that a good brain evolved in a body that already had hands.
> very lucky that a good brain evolved in a body that already had hands.
The other way around: hands add evolutionary pressure towards becoming more intelligent. (The ones that understand how to use their hands and tools better...)
That's because there's land here, but what about a planet with only water, or just not enough land.
But maybe water-surface-only planets (no land surface) are unlikely.
Why would water prevent the evolution of hands? Lots of sea creatures have claws.
The sea is also, in principle, not all that different from an atmosphere with a higher density; we live "under-air".
Looking at this planet, it's less likely to happen.
Maybe high density (water) makes tools less useful, and thus hands less useful, since you cannot move a tool particularly fast under water compared to on land.
I suppose you've tried throwing a stone underwater -- compare with throwing on land.
From this it seems to follow that creatures with human-like intelligence are less likely to appear if the density of the liquid or gas surrounding them is too high. (Dolphins are bright, but not that bright.)
There would still be enough earth like planets with land.
Every organism manipulates its environment in some way. The ones that can manipulate it in a way that gives them access to more resources will outcompete the ones that can't.
Evolution doesn't really work like that. It's just a low bar that everything has to cross from time to time. Being a specialised, very advanced hunter is in no way better than being a dumb jellyfish that spawns billions of offspring.
Still hard to smelt metal if you live in the ocean.
That's why genetic mutations happened and they grew hands and started smelting on land.
This happened.
>where are the aliens?
It's probably a Plato's Cave situation. You're chained there, staring at flickering shadows on the wall asking, "Where are the aliens?".
Which is to say, the dimension that must be traversed in order to meet the aliens is an invisible one.
Emotions are just a form of intelligence that has calcified over evolutionary time. Each of our emotions can be linked to survival and/or reproduction.
I just loved this bit in the paper; it could so easily be taken off on so many tangents:
"Delayed Gratification is used to evaluate the ability of each algorithm to undertake actions that temporarily increase Monotonicity Error in order to achieve gains later on. Delayed Gratification is defined as the improvement in Sortedness made by a temporarily error-increasing action."
Is it slightly analogous in some ways to the avoidance of getting stuck in local maxima perhaps?
Or maybe the fact that the path of minimal total effort != the sequence of locally simplest steps.
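A toy sketch of what a metric like that could look like (my own illustrative definitions, not the paper's actual Sortedness or Monotonicity Error formulas): score an array by the fraction of adjacent pairs in order, and watch a swap sequence that first makes things *worse* before reaching a fully sorted state.

```python
def sortedness(a):
    """Fraction of adjacent pairs already in non-decreasing order (toy metric)."""
    if len(a) < 2:
        return 1.0
    return sum(x <= y for x, y in zip(a, a[1:])) / (len(a) - 1)

def apply_swaps(a, swaps):
    """Apply (i, j) index swaps in order, logging sortedness after each step."""
    a = list(a)
    history = [sortedness(a)]
    for i, j in swaps:
        a[i], a[j] = a[j], a[i]
        history.append(sortedness(a))
    return a, history

# A swap sequence whose first move temporarily *lowers* sortedness
# (0.5 -> 0.0) before the array ends fully sorted (1.0):
final, hist = apply_swaps([2, 3, 1], [(0, 1), (0, 2)])
print(hist)    # -> [0.5, 0.0, 1.0]
print(final)   # -> [1, 2, 3]
```

In this toy framing, "delayed gratification" would be the eventual sortedness gain attributable to that first error-increasing swap, which is exactly the local-maxima-escaping flavour the parent comments are pointing at.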
The last coauthor listed on this preprint is Michael Levin, who has a lot of other cool work.
In particular, this talk of his from NeurIPS 2018 includes fascinating biology research results, as well as musings on the future of biologically-inspired artificial intelligence.
https://youtu.be/RjD1aLm4Thg
HN discussion about the talk: https://news.ycombinator.com/item?id=18736698
Artem Kirsanov has many great videos on this subject most of which I fail to absorb on first pass.
https://www.youtube.com/@ArtemKirsanov/videos
He's on Lex Fridman's podcast as well - https://www.youtube.com/watch?v=p3lsYlod5OU
Great conversation.
"The Collective Intelligence of Morphogenesis: a model system for basal cognition" by Michael Levin
https://www.youtube.com/watch?v=JAQFO4g7UY8
And from Machine Learning Street Talk
Michael Levin - Why Intelligence Isn't Limited To Brains.
https://www.youtube.com/watch?v=6w5xr8BYV8M
I recommend Michael Levin’s YouTube channel. Lots and lots of fascinating discussions.
A beach sorts itself by the size of its sand grains just by applying physics, so any array copied by parallel processes will sort itself given enough time, with computation time spent per element playing the role of rock size.
I think if you 'just apply physics' (let's say 'just apply computation', shall we?) then an array of numbers can only hope for this to happen in a kind of bogosort-style way: shuffle them all, and if now sorted, return! Else, loop and shuffle again, and so on. Such a hilarious sort algo... but wait till you hear about bogobogosort! lolll!
But different sizes of sand do move against each other differently, and I think maybe that aspect is slightly reminiscent of the every-cell-for-themselves aspect of the cells described in the paper, and especially how the different rules allow different swapping operations when the swap-target is smaller or larger than the current cell. So I think it's a very relevant observation!
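For anyone who hasn't met it, the bogosort joked about above really is just "shuffle until sorted" (the fixed seed here is only to make the sketch reproducible):

```python
import random

def is_sorted(a):
    """True if the list is in non-decreasing order."""
    return all(x <= y for x, y in zip(a, a[1:]))

def bogosort(a, seed=0):
    """Shuffle until sorted -- expected O((n+1)!) shuffles, so keep n tiny."""
    rng = random.Random(seed)
    a = list(a)
    while not is_sorted(a):
        rng.shuffle(a)
    return a

print(bogosort([3, 1, 2]))  # -> [1, 2, 3]
```

Terminates only probabilistically, which is rather the point of the joke; bogobogosort then recursively bogosorts its own prefixes, making it astronomically worse.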