This is a book review of Braitenberg’s Vehicles, not a summary. If you are wondering whether you should read it: both Ksaj and dm say yes, and it would be the text for a class either of them would teach. So yes. Let us look for a moment at the vehicles themselves.
Braitenberg walks us through a series of simple toy vehicles with sensors, mostly construed as directional noses or eyes, wired to motors driving bilateral wheels.
In this way, trivial adjustments and additions to the preceding vehicle turn out to add shocking emergent properties, which Braitenberg lines up convincingly with human mentalities.
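As a minimal sketch of the mechanism (mine, not a circuit from the book): a Vehicle-2-style simulation in which two light sensors drive two motors, and the entire "program" is which sensor is wired to which wheel. With crossed excitatory wiring the vehicle homes in on the light; uncrossed, it flees.

```python
import math

def sensor_reading(sensor_pos, light_pos):
    """Excitation falls off with distance to the light source."""
    dx = light_pos[0] - sensor_pos[0]
    dy = light_pos[1] - sensor_pos[1]
    return 1.0 / (1.0 + math.hypot(dx, dy))

def step(x, y, heading, light, crossed, dt=0.1, width=0.2):
    """Advance a two-wheeled differential-drive vehicle one tick."""
    # Left/right sensors sit on either side of the heading.
    lx = x + width * math.cos(heading + math.pi / 2)
    ly = y + width * math.sin(heading + math.pi / 2)
    rx = x + width * math.cos(heading - math.pi / 2)
    ry = y + width * math.sin(heading - math.pi / 2)
    left_s = sensor_reading((lx, ly), light)
    right_s = sensor_reading((rx, ry), light)
    # The whole "program": which sensor drives which motor.
    if crossed:                  # turns toward the light ("aggression")
        left_m, right_m = right_s, left_s
    else:                        # turns away from it ("fear")
        left_m, right_m = left_s, right_s
    speed = (left_m + right_m) / 2
    turn = (right_m - left_m) / width   # faster right wheel turns us left
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading + turn * dt)

# Drive the crossed vehicle for a while; it spirals in on the light.
light = (5.0, 0.0)
x, y, h = 0.0, 2.0, 0.0
for _ in range(2000):
    x, y, h = step(x, y, h, light, crossed=True)
```

Flipping `crossed` to `False` is the one-wire edit that changes the apparent "mentality" from attraction to avoidance, which is the trapdoor flavor of the whole book in miniature.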
The story is told with the kinds of logic-programmable cars common in some engineering labs, though given strictly in terms of simple (if occasionally detail-papered-over) electronics.
There are two special nonphysical wires that later turn out to be adopted from other scientists’ neurobiology theories: a wire that forms memories and a wire that conveys causality, the passing of time. They are justified in terms of circuits that could implement those things, though it is acknowledged that an ideal wire in the spirit of those circuits is an abstraction.
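To make the memory wire concrete, a toy caricature (my own Hebbian-flavored sketch, not the book’s circuit): a wire whose conductance grows whenever its two ends are active together, so past co-activation makes future conduction easier.

```python
class MemoryWire:
    """Toy 'memory wire': conducts better the more its ends co-fire."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.conductance = 0.0   # starts as a near-open circuit

    def transmit(self, a_active, b_active):
        """Pass a's signal scaled by learned conductance, then learn."""
        out = self.conductance if a_active else 0.0
        if a_active and b_active:   # Hebbian-style strengthening
            self.conductance += self.learning_rate * (1.0 - self.conductance)
        return out
```

A physical component with exactly this behavior is the part the book waves away; the circuit-flavored description above is only the spirit of it.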
At some point around memory we arrive at a vehicle that contains a Turing machine, appealed to as a Turing machine; after that, since we have a universal computer, I am not sure how much else needs to be said.
The takeaway of the first half (the vehicle-designing, emergence-describing half) is the discovery that trivial computer program automata can have fantastic emergent properties, as I guess many programmers have experienced when a simple tool (I am tempted to say the unix ones) turns out to be wonderfully elegant and powerful, if subtle. But his finding is that this is a trapdoor function. A simple program or group of programs plausibly has fantastic emergent consequences; but given the consequences, it is impossible to divine what simple program it was that produced them.
I think that point ties reverse engineering arbitrary programs from their output to Chaitin’s number (an uncomputable real number: the halting probability of a random program in a given language), or something close to it.
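To illustrate that trapdoor shape with something tiny (my example, not the book’s): composing a few operations forward is a single call, but recovering a program from its output means enumerating the whole program space, and several different programs may collide on the same output.

```python
from itertools import product

# A three-instruction toy language over integers.
OPS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sq":  lambda x: x * x,
}

def run(program, x=1):
    """Forward direction: apply each named op in sequence. Instant."""
    for op in program:
        x = OPS[op](x)
    return x

def invert(target, length):
    """Backward direction: brute-force every program of this length."""
    return [p for p in product(OPS, repeat=length) if run(p) == target]
```

Forward is one pass over the program; backward is 3**length trials, and even then the answer is a set of candidates, not the program you started from.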
Probably because in the middle of this book I reviewed Erik Sandewall’s Biological Software (find it in here), I am not very impressed by the explanation that all these good computer programs can be expected to be produced by evolution (Sandewall focuses instead on the near-field meanings of reproduction and raising offspring viz computer programs). I don’t think genetic evolution and computer programming are similar enough for this appeal to have much explanatory power for us doing computer programming. In fact, the sequence of vehicles wasn’t discovered by a genetic optimizer; it was a collection and deliberate unification and simplification of computer programs that had been studied as being close to what organisms are seen to do (reviewed in the second half). But you didn’t come here to get my opinion.
It’s not the end of the world, but there was a definite disjunction between convincing me that Vehicles-style creation is a trapdoor process, where I can make an elegant system with emergent consequences but cannot be handed some emergent consequences and symmetrically deliver an elegant system that turns out to imply them (that is what emergence means here), and then presenting biologically observed emergent realities and attaching them back to the earlier vehicles. I thought we had just gone over how that direction was a trapdoor function!
However, Braitenberg’s point was never that all of biology literally works like the 14-vehicle thought experiment he presented in the first half of the book, only that those 14 vehicles come suspiciously close to spanning all of neuroscience (not uniquely; it just implies that even the most complex biology seems able to emerge from elegant automata, in terms of other people’s research).
I guess Braitenberg is actually reintroducing boundaries to the problem by pointing out how researchers, himself included, have worked toward truer programmatic models of vision neurobiology, showing that the way fly eye neurons are wired is in fact basically his vehicle supposition made literal. In this and analogous ways it is possible for science to move toward reverse-engineering a truer set of prior mechanisms of neurobiology and vision psychology.
Continuing in that vein, the neurobiology of humans and various other classes of animal is covered, all the way down to axon growth and spike trains, with arguments and demonstrations that neurons, and the firing sequences available to them in different scenarios (including epilepsy’s pathology of synchronisation), are compatible with having vehicles-like underlying programs.
I think the takeaway is that the staggering complexity of human cortices is conceivably an emergent implication of a flock of 14-vehicles-esque computer programs. I will use my emacs slime eepitching from before.
Regarding optimization, I much prefer the act of reproduction explored in Sandewall’s Biological Software, and implemented in the design of his magnum opus, the Leonardo System (which I ported out of 2014 to Gnu C Lisp), to adopting genetic algorithms as an approach. I don’t think Braitenberg ever calls for people to actually use genetic algorithms per se.
In terms of ballparking how many running programs to expect to make up a problem (or solution), Braitenberg cites that humans have 10x-100x more processing inside their cortex than inputs (neurons) passing information into the cortex, whereas frogs are closer to 1:1. Squinting a lot, a frog-like problem might be a job for one agent (i.e. one program), whereas a human-like problem might warrant 10-100 cooperating agents.
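The squinting above is just multiplication, but spelled out as a sketch for my own later use (the 1:1 and 10x-100x ratios are the review’s reading of Braitenberg; the one-agent-per-ratio-unit framing is mine):

```python
def agents_for(internal_to_input_ratio):
    """Squint: one cooperating agent per frog-like 1:1 unit of ratio."""
    return max(1, round(internal_to_input_ratio))

frog_like  = agents_for(1)     # roughly one program
human_low  = agents_for(10)    # lower end of the cited 10x-100x range
human_high = agents_for(100)   # upper end
```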
On the other hand, and more technically, I should expect that understanding my running program will require sophisticated analysis (hopefully something sophisticated is emerging), analogous to Braitenberg’s vehicles and cortical spiking. This seems hard: instead of shooting an arrow into a target, creating a multiagent system which emergently shoots arrows into the target (hopefully). This implies two software needs: firstly, software targets that I cannot personally hit, and secondly, regularly occurring targets, not just one target once.
Well, I read it! Thoughts and your own questions for Ksaj on the Mastodon, please.
Please share this as far and wide and in whatever manner you see fit. Hope it provokes some thought in someone!
screwlisp proposes kittens