This follows our talk on Braitenbergian Vehicles and synthetic psychology - alife (bottom up) rather than ai (top down) - on the lispy gopher climate weekly live show, with Karsten (masto), shizamura (masto), and myself, screwlisp (masto).
I suggest reviewing it yourself, and you are very welcome to share it how and where you please - conducting knowledge from the past to the now (and after) is deeply important.
One of my outcomes from this discussion was to get a clue how to go from the first half of Vehicles (a description of 14 logical vehicles, up from "vehicle that presents fear", "vehicle that presents aggression", "vehicle that loves", "vehicle that explores" … "vehicle with memory" …) into the second half, a selected survey of neurobiology and psychology including housefly vision and the human cortex, in my own lisp programming.
Building on the other two's very hard work explaining artificial life and the semantic web to me, in the episode we gradually developed something like the following:
Shizamura gave an example and explanation, from her PhD trajectory, of this being done with alife agents at an institution she worked at (listen to the episode).
Karsten identified this scenario as the boundary between artificial life and artificial intelligence. This boundary seems to be the key place - can we connect the first half of Braitenberg's Vehicles to the second half?
This simple scenario directly implies a further one: instead of the location-sharing vehicle acting altruistically, a system of sharing becomes the only way any vehicle can survive.
Take the crow behaviour of crafting sticks that can be used to get insects to eat out of the insects' burrows.
Say we have two particularly hungry crows, where each simply does not have enough energy both to find and to figure out how to eat hypothetically available insects on its own: acting alone, their luck will run out and they will quickly starve.

We have two constant problems:

one. The crows start at a site with edible insects, but they don't know how to extract the insects to eat.
two. The crows must find a new insect colony site when the current one is exhausted.
(goto one)
For some parameterization of this scenario - food energy of each insect, time taken to figure out eating the insects, energy taken to find insect colony sites, ease of copying the insect-eating technique, ease of following others to a new site - eventually only crow populations presenting a cooperative system like this will exist there.
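For concreteness, one such parameterization could be written as lisp special variables like these (every name and value below is an illustrative placeholder of mine, not from the episode):

(defparameter *insect-food-energy* 5
  "Energy a crow gains per insect eaten.")
(defparameter *figure-out-days* 1
  "Days to figure out extracting insects at a typical site.")
(defparameter *explore-energy-cost* 3
  "Energy spent finding a new insect colony site.")
(defparameter *copy-technique-ease* 0.9
  "Chance of copying another crow's eating technique.")
(defparameter *follow-ease* 0.9
  "Chance of successfully following another crow to a new site.")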
Assuming it takes one full day to figure out how to extract insects at a typical site, and one full day to find a new insect site, crows must hold several beliefs: that they will get food by figuring out how to get at the insects at the current site, and that they will get food by exploring. More obviously, they must believe that if they are at a site with insects, and know how to eat insects at that site, they can satiate themselves by extracting and eating the available insects.
Here the distinction between desires and intents is important. The desire to eat leads a crow to act upon a food-getting belief. Intents, however, are a special kind of desire: a desire that the crow has already begun to act upon. If each crow constantly swaps at random between exploring and figuring out the insects, each crow starves, never having finished its analysis of extracting insects and never having properly explored for the next insect colony site.
So successful crows will prioritize intents they are already working on over other desires. This is a belief-desire-intention (BDI) architecture.
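A minimal sketch of that priority rule in lisp (the struct and function names are my own illustration, not from the episode):

;; A crow holds beliefs, desires, and at most one current intent.
(defstruct crow
  (beliefs nil)  ; propositions the crow holds true
  (desires nil)  ; candidate goals, e.g. (:figure-out-site :explore)
  (intent nil))  ; the desire already being acted upon, or NIL

(defun next-goal (crow)
  "Prefer the intent already underway; otherwise commit to a desire."
  (or (crow-intent crow)
      (setf (crow-intent crow) (first (crow-desires crow)))))

The point is only that an existing intent wins over the other desires until it finishes; a crow that completes (or abandons) its intent would set the slot back to nil and commit afresh.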
These ideas line up with my approach. The lisp images are crows (are neurons, possibly).
Exploration is a lisp image which already has functionality, and input data to work upon, operating normally.
Figuring out the current site is a lisp image which has hypothetical input data to work upon, but no functionality for that data yet. The primary mechanism of learning is collaborating with a human-in-the-loop (i.e. interactive programming): the image provides and operates a repl for the human, incrementally compiling the human's actions until the image does have functionality for the input data. After that the crow goes into explorer mode (consistent with its beliefs, desires and intents, having finished the figuring-it-out intent), as sketched below.
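A minimal sketch of that interaction, assuming only standard Common Lisp (the function name eat-insects-at and the overall shape are my own illustration):

(defun figure-out-site (site-data)
  "Figuring-it-out mode: until this image has functionality for
SITE-DATA, hand the human a repl via the debugger, then retry."
  (loop until (fboundp 'eat-insects-at)
        ;; BREAK opens the debugger repl; the human defines
        ;; EAT-INSECTS-AT there (compiled into the running image)
        ;; and continues, and the loop re-checks.
        do (break "No functionality for ~s yet; define EAT-INSECTS-AT, then continue."
                  site-data))
  (funcall 'eat-insects-at site-data))

Once the loop exits, the image has gained the functionality, the figuring-it-out intent is satisfied, and the crow can switch to exploring.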
Talk about this on the mastodon please.
You are very encouraged to participate with and share this article and any of my articles in the manner and place that you feel suits.
I will join Karsten in hyping alife this era. (And read shizamura's thesis).
screwlisp proposes kittens