| link | Last. 20260201T065027330Z |
While I modernized Erik Sandewall’s academically well-published Leonardo System AI platform from 2014 and have in some small way contributed to The Lisp Community, until now I had not contributed a global public and private killer feature. This is my straightforward stab at remedying that deficiency.
If any country, enterprise or vendor in the world is to be believed, all of them are in a crisis about the uninterpretability/unexplainability and performance hunger of black-box deep learning large models. Fortunately, the crisis is actually a crisis of using 1950s integer encodings of string data, not of deep learning. You might have heard ‘transformer model’ instead of deep learning; a transformer model is one thing implemented by, e.g., this.
Here is my implementation of deep learning, defined as set operations on matrices of sets of symbols. The training data=memories are matrices of sets of symbols which remain intact and are directly interpretable, explainable and predictable, as well as modifiable, addable and deletable with understandable (if deep) consequences. In ANSI Common Lisp we use symbols, which have names (which are words ~ strings), properties, namespaces, values and functions. This is obviously better than looking directly at the 1s and 0s of the float subsets of the integers when working on sets of things that have names (or, as we call them, all things). Lisp was ranked the 25th hottest programming language of 2025, and Common Lisp is the only currently-top-25 popular language definition in its 30s, the rest all being toddlers.
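To ground the definition, here is a minimal sketch of the representation. Nothing here is specific to my appendix code; these are the standard Common Lisp set operations, which are all the “arithmetic” the symbolic form needs.

```lisp
;; A matrix of sets of symbols is just nested lists of symbols.
(defparameter *matrix* '(((foo) (foo) () (bar))
                         (() (foo) (foo bar) (bar))))

;; The standard set operations on cells:
(member 'foo '(foo bar))   ; => (FOO BAR), i.e. true: foo is present
(union '(foo) '(bar))      ; set-union foo into a cell
(remove 'foo '(foo bar))   ; => (BAR): delete a symbol from a cell
(symbol-name 'foo)         ; => "FOO": symbols carry their names as strings
```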
Here follows an example whose Lisp code is in the appendix at the end. I assume I do not need to explain that this is what deep learning is. I suppose I am available to consult, or better yet, I heard (on my show) that Kent Pitman is available to lead your project and has been active around modern deep learning topics and concerns.
Here is my demonstration input data. If this were used to implement a modern chatbot, this would be the context. Matrices and sets are completely general structures.
(((foo) (foo) () (bar))
(() (foo) (foo bar) (bar)))
we can see the data has multiple symbols in sets in a matrix. I used lists as my low-level data structure in this particular case. My memories=training data are
((((foo) (foo))
((foo) (foo)))
(((bar) (bar))
((bar) (bar)))
((() ())
(() ())))
obviously being a list of three two-by-two memories. Let us perform our symbolic deep learning inference
CL-USER> (test-sum-memories 'foo
:start-row 0
:start-col 0
:target-row 0
:target-col 0)
T
FOO
which has the literal interpretation,
(((foo) (foo) () (bar))
(() (foo) (foo bar) (bar)))
((x1 x2)
(x3 x4))
given all the memories=training data, the deep learning inference at x1 is to set-union the symbol foo into that cell (foo is already in there). Further, by running my deep learning inference until changes cease we can always directly recover the specific literal memory=training datum the inference was moving towards, in this case:
(((foo) (foo))
 ((foo) (foo)))
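That recovery-by-iteration can be sketched as a fixed-point loop over the appendix functions. This is a minimal sketch, assuming SUM-MEMORIES and MAKE-RECTIFIED-POLYNOMIAL from the appendix are loaded; RECOVER-MEMORY and its VOCABULARY argument are my hypothetical names, not part of the appendix.

```lisp
(defun recover-memory (input memories vocabulary
                       &key (start-row 0) (start-col 0)
                         (rows 2) (cols 2))
  "Extract a ROWSxCOLS window of INPUT at START-ROW/START-COL, then
repeatedly re-infer every cell for every symbol in VOCABULARY until
nothing changes, returning the recovered memory."
  (let ((window (loop :for r :from start-row :below (+ start-row rows)
                      :collect (loop :for c :from start-col
                                       :below (+ start-col cols)
                                     :collect (nth c (nth r input)))))
        (poly (make-rectified-polynomial 2)))
    (loop
      (let ((changed nil))
        (dotimes (r rows)
          (dotimes (c cols)
            (dolist (item vocabulary)
              (let* ((cell (nth c (nth r window)))
                     ;; primary value of SUM-MEMORIES: keep ITEM or not
                     (new (if (sum-memories item window memories
                                            :start-row 0 :start-col 0
                                            :target-row r :target-col c
                                            :rectified-polynomial poly)
                              (union (list item) cell)
                              (remove item cell))))
                (unless (equal new cell)
                  (setf changed t
                        (nth c (nth r window)) new))))))
        (unless changed (return window))))))

;; Recover the memory nearest the top-left 2x2 of the demonstration input:
(recover-memory '(((foo) (foo) () (bar))
                  (() (foo) (foo bar) (bar)))
                '((((foo) (foo)) ((foo) (foo)))
                  (((bar) (bar)) ((bar) (bar)))
                  ((() ()) (() ())))
                '(foo bar))
;; => (((FOO) (FOO)) ((FOO) (FOO)))
```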
Quickly consider four more examples.
CL-USER> (test-sum-memories 'foo
:start-row 0
:start-col 0
:target-row 1
:target-col 0)
T
FOO
CL-USER> (test-sum-memories 'foo
:start-row 0
:start-col 1
:target-row 0
:target-col 1)
T
FOO
CL-USER> (test-sum-memories 'foo
:start-row 0
:start-col 2
:target-row 1
:target-col 0)
NIL
CL-USER> (test-sum-memories 'bar
:start-row 0
:start-col 2
:target-row 0
:target-col 0)
T
BAR
these deep learning inferences transform the input data to
(((foo) (foo) (bar foo) (bar))
((foo) (foo) (bar) (bar)))
in the next inference step, clearly reflecting movement towards specific memories=training data at different localities in the input context (if it was not obvious, the ‘best’ original training data is recoverable by propagating this algorithm ‘to the end’).
We are free to add, modify and delete memories=training data at any time. This is what training means for this pure symbolic Lisp deep learning implementation.
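Concretely, training is plain list surgery on the memories. A minimal sketch, using only standard list operations (the COPY-TREE calls and the particular new memory are mine, added so the quoted literals are safely modifiable):

```lisp
;; The three two-by-two memories from the example, copied so they
;; may be destructively modified.
(defparameter *memories*
  (mapcar #'copy-tree
          '((((foo) (foo)) ((foo) (foo)))
            (((bar) (bar)) ((bar) (bar)))
            ((() ()) (() ())))))

;; add a new memory
(push (copy-tree '(((foo bar) (foo)) ((foo) (bar)))) *memories*)

;; delete the all-empty memory
(setf *memories*
      (remove '((() ()) (() ())) *memories* :test #'equal))

;; modify one cell of one memory in place
(setf (nth 1 (nth 0 (first *memories*))) '(bar))
```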
Two more ubiquitously stated needs of today’s deep learning world are encryption of the memories=training data and matrix-multiplication-based crunching. Fortunately there is an academically established companion form of my iterative symbolic deep learning which is cryptographic and performs inference via matrix multiplication; however, these wants come at the expected cost of the data taking up more space and being difficult to modify in place. You would work on the data in the modern form and put it into the cryptographic/matrix-multiplication form in special cases.
Earlier we saw the input context had two symbols. This is loosely similar to two-bit quantization or a two-bit word length, as is well known in large-model quantization.
Make a modern Hopfield network, but instead of replacing +1 and -1 with +1.0 and -1.0, replace them with +a and -a. Use the memory dimensions for locality instead of the Hopfield-networks-is-all-you-need stuff.
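The bridge between the symbolic and Hopfield views is small. This sketch (SPIN is my hypothetical name) is exactly the `(if (member item me) +1 -1)` encoding the appendix’s SUM-ONE-ROW uses to read its inputs:

```lisp
;; The Hopfield-style spin of a set-valued cell with respect to one
;; symbol: +1 if the symbol is present in the set, -1 if absent.
(defun spin (item cell)
  (if (member item cell) +1 -1))

;; The first row of the demonstration input, seen through foo:
(mapcar (lambda (cell) (spin 'foo cell))
        '((foo) (foo) () (bar)))
;; => (1 1 -1 -1)
```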
Is it symbolic? Its metric counts how many times MEMBER on the sets returns non-NIL. That is pretty symbolic.
Please point any references here, and publish your expansions and corrections. I will add your reference at the top.
Contact me on the show/blog Mastodon to start a conversation (countries of the world who have claimed to be in desperate need of this explainable deep learning inferencing).
This name is a parody of Goertzel’s A Better Fourier Transform Algorithm. Goertzel’s algorithm is an optimal Fourier transform in an important way, different from the way fast Fourier transforms are optimal. My analogy is clear.
;; For one row, accumulate the Hopfield-style overlap between the
;; input row ROW-IN and the memory row MEM-IN with respect to ITEM.
;; Presence of ITEM in a set counts as +1, absence as -1.  At the
;; target column IDX the input is clamped to +1 for PLUSES and to
;; -1 for MINUSES; at every other column both sums use the actual
;; input value C.
(defun sum-one-row
    (item row-in mem-in idx)
  (loop :for me :in mem-in
        :for ro :in row-in
        :for count :from 0
        :for m := (if (member item me) +1 -1)
        :for c := (if (member item ro) +1 -1)
        :when (and idx (= count idx))
          :sum (* +1 m) :into pluses
        :when (and idx (= count idx))
          :sum (* -1 m) :into minuses
        :unless (and idx (= count idx))
          :sum (* c m) :into pluses
        :unless (and idx (= count idx))
          :sum (* c m) :into minuses
        :finally
           (return (list pluses minuses))))
;; Modern-Hopfield energy function: x^DEGREE for positive x, else 0.
(defun make-rectified-polynomial (degree)
  (lambda (x) (if (plusp x) (expt x degree) 0)))
;; Score one memory against the input window: sum the rowwise
;; pluses/minuses, pass each total through the rectified polynomial,
;; and return their difference (positive favours ITEM at the target
;; cell).
(defun sum-one-memory
    (item input memory
     &key target-row target-col
       start-row start-col
       rectified-polynomial &allow-other-keys)
  (loop :for mem-row :in memory
        :for count :from 0
        :for whole-input-row :in (nthcdr start-row input)
        :for input-row := (nthcdr start-col whole-input-row)
        ;; only the target row clamps a column; elsewhere IDX is NIL
        :for idx := (when (= count target-row) target-col)
        :for (row-plus row-minus)
          := (sum-one-row item input-row mem-row idx)
        :summing row-plus :into row-pluses
        :summing row-minus :into row-minuses
        :finally
           (return
             (- (funcall rectified-polynomial row-pluses)
                (funcall rectified-polynomial row-minuses)))))
;; Sum the scores over all memories.  A non-negative total means ITEM
;; belongs in the target cell (primary value T); a negative total
;; means it does not (primary value NIL).
(defun sum-memories (item input memories
                     &rest keys &key &allow-other-keys)
  (loop :for memory :in memories
        :sum (apply 'sum-one-memory item input memory keys)
          :into best-memory-yes
        :finally (return (if (minusp best-memory-yes)
                             (values nil item)
                             (values t item)))))
;; Drive SUM-MEMORIES against the fixed demonstration input and the
;; three two-by-two memories from the article, defaulting to the
;; degree-2 rectified polynomial.
(defun test-sum-memories
    (item &rest keys
     &key start-row start-col target-row target-col
       (rectified-polynomial (make-rectified-polynomial 2)))
  (declare (ignore start-row start-col
                   target-row target-col))
  (apply 'sum-memories
         item
         '(((foo) (foo) () (bar))
           (() (foo) (foo bar) (bar)))
         '((((foo) (foo))
            ((foo) (foo)))
           (((bar) (bar))
            ((bar) (bar)))
           ((() ())
            (() ())))
         :rectified-polynomial rectified-polynomial
         keys))
(test-sum-memories 'foo
:start-row 0
:start-col 0
:target-row 0
:target-col 0)