Heed my warning. Nonconsing ("destructive") operations are tricky.
Lisp is well known as the AI language. One obvious want is training data that refers into the current context: either literally, so that changes in the context propagate to the training data, or as a copied snapshot of part of the input context at one point in time. This follows closely on my recent Lisp symbolic deep learning effort.
The ANSI Common Lisp standard already provides this via its sharpsign-equals (#=) and sharpsign-sharpsign (##) reader macros, as we will see here with two training data memories defined relative to the current input context, one dynamically and one statically.
We obviously want this in our Common Lisp implementation of a single-hidden-layer feedforward neural network, implemented via my symbolic modern Hopfield network.
Please do not worry overly about extracting the code from this article. My next planned article will cover my efforts on distributing and sharing my Common Lisp symbolic deep learning, including this, in a way I hope your computer can directly consume.
We will go over an interactive example showing this in action with a deep learning inference, step by step. Then we will draw some conclusions. Afterward there is an appendix with the small amount of supporting code.
Instead of defining my feedforward neural network's updates over integers read as floating-point numbers with a certain number of bits, mine is defined over lists of, for example, symbols. So an inference on one symbol is somewhat analogous to an inference on one bit of a float.
CL-USER> (setf *context* '(((FOO) (FOO) (bar foo) (FOO) NIL)
(NIL (FOO) (FOO) NIL (FOO))
(NIL NIL NIL NIL (FOO))))
(((FOO) (FOO) (BAR FOO) (FOO) NIL)
(NIL (FOO) (FOO) NIL (FOO))
(NIL NIL NIL NIL (FOO)))
This is clearly a three by five matrix of lists of symbols.
I dressed prin1 (print machine-READably) up with the sharpsign-equals reader macro.
CL-USER> (format nil "~v/reference-mat/" 0
*context*)
"((#1=(FOO) #2=(FOO) #3=(BAR FOO) #4=(FOO) #5=NIL)
(#6=NIL #7=(FOO) #8=(FOO) #9=NIL #10=(FOO))
(#11=NIL #12=NIL #13=NIL #14=NIL #15=(FOO)))"
CL-USER> (defparameter *context-string* *)
*CONTEXT-STRING*
This reader macro labels objects so that we can refer to the same object multiple times within a single read.
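Here is a minimal illustration of that, independent of the neural network code: both elements of the read result are the very same object.

CL-USER> (let ((x (read-from-string "(#1=(a b) #1#)")))
           (eq (first x) (second x)))
T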
Here are two training data memories. The first is a 3x3 memory which is clearly a rectangular subregion of the input context. There is no particular reason all the cells must literally be objects from the earlier labelled input context. For example, one imagines there could be an "ASCII" box drawn around some values, or values could be rotated, flipped, or otherwise rearranged compared to their layout in the input context. We will see a dynamic change occur in both places during inference.
CL-USER> (defparameter
*referential-memories-string*
"(
((#3# #4# #5#)
(#6# #7# #8#)
(#9# #10# #11#))
(( #.`(baz ,@'#3# buz) #.`(,@'#4#) #.`(,@'#5#))
( #.`(,@'#6#) #.`(e ,@'#7# g) #.`(,@'#8#))
( #.`(,@'#9#) #.`(,@'#10#) #.`(,@'#11#)))
)")
*REFERENTIAL-MEMORIES-STRING*
The second memory admittedly looks messy, and this is the first time I have ever drawn a "snail" ,@' in the wild (one for the Lisp user bingo card). In this case it says to splice the values of the labels being read into new lists, along with other contents. In our example the result is a new list with spliced values in the cell that the inference will nonconsingly change. However, the standard allows compilers to coalesce equal constant lists, so it is necessary to be careful and aware of what your conforming ANSI CL implementation does; here I am using Steel Bank Common Lisp (SBCL). In fact, I explicitly stuck other symbols around the cell about to be destructively inferenced, to stop it being coalesced at read time with #3# simply because they were equal.
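A minimal sketch of why such a spliced cell is insulated from later destructive changes: splicing in a non-final position goes through append semantics, which copies the spliced conses, so the new list does not share structure with the labelled original.

CL-USER> (let ((x (list 'bar 'foo)))
           (eq x (cdr `(baz ,@x buz))))
NIL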
See the appendix for my read-context-with-memories (though it just uses make-concatenated-stream). You can see the machine-readable matrices, as lists of lists of symbol lists, being returned.
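The same trick in miniature, with two little string streams standing in for the context and memories streams:

CL-USER> (read (make-concatenated-stream
                (make-string-input-stream "(a b")
                (make-string-input-stream " c)")))
(A B C)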
CL-USER> (with-input-from-string
(context-stream *context-string*)
(with-input-from-string
(memories-stream *referential-memories-string*)
(read-context-with-memories context-stream memories-stream)))
(((FOO) (FOO) (BAR FOO) (FOO) NIL) (NIL (FOO) (FOO) NIL (FOO))
(NIL NIL NIL NIL (FOO)))
((((BAR FOO) (FOO) NIL) (NIL (FOO) (FOO)) (NIL (FOO) NIL))
(((BAR FOO) (FOO) NIL) (NIL (E FOO G) (FOO)) (NIL (FOO) NIL)))
CL-USER> (multiple-value-setq
(*context* *memories*)
(apply 'values /))
(((FOO) (FOO) (BAR FOO) (FOO) NIL) (NIL (FOO) (FOO) NIL (FOO))
(NIL NIL NIL NIL (FOO)))
I deal those into the variables *context* and *memories*. They had to be read together in a single read, since ## references only resolve within the read that defined their labels with #=.
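To see why, note that the standard only says it "is an error" to reference an undefined label; SBCL, for example, signals a reader-error if such a reference is read on its own, away from the read that defined the labels:

CL-USER> (handler-case (read-from-string "(#1# foo)")
           (reader-error () 'undefined-label))
UNDEFINED-LABEL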
According to the training data, with each training data memory placed with its top left at (0 0) in the current *context*, and performing the deep learning inference at (0 2) within those memories within the input context, the inference is that the new value there should be (BAR).
CL-USER> (test-infer 0 0 0 2 '(foo) *memories* *context*)
NEW-VALUE!
(BAR)
(BAR)
Actually, the inference calculation happens on a copy-tree of the old value, so that any nonconsing behavior must be done explicitly.
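A one-liner shows the insulation the copy-tree provides; a handler can do what it likes to its copy without perturbing the context:

CL-USER> (let ((old-value (list 'bar 'foo)))
           (eq old-value (copy-tree old-value)))
NIL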
Nonconsingly modifying the input data context (handle with care).
CL-USER> (nintersection (elt (elt *context* 0) 2) *)
(BAR)
In the deep learning implementation being used here, this should be done in the explicit restart-case for write-new-value. Here it was done interactively at the read-evaluate-print loop of this demo.
We see that (BAR FOO) of our initial context has changed to just (BAR) in the current context.
CL-USER> (mapc 'print *context*) nil
((FOO) (FOO) (BAR) (FOO) NIL)
(NIL (FOO) (FOO) NIL (FOO))
(NIL NIL NIL NIL (FOO))
NIL
And the first training data memory's #3# matches the new (BAR) in the current *context*, rather than a copy of the value from when it was read.
CL-USER> (dolist (m *memories*) (mapc 'print m) (terpri))
((BAR) (FOO) NIL)
(NIL (FOO) (FOO))
(NIL (FOO) NIL)
((BAZ BAR FOO BUZ) (FOO) NIL)
(NIL (E FOO G) (FOO))
(NIL (FOO) NIL)
NIL
CL-USER>
On the other hand, the (BAR FOO) we spliced into another list in the second memory is unperturbed.
We saw that a single-hidden-layer feedforward neural network's training data can be defined in reference to the current context, with both read together from two streams into application memory, in ANSI Common Lisp, using its #= and ## reader macros.
My hunch is that these capacities of the single-hidden-layer feedforward neural network, to store pieces of the initial input context in the application memory of the training data, and to have training data refer dynamically to the context, are both very significant.
This was the last, and admittedly very difficult, piece of deep learning coding I wanted to exhibit prior to the what-deep-learning-and-large-models-actually-do post I am working on.
Before that article I have been foreshadowing (WIP, original research o_o), my next article will present distributing this deep learning FFNN via my modern fork of Sandewall's Leonardo System AI platform, distributed in its idiom on my nascent https://lispy-gopher-show.itch.io/leonardo-calculus - this project was also in part to learn what my needs are in distributing and usefully sharing projects like this there.
About the read-time labelling #= and reference ## ANSI Common Lisp reader macros: I think this is a good example of why it is not good enough to say that all languages are Turing equivalent and so suggest just creating a new ad hoc read system in whatever language is trending with VCs. These reader macros come from a language standard that is in its thirties, not in preschool like every other top-25 hot language of 2025 (you could compare language traditions instead of current stable release specifications, but I cannot imagine you want to). Read-time labelling and splicing, in a way that can be sanely communicated to others, seem like bad targets for ad hoc, unstandardised reimplementation after reimplementation. That we already have ANSI Common Lisp is a great boon.
reference-mat

(defun reference-mat
    (stream matrix colonp atp &optional (start 0))
  "Format ~/.../ directive function: print MATRIX with every
cell prefixed by a fresh #n= read-time label, numbered from
START (supplied through ~v)."
  (declare (ignore colonp atp))
  (format stream
          "(~{(~{~{#~d=~s~}~^ ~})~^~%~})"
          (mapcar (lambda (y)
                    (mapcar
                     (lambda (x) (list (incf start) x))
                     y))
                  matrix)))
The Wikipedia page on format is pretty good. v means "pull in (an argument (as this parameter))".
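A tiny example of v in isolation: here it pulls the 8 out of the argument list to serve as the mincol parameter of ~d, just as the 0 above became reference-mat's start counter.

CL-USER> (format nil "~vd" 8 42)
"      42"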
read-context-with-memories is basically just make-concatenated-stream and read.
(defun read-context-with-memories
    (context-stream memories-stream)
  "READ the two streams as one parenthesized expression, so the
## references in the memories resolve against the #= labels in
the context. Returns the context and the memories as two values."
  (with-input-from-string
      (begin "(")
    (with-input-from-string
        (end ")")
      (apply 'values
             (read (make-concatenated-stream
                    begin
                    context-stream memories-stream
                    end))))))
test-infer is a helper I made that just wraps the deep learning inference from my conditions article, which featured the algorithm-as-runtime-condition-handlers style and the use of a returning restart-case. In particular, I suggested there that any possibly non-portable nonconsing/destructive operation be made explicit at runtime, e.g. in the restart for write-new-value.
(defun test-infer
    (r1 c1 r2 c2 item memories input)
  (restart-case
      (handler-bind
          ;; The algorithm runs as a chain of condition handlers;
          ;; each handler declines onward until one handles the
          ;; dl-inference condition.
          ((dl-inference #'handle-new-value)
           (dl-inference #'handle-winner)
           (dl-inference #'handle-compute-winner))
        (infer r1 c1 r2 c2
               :item item
               :memories memories
               :input input
               :dl-keys *keys*))
    ;; Any destructive write of the inferred value belongs here,
    ;; explicitly, rather than hidden inside the algorithm.
    (write-new-value (new-value)
      (print 'new-value!)
      (print new-value))))
*keys* is as in that article. It carries the rectified-polynomial part of the deep learning algorithm and specifies the :predicate as Lisp's intersection, the idempotent positive outcome :hit as union, and the idempotent negative outcome :miss as set-difference.
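For readers who have not met these set functions, here is how they behave on the symbol lists from this post (element order in the results is implementation-dependent; this is SBCL's):

CL-USER> (intersection '(bar foo) '(foo))
(FOO)
CL-USER> (union '(bar foo) '(foo))
(BAR FOO)
CL-USER> (set-difference '(bar foo) '(foo))
(BAR)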