On Kent M. Pitman’s sharing of his previously unpublished essay, Whither Original Thought?

Here are some of my thoughts on, and continuing from, Kent’s reading of his previously unpublished essay, Whither Original Thought?, read in our interview on the lispy gopher climate. The essay was written at a writer’s retreat in Italy, and as Kent helpfully clarifies, the title is a pun on “Wither Original Thought”, about admonishments to programmers and other authors not to think and do for themselves but to look to the past instead. Kent M. Pitman wrote the HyperSpec, took over as chair of the ANSI Common Lisp standards committee, and contributed its condition system (sorry that my choice of document is revision-18.txt). He writes extensively about our modern crises, and shares on the Mastodon computer history that has survived until now only in personal memory and the spoken word.

Thoughts

Kent juxtaposes learning-by-doing-it-yourself with learning-by-studying-past-work, pointing out two crucial things. First, the instruction to study the past instead is often meant as, or effectively is, an admonishment /not/ to gain this experience yourself at first hand. Second, not everyone is well suited or well positioned to read, whether by personal disposition or by access (imagine journal pricing that locks all but the most elite institutions out of their own authors’ contributions).

Sorry for adding an outside-the-interview personal communication, but Kent points out that the desirable freedom and sharing we now identify with libre software was experientially the case for the people who historically had access to Lisp machines. If you could get on the machine, you implicitly had the right to access, use, develop, redevelop, and borrow any and all source you could find on it. This effectively-libre Lisp machine community experience must have contributed to the harsh admonishments at the top of Symbolics’ internal documents: THIS DOCUMENT DESCRIBES SYMBOLICS’ ERODABLE COMPETITIVE ADVANTAGE AND MUST NOT BE DISSEMINATED.

This reminded me of a moment when a good friend of mine, Vassil Nikolov, being genuinely helpful, said of a bit of programming I was planning that there was zero chance I was breaking new ground with it. That was meaningfully true in that context, and an important piece of commentary to get from a friend. On the other hand, Kent points out that two authors separately implementing even explicitly the same software concept are actually each contributing something wholly new and unique, in a way that is culturally unacknowledged.

When I was trying to grasp this point, I put it in terms of a board game like chess or go. Even if both players are treading a well-known opening, inevitably a small initial idiosyncrasy appears in the game: a piece one space to the left in this game, a difference in which point was played first, and by the time the midgame arrives, each midgame has a literally unique character. In the same way, Zwei/Zmacs, XEmacs, Steele’s collected ?MACS antecedent, GNU Emacs, Ramin’s emacs, and cl-el/lem, which are all in some sense emacses, along with similarly powerful, more loosely related editors like vi, are, for all their specifically shared intents and mutually tangled histories, each of them wholly unique contributions to the universe and cultures of computing.

Kent contrasted these unique contributions of similarly-purposed software with the current lurch towards LLM chatbot software, ideas, and writing. The works in the above paragraph are new contributions of what is, in the LLM world’s perspective, ideal training data; LLM code generated from those data is not training-suitable in the same way. LLMs trained on the output of other LLMs are found to easily become similarly behaving copies of those LLMs, but this imperfect photocopying is observed not to reproduce the kind of pre-LLM training data from which the large models were gestated.

In his self-effacing way, Kent points out that his contributions, such as the Cross-Referenced Editing Facility (CREF) for the Open University in England, were a result of learning-by-doing-yourself rather than learning-by-studying-the-past, it being simply an accident of history that Kent was among pretty much the first modern computer programmers, so that when he did something for the first time himself it was also fairly described as a world first at the time. His message is that later programmers should not be suppressed from learning-by-doing simply because he started programming before they were born.

Another note about CREF: on one hand, programmers might be told not to do it again. But what /was/ CREF? When Kent wrote it up in AIM-829, it was literarily connected to Ted Nelson’s Literary Machines in its exploration of densely hyperlinked traversal of information, as Kent found in reading afterwards, once he learned that what he was doing had started to be called hyperlinking. But what actually is the CREF that has already-been-done, which modern programmers are not to reinvent? Something like Amazon’s proprietary search programs, but optimized to usefully create paragraphwise navigations of Wikipedia’s data (instead of Amazon’s use, assembling likely sales events from dynamic product information).

Well, listen to Kent’s essay and interview. Sorry about my informal tone and my references to things outside the interview. I will transcribe the essay, and the rest of the interview, in later articles. See you on the Mastodon to talk about this.

Please do go out of your way to discuss and share this, Kent’s reading of the unpublished essay, as far and wide as you can, and to have what conversation we can about it.

Edit: I believe that discussing and retooting the Mastodon post, making your own posts in your own places, or sharing the PeerTube link to the recording of the interview are important ways to spread what is basically an oral history of early computing. So please do that in whatever ways occur to you.