Computers have changed a lot in 51 years. 51 years ago, computers were so expensive that machines had to be shared between multiple users to be financially feasible. 51 years ago, most multi-user operating systems were messy, inconsistent, and in general a pain in the ass. So some dudes at Bell Labs built a little OS to fix the pain-in-the-assery of multi-user OSes, and they did a wonderful job.
Unix was built for this:
Now it runs on this:
and pretty much everything else, from phones to servers to cars. It’s good enough and standard.
UNIX was designed as a self-contained system. You simply didn't have other computers you would rely on. You had your department's computer, and you would sometimes send messages and files to other departments' computers. That's the full extent of UNIX's intended networking abilities.
A modern UNIX system isn't self-contained. I have 4 UNIX systems on my desk (desktop, laptop, iPhone, iPad), and I'm constantly using the cloud (iCloud for photos, GitHub for text files, Dropbox for everything else) to sync files between these machines. The cloud is just a workaround for UNIX's self-contained nature.
Then we add a gazillion programming languages, VMs, containers, and a million other things, and UNIX becomes a bloated mess of workarounds for its own problems. We need a replacement: something built for the modern world using technologies that are clean, secure, and extensible.
Luckily, UNIX wasn't the only thing being built in the late 60s and 70s; the wonderful people at MIT also needed a timesharing system for their PDP-10. So in 1967, they built ITS (the Incompatible Timesharing System). It had some interesting design choices: no passwords, no file permissions, and your shell was a binary debugger. This is terrible for production use, but it's a hacker's dream come true, and it gave us Emacs and Scheme. It also gave us a program called MacLisp.
MacLisp would eventually evolve into Common Lisp in 1984, but it was also used on MIT's Lisp machines, beginning with the CONS in 1973 and its successor, the CADR. After promising results on the internal machines, MIT sold the rights to its Lisp machine design to two companies: Symbolics and LMI.
Symbolics is the more iconic of the two, but I wasn't able to get my hands on a good emulator or the source code. I was, however, able to talk to Alfred M. Szmidt, who maintains the LambdaDelta project and is familiar with "[the LMI] Lambda, [the Symbolics] 36xx to some extent, and [the TI] Explorer" Lisp machines. He was kind enough to answer my many questions and give me wonderful insight into how these machines worked and were used (I'm definitely writing a full post on the history of Lisp machines). I also used his emulator to mess around with a Lisp machine myself.
These machines used specialized hardware and microcode to optimize for Lisp environments (and because of the microcode, you could run UNIX and the Lisp OS at the same time). They were programmed in Lisp the whole way down, and code could be run interpreted for convenience or compiled to microcode for efficiency. You could open system functions in the editor, then modify and recompile them while the machine was running. Everything lived in a single address space, so programs could talk to each other in ways today's operating systems couldn't dream of. They had Lisp worlds delivered over the network, a version-controlled filesystem, high-resolution displays, and proper windowing GUIs.
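A sliver of that live-patching workflow survives in ordinary Common Lisp today. As an illustrative sketch (plain Common Lisp at a REPL, not actual Lisp machine system code), you can redefine a function in a running image and every caller immediately picks up the new definition, with no restart or relinking:

```lisp
;; Define a function and something that calls it.
(defun greet (name)
  (format nil "Hello, ~a." name))

(defun greet-user ()
  (greet "world"))

(greet-user)            ; => "Hello, world."

;; Redefine GREET while the image is running.
;; GREET-USER sees the new version on its very next call.
(defun greet (name)
  (format nil "Greetings, ~a!" name))

(greet-user)            ; => "Greetings, world!"
```

On a Lisp machine this went much further: the functions you redefined could be the operating system's own, edited and recompiled in place while the machine kept running.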
Lisp machines are conceptually beautiful, but they were computationally heavy and expensive: they required a lot of memory and a frame buffer, while UNIX could run on far less memory and a text terminal. UNIX was cheaper, so it won out, and by the time the price of frame buffers and memory came down, UNIX was well established and good enough.
UNIX isn’t good enough anymore and it’s getting worse. We need a new system and we have more than enough frame buffers and memory for a lisp machine.
With Lisp machines, we can cut the complicated multi-language, multi-library mess out of the stack and eliminate memory leaks, questions of type safety, binary exploits, and millions of lines of sheer complexity that clog up modern computers.
A new operating system means we can explore new ideas in new ways. Distributed filesystems? Sure. Persistent DIMMs show a lot of potential in a Lispy world (pun intended). There is a massive hole forming in computing, and it's a hole Lisp machines can fill.
This won't happen overnight; it will probably take UNIX getting worse (maybe even a lot worse) before people really start looking at new operating systems. But when they do start looking for UNIX alternatives, we should have a working Lisp machine ready for them.
Sources:
A good explanation of why UNIX sucks and needs to be replaced: https://archive.fosdem.org/2021/schedule/event/new_type_of_computer/
Lambda Lisp machine docs: https://tumbleweed.nu/lm-3/
A little Lisp machine history: https://www.youtube.com/watch?v=dMFLQ1t7iaQ
Thank you Alfred M. Szmidt for answering my questions and giving me enough material for 3 posts.
Glad you enjoyed my talk. ;-)
You might enjoy this more recent post of mine; it grew out of a Reddit comment, but I turned it into a blog post that did quite well on HN.
https://liam-on-linux.dreamwidth.org/80795.html
Supergood thoughts!! I 100% agree. Luckily we have Emacs as a personal "lisp machine"... but the change will come toward a full Lisp OS. I'm sure.