This page was created in 2002 and last updated 2020 May 1.
Broken links are marked with [brackets], but the URLs are maintained as clues to their original location.
SHRDLU was a 1970 artificial intelligence (AI) tour de force, written in MACLISP for the Incompatible Time Sharing System (ITS). To quote SHRDLU's creator: The system answers questions, executes commands, and accepts information in an interactive English dialog... The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system... It can remember and discuss its plans and actions as well as carrying them out... Knowledge in the system is represented in the form of procedures, rather than tables of rules or lists of patterns.
You can download a Windows text-only console version of SHRDLU implemented in Common Lisp, or a graphical 3-D version implemented with an extra Java layer. Source code is included. These files were supplied by Greg Sharp, and were produced by the [university student project] to resurrect SHRDLU. Double-click the SHRDLU.BAT file in either version to start running.
The Windows version isn't capable of completely reproducing the classic demo dialog and is fairly brittle and easily crashable, but it does correctly handle a large portion of the classic input sentences and many reasonable variations. Note that different versions of the demo dialog exist. For example, the demo in Winograd's book includes some "owning" tests not included in his web site demo, and his web site demo includes a "support supports support" test not in the book's demo. Rephrasing your input can often help get past current bugs. For example, leaving out "will you please" lets the multi-block stack request be accepted (although the Java display reveals only two blocks actually end up stacked). [This film] shows what a correct SHRDLU demo should display.
SHRDLU is often described as an initially impressive program that only appears to succeed because of the limited blocks world domain it understands. On the other hand, it's hard to find many subsequently implemented projects that were as ambitious or as general as SHRDLU. Considering how many applications could benefit from even limited intelligence, why is SHRDLU-style technology still so difficult to find or exploit? One popular excuse is that subsequent efforts to generalize SHRDLU techniques were supposedly not fruitful, with the result that SHRDLU-style projects fell out of favor. Or perhaps the complexity required in SHRDLU just to attain rudimentary intelligence scared off anyone who might attempt a more sophisticated system, because SHRDLU code already exceeded the design and engineering capabilities of most programmers. Creating a program that understands "pick up anything green, at least three of the blocks, and either a box or a sphere which is bigger than any brick on the table" is not an easy task.
The required scale of intelligent software is easy to underestimate. It took the AI community many years to realize that the exclusive-or limitation Marvin Minsky and Seymour Papert identified in the 2-layer perceptron (input and output units, with no hidden layer) could be overcome by adding a third, hidden layer (contrary to their conjecture). SHRDLU is on the order of only 500 kilobytes of sequentially executing source code, while the human brain contains around 100 billion neurons with about 100 trillion parallel interconnections. SHRDLU-like software, or even a simplistic brute-force system, wired at the scale of the brain might turn out to be quite capable.
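To make the layer-count point concrete, here is a minimal sketch (hand-chosen weights rather than a trained network; the function names are ours, not from any historical code) showing that one hidden layer of threshold units suffices to compute exclusive-or, which no perceptron without hidden units can represent:

```python
# A threshold unit: fires (1) when its weighted input sum exceeds zero.
def step(x):
    return 1 if x > 0 else 0

# Hand-wired 3-layer network: two inputs, two hidden units, one output.
def xor_mlp(a, b):
    h_or  = step(a + b - 0.5)        # hidden unit computing OR
    h_and = step(a + b - 1.5)        # hidden unit computing AND
    return step(h_or - h_and - 0.5)  # OR but not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

The hidden units carve the input plane with two lines instead of the single line a perceptron can draw, which is exactly what XOR's non-linearly-separable truth table requires.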
Online MIT documents useful for understanding SHRDLU's internals include Winograd's thesis (subsequently published in book form, with some changes), the Micro-Planner manual and update, the PROGRAMMER manual, and Andee Rubin's flowcharts showing SHRDLU's structure. Perhaps someone will manage to get the original SHRDLU code running on an ITS emulator like those listed below.
Our investigations into SHRDLU led to exchanges with the following SHRDLU-related people (listed in last name order):
Henry Baker (email@example.com) posted comments about parsing and Terry Winograd's disenchantment after creating SHRDLU. Henry wrote a version of LINGOL (see Vaughan Pratt below), and guesses that even the original LINGOL would probably run in Emacs Lisp, since it didn't care about lexical scoping. Henry told us that when he saw SHRDLU running at MIT, it crashed "a lot". (In comparison, this document claims, with misspellings: "On the A.I. machine, a reasonably fluent and debuged version of SHRDLU is alway availlable..." for SHRDLU version 101 of 4/27/73).
[S. Simon Ben-Avi] (firstname.lastname@example.org) wrote a [critique] of SHRDLU as part of some [course notes].
Lars Brinkhoff (email@example.com) has a program called TWDEMO which replays a prerecorded interactive SHRDLU conversation, with blocks-world graphics, on a PDP-10 emulator driving a 340 vector display. The block graphics source code loads into a contemporary MACLISP, but doesn't yet work fully.
[Keldon Jones] (firstname.lastname@example.org) worked on the [student project] to port SHRDLU to current machines. He's [posted] an early release of that project's Common Lisp version of SHRDLU and a MACLISP interpreter written in C for running original SHRDLU source code. (One problem we noticed while porting the interpreter to Delphi was that (apply 'cons '((+ 2 3) 4)) is evaluated to (5 . 4) instead of ((+ 2 3) . 4), so other fixes may be required before the interpreter is 100% MACLISP compliant.)
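The interpreter bug above comes down to whether apply re-evaluates the elements of its argument list. In MACLISP (and Common Lisp), apply's argument list is already data, so (+ 2 3) inside it must pass through unevaluated. A toy sketch in Python (our own names and mini-evaluator, not the project's interpreter code) shows the two behaviors:

```python
# Minimal Lisp-subset evaluator: numbers, symbols, and (op arg...) calls.
def lisp_eval(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):          # a symbol: look it up
        return env[expr]
    op, *args = expr                   # a call: evaluate operator and args
    fn = env[op]
    return fn(*[lisp_eval(a, env) for a in args])

def buggy_apply(fn, args, env):
    # Wrong: re-evaluates each element, turning (+ 2 3) into 5.
    return fn(*[lisp_eval(a, env) for a in args])

def correct_apply(fn, args, env):
    # Right: the argument list is already data; pass it through untouched.
    return fn(*args)

env = {'+': lambda *xs: sum(xs), 'cons': lambda a, b: (a, b)}
quoted = [['+', 2, 3], 4]              # the list '((+ 2 3) 4)
print(buggy_apply(env['cons'], quoted, env))    # (5, 4)        -- the bug
print(correct_apply(env['cons'], quoted, env))  # (['+', 2, 3], 4)
```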
[Dan Knapp] (email@example.com) has also posted [code] from the [student project] (apparently a newer version than the one posted by Keldon Jones, possibly equivalent to Greg Sharp's submittal).
Andrey Lebedev (firstname.lastname@example.org) sent us links for [this demo] and [this demo] of a SHRDLU-like system implemented by Moscow State Institute of Electronics and Mathematics students in 2009.
[Dave McDonald] (email@example.com) was Terry Winograd's first research student at MIT. Dave reports rewriting "a lot" of SHRDLU ("a combination of clean up and a couple of new ideas") along with Andee Rubin, Stu Card, and Jeff Hill. Some of Dave's interesting recollections are: "In the rush to get [SHRDLU] ready for his thesis defense [Terry] made some direct patches to the Lisp assembly code and never back propagated them to his Lisp source... We kept around the very program image that Terry constructed and used it whenever we could. As an image, [SHRDLU] couldn't keep up with the periodic changes to the ITS, and gradually more and more bit rot set in. One of the last times we used it we only got it to display a couple of lines. In the early days... that original image ran like a top and never broke. Our rewrite was equally so... The version we assembled circa 1972/1973 was utterly robust... Certainly a couple of dozen [copies of SHRDLU were distributed]. Somewhere in my basement is a file with all the request letters... I've got hard copy of all of the original that was Lisp source and of all our rewrites... SHRDLU was a special program. Even today its parser would be competitive as an architecture. For a recursive descent algorithm it had some clever means of jumping to anticipated alternative analyses rather than doing a standard backup. It defined the whole notion of procedural semantics (though Bill Woods tends to get the credit), and its grammar was the first instance of Systemic Functional Linguistics applied to language understanding and quite well done." Dave believes the hardest part of getting a complete SHRDLU to run again will be to fix the code in MicroPlanner since "the original MicroPlanner could not be maintained because it had hardwired some direct pointers into the state of ITS (as actual numbers!) and these 'magic numbers' were impossible to recreate circa 1977 when we approached Gerry Sussman about rewriting MicroPlanner in Conniver."
Tom Moran (firstname.lastname@example.org) wrote the SHRDLU-like Mini-Linguistic System (MILISY) at Carnegie-Mellon in 1972. A version of that program slightly modified by a Stanford student for an AI course is archived here.
Vaughan Pratt (email@example.com) wrote [SHRDLV] (not SHRDLU) implemented in LINGOL. Our understanding of Vaughan's system is that this grammar allows the parsing of these SHRDLU-like test sentences. Vaughan recollects that "by 1974 SHRDLU appeared to be a victim of serious software rot", and he was unable to get SHRDLU to respond sensibly at MIT. Gerry Sussman's comment to him was "That's a pity, the program worked when Terry [Winograd] demonstrated it to us." Vaughan also reported that Mike Fischer, the third member of Winograd's thesis reading committee, never had the opportunity to try out SHRDLU at first hand.
[Henrik Prebensen] (firstname.lastname@example.org) wrote [Blockhead], a SHRDLU-like program in Turbo Prolog with a graphical interface, documented in the book "The Advanced User's Guide to Turbo Prolog". Rudimentary blocks world programs are a common demo of natural language processing in Prolog, such as this code included in POPLOG.
[Yury Semenov] (email@example.com or firstname.lastname@example.org) modified [a version of MicroPlanner] for Franz LISP (because of its MACLISP compatibility) and also created a preliminary [web interface] for MicroPlanner as part of a plan to resurrect SHRDLU for a site dedicated to the Russian version of Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid.
Greg Sharp (email@example.com) acquired the [student project] source code (a newer version created after Keldon Jones left) before the university's links broke, and those files became the console and graphic versions linked at the top of this document. Greg also saved the later postings to the school's mailing list, which contain a valuable record of the discoveries made as SHRDLU was converted from MACLISP. The [original school mail files] only went to message 256, and many messages in both collections don't deal directly with SHRDLU, so message fragments directly concerned with conversion issues have been extracted here. Greg has ITS running under KLH with MACLISP working, and is now debugging his distribution of SHRDLU (and very grateful for the "massive" help from Kent Pitman and the Lisp community).
[Chris Stacy] (firstname.lastname@example.org) reports he is currently running MACLISP on a Unix emulation of ITS. No word yet on whether it can run SHRDLU.
Josh Sutterfield (email@example.com) worked on the [student project] to port SHRDLU, and sent us the project page's new URL after the University of Missouri Rolla changed its name to Missouri University of Science and Technology.
Paul Svensson (firstname.lastname@example.org) has a [public] ITS running under KLH, but the [SHRDLU directory] shows the files have been [modified by Keldon Jones].
Björn Victor (email@example.com) has a public ITS running but its SHRDLU capabilities have not been determined.
[Yorick Wilks] (firstname.lastname@example.org) wrote a 1974 [survey] of natural language understanding systems, including a critique of SHRDLU.
Terry Winograd (email@example.com) created SHRDLU and discusses some of its history here. He's kindly allowed us to list below his answers to questions we emailed him in 2004:
How would you say SHRDLU influenced your subsequent work and/or philosophy in AI?
Having insight into the limitations I encountered in trying to extend SHRDLU beyond micro-worlds was the key opening to the philosophical views that I developed in the work with Flores. The closest thing I have online is the paper Thinking machines: Can there be? Are we?
How would you characterize AI since SHRDLU? Why do you think no one took SHRDLU or SHRDLU-like applications to the next level?
There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains. I don't think that current research has made much progress in crossing that gulf, and the relevant science may take decades or more to get to the point where the initial ambitions become realistic. In the meantime AI took on much more doable goals of working in less ambitious niches, or accepting less-than-human results (as in translation).
What future do you see for natural language computing and/or general AI?
Continued progress in limited domain and approximate approaches (including with speech). Very long term research is needed to get a handle on human-level natural language.