Ray Arachelian, Creator of the Lisa Emulator

I had the great honor of interviewing an important person the other day. His name is Ray Arachelian, and he’s the creator of the Apple Lisa Emulator. Without him, there may have never been a way for anyone without a Lisa to see the LOS in action.

To learn more about the Lisa Emulator, see my previous column.

Ted Hodges: Hi, Ray. How are you doing today?

Ray Arachelian: Very good. Getting ready to do another release – this will be the first beta, likely this Wednesday – and as more of the pieces fall into place, I tend to get happier.

Ted: How long have you been working with Lisas?

Ray: I’ve had my Lisas since 1989. I first used them as Macs via MacWorks. That led me to building a Hackintosh IIcx (a Mac IIcx motherboard in a PC case, etc.), and I got hooked on the Mac after that. Over the years that has led me to buy many a Mac.

Originally, I wasn't aware of the historical significance of the Lisa, nor of the fact that I could run other operating systems on it, but as I talked to other Lisa owners on FidoNet and other BBS networks, it became clear that what I had was quite the gem. Steve Hatle was especially helpful as I explored the Lisa's true operating system.

Ted: When did you have your first experience with a Lisa?

Ray: I was working as a techie at my very first part-time job – a typical couple of hours after school. I think I was in my junior or senior year at Brooklyn Tech at the time. This was at the NYC Board of Education, at a training center for teachers. They had two or three classrooms filled with various PCs, Apple IIs, Macs, Commodores, and so on.

One day, I saw a strange looking machine out in the hallway, where trash was usually placed. It had an Apple logo, so I thought that it was some sort of Mac, since I saw a mouse there, too. I had played with the Mac Plus machines in the labs previously, so it was an interesting machine to me.

I asked my boss about it, and that was the first time I heard the word Lisa. He said I could have it and also gave me a hard drive. A couple of weeks later, when the power supply died, he was kind enough to order one for me, and he got me MacWorks floppies as well.

Now, this was back in the days of DOS and Windows 3.1, so relative to those, the Lisa running Mac XL was an amazing machine. I couldn’t believe my luck that I had gotten it for free.

A few weeks or months later, a teacher who had a couple of broken Lisas also bound for the trash bin asked me if I wanted them. Between the two, I managed to get one to work; the other I stripped and threw out. So now I had two nearly complete Lisas and some spare parts.

A year later, at work, they used Xenix 286, which was perfect, since the Lisa could run a version of Xenix, so I used that to teach myself Unix. By this time I had built the Hackintosh, and it was my main machine.

Ted: How many Lisas have you owned?

Ray: Three, of which one was non-repairable. I kept the reusable parts from it, but unfortunately threw out its case and CRT because at the time I lived in a very small apartment and didn’t have the room to spare.

Ted: What made you decide to write a Lisa Emulator?

Ray: In late 1997, I was cleaning up around the house and decided to power on all of my machines – something I typically do to update software, patch OSes, etc. – and one of the two Lisas would no longer power on. I looked around on the Web and was able to find some parts for it, but it seemed to me that parts were getting harder to find than before.

I realized that in the long run, none of my Lisas would continue to run, and replacement parts would only get harder to find. At this point, I was of course very much aware of the historical importance of the Lisa. I thought perhaps that since the Lisa predated the Mac, and since the vMac emulator [had recently been] released, the Lisa should be less complex to write an emulator for. I couldn't have been more wrong.

Luckily I found David T. Craig, who had collected a wealth of Lisa information over the years. He also provided a great deal of advice and contacts to actual Lisa developers. Looking back, he filled the role of a mentor for this project. The emulator would not exist without his help.

There were other folks involved over the years who have helped above and beyond the call, but they have chosen to remain anonymous.

Ted: How difficult was it to write the emulator?

Ray: It was a lot harder than I ever imagined – which was perfect, because had I known how difficult it really was going to be, I might have given up before starting it.

At the time, I knew C, mostly. I knew 68000 from a college course and from poking around on my Mac. I had written a few small DOS programs here and there and an aborted BBS for the Mac. While I knew Unix very well, it was from a sysadmin’s point of view. I didn’t know how to program for the X Window system, and the emulator seemed to be a perfect excuse to learn all of these technologies.

I thought it would take maybe six months to a year to get something working – perhaps two years at most to get it all done.

Ted: What all was involved in writing the emulator?

Ray: The first phase was scanning in all of the Lisa docs and putting them online. This was mostly so that I could reach them when not at home and also to have them be portable. I do most of my coding these days on my morning/night commutes on the LIRR, so electronic versions of the docs were perfect.

After that, it was a lot of reading and rereading. If you go back far enough in computer documentation, you find that the terms don't match modern computer jargon, and you have to get things crystal clear before writing any code. It wasn't too bad for the Lisa's docs, but most of the work was researching them.

There are common parts to any emulator. The CPU is the heart of it all. At the time there weren’t many 68K emulators out there. The UAE core was pretty much it.

One of the guys on the emu scene recommended Generator's core, since it was almost a just-in-time compiler. In those days, 200 MHz machines were the common desktop, so anything that bought you speed was important. It takes 10-40 host instructions to emulate a single guest CPU instruction, so for a 5 MHz machine, you'd need at least a 100 MHz CPU just for an interpretive core – and you need some bandwidth left over for other things, such as I/O.

Memory is fairly easy: you just use an array the same size as the machine's memory.
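
To make that concrete, here's a minimal sketch – in the spirit of an interpretive core, not LisaEm's actual code – of a fetch/decode/execute loop over a flat memory array. The tiny instruction set is invented for illustration; the fetch, the decode switch, and the dispatch are where those 10-40 host instructions per guest instruction go.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RAM_SIZE (1 << 20)         /* 1 MB of guest memory as a flat host array */

static uint8_t ram[RAM_SIZE];
static uint32_t pc;                /* guest program counter */
static uint32_t d0;                /* one guest data register, for brevity */
static int halted;

/* Fetch one 16-bit big-endian word, the way a 68000 sees memory. */
static uint16_t fetch16(void) {
    uint16_t w = (uint16_t)(ram[pc] << 8 | ram[pc + 1]);
    pc += 2;
    return w;
}

/* A toy interpreter: fetch, decode, execute. Each guest instruction
 * costs many host instructions - the source of the 10-40x overhead. */
static void step(void) {
    uint16_t op = fetch16();               /* fetch  */
    switch (op) {                          /* decode */
    case 0x0001: d0 += fetch16(); break;   /* execute: invented "add immediate" */
    case 0x0002: d0 -= fetch16(); break;   /* invented "sub immediate" */
    default:     halted = 1;      break;   /* treat anything else as halt */
    }
}

int main(void) {
    /* A tiny guest program: d0 += 5; d0 -= 2; halt. */
    static const uint8_t prog[] = {0x00,0x01, 0x00,0x05,
                                   0x00,0x02, 0x00,0x02,
                                   0xFF,0xFF};
    memcpy(ram, prog, sizeof prog);
    while (!halted)
        step();
    printf("d0 = %u\n", d0);               /* prints: d0 = 3 */
    return 0;
}
```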

The Lisa also had an added layer of complexity in its MMU (Memory Management Unit), which introduces a large overhead on every memory access. I spent a great deal of time thinking about how to optimize the MMU code and rewriting it. I think I'm on the third complete rewrite of the MMU in the current code. In hindsight, machines have gotten so much faster that it probably wasn't necessary.
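
For illustration, here's a hedged sketch of why an MMU hurts: every guest access has to pass through a translation step first. The segment layout below is invented and far simpler than the Lisa's real MMU.

```c
#include <stdint.h>

/* Simplified, hypothetical segment entry - not the Lisa's real format. */
typedef struct {
    uint32_t base;      /* physical base of the segment */
    uint32_t limit;     /* segment length in bytes */
} seg_t;

#define NSEGS 128
static seg_t mmu[NSEGS];
static uint8_t physram[1 << 20];

/* Every guest access funnels through a lookup like this, which is
 * why the MMU adds overhead to each and every memory operation. */
static uint8_t read8(uint32_t vaddr) {
    seg_t *s = &mmu[(vaddr >> 17) % NSEGS];  /* pick a 128 KB segment */
    uint32_t off = vaddr & 0x1FFFF;
    if (off >= s->limit)
        return 0xFF;                          /* a real machine raises a bus error */
    return physram[s->base + off];
}

int main(void) {
    mmu[0].base = 0;
    mmu[0].limit = 0x20000;
    physram[0x1234] = 42;
    return read8(0x1234) == 42 ? 0 : 1;
}
```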

I wasn’t able to get much of anything out of the emulator until about late 2001. That’s the way with emulators: You don’t see much in the way of results until you’re close. I barely got some output from the power on self test routines in the ROM, which revealed that some of the display routines were wrong, and I promptly fixed these.

That was the very first Eureka! moment.

Next, I had to get the floppy emulation to work, since there’s not much to do with just the ROM.

The second Eureka! moment was when I got LisaTest halfway booted up. Something in the I/O systems, or perhaps the MMU, was still broken.

Luckily the Lisa had multiple OSes, plus the previously mentioned LisaTest, which I was able to use as a sanity check against the I/O portions. Even after getting most of the I/O systems to pass LisaTest's intensive probes, I was unable to get the Lisa Office System booting. So I tried to boot the Monitor, MacWorks, and Xenix; each had its own way of doing things, all perfectly legal, since they all worked on the actual Lisa.

By 2003, I had suspected that perhaps there were bugs in the CPU core, so I contacted James Ponder, the author of Generator, whose CPU code I used. He pointed me at some bugs in the shift/rotate opcodes that he was aware of, but still no luck.

I decided to build a tester program, but I needed a real 68K machine. I wanted something running Unix that could assemble and run 68K code as well as the Generator CPU core itself, so that meant the GCC (GNU Compiler Collection) suite of tools. I thought of installing OpenBSD or Linux on a IIsi, but this proved difficult due to the lack of an FPU.

Luckily I had a NeXTstation that fit the bill. So I ripped the CPU code out of the emulator and wrote a small test suite program and some shell script glue to run it all. The tests took about a whole week to run and helped me weed out a lot of bugs.

I later realized that there were still differences between the CPU emulator code running on a 68040 and the same code on an Intel machine, so I weeded out those bugs by running the tester under Linux x86 against the output of the 68040. Luckily, I had compressed and saved the 68040 results from the previous tests.
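
The harness presumably worked along these lines – this is a reconstruction under assumptions, not Ray's actual tester, and the record format and function names are invented: run each opcode through the emulated core and diff the results against a reference file captured on real 68040 hardware.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical result record: what one opcode did to a register and flags. */
typedef struct { uint16_t opcode; uint32_t in, out, flags; } result_t;

/* Stand-in: the real harness would drive the Generator CPU core here. */
static result_t run_emulated(uint16_t opcode, uint32_t in) {
    result_t r = { opcode, in, in + 1, 0 };
    return r;
}

/* Compare emulator output against results captured on a real 68040 -
 * any mismatch points at a CPU core bug. */
static int diff_against_reference(const char *path) {
    FILE *f = fopen(path, "rb");
    result_t ref;
    int mismatches = 0;
    if (!f) return -1;
    while (fread(&ref, sizeof ref, 1, f) == 1) {
        result_t got = run_emulated(ref.opcode, ref.in);
        if (got.out != ref.out || got.flags != ref.flags) {
            fprintf(stderr, "opcode %04X: got %08X/%08X want %08X/%08X\n",
                    ref.opcode, got.out, got.flags, ref.out, ref.flags);
            mismatches++;
        }
    }
    fclose(f);
    return mismatches;
}

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s reference.bin\n", argv[0]); return 2; }
    int n = diff_against_reference(argv[1]);
    printf("%d mismatches\n", n);
    return n != 0;
}
```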

Still no luck getting LOS to run. So I went back and worked some more on the I/O systems and got the Z8530 code to work – this was the serial port handler.

Eventually I found the magic that made the Lisa’s MMU work properly. This was June 2006. It was making use of undocumented hardware features. I found this partially by looking at what the code was trying to do versus what it couldn’t do in the emulated hardware.

After getting LOS to boot off the installer disk, I had to write the code for the hard drive. A Lisa is not very useful without one.

I finished the core of the emulator around September 2006 or so. I had to get rid of the X Window UI, and I poked around the Net looking for something FOSS that would make it portable to other operating systems, such as Windows and OS X. wxWidgets fit the bill. I bought the wxWidgets book, devoured it over a week, and started coding the new UI.

I originally wanted to do the first release on January 19th, the date of the Lisa's announcement (in 1983), but I couldn't get enough of it working until right before the other magic date, January 24th. The first release was very buggy, so a further release was needed. These aren't even alpha test quality, so I named them Preview releases – they're really development versions.

The next release will be the beta for 1.0, followed by a few more bug fix versions.

Ted: How long did it take you to write the emulator?

Ray: Roughly eight years. There were many days that I wanted to quit, but somehow I managed to convince, if not force, myself to continue. I read lots of computer history books, watched Pirates of Silicon Valley every few months, and so on – whatever it took to stick to the goal.

There was never a single point I felt I couldn't get past, but at every turn another layer of the puzzle revealed itself or gave hints to what else was wrong. Some of the roadblocks were very tough problems, but most were easy.

There were many false starts and dead ends. At the very start, I wasted about two to three months rolling my own CPU core before switching to Generator’s. I wasted another three months learning and writing raw X Windows code. (When I started, the only good toolkits were commercial; there was no GTK/KDE – or at least I wasn’t aware of them.) Writing your own widgets in X is a lot of needless work.

Of course, there were some months where real life, whether work or family, got in the way, and I did no work on the emulator at all.

Ted: How does the emulator work?

Ray: Like most emulators, its heart is the CPU core. There are different types [of emulators] – the most common (and slowest) is the interpreter. It fetches instructions for the guest CPU, decodes, and then executes them.

The core used by Generator is a little bit smarter than this. When it fetches the instructions, it decodes them, then caches the parameters used by the instruction and information about it, such as whether to calculate flags or not. This is called an IPC or Instruction Parameter Cache. That way, the next time that code is executed, it’ll be able to run a lot quicker.
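
Here's a minimal sketch of the idea with invented structures – the real Generator IPC records differ: decode an instruction once, cache a handler pointer and its operands keyed by guest address, and reuse the cached form on every later pass through the same code.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* Invented IPC record - Generator's actual structure is richer. */
typedef struct ipc {
    void   (*handler)(struct ipc *);   /* pre-resolved execute function */
    uint32_t operand;                  /* pre-decoded operand */
} ipc_t;

static uint32_t d0;                    /* a guest register */

static void op_add(ipc_t *i) { d0 += i->operand; }

/* Slow path stand-in: a real decoder would crack the opcode word here. */
static ipc_t *decode_at(uint32_t pc) {
    ipc_t *i = malloc(sizeof *i);
    i->handler = op_add;               /* pretend every op is "add" */
    i->operand = pc & 0xFF;
    return i;
}

#define IPC_SLOTS 65536
static ipc_t *ipc_cache[IPC_SLOTS];    /* keyed by guest address */

/* Fast path: decode each address once, then reuse the cached form. */
static void execute_at(uint32_t pc) {
    ipc_t **slot = &ipc_cache[(pc >> 1) & (IPC_SLOTS - 1)];
    if (!*slot)
        *slot = decode_at(pc);         /* decode + cache on first visit */
    (*slot)->handler(*slot);           /* cheap dispatch thereafter */
}

int main(void) {
    for (int pass = 0; pass < 2; pass++)       /* second pass hits the cache */
        for (uint32_t pc = 0; pc < 16; pc += 2)
            execute_at(pc);
    printf("d0 = %u\n", d0);
    return 0;
}
```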

There was also a JIT (Just in Time) compiler for ARM hosts in Generator’s core. However, I do not use this feature.

More modern CPU cores do support full JIT compilation, which greatly speeds up the execution. However, the Generator CPU core is intimately connected to LisaEm’s MMU, so switching cores isn’t an option, at least not without a lot of redesign.

Once you get past the CPU, the I/O systems are very much like writing drivers for the hardware – the same way you'd write a serial port driver or a parallel port driver, etc. Only it's the inverse of a driver, since you're building the behavior of the hardware, not the driver. The virtual I/O hardware then turns around and finds the analog of that same hardware on the host operating system. For example, writing a sector to a floppy means saving that data in the right place in a file.

A lot of the I/O subsystems can be dealt with by writing state machines; you don't need to fork off threads or whatnot to multitask between hardware devices.
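
As a sketch of that style – with invented states, not LisaEm's actual floppy code – each device exposes a tick function that the main loop calls; the device advances one small step per call, so no threads are needed.

```c
#include <stdio.h>

/* Hypothetical floppy-controller states - illustrative only. */
typedef enum { FD_IDLE, FD_SEEK, FD_READ, FD_DONE } fd_state_t;

typedef struct {
    fd_state_t state;
    int target_track, track;
} floppy_t;

/* Called once per emulated time slice from the main CPU loop.
 * The device advances one small step and returns; no threads needed. */
static void floppy_tick(floppy_t *f) {
    switch (f->state) {
    case FD_IDLE:
        break;
    case FD_SEEK:                            /* step the head one track */
        if (f->track < f->target_track)      f->track++;
        else if (f->track > f->target_track) f->track--;
        else                                 f->state = FD_READ;
        break;
    case FD_READ:
        /* transfer one sector's worth of data here */
        f->state = FD_DONE;
        break;
    case FD_DONE:
        break;
    }
}

int main(void) {
    floppy_t f = {FD_SEEK, 3, 0};
    while (f.state != FD_DONE)
        floppy_tick(&f);
    printf("read sector on track %d\n", f.track);
    return 0;
}
```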

The level of difficulty for the hardware varies. Various Lisa operating systems do things differently from each other. For the VIA 6522s (parallel ports), you can set a direction mask that says the port will be used as an input or an output and then write a byte to it. This puts the byte on the port immediately. Another way is to write the value to the port and then switch its direction to an output, and it won’t put the value on the port until you switch the direction! The more flexible the hardware, the harder it is to write an emulator for it, since you must support all of these methods.
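
In emulator terms, that means the pin state is a function of both registers, so either write order must come out right. Here's a hedged sketch with heavily simplified behavior – the real 6522's register semantics have far more corners than this.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified VIA-6522-style port: an output register and a data
 * direction register (1 bits = output). Detail is reduced for illustration. */
typedef struct {
    uint8_t orb;    /* output register: last value written by software */
    uint8_t ddrb;   /* data direction register */
} via_port_t;

/* What actually appears on the pins: only bits configured as outputs
 * drive the written value - so both write orders must end up correct. */
static uint8_t pins(const via_port_t *p) {
    return p->orb & p->ddrb;
}

int main(void) {
    via_port_t p = {0, 0};

    /* Order 1: set direction first, then write the value. */
    p.ddrb = 0xFF; p.orb = 0x5A;
    printf("order 1:  pins = %02X\n", pins(&p));   /* 5A */

    /* Order 2: write the value first - nothing appears on the pins -
     * then flip the direction, and only then is the value driven. */
    p = (via_port_t){0, 0};
    p.orb = 0x5A;
    printf("order 2a: pins = %02X\n", pins(&p));   /* 00: still inputs */
    p.ddrb = 0xFF;
    printf("order 2b: pins = %02X\n", pins(&p));   /* 5A: now driven */
    return 0;
}
```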

The Lisa hardware is built a little bit strangely, too. For example, reading from certain I/O addresses causes changes! You’d expect a write to do that, but it’s the act of touching the address that initiates the change, so it’s a bit counterintuitive. Emulating those features is of course easy. Next, in order of difficulty: emulating the VIAs or the COPS (clock/keyboard/mouse controller), then the Z8530 Serial Controller. The Z8530 is a very complex beast; I haven’t attempted to fully emulate it, but rather, emulated just enough to get it functional.
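
In code, that means the read handler itself mutates machine state; the data returned may not even matter. A small sketch with an invented address:

```c
#include <stdint.h>
#include <stdio.h>

static int video_page;   /* some bit of machine state */

/* On hardware like this, the *act* of reading an address flips state.
 * The address below is invented for illustration. */
static uint8_t io_read(uint32_t addr) {
    if (addr == 0x00E80000) {  /* hypothetical "toggle" address */
        video_page ^= 1;       /* the read itself causes the change */
        return 0;              /* the returned value is irrelevant */
    }
    return 0xFF;
}

int main(void) {
    io_read(0x00E80000);
    io_read(0x00E80000);
    printf("video_page = %d\n", video_page);  /* back to 0 after two reads */
    return 0;
}
```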

The MMU itself is fairly simple in concept, but since it introduces delays on every memory or I/O lookup, I had to sit down and think about how to design an optimized emulation for it. What I came up with was a caching system that ties the MMU tables to the CPU core’s IPCs. This requires a lot of memory handling and a larger footprint for this emulator than most, but the gains in performance are worth it.
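
A sketch of the caching idea under stated assumptions – the invalidation below is a blunt flush, and LisaEm's real tie-in between MMU tables and IPCs is more involved: remember recent translations, and throw them away whenever the mappings change.

```c
#include <stdint.h>

/* One cached translation: virtual page -> host pointer. Layout invented. */
typedef struct { uint32_t vpage; uint8_t *host; int valid; } tlb_entry_t;

#define TLB_SLOTS 256
static tlb_entry_t tlb[TLB_SLOTS];
static uint8_t physram[1 << 20];

/* Slow path: the full segment walk (identity-mapped in this sketch). */
static uint8_t *translate_slow(uint32_t vaddr) {
    return &physram[vaddr & 0xFFFFF];
}

/* Fast path: most accesses hit the cache and skip the segment walk. */
static uint8_t read8(uint32_t vaddr) {
    uint32_t vpage = vaddr >> 9;                 /* 512-byte pages */
    tlb_entry_t *e = &tlb[vpage & (TLB_SLOTS - 1)];
    if (!e->valid || e->vpage != vpage) {
        e->vpage = vpage;
        e->host  = translate_slow(vaddr & ~0x1FFu);
        e->valid = 1;
    }
    return e->host[vaddr & 0x1FF];
}

/* Whenever software rewrites the MMU registers, cached translations
 * (and, in LisaEm, the decoded IPCs that depend on them) must go. */
static void mmu_changed(void) {
    for (int i = 0; i < TLB_SLOTS; i++)
        tlb[i].valid = 0;
}

int main(void) {
    physram[0x40] = 7;
    uint8_t v = read8(0x40);   /* misses, then fills the cache */
    mmu_changed();             /* mappings changed: flush everything */
    return v == 7 ? 0 : 1;
}
```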

The most difficult thing to get right is the timing. The emulator's timing is still off even today, but what I mean here is the difference between working and crashing because things happen too fast or out of order. This can be handled by a priority queue carrying timing information that says when things such as interrupts should occur.
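
A minimal sketch of such a queue, using a sorted array in place of a real heap and an invented event type: each event records the emulated cycle at which it should fire, and the main loop dispatches whatever has come due after each instruction.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t due_cycle; void (*fire)(void); } event_t;

#define MAXEV 64
static event_t queue[MAXEV];
static int nev;
static uint64_t now;            /* current emulated CPU cycle */

/* Insert in due-cycle order: a sorted array stands in for a heap. */
static void schedule(uint64_t due, void (*fn)(void)) {
    int i = nev++;
    while (i > 0 && queue[i - 1].due_cycle > due) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i] = (event_t){due, fn};
}

/* After each CPU instruction, advance time and fire everything due.
 * Getting these orderings right is the difference between booting
 * and crashing. */
static void run_due_events(uint64_t cycles_elapsed) {
    now += cycles_elapsed;
    while (nev > 0 && queue[0].due_cycle <= now) {
        event_t e = queue[0];
        nev--;
        for (int i = 0; i < nev; i++)
            queue[i] = queue[i + 1];
        e.fire();
    }
}

static void vblank_irq(void) {
    printf("vblank at cycle %llu\n", (unsigned long long)now);
}

int main(void) {
    schedule(1000, vblank_irq);
    run_due_events(600);   /* nothing due yet */
    run_due_events(600);   /* fires once cycle 1000 has passed */
    return 0;
}
```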

The Lisa’s documentation is extensive – but incomplete and in some cases misleading. You really have to look at what the executing code is trying to do and sometimes just ignore what the documentation says. After all, the folks who wrote the hardware guides didn’t intend an emulator writer as their audience. In other cases they needed to hide information, such as how the serial number hardware really works.

To get to the finish line, you have to log everything to a trace log, and you have to know what to search for inside it. A trace log shows every executed instruction, all of the CPU’s registers, any I/O that might have occurred, etc. – basically everything that went on at a given point in time as well as how many CPU cycles it took. A minute of run time easily means several gigabytes of trace log. You get very good at knowing how to narrow down what you’re looking for, and you build the tools to help you do that once you go through a few hours of reading trace logs.
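
The shape of such a log line, as a hedged sketch – the format and fields here are invented, and LisaEm's real trace output differs: one line per executed instruction adds up fast, which is how a minute of run time becomes gigabytes.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal trace state: one data and one address register for brevity. */
typedef struct { uint32_t pc, d0, a0; uint64_t cycles; } cpu_t;

/* One line per executed instruction. At millions of instructions per
 * emulated second, this is how a minute becomes gigabytes of log. */
static void trace(FILE *log, const cpu_t *c, uint16_t opcode) {
    fprintf(log, "%10llu PC=%08X OP=%04X D0=%08X A0=%08X\n",
            (unsigned long long)c->cycles, c->pc, opcode, c->d0, c->a0);
}

int main(void) {
    cpu_t c = {0x00FE0090, 0x12345678, 0x00C00000, 42};
    trace(stdout, &c, 0x4E71);   /* 0x4E71 is the 68000 NOP opcode */
    return 0;
}
```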

Reading through gigs of logs is not a lot of fun, but it’s very rewarding. You quickly get to see what the software does. Even better, you get to see what parts of the software were written in assembly, what was generated by a compiler, what’s efficient, and what’s not. You get to recognize code that was written by 68000 wizards, and code that was written by beginners.

A lot of people don’t realize just how much overhead is wasted on pushing parameters to the stack, then popping them off just to call yet another function where you do more of the same. Trace logs expose this quite dramatically. Today’s architectures also suffer a lot of this, especially due to object oriented programming, but this is masked by the high throughput of the CPUs and memory systems.

As with most software projects, development work is 20% and debugging is 80%. It's probably worse for emulators, since you don't get any results until you're very close to finishing. Even then, it can take a long time to close the gap between the emulator starting to work and actually having it work.

Ted: Do you feel that preserving the history of “Lisa technology” is important?

Ray: I think it’s important to save the history of most computers, not just the Lisa. The Lisa happens to have been a very important milestone in the history of computing, but there are many computers out there whose technology should be remembered.

As with most nascent technologies, there is usually a Cambrian explosion, where you see hundreds – if not thousands – of different systems competing for market share. For micros, you see this pattern from the late 1970s to the mid 1980s. The marketplace puts the equivalent of evolutionary pressure on these machines, and of course that means most of them died out – the very same ones we're trying to preserve. Many of them have interesting features, each with a lesson to teach the computer historian.

The big problem is that it gets more and more difficult to get technical documentation for these systems if it hasn’t already been saved. The Bitsavers project, for example, attempts to collect documentation for many such systems in order to help preserve them. This isn’t necessarily for just the purpose of emulating them, but also for historical reasons, and even to help owners of antique hardware maintain them.

In some ways getting technical documentation is getting easier, but only through the efforts of those who attempt to preserve the data, and only for well known systems such as the Commodore 8-bit line.

Software libraries also need to be preserved. Old media, unfortunately, is getting harder to access as the number of working drives drops. Mechanical parts tend to go first: rubber disintegrates over time, sprockets and gears break, what was once grease turns to glue, capacitors leak and ruin boards, etc.

In the long run, old computers will die off. You can go pretty far with modern equivalent parts, swapping capacitors and such, but rubber rollers and custom gears are nearly impossible to replace.

So all of these are areas of interest to computer historians. Luckily, there are many folks interested in the preservation of old computers and their history, and there are many social forums: mailing lists, web BBSes, blogs, podcasts, fairs, etc.

I’m sure there are dozens more, if not hundreds.

From the point of view of building an emulator, what's needed is detailed documentation about the hardware, plus ROM and software dumps. Schematics can help a lot. Many companies view their technology as proprietary and do not release any documentation about their hardware. Once they go out of business or deem their products obsolete, that documentation is never released, and their systems are in danger of being lost to time.

For example, I'd love to build a NeXTstation emulator, as I'm sure would many others. Some may ask why bother, since OS X is NeXTstep's replacement, but if you look at them, they're as different as the Mac 128K and a modern Intel Mac Pro.

Even if you have a new MacBook, you may still want to run old Mac software. Back to the NeXTstation – there are no docs to be found for its proprietary hardware, so you'd have to disassemble the object code. Without documentation, everything is reverse engineering – and that's a hard path.

I was very lucky to be able to obtain the docs I needed. But many other machines are lost due to lack of information.

Ted: It seems to me that the Lisa is a very advanced machine. In some ways it’s even more advanced than anything we have now. Would you agree with that?

Ray: The Lisa's hardware isn't all that interesting – let me qualify that – at least when compared with other machines of its class. There are points where the Lisa's hardware is very much unique, but the key to the Lisa's soul is really its software. This isn't to say that it would be easy to build your own, but rather that it compares well to other minis.

The interesting bits in the Lisa's hardware: The engineers used a brand new CPU, the 68000, which had just come out at the time of the Lisa's design. It wasn't meant to work with an MMU, but they managed to get one working. Originally they wanted to go the way of the Xerox Alto, with a custom-made bit-sliced CPU. They invented a very simple way to display graphics with very few chips, the most complex piece being handled by a little bit of ROM. A similar system on the Xerox Star required a slow-refresh phosphor monitor at 43 Hz.

The Lisa itself was built by engineers who had previously worked at HP, probably on minicomputers. Some might say that the Lisa is a mini, or almost one. It has multiple CPUs/microcontrollers (68000, 6504, COPS) to implement its hardware functions.

Some of the boards and their connector clips remind me of hardware in other minis, such as the PDP line. The physical design of the Lisa makes it very easy to swap out its hardware; it's very well designed from this point of view. But unlike minis, the Lisa was meant as a single-user machine. It's also the opposite of the Mac, which was a closed system with no user-serviceable parts at all – the Mac is clearly a micro in its design.

If you look at the software, it's almost Unix-like, but written mostly in Pascal with quite a bit of 68000 assembly. Under the hood, the OS has provisions for multiple processes, memory protection, virtual memory, pipes, data sharing, etc. All of this is on par with the minicomputers of its era.

The real historical gold is the UI’s design. The research and ideas behind the LOS UI are what’s historically important. The Lisa is unlike anything previous to it in a lot of ways. You could also say that it is unlike anything that came after it.

Yes, a lot of bits were based on concepts from PARC, but the Lisa's designers built a lot more on top of them. Most of the ideas were copied by Microsoft Windows 1.x and 2.x, and some made their way into the Mac over the years, but not the interesting bits. I suppose that, had the features I point to as interesting been copied, they would seem commonplace and therefore not interesting.

Ted: Would you agree that there are features in the LOS that were never features in newer operating systems that should have been? If so, which ones?

Ray: Absolutely. The idea of a “document centric” desktop is foreign to today’s desktops. Today you work with documents, yet you think in terms of applications. You might send a Microsoft Word document to someone, not a memo, for example.

The Lisa was different. You thought and worked in terms of documents, not applications. The system was designed to hide its mechanics from you. I look at this as similar to the difference between procedural programming and object-oriented programming. The desktop let you perform actions on documents. A document and its icon seemed far more related and less abstract than they do today. It was about the nouns, not the verbs (applications being the verbs).

You didn’t fire up LisaWrite to write a memo, for example. There was no splash screen announcing that LisaWrite was copyrighted by Apple to distract your flow. No jumping, flirting paper clip annoyance in the corner. No prompts for what template to copy into your new document.

You'd just tear off a sheet of "LisaWrite" stationery and begin writing – not much different than taking a piece of paper from the supply closet and writing on it. The same was true of the other programs, such as LisaCalc, the spreadsheet program. You could, of course, copy and paste between documents as on any modern OS.

When you made a copy of a document, the file showed up with the same exact file name! You can't do this today – modern desktops change the name to say "Copy of". Of course this was a parlor trick, if anything, since the file names displayed on the desktop weren't the file names stored on disk. But it made sense for the user, and that's what counted.

To back up your entire 5 MB hard drive, you’d just drag its icon to a 400K floppy (or presumably to a tape icon – I never saw a tape drive on the Lisa, so that’s my guess). The desktop itself would then copy and split files across multiple floppies, prompting you for more. In fact, if you dragged a file that was bigger than the size of the floppy, it would split it across two or more disks.

When you inserted a floppy containing a portion of the document, it would reconstruct the file, asking you for the other floppies as needed. This was all done with the intent of keeping things simple and consistent. No separate backup program to run, again, all very document centric instead of program centric.

When you shut down the Lisa at the end of the day, it remembered everything you had open and saved their locations on the display. When you powered it back on, it put everything back in the same place you left it. The illusion of a desk was properly kept. In real life, you might not be too happy if you came back in to work on Monday and someone had put your calculator or your clock inside a desk drawer or shuffled your papers around.

Today, if you open an Adobe Acrobat file or a web page, read it halfway, then close it, when you come back to it at a later date, you've lost your place. This is annoying. Sure, hibernation or sleep modes help, but if you reboot your computer, you still lose your place. The computer should do that sort of thing for you, just like the Lisa did.

Unfortunately, neither OS X nor Windows does that. I personally avoid reboots for as long as possible because keeping my place is important to my work flow – but this eats up memory/swap space for documents I don't actively have in focus.

The Lisa was probably one of the very first machines with a software controlled power supply. There was no Big Red Switch to push. The power switch acted as any other keyboard switch as far as the OS was concerned. We didn’t see that in Macs until the II series, or in PCs until after 1995.

The Lisa was very advanced for its time. With LOS 3.x you got a multitasking machine that did almost everything a modern machine with a full office suite does – and this was 25 years ago.

Ted: Is there anything else you would like to say?

Ray: Ideally, I'd like the experience an emulator provides to be identical to actually using the machine, whether that's the Lisa or any other emulated system. This just isn't possible, as there are many physical feedback systems that we're not aware of – the tactile feel of the keyboard and mouse, the flicker of a CRT monitor versus a modern LCD display, and so on.

You can get close in some areas with skins, sound effects, and such, but it’ll never be perfect. Historical accuracy of experience is what counts in this game.

There's nothing quite like getting an email dripping with joy saying, "I've always wanted a Lisa and couldn't afford one, but now I get what it was all about." That makes all the time I spent on this worth it.

Ted: Ray, on behalf of all of us at Low End Mac, thank you. I believe you have made something truly great here, and it was an honor talking to you. Thanks!

Ray: Gladly, thanks Ted.

Keywords: #lisaemulator #applelisa #rayarachelian

Short link: http://goo.gl/CdlO8Q

searchword: rayarachelian