Part 14: Tech Post 6: Q&A
We've covered the screens of Solaris, so I opened up the thread to questions for one last survey before the final wrap-up. Since these are often more questions of research than of software engineering, I've had to give myself a bit of a crash course in these things along the way. I mostly ended up using my copy of Racing the Beam, the Stella Programmer's Guide, various Wikipedia articles, and some threads on the AtariAge forums. Any errors are my own, either for misunderstanding what I read or for trusting the wrong people.
TooMuchAbstraction posted:
Here's a question: when does the actual game logic happen? Are you constantly racing the beam, ducking away for a dozen cycles to handle controller input or something, and then ducking back to adjust sprite positions so the right thing gets drawn? If so, I can't fathom how anyone could possibly manage to write a game of this level of complexity.
Short answer: there are 70 scanlines' worth of time between each frame, and the program logic happens there. This is backwards from how the 2600's successors and even its contemporaries did it, which is kind of cool, because it leverages abilities that the Atari provided and those other systems did not.
Long answer: Let's start by stepping back a bit and looking at what makes up a television signal. I'll be restricting my discussion to the NTSC standard, the 60fps television standard that is universal in North America. Some of the general ideas, including the parts that are relevant to TMA's actual question, generalize to other regions' standards, but the details vary widely.
A standard-definition NTSC television broadcast is made of scanlines. At this point that needs no introduction, but I start here because old televisions don't really know anything else. They are much more analog devices than one would expect. As a result, television signals include synchronization information that can be used to tune the beam so that the picture stays stable. Everything else in the signal is either part of a scanline or interpreted in terms of it.
The first few scanlines of a frame are called the vertical blanking period, or VBLANK for short. No video data is actually transmitted during this time. During the VBLANK signal, the TV's cathode ray is turned off completely.
One might think that starting VBLANK would be enough to tell the television that it's time to re-aim the cathode ray back up to the top left for the next frame. But that would require making the circuitry more complicated. The reset command is actually a signal independent from VBLANK, called vertical synchronization or VSYNC. The VSYNC signal is mixed in with the first three scanlines of VBLANK.
What there is for vertical, there is also for horizontal: those signals are called HBLANK and HSYNC. The HSYNC happens in the middle of the HBLANK period, and the segments before and after it are referred to as (seriously) the "front porch" and the "back porch". The back porch also includes a reference signal called the color burst, which tunes the receiver to pick color information out of the signal.
There's one last kind of hilarious piece to this story, though. Apparently, even in the late 1970s, television displays were of sufficiently variable quality that you couldn't really rely on the picture entirely fitting on the screen. So in addition to not putting anything important near the bottom of the picture, a broadcast might also include a few extra scanlines at the bottom that were never intended for display at all. That region was called the overscan.
This all added up to huge headaches for everybody. For the most part, TVs ran their images at about 60Hz because that was the frequency of the AC power circuit. But to get a stable picture you'd tune that oscillation a bit, and then there were other, much faster oscillators to handle the requirements of the scanlines and the color display. But the intent was that they'd sync to the signal, and the signal had a ton of fudge factor in it. VBLANK only really had to be long enough to let the cathode ray re-aim, and there wasn't really a requirement on that either. So you ended up kind of splitting the difference between what the TV's own oscillators demanded and what the signal commanded.
Atari's recommendation to its own developers (and this was widely respected by other companies as well) was to have 40 lines of VBLANK (of which the first 3 were also VSYNC), 192 lines of actual picture, and then 30 lines of overscan, for a total of 262 scanlines. Their electrical engineers had determined that this was the safest way to ensure that a picture would actually appear in its entirety on basically every TV you'd hook the VCS up to. Some of those TVs were probably older than the moon landing.
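For reference, here's that frame budget written out as assembler constants. This is just a sketch in DASM-style 6502 syntax; the names are mine, not from any official include file.

```
; Atari's recommended NTSC frame layout (constant names are my own invention)
LINES_VSYNC    = 3      ; the first 3 VBLANK lines also carry VSYNC
LINES_VBLANK   = 40     ; includes the 3 VSYNC lines
LINES_PICTURE  = 192
LINES_OVERSCAN = 30
LINES_TOTAL    = LINES_VBLANK + LINES_PICTURE + LINES_OVERSCAN  ; = 262
```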
The 2600's successors up through the PlayStation 2 also had to contend with this, but basic standards of television quality improved: they gained vertical resolution first by getting to assume that the overscan region would always be visible, and then by getting to assume that VBLANK doesn't really need to be 40 scanlines long. Modern SDTV is canonically 240 scanlines of picture at 60fps, or 480 interlaced at 30; 480i is the shorthand.
Programs for most consoles never had to care about any of this. Their system had a vertical resolution, and the GPU would handle all the syncing and blanking signals on its own and send a separate signal to the CPU when a new frame began. Games would organize themselves around waiting for the new-frame signal (usually sent with the start of VSYNC), setting up the graphics for the frame about to be drawn during VBLANK, and then doing the work of computing the next frame while the GPU did the rendering of the frame itself.
The 2600, on the other hand, has to do it backwards; since the CPU is so intimately involved in working out the details of the display, it can only compute the frame to come during the VBLANK period. Fortunately, thanks to the overscan convention, it gets nearly twice as much time to do its work as an NES or C64 would have had.
Unfortunately, the Television Interface Adapter only handles HBLANK, HSYNC, and the color burst on its own. VSYNC and VBLANK are under software control. (Interestingly, despite offloading so much of the usual graphical work to the CPU, it is in fact more hardcore about color and horizontal synchronization than its successors; the 2600 is the only system I know of that has a "pause the CPU until the very start of HBLANK" primitive, and the reason it has 160 pixels of horizontal resolution is that this works out to one pixel per color clock, the finest resolution you can display while guaranteeing no color distortion.)
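To make "under software control" concrete, here's roughly what the top of a frame looks like in 6502 assembly, using the register names from the standard vcs.h include (VSYNC, VBLANK, WSYNC). This is a minimal sketch of the common idiom, not the exact code from any particular game.

```
NewFrame:
        lda #2          ; bit 1 set
        sta VBLANK      ; blank the picture
        sta VSYNC       ; turn the VSYNC signal on
        sta WSYNC       ; each WSYNC write halts the CPU until the next
        sta WSYNC       ;   scanline begins, so three writes here hold
        sta WSYNC       ;   VSYNC for the required three scanlines
        lda #0
        sta VSYNC       ; VSYNC off; we're now in the rest of VBLANK
```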
Now, it would still be an impossible nightmare to get anything done if you had to manage the overscan and VBLANK signals the same way you handled the display. The CPU and the TIA are not, alone, really up to this task. But the Atari 2600 has three chips in it. The third is variably called the PIA or the RIOT. It's what has the 128 bytes of RAM, and it's what wrangles the two I/O ports. (The latter is where it gets the name PIA: it's the Peripheral Interface Adapter.) But the RIOT also has a programmable interval timer, which is where the name RIOT comes in (RAM/Input/Output/Timer). The timer can be loaded with any value from 1 to 255 and told to count down one unit every cycle, every 8 cycles, every 64 cycles, or every 1024 cycles. A scanline is 76 cycles long, so the 64-cycle interval is eminently well-suited for counting down the overscan or VBLANK periods while doing the true work of the game logic. Once the work is done, you just keep checking the timer value until it's zero, then do one last sync against HBLANK, and you are in cycle-exact, solid position in the frame, ready to go once more.
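Here's roughly what that looks like for the overscan period, again in 6502 assembly with the usual vcs.h register names (VBLANK, TIM64T, INTIM, WSYNC). GameLogic is a hypothetical stand-in for the real per-frame work, and the exact timer value varies from game to game; this is a sketch of the idiom, not code from Solaris.

```
Overscan:
        lda #2
        sta VBLANK        ; blank the beam for the bottom of the frame
        lda #35           ; 35 ticks * 64 cycles = 2240 cycles, just under
        sta TIM64T        ;   30 scanlines (30 * 76 = 2280 cycles)
        jsr GameLogic     ; do the real work; it must finish within the budget
WaitOverscan:
        lda INTIM         ; read the remaining count
        bne WaitOverscan  ; spin until the timer hits zero
        sta WSYNC         ; one last halt-until-HBLANK puts us back in a
                          ;   cycle-exact position to start the next frame
```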
Carbon dioxide posted:
What I do wonder is how the programming went back in those days. Did they write in some higher level language and compile that to the Atari instructions or did they write those instructions directly? Did they have anything resembling modern debugging tools?
Short answer: They had specialized debugging hardware, and devices that were the '80s equivalent of a flashcart programmed and powered via USB. By 1982, personal home computers were powerful enough to build Atari games and control the flashcarts. Major companies had access to compute servers to do their work on; they'd have been the equivalent of a single Unix workstation powering a whole office. But that's plenty when you're building a game that caps out at 16K.
Long answer: More "finding rumors and interview snippets on the web" here than usual, so keep your salt shakers handy...
Let's start by acknowledging that Stella's debugger is phenomenal even in comparison to modern debugging tools for modern systems. But the toolkit available to an embedded software developer was very different back then, too. Remember how the Television Interface Adapter doesn't know anything about VBLANK or VSYNC? You could very easily hook the output of the chip to an oscilloscope, tune it to the length of a scanline, and then watch what one scanline looks like, repeated over and over. That'd make it very obvious that tricks like the lives/fuel trick really worked. A manually-steppable clock circuit would also let you "cycle count" with a precision that looks impossible to modern eyes.
I don't know if those tricks were used. However, Racing the Beam notes that Parker Brothers figured out how to make its licensed Atari games purely by reverse-engineering. They took the cap off the graphics chip and photographed its components, and had two electrical engineers study that while other engineers studied the code from ROM dumps. They got good enough results to make games that are still respected parts of the 2600 canon, too, so it certainly seems to have borne fruit.
I think it's reasonable to say that people programming the earliest home computers were as much electrical engineers as they were programmers.
As for software development... the Atari would have been programmed almost entirely in 6502 assembly language. The way the graphics chip works requires cycle-accurate timing to produce an accurate or even a stable display, and that level of control is only available when you're selecting every last addressing mode of every last instruction.
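A quick illustration of what that means (a toy example of my own, not anything from Solaris): the same load instruction costs a different number of cycles depending on its addressing mode, and a display kernel is written by adding those numbers up by hand until every scanline comes out to exactly 76 cycles.

```
        lda #$1F        ; immediate:   2 cycles
        lda $80         ; zero page:   3 cycles
        lda $1080       ; absolute:    4 cycles
        lda $1080,x     ; absolute,X:  4 cycles (5 if the index crosses a page)
```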
That's not really as bad as it sounds. 6502 is one of the easiest assembly languages to work with, and even in 1977 a wide array of powerful symbolic and macro assemblers would have been available. In the earliest days, or at the largest companies, you would use a timesharing minicomputer like a PDP-11 or a VAX.
By 1982, all the most popular home computer models would be powerful enough to run decent if not excellent assemblers. The assemblers would be easy to come by, too, since those same computers used the same chip family as the console. (This would also mean that you could test the correctness of your non-graphics logic on the computer before ever feeding it to the console; the chips were slightly different models, but they all spoke the same machine language.) By 1986, when Solaris was released, 6502 assemblers good enough to meet 2016 quality standards would be available for cutting-edge home systems. The 68k-based Mac Plus, Atari ST, and Amiga would all be available and well-established, and you'd also have 386es running DOS 3.2 on the IBM side.
And in the extreme... well. Steve Wozniak wrote much of the Apple II's core software in assembly language with pen and paper, and hand-assembled it into machine code that he then entered into RAM with a hex editor. 6502 assembly is actually simple enough that this is a semi-reasonable thing to do: the cheat sheet with all the information you need will fit on a single sheet of paper at a normal print size. But you wouldn't do that for the Atari, because the Atari doesn't have a built-in hex editor. You need to get your program into a cartridge or something very like one.
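To give a flavor of what hand-assembly means, here's a made-up two-instruction example (mine, not anything Woz actually wrote): you look each instruction up on that cheat sheet and write down its opcode and operand bytes yourself.

```
; Source                            Hand-assembled bytes
  lda #$1F     ; load the value 31    -> A9 1F
  sta $09      ; store it to $09      -> 85 09
               ; (on the 2600, address $09 happens to be COLUBK,
               ;  the background-color register)
```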
So how would you do that? Well, battery-backed RAM (think NES cartridge saves) would have been a reasonable technology at the time, and EEPROM chips (the precursor to modern flash drives) might be reasonable too, but they weren't invented until shortly after the Atari 2600 went to market, and they were super expensive anyway. It appears, however, that specialized equipment was used: a ROM emulator that downloaded programs through a serial port and stayed alive for as long as the computer feeding it data kept it powered.
That makes the barrier to entry by 1982 pretty low. Apple II, Atari 8-bit, and Commodore 64 computers were all available and more than powerful enough to create programs on the scale of Solaris with a symbolic assembler, and could also drive ROM emulators through their expansion ports.
One last funny story on this: there was a short-lived expansion for the Atari 2600 called the Starpath Supercharger. This was a RAM expansion cartridge (6K! Forty-eight times the 2600's RAM! Over triple the RAM of an NES!) that could be loaded with programs off a cassette tape. Starpath apparently did all its dev work on Apple II-series computers, and tested things by just writing them out to tape and loading them up in a Supercharger. That's pretty much the closest anyone at the time got to programming the 2600 the way a small team would have programmed more typical 8-bit machines.
NEXT TIME: We conclude the tech strand with a duel of legends.