File I/O

In SCOPE/Hustler, nearly all input and output from user programs was done through local files. A local file could reside on a disk drive, a tape drive, or an interactive terminal.

Local filenames

Local filenames could be 1-7 alphanumeric characters long; the first character had to be alphabetic. Famed CDC systems programmer G. R. Mansfield wrote a brilliant segment of CPU assembly language, about 12 instructions long, that tested a 60-bit word to see whether it contained a valid local filename. Operating system convention required local filenames to be left-justified and zero-filled. Mansfield's code depended heavily upon the Display Code character set.
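The filename rules above can be sketched in a few lines of Python. This is only an illustration of the rules, not Mansfield's code: his test operated on a whole 60-bit Display Code word (left-justified, zero-filled), while this sketch checks an ordinary character string and assumes the alphanumeric characters of Display Code map to ASCII letters and digits.

```python
def is_valid_local_filename(name: str) -> bool:
    """Test the SCOPE/Hustler local-filename rules: 1-7 alphanumeric
    characters, the first of which must be alphabetic."""
    if not 1 <= len(name) <= 7:
        return False
    if not name[0].isalpha():
        return False
    return name.isalnum()

assert is_valid_local_filename("INPUT")
assert is_valid_local_filename("A1")
assert not is_valid_local_filename("1FILE")     # first character not alphabetic
assert not is_valid_local_filename("TOOLONG8")  # longer than 7 characters
assert not is_valid_local_filename("")          # at least one character required
```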

Local filenames were unique only within a job; there could be many jobs with identically named files that were completely unrelated. In fact, it was difficult for jobs to share files; see Permanent Files.


File tables

User programs maintained data structures called File Environment Tables (FETs) through which they issued I/O requests to the OS. Unlike most modern operating systems, SCOPE/Hustler programs did not pass a file handle to the OS to specify a file. Instead, a program passed a pointer to a FET, the first word of which contained the filename. For each I/O request, the OS had to look up the file by name to find the proper internal data structure. This sounds inefficient, but remember that a filename fit into a single CPU word.

A memory-resident table called the File Name Table (FNT) contained entries for all the open files of all jobs currently in memory. Since a job could have at most 63 files open, and since there could be at most 7 (later, 15) jobs in memory at once, only a few hundred table entries had to be scanned for each request. Furthermore, the OS maintained a field in the user's FET recording the last known location of that file's entry in the FNT. When an I/O request was processed, the OS conducted a circular search of the table, starting at that location. The file's entry might have moved because the job may have been swapped out and back in since the last I/O on that file: when a job was swapped out, its entries in the FNT were swapped out with it, and when the job was swapped back in, its file entries were copied back into the FNT, into whatever slots happened to be free.
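The circular search can be sketched as follows. This is a hypothetical model, not the real FNT layout (real entries held packed 60-bit words, not Python strings): the table is a list of slots, `None` marks a free slot, and the search starts at the slot where the file's entry was last seen and wraps around.

```python
def find_fnt_entry(fnt, filename, last_index):
    """Circular search of a File Name Table for `filename`, starting at
    `last_index`, the slot where the entry was last known to be."""
    n = len(fnt)
    for offset in range(n):
        i = (last_index + offset) % n
        if fnt[i] == filename:
            return i
    return None  # not found: file not open in any in-memory job

# The entry may have moved after a swap-out/swap-in, but it is usually
# found at or near the remembered slot:
fnt = [None, "OUTPUT", None, "INPUT", "TAPE1"]
assert find_fnt_entry(fnt, "INPUT", last_index=3) == 3   # still where we left it
assert find_fnt_entry(fnt, "INPUT", last_index=1) == 3   # found after a short scan
assert find_fnt_entry(fnt, "SCRATCH", last_index=0) is None
```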

Non-disk files

Most I/O was, of course, to disk files. This included I/O nominally from a card reader or to a card punch or printer. In SCOPE/Hustler, only the OS could actually do I/O to these devices. When a card deck was read in, the OS created a disk file containing the contents of the cards. When the resulting user job ran, this file was a local file named INPUT.

Printer output was written to a file named OUTPUT, which was printed at job termination. Similarly, the file PUNCH, if it existed, was sent to the card punch. (I can't remember whether these were hard-wired magic filenames, or whether these files had a special "disposition".) Other files could be printed or punched by giving them a print or punch "disposition", usually via the DISPOSE control statement.

A local file could be created and associated with a reel of tape on a tape drive via the REQUEST statement.

Files could be associated with the user's terminal via the CONNECT statement. This only worked from interactive jobs, of course, and it only applied to that job's terminal. There were different ways of "connecting" a file, depending upon the character set (Display Code vs. ASCII) and perhaps a few other attributes. Normally, only brand-new files were connected, but a trivial anomaly of the implementation was that an existing disk file could be connected. In that case, the contents of the disk file would be unavailable until the file was disconnected.

Circular I/O

All file I/O was accomplished through the CIO (Circular I/O) peripheral processor (PP) request. CIO used circular buffers in which data transfers could wrap from the last word of a buffer to the first word. The user job and the OS together kept track of buffer information through four 18-bit fields in the File Environment Table in the user's field length:

FIRST pointed to the first address of the file's buffer (in the user's address space, aka field length).
LIMIT was the last word address + 1 of the buffer. The length of the buffer was not stated explicitly, as it was slightly more efficient to check for the end of the buffer by comparing a pointer to the contents of LIMIT.
IN was the address in the buffer of the next location into which data would be placed. In the case of an input request, this would be the OS placing data into the buffer from a file. In the case of output, this would be the next place that the user program would put data to be written to a file.
OUT was the opposite of IN: the address in the buffer of the next valid location in the buffer containing data to be processed. In the case of an input request, this would be the user job retrieving data recently placed there by the OS. In the case of an output request, this would be the OS removing data from the buffer in order to write it to a file.

If IN == OUT, then the buffer was empty. As a result, the effective size of the buffer was one word less than the number of words in the buffer. Believe it or not, this bothered me: memory was tight in those days!
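The pointer scheme above can be modeled with a small sketch. This is an illustration in Python, not CDC code: real FET pointers were 18-bit addresses of 60-bit words in the user's field length, while here FIRST/LIMIT/IN/OUT are just list indices. Note how keeping one slot free makes the IN == OUT test unambiguous, at the cost of one word of capacity.

```python
class CircularBuffer:
    """Model of the FET buffer fields: FIRST and LIMIT bound the buffer,
    IN is where the producer writes next, OUT is where the consumer reads
    next.  Empty when IN == OUT, so usable capacity is size - 1."""

    def __init__(self, size):
        self.buf = [None] * size
        self.first, self.limit = 0, size
        self.inp = self.out = self.first

    def _advance(self, p):
        p += 1
        return self.first if p == self.limit else p  # wrap at LIMIT

    def put(self, word):
        nxt = self._advance(self.inp)
        if nxt == self.out:
            return False              # full: one slot is always kept free
        self.buf[self.inp] = word
        self.inp = nxt
        return True

    def get(self):
        if self.inp == self.out:
            return None               # IN == OUT: buffer is empty
        word = self.buf[self.out]
        self.out = self._advance(self.out)
        return word

b = CircularBuffer(4)                 # 4 words, but holds at most 3
assert all(b.put(w) for w in (1, 2, 3))
assert not b.put(4)                   # effective size is one word less
assert [b.get(), b.get(), b.get()] == [1, 2, 3]
assert b.get() is None                # empty again
```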

The use of circular I/O allowed a job to issue an I/O request before it had completely finished processing the previous request. It also allowed a single I/O request to transfer more data than the size of the buffer. This was possible because the user job, for instance, could be processing data and updating the OUT pointer while the OS was placing data into the buffer from a file. As long as neither side caught up to the other, a single I/O request could go on and on for several buffers' worth of data. Since I/O was performed by PPs and user jobs executed in a CPU, it was in fact quite feasible for more than one buffer's worth of data to be transferred in a single request.
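The effect can be demonstrated with a toy simulation: a 4-word buffer streams 10 words in one "request" because the consumer keeps freeing slots as the producer fills them. This is a hypothetical sketch in which the producer and consumer alternate in a loop; on the real hardware the PP and the CPU ran concurrently, which is what made the overlap pay off.

```python
# Simulate the PP (producer) filling the buffer from a file while the
# user program (consumer) drains it.  One slot is kept free, so the
# 4-word buffer holds at most 3 words at a time.
SIZE = 4
buf = [None] * SIZE
inp = out = 0                         # IN and OUT pointers, both at FIRST
data = list(range(10))                # the whole "request": 10 words
received = []

while len(received) < 10:
    # Producer side: fill while there is room and data remaining.
    while data and (inp + 1) % SIZE != out:
        buf[inp] = data.pop(0)
        inp = (inp + 1) % SIZE
    # Consumer side: drain everything currently buffered.
    while out != inp:
        received.append(buf[out])
        out = (out + 1) % SIZE

assert received == list(range(10))    # far more than one buffer's worth
```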

Years after we shut down our last CDC machine, I became peripherally involved in a lawsuit between two other parties regarding circular I/O. One party claimed that they had recently (like, in the 1980's) invented the idea, and they apparently had a patent on it. The other party contested the patent, correctly claiming that circular I/O was quite common in the industry as far back as the 1960's. As a foe of software patents, I was trying to help the second party. I wasn't much help because I no longer had listings or ready access to documentation that discussed CDC circular I/O. I believe that the case was settled in favor of the good guys without me, though.
