February 10, 2018
It is possible to program in many of the modern programming languages, without understanding what takes place in the CPU of the computer during program execution. In many cases, however, it is helpful to understand the nature of a stored program.
- Standard Cobol, Mike Murach
There are two community libraries, or little libraries, on my street. Little libraries are a delight; they look like small little houses, often in front of actual houses, and serve as a home for books that can be freely lent, taken, and deposited by community members. Little libraries are one of the things that I get to see, every now and then as I walk around the city, that make me feel like the internet doesn’t command and run my life.
On several occasions, I have borrowed and deposited books into little libraries. It’s not unlike the “need-a-penny take-a-penny / have-a-penny leave-a-penny” tray you see in some stores.
A few weeks ago I walked by a little library on my way to the subway and noticed a book cover that made me double back for a second look.
Interesting. I walked away, a little faster, realizing that I was running a bit late. This was not the first time I had seen a programming-language book in a little library. But something about the Cobol book struck me, and I thought about it for a few days (I think it was the cover; it’s a really cool cover). I had heard of Cobol a few times in passing at meet-ups or on HackerNews. I didn’t know much about it, other than that it was an early programming language.
In case you’re wondering whether I now, with this book sitting next to me, know much more about Cobol—let me be quick to disappoint you—I don’t. But I did skim the first chapter, partly to prove to myself that I wasn’t judging the book merely by its cover, but also because I had a nagging feeling that I should push myself to indulge in some history of computing.
The first chapter of the book is exactly that, a selection of “background information” to acquaint the reader with how a computer processes data. The year is 1965. The diagrams are cool (and likely soon to be a synth-wave album-cover). The punch cards are punchy. The instructions—very manual.
I’ve pulled a few quotes out of the first chapter that I found striking and/or interesting. Note: Gender pronouns from quotes have been switched from “he” to “they”.
Computer systems can accept input in the form of punched cards, punched paper tapes, magnetic tapes, magnetic disks, and checks recorded in magnetic ink. Computer systems can give output in the form of printed reports, punched cards, magnetic tapes, magnetic disks, and visual displays similar to a television screen.
We’re so far away from this history: unlimited cloud storage, 16–32 gigabytes of RAM in laptops, SSDs, high-resolution displays—all quite commonplace. Occasionally, I hear someone say that computers used to take up an entire room!
In university (or was it high school?) we had to use these things called Scantron™ cards—“bubble sheets” for filling out multiple-choice questions on exams. I recall one teacher telling us that there were only one or two Scantron Scanning Machines (aka, computers, aka, giant boxes from the ’70s; that, yes, they took up an entire room, and yes, it would be a while before we got our grades back) across the school board. I suppose that’s the closest I’ve gotten to these (invokes documentary voiceover-voice) “ancient machines.”
Somehow I doubt that students still use these things. I vividly remember being told the accuracy with which we had to fill out our multiple-choice answers (or the machine wouldn’t process them and you would lose marks!)
A forms-control tape is a loop of paper tape, punched with holes that correspond to the printing lines of a form to be printed. [… paragraph of detail … ]. Although it is very difficult to visualize the operation of a forms-control tape from reading about it, it is quite simple to grasp the theory when you actually see one work. (10)
The above excerpt is describing the printer portion of a computer. The paragraph leading up to the quote is indeed a detailed description of how the thing works; I couldn’t picture it. At least computers today are so complex and tiny that I don’t even have to try.
Although a program may consist of thousands of instructions, there are basically only four types that a computer can execute, plus some miscellaneous instructions. Therefore, a 6000 instruction program consists of the same types of instructions being executed over and over again. These basic types of instructions are (1) input and output (I/O), (2) data movement, (3) arithmetic, and (4) logic instructions. (20)
I like this one. These are the kinds of things I would have benefitted from reading when I started down the path of teaching myself programming. In retrospect, sure, I needed to learn React and Angular and Mongo (this was 2015) so that I could figure out how to build My Cool Side Project™️ and hopefully get a job.
I’d like to come back to this quote whenever I fall into the potential analysis-paralysis of trying to write “idiomatic” code, specific to the language I’m using.
Just make the thing work. There are basically 4 things you can do.
(I cannot verify that this is true today. It’s entirely possible that now computers can do 4 million things, 4 billion times a second, on 4 million EC2 instances, for only 4 million bucks.)
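As a loose, modern illustration (my own sketch in Python, not anything from the book), those four instruction types still show up in even a trivial program:

```python
# A toy "program" mapping the book's four basic instruction types onto
# ordinary modern code. The card-reading framing is mine, for flavor.

def run(card: str) -> str:
    # (1) Input/output: read a value (here, from a simulated punched card).
    value = int(card)

    # (2) Data movement: copy the value into a "working storage" slot.
    working_storage = value

    # (3) Arithmetic: compute a result from it.
    total = working_storage * 2 + 1

    # (4) Logic: branch on a condition, then emit output.
    if total > 10:
        return f"{total} (big)"
    return f"{total} (small)"
```

Everything else—loops, function calls, string formatting—decomposes into combinations of these same moves, which is roughly the book’s point.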
There are many different languages in which a programmer can write a program. On the lowest level, [they] can code instructions using the same codes and addresses that are used by the computer; that is, [they] can write a program in machine language. These machine-language instructions can then be keypunched into an object deck that can be loaded and executed by the computer.
This one made me laugh a little bit too. Earlier in my learning-to-code journey, I referred to Java as a low-level language. The person I was speaking to gently corrected me: Java isn’t really considered a low-level language.

A few months later I referred to C as a low-level language in another conversation. 🙄 At least reading about actual “memory” slots (card storage) helps demystify memory management a bit. A bit.
It’s a neat book. I won’t read any further; I’m too busy catching up on the next 30 articles that come out on medium/hackernews/reddit on why I should rebuild my single-page application to make use of React hooks or whatever-x new web technology.
Let me rein that in. Let’s come back to the little libraries. I’m still having fun imagining who is going to see the Cobol book next, and who might take it out (when I return it), or who might have an old memory pulled into the forefront the next time they catch that cool, cool cover as they walk by.