Those of us old folks who got in on the start of embedded controllers can remember when small-room-sized computers started to get competition from minicomputers. They were built of multiple circuit boards in an enclosure about the size of a large microwave oven and were the first computers used for more control-related applications. The computing was done with bit-slice chipsets that were 4 bits wide (an extremely low level of integration by today’s standards, but the available technology drove the applications). If you wanted a 12-bit processor, you laid out three sets of processor chips on your circuit boards (or four sets for 16-bit processing).
About that time Intel and Motorola (and was it Fairchild and RCA?) began to develop more highly integrated ICs, called microprocessors, that put more of the processor on a single chip. To get a complete computer you still had to add memory, clock, and I/O functions on additional ICs. The first ones I can recall were the 4004 from Intel and the 6800 from Motorola.

My first exposure was to the 8008. It came in a rack with several small circuit boards, and you could add 1K cards of memory for about $100 each. There was no such thing as a boot ROM; you started the machine by manually setting an assembly-language jump (3 bytes, hence 3 sets of switch settings, each followed by pushing the load button). Software development was done on a Teletype machine that included a punched-paper-tape reader. If you have heard of a “3-pass assembler,” it was literal: you read the assembler program in off the paper tape, then fed the punched-tape version of your alphanumeric assembly-language program through the tape reader three times before you got the punched tape holding the actual machine codes. Then you could load the program and start it, but there was no way to tell what was wrong if it didn’t work. Debugging was a slow process of writing a small program, putting in breaks or some other way to indicate how far it got, and then writing a new version. I’m starting to recall the intense mental effort required to get even a small program working!
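To make the multi-pass idea concrete, here is a minimal sketch in Python (a luxury that certainly didn’t exist then). The mnemonics, opcodes, and instruction lengths are invented for illustration, not taken from the real 8008; the point is just why a forward jump cannot be resolved until a later pass.

```python
# A toy illustration of why an assembler needs multiple passes.
# The instruction set here is invented for the example, not the
# real Intel 8008 encoding.

SIZES   = {"LDI": 2, "JMP": 3, "HLT": 1}      # bytes per mnemonic
OPCODES = {"LDI": 0x3E, "JMP": 0xC3, "HLT": 0x76}

source = [
    ("START", "LDI", "5"),     # (label, mnemonic, operand)
    (None,    "JMP", "DONE"),  # forward reference: DONE isn't known yet
    (None,    "LDI", "7"),
    ("DONE",  "HLT", None),
]

# Pass 1: size each instruction and record where every label lands.
symbols, addr = {}, 0
for label, mnemonic, _ in source:
    if label:
        symbols[label] = addr
    addr += SIZES[mnemonic]

# Pass 2: emit machine code, filling in the now-known label addresses.
code = []
for _, mnemonic, operand in source:
    code.append(OPCODES[mnemonic])
    if mnemonic == "JMP":
        target = symbols[operand]
        code += [target & 0xFF, target >> 8]   # low byte, high byte
    elif mnemonic == "LDI":
        code.append(int(operand))

# Pass 3: print the machine codes beside the mnemonic instructions.
addr = 0
for label, mnemonic, operand in source:
    n = SIZES[mnemonic]
    raw = " ".join(f"{b:02X}" for b in code[addr:addr + n])
    print(f"{addr:04X}  {raw:<9} {mnemonic} {operand or ''}")
    addr += n
```

The first pass learns that DONE lands at address 0007, the second pass can then punch the real address where the placeholder bytes went, and the third pass gives you the listing.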
Our “reconditioned” Teletype was something we grew to hate. The visual printout was typed on a continuous roll of paper, so you had to cut or fold it as you went. Our particular ‘cross to bear’ was the paper-tape reader, which occasionally misread a byte (the tape ran 8 bits across, with a hole for each 1 and no hole for a 0, I think). The assembler sized the bytes for each assembly instruction on the first pass, leaving the symbolic jumps and calls as 00 00 (unpunched); I think it also caught the symbolic jump-to locations on that pass. It then filled them in on the second pass. As best I recall, the third pass printed out the machine codes beside the assembly (mnemonic) instructions. The big problem came when the tape reader misread a byte on the first or second pass, making some instruction two bytes instead of three or vice versa. The result was a jump that led not to an op code but to the second or third byte of an instruction, which the processor would treat as some totally different instruction. I can remember manually patching jumps on occasion just to save the pain of doing it all over (with a real possibility of the reader producing a new set of errors)!
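Here is what that desynchronization looks like, using the same invented instruction set as the sketch above. Decoding the identical byte stream one byte off turns an operand byte into a jump and leaves the decoder lost in mid-instruction, exactly the kind of garbage we chased:

```python
# A sketch of the misaligned-jump failure described above. Opcodes and
# instruction lengths are invented for illustration, not real 8008 codes.

LENGTHS = {0x3E: 2, 0xC3: 3, 0x76: 1}    # hypothetical LDI, JMP, HLT
NAMES   = {0x3E: "LDI", 0xC3: "JMP", 0x76: "HLT"}

# Intended program: LDI 0xC3 / JMP 0007 / LDI 07 / HLT
code = [0x3E, 0xC3, 0xC3, 0x07, 0x00, 0x3E, 0x07, 0x76]

def disassemble(code, start):
    """Decode the byte stream as if execution began at 'start'."""
    pc = start
    while pc < len(code):
        op = code[pc]
        if op not in LENGTHS:   # landed inside another instruction's operand
            print(f"{pc:04X}  {op:02X}        ?? not an opcode")
            pc += 1
            continue
        n = LENGTHS[op]
        raw = " ".join(f"{b:02X}" for b in code[pc:pc + n])
        print(f"{pc:04X}  {raw:<9} {NAMES[op]}")
        pc += n

print("Decoded from the intended address 0000:")
disassemble(code, 0)
print("\nDecoded one byte off, from 0001:")
disassemble(code, 1)
```

One byte of offset and the immediate operand 0xC3 is suddenly a three-byte jump to a nonsense address; everything decoded after it is suspect too.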
In later posts I will reminisce about subsequent great steps forward in the development process.