FIVE GENERATIONS OF COMPUTERS

Since the development of the Harvard Mark I, digital computing machines have progressed at a rapid pace. Computers are often divided into five generations according to a series of advances in hardware, mainly in logic circuitry. Each generation comprises a group of machines that share a common technology.

The First Generation (the 1940s – much of the 1950s)

ENIAC, along with the other electronic computers built in the 1940s, marks the beginning of the so-called first generation of computers. These computers cost millions of dollars and filled entire rooms. They used thousands of vacuum tubes for calculation, control, and sometimes for memory as well. Vacuum tubes were bulky, unreliable, energy-consuming devices that generated large amounts of heat. The vacuum tubes of a single machine consumed enough electricity to power a small town. As long as computers were tied to vacuum tube technology, they could only be huge, heavy, and expensive. Though their operations were very fast in comparison with manual calculations, they were slow by today's standards.

The Second Generation (the late 1950s – the early 1960s)

The invention of the transistor in 1947 brought about a revolution in computer development. Germanium (and later silicon) transistors were smaller, more reliable, and more efficient than the vacuum tubes that had been used in electronics up to that time. These semiconductor devices generated and controlled the electric signals that operated the computer. By the late 1950s and early 1960s, vacuum tubes were no longer used in computers.

Transistors led to the creation of smaller, faster, and more powerful computers known as minicomputers. They were operated by specialized technicians, who often dressed in white lab coats and were usually referred to as the "computer priesthood"1. The machines were expensive and difficult to use, and few people came into direct contact with them, not even their programmers. The typical interaction went as follows: a programmer coded instructions and data on preformatted paper, a keypunch operator transferred the data onto punch cards, a computer operator fed the cards into a card reader, and, finally, the computer executed the instructions or stored the cards' information for later processing.

The so-called second-generation computers, which used large numbers of transistors, reduced computational time from milliseconds to microseconds, or millionths of a second. At that time, there were two types of computers. There were room-sized mainframes, costing hundreds of thousands of dollars, that were built one at a time by companies such as International Business Machines Corporation and Control Data Corporation. There were also smaller (refrigerator-sized), cheaper (about $100,000), mass-produced minicomputers, built by companies such as Digital Equipment Corp. and Hewlett-Packard Company for scientific research laboratories, large businesses, and institutions of higher education.

Most people, however, had no direct contact with either type of computer, and the machines were popularly viewed as giant brains that threatened to eliminate jobs as a result of automation. The idea that anyone would have his or her own desktop computer was generally considered far-fetched2.

The Third Generation (much of the 1960s – the 1970s)

The next step forward in computer miniaturization came in 1958, when Jack Kilby, an American engineer, designed the first integrated circuit (IC). His prototype consisted of a germanium wafer that combined tiny transistors, diodes, resistors, and capacitors – the main components of electronic circuitry. The microchip itself was small enough to fit on the end of your finger (see Figure 1).

The invention of the IC marks the beginning of the third generation of computers. With integrated circuits, computers could be made smaller, less expensive, and more reliable. They could perform many data-processing operations in nanoseconds, which are billionths of a second.
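
As a rough illustration, the short Python sketch below converts the millisecond, microsecond, and nanosecond scales mentioned in this section into operations-per-second figures, assuming, purely for illustration, that one operation takes exactly one unit of time.

    # Illustrative only: the time scales named in the text, expressed in seconds,
    # and the operation rates they imply if one operation takes one unit of time.
    MILLISECOND = 1e-3  # a thousandth of a second
    MICROSECOND = 1e-6  # a millionth of a second
    NANOSECOND = 1e-9   # a billionth of a second

    for name, unit in [("millisecond", MILLISECOND),
                       ("microsecond", MICROSECOND),
                       ("nanosecond", NANOSECOND)]:
        print(f"one operation per {name:<11} -> {1 / unit:>13,.0f} operations per second")

Read this way, each step from one unit to the next corresponds to roughly a thousandfold increase in speed.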

Figure 1