
Com Sci 222/322
Machine Organization

Lectures

A Computer as a Big Circuit

Department of Computer Science
The University of Chicago







In this lecture, we'll run through the essential structure of an electronic computer very briefly, observing how it can be implemented as a very large and carefully organized Boolean circuit.

Key modules in a computer

Conventionally, electronic computers are understood in terms of 4 components:

  1. Processor (aka Central Processing Unit, aka CPU)
  2. Memory (aka Random Access Memory, aka RAM, formerly known as Core Memory)
  3. Input (keyboard, mouse, serial lines, network connection)
  4. Output (monitor, printer, etc.)

There is a nonobvious and confusing aspect of this organization in today's computers: another key component of electronic computers gets neglected in this list, the bus. The bus is a general-purpose communication channel connecting the 4 classical components together. All modern computers have a bus, and some have extra buses to provide more flexible I/O.

We'll probe into the circuitry of the memory and processor just enough to get a rough idea of how it can all work.

Clocking

Two disciplines are worth mentioning: edge-triggered clocking, in which flip-flops capture their inputs at a clock transition, and 2-phase clocking, in which two non-overlapping clock signals alternately enable latches, so that a new value can never race through two stages at once.
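
As a toy illustration (Python in place of circuitry, with invented names), here is an edge-triggered D flip-flop; a 2-phase design would instead use two transparent latches enabled by the opposite clock phases:

  class DFlipFlop:
      """Edge-triggered storage: the output changes only at the moment
      the clock goes from 0 to 1, not while it sits high."""
      def __init__(self):
          self.q = 0            # the stored bit
          self.last_clk = 0

      def tick(self, clk, d):
          if clk == 1 and self.last_clk == 0:   # rising edge
              self.q = d                        # capture the input
          self.last_clk = clk
          return self.q

  ff = DFlipFlop()
  assert ff.tick(1, d=1) == 1   # rising edge: input captured
  assert ff.tick(1, d=0) == 1   # clock still high: input ignored
  assert ff.tick(0, d=0) == 1   # falling edge: input ignored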

Memory

In principle, the same logical structure works for registers (the memory elements inside the processor), for cache, and for RAM. The individual bits are remembered by flip-flops, or by capacitors that can store a charge long enough for it to be read and refreshed. The interesting part is the addressing circuitry.

The height of a memory is the number of addresses. The width is the number of bits per address. The essential problem is to create a sufficiently tall memory, 1 bit wide. A 1-bit-wide memory is called a ``bitplane.'' By fanning out the address signal to a sequence of bitplanes, we can get as wide a memory as we like.
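
To make the height/width vocabulary concrete, here is a minimal Python sketch (illustrative only; the class names are invented) of a bitplane, and of fanning one address out across several bitplanes to get a wider memory:

  class BitPlane:
      """A memory 1 bit wide: one bit stored at each address."""
      def __init__(self, height):
          self.cells = [0] * height          # one flip-flop/capacitor per address

      def read(self, addr):
          return self.cells[addr]

      def write(self, addr, bit):
          self.cells[addr] = bit & 1

  class WideMemory:
      """Fan one address out to `width` bitplanes to store width-bit words."""
      def __init__(self, height, width):
          self.planes = [BitPlane(height) for _ in range(width)]

      def read(self, addr):
          return [p.read(addr) for p in self.planes]   # one bit per plane

      def write(self, addr, bits):
          for p, b in zip(self.planes, bits):
              p.write(addr, b)

  mem = WideMemory(height=8, width=4)        # 8 addresses, 4 bits each
  mem.write(5, [1, 0, 1, 1])
  assert mem.read(5) == [1, 0, 1, 1]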

Cascaded multiplexors for read addressing

A multiplexor takes a 1-bit address and selects one of two 1-bit values. Two flip-flops plus a multiplexor form the read side of a memory unit with 2 addresses, each storing 1 bit. To get more addresses, cascade multiplexors in a binary tree as shown below:

[figure: binary tree of cascaded multiplexors]

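In Python, the same tree can be sketched as follows (a toy model, with the address given most-significant bit first):

  def mux2(addr_bit, v0, v1):
      """2-way multiplexor: select v0 when addr_bit is 0, v1 when it is 1."""
      return (v0 & ~addr_bit | v1 & addr_bit) & 1

  def read_tree(addr_bits, cells):
      """Cascade mux2 into a binary tree over 2**len(addr_bits) cells."""
      if not addr_bits:
          return cells[0]
      half = len(cells) // 2
      return mux2(addr_bits[0],                       # most significant bit
                  read_tree(addr_bits[1:], cells[:half]),
                  read_tree(addr_bits[1:], cells[half:]))

  cells = [0, 1, 1, 0, 1, 0, 0, 1]                    # eight 1-bit cells
  assert read_tree([1, 0, 1], cells) == cells[0b101]
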
Decoders for write enabling

To write a memory location, fan out the value to be written to all cells, but enable only the addressed cell. Here's a decoder for a 3-bit address:

[figure: decoder for a 3-bit address]

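A sketch of the decoder in the same toy style: each output line ANDs together each address bit or its complement, so exactly one line is asserted. To write, each decoder output would then be ANDed with the write signal to enable a single cell:

  def decode(addr_bits):
      """n-to-2**n decoder: assert exactly one output line."""
      n = len(addr_bits)
      outputs = []
      for line in range(2 ** n):
          hit = 1
          for i, bit in enumerate(addr_bits):
              want = (line >> (n - 1 - i)) & 1   # bit i of this line's index
              hit &= 1 if bit == want else 0     # the bit or its complement
          outputs.append(hit)
      return outputs

  assert decode([1, 0, 1]) == [0, 0, 0, 0, 0, 1, 0, 0]
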
When addresses have been decoded, an alternate form of multiplexor can be used, made up of ``3-state buffers'':

[figure: multiplexor made of 3-state buffers]

For this multiplexor to work, only one of the address wires from the top may be asserted at a given time. In the right electronic context, the or gate in the circuit above is simply a juncture of wires.
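
Here is a hedged model of that discipline, with None standing in for the high-impedance state and a Python assertion standing in for the electrical rule:

  def tristate(enable, value):
      """3-state buffer: drive the wire when enabled, else high impedance."""
      return value if enable else None           # None models high impedance

  def wire_junction(outputs):
      """The juncture of wires: at most one buffer may drive at a time."""
      driving = [v for v in outputs if v is not None]
      assert len(driving) <= 1, "bus contention"
      return driving[0] if driving else 0

  enables = [0, 0, 1, 0]                         # one-hot, from an address decoder
  cells = [0, 1, 1, 0]
  assert wire_junction([tristate(e, c) for e, c in zip(enables, cells)]) == 1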

Complications for larger memories

As memories get larger, and spread over more than one silicon chip, both the decoding and the multiplexing get done in more complicated, hierarchical forms.

The essence of building memories is that they can be combined as if they were rectangles in a plane, adding a little bit of extra circuitry for each combining operation. An axb memory is one with a addresses and b bits at each address.

  1. Given n axb memories, build a single axnb memory by fanning out the address input and bundling the read & write lines. (The bitplaning technique mentioned before is the special case where b=1.)
  2. Given n axb memories, build a single naxb memory by adding log n address wires, decoding them to activate exactly one of the axb memories, fanning out the original log a address bits, and multiplexing together the outputs from the corresponding words in the different memory units.
Here are schematic pictures of the combinations:

[figures: schematic pictures of the two combinations]
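
Both combinations can be sketched in a few lines of Python (the Mem class and function names are invented for illustration; writes fan out in the same patterns as the reads shown here):

  class Mem:
      """An axb memory: a addresses, b bits at each address."""
      def __init__(self, a, b):
          self.a, self.b = a, b
          self.words = [[0] * b for _ in range(a)]
      def read(self, addr):
          return self.words[addr]
      def write(self, addr, bits):
          self.words[addr] = list(bits)

  def read_wider(mems, addr):
      """n axb units as one axnb unit: fan out addr, bundle the outputs."""
      bits = []
      for m in mems:
          bits += m.read(addr)
      return bits

  def read_taller(mems, addr):
      """n axb units as one naxb unit: extra address bits pick the unit."""
      unit, offset = divmod(addr, mems[0].a)   # decode high bits, fan out low bits
      return mems[unit].read(offset)

  banks = [Mem(4, 2) for _ in range(2)]
  banks[1].write(3, [1, 0])
  assert read_taller(banks, 7) == [1, 0]       # 7 = unit 1, offset 3
  assert read_wider(banks, 3) == [0, 0, 1, 0]  # bank 0's word ++ bank 1's word
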
We may also convert an axnb memory to naxb, by using the first log n address bits to multiplex segments of the nb-bit words. The tricky part is writing: the unselected portion of the nb-bit word must be rewritten as it was before.
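
A sketch of that read-modify-write, reusing the invented Mem class from above, with the first log n address bits selecting the segment:

  def read_segment(mem, addr, n):
      """View an axnb Mem as naxb: the first log n address bits pick a
      segment of the wide word, the remaining bits pick the word."""
      seg, row = divmod(addr, mem.a)
      b = mem.b // n
      return mem.read(row)[seg * b:(seg + 1) * b]

  def write_segment(mem, addr, bits, n):
      """The tricky part: read the whole wide word, splice in the new
      segment, and rewrite the unselected segments as they were before."""
      seg, row = divmod(addr, mem.a)
      b = mem.b // n
      word = list(mem.read(row))
      word[seg * b:(seg + 1) * b] = bits
      mem.write(row, word)

  wide = Mem(4, 6)                        # a=4 words of n*b = 3*2 bits
  write_segment(wide, 9, [1, 1], n=3)     # address 9 = segment 2, row 1
  assert read_segment(wide, 9, n=3) == [1, 1]
  assert wide.read(1) == [0, 0, 0, 0, 1, 1]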

Processor: datapath and control

Datapath example: adder

Integers are almost universally encoded in 2's complement rather than in other integer encodings such as sign-magnitude or 1's complement, because the same adder circuit then serves both signed and unsigned operands.
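
A small worked example, with Python used as a calculator:

  def twos_complement(x, bits=8):
      """Encode a (possibly negative) integer in bits-bit 2's complement."""
      return x & ((1 << bits) - 1)          # wrap into the unsigned range

  def from_twos_complement(u, bits=8):
      """Decode: the top bit carries weight -2**(bits-1)."""
      sign = u & (1 << (bits - 1))
      return u - (sign << 1)

  assert twos_complement(-5) == 0b11111011
  assert from_twos_complement(0b11111011) == -5
  # The same adder works for signed and unsigned operands:
  assert twos_complement(twos_complement(-5) + twos_complement(7)) == 2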

Control using decoders, multiplexors, finite-state machines

Selecting a computation step is logically the same as selecting a memory location, but more complicated geometrically because of the irregular structure of the processor. Decoders and multiplexors selectively activate some of the computational datapath, and choose between the results of redundantly activated portions. In particular, the Arithmetic/Logic Unit (ALU) is often organized in bitplanes, much like memory. For logical operations, the bitplanes are independent, but for addition, subtraction, and shift operations there is wiring between adjacent planes. In many cases, it is simplest to let the circuitry calculate all possible operations on the given operands, and use a multiplexor to select the desired one. Integer addition is the most complicated operation that is usually done in this bitplaned/multiplexed fashion. Integer multiplication, and all floating-point operations, usually require some sort of loop, often implemented through microcode.
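
As a sketch (invented operation codes, Python in place of gates): one ALU bitplane computes AND, OR, and SUM in parallel, a multiplexor selects one of the results, and only the adder uses the carry wiring between adjacent planes:

  def alu_slice(a, b, carry_in, op):
      """One ALU bitplane: compute all operations, multiplex the answer."""
      and_out = a & b
      or_out = a | b
      sum_out = a ^ b ^ carry_in
      carry_out = (a & b) | (carry_in & (a ^ b))
      result = [and_out, or_out, sum_out][op]    # the output multiplexor
      return result, carry_out

  # ripple an addition (op=2) through 4 planes, least significant bit first
  a_bits, b_bits, carry = [1, 0, 1, 0], [1, 1, 0, 0], 0    # 5 + 3
  total = []
  for a, b in zip(a_bits, b_bits):
      s, carry = alu_slice(a, b, carry, op=2)
      total.append(s)
  assert total == [0, 0, 0, 1]                             # 8, LSB first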

Integer addition doesn't require a controlled loop, but the need for a carry to propagate across all of the bits of the result slows down the circuit. Lots of tricks are used to improve this, including precharging (since dropping a voltage to ground is usually faster than raising it to VDD), carry-lookahead (calculate the nth carry directly from the input bits at positions 0 through n), carry-skip (apply carry-lookahead within blocks, and ripple between blocks), and carry-select (do each 1-bit add both with and without carry, then select the correct answer with a multiplexor).
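
To give the flavor of carry-select, here is a sketch using Python integers in place of gate-level detail: each block adds both ways in parallel, so the incoming carry merely selects between precomputed answers instead of rippling through every bit:

  def block_add(a, b, carry_in, width):
      """Plain ripple add of one block; returns (sum, carry_out)."""
      total = a + b + carry_in
      return total & ((1 << width) - 1), total >> width

  def carry_select_add(a, b, width=16, block=4):
      """Each block computes its sum for carry_in = 0 and 1 in parallel;
      the real carry only selects between the precomputed answers."""
      mask = (1 << block) - 1
      carry, result = 0, 0
      for lo in range(0, width, block):
          a_blk, b_blk = (a >> lo) & mask, (b >> lo) & mask
          sum0, c0 = block_add(a_blk, b_blk, 0, block)   # speculate: carry 0
          sum1, c1 = block_add(a_blk, b_blk, 1, block)   # speculate: carry 1
          s, carry = (sum1, c1) if carry else (sum0, c0) # the selecting mux
          result |= s << lo
      return result

  assert carry_select_add(40000, 30000) == (40000 + 30000) & 0xFFFF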

The control circuitry in the CPU is typically the least regular circuitry in the computer. Functionally, control circuitry implements a finite-state machine (called a ``finite automaton'' in more theoretical classes). Circuitry for finite-state machines can be regularized somewhat by using the form of a combinational transition circuit providing feedback from a state register to itself. The transition circuit may be implemented by a PLA.
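
A minimal sketch of that form: a transition table stands in for the PLA, and the loop variable plays the role of the state register being fed back to itself. The toy controller here asserts an (invented) enable signal only after seeing input 1 twice in a row:

  # Transition table standing in for the PLA: (state, input) -> (state, outputs)
  TRANSITIONS = {
      ("idle", 0):  ("idle",  {"enable": 0}),
      ("idle", 1):  ("seen1", {"enable": 0}),
      ("seen1", 0): ("idle",  {"enable": 0}),
      ("seen1", 1): ("seen1", {"enable": 1}),
  }

  def run_fsm(inputs, state="idle"):
      outputs = []
      for bit in inputs:
          # combinational logic computes next state and control signals...
          state, ctrl = TRANSITIONS[(state, bit)]
          # ...and the state register latches the feedback at the clock edge
          outputs.append(ctrl["enable"])
      return outputs

  assert run_fsm([1, 1, 0, 1, 1, 1]) == [0, 1, 0, 0, 1, 1]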

Appendix A of the text has a lot more detail on computer arithmetic.

Microcode

From a logical point of view, a microcode architecture uses a more primitive computer on the CPU chip to implement the more sophisticated instructions of the real architecture. Microcode programs are stored in read-only memory, but are easier to change during design and production than detailed wiring. Notice that microcoding replaces relatively unstructured control wiring with the highly regular form of memory. Microcode programs usually manipulate the electronic control signals directly, rather than values in memory.
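
As a toy model (all the control-signal names here are invented): the microprogram is just a regular read-only memory of control words, stepped through by a micro program counter:

  # Each ROM word bundles hypothetical control signals with a next-address field.
  MICRO_ROM = [
      {"signals": {"mem_read": 1, "alu_op": None},  "next": 1},
      {"signals": {"mem_read": 0, "alu_op": "add"}, "next": 2},
      {"signals": {"mem_read": 0, "alu_op": None},  "next": None},  # end of routine
  ]

  def run_microprogram(rom, start=0):
      """The micro-sequencer: step a micro program counter through the ROM,
      yielding the control signals that drive the datapath directly."""
      upc = start
      while upc is not None:
          word = rom[upc]
          yield word["signals"]
          upc = word["next"]

  steps = list(run_microprogram(MICRO_ROM))
  assert steps[1]["alu_op"] == "add" and len(steps) == 3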


Michael J. O'Donnell
Last modified: Wed Jan 21 13:17:15 CST