Wednesday, August 12, 2015

History Of Computer Architecture

The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace describing the Analytical Engine. Two other early and important examples were:

    John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements; and
    Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945, which cited von Neumann's paper

The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson, Mohammad Usman Khan and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory. To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture”, a term that seemed more useful than “machine organization.”

Subsequently, Brooks, a Stretch designer, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing,

    Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.

Brooks went on to help develop the IBM System/360 (the ancestor of today’s IBM zSeries) line of computers, in which “architecture” became a noun defining “what the user needs to know”. Later, computer users came to use the term in many less explicit ways.

The earliest computer architectures were designed on paper and then directly built into the final hardware form. Later, computer architecture prototypes were physically built in the form of a Transistor–Transistor Logic (TTL) computer, such as the prototypes of the 6800 and the PA-RISC, then tested and tweaked before committing to the final hardware form. Since the 1990s, new computer architectures are typically "built", tested, and tweaked inside some other computer architecture in a computer architecture simulator, inside an FPGA as a soft microprocessor, or both, before committing to the final hardware form.
Subcategories

The discipline of computer architecture has three main subcategories:

    Instruction Set Architecture, or ISA. The ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory addressing modes, processor registers, and data formats. (A minimal decoding sketch follows this list.)
    Microarchitecture, or computer organization, describes how a particular processor will implement the ISA. The size of a computer's CPU cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
    System Design includes all of the other hardware components within a computing system. These include:
        Data processing other than the CPU, such as direct memory access (DMA)
        Other issues such as virtualization, multiprocessing and software features.
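
To make the distinction concrete, here is a minimal sketch in C of the kind of instruction decoding an implementation performs. The 32-bit field layout (an 8-bit opcode and three 4-bit register indexes) is invented for illustration; the point is that the ISA fixes such an encoding, while the microarchitecture decides how the decoded instruction is executed.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 32-bit instruction word (not any real ISA):
       bits 31-24 opcode, 23-20 dest reg, 19-16 src reg 1, 15-12 src reg 2. */
    typedef struct { unsigned opcode, rd, rs1, rs2; } Instruction;

    static Instruction decode(uint32_t word) {
        Instruction ins;
        ins.opcode = (word >> 24) & 0xFFu;  /* which operation */
        ins.rd     = (word >> 20) & 0xFu;   /* destination register */
        ins.rs1    = (word >> 16) & 0xFu;   /* first source register */
        ins.rs2    = (word >> 12) & 0xFu;   /* second source register */
        return ins;
    }

    int main(void) {
        Instruction ins = decode(0x01234000u);
        printf("opcode=%u rd=%u rs1=%u rs2=%u\n",
               ins.opcode, ins.rd, ins.rs1, ins.rs2);
        return 0;
    }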

Some architects at companies such as Intel and AMD use finer distinctions:

    Macroarchitecture: architectural layers more abstract than microarchitecture
    Instruction Set Architecture (ISA): as defined above
        Assembly ISA: a smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations
    Programmer Visible Macroarchitecture: higher-level language tools such as compilers may define a consistent interface or contract to programmers using them, abstracting differences between underlying ISAs, UISAs, and microarchitectures. For example, the C, C++, and Java standards each define a different Programmer Visible Macroarchitecture.
    UISA (Microcode Instruction Set Architecture): a family of machines with different hardware-level microarchitectures may share a common microcode architecture, and hence a UISA.
    Pin Architecture: The hardware functions that a microprocessor should provide to a hardware platform, e.g., the x86 pins A20M, FERR/IGNNE or FLUSH. Also, messages that the processor should emit so that external caches can be invalidated (emptied). Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term "architecture" fits, because the functions must be provided for compatible systems, even if the detailed method changes.

The Roles
Definition

The purpose is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the expected performance, and is also highly reliable. To achieve this, many aspects must be considered, including instruction set design, functional organization, logic design, and implementation. The implementation involves integrated circuit design, packaging, power, and cooling. Optimizing the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.
Instruction set architecture
An instruction set architecture (ISA) is the interface between the computer's software and hardware; it can also be viewed as the programmer's view of the machine. Computers do not understand high-level languages, which have few, if any, language elements that translate directly into a machine's native opcodes. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate high-level languages, such as C, into instructions.
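
As a hedged illustration of that translation, a one-line C function might compile to a couple of machine instructions. The assembly in the comment below is a plausible RISC-style rendering, not the exact output of any particular compiler.

    /* High-level source... */
    int add(int a, int b) {
        return a + b;
        /* ...might be translated by a compiler to something like:
               add  r0, r0, r1   ; r0 = r0 + r1
               ret               ; return to caller
           (illustrative RISC-style code, not any real compiler's
           exact output) */
    }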

Besides instructions, the ISA defines items in the computer that are available to a program, e.g. data types, registers, addressing modes, and memory. Instructions locate operands with register indexes (or names) and memory addressing modes.
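
For instance, the effective location of an operand can be computed in several textbook ways. The C sketch below models three common addressing modes; the register file and memory sizes are invented for illustration.

    #include <stdint.h>

    uint32_t regs[16];    /* register file (size is illustrative) */
    uint8_t  mem[65536];  /* byte-addressed memory (illustrative) */

    /* Immediate: the operand is the constant encoded in the instruction. */
    uint32_t operand_immediate(uint32_t imm) { return imm; }

    /* Register direct: the operand lives in a register. */
    uint32_t operand_register(unsigned r) { return regs[r]; }

    /* Register indirect with displacement: address = register + offset. */
    uint8_t operand_displacement(unsigned r, int32_t offset) {
        return mem[(regs[r] + (uint32_t)offset) & 0xFFFFu];
    }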

The ISA of a computer is usually described in a small book or pamphlet, which describes how the instructions are encoded. Also, it may define short, vaguely mnemonic names for the instructions. The names can be recognized by a software development tool called an assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available, usually in debuggers, which are software programs used to isolate and correct malfunctions in binary computer programs.
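
The core of that translation is a table from mnemonic to opcode. The C sketch below is a toy assembler for an invented four-instruction ISA; real assemblers add operand parsing, symbol tables, and relocation on top of this idea.

    #include <stdio.h>
    #include <string.h>

    /* Toy mnemonic-to-opcode table for an invented ISA. */
    static const struct { const char *mnemonic; unsigned opcode; } table[] = {
        { "LOAD", 0x01 }, { "STORE", 0x02 }, { "ADD", 0x03 }, { "JMP", 0x04 },
    };

    /* Return the opcode for a mnemonic, or -1 if it is unknown. */
    static int assemble(const char *mnemonic) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].mnemonic, mnemonic) == 0)
                return (int)table[i].opcode;
        return -1;
    }

    int main(void) {
        printf("ADD -> opcode 0x%02X\n", (unsigned)assemble("ADD"));
        return 0;
    }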

ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (more operations can be better), cost of the computer to interpret the instructions (cheaper is better), speed of the computer (faster is better), and size of the code (smaller is better). For example, a single-instruction ISA (e.g., subtract and jump if zero, an approach actually used in the SSEM) is possible, inexpensive, and fast, but it is neither convenient nor helpful for making programs small. Memory organization defines how instructions interact with the memory, and also how different parts of memory interact with each other.

During design, emulation software can run programs written in a proposed instruction set. Modern emulators may measure time, energy consumption, and compiled code size to determine whether a particular instruction set architecture is meeting its goals.
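
As a minimal sketch of both points, the C program below emulates a "subleq" (subtract and branch if less than or equal to zero) machine, a classic single-instruction ISA, and counts executed instructions the way a design-time emulator might gather statistics. The hard-coded program and memory layout are invented for illustration.

    #include <stdio.h>

    /* subleq a, b, c:  mem[b] -= mem[a]; if (mem[b] <= 0) pc = c;
       otherwise pc advances to the next triple. A negative pc halts. */
    int main(void) {
        /* Toy program: computes mem[15] = mem[12] + mem[13] = 3 + 4. */
        int mem[] = { 12, 14, 3,        /* tmp -= A            */
                      13, 14, 6,        /* tmp -= B            */
                      14, 15, 9,        /* Z   -= tmp (= A+B)  */
                      16, 16, -1,       /* clear scratch, halt */
                      3, 4, 0, 0, 0 };  /* data: A, B, tmp, Z, scratch */
        long executed = 0;  /* emulator statistic */
        int pc = 0;
        while (pc >= 0) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
            executed++;
        }
        printf("result=%d, instructions executed=%ld\n", mem[15], executed);
        return 0;
    }
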
Computer organization

Computer organization helps optimize performance-based products. For example, software engineers need to know the processing ability of processors: they may need to optimize software in order to gain the most performance at the least expense, which can require quite detailed analysis of the computer's organization. In a multimedia decoder, for instance, the designers might need to arrange for most data to be processed in the fastest data path.
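
As a hedged illustration, the C program below sums the same two-dimensional array in row-major and column-major order. On typical cache-based organizations the row-major traversal is markedly faster because it matches how C lays out the array in memory, though the exact ratio depends on the machine.

    #include <stdio.h>
    #include <time.h>

    #define N 2048
    static int grid[N][N];  /* ~16 MB; zero-initialized */

    /* Row-major order walks memory sequentially (cache-friendly). */
    static long sum_rows(void) {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += grid[i][j];
        return s;
    }

    /* Column-major order jumps N*sizeof(int) bytes per access. */
    static long sum_cols(void) {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += grid[i][j];
        return s;
    }

    int main(void) {
        clock_t t0 = clock();
        long a = sum_rows();
        clock_t t1 = clock();
        long b = sum_cols();
        clock_t t2 = clock();
        printf("row-major %.3f s, column-major %.3f s (sums %ld, %ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
        return 0;
    }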

Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while supervisory software may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of virtualization needs virtual memory hardware so that the memory of different simulated computers can be kept separated. Computer organization and features also affect power consumption and processor cost.
Implementation

Once an instruction set and micro-architecture are described, a practical machine must be designed. This design process is called the implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering. Implementation can be further broken down into several (not fully distinct) steps:

    Logic Implementation designs the blocks defined in the micro-architecture at (primarily) the register-transfer level and logic gate level (see the gate-level sketch after this list).
    Circuit Implementation does transistor-level designs of basic elements (gates, multiplexers, latches etc.) as well as of some larger blocks (ALUs, caches etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.
    Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are routed.
    Design Validation tests the computer as a whole to see if it works in all situations and timings. Once implementation starts, the first design validations are simulations using logic emulators; however, these are usually too slow to run realistic programs. So, after making corrections, prototypes are constructed using field-programmable gate arrays (FPGAs). Many hobby projects stop at this stage. The final step is to test prototype integrated circuits, which may require several redesigns to fix problems.
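
In practice the logic-implementation step is written in a hardware description language such as Verilog or VHDL; as a language-neutral sketch, the C function below models a 1-bit full adder built only from AND, OR, and XOR gates, the kind of block that step describes at the gate level.

    #include <stdio.h>

    /* Gate-level model of a 1-bit full adder using only AND (&),
       OR (|), and XOR (^) on single-bit values. */
    static void full_adder(int a, int b, int cin, int *sum, int *cout) {
        int p = a ^ b;                 /* propagate signal */
        *sum  = p ^ cin;               /* sum bit */
        *cout = (a & b) | (p & cin);   /* carry-out bit */
    }

    int main(void) {
        int sum, cout;
        full_adder(1, 1, 0, &sum, &cout);       /* 1 + 1 + 0 */
        printf("sum=%d carry=%d\n", sum, cout); /* prints sum=0 carry=1 */
        return 0;
    }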

For CPUs, the entire implementation process is often called CPU design.
