Friday, December 25, 2009

Unit 1

Computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements (especially speeds and interconnections) and design implementations for the various parts of a computer - focusing largely on how the central processing unit (CPU) operates internally and accesses addresses in memory.


 

It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.


 

Computer hardware is the physical part of a computer, including the digital circuitry, as distinguished from the computer software that executes within the hardware.

A typical personal computer consists of a case or chassis (in tower or desktop form) and the following parts:


 


 

[Figures: internals of a typical personal computer; a typical motherboard]


 

Motherboard or system board, with slots for expansion cards and mounting points for internal parts

Central processing unit (CPU)

Computer fan - used to cool the CPU

Random Access Memory (RAM) - for program execution and short term data storage, so the computer does not have to take the time to access the hard drive to find the file(s) it requires. More RAM will normally contribute to a faster PC. RAM is almost always removable as it sits in slots in the motherboard, attached with small clips. The RAM slots are normally located next to the CPU socket.

Firmware - usually Basic Input-Output System (BIOS) based or, in newer systems, Extensible Firmware Interface (EFI) compliant


 

Buses - carry data and instructions from one place to another (for example, from the processor to main memory):

PCI

PCI-E

USB

HyperTransport

CSI (released in 2008 as Intel QuickPath Interconnect)

AGP (being phased out)

VLB (outdated)

ISA (outdated)

EISA (outdated)

MCA (outdated)


 

Power supply - a case that holds a transformer, voltage regulation circuitry, and (usually) a cooling fan

Storage controllers of IDE, SATA, SCSI or other type, that control hard disk, floppy disk, CD-ROM and other drives; the controllers sit directly on the motherboard (on-board) or on expansion cards


 

Video display controller that produces the output for the computer display. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E or AGP) in the form of a graphics card.


 

Computer bus controllers (parallel, serial, USB, FireWire) to connect the computer to external peripheral devices such as printers or scanners

Some type of removable media writer:


 

CD - the most common type of removable media, cheap but fragile.

CD-ROM Drive

CD Writer

DVD

DVD-ROM Drive

DVD Writer

DVD-RAM Drive

BD

BD-ROM Drive

BD Writer

Floppy disk

Zip drive

USB flash drive, also known as a pen drive or memory stick

Tape drive - mainly for backup and long-term storage

Internal storage - keeps data inside the computer for later use.

Hard disk - for medium-term storage of data.

Disk array controller

Sound card - translates signals from the system board into analog voltage levels, and has terminals to plug in speakers.

Networking - to connect the computer to the Internet and/or other computers

Modem - for dial-up connections

Network card - for DSL/Cable internet, and/or connecting to other computers.

Other peripherals

In addition, hardware can include external components of a computer system. The following are either standard or very common.


 



 

Input devices

Text input devices

Keyboard

Pointing devices

Mouse

Trackball

Gaming devices

Joystick

Gamepad

Game controller

Image, Video input devices

Image scanner

Webcam

Audio input devices

Microphone


 

Output devices

Image, Video output devices

Printer - a peripheral device that produces a hard copy (inkjet, laser).

Monitor - a device that takes video signals and displays them (CRT, LCD).

Audio output devices

Speakers - a device that converts analog audio signals into the equivalent air vibrations in order to make audible sound.

Headset - a device similar in function to computer speakers, used mainly so as not to disturb others nearby.


 

Computer software, consisting of programs, enables a computer to perform specific tasks, as opposed to its physical components (hardware) which can only do the tasks they are mechanically designed for. The term includes application software such as word processors which perform productive tasks for users, system software such as operating systems, which interface with hardware to run the necessary services for user-interfaces and applications, and middleware which controls and co-ordinates distributed systems.


 

A Central processing unit (CPU), or sometimes simply processor, is the component in a digital computer capable of executing a program (Knott 1974). It interprets computer program instructions and processes data. CPUs provide the fundamental digital computer trait of programmability, and are one of the necessary components found in computers of any era, along with primary storage and input/output facilities. A CPU that is manufactured as a single integrated circuit is usually known as a microprocessor.


 

CPU operation : The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. Discussed here are devices that conform to the common von Neumann architecture. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all von Neumann CPUs use in their operation: fetch, decode, execute, and writeback.


 


 

Diagram showing how one MIPS32 instruction is decoded. (MIPS Technologies 2005)

The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by the program counter (PC).

Processor registers:

IR - instruction register, holds the current instruction

PC - program counter, stores a number that identifies the current position in the program. In other words, the program counter keeps track of the CPU's place in the current program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units. Often the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned.


 

The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.
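To make the decode step concrete, here is a minimal Python sketch of extracting an opcode and an address field from an instruction word. The 20-bit layout (8-bit opcode plus 12-bit address) is borrowed from the IAS-style format described later in this unit; the function itself is only an illustration, not any real CPU's decoder.

    # Toy decoder for a 20-bit instruction word: bits 19..12 are the opcode,
    # bits 11..0 the memory address (an assumed format for illustration).

    def decode(instruction_word):
        opcode = (instruction_word >> 12) & 0xFF   # top 8 bits
        address = instruction_word & 0xFFF         # low 12 bits
        return opcode, address

    # ADD M(X) with X = 0x02A; the opcode comes from the sample table later on.
    word = (0b00000101 << 12) | 0x02A
    op, addr = decode(word)
    print(f"opcode={op:08b}  address={addr:#05x}")   # opcode=00000101  address=0x02a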


 


 

Block diagram of a simple CPU


 

FETCH → DECODE → EXECUTE


 

After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, an arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set.


 

The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs. Many instructions will also change the state of bits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.


 

After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously.
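The whole cycle can be sketched in a few lines of Python. The machine below is a toy: the (opcode, operand) instruction format, the opcode names, and the memory layout are all invented for this example, but the loop follows the fetch, decode, execute, and writeback steps just described, including a flags register and a conditional jump that overwrites the program counter.

    # Minimal fetch-decode-execute-writeback loop for a toy accumulator machine.
    # Instruction format (invented for illustration): (opcode, operand) tuples.

    memory = {
        0: ("LOAD", 100),    # acc <- mem[100]
        1: ("ADD", 101),     # acc <- acc + mem[101], sets the zero flag
        2: ("STORE", 102),   # mem[102] <- acc
        3: ("JUMPZ", 5),     # if zero flag set, pc <- 5 (conditional jump)
        4: ("HALT", None),
        5: ("HALT", None),
        100: 7, 101: 35, 102: 0,
    }

    pc, acc, zero_flag = 0, 0, False
    while True:
        opcode, operand = memory[pc]   # fetch (IR <- mem[PC])
        pc += 1                        # PC incremented by the instruction length
        if opcode == "LOAD":           # decode + execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc = acc + memory[operand]
            zero_flag = (acc == 0)     # the flags record the outcome
        elif opcode == "STORE":
            memory[operand] = acc      # writeback to main memory
        elif opcode == "JUMPZ":
            if zero_flag:
                pc = operand           # jump: overwrite the program counter
        elif opcode == "HALT":
            break

    print(memory[102])   # 42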

------------------------------

Control unit:


 

A control unit is the part of a CPU or other device that directs its operation. The outputs of the unit control the activity of the rest of the device. A control unit can be thought of as a finite state machine.


 

Operations:

  1. Issue control signals
  2. Control and regulate other devices
  3. Control data transfers
  4. Provide timing signals


 

Control units are now often implemented as a microprogram that is stored in a control store. Words of the microprogram are selected by a microsequencer, and the bits from those words directly control the different parts of the device, including the registers, arithmetic and logic units, instruction registers, buses, and off-chip input/output. In modern computers, each of these subsystems may have its own subsidiary controller, with the control unit acting as a supervisor.
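A rough sketch of the control-store idea follows: each microinstruction word is just a set of control-signal bits, and a microsequencer steps through the words for the current macro-instruction. The signal names and micro-programs here are invented for illustration, not taken from any real machine.

    # Toy control store: each micro-word is a set of control signals.
    # A microsequencer steps through the words for one macro-instruction.

    CONTROL_STORE = {
        "LOAD": [
            {"mem_read", "mar_from_operand"},   # drive the address, start a read
            {"acc_from_mdr"},                   # latch the fetched value into acc
        ],
        "ADD": [
            {"mem_read", "mar_from_operand"},
            {"alu_add", "acc_from_alu"},        # tell the ALU to add, write back
        ],
    }

    def run_microprogram(opcode):
        """Emit the control signals for one macro-instruction, cycle by cycle."""
        for step, signals in enumerate(CONTROL_STORE[opcode]):
            print(f"cycle {step}: assert {sorted(signals)}")

    run_microprogram("ADD")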


 

The control unit is the circuitry that controls the flow of information through the processor, and coordinates the activities of the other units within it. In a way, it is the "brain within the brain", as it controls what happens inside the processor, which in turn controls the rest of the PC.

The functions performed by the control unit vary greatly by the internal architecture of the CPU, since the control unit really implements this architecture. On a regular processor that executes x86 instructions natively, the control unit performs the tasks of fetching, decoding, managing execution and then storing results.

-----------------------------------

Arithmetic logic unit


 

A typical schematic symbol for an ALU: A & B are operands; R is the output; F is the input from the Control Unit; D is an output status


 

The arithmetic logic unit (ALU) is a digital circuit that performs arithmetic operations (addition, subtraction, etc.) and logic operations (exclusive-OR, AND, etc.) on two numbers. The ALU is a fundamental building block of the central processing unit of a computer.

Many types of electronic circuits need to perform some type of arithmetic operation, so even the circuit inside a digital watch has a tiny ALU that keeps adding 1 to the current time, keeps checking whether it should beep the timer, and so on.
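Mirroring the schematic symbol above, here is a minimal Python sketch of an ALU: A and B are the operands, F selects the function, the return value plays the role of R, and the flags play the role of the status output D. The 8-bit width and the function-code encoding are assumptions made for the example.

    # Toy 8-bit ALU: a, b are operands; f selects the function (codes invented);
    # returns the result plus status flags.

    WIDTH = 8
    MASK = (1 << WIDTH) - 1

    def alu(a, b, f):
        if f == 0:    r = a + b        # ADD
        elif f == 1:  r = a - b        # SUB
        elif f == 2:  r = a & b        # AND
        elif f == 3:  r = a ^ b        # XOR
        else:         raise ValueError("unknown function code")
        flags = {
            "carry": r > MASK or r < 0,    # result did not fit in 8 bits
            "zero": (r & MASK) == 0,
        }
        return r & MASK, flags

    result, status = alu(200, 100, 0)      # 300 overflows an 8-bit register
    print(result, status)                  # 44 {'carry': True, 'zero': False}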


 

Memory (storage devices)


 

Primary storage is directly connected to the central processing unit of the computer. It must be present for the CPU to function correctly. Primary storage typically consists of three kinds of storage:

Processor registers are internal to the central processing unit. Registers contain information that the arithmetic and logic unit needs to carry out the current instruction. They are technically the fastest of all forms of computer storage, being switching transistors integrated on the CPU's silicon chip, and functioning as electronic "flip-flops".

Cache memory is a special type of internal memory used by many central processing units to increase their performance or "throughput". Some of the information in the main memory is duplicated in the cache memory, which is slightly slower but of much greater capacity than the processor registers, and faster but much smaller than main memory. Multi-level cache memory is also commonly used—"primary cache" being smallest, fastest and closest to the processing device; "secondary cache" being larger and slower, but still faster and much smaller than main memory.

Main memory contains the programs that are currently being run and the data the programs are operating on. In modern computers, the main memory is the electronic solid-state random access memory. It is directly connected to the CPU via a "memory bus" and a "data bus". The arithmetic and logic unit can very quickly transfer information between a processor register and locations in main storage, also known as "memory addresses". The memory bus is also called an address bus or front side bus, and both buses are high-speed digital "superhighways". Access methods and speed are two of the fundamental technical differences between memory and mass storage devices. (Note that memory sizes and storage capacities will inevitably be exceeded with advances in technology over time.)
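To illustrate why duplicating main-memory contents in a cache pays off, here is a hedged sketch of a direct-mapped cache lookup. The number of lines, the one-word line size, and the backing main_memory list are all assumptions chosen to keep the example small.

    # Toy direct-mapped cache: 8 lines, one word per line (sizes are assumptions).
    # An address maps to line (address % 8); the tag records what is cached there.

    NUM_LINES = 8
    cache = [{"tag": None, "data": None} for _ in range(NUM_LINES)]
    main_memory = list(range(1000, 1064))   # pretend backing store
    hits = misses = 0

    def read(address):
        global hits, misses
        line = cache[address % NUM_LINES]
        if line["tag"] == address:          # hit: served from the fast cache
            hits += 1
        else:                               # miss: fetch from slow main memory
            misses += 1
            line["tag"], line["data"] = address, main_memory[address]
        return line["data"]

    for addr in [3, 3, 11, 3]:              # 11 % 8 == 3, so it evicts address 3
        read(addr)
    print(hits, misses)                     # 1 hit, 3 misses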

Secondary and off-line storage


 

ROM (read-only memory) holds the bootstrap loader - a program stored in ROM that starts the computer's operating software when power is turned on.


 

Secondary storage requires the computer to use its input/output channels to access the information, and is used for long-term storage of persistent information. However most computer operating systems also use secondary storage devices as virtual memory - to artificially increase the apparent amount of main memory in the computer. Secondary storage is also known as "mass storage". Secondary or mass storage is typically of much greater capacity than primary storage (main memory), but it is also much slower. In modern computers, hard disks are usually used for mass storage.

The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in thousand-millionths of a second, or nanoseconds. This illustrates the very significant speed difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, are typically even slower than hard disks, although their access speeds are likely to improve with advances in technology.

Therefore, the use of virtual memory, which is millions of times slower than "real" memory, significantly degrades the performance of any computer. Virtual memory is implemented by many operating systems using terms like swap file or "cache file". The main historical advantage of virtual memory was that it was much less expensive than real memory. That advantage is less relevant today, yet surprisingly most operating systems continue to implement it, despite the significant performance penalties.

Off-line storage is a system where the storage medium can be easily removed from the storage device. Off-line storage is used for data transfer and archival purposes. In modern computers, CDs, DVDs, memory cards, flash memory devices including "USB drives", floppy disks, Zip disks and magnetic tapes are commonly used for off-line mass storage purposes. "Hot-pluggable" USB hard disks are also available. Off-line storage devices used in the past include punched cards, microforms, and removable Winchester disk drums.


 

Computer memory architecture


 


 

Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads. Since the read/write head only covers a part of the surface, magnetic storage is sequential access and must seek, cycle or both. In modern computers, the magnetic surface will take these forms:

Magnetic disk

Floppy disk, used for off-line storage

Hard disk, used for secondary storage

Magnetic tape data storage, used for tertiary and off-line storage

In early computers, magnetic storage was also used for primary storage in a form of magnetic drum, or core memory, core rope memory, thin film memory, twistor memory or bubble memory. Also unlike today, magnetic tape was often used for secondary storage.

Semiconductor memory uses semiconductor-based integrated circuits to store information. A semiconductor memory chip may contain millions of tiny transistors or capacitors. Both volatile and non-volatile forms of semiconductor memory exist. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor memory or dynamic random access memory. Since the turn of the century, a type of non-volatile semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers.

Optical disc storage

Optical disc storage uses tiny pits etched on the surface of a circular disc to store information, and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile and sequential access. The following forms are currently in common use:

CD, CD-ROM, DVD: Read only storage, used for mass distribution of digital information (music, video, computer programs)

CD-R, DVD-R, DVD+R: Write once storage, used for tertiary and off-line storage

CD-RW, DVD-RW, DVD+RW, DVD-RAM


 

Memory Hierarchy


 


 

Memory system characteristics

Location: registers (processor), internal (main memory), external (auxiliary memory)

Capacity: word size (e.g., 40 bits), number of words

Unit of transfer: word (a collection of cells, each holding a 0 or 1), block, number of words

Access methods: sequential, random, direct, associative

Performance: access time, cycle time, transfer rate (see the worked example after this list)

Physical type: semiconductor, magnetic, optical

Physical characteristics: volatile/non-volatile, erasable/non-erasable

Throughput: the rate at which information can be read from or written to the storage. In computer storage, throughput is usually expressed in megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated.
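As a worked example of how these performance figures combine across two levels of the hierarchy - the hit ratio and both access times below are assumed values, not measurements:

    # Effective (average) access time for a two-level memory, assumed figures:
    # cache access 2 ns, main memory access 60 ns, hit ratio 95%.

    hit_ratio = 0.95
    t_cache_ns = 2
    t_memory_ns = 60

    # On a hit only the cache is touched; on a miss the slower level is consulted too.
    effective_ns = hit_ratio * t_cache_ns + (1 - hit_ratio) * (t_cache_ns + t_memory_ns)
    print(f"effective access time: {effective_ns:.1f} ns")   # 5.0 ns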


 

Von Neumann architecture

Key points:

  • Stored program (the program, produced by an assembler or compiler, is held in memory)
  • Instructions and data are both stored in the same memory
  • Arithmetic operations are performed by the arithmetic unit
  • Logic operations are performed by the logic unit
  • 1,000 storage locations, called words, of 40 bits each; each word can hold two 20-bit instructions (see the sketch after this list)
  • The instruction set includes 21 instructions
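Since each 40-bit word packs two 20-bit instructions, and each instruction is an 8-bit opcode plus a 12-bit address, unpacking a word is plain bit slicing. The sketch below uses the ADD and JUMP opcodes from the sample instruction table later in this unit; the helper names are invented.

    # Unpack one 40-bit IAS-style word into its two 20-bit instructions,
    # each an 8-bit opcode plus a 12-bit address.

    def unpack_word(word40):
        left = (word40 >> 20) & 0xFFFFF       # bits 39..20: left instruction
        right = word40 & 0xFFFFF              # bits 19..0: right instruction
        def split(instr20):
            return (instr20 >> 12) & 0xFF, instr20 & 0xFFF   # (opcode, address)
        return split(left), split(right)

    # Example: left = ADD M(0x02A), right = JUMP M(0x001) - opcodes taken from
    # the sample instruction table below.
    word = ((0b00000101 << 12 | 0x02A) << 20) | (0b00001101 << 12 | 0x001)
    print(unpack_word(word))   # ((5, 42), (13, 1))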


 


 


 


 


 

A computer that makes use of von Neumann architecture has five components: a control unit, an arithmetic-logic unit, an input/output device, a memory, and a bus that provides a data path connecting these components.


 

Design of the Von Neumann architecture


 

The "von Neumann" in von Neumann architecture refers to Hungarian-American mathematician John von Neumann (1903-1957). Von Neumann was initially interested in access to the fastest computers available (of which there were few) during World War II in order to perform complex computations for a variety of war-related problems. In 1944, Von Neumann became a consultant to the ENIAC (Electronic Numerical Integrator and Computer) project, which upon its completion in 1945 became the world's first general purpose, electronic computer.


 

The von Neumann architecture is a computer design model that uses a processing unit and a single separate storage structure to hold both instructions and data. It is named after mathematician and early computer scientist John von Neumann. Such a computer implements a universal Turing machine, and serves as the common "referential model" for specifying sequential architectures, in contrast with parallel architectures. The term "stored-program computer" is generally used to mean a computer of this design.


 

The von Neumann model, as used in a desktop computer, executes instructions sequentially.

Von Neumann computations are a class of computer programs ideally suited to sequential processing.


 

The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot be used as a word processor or to run video games. To change the program of such a machine, you have to re-wire, re-structure, or even re-design the machine. Indeed, the earliest computers were not so much "programmed" as they were "designed". "Reprogramming", when it was possible at all, was a very manual process, starting with flow charts and paper notes, followed by detailed engineering designs, and then the often-arduous process of implementing the physical changes.


 

The idea of the stored-program computer changed all that. By creating an instruction set architecture and detailing the computation as a series of instructions (the program), the machine becomes much more flexible. By treating those instructions in the same way as data, a stored-program machine can easily change the program, and can do so under program control.


 

The terms "von Neumann architecture" and "stored-program computer" are generally used interchangeably, and that usage is followed in this article. However, the Harvard architecture concept should be mentioned as a design which stores the program in an easily modifiable form, but not using the same storage as for general data.


 

A stored-program design also lets programs modify themselves while running. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions, which had to be done manually in early designs. This became less important when index registers and indirect addressing became customary features of machine architecture. Self-modifying code is deprecated today since it is hard to understand and debug, and modern processor pipelining and caching schemes make it inefficient.
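To see what "modifying the address portion of instructions" meant in practice, here is a toy sketch in the same invented (opcode, operand) format as the earlier fetch-decode-execute example: the program sums an array by repeatedly patching the address field of its own LOAD instruction. The opcodes are invented for illustration.

    # Toy illustration of self-modifying code: the program increments the
    # address field of its own LOAD instruction to step through an array.

    memory = {
        0: ("LOAD", 100),     # the address field of this instruction gets patched
        1: ("ADDACC", 200),   # memory[200] += acc (invented opcode)
        2: ("INCADDR", 0),    # add 1 to the address field of the instruction at 0
        3: ("LOOPIF", 103),   # jump back to 0 while instruction 0's address < 103
        4: ("HALT", None),
        100: 10, 101: 20, 102: 30, 200: 0,
    }

    pc, acc = 0, 0
    while True:
        opcode, operand = memory[pc]; pc += 1      # fetch
        if opcode == "LOAD":
            acc = memory[operand]
        elif opcode == "ADDACC":
            memory[operand] += acc
        elif opcode == "INCADDR":                  # the self-modifying step:
            op, addr = memory[operand]             # read an instruction as data,
            memory[operand] = (op, addr + 1)       # write it back with a new address
        elif opcode == "LOOPIF":
            if memory[0][1] < operand:             # keep looping while the patched
                pc = 0                             # address stays inside the array
        elif opcode == "HALT":
            break

    print(memory[200])   # 60 = 10 + 20 + 30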


 

There are drawbacks to the von Neumann design. Aside from the von Neumann bottleneck described below, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs, a malfunctioning program can damage itself, other programs, or the operating system, possibly leading to a crash. A buffer overflow is one very common example of such a malfunction. Memory protection and other forms of access control can help protect against both accidental and malicious program modification.


 

The term "von Neumann architecture" arose from mathematician John von Neumann's paper, First Draft of a Report on the EDVAC.[2] Dated June 30, 1945, it was an early written account of a general purpose stored-program computing machine (the EDVAC). However, while von Neumann's work was pioneering, the term von Neumann architecture does somewhat of an injustice to von Neumann's collaborators, contemporaries, and predecessors.


 

The idea of a stored-program computer existed at the Moore School of Electrical Engineering at the University of Pennsylvania before von Neumann even knew of the ENIAC's existence. The exact person who originated the idea there is unknown.


 

When the ENIAC was being designed, it was clear that reading instructions from punched cards or paper tape would not be fast enough, since the ENIAC was designed to execute instructions at a much higher rate. The ENIAC's program was thus wired into the design, and it had to be rewired for each new problem. It was clear that a better system was needed. The initial report on the proposed EDVAC was written during the time the ENIAC was being built, and contained the idea of the stored program, where instructions were stored in high-speed memory, so they could be quickly accessed for execution.


 

The separation between the CPU and memory leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between the CPU and memory compared to the amount of memory. In modern machines, throughput is much smaller than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continuously forced to wait for vital data to be transferred to or from memory. As CPU speed and memory size have increased much faster than the throughput between them, the bottleneck has become more of a problem.


 

The performance problem is reduced by a cache between CPU and main memory, and by the development of branch prediction algorithms. It is less clear whether the intellectual bottleneck that John Backus criticized in his 1977 Turing Award lecture has changed much since then. Backus's proposed solution has not had a major influence. Modern functional programming and object-oriented programming are much less geared towards pushing vast numbers of words back and forth than earlier languages like Fortran, but internally, that is still what computers spend much of their time doing.


 


 

Sample instructions:


 

Instruction type        Opcode      Symbolic representation    Description

Data transfer           00001010    LOAD MQ                    Transfer contents of one register to another

Unconditional branch    00001101    JUMP M(X,0:19)             Take next instruction from left half of M(X)

Conditional branch      00001111    JUMP +M(X,0:19)            As above, but only if the condition is met

Arithmetic              00000101    ADD M(X)                   Arithmetic addition

Address modify          00010010    STOR M(X,8:19)             Replace left address field of M(X)


 


 

Instruction fetch and execution
