1.1  GENERAL INTRODUCTION OF COMPUTER HARDWARE

We build computers to solve problems. Early computers solved mathematical and engineering problems, and later computers emphasized information processing for business applications. Today, computers also control machines as diverse as automobile engines, robots, and microwave ovens. A computer system solves a problem from any of these domains by accepting input, processing it, and producing output. Fig.1-1 illustrates the function of a computer system.

Fig.1-1  The three activities of a computer system

Computer systems consist of hardware and software. Hardware is the physical part of the system. Once designed, hardware is difficult and expensive to change. Software is the set of programs that instruct the hardware and is easier to modify than hardware. Computers are valuable because they are general-purpose machines that can solve many different kinds of problems, as opposed to special-purpose machines that can each solve only one kind of problem. Different problems can be solved with the same hardware by supplying the system with a different set of instructions, that is, with different software.

Every computer has four basic hardware components:

·    Input devices.

·    Output devices.

·    Main memory.

·    Central processing unit (CPU).

Fig.1-2 shows these components in a block diagram. The lines between the blocks represent the flow of information from one component to another on the bus, which is simply a group of wires connecting the components. Processing occurs in the CPU and main memory. Fig.1-2 shows the organization in which the components are connected to each other by a common bus; however, other configurations are possible as well.

Computer hardware is often classified by its relative physical size:

·    Small microcomputer.

·    Medium minicomputer.

·    Large mainframe.

Just the CPU of a mainframe often occupies an entire cabinet. Its input/output (I/O) devices and memory might fill an entire room. Microcomputers can be small enough to fit on a desk or in a briefcase. As technology advances, the amount of processing previously possible only on large machines becomes possible on smaller machines. Microcomputers now can do much of the work that only minicomputers or mainframes could do in the past.

Fig.1-2  Block diagram of the components of a computer system

The classification just described is based on physical size as opposed to storage size. A computer system user is generally more concerned with storage size, because that is a more direct indication of the amount of useful work the hardware can perform. Speed of computation is another characteristic that is important to the user. Generally speaking, users want a fast CPU and large amounts of storage, but physically small I/O devices and main memory.

When computer scientists study problems, therefore, they are concerned with space and time: the space necessary inside a computer system to store a problem and the time required to solve it. They commonly use the metric prefixes of Table 1-1 to express large or small quantities of space or time.

Table 1-1  Prefixes for powers of 10

Multiple    Prefix    Abbrev        Multiple    Prefix    Abbrev
10^9        giga-     G             10^-3       milli-    m
10^6        mega-     M             10^-6       micro-    μ
10^3        kilo-     K             10^-9       nano-     n

Example: Suppose it takes 4.5 microseconds, also written 4.5 μs, to transfer some information across the bus from one component to another. (a) How many seconds are required for the transfer? (b) How many transfers can take place during one minute?

(a) A time of 4.5 μs is, from Table 1-1, 4.5×10^-6 s, or 0.0000045 s. (b) Because there are 60 seconds in one minute, the number of times the transfer can occur is (60 s)/(0.0000045 s/transfer), or about 13 300 000 transfers. Note that since the original value was given with two significant figures, the result should not be given to more than two or three significant figures.
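As a quick check of this arithmetic, the short Python sketch below (added for illustration; it is not part of the original text, and the variable names are only illustrative) expresses the transfer time with the prefix of Table 1-1 and counts the transfers in one minute:

```python
# Quick check of the worked example: 4.5 microseconds per bus transfer.
MICRO = 1e-6                                   # the prefix micro- from Table 1-1

transfer_time_s = 4.5 * MICRO                  # (a) 4.5 us expressed in seconds
transfers_per_minute = 60 / transfer_time_s    # (b) transfers in one minute

print(f"(a) {transfer_time_s:.7f} s")                 # 0.0000045 s
print(f"(b) {transfers_per_minute:,.0f} transfers")   # about 13,333,333
```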

Table 1-1 shows that in the metric system the prefix kilo- is 1000 and mega- is 1 000 000. But in computer science, a kilo- is 2^10, or 1024. The difference between 1000 and 1024 is less than 3%, so you can think of a computer science kilo- as being about 1000 even though it is a little more. The same applies to mega- and giga-, as in Table 1-2. This time the approximation is a little worse, but for mega- it is still within 5%, as the short check after the table illustrates.

Table 1-2  Computer science values of the large prefixes

Prefix    Computer science value
giga-     2^30 = 1 073 741 824
mega-     2^20 = 1 048 576
kilo-     2^10 = 1 024
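The following sketch, added here for illustration, compares the computer science values of these prefixes with their metric counterparts and prints the relative error mentioned above:

```python
# Compare binary prefixes (2^10, 2^20, 2^30) with metric prefixes (10^3, 10^6, 10^9).
prefixes = {"kilo-": (2**10, 10**3),
            "mega-": (2**20, 10**6),
            "giga-": (2**30, 10**9)}

for name, (binary, metric) in prefixes.items():
    error = (binary - metric) / metric * 100
    print(f"{name:6s} {binary:>13,d}  vs {metric:>13,d}  (+{error:.1f}%)")
# kilo- is about 2.4% larger, mega- about 4.9%, giga- about 7.4%.
```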

KEYWORDS

computer                          计算机
input device                      输入设备
information processing            信息处理
output device                     输出设备
hardware                          硬件
main memory                       主存储器
software                          软件
central processing unit (CPU)     中央处理器
program                           程序
bus                               总线
general-purpose machine           通用(计算)机
microcomputer                     微型计算机
special-purpose machine           专用(计算)机
minicomputer                      小型计算机
instruction                       指令
mainframe                         主机, 特大型机
set of instructions               指令集, 指令系统

 

 

NOTES

1) hardware. The physical components of a computer, mainly including the CPU, main memory, the motherboard, the graphics card, the sound card, and the keyboard.

2) software. The programs that run on a computer, mainly including the operating system and application programs.

3) main memory. Often called simply "memory"; the principal storage device in a computer. Common related expressions include:

·    auxiliary memory: secondary storage.

·    buffer memory: buffer storage; also used for cache memory.

·    dynamic random access memory: DRAM.

·    dynamic memory: dynamic storage.

·    external memory: external (secondary) storage.

·    hypothetical memory: virtual memory.

4) bus. An electrical connection between the components of a computer system over which signals and power are transmitted. Information can be transferred on the bus from any of several source components to any of several destination components. A bus consists of a number of parallel wires that carry addresses, data, synchronization signals, control information, and power. Types of buses include:

·    address bus: a unidirectional bus that carries the digital information identifying a particular memory location or a particular input/output device.

·    data bus: the information path over which the processor, memory, and peripheral devices communicate.

·    control bus: a bus that carries the signals regulating the operation of the system.

5) microcomputer. A computer with relatively small storage, small main memory, and relatively low speed; most home and office computers are microcomputers.

6) minicomputer. A computer with larger main memory, larger storage, and higher speed than a microcomputer, typically used as a network server or for scientific and engineering work.

7) mainframe. A very powerful computer, usually used for especially complex and computation-intensive work such as weather simulation.

8) set of instructions (instruction set). The collection of all the instructions of a given computer.

EXERCISES

1) Multiple-choice questions.

(1) When we store a program into a computer, ______ is necessary.

A) space        B) time        C) input device        D) output device

(2) Early computers solved ______ problems.

A) control                          B) business applications

C) engineering                      D) mathematical

(3) We can use the prefix micro- to express ______.

A) a time metric                    B) a space metric

C) both time and space metrics      D) 10^-6

(4) We can say a bus is simply ______.

A) a group of wires                 B) a wire

C) an 8-bit bus                     D) a 16-bit bus

(5) A computer system user generally cares more about ______.

A) the physical size of the computer

B) storage size

C) speed of computation

D) efficiency of the computer

(6) According to physical size, we can classify computers into ______.

A) microcomputers                   B) minicomputers

C) mainframes                       D) supercomputers

(7) The prefix “mega-” used in computer science is ______.

A) larger than 10^6                 B) smaller than 10^6

C) equal to 2^20                    D) 1 048 576

(8) The basic hardware components of any computer include ______.

A) CPU                              B) main memory

C) input devices                    D) output devices

2) Fill in the blanks with appropriate words or phrases from the list following this exercise.

(1) A computer system solves a problem by ______.

(2) The amount of effective work of a computer can be indicated by ______ directly.

(3) Computer systems consist of ______.

(4) A computer that can solve only one kind of problem is a ______.

(5) A computer that can solve many different kinds of problems is a ______.

(6) ______ instruct the hardware.

(7) ______ is difficult and expensive to change.

(8) We usually show the computer components in a ______.

A) general-purpose machine

B) hardware

C) accepting input, processing problems, and producing output

D) block diagram

E) software

F) storage size

G) special-purpose machine

H) hardware and software

READING MATERIALS

1. Instruction pipeline

Pipeline processing can occur not only in the data stream but also in the instruction stream. An instruction pipeline reads consecutive instructions from memory while previous instructions are being executed in other segments. This causes the instruction fetch and execute phases to overlap and perform simultaneous operations. One possible digression associated with such a scheme is that an instruction may cause a branch out of sequence. In that case the pipeline must be emptied and all the instructions that have been read from memory after the branch instruction must be discarded.

Consider a computer with an instruction fetch unit and an instruction execution unit designed to provide a two-segment pipeline. The instruction fetch segment can be implemented by means of a first-in, first-out (FIFO) buffer. This is a type of unit that forms a queue rather than a stack. Whenever the execution unit is not using memory, the control increments the program counter and uses its address value to read consecutive instructions from memory. The instructions are inserted into the FIFO buffer so that they can be executed on a first-in, first-out basis. Thus an instruction stream can be placed in a queue, waiting for decoding and processing by the execution segment. The instruction stream queuing mechanism provides an efficient way of reducing the average access time to memory for reading instructions. Whenever there is space in the FIFO buffer, the control unit initiates the next instruction fetch phase. The buffer acts as a queue from which control then extracts the instructions for the execution unit.
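To make the two-segment scheme concrete, the following minimal Python sketch (not from the original text; the instruction list and buffer size are illustrative assumptions) shows a fetch unit filling a FIFO queue while an execute unit drains it on a first-in, first-out basis:

```python
from collections import deque

# A minimal sketch of a two-segment instruction pipeline:
# the fetch segment fills a FIFO buffer while the execute
# segment drains it on a first-in, first-out basis.
program = ["LOAD R1", "ADD R2", "STORE R3", "JMP 0", "NOP"]  # illustrative instructions
fifo = deque(maxlen=2)        # instruction FIFO buffer with room for 2 entries
pc = 0                        # program counter

while pc < len(program) or fifo:
    # Fetch segment: fill the FIFO buffer whenever there is space.
    while pc < len(program) and len(fifo) < fifo.maxlen:
        fifo.append(program[pc])
        pc += 1
    # Execute segment: take the oldest instruction from the queue.
    print("executing", fifo.popleft())
```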

Computers with complex instructions require other phases in addition to fetch and execute to process an instruction completely. In the most general case, the computer needs to process each instruction with the following sequence of steps.

·    Fetch the instruction from memory.

·    Decode the instruction.

·    Calculate the effective address.

·    Fetch the operands from memory.

·    Execute the instruction.

·    Store the result in the proper place.

There are certain difficulties that will prevent the instruction pipeline from operating at its maximum rate. Different segments may take different times to operate on the incoming information. Some segments are skipped for certain operations. For example, a register mode instruction does not need an effective address calculation. Two or more segments may require memory access at the same time, causing one segment to wait until another is finished with the memory. Memory access conflicts are sometimes resolved by using two memory buses for accessing instructions and data in separate modules. In this way, an instruction word and a data word can be read simultaneously from two different modules.

The design of an instruction pipeline will be most efficient if the instruction cycle is divided into segments of equal duration. The time that each step takes to fulfill its function depends on the instruction and the way it is executed.
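As a rough illustration of why equal segment durations matter, the sketch below (with assumed, purely illustrative segment times that are not from the text) shows that the pipeline clock period is set by the slowest segment:

```python
# Illustrative segment durations in nanoseconds for a six-step instruction cycle.
segments_ns = {"fetch": 20, "decode": 10, "address": 15, "operand": 20, "execute": 25, "store": 15}

total = sum(segments_ns.values())   # time for one instruction without pipelining
clock = max(segments_ns.values())   # pipeline clock period = slowest segment

n = 1000                            # number of instructions
unpipelined = n * total
pipelined = (len(segments_ns) + n - 1) * clock   # fill the pipeline, then one result per clock

print(f"speedup for {n} instructions: {unpipelined / pipelined:.2f}")
# With equal segment times the speedup would approach the number of segments (6).
```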

2. Supercomputers

A commercial computer with vector instructions and pipelined floating-point arithmetic operations is referred to as a supercomputer. Supercomputers are very powerful, high-performance machines used mostly for scientific computations. To speed up the operation, the components are packed tightly together to minimize the distance that the electronic signals have to travel. Supercomputers also use special techniques for removing the heat from circuits to prevent them from burning up because of their close proximity.

The instruction set of supercomputers contains the standard data transfer, data manipulation, and program control instructions of conventional computers. This is augmented by instructions that process vectors and combinations of scalars and vectors. A supercomputer is a computer system best known for its high computational speed, fast and large memory systems, and the extensive use of parallel processing. It is equipped with multiple functional units and each unit has its own pipeline configuration. Although the supercomputer is capable of general-purpose applications found in all other computers, it is specifically optimized for the type of numerical calculations involving vectors and matrices of floating-point numbers.

Supercomputers are not suitable for normal processing of a typical computer installation. They are limited in their use to a number of scientific applications, such as numerical weather forecasting, seismic wave analysis, and space research. They have limited use and limited market because of their high price.

The first supercomputer, developed in 1976, was the Cray-1. It uses vector processing with 12 distinct functional units in parallel. Each functional unit is segmented to process the incoming data through a pipeline. All the functional units can operate concurrently with operands stored in the large number of registers (over 150) in the CPU. A floating-point operation can be performed on two sets of 64-bit operands during one clock cycle of 12.5 ns. This gives a rate of 80 megaflops during the time that the data are processed through the pipeline. It has a memory capacity of 4 million 64-bit words. The memory is divided into 16 banks, with each bank having a 50 ns access time. This means that when all 16 banks are accessed simultaneously, the memory transfer rate is 320 million words per second. Cray Research extended its supercomputer to a multiprocessor configuration called the Cray X-MP and Cray Y-MP. The newer Cray-2 supercomputer is 12 times more powerful than the Cray-1 in vector processing mode.
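The quoted figures follow directly from the clock period and the memory-bank timing; the short check below, added here for illustration, reproduces them:

```python
# Reproduce the Cray-1 performance figures quoted above.
clock_s = 12.5e-9                 # one clock cycle: 12.5 ns
flops = 1 / clock_s               # one floating-point result per cycle
print(f"{flops / 1e6:.0f} megaflops")                         # 80 megaflops

banks = 16
bank_access_s = 50e-9             # 50 ns access time per memory bank
words_per_s = banks / bank_access_s
print(f"{words_per_s / 1e6:.0f} million words per second")    # 320 million words per second
```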

3. The Development of Computer Technology

Whoever you are, a scientist or an apprentice, a farmer or a successful scholar, and whether you are diligent or lazy, old or young, in modern work, study, and life you always need your honest friends: computers.

The first electronic computers were built in the 1940s. By the early 1970s, they were in common use in large businesses, government, and the military. The largest computers (like the ENIAC, the Electronic Numerical Integrator and Computer) were called mainframes and typically cost more than a million dollars. Designed for use by a major company or a government installation, they were housed in a large room and required special electrical cabling and air conditioning.

In the late 1960s and early 1970s, engineers made great strides in reducing the size of electronic components. They developed the semiconductor chip, which was about the size of a fingernail and could contain hundreds of transistors. Semiconductor chips enabled engineers to miniaturize the circuits contained in all electronic devices. Most importantly, this produced a new generation of mainframes and minicomputers with increased capability, greater speed, and smaller size.

In the early 1970s, semiconductor technology progressed to the point where the circuits for the “brain” of a computer (the central processing unit, or CPU) could be manufactured on a single semiconductor chip. These miniaturized processors were called microprocessors, and were manufactured by corporations such as Intel and Motorola.

By the mid-1970s, several such microcomputers were available to consumers. The first microcomputers were sold in the form of kits, designed for electronic hobbyists. In order for microcomputers to become problem-solving tools, a number of hurdles needed to be overcome. The first was to simplify programming for the machines. One step in this direction was taken by a young Harvard drop-out named Bill Gates, who wrote a version of the programming language BASIC for one of the earliest microcomputers. BASIC had been introduced at Dartmouth College in the mid-1960s by John Kemeny and Thomas Kurtz, and it was already a popular programming language on mainframe computers. Gates founded a computer company called Microsoft, which has become one of the major producers of software for microcomputers.

In 1977, Steven Jobs and Stephen Wozniak, two microcomputer enthusiasts working in a garage, designed their own microcomputer. This was to be named the Apple, and their fledgling business was to become the Apple Computer Corporation. Business grew at an unprecedented rate; in no time, Apple was selling hundreds and then thousands of machines per month.

One reason behind Apple's success was the availability of a number of useful application programs. The most important of these was the spreadsheet VisiCalc, which allowed accountants and financial planners to automate many of the calculations that they were accustomed to doing on adding machines, or with pencil and paper. Hours of calculations were thus completed in a matter of seconds. Such raw power did much to convince people that microcomputers were real problem-solving tools, not toys.

At about the same time as the introduction of the Apple II, a number of other microcomputers appeared on the market. One of the most popular was Tandy Corporation's TRS-80. Apple and Tandy were the two largest manufacturers, each with about a 25 percent share of the market.

Early microcomputer users banded together into groups to exchange ideas and to share solutions to problems. A strong spirit of adventure encouraged users to feel they were participating in a major intellectual turning point in computer use. Part of the excitement was created by the unusual mixture of people who participated. In addition to computer scientists and engineers, physicians, business people, and students became microcomputer enthusiasts, at work as well as at home. All were interested in the same goal: using microcomputers to solve problems.

Many application packages began to appear around 1980. The first generation of programs for word processing, data management, spreadsheets, and communication allowed novice users to experience the power of microcomputing.

However, most corporations underestimated the significance of bringing computing power down to the level of the individual user. This view abruptly changed in 1981, when International Business Machines (IBM), the largest computer company in the world, introduced its own microcomputer, dubbed the IBM PC (PC being the abbreviation for personal computer). The fact that IBM, a company of such corporate prestige, would enter this market convinced businesses that the microcomputer was more than a passing fad. Within a short time, the microcomputer was recognized as a productivity tool to be used by workers at all levels to process, store, retrieve, and analyze information. Almost every business could find a legitimate place for the microcomputer.

Today there are also lightweight notebook computers, or portable computers, designed to be moved easily.

4. Number System

A number system of base, or radix, r is a system that uses distinct symbols for r digits. Numbers are represented by a string of digit symbols. To determine the quantity that the number represents, it is necessary to multiply each digit by an integer power of r and then form the sum of all weighted digits. For example, the decimal number system in everyday use employs the radix 10 system. The 10 symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The string of digits 123.5 is interpreted to represent the quantity

1×10^2 + 2×10^1 + 3×10^0 + 5×10^-1

That is, 1 hundred, plus 2 tens, plus 3 units, plus 5 tenths. Every decimal number can be similarly interpreted to find the quantity it represents.

The binary number system uses the radix 2. The two digit symbols are 0 and 1. The string of digits 101101 is interpreted to represent the quantity

1×2^5 + 0×2^4 + 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 45

To distinguish between numbers of different radices, the digits will be enclosed in parentheses and the radix of the number inserted as a subscript. For example, to show the equality between decimal and binary forty-five we write (101101)_2 = (45)_10. Besides the decimal and binary number systems, the octal and hexadecimal systems are important in digital computer work. The eight symbols of the octal system are 0, 1, 2, 3, 4, 5, 6, and 7. The 16 symbols of the hexadecimal system are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. The last six symbols are, unfortunately, identical to letters of the alphabet and can cause confusion at times. However, this is the convention that has been adopted. When used to represent hexadecimal digits, the symbols A, B, C, D, E, F correspond to the decimal numbers 10, 11, 12, 13, 14, 15, respectively.
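Python's built-in int accepts a radix argument, which offers a quick way to check the weighted-digit interpretation described above; the following sketch (added for illustration) evaluates the two forty-five examples from the text:

```python
# Interpret digit strings in different radices (Horner's form of the weighted-digit sum)
# and cross-check with Python's built-in conversion.
def to_decimal(digits: str, radix: int) -> int:
    value = 0
    for d in digits:
        value = value * radix + int(d, 16)   # int(d, 16) maps '0'-'9' and 'A'-'F' to 0-15
    return value

print(to_decimal("101101", 2), int("101101", 2))   # 45 45
print(to_decimal("2D", 16), int("2D", 16))         # 45 45
```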

A number in radix r can be converted to the familiar decimal system by forming the sum of the weighted digits. For example, octal 123.5 is converted to decimal as follows:

(123.5)_8 = 1×8^2 + 2×8^1 + 3×8^0 + 5×8^-1 = (83.625)_10

The equivalent decimal number of hexadecimal 2D is obtained from the following calculation:

(2D)_16 = 2×16^1 + 13×16^0 = (45)_10

Conversion from decimal to its equivalent representation in the radix r system is carried out by separating the number into its integer and fraction parts and converting each part separately. The conversion of a decimal integer into a base r representation is done by successive divisions by r and accumulation of the remainders. The conversion of a decimal fraction to radix r representation is accomplished by successive multiplications by r and accumulation of the integer digits so obtained.

The conversion of decimal 38.125 into binary is done by first separating it into its integer part 38 and fraction part 0.125. The integer part is converted by dividing 38 by r = 2 to give an integer quotient of 19 and a remainder of 0. The quotient is again divided by 2 to give a new quotient and remainder. This process is repeated until the integer quotient becomes 0. The coefficients of the binary number are obtained from the remainders, with the first remainder giving the low-order bit of the converted binary number.

The fraction part is converted by multiplying it by r = 2 to give an integer and a fraction. The new fraction is multiplied again by 2 to give a new integer and a new fraction. This process is repeated until the fraction part becomes 0 or until the number of digits obtained gives the required accuracy. The coefficients of the binary fraction are obtained from the integer digits with the first integer computed being the digit to be placed next to the binary point. Finally, the two parts are combined to give the total required conversion.
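A small Python sketch of this procedure (added for illustration; the helper functions are not part of the original text) converts the integer part by successive divisions and the fraction part by successive multiplications, reproducing the 38.125 example:

```python
def int_to_radix(n: int, r: int) -> str:
    """Convert a non-negative decimal integer by successive divisions by r."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, r)
        digits.append("0123456789ABCDEF"[remainder])   # first remainder is the low-order digit
    return "".join(reversed(digits))

def frac_to_radix(f: float, r: int, places: int = 6) -> str:
    """Convert a decimal fraction by successive multiplications by r."""
    digits = []
    while f > 0 and len(digits) < places:
        f *= r
        integer_part = int(f)
        digits.append("0123456789ABCDEF"[integer_part])  # first digit goes next to the radix point
        f -= integer_part
    return "".join(digits)

# 38.125 in binary: integer part 38 -> 100110, fraction part 0.125 -> 001
print(int_to_radix(38, 2) + "." + frac_to_radix(0.125, 2))   # 100110.001
```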

Binary codes for decimal digits require a minimum of four bits. Numerous different codes can be formulated by arranging four or more bits in 10 distinct possible combinations. A few possibilities are shown in Table 1-3.

Table 1-3  Three Different Binary Codes for the Decimal Digits

Decimal Digit       8421 BCD    2421      Excess-3
0                   0000        0000      0011
1                   0001        0001      0100
2                   0010        0010      0101
3                   0011        0011      0110
4                   0100        0100      0111
5                   0101        1011      1000
6                   0110        1100      1001
7                   0111        1101      1010
8                   1000        1110      1011
9                   1001        1111      1100

Unused bit          1010        0101      0000
combinations        1011        0110      0001
                    1100        0111      0010
                    1101        1000      1101
                    1110        1001      1110
                    1111        1010      1111

The BCD (binary-coded decimal) has been introduced before. It uses a straight assignment of the binary equivalent of the digit. The six unused bit combinations listed have no meaning when BCD is used, just as the letter H has no meaning when decimal digit symbols are written down. For example, saying that 0111 1110 is a decimal number in BCD is like saying that 7H is a decimal number in the conventional symbol designation. Both cases contain an invalid symbol and therefore designate a meaningless number.

One disadvantage of using BCD is the difficulty encountered when the 9's complement of the number is to be computed. On the other hand, the 9's complement is easily obtained with the 2421 and the excess-3 codes listed in Table 1-3. These two codes have a self-complementing property which means that the 9's complement of a decimal number, when represented in one of these codes, is easily obtained by changing 1's to 0's and 0's to 1's. This property is useful when arithmetic operations are done in signed-complement representation.
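The self-complementing property is easy to verify: complementing every bit of the excess-3 code for a digit d yields the code for 9 - d. The short check below, added for illustration, uses the excess-3 column of Table 1-3:

```python
# Verify the self-complementing property of the excess-3 code from Table 1-3:
# flipping every bit of the code for digit d yields the code for 9 - d.
excess3 = ["0011", "0100", "0101", "0110", "0111",
           "1000", "1001", "1010", "1011", "1100"]   # codes for digits 0..9

for d, code in enumerate(excess3):
    flipped = "".join("1" if b == "0" else "0" for b in code)
    assert flipped == excess3[9 - d], (d, code, flipped)
print("excess-3 is self-complementing")
```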

The 2421 is an example of a weighted code. In a weighted code, the bits are multiplied by the weights indicated, and the sum of the weighted bits gives the decimal digit. For example, the bit combination 1101, when weighted by the respective digits 2421, gives the decimal equivalent of 2×1+4×1+2×0+1×1 = 7. The BCD code can be assigned the weights 8421, and for this reason it is sometimes called the 8421 code.
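Decoding a weighted code is just a sum of weight-bit products, as the following illustrative sketch shows for the 2421 and 8421 examples above:

```python
# Decode a weighted binary code: multiply each bit by its weight and sum.
def decode_weighted(bits: str, weights: list[int]) -> int:
    return sum(w * int(b) for w, b in zip(weights, bits))

print(decode_weighted("1101", [2, 4, 2, 1]))   # 7, as in the example above
print(decode_weighted("1001", [8, 4, 2, 1]))   # 9 in 8421 BCD
```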

The excess-3 code is a decimal code that has been used in older computers. This is an unweighted code. Its binary code assignment is obtained from the corresponding BCD equivalent binary number after the addition of binary 3.