Monday, March 22, 2010
Heat and Cooling Problems
Heat can be a problem in any high-performance system. The higher-speed processors normally consume more power and therefore generate more heat. The processor is usually the single most power-hungry chip in a system, and in most situations, the fan inside your computer case might not be capable of handling the load without some help.
Heat Sinks
To cool a system in which processor heat is a problem, you can buy (for less than $5, in most cases) a special attachment for the CPU chip called a heat sink, which draws heat away from the CPU chip. Many applications may need only a larger standard heat sink with additional or longer fins for a larger cooling area. Several heat-sink manufacturers are listed in the Vendor List, on the CD.
A heat sink works like the radiator in your car, pulling heat away from the engine. In a similar fashion, the heat sink conducts heat away from the processor so that it can be vented out of the system. It does this by using a thermal conductor (usually metal) to carry heat away from the processor into fins that expose a high amount of surface area to moving air. This allows the air to be heated, thus cooling the heat sink and the processor as well. Just like the radiator in your car, the heat sink depends on airflow. With no moving air, a heat sink is incapable of radiating the heat away. To keep the engine in your car from overheating when the car is not moving, auto engineers incorporate a fan. Likewise, there is always a fan somewhere inside your PC helping to move air across the heat sink and vent it out of the system. Sometimes the fan included in the power supply is enough; other times, an additional fan must be added to the case, or even placed directly over the processor, to provide the necessary level of cooling.
The heat sink is clipped or glued to the processor. A variety of heat sinks and attachment methods exist. Figure 3.25 shows various passive heat sinks and attachment methods.
Figure 3.25 Passive heat sinks for socketed processors showing various attachment methods.
TIP
According to data from Intel, heat sink clips are the number-two destroyer of motherboards (screwdrivers are number one). When installing or removing a heat sink that is clipped on, make sure you don't scrape the surface of the motherboard. In most cases, the clips hook over protrusions in the socket, and when installing or removing the clips, it is very easy to scratch or scrape the surface of the board right below where the clip ends attach. I like to place a thin sheet of plastic underneath the edge of the clip while I work, especially if there are board traces that can be scratched in the vicinity.
Heat sinks are rated for their cooling performance. Typically the ratings are expressed as a resistance to heat transfer, in degrees centigrade per watt (°C/W), where lower is better. Note that the resistance will vary according to the airflow across the heat sink. To ensure a constant flow of air and more consistent performance, many heat sinks incorporate fans so they don't have to rely on the airflow within the system. Heat sinks with fans are referred to as active heat sinks (see Figure 3.26). Active heat sinks have a power connection, often using a spare disk drive power connector, although most newer motherboards now have dedicated heat sink power connections right on the board.
Figure 3.26 Active heat sinks for socketed processors.
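As a rough back-of-the-envelope check of what a °C/W rating means, the short C program below estimates the processor temperature from the ambient case temperature, the power the chip dissipates, and the heat sink rating. All the numbers are made-up illustrations, not figures from any particular processor or heat sink.

#include <stdio.h>

/* Rough estimate of processor temperature from a heat sink's C/W rating.
 * All values are illustrative only, not taken from any datasheet. */
int main(void)
{
    double ambient_c    = 35.0;  /* air temperature inside the case, degrees C */
    double cpu_watts    = 30.0;  /* power dissipated by the processor */
    double sink_c_per_w =  0.8;  /* heat sink rating in degrees C per watt */

    /* Temperature rise above ambient is roughly power times thermal resistance. */
    double cpu_temp = ambient_c + cpu_watts * sink_c_per_w;
    printf("Estimated processor temperature: %.1f C\n", cpu_temp);
    return 0;
}

A lower °C/W figure means a smaller temperature rise for the same power, which is why lower ratings are better.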
Active heat sinks use a fan or some other electric cooling device, which requires power to run. The fan type is most common, but some use a Peltier cooling device, which is basically a solid-state refrigerator. Active heat sinks require power and normally plug into a disk drive power connector or special 12v fan power connectors on the motherboard. If you do get a fan-type heat sink, be aware that some on the market are of very poor quality. The bad ones have motors that use sleeve bearings, which freeze up after a very short life. I only recommend fans with ball-bearing motors, which will last about 10 times longer than the sleeve-bearing types. Of course, they cost more, but only about twice as much, which means you'll save money in the long run.
Figure 3.27 shows an active heat sink arrangement on a Pentium II/III type processor. This is common on what Intel calls its "boxed processors," which are sold individually and through dealers.
Figure 3.27 An active (fan-powered) heat sink and supports used with Pentium II/III–type processors.
Passive heat sinks are nearly 100 percent reliable, as they have no mechanical components to fail. Passive heat sinks (see Figure 3.28) are basically aluminum-finned radiators that dissipate heat through convection. Passive types don't work well unless there is some airflow across the fins, normally provided by the power supply fan or an extra fan in the case. If your case or power supply is properly designed, you can use a less-expensive passive heat sink instead of an active one.
Figure 3.28 A passive heat sink and supports used with Pentium II/III–type processors.
TIP
To function effectively, a heat sink must be attached to the processor as directly as possible. To eliminate air gaps and ensure a good transfer of heat, in most cases you should put a thin coating of thermal transfer grease on the surface of the processor where the heat sink attaches. This dramatically decreases thermal resistance and is required for maximum performance.
To have the best possible transfer of heat from the processor to the heat sink, most heat sink manufacturers specify some type of thermal interface material to be placed between the processor and the heat sink. This normally consists of a zinc-based white grease (similar to what skiers put on their noses to block the sun), but can also be a special pad or even a type of double-stick tape. Using a thermal interface aid such as thermal grease can improve heat sink performance dramatically. Figure 3.29 shows the thermal interface pad or grease positioned between the processor and heat sink.
Figure 3.29 Thermal interface material helps transfer heat from the processor die to the heat sink.
Most of the newer systems on the market use an improved motherboard form factor (shape) design called ATX. Systems made from this type of motherboard and case allow for improved cooling of the processor due to the processor being repositioned in the case near the power supply. Also, most of these cases now feature a secondary fan to further assist in cooling. Normally the larger case-mounted fans are more reliable than the smaller fans included in active heat sinks. A properly designed case can move sufficient air across the processor, allowing for a more reliable and less-expensive passive (no fan) heat sink to be used.
CPU Operating Voltages
One trend that is clear to anyone who has been following processor design is that operating voltages have gotten lower and lower. The benefits of lower voltage are threefold. The most obvious is that with lower voltage comes lower overall power consumption. By consuming less power, the system is less expensive to run, but more importantly for portable or mobile systems, it runs much longer on existing battery technology. The emphasis on battery operation has driven many of the advances in lowering processor voltage, because this has a great effect on battery life.
The second major benefit is that with less voltage and therefore less power consumption, there will be less heat produced. Processors that run cooler can be packed into systems more tightly and will last longer. The third major benefit is that a processor running cooler on less power can be made to run faster. Lowering the voltage has been one of the key factors in allowing the clock rates of processors to go higher and higher.
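The reason the savings are so large is that the dynamic (switching) power of CMOS logic scales roughly with the square of the voltage, about C x V^2 x f. The short C sketch below illustrates this relationship; the effective capacitance figure is invented purely for illustration and does not describe any real processor.

#include <stdio.h>

/* Illustrates why lower core voltage cuts power so sharply: dynamic CMOS
 * power scales roughly as C x V^2 x f. The capacitance value below is an
 * assumption made only for this example. */
int main(void)
{
    double c_eff = 5e-9;       /* effective switched capacitance, farads (illustrative) */
    double freq  = 200e6;      /* clock frequency, Hz */
    double volts[] = { 5.0, 3.3, 2.8, 1.6 };

    for (int i = 0; i < 4; i++) {
        double power = c_eff * volts[i] * volts[i] * freq;
        printf("%.1f V -> roughly %.1f W\n", volts[i], power);
    }
    return 0;
}

Halving the voltage cuts the switching power to about a quarter, which is why dropping from 5v to the 1.6v range has such a dramatic effect on both heat and battery life.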
Until the release of the mobile Pentium and both the desktop and mobile Pentium MMX, most processors used a single voltage level to power both the core and the input/output circuits. Originally, most processors ran both the core and I/O circuits at 5 volts, which was later reduced to 3.5 or 3.3 volts to lower power consumption. When a single voltage is used for both the internal processor core and the external processor bus and I/O signals, the processor is said to have a single or unified power plane design.
When originally designing a version of the Pentium processor for mobile or portable computers, Intel came up with a scheme to dramatically reduce power consumption while still remaining compatible with the existing 3.3v chipsets, bus logic, memory, and other components. The result was a dual-plane or split-plane power design in which the processor core ran at a lower voltage while the I/O circuits remained at 3.3v. This was originally called Voltage Reduction Technology (VRT) and debuted in the mobile Pentium processors released in 1996. Later, this dual-plane power design also appeared in desktop processors such as the Pentium MMX, which used 2.8v to power the core and 3.3v for the I/O circuits. Now most recent processors, whether for mobile or desktop use, feature a dual-plane power design. Some of the more recent mobile Pentium II processors run on as little as 1.6v for the core while still maintaining compatibility with 3.3v components for I/O.
Knowing the processor voltage requirements is not a big issue with Socket 370, Socket A, Pentium Pro (Socket 8), or Pentium II (Slot 1 or Slot 2) processors, because these sockets and slots have special voltage ID (VID) pins that the processor uses to signal its exact voltage requirements to the motherboard. This allows the voltage regulators built in to the motherboard to be set automatically to the correct voltage levels by merely installing the processor.
Unfortunately, this automatic voltage setting feature is not available on Socket 7 and earlier motherboard and processor designs. This means you must normally set jumpers or otherwise configure the motherboard according to the voltage requirements of the processor you are installing. Pentium (Socket 4, 5, or 7) processors have run on a number of voltages, but the latest MMX versions are all 2.8v, except for mobile Pentium processors, which are as low as 1.8v. Table 3.11 lists the voltage settings used by Intel Pentium (non-MMX) processors that use a single power plane. This means that both the CPU core and the I/O pins run at the same voltage.
Processor Slots
After introducing the Pentium Pro with its integrated L2 cache, Intel discovered that the physical package it chose was very costly to produce. Intel was looking for a way to easily integrate cache and possibly other components into a processor package, and it came up with a cartridge or board design as the best way to do this. In order to accept its new cartridges, Intel designed two different types of slots that could be used on motherboards.
Slot 1 is a 242-pin slot designed to accept Pentium II, Pentium III, and most Celeron processors. Slot 2 is a more sophisticated 330-pin slot designed for the Pentium II and III Xeon processors, which are primarily for workstations and servers. Besides the extra pins, the biggest difference between Slot 1 and Slot 2 is that Slot 2 was designed to support systems with four or more processors on a single board, whereas Slot 1 allows only single- or dual-processor operation.
Note that Slot 2 is also called SC330, which stands for Slot Connector with 330 pins.
Slot 1 (SC242)
Slot 1, also called SC242 (Slot Connector, 242 pins), is used by the SEC (Single Edge Contact) cartridge design of the cartridge-type Pentium II/III and Celeron processors. Inside the cartridge is a substrate card that includes the processor and L2 cache. Unlike the Pentium Pro, the L2 cache is mounted on the circuit board and not within the same chip package as the processor. This allowed Intel to use aftermarket SRAM chips instead of making them internally, and also allowed it to easily make processors with different amounts of cache. For example, the original Celeron was created by taking a Pentium II and simply leaving out the external cache chips.
Zero Insertion Force (ZIF) Sockets
When the Socket 1 specification was created, manufacturers realized that if users were going to upgrade processors, they had to make the process easier. The socket manufacturers found that it typically takes 100 pounds of insertion force to install a chip in a standard 169-pin screw machine socket on a Socket 1 motherboard. With this much force involved, you easily could damage either the chip or the socket during removal or reinstallation. Because of this, some motherboard manufacturers began using Low Insertion Force (LIF) sockets, which typically required only 60 pounds of insertion force for a 169-pin chip. With the LIF or standard socket, I usually advise removing the motherboard—that way you can support the board from behind when you insert the chip. Pressing down on the motherboard with 60–100 pounds of force can crack the board if it is not supported properly. A special tool is also required to remove a chip from one of these sockets. As you can imagine, even the low insertion force was relative, and a better solution was needed if the average person was ever going to replace the CPU.
Manufacturers began using special Zero Insertion Force (ZIF) sockets in their later Socket 1 motherboard designs. Since then, virtually all processor sockets have been of the ZIF design. Note, however, that a given Socket X specification has nothing to do with whether it is ZIF, LIF, or standard; the socket specification covers only the pin arrangement. These days, nearly all motherboard manufacturers are using ZIF sockets. These sockets almost eliminate the risk involved in upgrading because no insertion force is necessary to install the chip. Most ZIF sockets are handle-actuated; you lift the handle, drop the chip into the socket, and then close the handle. This design makes replacing the original processor with the upgrade processor an easy task.
Because of the number of pins involved, virtually all CPU sockets from Socket 2 through the present are implemented in ZIF form. This means that since the 486 era, removing the CPU from most motherboards does not require any tools.
Single Edge Contact (SEC) and Single Edge Processor (SEP) Packaging
Abandoning the chip-in-a-socket approach used by virtually all processors until this point, the Pentium II/III chips are characterized by their Single Edge Contact (SEC) cartridge design. The processor, along with several L2 cache chips, is mounted on a small circuit board (much like an oversized memory SIMM), which is then sealed in a metal and plastic cartridge. The cartridge is then plugged into the motherboard through an edge connector called Slot 1, which looks very much like an adapter card slot.
By placing the processor and L2 cache as separate chips inside a cartridge, Intel now has a CPU module that is easier and less expensive to make than the Pentium Pro that preceded it. The Single Edge Contact (SEC) cartridge is an innovative—if a bit unwieldy—package design that incorporates the backside bus and L2 cache internally. Using the SEC design, the core and L2 cache are fully enclosed in a plastic and metal cartridge. These subcomponents are surface mounted directly to a substrate (or base) inside the cartridge to enable high-frequency operation. The SEC cartridge technology allows the use of widely available, high-performance industry standard Burst Static RAMs (BSRAMs) for the dedicated L2 cache. This greatly reduces the cost compared to the proprietary cache chips used inside the CPU package in the Pentium Pro.
A less expensive version of the SEC is called the Single Edge Processor (SEP) package. The SEP package is basically the same circuit board containing processor and (optional) cache as the Pentium II, but without the fancy plastic cover. The SEP package plugs directly into the same Slot 1 connector used by the standard Pentium II. Four holes on the board allow for the heat sink to be installed.
Slot 1 is the connection to the motherboard and has 242 pins. The Slot 1 dimensions are shown in Figure 3.6. The SEC cartridge or SEP processor is plugged into Slot 1 and secured with a processor-retention mechanism, which is a bracket that holds it in place. There may also be a retention mechanism or support for the processor heat sink. Figure 3.7 shows the parts of the cover that make up the SEC package. Note the large thermal plate used to aid in dissipating the heat from this processor. The SEP package is shown in Figure 3.8.
Figure 3.6 Pentium II Processor Slot 1 dimensions (metric/English).
Figure 3.7 Pentium II Processor SEC package parts.
Figure 3.8 Celeron Processor SEP package front-side view.
With the Pentium III, Intel introduced a variation on the SEC packaging called SECC2 (Single Edge Contact Cartridge version 2). This new package covers only one side of the processor board and allows the heat sink to attach directly to the chip on the other side. This direct thermal interface allows for better cooling, and the overall lighter package is cheaper to manufacture. Note that a new Universal Retention System, consisting of a newly designed plastic upright stand, is required to hold the SECC2 package chip in place on the board. The Universal Retention System also works with the older SEC package used on most Pentium II processors, as well as the SEP package used on the slot-based Celeron processors, making it the ideal retention mechanism for all Slot 1-based processors. Figure 3.9 shows the SECC2 package.
Figure 3.9 SECC2 packaging used in newer Pentium II and III processors.
The main reason for going to the SEC and SEP packages in the first place was to be able to move the L2 cache memory off the motherboard and onto the processor in an economical and scalable way. Using the SEC/SEP design, Intel can easily offer Pentium II/III processors with more or less cache and faster or slower cache.
PGA Chip Packaging
PGA packaging has been the most common chip package used until recently. It was used starting with the 286 processor in the 1980s and is still used today for Pentium and Pentium Pro processors. PGA takes its name from the fact that the chip has a grid-like array of pins on the bottom of the package. PGA chips are inserted into sockets, which are often of a ZIF (Zero Insertion Force) design. A ZIF socket has a lever to allow for easy installation and removal of the chip.
Most Pentium processors use a variation on the regular PGA called SPGA (Staggered Pin Grid Array), where the pins are staggered on the underside of the chip rather than in standard rows and columns. This was done to move the pins closer together and decrease the overall size of the chip when a large number of pins is required. Figure 3.5 shows a Pentium Pro that uses the dual-pattern SPGA (on the right) next to an older Pentium 66 that uses the regular PGA. Note that the right half of the Pentium Pro shown here has additional pins staggered among the other rows and columns.
Figure 3.5 PGA on Pentium 66 (left) and dual-pattern SPGA on Pentium Pro (right).
Processor Manufacturing
Processors are manufactured primarily from silicon, the second-most common element in the Earth's crust (only oxygen is more common). Silicon is the primary ingredient in beach sand; however, in that form it isn't pure enough to be used in chips.
The manner in which silicon is formed into chips is a lengthy process that starts by growing pure silicon crystals via what is called the Czochralski method (named after the inventor of the process). In this method, electric arc furnaces transform the raw materials (primarily mined quartz rock) into metallurgical-grade silicon. Then, to further weed out impurities, the silicon is converted to a liquid, distilled, and redeposited in the form of semiconductor-grade rods, which are 99.999999 percent pure. These rods are then mechanically broken up into chunks and packed into quartz crucibles, which are loaded into electric crystal-pulling ovens. There the silicon chunks are melted at over 2,500° Fahrenheit. The ovens are normally mounted on very thick concrete cubes, often on a suspension system, to prevent any vibration that would damage the crystal as it forms.
Once the silicon is melted, a small seed crystal is inserted into the molten silicon and slowly rotated (see Figure 3.3). As the seed is pulled out of the molten silicon, some of the silicon sticks to the seed and hardens in the same crystal structure as the seed. By carefully controlling the pulling speed (10 to 40 millimeters per hour) and temperature (approximately 2,500° F), the crystal grows with a narrow neck that then widens into the full desired diameter. Depending on the chips being made, each ingot is approximately 200mm (8 inches) or 300mm (12 inches) in diameter and over five feet long, weighing hundreds of pounds.
Figure 3.3 Growing a pure silicon ingot in a high-pressure, high-temperature oven.
The ingot is then ground into a perfect 200mm (8-inch) or 300mm (12-inch) diameter cylinder, normally with a flat cut on one side for positioning accuracy and handling. Each ingot is then cut with a high-precision diamond saw into over a thousand circular wafers, each less than a millimeter thick (see Figure 3.4). Each wafer is then polished to a mirror-smooth surface.
Figure 3.4 Slicing a silicon ingot into wafers with a diamond saw.
Chips are manufactured from the wafers using a process called photolithography. Through this photographic process, transistors and circuit and signal pathways are created in semiconductors by depositing different layers of various materials on the chip, one after the other. Where two specific circuits intersect, a transistor or switch can be formed.
The photolithographic process starts when an insulating layer of silicon dioxide is grown on the wafer through a vapor deposition process. Then a coating of photoresist material is applied and an image of that layer of the chip is projected through a mask onto the now light-sensitive surface.
Doping is the term used to describe chemical impurities added to silicon (which is naturally a non-conductor), creating a material with semiconductor properties. The projector uses a specially created mask, which is essentially a negative of that layer of the chip etched in chrome on a quartz plate. The Pentium III currently uses twenty or more masks to create six layers of metal and semiconductor interconnects.
As the light passes through a mask, the light is focused on the wafer surface, imprinting it with the image of that layer of the chip. Each individual chip image is called a die. A device called a stepper then moves the wafer over a little bit and the same mask is used to imprint another chip die immediately next to the previous one. After the entire wafer is imprinted with chips, a caustic solution washes away the areas where the light struck the photoresist, leaving the mask imprints of the individual chip vias (interconnections between layers) and circuit pathways. Then, another layer of semiconductor material is deposited on the wafer with more photoresist on top, and the next mask is used to produce the next layer of circuitry. Using this method, the layers and components of each chip are built one on top of the other, until the chips are completed.
The final masks add the metallization layers, which are the metal interconnects used to tie all the individual transistors and other components together. Most chips use aluminum interconnects today, although many will be moving to copper in the future. The first commercial PC chip using copper is the Athlon made in AMD's Dresden fab. Copper is a better conductor than aluminum and will allow smaller interconnects with less resistance, meaning smaller and faster chips can be made. The reason copper hasn't been used up until recently is that there were difficult corrosion problems to overcome during the manufacturing process that were not as much a problem with aluminum. As these problems have been solved, there will be more and more chips fabricated with copper interconnects.
A completed circular wafer will have as many chips imprinted on it as can possibly fit. Because each chip is normally square or rectangular, there are some unused portions at the edges of the wafer, but every attempt is made to use every square millimeter of surface.
The standard wafer size used in the industry today is 200mm in diameter, or just under eight inches. This results in a wafer area of about 31,416 square millimeters. The Pentium II 300MHz processor, for example, was made up of 7.5 million transistors using a 0.35 micron (millionth of a meter) process. This process results in a die of about 14.2mm on each side, which is 202 square millimeters of area. This means that about 150 Pentium II 300MHz chips on the 0.35 micron process could be made from a single 200mm-diameter wafer.
The trend in the industry is to go to both larger wafers and a smaller chip die process. Process refers to the size of the individual circuits and transistors on the chip. For example, the Pentium II 333MHz through 450MHz processors were made on a newer and smaller .25 micron process, which reduced the total chip die size to only 10.2mm on each side, or a total chip area of 104 square millimeters. On the same 200mm (8-inch) wafer as before, Intel can make about 300 Pentium II chips using this process, or double the amount over the larger .35 micron process 300MHz version.
The Pentium III in the 600MHz and faster speeds is built on a .18 micron process and has a die size of only 104 square millimeters, which is about 10.2mm on each side. This is the same size as the older Pentium II, even though the newer PIII has 28.1 million transistors (including the on-die L2 cache) compared to only 7.5 million for the Pentium II.
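As a quick sanity check of these die counts, the short C program below divides the wafer area by the die area, the same simple arithmetic used in the text. A real fab loses some partial dies at the round wafer's edge, and not every die works (the yield, discussed below), so treat the result as an upper bound; the yield value shown is an assumption for illustration only.

#include <stdio.h>

/* Rough dies-per-wafer estimate by simple area division, matching the
 * Pentium II example in the text. Edge losses are ignored, and the yield
 * figure is an assumption for illustration only. */
int main(void)
{
    const double pi = 3.14159265358979;
    double wafer_diameter_mm = 200.0;   /* 8-inch wafer */
    double die_side_mm       = 14.2;    /* Pentium II 300MHz, 0.35 micron process */

    double wafer_area = pi * (wafer_diameter_mm / 2.0) * (wafer_diameter_mm / 2.0);
    double die_area   = die_side_mm * die_side_mm;
    double gross_dies = wafer_area / die_area;

    double yield = 0.80;                /* assumed fraction of working dies */
    printf("Wafer area: about %.0f square mm\n", wafer_area);
    printf("Gross dies per wafer: about %.0f\n", gross_dies);
    printf("Good dies at %.0f%% yield: about %.0f\n", yield * 100.0, gross_dies * yield);
    return 0;
}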
In the future, processes will move from .18 micron to .13 micron, and from 200mm (eight-inch) wafers to 300mm (12-inch) wafers. The larger 300mm wafers alone will allow for more than double the number of chips to be made, compared to the 200mm mostly used today. The smaller 0.13-micron process will allow more transistors to be incorporated into the die while maintaining a reasonable die size allowing for sufficient yield. This means the trend for incorporating L2 cache within the die will continue, and transistor counts will rise up to 200 million per chip or more in the future. The current king of transistors is the Intel Pentium III Xeon introduced in May 2000 with 2MB of on-die cache and a whopping 140 million transistors in a single die.
The trend in wafers is to move from the current 200mm (eight-inch) diameter to a bigger, 300mm (12-inch) diameter wafer. This will increase surface area dramatically over the smaller 200mm design and boost chip production to about 675 chips per wafer. Intel and other manufacturers expect to have 300mm wafer production in place during 2001. After that happens, chip prices should continue to drop dramatically as supply increases.
Note that not all the chips on each wafer will be good, especially as a new production line starts. As the manufacturing process for a given chip or production line is perfected, more and more of the chips will be good. The ratio of good chips to the total number of chips on a wafer is called the yield. Yields well under 50 percent are common when a new chip starts production; however, by the end of a given chip's life, the yields are normally in the 90 percent range. Most chip manufacturers guard their yield figures and are very secretive about them because knowledge of yield problems can give their competitors an edge. A low yield causes problems both in the cost per chip and in delivery delays to customers. If a company has specific knowledge of competitors' improving yields, it can set prices or schedule production to gain market share at a critical point. For example, AMD was plagued by low-yield problems during 1997 and 1998, which cost it significant market share. It has since solved the problems, and lately it seems Intel has had the harder time meeting production demands.
After a wafer is complete, a special fixture tests each of the chips on the wafer and marks the bad ones to be separated out later. The chips are then cut from the wafer using either a high-powered laser or diamond saw.
After being cut from the wafers, the individual die are then retested, packaged, and retested again. The packaging process is also referred to as bonding, because the die is placed into a chip housing where a special machine bonds fine gold wires between the die and the pins on the chip. The package is the container for the chip die, and it essentially seals it from the environment.
After the chips are bonded and packaged, final testing is done to determine both proper function and rated speed. Different chips in the same batch will often run at different speeds. Special test fixtures run each chip at different pressures, temperatures, and speeds, looking for the point at which the chip stops working. At this point, the maximum successful speed is noted and the final chips are sorted into bins with those that tested at a similar speed. For example, the Pentium III 750, 866, and 1000 are all exactly the same chip made using the same die. They were sorted at the end of the manufacturing cycle by speed.
One interesting thing about this is that as a manufacturer gains more experience and perfects a particular chip assembly line, the yield of the higher speed versions goes way up. This means that out of a wafer of 150 total chips, perhaps more than 100 of them check out at 1000MHz, while only a few won't run at that speed. The paradox is that Intel often sells a lot more of the lower-priced 933 and 866MHz chips, so it will just dip into the bin of 1000MHz processors and label them as 933 or 866 chips and sell them that way. People began discovering that many of the lower-rated chips would actually run at speeds much higher than they were rated, and the business of overclocking was born. Overclocking describes the operation of a chip at a speed higher than it was rated for. In many cases, people have successfully accomplished this because, in essence, they had a higher-speed processor already—it was marked with a lower rating only because it was sold as the slower version.
An interesting problem then arose: Unscrupulous vendors began taking slower chips, remarking them, and reselling them as if they were faster. The price difference between the same chip at different speed grades can be substantial, often hundreds of dollars, so by changing a few numbers on the chip the potential profits can be huge. Because most of the Intel and AMD processors are produced with a generous safety margin—that is, they will normally run well past their rated speed—the remarked chips would seem to work fine in most cases. Of course, in many cases they wouldn't work fine, and the system would end up crashing or locking up periodically.
At first the remarked chips were just a case of rubbing off the original numbers and restamping with new official-looking numbers. These were normally easy to detect. Remarkers then resorted to manufacturing completely new processor housings, especially for the plastic-encased Slot 1 and Slot A processors from Intel and AMD. Although it may seem like a lot of trouble to make a custom plastic case and swap it with the existing one, the potential profits make it very lucrative for criminals. This type of remarking is a form of organized crime and isn't just some kid in his basement with sandpaper and a rubber stamp.
Intel and AMD have put a stop to some of the remarking by building overclock protection, in the form of a multiplier lock, into most of their newer chips. This is usually done in the bonding or cartridge manufacturing process, where the chips are intentionally altered so they won't run at any speed higher than they are rated for. Normally this involves changing the bus frequency (BF) pins on the chip, which control the internal multipliers the chip uses. Even so, enterprising individuals have found ways to run their motherboards at bus speeds higher than normal, so even though the chip won't allow a higher multiplier, you can still run it at a speed higher than it was designed for.
Be Wary of PII and PIII Overclocking Fraud
Also note that unscrupulous individuals have devised a small logic circuit that bypasses the multiplier lock, allowing the chip to run at higher multipliers. This small circuit can be hidden in the PII or PIII cartridge, and then the chip can be remarked or relabeled to falsely indicate it is a higher speed version. This type of chip remarking fraud is far more common in the industry than people want to believe. In fact, if you purchase your system or processor from a local computer flea market show, you have an excellent chance of getting a remarked chip. I recommend purchasing processors only from more reputable direct distributors or dealers. Contact Intel, AMD, or Cyrix for a list of their reputable distributors and dealers.
I recently installed a 200MHz Pentium processor in a system that is supposed to run at a 3x multiplier based on a 66MHz motherboard speed. I tried changing the multiplier to 3.5x, but the chip refused to go any faster; in fact, it ran at the same or a lower speed than before. This is a sure sign of overclock protection inside, which is to say that the chip won't support any higher multiplier than it was designed for. Today, all Intel Pentium II and III processors are multiplier locked, which means the multiplier can no longer be controlled by the motherboard. This means that overclocking can be accomplished only by running the motherboard at a higher bus speed than the processor was designed for. My motherboard at the time included a jumper setting for an unauthorized speed of 75MHz, which when multiplied by 3x resulted in an actual processor speed of 225MHz. This worked like a charm, and the system is now running fast and clean. Many new motherboards have BIOS or jumper settings that can be used to tweak the motherboard bus speed a few MHz higher than normal, which is then internally multiplied by the processor to even higher speeds. Note that I am not necessarily recommending overclocking for everybody; in fact, I normally don't recommend it at all for any important systems. If you have a system you want to fool around with, it is interesting to try. Like my cars, I always seem to want to hotrod my computers.
The real problem with the overclock protection as implemented by Intel and AMD is that the professional counterfeiter can still override it by inserting some custom circuitry underneath the plastic case enclosing the processor. This again is particularly a problem with the slot-based processors, since they use a case cover that can hide this circuitry. Socketed processors are much more immune to these remarking attempts. To protect yourself from purchasing a fraudulent chip, verify the specification numbers and serial numbers with Intel and AMD before you purchase. Also beware where you buy your hardware. Purchasing over online auction sites can be extremely dangerous since it is so easy to defraud the purchaser. Also the traveling computer show/flea market arenas can be a hotbed of this type of activity.
Fraudulent computer components are not limited to processors; I have seen fake memory (SIMMs/DIMMs), fake mice, fake video cards, fake cache memory, counterfeit operating systems and applications, and even fake motherboards. The hardware that is faked normally works, but is of inferior quality to the type it is purporting to be. For example, one of the most highly counterfeited pieces of hardware is the Microsoft mouse. They sell for $35 wholesale and yet I can purchase cheap mice from overseas manufacturers for as little as $2.32 each. It didn't take somebody long to realize that if they made the $2 mouse look like a $35 Microsoft mouse, they could sell it for $20 and people would think they were getting a genuine article for a bargain, while the thieves run off with a substantial profit.
Dual Independent Bus (DIB) Architecture
The Dual Independent Bus (DIB) architecture was first implemented in the sixth-generation processors from Intel and AMD. DIB was created to improve processor bus bandwidth and performance. Having two (dual) independent data I/O buses enables the processor to access data from either of its buses simultaneously and in parallel, rather than in a singular sequential manner (as in a single-bus system). The second or backside bus in a processor with DIB is used for the L2 cache, allowing it to run at much greater speeds than if it were to share the main processor bus.
NOTE
The DIB architecture is explained more fully in Chapter 4, "Motherboards and Buses." To see the typical Pentium II/III system architecture, see Figure 4.34.
Two buses make up the DIB architecture: the L2 cache bus and the processor-to-main-memory, or system, bus. The P6-class processors, from the Pentium Pro through the Celeron and Pentium II/III to the Athlon/Duron, can use both buses simultaneously, eliminating the bottleneck of a single shared bus. The Dual Independent Bus architecture enables the L2 cache of the 1GHz Pentium III or Athlon, for example, to run 15 times faster than the L2 cache of older Pentium and K6 processors, whose L2 cache sat on the motherboard and ran at the motherboard bus speed (typically 66MHz). Because the backside or L2 cache bus is coupled to the speed of the processor core, as the frequency of processors increases, so will the speed of the L2 cache.
The key to implementing DIB was to move the L2 cache memory off the motherboard and into the processor package. L1 cache has been a direct part of the processor die since the 486, but L2 was larger and had to be external. By moving the L2 cache into the processor, the L2 cache could run at speeds closer to those of the L1 cache, much faster than the motherboard or processor bus. To move the L2 cache into the processor, modifications initially had to be made to the CPU socket or slot. There are two slot-based and three socket-based solutions that fully support DIB: Slot 1 (Pentium II/III/Celeron), Slot A (Athlon), Socket 8 (Pentium Pro), Socket 370 (Pentium III/Celeron), and Socket A (Athlon/Duron).
DIB also allows the system bus to perform multiple simultaneous transactions (instead of single sequential transactions), accelerating the flow of information within the system and boosting performance. Overall, the DIB architecture offers up to three times the bus bandwidth of a single-bus architecture processor.
Dynamic Execution
First used in the P6 or sixth-generation processors, dynamic execution is an innovative combination of three processing techniques designed to help the processor manipulate data more efficiently. Those techniques are multiple branch prediction, data flow analysis, and speculative execution. Dynamic execution enables the processor to be more efficient by manipulating data in a more logically ordered fashion rather than simply processing a list of instructions, and it is one of the hallmarks of all sixth-generation processors.
The way software is written can dramatically influence a processor's performance. For example, performance will be adversely affected if the processor is frequently required to stop what it is doing and jump or branch to a point elsewhere in the program. Delays also occur when the processor cannot process a new instruction until the current instruction is completed. Dynamic execution allows the processor to not only dynamically predict the order of instructions, but execute them out of order internally, if necessary, for an improvement in speed.
Multiple Branch Prediction
Multiple branch prediction predicts the flow of the program through several branches. Using a special algorithm, the processor can anticipate jumps or branches in the instruction flow. It uses this to predict where the next instructions can be found in memory with an accuracy of 90 percent or greater. This is possible because while the processor is fetching instructions, it is also looking at instructions further ahead in the program.
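Intel's actual prediction algorithms are proprietary, but the flavor of the technique can be shown with a classic textbook scheme. The C sketch below implements a simple two-bit saturating-counter predictor, which learns whether a branch is usually taken; on the loop-like pattern shown it guesses correctly 9 times out of 10, in line with the accuracy figure quoted above.

#include <stdio.h>

/* A classic two-bit saturating-counter branch predictor. This is a textbook
 * illustration, not Intel's actual (proprietary) algorithm. */
static int counter = 2;              /* 0-1 = predict not taken, 2-3 = predict taken */

static int predict(void) { return counter >= 2; }

static void update(int taken)
{
    if (taken && counter < 3) counter++;
    if (!taken && counter > 0) counter--;
}

int main(void)
{
    /* A typical loop branch: taken nine times, then falls through once. */
    int history[10] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 0 };
    int correct = 0;

    for (int i = 0; i < 10; i++) {
        if (predict() == history[i]) correct++;
        update(history[i]);
    }
    printf("Predicted %d of 10 branches correctly\n", correct);
    return 0;
}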
Data Flow Analysis
Data flow analysis analyzes and schedules instructions to be executed in an optimal sequence, independent of the original program order. The processor looks at decoded software instructions and determines whether they are available for processing or are instead dependent on other instructions to be executed first. The processor then determines the optimal sequence for processing and executes the instructions in the most efficient manner.
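The processor performs this analysis on machine instructions in hardware, not on source code, but the dependences it looks for are the same ones visible in the small C fragment below: the second statement must wait for the first, while the third is independent and can be executed alongside (or ahead of) them without changing the result.

int example(int x, int y, int p, int q)
{
    int a = x + y;   /* instruction 1 */
    int b = a * 2;   /* instruction 2: depends on the result of instruction 1 */
    int c = p - q;   /* instruction 3: independent, free to execute early or in parallel */
    return b + c;    /* the final result is the same regardless of the order chosen */
}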
Speculative Execution
Speculative execution increases performance by looking ahead of the program counter and executing instructions that are likely to be needed later. Because the software instructions being processed are based on predicted branches, the results are stored in a pool for later referral. If they are to be executed by the resultant program flow, the already completed instructions are retired and the results are committed to the processor's main registers in the original program execution order. This technique essentially allows the processor to complete instructions in advance and then grab the already completed results when necessary.
3DNow and Enhanced 3DNow
3DNow technology is AMD's alternative to the SSE instructions in the Intel processors. Actually, 3DNow originally came out in the K6 series before Intel released SSE in the Pentium III, and AMD then added Enhanced 3DNow to the Athlon and Duron processors. AMD licensed MMX from Intel, and all its K6 series, Athlon, Duron, and later processors include full MMX instruction support. Not wanting to additionally license the SSE instructions being developed by Intel, AMD instead came up with a different set of extensions beyond MMX called 3DNow. Introduced in May 1998 in the K6-2 processor and later enhanced when the Athlon was introduced in June 1999, 3DNow and Enhanced 3DNow are sets of instructions that extend the multimedia capabilities of the AMD chips beyond MMX. This allows greater performance for 3D graphics, multimedia, and other floating-point-intensive PC applications.
3DNow technology is a set of 21 instructions that uses SIMD (Single Instruction Multiple Data) techniques to operate on arrays of data rather than single elements. Enhanced 3DNow adds 24 more instructions to the original 21, for a total of 45 new instructions. Positioned as an extension to MMX technology, 3DNow is similar to the SSE (Streaming SIMD Extensions) found in the Pentium III and Celeron processors from Intel. According to AMD, 3DNow provides approximately the same level of improvement to MMX as SSE did, but in fewer instructions with less complexity. Although similar in capability, the two are not compatible at the instruction level; software specifically written to support SSE will not support 3DNow, and vice versa.
Just as with SSE, 3DNow also supports single precision floating point SIMD (Single Instruction Multiple Data) operations and enables up to four floating point operations per cycle. 3DNow floating point instructions can be mixed with MMX instructions with no performance penalties. 3DNow also supports data prefetching.
Also like SSE, 3DNow is well supported by software, including Microsoft Windows 9x, Windows NT 4.0, and all newer Microsoft operating systems. Application programming interfaces such as Microsoft's DirectX 6.x API and SGI's OpenGL API have been optimized for 3DNow technology, as have the drivers for many leading 3D graphic accelerator suppliers, including 3Dfx, ATI, Matrox, and nVidia. While many games and video drivers support 3DNow, support is lacking from some of the major business graphics applications such as Adobe Photoshop.
SSE (Streaming SIMD Extensions)
In February 1999, Intel introduced the Pentium III processor, and included in that processor was an update to MMX called Streaming SIMD Extensions (SSE). These were also called Katmai New Instructions (KNI) up until their debut, because they were originally included on the Katmai processor (Katmai being the code name for the Pentium III). The Celeron 533A and faster Celeron processors based on the Pentium III core also support SSE instructions. The earlier Pentium II, as well as the Celeron 533 and slower models (based on the Pentium II core), do not support SSE.
SSE includes 70 new instructions for graphics and sound processing over what MMX provided. SSE is similar to MMX; in fact, besides being called KNI (Katmai New Instructions), SSE was also called MMX-2 by some before it was released. Besides adding more MMX style instructions, the SSE instructions allow for floating-point calculations, and now use a separate unit within the processor instead of sharing the standard floating-point unit as MMX did.
The Streaming SIMD Extensions consist of 70 new instructions, including Single Instruction Multiple Data (SIMD) floating-point, additional SIMD integer, and cacheability control instructions. Some of the technologies that benefit from the Streaming SIMD Extensions include advanced imaging, 3D video, streaming audio and video (DVD playback), and speech recognition applications. The benefits of SSE include the following:
Higher resolution and higher quality image viewing and manipulation for graphics software
High-quality audio, MPEG2 video, and simultaneous MPEG2 encoding and decoding for multimedia applications
Reduced CPU utilization for speech recognition, as well as higher accuracy and faster response times when running speech recognition software
The SSE instructions are particularly useful with MPEG2 decoding, which is the standard scheme used on DVD video discs. This means that SSE-equipped processors should be more capable of doing MPEG2 decoding in software at full speed without requiring an additional hardware MPEG2 decoder card. SSE-equipped processors are also much better and faster than previous processors when it comes to speech recognition.
One of the main benefits of SSE over plain MMX is that it supports single-precision floating-point SIMD (Single Instruction Multiple Data) operations, which have posed a bottleneck in 3D graphics processing. Just as with plain MMX, SIMD enables multiple operations to be performed per processor instruction. Specifically, SSE supports up to four floating-point operations per cycle; that is, a single instruction can operate on four pieces of data simultaneously. SSE floating-point instructions can be mixed with MMX instructions with no performance penalties. SSE also supports data prefetching, which is a mechanism for reading data into the cache before it is actually called for.
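To make the idea concrete, the small C program below uses the compiler intrinsics from xmmintrin.h, which map directly to SSE instructions; the _mm_add_ps call generates a single ADDPS instruction that adds four single-precision floats at once. It assumes an SSE-capable processor and a compiler that provides these intrinsics.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics; requires an SSE-capable CPU and compiler */

int main(void)
{
    float a[4]   = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4]   = { 10.0f, 20.0f, 30.0f, 40.0f };
    float sum[4];

    __m128 va = _mm_loadu_ps(a);      /* load four floats into one 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vs = _mm_add_ps(va, vb);   /* one SIMD instruction performs four additions */
    _mm_storeu_ps(sum, vs);

    printf("%g %g %g %g\n", sum[0], sum[1], sum[2], sum[3]);
    return 0;
}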
Note that for any of the SSE instructions to be beneficial, they must be encoded in the software you are using, which means that SSE-aware applications must be used to see the benefits. Most software companies writing graphics- and sound-related software today have updated those applications to be SSE-aware and use the features of SSE. For example, high-powered graphics applications such as Adobe Photoshop support SSE instructions for higher performance on processors equipped with SSE. Microsoft included support for SSE in its DirectX 6.1 and later video and sound drivers, which were included with Windows 98 Second Edition, Windows Me, Windows NT 4.0 (with Service Pack 5 or later), and Windows 2000.
Note that SSE is an extension to MMX, meaning that processors supporting SSE also support the original MMX instructions. This means that standard MMX-enabled applications run as they did on MMX-only processors.
MMX Technology
MMX technology was originally named for multimedia extensions, or matrix math extensions, depending on whom you ask. Intel officially states that it is actually not an abbreviation and stands for nothing other than the letters MMX (not being an abbreviation was apparently required so that the letters could be trademarked); however, the internal origins are probably one of the preceding. MMX technology was introduced in the later fifth-generation Pentium processors (see Figure 3.2) as a kind of add-on that improves video compression/decompression, image manipulation, encryption, and I/O processing—all of which are used in a variety of today's software.
Figure 3.2 An Intel Pentium MMX chip shown from the top and bottom (exposing the die). Photograph used by permission of Intel Corporation.
MMX consists of two main processor architectural improvements. The first is very basic; all MMX chips have a larger internal L1 cache than their non-MMX counterparts. This improves the performance of any and all software running on the chip, regardless of whether it actually uses the MMX-specific instructions.
The other part of MMX is that it extends the processor instruction set with 57 new commands or instructions, as well as a new instruction capability called Single Instruction, Multiple Data (SIMD).
Modern multimedia and communication applications often use repetitive loops that, while occupying 10 percent or less of the overall application code, can account for up to 90 percent of the execution time. SIMD enables one instruction to perform the same function on multiple pieces of data, similar to a teacher telling an entire class to "sit down," rather than addressing each student one at a time. SIMD allows the chip to reduce processor-intensive loops common with video, audio, graphics, and animation.
These 57 new instructions are specifically designed to manipulate and process video, audio, and graphical data more efficiently. They are oriented to the highly parallel and often repetitive sequences found in multimedia operations. Highly parallel refers to the fact that the same processing is done on many different data points, such as when modifying a graphic image. The main drawbacks to MMX were that it worked only on integer values and used the floating-point unit for processing, meaning that time was lost when a shift to floating-point operations was necessary. These drawbacks were corrected in the additions to MMX from Intel and AMD.
Intel licensed the MMX capabilities to competitors such as AMD and Cyrix, who were then able to upgrade their own Intel-compatible processors with MMX technology.
Superscalar Execution
The fifth-generation Pentium and newer processors feature multiple internal instruction execution pipelines, which enable them to execute multiple instructions at the same time. The 486 and all preceding chips can perform only a single instruction at a time. Intel calls the capability to execute more than one instruction at a time superscalar technology. This technology provides additional performance compared with the 486.
See "Pentium Processor."
Superscalar architecture usually is associated with high-output RISC (Reduced Instruction Set Computer) chips. A RISC chip has a less complicated instruction set with fewer and simpler instructions. Although each instruction accomplishes less, overall the clock speed can be higher, which can usually increase performance. The Pentium is one of the first CISC (Complex Instruction Set Computer) chips to be considered superscalar. A CISC chip uses a richer, fuller-featured instruction set, which has more complicated instructions. As an example, say you wanted to instruct a robot to screw in a light bulb. Using CISC instructions, you would say
1. Pick up the bulb.
2. Insert it into the socket.
3. Rotate clockwise until tight.
Using RISC instructions you would say something more along the lines of
1. Lower hand.
2. Grasp bulb.
3. Raise hand.
4. Insert bulb into socket.
5. Rotate clockwise one turn.
6. Is bulb tight? If not repeat step 5.
7. End.
Overall, many more RISC instructions are required to do the job because each instruction is simpler (reduced) and does less. The advantage is that there are fewer overall commands the robot (or processor) has to deal with, and it can execute the individual commands more quickly, and thus in many cases execute the complete task (or program) more quickly as well. The debate continues as to whether RISC or CISC is really better, but in reality there is no such thing as a pure RISC or CISC chip; it is all just a matter of definition, and the lines are somewhat arbitrary.
Intel and compatible processors have generally been regarded as CISC chips, although the fifth- and sixth-generation versions have many RISC attributes and internally break CISC instructions down into RISC versions.
SMM (Power Management)
Spurred on primarily by the goal of putting faster and more powerful processors in laptop computers, Intel created power-management circuitry. This circuitry enables processors to conserve energy and lengthen battery life. It was introduced initially in the Intel 486SL processor, which is an enhanced version of the 486DX processor. Subsequently, the power-management features were made universal and incorporated into all Pentium and later processors. This feature set is called SMM, which stands for System Management Mode.
SMM circuitry is integrated into the physical chip but operates independently to control the processor's power use based on its activity level. It allows the user to specify time intervals after which the CPU will be partially or fully powered down. It also supports the Suspend/Resume feature that allows for instant power on and power off, used mostly with laptop PCs. These settings are normally controlled via system BIOS settings.
Processor Specifications
Many confusing specifications often are quoted in discussions of processors. The following sections discuss some of these specifications, including the data bus, address bus, and speed. The next section includes a table that lists the specifications of virtually all PC processors.
Processors can be identified by two main parameters: how wide they are and how fast they are. The speed of a processor is a fairly simple concept. Speed is counted in megahertz (MHz), which means millions of cycles per second—and faster is better! The width of a processor is a little more complicated to discuss because three main specifications in a processor are expressed as a width (a short example after the following list shows how one of them, the address bus width, sets the maximum amount of memory a processor can use). They are
Internal registers
Data input and output bus
Memory address bus
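As a brief illustration of the third item, the width of the memory address bus determines how much memory the processor can address: n address lines can select 2 to the power n distinct byte locations. The short C program below works this out for three well-known widths (the 20-bit 8088, the 24-bit 286, and the 32-bit 386DX/486/Pentium).

#include <stdio.h>

/* How address bus width limits addressable memory: n lines -> 2^n bytes. */
int main(void)
{
    int widths[] = { 20, 24, 32 };   /* 8088, 286, 386DX/486/Pentium */

    for (int i = 0; i < 3; i++) {
        unsigned long long bytes = 1ULL << widths[i];
        printf("%2d address lines -> %llu bytes (%llu MB)\n",
               widths[i], bytes, bytes / (1024ULL * 1024ULL));
    }
    return 0;
}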
Systems below 16MHz usually had no cache memory at all. Starting with 16MHz systems, high-speed cache memory appeared on the motherboard because the main memory at the time could not run at 16MHz. Prior to the 486 processor, the cache on the motherboard was the only cache used in the system.
Starting with the 486 series, processors began including what was called L1 (Level 1) cache directly on the processor die. This meant that the L1 cache always ran at the full speed of the chip, especially important when the later 486 chips began to run at speeds higher than the motherboards they were plugged into. During this time the cache on the motherboard was called the second level or L2 cache, which ran at the slower motherboard speed.
Starting with the Pentium Pro and Pentium II, Intel began including L2 cache memory chips directly within the same package as the main processor. Originally this built-in L2 cache was implemented as physically separate chips contained within the processor package but not a part of the processor die. Since the speed of commercially available cache memory chips could not keep pace with the main processor, most of the L2 cache in these processors ran at one-half speed (Pentium II/III and AMD Athlon), while some ran the cache even slower, at two-fifths or even one-third the processor speed (AMD Athlon).
The original Pentium II, Pentium III, and Athlon (Model 1 and 2) processors use 512KB of either one-half, two-fifths, or one-third speed L2 cache, as Table 3.1 shows.
Pre-PC Microprocessor History
It is interesting to note that the microprocessor had only existed for 10 years prior to the creation of the PC! The microprocessor was invented by Intel in 1971. The PC was created by IBM in 1981. Now nearly 20 years later, we are still using systems based more or less on the design of that first PC (and mostly backward compatible with it). The processors powering our PCs today are still backward compatible in many ways with the 8088 selected by IBM in 1981.
The story of the development of the first microprocessor, the Intel 4004, can be read in Chapter 1, "Personal Computer Background." The 4004 processor was introduced on November 15, 1971, and originally ran at a clock speed of 108KHz (108,000 cycles per second, or just over one-tenth of a megahertz). The 4004 contained 2,300 transistors and was built on a 10 micron process, meaning that each line, trace, or transistor could be spaced about 10 microns (millionths of a meter) apart. Data was transferred four bits at a time, and the maximum addressable memory was only 640 bytes. The 4004 was designed for use in a calculator but proved to be useful for many other functions because of its inherent programmability.
In April 1972, Intel released the 8008 processor, which originally ran at a clock speed of 200KHz (0.2MHz). The 8008 processor contained 3,500 transistors and was built on the same 10 micron process as the previous processor. The big change in the 8008 was that it had an 8-bit data bus, which meant it could move data 8 bits at a time—twice as much as the previous chip. It could also address more memory, up to 16KB. This chip was primarily used in dumb terminals and general-purpose calculators.
The next chip in the lineup was the 8080, introduced in April 1974, running at a clock rate of 2MHz. Due mostly to the faster clock rate, the 8080 processor had 10 times the performance of the 8008. The 8080 chip contained 6,000 transistors and was built on a 6 micron process. Like the previous chip, the 8080 had an 8-bit data bus, so it could transfer 8 bits of data at a time. The 8080 could address up to 64KB of memory, significantly more than the previous chip.
It was the 8080 that helped start the PC revolution, as this was the processor chip used in what is generally regarded as the first personal computer, the Altair 8800. The CP/M operating system was written for the 8080 chip, and Microsoft was founded and delivered its first product: Microsoft BASIC for the Altair. These initial tools provided the foundation for a revolution in software because thousands of programs were written to run on this platform.
In fact, the 8080 became so popular that it was cloned. A company called Zilog was formed in late 1975 by several ex-Intel engineers who had worked on the 8080. In July 1976, it released the Z-80 processor, a vastly improved version of the 8080. The Z-80 was not pin compatible with the 8080, but it combined functions such as the memory interface and RAM refresh circuitry on a single chip, which allowed cheaper and simpler systems to be designed. The Z-80 also incorporated a superset of the 8080 instructions, meaning it could run all 8080 programs. It added new instructions and new internal registers as well, so software designed for the Z-80 would not necessarily run on the older 8080. The Z-80 initially ran at 2.5MHz (later versions ran at up to 10MHz) and contained 8,500 transistors. Like the 8080, the Z-80 could access 64KB of memory.
Radio Shack selected the Z-80 for the TRS-80 Model I, its first PC. The chip was also used in many other pioneering systems, including the Osborne and Kaypro machines. Other companies followed, and soon the Z-80 was the standard processor for systems running the CP/M operating system and the popular software of the day.
Intel released the 8085, its follow-up to the 8080, in March 1976. Even though it predated the Z-80 by several months, it never achieved the popularity of the Z-80 in personal computer systems. It was, however, popular as an embedded controller, finding use in scales and other computerized equipment. The 8085 ran at 5MHz and contained 6,500 transistors. It was built on a 3-micron process and incorporated an 8-bit data bus.
Along different architectural lines, MOS Technology introduced the 6502 in 1975. This chip was designed by several ex-Motorola engineers who had worked on Motorola's first processor, the 6800. The 6502 was an 8-bit processor like the 8080, but it sold for around $25, whereas the 8080 cost about $300 when it was introduced. The price appealed to Steve Wozniak, who placed the chip in his Apple I and Apple II designs. The chip was also used in systems by Commodore and other system manufacturers. The 6502 and its successors were also used in video game consoles, including the original Nintendo Entertainment System (NES). Motorola went on to create the 68000 series, which became the basis for the Apple Macintosh line of computers. Today those systems use the PowerPC chip, also made by Motorola and a successor to the 68000 series.
All these previous chips set the stage for the first PC chips. Intel introduced the 8086 in June 1978. The 8086 chip brought with it the original x86 instruction set that is still present in x86-compatible chips such as the Pentium III. A dramatic improvement over the previous chips, the 8086 was a full 16-bit design with 16-bit internal registers and a 16-bit data bus. This meant that it could work on 16-bit numbers and data internally and also transfer 16 bits at a time in and out of the chip. The 8086 contained 29,000 transistors and initially ran at up to 5MHz. The chip also used 20-bit addressing, meaning it could directly address up to 1MB of memory. Although the 8086 was not directly backward compatible with the 8080, its instructions and language were very similar, which allowed older programs to be ported over quickly. This later proved important in helping to jumpstart the PC software revolution with recycled CP/M (8080) software.
Although the 8086 was a great chip, it was expensive at the time and more importantly required an expensive 16-bit support chip and board design. To help bring costs down, in 1979, Intel released a crippled version of the 8086 called the 8088. The 8088 processor used the same internal core as the 8086, had the same 16-bit registers, and could address the same 1MB of memory, but the external data bus was reduced to 8 bits. This allowed support chips from the older 8-bit 8085 to be used, and far less expensive boards and systems could be made. It is for these reasons that IBM chose the crippled chip, the 8088, for the first PC.
This decision would affect history in several ways. The 8088 was fully software compatible with the 8086, so it could run 16-bit software. Also, because the instruction set was very similar to that of the previous 8085 and 8080, programs written for those older chips could be quickly and easily modified to run on it. This allowed a large library of programs to be quickly released for the IBM PC, thus helping it become a success. The overwhelming blockbuster success of the IBM PC left in its wake the legacy of requiring backward compatibility with it. In order to maintain the momentum, Intel has pretty much been forced to maintain backward compatibility with the 8088/8086 in most of the processors it has released since then.
In some ways the success of the PC, and the Intel architecture it contains, has limited the growth of the personal computer. In other ways, however, its success has caused a huge number of programs, peripherals, and accessories to be developed, and the PC to become a de facto standard in the industry. The original 8088 processor used in the first PC contained close to 30,000 transistors and ran at less than 5MHz. Intel recently introduced a version of the Pentium III Xeon with 2MB of on-die cache that has a whopping 140 million transistors, the largest count ever in a single processor chip. Both AMD and Intel are manufacturing processors that run at 1GHz (AMD has some bragging rights there; it beat Intel to 1GHz by two days), and both have demonstrated processors running in the 2GHz range. And the progress doesn't stop there: according to Moore's Law, processing speed and transistor counts double every 1.5 to 2 years.
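That doubling rule is easy to sanity-check with a little arithmetic. The sketch below starts from the roughly 30,000 transistors of the 8088 (1979) and projects the count forward 21 years at both ends of the 1.5 to 2 year doubling range; the 140 million transistors of the Pentium III Xeon quoted above falls between the two projections. The starting point, span, and rounding are assumptions made only for this illustration.

    /* Minimal illustration of the doubling rule quoted above: project the
       8088's ~30,000 transistors (1979) forward 21 years, doubling every
       1.5 or every 2 years. The span and starting point are assumptions
       for this sketch only. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double start_transistors = 30000.0;   /* approx. transistor count of the 8088 */
        double years = 21.0;                  /* 1979 through 2000 */
        double periods[] = { 1.5, 2.0 };      /* doubling period in years */

        for (int i = 0; i < 2; i++) {
            double doublings = years / periods[i];
            double projected = start_transistors * pow(2.0, doublings);
            printf("Doubling every %.1f years: roughly %.0f million transistors\n",
                   periods[i], projected / 1e6);
        }
        return 0;
    }

The two projections work out to roughly 43 million and 490 million transistors, which brackets the 140 million figure mentioned above.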