The Race for a New Game Machine, by David Shippy
When I signed on with IBM, my buddies laughed at me and said, “Why do you want to go there? They won’t give a rookie like you any good design work.”
With confidence, I replied, “They will if I’m a good designer.”
How wrong my buddies were. IBM happened to be in an expansion stage just then and was willing to risk putting new hires into key roles. This product never saw the light of day, but it gave me valuable lessons in computer design.
I also learned to make homebrew and to snow ski while in Endicott. I sometimes think those skills have served me almost as well as my computer design experience. My cube mate in Endicott was Brice Feal. Brice was a zany bachelor with a wide variety of interests. He invited me over to his house after work and took me to his cellar, where several hundred bottles of beer filled the shelves. He blew the dust off one of them, cracked it open, and filled two frosty mugs.
“Give it a try,” he said with a smile, and we clinked our mugs together in a silent toast.
It was the smoothest, best beer I’d ever tasted. “Wow, Brice!” I exclaimed. “Where can I buy this stuff?”
He replied, “You can’t get this in stores. I make it.”
His particular brand of beer was really a barley wine with a smooth flavor and high alcohol content. I was hooked, so Brice taught me all of the tricks and soon I was brewing my own beer.
Brice also introduced me to another passion—snow skiing. There was a local ski resort called Greek Peak. Many Friday afternoons we skipped out of work early and hit the slopes. The ski resort lit the runs with spotlights, so we skied until very late at night.
RISC computer design, homebrewing, and skiing. Life wasn't too bad in Endicott for a young engineer. However, I heard about a new development project at the IBM site in Austin, Texas, whose central processor was RISC rather than CISC. From everything I knew, this seemed like the way to go. Having been exposed to both methods, I could see that the RISC approach would deliver simpler, higher-performance hardware. If I stayed in Endicott, even the best computer design assignments would be on a messy CISC design. I wanted the opportunity to create streamlined, fast microprocessors using RISC techniques.
So in 1989, I went to the office of my third-line manager, Bobby Dunbar, and told him I was going to Austin to work on a RISC microprocessor. Bobby was just a good ol’ boy, content to ride the success of the S/370 computers he’d come to know and love. He propped his boots on the desk and laughed at me. “Nothing will ever come of that RISC architecture,” he said.
His predictions did not prove true. Today, the highest volume chips produced at IBM and at Freescale (Motorola’s spin-off) carry the PowerPC RISC architecture. PowerPC is the architecture of choice at IBM for everything from game chips to supercomputer server chips. The PowerPC and Intel’s X86 are the two primary architectures that stood the test of time. I knew a good thing when I saw it.
Intel stayed with its proven architecture but adopted virtually all the RISC techniques developed by our Somerset team. Theirs was a brute-force approach to microprocessor design: they applied a team of thousands of engineers to streamline the instructions, then optimized and tweaked those X86 microprocessors until they could offer performance as good as or better than ours, at higher frequencies.
Intel capitalized on parallel processing techniques invented by IBM and other companies, such as superscalar and out-of-order processing. A superscalar design gains efficiency by having multiple execution units that operate in parallel on groups of instructions. It was like the difference between having multiple checkout lines at the grocery store versus a single checkout line. An out-of-order design gains efficiency by scheduling instructions as soon as they are ready to execute. This meant that when one long instruction stalled, other, shorter instructions could be routed around it. Idle time is wasted time.
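The out-of-order idea can be sketched in a few lines of Python. This is a toy model, not any real scheduler: it assumes unlimited execution units and fully independent instructions, so its only point is to show a long-latency stall no longer blocking the short instructions behind it.

```python
# Toy comparison of in-order vs. out-of-order execution time.
# Instructions are (name, latency-in-cycles) pairs; all are assumed
# independent, and execution units are assumed unlimited.

def in_order_time(instrs):
    """Cycles if each instruction must finish before the next starts."""
    return sum(latency for _, latency in instrs)

def out_of_order_time(instrs):
    """Cycles if ready instructions issue around a stalled one;
    with no dependences, the slowest instruction dominates."""
    return max(latency for _, latency in instrs)

program = [("load", 10), ("add", 1), ("sub", 1), ("mul", 3)]
print(in_order_time(program))      # 15 cycles: everything waits on the load
print(out_of_order_time(program))  # 10 cycles: short ops hide under the load
```

A real out-of-order core is vastly more complicated (register renaming, issue queues, dependence tracking), but the payoff is exactly this overlap.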
Intel remained the only game in town when it came to the PC, in spite of our best efforts at Somerset. Of course, it didn’t help that IBM experienced various software failures around the same time.
This early defeat at the hands of Intel was in the back of my mind as I walked beside the snazzy glass wall that separated cube-city from the STI war room, the place where Kahle held his daily architecture meetings. Another engineer beat me into the room and snagged the last available Ethernet port for his laptop. There were never enough outlets for everyone at the table, and the security team had recently disabled the wireless capabilities in the building as a defensive security measure. I sat near the back of the room and opened my laptop, intending to work offline while we waited for the meeting to start. But a swirl of thoughts about Intel grabbed my attention. I knew Intel wasn’t sitting on its hands; their people were working, just as we were, to push the limits of technology. They had thousands of engineers working on their next chip, while we had a few dozen. I knew several former IBMers, smart guys, who moved to Intel, and I knew they were inventing cool new stuff for our enemy. How could we compete?
Kahle waited for a quorum to gather, then stood and explained Hofstee’s mission and our job. He said, “You have to be paranoid when it comes to beating Intel. Basically, we need to attack with multiple weapons, because just having a higher frequency will not be enough to make Intel’s customers switch. This calls for an extraordinary new design offering an order-of-magnitude improvement in performance.”
It wasn’t quite a battle cry, but it generated the right discussions. We batted around various strategies, scribbled ideas on the board, and argued about competing technologies, all the while lacing our language with words like frequency, throughput, process, and performance.

There was also a new term in the industry to describe the raw speed of a processor: “fanout-of-four” (FO4). It described the number of gate delays in each pipeline stage or, more specifically, the number of simple inverter gates connected in series, each driving a fanout, or load, of four gates. A smaller FO4 delay per stage translates to a faster frequency. The term gave us a way to describe and compare processor speeds across multiple manufacturing technologies: when a processor design migrated from, say, a 90 nanometer manufacturing process to the newer generation 65 nanometer technology, the FO4 count per stage would stay the same while the frequency in gigahertz could increase. The Power4 processor I worked on had a 24 FO4 pipeline stage, which translated into a 1.1 gigahertz clock speed. That was the fastest in IBM. Intel held the current speed record in 2001 with an 18 FO4 design, which translated into 1.5 gigahertz.
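The FO4 arithmetic is easy to check. Working backward from the two data points above, both imply a per-FO4 delay of roughly 37 picoseconds in that era's process technology; that figure is inferred here for illustration, not quoted from the text.

```python
# Sketch of the FO4-to-frequency arithmetic. The per-FO4 delay below is
# an assumption inferred from the book's two data points (24 FO4 -> 1.1 GHz,
# 18 FO4 -> 1.5 GHz), not a published process parameter.

def frequency_ghz(fo4_per_stage, fo4_delay_ps):
    """Clock frequency implied by a pipeline stage that is
    `fo4_per_stage` inverter delays deep, each `fo4_delay_ps` ps."""
    cycle_ps = fo4_per_stage * fo4_delay_ps
    return 1000.0 / cycle_ps  # 1000 ps per ns, so GHz = 1 / (cycle in ns)

FO4_DELAY_PS = 37.5  # assumed per-FO4 delay consistent with both chips

print(round(frequency_ghz(24, FO4_DELAY_PS), 2))  # Power4: ~1.1 GHz
print(round(frequency_ghz(18, FO4_DELAY_PS), 2))  # Intel:  ~1.5 GHz
```

The same arithmetic shows why a shallower pipeline stage (fewer FO4 delays) buys frequency directly, at the cost of more pipeline stages overall.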
The pressure was on.
The need for low power really tied our hands. For the compact, cost-conscious PlayStation 3, achieving the frequency of a PC within the reduced power budget of a game console would be a huge challenge. Game consoles are smaller than PCs and have less capacity to keep their chips cool, and games are compute-intensive workloads that tend to max out the processor. Higher power on the PlayStation 3 would demand more costly cooling components like fans and heat sinks, and the costs of those components were very hard to reduce over time. Kahle explained that Kutaragi’s aggressive cost-cutting strategy had proved to be a huge money maker for Sony on previous products, so of course that would be the plan for this product too.
The Sony architect, Takeshi Yamazaki, said in broken English, “Seventy-five watts is the highest power the console can physically tolerate.” Heads nodded in agreement, all of them Sony engineers.
I was skeptical. Most of the server chips