IBM Details Memory Advance for Chips

IBM has devised a way to triple the amount of memory stored on computer chips and double the performance of data-hungry processors by replacing a problematic type of memory with a variety that uses much less space on the slice of silicon.

International Business Machines Corp. said Wednesday that its new memory technology will help unclog crippling bottlenecks that build up as increasingly powerful microprocessors attempt to retrieve data from a separate memory chip faster than it can be delivered.

"We kill ourselves in the semiconductor industry to try to get a little bit more performance in each generation. What we're doing here is trying to merge two technologies ... on the same chip to get significantly more memory," said Lisa Su, vice president for semiconductor research and development at IBM.

Armonk, N.Y.-based IBM said its solution entails swapping out most of the static random access memory, or SRAM, used to store information directly on computer chips and integrating onto the chip another kind of memory, known as dynamic random access memory, or DRAM.

SRAM is a type of memory that's fast and easy to manufacture but takes up a lot of valuable real estate on the chips. DRAM, the most common type of memory used in personal computers, has typically been stored on a separate chip and has previously been viewed as too slow to be integrated directly onto the microprocessor.

IBM said it has sped up the DRAM to the point where it's nearly as fast as SRAM. The result, a type of memory known as embedded DRAM, or eDRAM, helps boost the performance of chips with multiple core calculating engines and is particularly suited to moving graphics in gaming and other multimedia applications. DRAM will also continue to be used off the chip.

"A lot of people have been trying to do this," Su said. "As we look into the processor roadmap, this is one of the most difficult things to solve. We were basically memory-limited in the high-power processors, so this has been very significant for us."

IBM was scheduled to present details of its research Wednesday at the International Solid State Circuits Conference in San Francisco. The company said the technology will be included in its server chips starting in 2008 and will expand to other products.

Independent semiconductor experts said the technology will lead to faster processors that can fetch data more quickly and with far fewer stalls.

"That's what's really important about this -- this isn't just some R&D exercise about some memory that will be used far off in the future," said David Lammers, director of WeSRCH.com, a social networking Web site for semiconductor enthusiasts and part of VLSI Research Inc.

Semiconductor companies are scrambling to avoid any slowdowns in processing speed as they invent ways to cram more transistors onto the same slice of silicon while boosting performance and consuming as little energy as possible.

Earlier this week, Intel Corp. said it has developed a research chip capable of performing calculations as quickly as a supercomputer while only consuming as much energy as a light bulb.

And last month, Intel and IBM separately announced they had figured out how to replace problematic but vital materials in transistors that had begun leaking too much electric current as the circuitry on computer chips gets smaller.

All three announcements help the semiconductor industry maintain the pace of Moore's Law, the 1965 prediction by Intel co-founder Gordon Moore that the number of transistors on a chip should double about every two years.
