The role of DRAM in computing systems
What is DRAM?
DRAM's role is to store and supply the data required by computing processors. All modern processors follow the von Neumann architecture: the processor fetches instructions and operands from memory to execute a program. For example, to perform a simple calculation such as "2 + 3", it needs an instruction (addition) and operands (2 and 3). However powerful a processor is, it cannot reach its potential performance if the memory system, including DRAM, cannot deliver data fast enough.
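As a rough illustration of that fetch-and-execute flow, the sketch below models a tiny von Neumann machine in Python: program and data share one memory array, and the "2 + 3" example becomes a short sequence of load, add, and store steps. The instruction names and memory layout are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a von Neumann-style fetch-execute loop (hypothetical
# instruction set): program and data share the same memory, and the
# processor fetches each instruction and operand before acting on it.

memory = [
    ("LOAD_A", 6),   # load operand at address 6 into register A
    ("LOAD_B", 7),   # load operand at address 7 into register B
    ("ADD", None),   # A = A + B
    ("STORE", 8),    # write the result back to memory
    ("HALT", None),
    None,            # unused slot
    2,               # operand: 2
    3,               # operand: 3
    None,            # result slot
]

pc, reg_a, reg_b = 0, 0, 0
while True:
    opcode, operand = memory[pc]      # fetch instruction from memory
    pc += 1
    if opcode == "LOAD_A":
        reg_a = memory[operand]       # fetch operand from memory
    elif opcode == "LOAD_B":
        reg_b = memory[operand]
    elif opcode == "ADD":
        reg_a = reg_a + reg_b
    elif opcode == "STORE":
        memory[operand] = reg_a
    elif opcode == "HALT":
        break

print(memory[8])  # prints 5
```

Every step of this loop touches memory, which is exactly why a slow memory system starves even a fast processor.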
The role of DRAM
In the memory system, DRAM compensates for the limited capacity of SRAM and the slow speed of SSDs (and HDDs). The figure below shows the memory hierarchy: from the top, registers, SRAM caches (L1 to L3), DRAM, and SSD (HDD). Memory from registers down to DRAM is volatile, meaning data is lost when the power is off, so volatile memory is essentially temporary storage that works only while power is supplied. SSDs and HDDs are nonvolatile. Because they are slow in both latency and data transfer rate, a processor that relied on them alone would spend most of its time waiting for data. Frequently used data is therefore kept in volatile memory (registers, caches, and DRAM), so the processor can fetch it quickly. Upper levels of the hierarchy are fast but expensive, while lower levels have the opposite characteristics.
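To give a feel for why the hierarchy matters, the sketch below compares rough, order-of-magnitude latencies for each level. The exact numbers vary widely between systems and are only illustrative assumptions here, not measurements.

```python
# Rough, order-of-magnitude access latencies per hierarchy level
# (illustrative assumptions, not measurements of any specific system).
LATENCY_NS = {
    "register": 0.3,        # roughly one CPU cycle
    "L1 cache": 1,
    "L2 cache": 4,
    "L3 cache": 15,
    "DRAM": 80,
    "NVMe SSD": 100_000,    # ~100 microseconds
    "HDD": 5_000_000,       # ~5 milliseconds (seek + rotation)
}

dram = LATENCY_NS["DRAM"]
for level, ns in LATENCY_NS.items():
    print(f"{level:9s} {ns:>12,.1f} ns  ({ns / dram:10.2f}x DRAM)")
```

Even with these rough numbers, an SSD access costs on the order of a thousand DRAM accesses, which is why keeping hot data in the upper levels pays off.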
Comparing the structures of SRAM and DRAM explains why DRAM has to supplement SRAM. An ideal computing system might build its entire memory from SRAM cache to maximize performance, but in reality there are limits on die size and cost. For instance, even a high-performance server CPU carries only tens of megabytes of cache. To store one bit, SRAM generally requires 4 to 6 transistors, whereas DRAM needs 1 transistor and 1 capacitor, so SRAM takes a larger area on chip and costs more than DRAM. (By comparison, SSD NAND flash stores 2 to 4 bits per transistor, and research continues on packing even more bits into one cell.) Since most programs today need hundreds of megabytes to gigabytes, the cache cannot hold all of a program's data. By placing DRAM between the cache and the SSD (HDD), a computing system dramatically reduces accesses to the SSD (HDD).
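A quick back-of-envelope calculation makes the area argument concrete. Using the per-bit figures above (6 transistors per SRAM bit, 1 transistor plus 1 capacitor per DRAM bit), the sketch below counts the devices needed for an assumed 32 MB cache, an assumed 16 GB DRAM module, and 1 GB built out of SRAM; the capacities are illustrative assumptions only.

```python
# Back-of-envelope device counts per capacity, using the per-bit figures
# from the text: 6 transistors per SRAM bit, 1 transistor + 1 capacitor
# per DRAM bit. Capacities below are illustrative assumptions.

BITS_PER_BYTE = 8

def sram_transistors(num_bytes):
    return num_bytes * BITS_PER_BYTE * 6      # 6T SRAM cell

def dram_devices(num_bytes):
    return num_bytes * BITS_PER_BYTE * 2      # 1 transistor + 1 capacitor

cache_size = 32 * 2**20   # assumed 32 MB L3 cache
dram_size = 16 * 2**30    # assumed 16 GB DRAM module

print(f"32 MB SRAM cache : {sram_transistors(cache_size):.2e} transistors")
print(f"16 GB DRAM       : {dram_devices(dram_size):.2e} devices")
print(f"1 GB as SRAM     : {sram_transistors(2**30):.2e} transistors")
```

Building gigabytes of SRAM would take tens of billions of transistors for the memory alone, which is why DRAM fills the gap between cache and storage.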
Historically, the advancement of processors has required corresponding improvements in the memory system, including DRAM. The clock speed of Intel CPUs rose steadily, meaning the amount of data a CPU could process in a given time kept growing, so DRAM had to keep increasing its bandwidth (throughput) to keep pace, which it has done through successive technology generations. Eventually, due to problems such as heat dissipation, CPUs hit a limit on clock speed. To keep improving performance, Intel increased the number of cores per CPU, allowing multiple instructions to execute in parallel. With this change, it became critical for computing systems to increase not only the bandwidth but also the density of DRAM.
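Peak DRAM bandwidth follows directly from transfer rate times bus width. The sketch below uses standard DDR4-3200 figures for a 64-bit channel, then adds an assumed, purely illustrative per-core traffic number to show why more cores push the memory system harder.

```python
# Peak theoretical DRAM bandwidth = transfer rate x bus width.
# DDR4-3200 figures are standard; the per-core demand below is an
# illustrative assumption.

transfers_per_sec = 3200e6   # DDR4-3200: 3200 MT/s
bus_width_bytes = 8          # 64-bit channel
peak_bw = transfers_per_sec * bus_width_bytes
print(f"Peak bandwidth per channel: {peak_bw / 1e9:.1f} GB/s")  # 25.6 GB/s

# With more cores issuing memory requests in parallel, aggregate demand
# grows roughly with core count (a simplification).
cores = 32
demand_per_core = 2e9        # assumed 2 GB/s of memory traffic per core
channels_needed = cores * demand_per_core / peak_bw
print(f"Channels needed for {cores} cores: {channels_needed:.1f}")
```

The same logic explains the pressure for higher per-channel data rates and more channels (and higher-density modules behind them) as core counts grow.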
Demand drivers of DRAM
There have been successive flagship applications driving DRAM demand. First, the PC created huge demand: the alliance of Intel's CPUs and Microsoft's Windows brought a PC to every home and office, though PC shipments peaked in 2011. The smartphone then opened another large market; since the first iPhone was released in 2007, smartphone shipments grew steadily until 2017. The server became the next driver. After accumulating vast amounts of data, data scientists found ways to extract value from it with big data technology, and AI also benefits from the accumulated data. To support those workloads in servers, DRAM demand skyrocketed, and server applications are expected to keep driving DRAM demand for the next several years. Who will be the next hero for DRAM?


