Server/Cloud Computing/Edge Forum Korea

Tuesday, May 16 • Seoul

Morning Session

Program Moderator: Youngsu Kwon, ETRI

Composable Memory Systems at Meta

Keynote Presenter: Manoj Wadekar, Meta

AI and other applications have been driving dramatic new use cases in the data center, demanding major changes to the underlying hardware infrastructure. The last decade has seen significant changes to GPUs (accelerators), CPUs, and networks. As a result, we are seeing dramatic growth in memory-bound workloads in the data center, and there is a need to rethink memory solutions. AI/ML, cache, database, and data warehouse servers are driving the need for higher memory capacity and bandwidth.

The current memory hierarchy and solutions are limited to CPU-attached memory. However, CXL now opens up at least two new potential “Composable Memory Systems” for next-generation data center solutions. First, we have the potential to dramatically increase memory capacity in some platforms through memory expansion. Second, we can now build TCO-optimized memory tiers. This requires the industry to come together to develop HW/SW co-designed solutions. Meta will share its plans to enable Composable Memory Systems that will drive its future AI/ML and TCO-optimized memory servers.
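The TCO appeal of a tiered design can be seen with simple arithmetic. Below is a minimal sketch, with entirely made-up capacities and prices (these are illustrative assumptions, not Meta figures), of how pairing CPU-attached DRAM with a cheaper CXL-attached expansion tier lowers the blended cost per GiB while raising total capacity:

```python
# Hypothetical two-tier memory server: CPU-attached DDR5 plus CXL-attached
# expansion memory. All capacities and $/GiB figures are assumptions for
# illustration only.

def blended_memory(tiers):
    """Each tier is (capacity_gib, dollars_per_gib).
    Returns (total capacity in GiB, blended $/GiB)."""
    total_gib = sum(cap for cap, _ in tiers)
    total_cost = sum(cap * price for cap, price in tiers)
    return total_gib, total_cost / total_gib

tiers = [
    (512, 4.0),   # tier 0: CPU-attached DDR5 (assumed $4/GiB)
    (1024, 2.0),  # tier 1: CXL-attached expansion (assumed $2/GiB)
]

capacity, cost_per_gib = blended_memory(tiers)
print(capacity, round(cost_per_gib, 2))  # 1536 GiB at ~$2.67/GiB blended
```

The point of the sketch is only the shape of the tradeoff: the expansion tier triples capacity relative to tier 0 alone while pulling the blended $/GiB well below the CPU-attached price.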


Future Memory Technology Needs for Hyperscale Cloud Servers

Presenter: Nagi Aboulenein, Ampere

A look at future directions for memory technology requirements, as seen through the lens of hyperscale cloud server SoCs.


Adaptable and Programmable System Architecture and Applications driving DDR5 to Meet the Demands of the Next 5 Years

Presenter: Thomas To, AMD

The explosion of data traffic is making data center/cloud computing workload demands grow exponentially. Data center processors are seeing a mixture of file sizes, diversified data types, and new algorithms with varying processing requirements. Adding to the challenge is workload evolution, with cloud-based ML/AI (machine learning and artificial intelligence) first and foremost. The processing speed and bandwidth demands increase the data center burden. Example workloads targeted for acceleration are data analytics, networking applications, and cybersecurity. Adaptable system accelerators, such as those implemented with FPGAs, have bridged the computational gap by providing heterogeneous acceleration to offload this burden. However, new data paths, such as those in ML, are fundamentally different from the traditional CPU data path flow. This presentation will highlight the diverse applications of programmable systems and contrast their system memory (e.g., DDR5) requirements with traditional CPU system requirements. The discussion will stress the balance among system cost, bandwidth, and memory density requirements going forward.


Data-Centric Computing

Keynote Presenter: Dr. Sung Ryu, Samsung

The "memory wall" refers to the challenge in computer architecture of providing sufficient memory bandwidth to keep up with the processing power of a CPU. The problem arises because CPU speed is increasing much faster than memory speed, resulting in performance degradation. Solutions to the memory wall have been designed for traditional von Neumann architectures and memory hierarchies. However, these existing architectures are not well suited to handling big data and large machine learning models, because the working set is too big to fit in the existing memory hierarchy.

To address the bandwidth wall, a new approach called Data-Centric computing has emerged as an alternative to the traditional compute-centric paradigm. This approach determines the optimal location for computation based on the data's location and the computation's complexity, instead of merely transferring all data to the CPU. Data-Centric computing technologies include Computational Storage, Processing-In-Memory (PIM), and Processing-Near-Memory (PNM), which show promising results for large data workloads.
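The placement decision described above can be sketched as a cost comparison: run the computation wherever data movement time plus execution time is lowest. The link speeds and execution times below are illustrative assumptions, not measurements of any real PIM/PNM device:

```python
# A minimal sketch of data-centric placement: choose the location where
# (time to move the data there) + (time to execute there) is smallest.
# All link rates and execution times are made-up illustrative numbers.

def placement_cost(data_bytes, link_gbps, exec_seconds):
    """Total cost in seconds: transfer time over the link + execution time."""
    transfer_s = data_bytes * 8 / (link_gbps * 1e9)
    return transfer_s + exec_seconds

def choose_location(data_bytes, options):
    """options: {name: (link_gbps, exec_seconds)}; return the cheapest name."""
    return min(options, key=lambda name: placement_cost(data_bytes, *options[name]))

# A large scan: the CPU is fast but far from the data; near-memory compute
# is slower per operation but barely moves the data (modeled as a fast link).
options = {
    "cpu": (50, 0.1),     # 50 Gb/s path to the host, 0.1 s to execute
    "pim": (5000, 0.5),   # near-memory: negligible movement, slower execution
}
print(choose_location(10 * 2**30, options))  # large input -> "pim"
print(choose_location(2**20, options))       # small input -> "cpu"
```

For the 10 GiB input, moving the data to the CPU dominates (about 1.7 s of transfer), so computing near the memory wins; for a 1 MiB input, the transfer is negligible and the faster CPU wins. That crossover is the essence of the data-centric argument.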


DDR5 In System Validation

Presenter: Barbara Aichinger, FuturePlus

DDR5 is now being introduced in servers, desktops, and laptops. It has two high-speed channels, with double data rate and single data rate signals. UDIMM, RDIMM, and SODIMM modules are all pinned out differently, and these modules carry PMICs, SPD hubs, temperature sensors (TSs), and RCDs. Certainly, DDR5 is more complicated than DDR4! This presentation will review the lab validation problems facing engineers currently working on DDR5, how those engineers are solving them, and what challenges remain.


DDR5 Interface Test and Validation Methodology

Presenter: Randy White, Keysight

There’s the standard, and then there’s how to measure it. Usually the specification drives measurement procedures but at DDR5 speeds development must go hand-in-hand to ensure that what works in theory will not only work in practice, but can be confirmed on the lab bench and in production. This session focuses on the DDR5 measurement methodologies that have been driven by the specification and the practical considerations that have influenced the DDR5 specification. Probing and test fixturing, use of new DFT features in the DDR5 specification itself, measurement algorithms and automation, and specific examples are presented that enable characterization and troubleshooting of DDR5 memory and support devices, DIMMs, as well as entire systems, both server and embedded.

12:00-1:00 PM Lunch Break

Afternoon Session


JEDEC Welcome
Mian Quddus, JEDEC Board of Directors


Memory Market and Industry Technology Trend

Keynote Presenter: Taek Woon Kim, Samsung

A comprehensive presentation covering market analysis for compute memory, along with an introduction to relevant, up-to-date memory technologies.


Choosing the right DRAM Memory for Custom Computing Chips: Bandwidth, Capacity and Power for DDR5, LPDDR5/5X, GDDR6 and HBM3

Presenter: Marc Greenberg, Cadence

DDR5 is a popular DRAM memory for new server/cloud and edge designs. DDR5 can provide very high memory capacity when mounted on DIMMs, attached via CXL™, or attached directly to the PCB, making it the obvious choice for compute-heavy and big-data server designs. Meanwhile, there is rapid growth in specialized server machines for artificial intelligence / machine learning, cryptography, and media, as well as edge applications of all types that may benefit from different memories optimized for different tradeoffs of bandwidth, power, capacity, and form factor. In this presentation we’ll discuss where DDR5 is a strong choice, and where LPDDR5/5X, GDDR6, or HBM3 may provide a better tradeoff for particular types of server/cloud and edge designs.
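The bandwidth side of these tradeoffs follows from one formula: peak bandwidth (GB/s) = data rate (GT/s) × interface width (bits) ÷ 8. The sketch below applies it to representative configurations of the four memory types named in the talk (one DIMM, channel, device, or stack; real designs vary):

```python
# Back-of-envelope peak bandwidth for one representative unit of each
# memory type: bandwidth (GB/s) = data rate (GT/s) * width (bits) / 8.

def peak_gb_per_s(data_rate_gt, width_bits):
    """Peak bandwidth in GB/s for a given transfer rate and interface width."""
    return data_rate_gt * width_bits / 8

examples = {
    "DDR5-6400 DIMM (64-bit)":       peak_gb_per_s(6.4, 64),     # 51.2 GB/s
    "LPDDR5X-8533 x16 channel":      peak_gb_per_s(8.533, 16),   # ~17.1 GB/s
    "GDDR6 16 Gbps x32 device":      peak_gb_per_s(16.0, 32),    # 64 GB/s
    "HBM3 6.4 Gbps 1024-bit stack":  peak_gb_per_s(6.4, 1024),   # 819.2 GB/s
}

for name, bw in examples.items():
    print(f"{name}: {bw:.1f} GB/s")
```

Capacity, power, and form factor run in the other direction (DIMMs scale capacity cheaply; HBM3 stacks deliver extreme bandwidth at higher cost and tight physical integration), which is exactly the tradeoff space the presentation explores.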


DDR5, What to Innovate: DDR5 SDRAM, Module and Supporting Chips as a Whole

Presenter: DY Lee, ONE Semiconductor

This presentation explains what the DDR5 generation has improved, from the SDRAM to the module to the supporting chips as a whole. The memory bottleneck becomes more critical as time goes on, and the memory industry continues to respond. The presentation covers the key innovations made in the DDR5 generation.


Intel Server New Memory Feature to Improve DDR5 Reliability and Performance

Presenter: Taeyun Kim, Intel

In this session, you will learn how Intel takes serious effort to improve DDR5 quality, through validation as well as by utilizing DDR5 memory features to address customers’ memory quality and reliability concerns. You will also see some of the innovative approaches used to improve server system performance.



Memory Offerings for Data Centers: Now and Beyond

Keynote Presenter: Eugene Hongbae Kim, SK hynix

Data centers are now the core of the new industrial revolution, and the role that memory solutions play has become ever more important as the amount of data, fueling many different services including AI such as ChatGPT, increases at an explosive rate. This presentation explains what is taking place and what can be anticipated in memory offerings for data centers.


Memory, Test and Measurement and the Impacts of Changes in the Data Center

Presenter: Brig Asay, Keysight

Perhaps no technology in the data center will change more over the next few years than memory. As servers move toward further disaggregation, memory must be faster, with lower latency. Faster memory means even bigger test and measurement challenges. Previously difficult tasks, such as probing and decoding, only get harder over the next several years. This discussion will focus on those challenges and some of the best ways to overcome them.


DDR5 RDIMM 6400Mbps Signal Integrity Analysis

Presenter: Brett Murdock, Synopsys

It is well known throughout the memory industry that the performance of a DDR5-based system is heavily impacted by the system configuration. Many designers are chasing the DDR5 speed target of 6400 Mbps but want as much DRAM capacity as possible, which translates into more loading/ranks. This presentation will show a signal integrity analysis of a dual-rank, single-socket DDR5 RDIMM-based system targeting 6400 Mbps. The presentation will discuss timing budgets and highlight the data eye improvement seen when enabling receiver decision feedback equalization (DFE).
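To see why timing budgets get so tight at this speed, note that each data bit occupies one unit interval (UI) of 1 / 6.4 GT/s = 156.25 ps, and transmitter, channel, and receiver impairments must all fit inside that window. A quick calculation:

```python
# One unit interval (UI) is the time a single data bit occupies on the bus.
# At DDR5-6400 (6400 MT/s) that window is 156.25 ps, versus ~208 ps at
# DDR5-4800 -- every picosecond of jitter or skew costs a larger share.

def unit_interval_ps(transfer_rate_mtps):
    """One UI in picoseconds at the given transfer rate in MT/s."""
    return 1e12 / (transfer_rate_mtps * 1e6)

print(unit_interval_ps(6400))  # 156.25
print(unit_interval_ps(4800))  # ~208.33
```

Shrinking the UI while adding ranks (more loading, more reflections) is what closes the data eye, and why receiver-side equalization such as DFE becomes worth enabling.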


Closing Remarks
Mian Quddus, JEDEC Board of Directors

Program, topics and speakers subject to change without notice.