Server/Cloud Computing/Edge Forum: SOLD OUT

Monday, May 23 • Santa Clara, CA

8:30-9:30AM Onsite check-in for registered attendees
9:30-9:35AM

Welcome & Opening Remarks

Mian Quddus, JEDEC Chairman

9:35-9:55AM
 

Memory Market Trends and Technology

Presenter: YongHo Cho, Samsung

Ever-increasing memory demand from server, cloud, and edge computing, along with emerging applications such as AI/ML and AR/VR, is driving the overall memory ecosystem to keep innovating for greater memory bandwidth and capacity. In this presentation, we will discuss recent and future DRAM market trends and requirements across diverse application areas, and present the DRAM and related technologies needed to meet industry demands.

9:55-10:15AM
 

Perspectives on Data Center DDR5 Trends for the Next 3-5 Years

Presenter: Jorge Pont, Google

More details coming soon.

10:15-10:35AM
 

Datacenter System Transformation: Memory-Driven Change

Presenter: Jonathan Hinkle, Lenovo

More details coming soon.

10:35-10:55AM

Server & Enterprise Applications Driving Memory Requirements for the Next Five Years

Presenter: Stuart Berke, Dell

Enterprise applications and workloads across the datacenter, cloud/aaS, edge/telco, HPC, and AI/ML segments will drive new memory requirements in the coming years. Impacts and trends covered include capacity and bandwidth, RAS, metadata, persistence, operating temperature range, CXL-based memory, module form factors, memory tiering, cooling, and others.

10:55-11:05AM Break
11:05-11:25AM
 

Choosing the Right DRAM Memory for Custom Computing Chips: Bandwidth, Capacity and Power for DDR5, LPDDR5/5X, GDDR6 and HBM3

Presenter: Kos Gitchev, Cadence

DDR5 is a popular DRAM choice for new server/cloud and edge designs. It can provide very high memory capacity when mounted on DIMMs, attached via CXL™, or soldered directly to the PCB, making DDR5 the obvious choice for compute-heavy and big-data server designs. Meanwhile, there is rapid growth in specialized server machines for artificial intelligence/machine learning, cryptography, and media, as well as edge applications of all types that may benefit from memories optimized for different tradeoffs of bandwidth, power, capacity, and form factor. In this presentation we'll discuss where DDR5 is a strong choice, and where LPDDR5/5X, GDDR6, or HBM3 may provide a better tradeoff for particular types of server/cloud and edge designs.

11:25-11:45AM

Adaptable/Programmable System Architecture and Applications Driving DDR5 to Meet the Demands of the Next 3-5 Years

Presenter: Thomas To, AMD

The explosion of data traffic is driving exponential growth in data center and cloud computing workload demands. Data center processors are seeing a mixture of file sizes, diverse data types, and new algorithms with varying processing requirements. Adding to the challenge is workload evolution, with cloud-based ML/AI (machine learning and artificial intelligence) first and foremost. Rising processing speed and bandwidth demands increase the burden on the data center. Example workloads targeted for acceleration include data analytics, networking applications, and cybersecurity. Adaptable system accelerators, such as those implemented with FPGAs, have bridged the computational gap by providing heterogeneous acceleration to offload that burden. However, the new data path, such as in ML, is fundamentally different from the traditional CPU data path flow. This presentation will highlight the diverse applications of programmable systems and contrast their system memory (e.g., DDR5) requirements with those of traditional CPU systems. The discussion will stress the balance among system cost, bandwidth, and memory density requirements going forward.

11:45AM-12:15PM

Panel Discussion

12:15-1:30PM Lunch
1:30-1:50PM
 

Architecture and Management of Composable Memory

Presenter: Alex Branover, AMD

The talk focuses on the runtime management of composable memory, including a combination of hardware and software methods for memory tiering, sharing, and performance optimization across use cases. Additionally, the talk addresses cost optimization of memory configurations involving volatile and non-volatile memory types.

1:50-2:10PM

Memory Scaling Challenges in the Era of Cloud and Edge Computing

Presenter: Dimitrios Ziakas, Intel Corporation

Cloud infrastructure scale presents unique compute and memory challenges in sustaining performance growth and TCO within power and reliability boundaries. Edge computing, on the other hand, places different demands on the infrastructure. The fundamentals of memory scaling, power, and reliability persist across both.

2:10-2:30PM

Computing SOCs for Cloud, Server & Edge

Presenter: Nagi Aboulenein, Ampere Computing

More details coming soon.

2:30-2:40PM Break
2:40-3:00PM

What are the Directional Paths and Challenges of Memory in Future Server/Cloud Applications

Presenter: Keith Kim, SK Hynix

Demand for DRAM with lower cost, higher density, higher performance, and better RAS has never been greater, given the exponential growth of data as the roles of AI/ML, edge computing, and datacenters expand. DRAM suppliers are racing to meet these industry requirements but are running into a number of physical limitations that must be overcome. This presentation will review the key memory requirements in server/cloud applications, study the corresponding issues, and explore the innovative processes and new design approaches that will address each challenge, enabling future memories to deliver higher speed and density, lower power, and better RAS at competitive cost.

3:00-3:20PM

Evolution of Memory Devices for Edge Computing

Presenter: Dr. Sungwook (Sung) Ryu, Samsung

Computer system architecture has been changing due to new application requirements and the evolution of hardware. The mainframe was once the prevailing solution, but distributed workstations, cloud, and edge computing have since been adopted. I will go over what has been driving these architecture changes and discuss the detailed requirements of edge computing. Samsung is developing memory devices for edge computing, including Computational Storage, ZNS (Zoned Namespace) SSD, PIM (Processing In Memory), and MRAM. I will discuss how these devices can address edge computing's power, latency, and form factor challenges.

3:20-3:40PM

DDR5 DRAM Future Trends in the Data Center

Presenter: Frank Ross, Micron Technology 

More details coming soon.

3:40-4:10PM Panel Discussion

Program, topics and speakers subject to change without notice.