Server/Cloud Computing/Edge Forum

Wednesday, May 15 • San Jose, CA

Program Moderator: Charles Furnweger, JEDEC
8:30-8:35AM

JEDEC Welcome

Mian Quddus, JEDEC Board of Directors

Morning Session

8:35-8:55AM
Keynote
 

DRAM considerations in Google's At-Scale Deployments

Keynote Presenter: Jorge Pont, Google

A comprehensive presentation covering market analysis for compute memory, along with an introduction to relevant, up-to-date memory technologies.

8:55-9:15AM

 

Memory Trends and Technology Considerations for Hyperscale Data Centers

Presenter: Todd Farrell, Microsoft

A comprehensive presentation on the compute memory used in today’s data centers, and the future challenges and considerations that impact both compute and AI memory subsystems.

9:15-9:35AM

The Evolution of Hyperscale Data Centers: From CPU-Centric to GPU-Accelerated AI Applications

Presenter: Manoj Wadekar, Meta

In recent years, hyperscale data centers have been optimized for scale-out stateless applications and zettabyte storage, with a focus on CPU-centric platforms. However, as the infrastructure shifts towards next-generation AI applications, the center of gravity is moving towards GPU/accelerators. This transition from "millions of small stateless applications" to "large AI applications running across clusters of GPUs" is pushing the limits of accelerators, network, memory, topologies, rack power, and other components. To keep up with this dramatic change, innovation is necessary to ensure that hyperscale data centers can continue to support the growing demands of AI applications. This keynote speech will explore the impact of this evolution on Memory use cases and highlight the key areas where innovation is needed to enable the future of hyperscale data centers.
The current memory hierarchy and solutions are limited to CPU-attached memory. However, CXL now opens up at least two new potential “Composable Memory Systems” in the next generation data center solutions. First, we have the potential to dramatically increase memory capacities in some platforms using memory expansion. Second, we can now build TCO-optimized memory tiers. This requires the industry to come together to develop HW/SW co-designed solutions. Meta will share its plans to enable Composable Memory Systems which are driving its future AI/ML and TCO-optimized memory servers.
 

9:35-9:55AM
Keynote

 

Intel Server Memory Trends

Keynote Presenter: Dimitrios Ziakas, Intel

Cloud infrastructure scale represents unique compute and memory challenges to sustain performance growth and TCO within power and reliability boundaries. Edge computing on the other hand places different demands on the infrastructure. The fundamentals of memory scaling, power and reliability persist across both.

9:55-10:05AM Break

10:05-10:25AM

Diversifying Memory Solutions for AI Applications

Presenter: Sagmin Lee, Samsung

I will be covering five categories in this presentation: DDR5, DDR6, LP5, standard HBM, and cHBM. The main goal is to effectively communicate our company’s message for each category, highlighting our strengths, synergies, and promotional content. I plan to prepare materials for each segment based on the key information we want to convey.

10:25-10:45AM

Methods for Improving DRAM Memory Performance

Presenter: Brett Murdock, Synopsys

DRAM memory performance can be characterized by average read latency as a function of throughput. Methods of improving DRAM performance will be shared as well as the impact of application workload characteristics.

10:45-11:05AM

Bringing CXL to the Motherboard

Presenter: Bill Gervasi, Wolley

CXL integration into big iron is well underway, with a focus on large, high-power form factors such as E3.S; however, less is said about bringing CXL to embedded environments like motherboards. Proposed here is a smaller implementation of CXL in an M.2-style module with a PCIe x8 interface. Called FleX, this socketed module enables new classes of innovation for next-generation systems by exploiting CXL’s generalization of interfaces to memory, storage, and accelerators.

11:05-11:25AM

Utilizing Chiplets for Hybrid Memory Expansion

Presenter: Kevin Donnelly, Eliyan

AI and other applications have been driving the need for higher memory capacity and bandwidth. Chiplets can enable system-level improvements using standard and customized memories.

11:25-11:55AM Panel Discussion

Panel Moderator: Mario Martinez, Netlist

12:00-1:00PM Lunch Break

Afternoon Session

1:00-1:20PM
Keynote
 

Embracing the AI Boom: SK hynix’s Leadership in Scalable Memory Solutions

Keynote Presenter: Hansuk Ko, SK hynix

In the era of AI expansion, SK hynix stands at the forefront of innovation, providing cutting-edge memory solutions tailored for AI applications. With a focus on scalability, performance, and energy efficiency, SK hynix delivers HBM, 3DS RDIMM, MCR DIMM, and CXL products, enabling extended expansion of memory bandwidth and capacity while maintaining low-power characteristics. As AI datasets continue to grow exponentially, the demand for scalable memory solutions with competitive power consumption becomes increasingly critical for efficient and reliable data processing at the system level and for meeting environmental requirements. SK hynix’s commitment to addressing these needs ensures optimal performance, efficiency, and sustainability in this rapidly advancing technology landscape.

1:20-1:40PM

Samsung Memory - Personalizing Your Edge AI Experiences

Presenter: Jim Elliott & Ted Moon, Samsung

Coming Soon

1:40-2:00PM

Data-Centric System Architecture: Meeting Workload Demands in Future System Designs

Presenter: Jonathan Hinkle, Micron


2:00-2:20PM

Future Memory Technology Needs for Hyperscale Cloud Servers

Presenter: Nagi Aboulenein, Ampere

What are future directions for memory technology requirements as seen through the lens of hyperscale cloud server SOCs.

2:20-2:30PM Break
2:30-2:50PM

Server Module & Supporting Logic Chip, Challenges & Innovations

Presenter: DY Lee, ONE Semiconductor

This presentation explains what has been improved in the DDR5 SDRAM generation as a whole, from the DRAM and module to the supporting logic chip. The memory bottleneck grows more critical over time, and the memory industry continues to respond. The presentation covers the key innovations introduced in the DDR5 generation.

2:50-3:10PM

DDR5 Protocol Validation Demystified: Conquering New Memory Frontiers

Presenter: McKinley Grimes, Future Plus

DDR5 has arrived, revolutionizing memory systems with dual high-speed sub-channels and diverse module formats: UDIMMs, RDIMMs, SODIMMs, and the emerging CAMM2. Its complexity surpasses DDR4, introducing PMICs, SPDs, HUBs, Temperature Sensors, and RCDs that communicate via the new DDR5 Sideband Bus. This presentation delves into the current DDR5 validation landscape, dissecting real-world challenges faced by engineers. Witness their innovative solutions in tackling physical layout variations, signal integrity complexities, and protocol nuances. We'll explore effective test methodologies, compliance verification, and emerging trends shaping the future of DDR5 validation.

3:10-3:30PM

DDR5 Interface Test and Validation Methodology

Presenter: Randy White, Keysight

It is well known throughout the memory industry that the system configuration heavily impacts the performance of a DDR5 based system. Many designers are chasing the DDR5 speed target of 6400Mbps but want as high capacity of DRAM as possible which translates into more loading/ranks. This presentation will show a signal integrity analysis of a dual-rank, single socket, DDR5 RDIMM based system targeting 6400 Mbps. The presentation will discuss timing budgets and highlight the data eye improvement seen when enabling receiver decision feedback equalization (DFE).

3:30-4:00PM Panel Discussion

Panel Moderator: Mario Martinez, Netlist

Program, topics and speakers subject to change without notice.