Mobile/Client/AI Computing Forum

Tuesday, May 14 • San Jose, CA

Program Moderator: Charles Furnweger

JEDEC Welcome
Mian Quddus, JEDEC Board of Directors

Morning Session



Samsung Memory - Personalizing Your Edge AI Experiences

Presenter: Jim Elliott & Eyal Pnini, Samsung

Coming Soon



The CAMM2 Journey and Future Potential

Keynote Presenter: Dr. Tom Schnell, Dell

A short history of the CAMM journey since 2020 will be covered, followed by details on LPDDR5 CAMM2 and DDR5 CAMM2. Future work and the potential of CAMM2 will indicate the new investments and efforts needed to continue the revolution.


The Evolution of Hyperscale Data Centers: From CPU-Centric to GPU-Centric

Presenter: Manoj Wadekar, Meta

In recent years, hyperscale data centers have been optimized for scale-out stateless applications and zettabyte storage, with a focus on CPU-centric platforms. However, as the infrastructure shifts towards next-generation AI applications, the center of gravity is moving towards GPU/accelerators. This transition from "millions of small stateless applications" to "large AI applications running across clusters of GPUs" is pushing the limits of accelerators, network, memory, topologies, rack power, and other components. To keep up with this dramatic change, innovation is necessary to ensure that hyperscale data centers can continue to support the growing demands of AI applications. This keynote speech will explore the challenges and opportunities of this evolution and highlight the key areas where innovation is needed to enable the future of hyperscale data centers.



Presenter: Brett Murdock, Synopsys

Coming Soon



LPDDR Impact from Edge AI Computing

Keynote Presenter: Osamu Nagashima, Micron

Large language models are impacting edge devices and their memory requirements. This talk explores the memory capabilities needed for LLM edge computing.


Graph Compiler for a Weight-Stationary Dataflow

Presenter: Sergey Ostrikov, Infineon

This presentation explores the use of NVMs as a backbone for a weight-stationary dataflow and provides an overview of graph compilation techniques that enable such architectures.


Divergence of Memory Technology Needs for Client/Mobile and Cloud Server SOCs

Presenter: Nagi Aboulenein, Ampere

We will discuss areas of divergence (and synergy) of client/mobile and server memory technology needs for future SOCs and platforms.




CXL for Automotive and AI Applications 

Presenter: Bill Gervasi, Wolley

Cars long ago became data centers on wheels, so it makes sense that trends in data center design would be followed by similar adaptations for automotive applications. This trend includes the adoption of artificial intelligence processing in cars as well as data centers. Similarities include the value of shared resources, while differences include the memory footprint needed to support levels of learning and inference, as well as the value of non-volatile memory on CXL to support fast reboot.

11:25AM-11:55AM Panel Discussion
12:00-1:00PM Lunch Break

Afternoon Session



CXL Solutions for Memory-Centric Computing

Keynote Presenter: Dr. Sung Ryu, Samsung

The demand for memory and processing power is increasing exponentially, and the new CXL protocol can be a solution to memory capacity and bandwidth problems. Traditional memory was designed to minimize data access latency for small amounts of data, but new workloads such as AI demand substantial memory capacity and bandwidth. Disaggregated CXL memory solutions may introduce more latency, but they offer many attractive benefits, including composability, scalability, TCO reduction, and hardware-based memory management. In this talk, I will share various kinds of CXL solutions and their performance data, including use cases with real applications.


LPDDR's Enhanced Performance and Power Efficiency in Memory Solutions

Presenter: Mickey Choi, SK Hynix

As LPDDR memory emerged within the industry with the primary objective of minimizing power consumption, it has garnered significant attention not only for its energy-efficient attributes but also for its noteworthy performance advancements. LPDDR memories have rapidly elevated their speed and capabilities, surpassing the performance levels offered by traditional DDR memories. This increased prowess has positioned LPDDR as the preferred choice in the evolving landscape of various industries, extending beyond the realms of mobile and client applications to encompass domains such as AI, graphics, and server environments. During this presentation, we will delve into the ways in which LPDDR memories have adeptly adapted to meet the evolving requirements of the industry across diverse sectors.



Addressing Memory Challenges and Opportunities in Edge AI Computing

Presenter: Thomas To, AMD

The rapid expansion of machine learning is driving the proliferation of AI applications across various platforms, including cloud-based servers, edge devices and endpoint devices. However, the implementation of AI at the edge and end point devices presents unique challenges, particularly in system memory management. These challenges encompass limited memory resources, power consumption constraints, bandwidth limitations and the necessity for real-time processing capabilities. This presentation will commence by delineating the anticipated trends in the edge AI platforms. Subsequently, it will provide an overview of the key components crucial for edge AI computing, emphasizing their relative importance. Lastly, recent developments in system memory technology from JEDEC will be explored and analyzed in the context of addressing the challenges and requirements of edge AI computing.



New Age of AI Infrastructure and Accelerated Computing

Presenter: Vik Malyala, Supermicro

I will discuss some of the technology drivers and innovations in the deployment of large accelerated computing clusters for AI, ML, and LLM workloads. These include high-performance GPUs designed for AI training and inferencing; high-performance storage solutions and distributed file systems for accessing large datasets; and high-speed networking infrastructure to facilitate communication and the transfer of large datasets and bursts of data between compute nodes within a cluster. I will address some of the low-latency and high-bandwidth requirements crucial for AI workloads, and touch on the power, density, and cooling challenges associated with scaling these high-powered AI clusters in large data center deployments.



LPDDR's Enhanced Performance and Power Efficiency in Memory Solutions

Presenter: Kenneth Wang, Samsung

We will discuss areas of divergence (and synergy) of client/mobile and server memory technology needs for future SOCs and platforms.



SONOS Flash for AI Computation

Presenter: Ravi Kumar, Infineon

Non-volatile memory technology is a promising candidate for edge AI applications. The ability to store weights and perform matrix-vector multiplications locally provides energy savings by avoiding large-scale data movement. This results in a significant improvement in energy efficiency for analog NVM-based accelerators compared to GPU-based digital networks. We discuss process features and the architecture of SONOS flash memory for AI computation and demonstrate accuracy for deep neural network (DNN) inference comparable to digital accelerators on standard image classification tasks, with a > 10x power advantage.
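To illustrate the weight-stationary idea behind analog in-memory compute, here is a minimal sketch (not Infineon's implementation; all names and parameters are hypothetical). Weights are "programmed" once, as quantized values standing in for cell conductances, and only input activations move for each multiply; optional read noise models analog non-idealities.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_weights(w, levels=16):
    """Quantize weights to a small number of discrete levels,
    standing in for the finite conductance states of an NVM cell."""
    w_max = np.abs(w).max()
    step = 2 * w_max / (levels - 1)
    return np.round(w / step) * step

# "Program" the array once (weight-stationary): 64 outputs, 128 inputs.
# After this point the weights never move; only activations do.
W = program_weights(rng.standard_normal((64, 128)))

def analog_mvm(x, noise_sigma=0.0):
    """One in-array matrix-vector multiply; noise_sigma models read noise."""
    y = W @ x
    if noise_sigma > 0:
        y = y + rng.normal(0.0, noise_sigma, size=y.shape)
    return y

x = rng.standard_normal(128)
y = analog_mvm(x)          # noiseless reference result
y_noisy = analog_mvm(x, noise_sigma=0.05)
```

The energy argument in the abstract corresponds to the fact that `W` is written once and reused for every input, so per-inference data movement is limited to the activation vector.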



LPDDR5 Interface Test and Validation Methodology

Presenter: Randy White, Keysight

Over time, as LPDDR speeds have increased, the fundamental approach used to move data has had to change. Traditional high-speed digital timing and noise with min/typ/max specifications have given way in LPDDR5 to high-speed serial approaches based on eye masks with jitter specifications. LPDDR5 must go a step further to deal with distorted eyes using tunable equalization. At each point, the need to characterize and measure what's defined in the spec has made Measurement Science and DFT increasingly important in defining the LPDDR spec. This session will focus on the Measurement Science behind the LPDDR5 specification.


LPDDR5: A Deep Dive into In-System Protocol Validation

Presenter: McKinley Grimes, FuturePlus

The latest evolution in low-power memory technology, LPDDR5, is making waves in the embedded market. Its recent integration onto CAMM2 modules signifies a pivotal shift, offering increased flexibility and performance alongside compelling power efficiency. But harnessing its full potential requires rigorous validation. This presentation delves into the strategies engineers employ to ensure successful LPDDR5 protocol validation within their designs.

3:50-4:20PM Panel Discussion

Program, topics and speakers subject to change without notice.