Keynote

Author:

RK Anand

Co-Founder & CPO
RECOGNI

RK Anand is the Co-founder and Chief Product Officer (CPO) of Recogni, an artificial intelligence startup based in San Jose specializing in building multimodal GenAI inference systems for data centers.

At Recogni, RK spearheads the company’s product development and go-to-market strategies within the data center industry.

With an unwavering commitment to customer needs and value creation, RK and the Recogni team are striving to deliver the highest-performing and most cost- and energy-efficient multimodal GenAI systems to the market.

RK brings over 35 years of leadership experience in data center compute systems, networking, and silicon development. His distinguished career includes engineering roles at Sun Microsystems and serving as Executive Vice President and General Manager at Juniper Networks. As one of the earliest employees at Juniper, RK played a pivotal role in the company’s growth from a startup to generating billions of dollars in revenue.

The future of AI demands a revolution in infrastructure. As frontier AI models strain the limits of traditional silicon scaling and copper connectivity, a fundamental shift is needed. AI data centers already consume as much power as the largest cities on Earth and continue to grow at an exponential rate. As the industry turns to optics for connecting the next generation of XPU superclusters, we will need more than 100x the bandwidth and double the power efficiency of existing, shoreline-bound optical technologies.

This keynote will present the vision for 3D photonics, a technology that is transforming not only how XPUs and switches are interconnected but also the design of the underlying silicon and packaging. We will reveal how Passage, the world's first 3D co-packaged optics solution, enables a new edgeless I/O design paradigm. This paradigm delivers massive scale-up bandwidth, linking tens of thousands of XPUs with unprecedented energy efficiency. We will also detail the fundamental performance and operational breakthroughs, as well as the broad ecosystem partnerships that are enabling volume production and deployment in hyperscale data centers. Join us to learn how this technology is driving a new era of AI supercomputing.

Author:

Nick Harris

Founder & CEO
Lightmatter

Nick Harris is the founder and CEO of Lightmatter, a pioneering photonic-computing company that is redefining AI infrastructure. An MIT-trained engineer and scientist, he won the MIT Technology Review’s TR35 award and holds numerous patents on revolutionary photonic technologies. His prolific research—published in top-tier journals such as Nature—has seeded new fields in photonic AI interconnects, processor design, and quantum computing. Under his leadership, Lightmatter has rapidly become the industry benchmark for ultra-fast photonics for connecting AI supercomputers.

Author:

Charles Alpert

AI Fellow
Cadence

Charles (Chuck) Alpert is Cadence’s AI Fellow and drives cross-functional Agentic AI solutions throughout Cadence’s software stack. Prior to this, he led various pioneering teams in digital implementation, including Global Routing, Clock Tree Synthesis, Genus Synthesis, and Cerebrus AI. Charles has published over 100 papers and received over 100 patents in the EDA space. He is a Cadence Master Inventor. He has served as Deputy Editor-in-Chief of IEEE Transactions on Computer-Aided Design, chaired the IEEE/ACM Design Automation Conference, and was named an IEEE Fellow. He received a B.S. and a B.A. from Stanford University and a Ph.D. in Computer Science from UCLA.

Author:

David Glick

SVP, Enterprise Business Services
Walmart

David Glick serves as the senior vice president of Walmart’s Enterprise Business Services. He leads enterprise systems, including People Technology Modernization, Finance Tech, Associate Digital Experience (ADE), and Shared Services, which enable Walmart to spend smartly, act digitally, and build trust with associates and shareholders.

Before joining Walmart, David served as the chief technology officer for Flexe, a logistics and supply chain technology provider. There, he was responsible for building the foundational technology that allows for an open logistics network to optimize the delivery of goods. Prior to that, he was vice president of fulfillment and logistics tech for Amazon, where he was responsible for all the technology inside the walls of Amazon’s fulfillment centers, as well as the founding tech vice president of Amazon Logistics. 

David has over 20 years of experience in enterprise tech, product development, system architecture, and logistics and fulfillment tech.

He holds a Bachelor of Science in Physics from the University of Michigan and a Ph.D. in Physics from the University of North Carolina at Chapel Hill.

Author:

Atif Rafiq

Founder
Ritual

Atif is the former Chief Digital Officer at McDonald’s and past President at Volvo and MGM Resorts. He’s now the founder of Ritual, an AI-powered platform helping organizations scale structured, intelligent decision-making. 

With a track record of leading digital transformation at the highest level, Atif brings valuable insight into how large enterprises adopt and scale AI effectively.

Atif has created over $500 billion in enterprise value and advises cutting-edge companies such as SpaceX and Anthropic.

Author:

Mark Lohmeyer

VP & GM, Compute and AI Infrastructure
Google

Mark Lohmeyer leads the Compute and AI Infrastructure business for Google Cloud. In this role, he is responsible for Google Cloud Compute Engine, AI/ML infrastructure (Cloud TPU and GPU), Core ML services, block storage (Persistent Disk and Hyperdisk), and enterprise solutions (SAP on GCP, Google Cloud VMware Engine, etc.).

Mark’s background includes leadership roles in general management, product management, marketing, business development, and engineering management across a wide range of core infrastructure technologies, including compute, storage, and networking.

Prior to joining Google, Mark was the SVP/GM of VMware’s Cloud Infrastructure Business Group. In this role, he led a large-scale, global organization spanning engineering, operations, product management, and product marketing for the VMware infrastructure portfolio across Private Clouds, Public Clouds, and Cloud Provider Partners / Sovereign Clouds.

Prior to VMware, Mark led the product team for Enterprise WAN and Routing at Cisco and was the GM for HA/DR and Storage solutions at Veritas Software. Earlier in his career, he worked on storage I/O hardware at Adaptec and on digital imaging research and hardware design at Sarnoff Research Labs, and he holds a patent based on this work.

Mark holds a Bachelor’s and a Master’s degree in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology, where he also served as the head teaching assistant for Computational Structures.

AI is only as good as the foundation upon which it is built. Unstable infrastructure can turn even the most brilliant algorithms into expensive experiments that fail when you need them most. With proven security, massive scale, and unmatched reliability, AWS was built for the needs of tomorrow's most demanding workloads. We'll dive into the high-performance networking, innovative silicon design, and differentiated services that enable customers to push the boundaries of what's possible. Because when it comes to AI, this isn't just infrastructure—it's the foundation that powers the future.

Author:

Barry Cooks

VP, SageMaker AI
AWS

Barry Cooks is a global enterprise technology veteran with 25 years of experience leading teams in cloud computing, hardware design, application microservices, artificial intelligence, and more. As Vice President of Technology at Amazon, he is responsible for compute abstractions (containers, serverless, VMware, micro-VMs), quantum experimentation, high performance computing, and AI training. He oversees key AWS services including AWS Lambda, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, and Amazon SageMaker. Barry also leads responsible AI initiatives across AWS, promoting the safe and ethical development of AI as a force for good. Prior to joining Amazon in 2022, Barry served as CTO at DigitalOcean, where he guided the organization through its successful IPO. His career also includes leadership roles at VMware and Sun Microsystems. Barry holds a BS in Computer Science from Purdue University and an MS in Computer Science from the University of Oregon.

Author:

John Overton

CEO
Kove

John Overton is the CEO of Kove IO, Inc. In the late 1980s, while at the Open Software Foundation, Dr. Overton wrote software that went on to be used by approximately two-thirds of the world’s workstation market. In the 1990s, he co-invented and patented technology utilizing distributed hash tables for locality management, now widely used in storage, database, and numerous other markets. In the 2000s, he led development of the first truly capable Software-Defined Memory offering, Kove:SDM™. Kove:SDM™ enables new Artificial Intelligence and Machine Learning capabilities while also reducing power by up to 50%. Dr. Overton has more than 65 issued patents worldwide and has peer-reviewed publications across numerous academic disciplines. He holds postgraduate and doctoral degrees from Harvard and the University of Chicago.

Author:

Yee Jiun Song

VP, Engineering
Meta

Yee Jiun Song serves as Vice President of Engineering at Meta, leading the Infrastructure Foundation organization. In this role, he oversees the strategy and development of Meta's AI, compute, and storage hardware platforms, custom silicon, supply chain operations, and the management of global infrastructure capacity.  

Yee Jiun's tenure at Meta started in 2010 when he joined the company as a Research Scientist with a focus on improving system reliability and fault tolerance. He later became Vice President of Engineering for the Core Systems organization, where he played a key role in developing the software systems necessary for Meta's services to scale reliably to millions of servers.


Yee Jiun is committed to research and innovation and has published in various academic conferences, including SOSP, OSDI, and ISCA.

Yee Jiun holds a B.S. in Electrical Engineering and Computer Science and a B.A. in Economics from the University of California, Berkeley, an M.S. in Computer Science from Stanford University, and a Ph.D. in Computer Science from Cornell University.
