September 18th, 2024
San Francisco
The Pearl

Robotics &
Embodied AI
Conference

Sold Out
Actuate is an exclusive one-day event focused on sharing advancements in autonomous robotics and embodied AI development from the brightest minds in the industry.

Speakers

Brad Porter
CEO & Founder, Collaborative Robotics

Sergey Levine
Co-founder, Physical Intelligence (π); Professor, UC Berkeley

Vijay Badrinarayanan
VP of Artificial Intelligence, Wayve

Kalpana Seshadrinathan
Head of Deep Learning, Skydio

Vibhav Altekar
Co-founder & VP of Software, Saronic

Chris Lalancette
ROS 2 Technical Lead, Intrinsic

Allison Thackston
Senior Manager, Robotics, Blue River Technology

Ryan Cook
Director of Engineering, Agility Robotics

Nick Obradovic
Sr. Director of Software Engineering, Sanctuary AI

Rohan Ramakrishna
Senior Staff Engineer, Data Platform, Agility Robotics

Kathleen Brandes
CTO & Co-founder, Adagy Robotics

Michael Laskey
CTO, Electric Sheep

Kat Scott
Developer Advocate, OSRA

Rajat Bhageria
Founder & CEO, Chef Robotics

David Weikersdorfer
Head of Autonomy, Farm-ng

Steven Macenski
Owner & Chief Navigator, Open Navigation

Ilia Baranov
CTO & Co-founder, Polymath Robotics

Simon Box
Founder & CEO, ReSim.ai

Karthik Keshavamurthi
Motion Planning Engineer, Scythe Robotics

Adrian Macneil
CEO & Co-founder, Foxglove

James Kuszmaul
Senior Robotics Engineer, Blue River Technology

Jeremy Steward
Senior Software Engineer, Tangram Vision

Sponsors

Schedule

8:00 am

Arrival + Breakfast

Breakfast, coffee and juice
9:15 am

Adrian Macneil

Opening Remarks
CEO & Co-founder, Foxglove

Topics included: image sensing, machine learning, implementation
9:30 am

Sergey Levine

Robotic Foundation Models
Co-founder, Physical Intelligence (π); Associate Professor, UC Berkeley

General-purpose models trained on large and diverse datasets can often outperform more specialized domain-specific systems, and have demonstrated remarkable capabilities in domains from natural language processing to computer vision. But what would such a generalist model look like in robotics? In this presentation, I will discuss steps toward building broadly applicable robotic foundation models, from assembling suitable datasets, to devising viable and general training objectives, to model architectures that can control a wide range of different robotic platforms, as well as the different ways such models can be used.
10:00 am

Michael Laskey

Building Robot Caretakers for our Planet with End2End Learning
CTO, Electric Sheep

At Electric Sheep, we are scaling physical agents across the country to care for our parks and community spaces. Our robots perform tasks such as mowing, string trimming, and weed treatment using both a foundational world model, ES-1, and reinforcement learning. ES-1 is a learned model that consumes time-series information and can predict both raw features and interpretable outputs useful for outdoor work, such as semantics, a bird's-eye-view map, robot pose, and traversability. Given this world representation, we can extract embeddings to teach our agents how to act with RL. In this talk, I will detail how we leverage sim2real techniques to train our models and achieve surprisingly robust simulation transfer across thousands of real yards. I will then delve into how we leverage language-based feedback to improve performance in the real world.
10:20 am

Chris Lalancette

Zenoh and ROS 2: Not a Paradox
ROS 2 Technical Lead, Intrinsic

ROS 2 is a middleware based on a strongly-typed, anonymous publish/subscribe mechanism that allows for message passing between different processes. Since its inception, ROS 2 has used the industry standard DDS as the way to deliver messages both on the same computer, and on the wire to different computers. While DDS has worked for our users, it can be complex to properly configure and understand, and it has a number of shortcomings. To simplify deployment and debugging, the ROS 2 core team has spent a good part of the last year integrating Zenoh into ROS 2. Zenoh is a newer protocol built by some long-time DDS developers, which aims to simplify deployment of networking systems, both locally and across the larger internet. The integration of Zenoh shows great promise as a simpler way for users to configure and use ROS 2.
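As a conceptual aside, the anonymous, strongly-typed publish/subscribe model described above can be sketched in a few lines of plain Python. This is not ROS 2's or Zenoh's actual API (no rclpy, no DDS); the `Bus` and `Twist` classes and the topic name are invented purely to illustrate the pattern, in which publishers and subscribers never reference each other directly and agree only on a topic name and a message type:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Twist:
    """Toy stand-in for a velocity-command message type."""
    linear_x: float
    angular_z: float

class Bus:
    """Toy anonymous, strongly-typed publish/subscribe bus."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> [(msg_type, callback)]

    def subscribe(self, topic, msg_type, callback):
        self._subs[topic].append((msg_type, callback))

    def publish(self, topic, msg):
        # Publisher and subscribers never reference each other (anonymous);
        # only the topic name and the message type have to match.
        for msg_type, callback in self._subs[topic]:
            if not isinstance(msg, msg_type):
                raise TypeError(f"{topic} expects {msg_type.__name__}")
            callback(msg)

bus = Bus()
received = []
bus.subscribe("/cmd_vel", Twist, received.append)
bus.publish("/cmd_vel", Twist(linear_x=0.5, angular_z=0.1))
print(received[0].linear_x)  # 0.5
```

In ROS 2 the same contract is enforced by the middleware layer (historically DDS, now optionally Zenoh), which additionally handles delivery across processes and machines.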
10:40 am

Nick Obradovic

What Does it Take to Train a Robot's Mind?
Sr. Director of Software Engineering, Sanctuary AI

We will review the data, machine learning, and deployment pipeline necessary to successfully train and validate the models powering Phoenix. As we navigate the uncharted territory of training humanoids, we will share early success stories and hard lessons learned, along with our thoughts on what could accelerate our path to success.
11:00 am

Break

20-minute break
11:20 am

Jeremy Steward

Turning The Dial On Autonomous Calibration
Senior Software Engineer, Tangram Vision

Almost all modern robots and autonomous vehicles rely on sophisticated multimodal sensor arrays. Yet calibrating these complex arrays can be onerous or sometimes impossible without the right approach. For perception engineers, striking a balance among maintaining optimal sensor performance, minimizing operational downtime, and preserving onboard resources has become trickier than ever. In this talk, we explore different technical aspects of both "offline" and "online" (or fully autonomous) calibration configurations, and evaluate how each of these provide different technical, performance and operational advantages (or disadvantages) on both the production line and in the field.
11:40 am

James Kuszmaul

Logging for Robots with Distributed Compute Nodes
Senior Robotics Software Engineer, Blue River Technology

Many modern robotics systems consist of multiple compute devices coordinating to deliver value. At Blue River Technology, the See & Spray system uses 10 synchronized computers with 36 cameras to identify and selectively spray weeds in tens of milliseconds. In order to develop and debug such a product, careful and effective data logging is critical. This talk will discuss how we make use of the AOS middleware to produce coherent logs that are robust to failure modes such as intermittent network connectivity or drifting clocks.
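To illustrate the clock problem mentioned above, consider merging per-node logs when each node timestamps events with its own drifting clock. The sketch below is plain Python and not AOS's actual API; `merge_logs` and the example offsets are hypothetical, and the offset-estimation step itself (which a middleware like AOS handles) is assumed rather than shown:

```python
# Each node timestamps events with its own (possibly drifting) clock.
# Given an estimated per-node offset onto a shared reference clock,
# the logs can be merged into one coherent timeline.

def merge_logs(logs, offsets):
    """logs:    {node: [(local_time_s, event), ...]}
       offsets: {node: seconds added to map local time onto the reference clock}
       Returns a list of (reference_time_s, node, event), time-ordered."""
    merged = []
    for node, events in logs.items():
        for t_local, event in events:
            merged.append((t_local + offsets[node], node, event))
    merged.sort()  # order by corrected reference-clock time
    return merged

logs = {
    "cam0": [(10.002, "frame"), (10.035, "frame")],
    "sprayer": [(9.980, "nozzle_open")],  # this clock runs ~30 ms behind cam0
}
offsets = {"cam0": 0.0, "sprayer": 0.030}
timeline = merge_logs(logs, offsets)
print([e for _, _, e in timeline])  # ['frame', 'nozzle_open', 'frame']
```

Without the offset correction, the sprayer event would sort before both camera frames, even though it actually occurred between them.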
12:00 pm

Kalpana Seshadrinathan

Autonomous Navigation at Night: Skydio X10's NightSense
Head of Deep Learning, Skydio

12:20 pm

Lunch

1:45 pm

Kat Scott

Panel Discussion: The Role of Simulation in Robotics Development
Allison Thackston, Senior Manager, Robotics, Blue River Technology
Ilia Baranov, CTO & Co-Founder, Polymath Robotics
Kathleen Brandes, CTO & Co-founder, Adagy Robotics
Rajat Bhageria, Founder & CEO, Chef Robotics
Simon Box, Founder & CEO, ReSim.ai

2:20 pm

Steven Macenski

Mastering Nav2: Techniques and Applications Powering an Industry
Owner & Chief Navigator, Open Navigation

Steve Macenski, Open Navigation owner and lead of Nav2, will survey Nav2's techniques and features that set it apart as the world's most deployed mobile robotics autonomy framework. He will then discuss some of Nav2's prominent users and how they have utilized Nav2 in their products and services to rapidly position themselves as industry leaders in their respective domains across logistics, urban environments, agriculture, and much more.
2:40 pm

Karthik Keshavamurthi

Feedback to Features
Motion Planning Engineer, Scythe Robotics

Developing a cutting-edge product feature is a collaborative effort involving multiple stakeholders, from initial conception to final release. In this presentation, we will explore the complete journey of creating the "Return to Home" feature for Scythe M.52, our fully autonomous commercial mower. We will discuss the entire lifecycle, starting with initial ideas inspired by vital customer feedback and moving through requirements definition, technical design, testing, and validation. The talk will emphasize the decision-making processes at each stage, the significance of cross-functional collaboration, and the iterative refinements guided by continuous customer feedback. The key takeaway is how robotics capabilities translate into user-facing benefits, emphasizing the importance of domain understanding in technical decision-making.
3:00 pm

David Weikersdorfer

All-terrain Autonomy for Agriculture
Head of Autonomy, Farm-ng

Labor shortages are a major challenge for the cultivation of labor-intensive crops. At farm-ng we are developing a camera-based autonomy stack, built on deep neural networks, that aims to enable a variety of autonomous farming operations such as scouting, weeding, and harvesting. The talk will give a hands-on overview of our stack and how we use a digital twin and visualization tools to accelerate our development cycle time.
3:20 pm

Break

20-minute break
3:40 pm

Vibhav Altekar

Building Ocean Autonomy: Overcoming Challenges to Develop Reliable Software for Seafaring Drones
Co-Founder & VP of Software, Saronic

Oceans are hostile, unpredictable environments that present endless challenges when developing drones capable of autonomously navigating open waters. This talk from Saronic – a leading developer and manufacturer of autonomous surface vessels for the U.S. Navy – will cover what makes it challenging to build software for seafaring robots, and how the team is overcoming those obstacles with the help of simulation, data visualization, and MCAP files.
4:00 pm

Rohan Ramakrishna & Ryan Cook

Building a Robotics Data Platform for Robot & Fleet Level Analysis
Senior Software Engineer, Data Platform, and Director of Engineering, Cloud Software, Agility Robotics

Collecting, extracting, and analyzing data is a key component of improving robotics systems. But how do you actually pull data from robots in bandwidth-constrained remote environments? And how do you enable your users to analyze that data? In this talk we will cover our data collection systems on our robots; the data ingestion pipelines to our cloud data lake, including how we transform and merge many different data streams; and the mechanisms we provide to our teams for individual robot debugging via Foxglove, fleet-level analysis via Superset, product usage in Agility Arc, and machine learning development for our robotics engineers.
4:30 pm

Vijay Badrinarayanan

End2end Technologies to Accelerate Deployment of End2end Policies
VP of Artificial Intelligence, Wayve

The road to embodied AI is being paved with impressive end-to-end trained models. This is particularly remarkable in self-driving, where we are now transitioning the technology to robust and safe deployment. But what is needed to further accelerate the learning efficiency and universal deployment of end-to-end models? I will showcase Wayve's recent research progress addressing this broad question.
5:00 pm

Brad Porter

From Prototype to Production: Mastering Robotics at Scale
CEO & Founder, Collaborative Robotics

Brad Porter, CEO and Founder of Collaborative Robotics, in conversation with Foxglove CEO Adrian Macneil, will provide a comprehensive exploration of the critical factors necessary for deploying robotics at scale. Drawing from Brad’s extensive experience leading Amazon’s global robotics operations, they will discuss the considerations and strategies that ensure reliability and scalability in complex environments. Brad will highlight key lessons learned in optimizing operational efficiency and the importance of robust infrastructure to support large-scale robotic systems. The session will also delve into innovative applications that drive industrial automation and provide a forward-looking perspective on emerging trends in the field. Attendees will gain actionable insights into overcoming common scaling challenges and how to apply these strategies to their own robotics deployments.
5:30 pm

Happy Hour

Cocktails and light appetizers

ac·tu·ate
/'ak(t)SHəˌwāt/

The Actuate Summit by Foxglove

A one-day event designed to bring together the brightest minds in the robotics industry to share their insights, experiences, and ideas on the latest trends in embodied AI and robotics development.

Robotics will have a massive positive impact

We're building powerful open source and commercial tools to accelerate the impact robotics will have on the global economy and human productivity. We want you to come and join us, along with a room full of your peers, in a community dedicated to robotics developers and the advancement of embodied AI.
This year's agenda is full of exciting presentations, tutorials, and insightful panel discussions, along with a casual happy hour to network with everyone in attendance.