September 18th, 2024
San Francisco
The Pearl

Robotics &
Embodied AI
Conference

Actuate is an exclusive one-day event focused on sharing advancements in autonomous robotics and embodied AI development from the brightest minds in the industry.
GET TICKETS

Speakers

Vijay Badrinarayanan

VP of AI,
Wayve

Sergey Levine

Co-founder,
Physical Intelligence (π)

Chris Lalancette

ROS 2 Technical Lead,
Intrinsic


Steven Macenski

Owner & Chief Navigator,
Open Navigation

Nick Obradovic

Sr. Director of Software Engineering,
Sanctuary AI

Kalpana Seshadrinathan

Head of Deep Learning,
Skydio

Kat Scott

Developer Relations Lead,
Open Robotics

Michael Laskey

CTO,
Electric Sheep

Jeremy Steward

Senior Software Engineer,
Tangram Vision

Jason Sprowl

Principal Software Engineer,
Agility Robotics

Ryan Cook

Director of Engineering,
Agility Robotics

Karthik Keshavamurthi

Motion Planning Engineer,
Scythe Robotics

James Kuszmaul

Senior Robotics Engineer,
Blue River Technology

David Weikersdorfer

Head of Autonomy,
Farm-ng

Adrian Macneil

CEO & Co-founder,
Foxglove

Sponsors

Schedule

8:00 am

Arrival + Breakfast

Breakfast, coffee and juice
9:30 am

Adrian Macneil

Opening Remarks
CEO & Co-founder,
Foxglove

Topics included: image sensing, machine learning, implementation
10:00 am

Sergey Levine

Robotic Foundation Models
Co-founder, Physical Intelligence (π), Associate Professor, Berkeley

General-purpose models trained on large and diverse datasets can often outperform more specialized domain-specific systems, and have demonstrated remarkable capabilities in domains from natural language processing to computer vision. But what would such a generalist model look like in robotics? In this presentation, I will discuss steps toward building broadly applicable robotic foundation models, from assembling suitable datasets, to devising viable and general training objectives, to model architectures that can control a wide range of different robotic platforms, as well as the different ways such models can be used.
10:30 am

Chris Lalancette

Zenoh and ROS 2: Not a Paradox
ROS 2 Technical Lead,
Intrinsic


ROS 2 is a middleware based on a strongly typed, anonymous publish/subscribe mechanism that allows message passing between processes. Since its inception, ROS 2 has used the industry-standard DDS to deliver messages both on the same computer and over the wire to other computers. While DDS has worked for our users, it can be complex to properly configure and understand, and it has a number of shortcomings. To simplify deployment and debugging, the ROS 2 core team has spent a good part of the last year integrating Zenoh into ROS 2. Zenoh is a newer protocol built by some long-time DDS developers that aims to simplify the deployment of networked systems, both locally and across the larger internet. The integration of Zenoh shows great promise as a simpler way for users to configure and use ROS 2.
11:00 am

Kalpana
Seshadrinathan

Autonomous Navigation at Night: Skydio X10's NightSense
Head of Deep Learning,
Skydio

The challenge of low-light obstacle navigation has long prevented industries ranging from public safety to infrastructure inspection from benefiting from safe drone operations after dark. Skydio X10's NightSense system allows for fully autonomous around-the-clock drone operations, including use in complete darkness, with advanced AI flight assistance and obstacle avoidance. NightSense uniquely combines enhanced navigation cameras, advanced software and image tuning, and illumination hardware into two distinct setups: one for visible light and another for infrared (IR) light, each tailored to specific operational needs. In this talk, I will discuss NightSense, its development, and its use in the real world.
11:30 am

Jason
Sprowl

How to Build Robot Systems

Ryan Cook

Principal Software Engineer &
Director of Engineering,
Agility Robotics

Coming soon...
Noon

Kat Scott

Coming soon...
Developer Relations Lead,
Open Robotics

Coming soon...
12:30 pm

Lunch

1:30 pm

Vijay Badrinarayanan

Coming soon...
VP of Artificial Intelligence,
Wayve


Coming soon...
2:00 pm

Steven Macenski

Mastering Nav2: Techniques and Applications Powering an Industry
Owner & Chief Navigator,
Open Navigation

Steve Macenski, Open Navigation owner and lead of Nav2, will survey Nav2's techniques and features that set it apart as the world's most deployed mobile robotics autonomy framework. He will then discuss some of Nav2's prominent users and how they have utilized Nav2 in their products and services to rapidly position themselves as industry leaders in their respective domains across logistics, urban environments, agriculture, and much more.
2:30 pm

Michael Laskey

Building Robot Caretakers for our Planet with End2End Learning
CTO,
Electric Sheep

At Electric Sheep, we are scaling physical agents across the country to care for our parks and community spaces. Our robots perform tasks such as mowing, string trimming, and weed treatment using both a foundational world model, ES-1, and reinforcement learning. ES-1 is a learned model that consumes time-series information and can predict both raw features and interpretable outputs useful for outdoor work, such as semantics, a bird's-eye-view map, robot pose, and traversability. Given this world representation, we can extract embeddings to teach our agents how to act with RL. In this talk, I will detail how we leverage sim2real techniques to train our models and achieve surprisingly robust simulation transfer across thousands of real yards. I will then delve into how we leverage language-based feedback to improve performance in the real world.
3:00 pm

Jeremy Steward

Turning The Dial On Autonomous Calibration
Senior Software Engineer,
Tangram Vision

Almost all modern robots and autonomous vehicles rely on sophisticated multi-modal sensor arrays. Yet calibrating these complex arrays can be onerous, or sometimes impossible, without the right approach. For perception engineers, striking a balance among maintaining optimal sensor performance, minimizing operational downtime, and preserving onboard resources has become trickier than ever. In this talk, we explore the technical aspects of both "offline" and "online" (fully autonomous) calibration configurations, and evaluate how each provides different technical, performance, and operational advantages (or disadvantages), both on the production line and in the field.
3:30 pm

Karthik Keshavamurthi

Feedback to Features
Motion Planning Engineer,
Scythe Robotics

Developing a cutting-edge product feature is a collaborative effort involving multiple stakeholders, from initial conception to final release. In this presentation, we will explore the complete journey of creating the "Return to Home" feature for Scythe M.52, our fully autonomous commercial mower. We will discuss the entire lifecycle, starting with initial ideas inspired by vital customer feedback and moving through requirements definition, technical design, testing, and validation. The talk will emphasize the decision-making processes at each stage, the significance of cross-functional collaboration, and the iterative refinements guided by continuous customer feedback. The key takeaway is the translation of robotics work into user-facing benefits, emphasizing the importance of domain understanding in technical decision making.
4:00 pm

James Kuszmaul

Logging for Robots with Distributed Compute Nodes
Senior Robotics Software Engineer,
Blue River Technology

Many modern robotics systems consist of multiple compute devices coordinating to deliver value. At Blue River Technology, the See & Spray system uses 10 synchronized computers with 36 cameras to identify and selectively spray weeds within tens of milliseconds. To develop and debug such a product, careful and effective data logging is critical. This talk will discuss how we use the AOS middleware to produce coherent logs that are robust to failure modes such as intermittent network connectivity or drifting clocks.
4:30 pm

David Weikersdorfer

All-terrain Autonomy for Agriculture
Head of Autonomy,
Farm-ng

Labor shortages are a major challenge for the cultivation of work-intensive crops. At farm-ng, we are developing a camera-based autonomy stack, built on deep neural networks, that aims to enable a variety of autonomous farming operations such as scouting, weeding, and harvesting. The talk will give a hands-on overview of our stack and how we use a digital twin and visualization tools to accelerate our development cycle time.
5:00 pm

Happy Hour

Cocktails and light appetizers

ac·tu·ate
/ˈak(t)SHəˌwāt/

The Actuate Summit
by Foxglove

A one-day event designed to bring together the brightest minds in the robotics industry to share their insights, experiences, and ideas on the latest trends in embodied AI and robotics development.

Robotics will have a massive positive impact

We're building powerful open source and commercial tools to accelerate the impact robotics will have on the global economy and human productivity. Come join us, along with a room full of your peers, in a community dedicated to robotics developers and the advancement of embodied AI.
This year's agenda is full of exciting presentations, tutorials, and insightful panel discussions, along with a casual happy hour to network with everyone in attendance.
GET TICKETS