<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://murray.cds.caltech.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fbaldini</id>
	<title>Murray Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://murray.cds.caltech.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Fbaldini"/>
	<link rel="alternate" type="text/html" href="https://murray.cds.caltech.edu/Special:Contributions/Fbaldini"/>
	<updated>2026-04-15T16:22:52Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.5</generator>
	<entry>
		<id>https://murray.cds.caltech.edu/index.php?title=Sarah_Dean,_11-12_Feb_2020&amp;diff=23374</id>
		<title>Sarah Dean, 11-12 Feb 2020</title>
		<link rel="alternate" type="text/html" href="https://murray.cds.caltech.edu/index.php?title=Sarah_Dean,_11-12_Feb_2020&amp;diff=23374"/>
		<updated>2020-02-10T13:30:18Z</updated>

		<summary type="html">&lt;p&gt;Fbaldini: /* Tuesday (11 Feb) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Sarah Dean, a PhD student working with Ben Recht, will visit Caltech on 11-12 Feb 2020.  If you would like to meet with her, please sign up for a slot below (using your IMSS credentials to log in).  Please make sure to include the location where she should meet you.&lt;br /&gt;
&lt;br /&gt;
=== Schedule ===&lt;br /&gt;
&lt;br /&gt;
{| width=100%&lt;br /&gt;
|- valign=top &lt;br /&gt;
| width=50% |&lt;br /&gt;
==== Tuesday (11 Feb) ====&lt;br /&gt;
* 11:40 am: arrival at BUR&lt;br /&gt;
* ~12:15 pm: arrival on campus&lt;br /&gt;
* 12:15 pm: John Doyle (210 Annenberg)&lt;br /&gt;
* 1:00 pm: Quick lunch (TBD)&lt;br /&gt;
* 1:30 pm: Richard Murray (109 Steele Lab)&lt;br /&gt;
* 2:00 pm: Open&lt;br /&gt;
* 2:45 pm: Francesca (230 ANN)&lt;br /&gt;
* 3:30 pm:  Anima Anandkumar (316 Annenberg)&lt;br /&gt;
* 4:00 pm: Seminar&lt;br /&gt;
* 5:00 pm: Katie Bouman (346 Annenberg)&lt;br /&gt;
* 6:00 pm: Dinner with Richard + grad students, postdocs&lt;br /&gt;
| width=50% |&lt;br /&gt;
&lt;br /&gt;
==== Wednesday (12 Feb) ====&lt;br /&gt;
* 8:45 am: Kamyar Azizzadenesheli (337 Annenberg)&lt;br /&gt;
* 9:30 am: Richard Cheng (205 Gates Thomas)&lt;br /&gt;
* 10:15 am: MJ Khojasteh (255 Moore Lab)&lt;br /&gt;
* 11:00 am: Ludwig Schmidt seminar&lt;br /&gt;
* 12:00 pm: Lunch with faculty or grad students&lt;br /&gt;
* 1:15 pm: Yisong Yue (303 Annenberg)&lt;br /&gt;
* 2:00 pm: Aaron Ames (266 Gates-Thomas)&lt;br /&gt;
* 2:30 pm: Meet with Ames&#039; Students (121 Gates Thomas)&lt;br /&gt;
* 3:00 pm: CDS tea&lt;br /&gt;
* 3:30 pm: Angie Liu (315 Annenberg)&lt;br /&gt;
* 4:00 pm: Sumanth (Steele Library, opposite 109 Steele  Lab)&lt;br /&gt;
* 4:30 pm: Wrap up meeting with Richard (109 Steele Lab)&lt;br /&gt;
* 4:45 pm: Depart campus&lt;br /&gt;
* 6:40 pm: departure from BUR&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Seminar ===&lt;br /&gt;
&lt;br /&gt;
Safe and Robust Perception-Based Control&amp;lt;br&amp;gt;&lt;br /&gt;
Sarah Dean, UC Berkeley&lt;br /&gt;
&lt;br /&gt;
Tue, 11 February, 4 pm&amp;lt;br&amp;gt;&lt;br /&gt;
105 Annenberg&lt;br /&gt;
&lt;br /&gt;
Machine learning provides a promising path to distill information from high-dimensional sensors like cameras -- a fact that often serves as motivation for merging learning with control. This talk aims to provide rigorous guarantees for systems with such learned perception components in closed loop. Our approach consists of characterizing uncertainty in perception and then designing a robust controller to account for these errors. We use a framework that handles uncertainties in an explicit way, allowing us to provide performance guarantees and illustrate how trade-offs arise from limitations of the training data. Throughout, I will motivate this work with the example of autonomous vehicles, including both simulated experiments and an implementation on a 1/10 scale autonomous car. Joint work with Aurelia Guy, Nikolai Matni, Ben Recht, Rohan Sinha, and Vickie Ye.&lt;/div&gt;</summary>
		<author><name>Fbaldini</name></author>
	</entry>
	<entry>
		<id>https://murray.cds.caltech.edu/index.php?title=SURF_discussions,_Jan_2020&amp;diff=23309</id>
		<title>SURF discussions, Jan 2020</title>
		<link rel="alternate" type="text/html" href="https://murray.cds.caltech.edu/index.php?title=SURF_discussions,_Jan_2020&amp;diff=23309"/>
		<updated>2020-01-23T00:46:10Z</updated>

		<summary type="html">&lt;p&gt;Fbaldini: /* 28 Jan (Tue) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Slots for talking with applicants and co-mentors about SURF projects.  Please sign up for one of the slots below.  All times are PST. __NOTOC__&lt;br /&gt;
&lt;br /&gt;
In preparation for our conversation, please do the following:&lt;br /&gt;
* SURF students should work with their co-mentors to find a time for the meeting/Skype call.  (For Skype calls, co-mentors should initiate.)&lt;br /&gt;
* Please make sure you have read the material in the description of your project, so that you are prepared to talk about what the project is about and we can home in on the key ideas that will form the basis of your proposal.&lt;br /&gt;
* Please take a look at the [[SURF GOTChA chart]] page, which is the format that we will use for the first iteration of your project proposal.&lt;br /&gt;
* Please read through the [http://sfp.caltech.edu/students/proposal/surf_and_amgen_proposals SURF proposal information page] to see what the SURF office requires (and when)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=1 width=100%&lt;br /&gt;
|- valign=top&lt;br /&gt;
| width=25% |&lt;br /&gt;
==== 24 Jan (Fri) ====&lt;br /&gt;
* 2:00 pm PST: open&lt;br /&gt;
* 2:30 pm PST: open&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
* 4:30 pm PST: open&lt;br /&gt;
* 5:00 pm PST: open&lt;br /&gt;
| width=25% |&lt;br /&gt;
&lt;br /&gt;
==== 28 Jan (Tue) ====&lt;br /&gt;
* 1:30 pm PST: open&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
* 5:00 pm PST: Bruno and Francesca&lt;br /&gt;
* 5:30 pm PST: Ivy and Apurva&lt;br /&gt;
| width=25% |&lt;br /&gt;
&lt;br /&gt;
==== 30 Jan (Thu) ====&lt;br /&gt;
* 9:00 am PST: open&lt;br /&gt;
* 9:30 am PST: Tom and Josefine&lt;br /&gt;
| width=25% |&lt;br /&gt;
&lt;br /&gt;
==== 3 Feb (Mon, if needed) ====&lt;br /&gt;
* 9:00 am PST: Open&lt;br /&gt;
* 9:30 am PST: Chelsea, Katherine&lt;br /&gt;
&amp;lt;hr&amp;gt;&lt;br /&gt;
* 5:00 pm PST: Open&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Please only use these slots if none of the others work (it is a bit late in the timeline for proposals).&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The agenda for the phone call is (roughly):&lt;br /&gt;
&lt;br /&gt;
# Description of the basic idea behind the project (based on applicant&#039;s understanding)&lt;br /&gt;
# Discussion about approaches, things to read, variations to consider, etc.&lt;br /&gt;
# Discussion of the format of the proposal&lt;br /&gt;
# Questions and discussion about the process&lt;/div&gt;</summary>
		<author><name>Fbaldini</name></author>
	</entry>
	<entry>
		<id>https://murray.cds.caltech.edu/index.php?title=SURF_2020:_Social-Aware_Robot_Navigation&amp;diff=23229</id>
		<title>SURF 2020: Social-Aware Robot Navigation</title>
		<link rel="alternate" type="text/html" href="https://murray.cds.caltech.edu/index.php?title=SURF_2020:_Social-Aware_Robot_Navigation&amp;diff=23229"/>
		<updated>2019-12-12T03:20:57Z</updated>

		<summary type="html">&lt;p&gt;Fbaldini: Created page with &amp;quot;Real-time navigation in dense human environments is a challenging problem in robotics.  In order to navigate through a dense crowd in a socially compliant manner, robots need...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-time navigation in dense human environments is a challenging problem in robotics.&lt;br /&gt;
&lt;br /&gt;
In order to navigate through a dense crowd in a socially compliant manner, robots need to understand human behavior and comply with the crowd's cooperative rules. &lt;br /&gt;
&lt;br /&gt;
We are interested in an autonomous robot design that can detect human intentions and interact safely with people while navigating through the crowd. &lt;br /&gt;
&lt;br /&gt;
In this study, we may first use data-driven methods to detect the intention of the on-board human agent as well as the intentions of other agents nearby. &lt;br /&gt;
Then a higher-level controller can be designed to execute, modify, or override the human intention.&lt;br /&gt;
The autonomous robot design will be implemented and tested on a robot platform available in the lab. The testing scenarios may include crowded locations, e.g., a campus cafeteria. &lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;Skills Required&#039;&#039;: &lt;br /&gt;
&lt;br /&gt;
For the controller design, general knowledge of planar rigid-body dynamics and PID control is desired. &lt;br /&gt;
&lt;br /&gt;
For the experimental implementation, programming experience with Python, PyTorch, and ROS is needed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;References:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[1] https://www.semanticscholar.org/paper/Robot-navigation-in-dense-human-crowds%3A-Statistical-Trautman-Ma/f5783d21492ceeec6cfcf42595d270dedbb91d5e&lt;br /&gt;
&lt;br /&gt;
[2] https://arxiv.org/pdf/1809.08835.pdf&lt;br /&gt;
&lt;br /&gt;
[3] https://arxiv.org/pdf/1903.00143.pdf&lt;/div&gt;</summary>
		<author><name>Fbaldini</name></author>
	</entry>
	<entry>
		<id>https://murray.cds.caltech.edu/index.php?title=Sandeep_Chichali,_16_Oct_2019&amp;diff=23040</id>
		<title>Sandeep Chichali, 16 Oct 2019</title>
		<link rel="alternate" type="text/html" href="https://murray.cds.caltech.edu/index.php?title=Sandeep_Chichali,_16_Oct_2019&amp;diff=23040"/>
		<updated>2019-10-15T15:12:29Z</updated>

		<summary type="html">&lt;p&gt;Fbaldini: /* Schedule */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Sandeep Chinchali, a Caltech alum and PhD student at Stanford, will visit Caltech on 16 Oct (Wed).  If you would like to meet with him, please sign up here.&lt;br /&gt;
&lt;br /&gt;
=== Schedule ===&lt;br /&gt;
&lt;br /&gt;
* 11 am: seminar&lt;br /&gt;
* 12 pm: lunch with Richard (and Joel, if he is around)&lt;br /&gt;
* 1 pm: Open&lt;br /&gt;
* 1:30 pm: Open&lt;br /&gt;
* 2:00 pm: Open&lt;br /&gt;
* 2:30 pm: Francesca &lt;br /&gt;
* 3 pm: CDS tea (optional)&lt;br /&gt;
* 3:45 pm: RMM group meeting (you are welcome to come if you like)&lt;br /&gt;
&lt;br /&gt;
=== Seminar ===&lt;br /&gt;
&lt;br /&gt;
Distributed Perception Between Robots and the Cloud: A Learning-Based Approach &amp;lt;br&amp;gt;&lt;br /&gt;
Sandeep Chinchali, PhD candidate, Stanford &amp;lt;br&amp;gt;&lt;br /&gt;
16 Oct (Wed), 11a-12p, 121  Annenberg&lt;br /&gt;
&lt;br /&gt;
Today’s robotic fleets are increasingly facing two coupled challenges. First, they are measuring growing volumes of high-bitrate video and LIDAR sensory streams, which, second, often leads them to use increasingly compute-intensive models, such as deep neural networks (DNNs), for downstream perception or control. To cope with these intertwined challenges, compute and storage-limited robots, such as low-power drones, can offload data to central servers (or “the cloud”), for more accurate real-time perception as well as offline model learning. However, cloud processing of large robotic sensory streams introduces acute systems bottlenecks ranging from  network delay for real-time inference, to cloud storage, human annotation, and cloud-computing costs for offline model learning. &lt;br /&gt;
&lt;br /&gt;
In this talk, I will present learning-based approaches for robots to improve model performance with cloud computing, but with minimal systems cost. For real-time inference, I will present a deep reinforcement learning based offloader that decides when a robot should exploit low-latency, on-board computation, or, when highly uncertain, query a more accurate cloud model. Then, for continual learning, I will present an intelligent, on-robot sampler that mines real-time sensory streams for valuable training examples to send to the cloud for model re-training and specialization. Using insights from months of field data and experiments on state-of-the-art embedded deep learning hardware, I will show how simple learning algorithms can allow robots to significantly transcend their on-board sensing performance, but with limited cloud communication cost. Finally, I will conclude with future directions in data-driven networked control, informed by my industry collaborations.&lt;br /&gt;
&lt;br /&gt;
Bio:&lt;br /&gt;
Sandeep Chinchali is a final-year Computer Science PhD candidate at Stanford, advised by Marco Pavone and Sachin Katti. Previously, he was an early engineer at Uhana, a Stanford startup working on data-driven optimization of cellular networks, now acquired by VMWare. His &lt;br /&gt;
research on cloud robotics and data-driven control of wireless systems has led to successful proof-of-concept trials with major cellular network operators, and was a finalist for best student paper at Robotics: Science and Systems 2019. Prior to Stanford, he graduated from Caltech, where he worked on applications of formal methods in robotics.&lt;/div&gt;</summary>
		<author><name>Fbaldini</name></author>
	</entry>
	<entry>
		<id>https://murray.cds.caltech.edu/index.php?title=Thomas_Mohren,_2_Oct_2019&amp;diff=22956</id>
		<title>Thomas Mohren, 2 Oct 2019</title>
		<link rel="alternate" type="text/html" href="https://murray.cds.caltech.edu/index.php?title=Thomas_Mohren,_2_Oct_2019&amp;diff=22956"/>
		<updated>2019-09-30T03:19:31Z</updated>

		<summary type="html">&lt;p&gt;Fbaldini: /* Schedule */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Thomas Mohren, a PhD student working with Tom Daniel and Steve Brunton at U. Washington, will visit Caltech on 2-3 Oct 2019.  If you would like to meet with Thomas, please sign up here:&lt;br /&gt;
&lt;br /&gt;
=== Schedule ===&lt;br /&gt;
&lt;br /&gt;
2 Oct 2019 (Wed)&lt;br /&gt;
* 10:00 am: Group meeting presentation, 181 BBB&lt;br /&gt;
* 11:30 am: Richard&lt;br /&gt;
* 12:00 pm: Lunch with Petter, Yuxiao (pick up at Richard&#039;s office)&lt;br /&gt;
* 1:30 pm: Joel Burdick (room 245 Gates-Thomas)&lt;br /&gt;
* 2:15 pm: Karena Cai (331 Annenberg)&lt;br /&gt;
* 3:00 pm: CDS tea&lt;br /&gt;
* 3:45  pm: Amir (Dickinson lab) BBB 204&lt;br /&gt;
* 4:30 pm: Francesca (230 Annenberg)&lt;br /&gt;
&lt;br /&gt;
3 Oct 2019 (Thu)&lt;br /&gt;
* 10:00 am: Open&lt;br /&gt;
* 10:45 am: Open&lt;br /&gt;
* 11:30 am: Open&lt;br /&gt;
* 12:15 pm: Lunch (open)&lt;br /&gt;
* 1:30 pm: Open&lt;br /&gt;
* 2:15 pm: Open&lt;br /&gt;
* 3:00 pm: Leave for airport&lt;br /&gt;
&lt;br /&gt;
=== Group meeting talk ===&lt;br /&gt;
&lt;br /&gt;
Neural-inspired sparse sensing for classification and control &amp;lt;br&amp;gt;&lt;br /&gt;
Thomas Mohren, U. Washington&lt;br /&gt;
&lt;br /&gt;
Sparse sensor placement is a central challenge in the efficient characterization of complex systems when the cost of acquiring and processing data is high. Leading sparse sensing methods typically exploit either spatial or temporal correlations, but rarely both. We use sparse sensor optimization in combination with neural-inspired sensory encoding to leverage the spatiotemporal coherence exhibited by many biological systems. Using flying insects as a model, we subject a flapping plate with embedded strain gauges to inertial rotation. We show that nonlinear filtering in time is essential to detect rotation, whereas instantaneous measurements fail. My current project aims to understand how animals overcome temporal challenges such as sensory delays and time-varying control effectiveness during locomotion. Using the inverted pendulum as a benchmark model, I study the effectiveness of timing-based controllers in dealing with these challenges.&lt;/div&gt;</summary>
		<author><name>Fbaldini</name></author>
	</entry>
</feed>