Julius Arolovitch

Undergraduate, Electrical and Computer Engineering & Robotics, CMU

Email: juliusa@cmu.edu

Phone: (617) 992-8096

Address: 1612E Newell-Simon Hall, 5000 Forbes Ave, Pittsburgh, PA, 15213

CV: Download


About Me

Hello! I'm Julius, a junior at Carnegie Mellon studying ECE and Robotics. I conduct research at CMU's Robotics Institute under Professors Maxim Likhachev and Howie Choset on motion planning.

My current work focuses on developing algorithms that leverage parallelism for manipulation planning (Search-based Planning Lab) and decentralized agent-distribution methods for ergodic search with agent teams that have heterogeneous sensing abilities (Biorobotics Lab).

I am excited by work to better enable robotic agents to plan in dynamic, stochastic, or partially-observable environments. More broadly, I am passionate about building intelligent systems that help people.

Previously, I spent the summer of 2023 at Tel Aviv University's Robotics Lab working on learning for object recognition with underactuated hands, and the summer of 2024 at Johnson & Johnson doing systems integration R&D for the robotic manipulators on Ottava. Before I became interested in robotics, I wanted to be a doctor; I spent my first year of undergrad as a pre-med student in the Bioengineering program at the University of Pittsburgh.

In my free time, I enjoy exploring new corners of the world, hiking up tall things, playing classical piano, and competing in poker and hackathons. I'm actively involved in Jewish life at CMU, serving as President of CMU's Hillel and in various capacities in student government.

Publications

Learning Neural Priority Functions for Search-based Planning using Sufficient Conditions for Bounded Suboptimality without Re-openings

Authors: Julius Arolovitch*, Itamar Mishani*, Ramkumar Natarajan, Maxim Likhachev

Under Review, International Conference on Automated Planning and Scheduling (ICAPS) 2025

Download PDF
Kinesthetic-based In-Hand Object Recognition with an Underactuated Robotic Hand

Authors: Julius Arolovitch*, Osher Azulay*, Avishai Sintov

IEEE International Conference on Robotics and Automation (ICRA) 2024

Teaching

16-280 Intelligent Robotic Systems, Teaching Assistant, Carnegie Mellon, Spring 2025

18-100 Introduction to Electrical & Computer Engineering, Teaching Assistant, Carnegie Mellon, Spring 2025

PHYS0175 Physics 2 for Engineers, Teaching Assistant, University of Pittsburgh, Spring 2023

Projects

DAgger and Behavior Cloning in OpenAI Gym

For 16-831 Introduction to Robot Learning @ CMU, I implemented DAgger and Behavior Cloning for agents in OpenAI Gym and MuJoCo. Using a provided oracle policy, the agents achieved expert-level mean evaluation returns after 5 and 30 iterations for the Ant-v2 and Humanoid-v2 environments, respectively. The policy network is a multilayer perceptron (MLP). Links: GitHub and writeup.
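For context, the core DAgger loop can be sketched in a few lines. This is a minimal toy illustration only, not the course implementation: it uses a 1-D environment, a hand-written oracle, and a least-squares fit standing in for the MLP policy. All function names and dynamics here are assumptions for the sketch.

```python
import numpy as np

def expert_policy(obs):
    """Toy oracle: act proportionally to the observation (stand-in for
    the provided expert policy)."""
    return 2.0 * obs

def rollout(policy, horizon=20, seed=0):
    """Run the given policy and collect the states it actually visits."""
    rng = np.random.default_rng(seed)
    obs, states = rng.normal(), []
    for _ in range(horizon):
        states.append(obs)
        obs = obs - 0.1 * policy(obs) + 0.01 * rng.normal()  # toy dynamics
    return np.array(states)

def fit_policy(states, actions):
    """'Train' the learner: a 1-D least-squares fit (MLP stand-in)."""
    w = float(states @ actions / (states @ states + 1e-8))
    return lambda obs: w * obs

def dagger(n_iters=5):
    # Iteration 0 is plain Behavior Cloning on expert rollouts.
    states = rollout(expert_policy)
    policy = fit_policy(states, expert_policy(states))
    for i in range(1, n_iters):
        # DAgger's key step: roll out the *learner*, so the dataset
        # covers states the learner itself reaches...
        states = np.concatenate([states, rollout(policy, seed=i)])
        # ...then relabel every state with the expert's action.
        policy = fit_policy(states, expert_policy(states))
    return policy

learned = dagger()
```

The aggregation step is what distinguishes DAgger from plain Behavior Cloning: by querying the expert on states the learner visits, it corrects the distribution mismatch that makes pure cloning drift off the expert's trajectory.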

Below are the results of training the Humanoid-v2 and Ant-v2 agents using DAgger, visualized over iterations:

Humanoid-v2

Humanoid DAgger Iteration 1
Iteration 1
Humanoid DAgger Iteration 5
Iteration 5
Humanoid DAgger Iteration 30
Iteration 30

Ant-v2

Ant DAgger Iteration 5
Iteration 5

SheepFusion

In my initial experimentation with diffusion, I set out to build an image diffusion model. The UNet-based architecture, with 137 parameters, took 3 hours to train on a dataset of 1,700 publicly available images of sheep. Considering the size of the dataset, even after augmentation, the results were promising!
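For reference, a single DDPM-style training step can be sketched as below. This is a hedged toy sketch, not the SheepFusion code: the noise schedule values are common defaults, a dummy zero-predicting function stands in for the UNet, and all names here are assumptions.

```python
import numpy as np

# Linear beta schedule and cumulative signal-retention terms.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def noise_image(x0, t, rng):
    """Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

def training_loss(predict_eps, x0, rng):
    """Standard DDPM objective: MSE between true and predicted noise
    at a randomly sampled timestep."""
    t = int(rng.integers(0, T))
    xt, eps = noise_image(x0, t, rng)
    return float(np.mean((predict_eps(xt, t) - eps) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))  # stand-in for one training image
# A real model would be the UNet; here a dummy that predicts zero noise.
loss = training_loss(lambda xt, t: np.zeros_like(xt), x0, rng)
```

Training consists of repeating this step over the dataset; sampling then runs the learned denoiser backwards from pure noise, which is what produces the GIFs below.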

Sheep Diffusion Model GIF 1 Sheep Diffusion Model GIF 2