
Daniel J. Mankowitz


Staff Research Scientist @ Google DeepMind
Email: daniel (dot) mankowitz (at) gmail (dot) com


About Me

I am a Staff Research Scientist at Google DeepMind.

I currently work on solving the key challenges that prevent Reinforcement Learning algorithms from working on real-world applications at scale, with a focus on Reinforcement Learning from Human Feedback (RLHF) for Large Language Models (LLMs).

Some of my recent work has focused on using AI to find innovative ways of optimizing the computing stack, from software down to the underlying hardware; see our recent presentation at CogX and the corresponding blog post.

Applications I have worked on include:

  • Code optimization 

  • Code generation

  • Chip Design

  • Video Compression

  • Recommender Systems

  • Controlling physical systems such as Heating, Ventilation and Air-Conditioning (HVAC)

  • Plasma Magnetic Control

 

I have published work in Nature and Science (see Featured Papers below).

I have also published a Google AI blog post detailing the various challenges of real-world RL, as well as a suite we have open-sourced to accelerate research toward solving these challenges.

In 2018, I completed my PhD in Hierarchical Reinforcement Learning under the supervision of Professor Shie Mannor at the Technion – Israel Institute of Technology. I am a recipient of the Google PhD Fellowship.

 


Skills


Work History


Education


Featured Work

[Featured in: Google AI Blog, New Scientist, Nature, VentureBeat, Wired, MIT, and others]

Featured Papers

 Paper     |     Blog     |    News and Views

Fundamental algorithms such as sorting or hashing are used trillions of times on any given day. As demand for computation grows, it has become critical for these algorithms to be as performant as possible. Whereas remarkable progress has been achieved in the past, making further improvements on the efficiency of these routines has proved challenging for both human scientists and computational approaches. Here we show how artificial intelligence can go beyond the current state of the art by discovering hitherto unknown routines. To realize this, we formulated the task of finding a better sorting routine as a single-player game. We then trained a new deep reinforcement learning agent, AlphaDev, to play this game. AlphaDev discovered small sorting algorithms from scratch that outperformed previously known human benchmarks. These algorithms have been integrated into the LLVM standard C++ sort library. This change to this part of the sort library represents the replacement of a component with an algorithm that has been automatically discovered using reinforcement learning. We also present results in extra domains, showcasing the generality of the approach.
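To make the game formulation concrete, here is a minimal sketch of casting "find a short, correct sorting routine" as a single-player RL environment. This is an illustration only, not the AlphaDev setup: the compare-and-swap action set, the three-element inputs, and the correctness-minus-length reward are assumptions chosen to keep the example small and runnable.

```python
# Minimal sketch (not AlphaDev): building a sorting program is a single-player
# game in which each move appends one instruction, and the terminal reward
# trades correctness off against program length (a stand-in for latency).
import itertools
import random

NUM_ITEMS = 3                        # sort three-element inputs
ACTIONS = [(0, 1), (0, 2), (1, 2)]   # compare-and-swap(i, j) "instructions"
MAX_LEN = 5                          # budget on program length

ALL_INPUTS = list(itertools.permutations(range(NUM_ITEMS)))

def run_program(program, values):
    """Execute a list of compare-and-swap instructions on a copy of `values`."""
    v = list(values)
    for i, j in program:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

class SortingGame:
    """Single-player game: build a sorting program one instruction at a time."""

    def reset(self):
        self.program = []
        return tuple(self.program)

    def step(self, action):
        self.program.append(ACTIONS[action])
        # Correctness: fraction of all possible inputs the program sorts.
        correct = sum(
            run_program(self.program, x) == sorted(x) for x in ALL_INPUTS
        ) / len(ALL_INPUTS)
        done = correct == 1.0 or len(self.program) == MAX_LEN
        # Terminal reward trades correctness off against length (latency proxy).
        reward = correct - 0.01 * len(self.program) if done else 0.0
        return tuple(self.program), reward, done

if __name__ == "__main__":
    # A random policy stands in for the learned, search-based agent.
    env = SortingGame()
    env.reset()
    done = False
    while not done:
        _, reward, done = env.step(random.randrange(len(ACTIONS)))
    print(env.program, reward)
```

In the actual work the agent operates at the level of assembly instructions and is rewarded on measured correctness and latency; a learned, search-based agent replaces the random rollout used here to exercise the environment.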

"Software Engineered
Game-playing AI speeds up
sorting in computer code"
 -
Nature Cover Feature


Video streaming usage has seen a significant rise as entertainment, education, and business increasingly rely on online video. Optimizing video compression has the potential to increase access and quality of content to users, and reduce energy use and costs overall. In this paper, we present an application of the MuZero algorithm to the challenge of video compression. Specifically, we target the problem of learning a rate control policy to select the quantization parameters (QP) in the encoding process of libvpx, an open source VP9 video compression library widely used by popular video-on-demand (VOD) services. We treat this as a sequential decision making problem to maximize the video quality with an episodic constraint imposed by the target bitrate. Notably, we introduce a novel self-competition based reward mechanism to solve constrained RL with variable constraint satisfaction difficulty, which is challenging for existing constrained RL methods. We demonstrate that the MuZero-based rate control achieves an average 6.28% reduction in size of the compressed videos for the same delivered video quality level (measured as PSNR BD-rate) compared to libvpx's two-pass VBR rate control policy, while having better constraint satisfaction behavior.
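The self-competition mechanism can be sketched as follows. This is a simplified reading of the idea rather than the paper's exact reward: an episode is summarized by a (quality, bitrate) pair, and the agent earns +1 or -1 for beating a baseline built from its own past episodes, comparing on quality when both are within the bitrate budget and on the size of the overshoot otherwise. The EpisodeSummary fields and the numbers are illustrative.

```python
# Hedged sketch of a self-competition reward for episodic constrained RL.
from dataclasses import dataclass

@dataclass
class EpisodeSummary:
    quality: float   # e.g. mean quality (PSNR) of the encoded video
    bitrate: float   # achieved bitrate in kbps

def overshoot(ep: EpisodeSummary, target_bitrate: float) -> float:
    """How far the episode exceeds the bitrate budget (0 if within budget)."""
    return max(0.0, ep.bitrate - target_bitrate)

def self_competition_reward(current: EpisodeSummary,
                            baseline: EpisodeSummary,
                            target_bitrate: float) -> float:
    """+1 if the current episode beats the historical baseline, else -1."""
    cur_over = overshoot(current, target_bitrate)
    base_over = overshoot(baseline, target_bitrate)
    if cur_over == 0.0 and base_over == 0.0:
        # Both within budget: compete on quality.
        return 1.0 if current.quality > baseline.quality else -1.0
    # Otherwise: compete on how badly the bitrate constraint is violated.
    return 1.0 if cur_over < base_over else -1.0

# The baseline can be a moving average of the agent's own past episodes, so the
# bar for earning +1 rises as the agent improves.
baseline = EpisodeSummary(quality=38.0, bitrate=990.0)
episode = EpisodeSummary(quality=38.5, bitrate=980.0)
print(self_competition_reward(episode, baseline, target_bitrate=1000.0))  # 1.0
```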

[Images: MuZero rate control for YouTube video compression; AlphaCode on the cover of Science (2022)]

Paper     |     New Scientist

[Images: HVAC control; fusion plasma magnetic control]

Resource scheduling and allocation is a critical component of many high impact systems ranging from congestion control to cloud computing. Finding more optimal solutions to these problems often has significant impact on resource and time savings, reducing device wear-and-tear, and even potentially reducing carbon emissions. In this paper, we focus on a specific instance of a scheduling problem, namely the memory mapping problem that occurs during compilation of machine learning programs: That is, mapping tensors to different memory layers to optimize execution time. We introduce an approach for solving the memory mapping problem using Reinforcement Learning. RL is a solution paradigm well-suited for sequential decision making problems that are amenable to planning, and combinatorial search spaces with high-dimensional data inputs. We formulate the problem as a single-player game, which we call the mallocGame, such that high-reward trajectories of the game correspond to efficient memory mappings on the target hardware. We also introduce a Reinforcement Learning agent, mallocMuZero, and show that it is capable of playing this game to discover new and improved memory mapping solutions that lead to faster execution times on real ML workloads on ML accelerators. We compare the performance of mallocMuZero to the default solver used by the Accelerated Linear Algebra (XLA) compiler on a benchmark of realistic ML workloads. In addition, we show that mallocMuZero is capable of improving the execution time of the recently published AlphaTensor matrix multiplication model.
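A toy sketch of the game framing is below. The buffer sizes, access counts, two-layer memory model, and cost function are invented for the example and are unrelated to the real XLA cost model; the point is only to show how mapping decisions become moves in a single-player game whose terminal reward is the negative estimated execution time.

```python
# Toy mallocGame-style environment: assign each tensor buffer to fast or slow
# memory under a fast-memory capacity limit; the terminal reward is the
# negative of a (toy) estimated execution time.
FAST_CAPACITY = 8                    # units of fast (on-chip) memory
FAST_COST, SLOW_COST = 1.0, 10.0     # per-access latency of each memory layer

# (size, number_of_accesses) for each tensor buffer, in allocation order.
BUFFERS = [(4, 30), (2, 5), (6, 20), (3, 50)]

class MallocGame:
    def reset(self):
        self.step_idx = 0
        self.fast_used = 0
        self.total_cost = 0.0
        return (self.step_idx, self.fast_used)

    def step(self, put_in_fast: bool):
        size, accesses = BUFFERS[self.step_idx]
        if put_in_fast and self.fast_used + size <= FAST_CAPACITY:
            self.fast_used += size
            self.total_cost += accesses * FAST_COST
        else:
            self.total_cost += accesses * SLOW_COST
        self.step_idx += 1
        done = self.step_idx == len(BUFFERS)
        # High-reward trajectories correspond to fast memory mappings.
        reward = -self.total_cost if done else 0.0
        return (self.step_idx, self.fast_used), reward, done

if __name__ == "__main__":
    # Exhaustive search stands in for a learned planner on this toy instance.
    best = None
    for mask in range(2 ** len(BUFFERS)):
        env, reward = MallocGame(), 0.0
        env.reset()
        for i in range(len(BUFFERS)):
            _, reward, _ = env.step(bool((mask >> i) & 1))
        best = reward if best is None else max(best, reward)
    print("best reward (negative cost):", best)
```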


Reinforcement Learning (RL) has proven to be effective in solving numerous complex problems ranging from Go, StarCraft and Minecraft to robot locomotion and chip design. In each of these cases, a simulator is available or the real environment is quick and inexpensive to access. Yet, there are still considerable challenges to deploying RL to real-world products and systems....


Ventures

Fitterli (Jan 2014 – June 2015)

We at Fitterli are unleashing the power of the 3D depth camera. We are developing the first body digitization application that will allow you to digitize yourself with your tablet or laptop using a 3D-enabled camera. The app automatically extracts your exact body measurements and opens up a whole host of services, including tracking your body while following a diet or fitness regime, ordering bespoke clothes from tailors online, 3D virtual changing rooms and much more!

Selected Publications

Paper     |     ICML RL4RealLife Workshop (2019), Best Paper Award

We present a set of nine unique challenges that must be addressed to productionize RL for real-world problems....


Paper     |     Springer Special Issue on RL for Real Life (2021)

Our proposed challenges are implemented in a suite of continuous control environments called realworldrl-suite, which we propose as an open-source benchmark...


Paper     |     ICLR (2019)

In this work we present a novel multi-timescale approach for constrained policy optimization, called `Reward Constrained Policy Optimization' (RCPO), which uses an alternative penalty signal to guide the policy towards a constraint satisfying one. We prove the convergence of our approach and provide empirical evidence of its ability to train constraint satisfying policies.
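The multi-timescale mechanism can be illustrated on a toy two-action bandit. The REINFORCE-style update, the step sizes, and the task itself are illustrative assumptions rather than the setup evaluated in the paper: the policy follows the penalized reward r - λ·c on the fast timescale, while λ is increased on the slower timescale whenever the expected constraint signal exceeds the allowed threshold α.

```python
# Toy two-action bandit illustrating RCPO-style multi-timescale updates.
import numpy as np

rng = np.random.default_rng(0)

REWARD = np.array([1.0, 2.0])       # action 1 pays more...
CONSTRAINT = np.array([0.0, 1.0])   # ...but incurs the constraint signal
ALPHA = 0.2                         # allowed expected constraint level

logits = np.zeros(2)                # softmax policy parameters
lam = 0.0                           # Lagrange multiplier
POLICY_LR, LAMBDA_LR = 0.1, 0.01    # lambda moves on the slower timescale

for _ in range(5000):
    probs = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(2, p=probs)
    penalized = REWARD[a] - lam * CONSTRAINT[a]   # penalty-guided reward

    # Fast timescale: policy-gradient step on the penalized reward.
    grad = -probs
    grad[a] += 1.0
    logits += POLICY_LR * penalized * grad

    # Slow timescale: raise lambda while the constraint is violated in expectation.
    lam = max(0.0, lam + LAMBDA_LR * (probs @ CONSTRAINT - ALPHA))

probs = np.exp(logits) / np.exp(logits).sum()
print("policy:", probs, "lambda:", lam)  # drives P(action 1) toward roughly ALPHA
```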


Paper     |     AAAI 2017

We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledge-base ... The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al., 2015) in sub-domains of Minecraft.


Paper     |     ICLR 2020

We provide a framework for incorporating robustness -- to perturbations in the transition dynamics which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms....
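One simple way to picture this, under strong simplifying assumptions (an explicit, finite uncertainty set and a toy value function rather than the paper's continuous-control algorithms), is a TD target that bootstraps from the worst-case next-state value over a set of perturbed dynamics models:

```python
# Illustrative robust TD target: bootstrap from the worst case over an
# uncertainty set of perturbed transition models. The perturbations, value
# function and state/action vectors below are toy placeholders.
import numpy as np

def robust_td_target(state, action, reward, perturbed_models, value_fn, gamma=0.99):
    """Target = r + gamma * min over the uncertainty set of V(next_state)."""
    return reward + gamma * min(value_fn(m(state, action)) for m in perturbed_models)

# Uncertainty set: nominal dynamics with the action's effect scaled,
# e.g. mimicking misspecified mass or friction.
perturbed_models = [lambda s, a, k=k: s + k * a for k in (0.8, 1.0, 1.2)]
value_fn = lambda s: -float(np.sum(s ** 2))        # toy value estimate

state, action = np.array([0.5, -0.2]), np.array([0.1, 0.3])
print(robust_td_target(state, action, reward=1.0,
                       perturbed_models=perturbed_models, value_fn=value_fn))
```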


Paper     |     RLDM 2019

We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.
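The joint-learning idea can be illustrated with a deliberately simple tabular stand-in for the deep, task-conditioned network used in the paper: every transition collected by any actor updates the action values of all tasks off-policy, with each task defined by its own (hypothetical) reward function.

```python
# Tabular stand-in for jointly learning many policies off-policy: one shared
# transition updates every task's action values. Task names, reward functions
# and the environment interface are invented for this sketch.
import numpy as np
from collections import defaultdict

GAMMA, LR, NUM_ACTIONS = 0.95, 0.1, 4

# Each task is defined by its own reward function over (state, action).
TASKS = {
    "reach_a": lambda s, a: 1.0 if s == "a" else 0.0,
    "reach_b": lambda s, a: 1.0 if s == "b" else 0.0,
}

# Q[(task, state)] -> vector of action values; one table represents all policies.
Q = defaultdict(lambda: np.zeros(NUM_ACTIONS))

def update_all_tasks(state, action, next_state):
    """Off-policy Q-learning update for every task from a single transition."""
    for task, reward_fn in TASKS.items():
        target = reward_fn(next_state, action) + GAMMA * Q[(task, next_state)].max()
        Q[(task, state)][action] += LR * (target - Q[(task, state)][action])

# A transition collected while pursuing one task still teaches all the others.
update_all_tasks(state="start", action=2, next_state="a")
print(Q[("reach_a", "start")][2], Q[("reach_b", "start")][2])
```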
