Reinforcement Learning
Efficient Planning in a Compact Latent Action Space
Planning-based reinforcement learning has shown strong performance in tasks with discrete and low-dimensional continuous action spaces. However, scaling such methods to high-dimensional action spaces remains challenging. We propose the Trajectory Autoencoding Planner (TAP), which learns a compact discrete latent action space from offline data for efficient planning, enabling continuous control in high-dimensional action spaces with a learned model.
Zhengyao Jiang, Tianjun Zhang, Michael Janner, Yueying Li, Tim Rocktäschel, Edward Grefenstette, Yuandong Tian
PDF
Cite
Code
Project
AutoCAT: Reinforcement Learning for Automated Exploration of Cache-Timing Attacks
Security can be seen as a competitive game between an attacker and a defender. RL can be used to automatically explore attacks (and defenses) on a black-box system.
Oct 28, 2022
New Hampshire
Mulong Luo, Wenjie Xiong, Geunbae Lee, Yueying Li, Xiaomeng Yang, Amy Zhang, Yuandong Tian, Hsien-Hsin S. Lee, and Edward Suh
Slides