Planning-based reinforcement learning has shown strong performance in tasks with discrete and low-dimensional continuous action spaces. However, scaling such methods to high-dimensional action spaces remains challenging. We propose the Trajectory Autoencoding Planner (TAP), which learns a compact discrete latent action space from offline data, enabling efficient model-based planning for continuous control in high-dimensional action spaces.
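The core idea can be sketched as follows. This is a minimal toy illustration, not TAP's actual architecture: the codebook, decoder, and return estimator below are hypothetical stand-ins for learned components, and planning is shown as a brute-force search over the discrete codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned components: a codebook of K discrete latent codes,
# each decoding to a short action sequence (here, fixed random tensors
# stand in for a trained decoder).
K, horizon, action_dim = 16, 4, 8
codebook_actions = rng.normal(size=(K, horizon, action_dim))

def estimated_return(state, actions):
    # Placeholder for a learned dynamics/value model scoring a plan.
    return -float(np.sum((actions - state) ** 2))

def plan(state):
    # Planning reduces to a discrete search over K codes instead of
    # optimizing in the raw (horizon * action_dim)-dimensional space.
    scores = [estimated_return(state, codebook_actions[k]) for k in range(K)]
    best = int(np.argmax(scores))
    return best, codebook_actions[best][0]  # chosen code, first action to execute

code, action = plan(np.zeros(action_dim))
```

The point of the sketch is the dimensionality reduction: the continuous optimization over a high-dimensional action sequence is replaced by a search over a small discrete set of latent codes.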
Our paper introduces STAMP, an end-to-end protocol for efficient privacy-preserving machine learning inference. STAMP combines a 3-party MPC protocol with a lightweight TEE (LTEE) to reduce MPC overhead while avoiding the challenges of a traditional TEE. STAMP achieves significantly lower inference overhead than state-of-the-art MPC protocols on either CPU or GPU, in both WAN and LAN settings.
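To give a flavor of the 3-party secret sharing that underlies such MPC protocols (a generic illustration, not STAMP's specific protocol; the field modulus is an arbitrary choice), consider additive sharing over a prime field:

```python
import secrets

P = 2**61 - 1  # illustrative prime field modulus

def share(x):
    # Split x into three additive shares summing to x mod P; any two
    # shares in isolation reveal nothing about x.
    s1 = secrets.randbelow(P)
    s2 = secrets.randbelow(P)
    s3 = (x - s1 - s2) % P
    return (s1, s2, s3)

def reconstruct(shares):
    return sum(shares) % P

def add_shares(a, b):
    # Each party adds its local shares; addition requires no
    # communication, which is why linear layers are cheap in MPC
    # while nonlinearities dominate the overhead.
    return tuple((ai + bi) % P for ai, bi in zip(a, b))

x, y = 123456, 654321
z = reconstruct(add_shares(share(x), share(y)))  # equals (x + y) % P
```

The expensive parts of MPC inference are the non-linear operations (comparisons, activations), which is where offloading to a lightweight TEE can cut communication rounds.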
Ditto takes a hierarchical approach to application cloning: it starts by capturing the dependency graph across distributed services, then recreates each tier’s control/data flow, and finally generates system calls and assembly that mimic the individual applications. Ditto does not reveal the logic of the original application, making it possible to publicly share clones of production services with hardware vendors, cloud providers, and the research community. We show that across a diverse set of single- and multi-tier applications, Ditto accurately captures their CPU and memory characteristics as well as their high-level performance metrics, is portable across platforms, and facilitates a wide range of system studies.
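The hierarchical pipeline can be caricatured in a few lines. Everything here is hypothetical: the tier names, the resource profiles, and the stub format are invented for illustration, and real generation targets system calls and assembly rather than strings.

```python
from graphlib import TopologicalSorter

# Toy dependency graph of service tiers and per-tier resource profiles
# (the kind of data the capture stage would measure).
deps = {"frontend": {"logic"}, "logic": {"db", "cache"}, "db": set(), "cache": set()}
profiles = {
    "frontend": {"cpu_cycles_per_req": 1_000, "working_set_kb": 64},
    "logic":    {"cpu_cycles_per_req": 5_000, "working_set_kb": 512},
    "db":       {"cpu_cycles_per_req": 9_000, "working_set_kb": 4096},
    "cache":    {"cpu_cycles_per_req": 500,   "working_set_kb": 1024},
}

def generate_clone(deps, profiles):
    # Walk tiers in dependency order and emit a synthetic stub that
    # reproduces each tier's resource footprint, not its business logic.
    stubs = []
    for tier in TopologicalSorter(deps).static_order():
        p = profiles[tier]
        stubs.append(
            f"spin({p['cpu_cycles_per_req']}); touch({p['working_set_kb']}KB)  # {tier}"
        )
    return stubs

clone = generate_clone(deps, profiles)
```

The key property the sketch preserves is that the clone is derived purely from observed behavior (dependencies and resource profiles), so the original code never leaves the owner.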