diff --git a/README.md b/README.md
index a790d34..9f90f3e 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,7 @@
 # ACT: Action Chunking with Transformers
 ### *New*: [ACT tuning tips](https://docs.google.com/document/d/1FVIZfoALXg_ZkYKaYVh-qOlaXveq5CtvJHXkY25eYhs/edit?usp=sharing)
+TL;DR: if your ACT policy is jerky or pauses mid-episode, just train for longer! Success rate and smoothness can keep improving well after the loss plateaus.
 #### Project Website: https://tonyzhaozh.github.io/aloha/
@@ -83,6 +84,6 @@
 To enable temporal ensembling, add flag ``--temporal_agg``.
 Videos will be saved to ```` for each rollout.
 You can also add ``--onscreen_render`` to see real-time rendering during evaluation.
-
-
+For real-world data, where the dynamics can be harder to model, train for at least 5000 epochs, or 3-4 times longer than the point where the loss plateaus.
+Please refer to the [tuning tips](https://docs.google.com/document/d/1FVIZfoALXg_ZkYKaYVh-qOlaXveq5CtvJHXkY25eYhs/edit?usp=sharing) for more info.