Update README.md

Tony Z. Zhao
2023-05-16 22:45:41 -07:00
committed by GitHub
parent 25c79bc6c6
commit 606e7a0667


@@ -1,6 +1,7 @@
# ACT: Action Chunking with Transformers
### *New*: [ACT tuning tips](https://docs.google.com/document/d/1FVIZfoALXg_ZkYKaYVh-qOlaXveq5CtvJHXkY25eYhs/edit?usp=sharing)
TL;DR: if your ACT policy is jerky or pauses in the middle of an episode, train for longer! Success rate and smoothness can keep improving well after the loss plateaus.
#### Project Website: https://tonyzhaozh.github.io/aloha/
@@ -83,6 +84,6 @@ To enable temporal ensembling, add flag ``--temporal_agg``.
Videos will be saved to ``<ckpt_dir>`` for each rollout.
You can also add ``--onscreen_render`` to see real-time rendering during evaluation.
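Temporal ensembling (enabled by ``--temporal_agg`` above) averages the overlapping action chunks predicted at different timesteps, weighting older predictions with an exponential decay, which is what smooths out the policy. Below is a minimal illustrative sketch of that averaging step; the function name ``temporal_ensemble``, the buffer layout, the all-zero "unpopulated" check, and the decay rate ``m`` are assumptions for illustration, not this repo's exact implementation.

```python
import numpy as np

def temporal_ensemble(all_time_actions, t, m=0.01):
    """Average every stored action-chunk prediction that covers timestep t.

    all_time_actions: (T, T + chunk, action_dim) buffer (hypothetical layout):
    the chunk predicted at step s is written into row s, columns s..s+chunk-1.
    Weights w_i = exp(-m * i), with i = 0 the oldest prediction, so earlier
    predictions get slightly more weight; larger m means faster decay.
    """
    actions_for_t = all_time_actions[:, t]                    # (T, action_dim)
    # Assumption: unpopulated slots are exactly zero, so an all-zero row
    # means "no prediction covered t from that step".
    populated = np.abs(actions_for_t).sum(axis=1) != 0
    actions = actions_for_t[populated]                        # (k, action_dim)
    weights = np.exp(-m * np.arange(len(actions)))
    weights = weights / weights.sum()
    return (weights[:, None] * actions).sum(axis=0)           # (action_dim,)
```

With ``m = 0`` this reduces to a plain mean over all chunks covering ``t``; tuning ``m`` trades responsiveness (large ``m``) against smoothness (small ``m``).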
For real-world data, which is harder to model, train for at least 5000 epochs, or 3-4 times as long as it took for the loss to plateau.
Please refer to [tuning tips](https://docs.google.com/document/d/1FVIZfoALXg_ZkYKaYVh-qOlaXveq5CtvJHXkY25eYhs/edit?usp=sharing) for more info.