Dynamic Tensor Rematerialization (DTR)
Marisa Kirisame, Steven Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, Zachary Tatlock
- Saves memory for NN training by dynamically discarding and recomputing intermediate results at runtime (a conceptual sketch follows below).
- By being smart about what to keep and what to discard, it lets you train larger models under a tight memory budget.
- Saves 3x memory for 20% extra compute time - enabling training of models 3x as large!
- Makes no assumptions about the model, and does not require a static graph.
- So it can be easily implemented in different deep learning frameworks.
- And it still achieves an amazing space-time tradeoff!
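To make the core idea concrete, here is a minimal conceptual sketch, not the actual DTR implementation, of a tensor cell that remembers how it was computed, so its buffer can be freed under memory pressure and recomputed on demand. The names (`RematTensor`, `materialize`, `evict`) are hypothetical.

```python
import numpy as np

class RematTensor:
    """A tensor that records how to recompute itself (conceptual sketch)."""

    def __init__(self, compute, parents=()):
        self.compute = compute    # closure that recreates the value from parent values
        self.parents = parents    # upstream RematTensors needed for recomputation
        self.value = None         # materialized buffer, or None if evicted

    def materialize(self):
        # Recursively rematerialize parents, then recompute this value if evicted.
        if self.value is None:
            args = [p.materialize() for p in self.parents]
            self.value = self.compute(*args)
        return self.value

    def evict(self):
        # Free the buffer; it can be recomputed later via materialize().
        self.value = None

# Example: c = relu(a @ b); evict c to save memory, recompute when needed.
a = RematTensor(lambda: np.random.rand(512, 512))
b = RematTensor(lambda: np.random.rand(512, 512))
c = RematTensor(lambda x, y: np.maximum(x @ y, 0.0), parents=(a, b))
c.materialize()   # compute and cache
c.evict()         # drop the buffer under memory pressure
c.materialize()   # transparently recomputed from a and b
```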
- Gradient Checkpointing is a technique to save memory during deep neural network training (a usage sketch follows below).
- Or more generally, for reverse-mode automatic differentiation.
- However, optimal memory planning is NP-complete.
- Checkpointing also has to deal with programs with arbitrary control flow.
- To combat this, previous works made different restrictions which sacrifice performance or usability.
- Some works model the program as a stack machine with no heap...
- And suffer performance degradation when that assumption is broken!
- (For example, NNs with highway connections/branching).
- Other works use an ILP solver, which consumes a lot of time to find an optimal memory plan.
- And they can only be used for programs/frameworks without control flow, posing problems for real-world adoption.
- Additionally, gradient checkpointing couples derivative calculation with memory saving via recomputation.
- This adds complexity and limits the range of applications.
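For contrast, here is roughly what static gradient checkpointing looks like in PyTorch using `torch.utils.checkpoint`: the user manually decides, ahead of time, which segment's activations to discard and recompute during backward. The module structure and sizes below are made up for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical two-segment model; layer sizes are arbitrary.
segment1 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(),
                         nn.Linear(1024, 1024), nn.ReLU())
segment2 = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(),
                         nn.Linear(1024, 10))

x = torch.randn(32, 1024, requires_grad=True)

# The user picks, statically, that segment1's activations are not stored
# in the forward pass and are recomputed during the backward pass.
h = checkpoint(segment1, x)
y = segment2(h)
y.sum().backward()
```

The planning here is baked into the program text, which is exactly the rigidity DTR avoids by deciding at runtime.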
- DTR tackles the problems above by planning memory greedily at runtime, instead of as a compiler pass.
- This solves the control-flow and stack-machine issues, since we do not model the program in any way!
- Yet, with a novel cache eviction policy, we are still able to achieve great performance (a simplified sketch of such a policy follows below).
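As a rough illustration of such a policy (a simplification of the heuristic described in the DTR paper, with hypothetical field and function names): when the memory budget is exceeded, greedily evict the resident tensor that is cheap to recompute, large, and stale.

```python
import time

def eviction_score(t, now):
    # Lower score = better eviction candidate (hypothetical tensor fields).
    # recompute_cost: estimated time to recompute the tensor
    # memory_bytes:   bytes freed by evicting it
    # staleness:      time since the tensor was last accessed
    staleness = max(now - t.last_access, 1e-9)
    return t.recompute_cost / (t.memory_bytes * staleness)

def evict_until_within_budget(resident_tensors, used_bytes, budget_bytes):
    # Greedy loop: repeatedly evict the lowest-scoring resident tensor
    # until memory usage falls back under the budget.
    while used_bytes > budget_bytes and resident_tensors:
        now = time.time()
        victim = min(resident_tensors, key=lambda t: eviction_score(t, now))
        resident_tensors.remove(victim)
        used_bytes -= victim.memory_bytes
        victim.evict()
    return used_bytes
```

Because the choice is made per allocation at runtime, it applies equally well to models with branching, loops, or data-dependent control flow.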