Upgrade from 1.7 to 2.0

Regular User

| If | Then | Ref | 
|---|---|---|
| have wrapped your loggers with  | directly pass a list of loggers to the Trainer and access the list via the  (see the logger sketch after this table) | |
| used  | access  | |
| used  | upgrade to the latest API | |
| used   | use   | |
| used   | use   | |
| used   | switch to general purpose hook  | |
| used   | switch to general purpose hook  | |
| used Trainer’s flag  | use directly  | |
| used Trainer’s property  | | |
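
For the logger row above, a minimal sketch of passing several loggers directly to the Trainer and reading them back through `trainer.loggers` (the specific logger classes here are illustrative choices, not mandated by the table):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

# Pass a plain list of loggers instead of wrapping them yourself.
trainer = pl.Trainer(
    logger=[TensorBoardLogger("logs/"), CSVLogger("logs/")],
    max_epochs=1,
)

# The configured loggers are available as a read-only list.
for logger in trainer.loggers:
    print(type(logger).__name__)
```
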
| If | Then | Ref | 
|---|---|---|
| used  | set  | |
| used  | call  | |
| imported  | import  | |

| If | Then | Ref | 
|---|---|---|
| used Python 3.7 | upgrade to Python 3.8 or higher | |
| used PyTorch 1.10 | upgrade to PyTorch 1.11 or higher | |
| used Trainer’s flag  | use  | |
| used Trainer’s flag  | use  | |
| used Trainer’s flag  | use  | |
| used Trainer’s flag  | use  | |
| used Trainer’s flag  | pass the path to the  (see the checkpoint-path sketch after this table) | |
| used Trainer’s flag  | use  | |
| called the  | use Trainer’s flag  | |
| called the  | use Trainer’s flag  | |
| used Trainer’s flag   | use the   | |
| imported profilers from  | import from  | |
| used  | move to a standalone  | |
| used Trainer’s flag  | use  | |
| used Trainer’s flag  | use callbacks  | 
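
Several rows above replace Trainer flags with explicit constructor arguments or with per-call parameters. The exact flags are not reproduced here, so the sketch below only illustrates the 2.0-style calls under that assumption, including resuming from a checkpoint path passed to `fit()`; the model class, device counts, and paths are placeholders:

```python
import pytorch_lightning as pl

model = MyLightningModule()  # placeholder: assumed to be defined elsewhere

# Hardware is selected through explicit accelerator/devices arguments.
trainer = pl.Trainer(accelerator="gpu", devices=2, max_epochs=3)

# Resuming is requested per call by passing the checkpoint path to fit().
trainer.fit(model, ckpt_path="checkpoints/last.ckpt")
```
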

Advanced User

| If | Then | Ref | 
|---|---|---|
| used  | switch to  | |
| used   | now use  | |
| used any   | rename them to   | |
| used  | rely on protected  | |
| used  | rely on protected   | |
| used  | switch to built-in https://github.com/pytorch/torchdistx support | |
| have implemented  | move your implementation to  | |
| have implemented the  | move your implementation to  | |
| have implemented the  | move your implementation to  | |
| have implemented the  | move your implementation to  | |
| have implemented the  | move your implementation  to  | |
| have implemented the  | move your implementation to  | |
| used  | use  | |
| used  | use  | |
| used Trainer’s attribute  | it was replaced by   | |
| used Trainer’s attribute  | it was replaced by   | |
| used Trainer’s attribute  | use  | |
| used Trainer’s attribute  | use   | |
| used Trainer’s attribute  | use  | |
| used   | switch to using  | |
| used  | it was removed | |
| logged with  | switch to  | |
| used   | log metrics explicitly (see the logging sketch after this table) | |
| used   | log metrics explicitly | |
| used   | rely on generic read-only property  | |
| used   | rely on generic read-only property  | |
| used   | rely on generic read-only property  | |
| rely on the returned dictionary from   | call directly  | 
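
Where the table above says to log metrics explicitly, the usual replacement is calling `self.log` from the relevant hook of your LightningModule. A minimal sketch; the metric name, model, and logging options are illustrative:

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        # Log the metric explicitly instead of returning it for implicit aggregation.
        self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```
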
| If | Then | Ref | 
|---|---|---|
| imported  | import  | |
| imported  | import  | |
| imported  | import   | |
| imported profiler classes from  | import  (see the import sketch after this table) | |
| used  | use  | |
| used  | use  | |
| used the  | switch to  | |
| used the Lightning Hydra multi-run integration | removed support for it as it caused issues with processes hanging | |
| used  | use   | 
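
For the profiler import row above, the classes now live in the pluralized `pytorch_lightning.profilers` module; a sketch of the 2.0-style import (the choice of `SimpleProfiler` is illustrative):

```python
import pytorch_lightning as pl
from pytorch_lightning.profilers import SimpleProfiler  # note the plural module name

trainer = pl.Trainer(profiler=SimpleProfiler(), max_epochs=1)
```
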
| If | Then | Ref | 
|---|---|---|
| used the  | switch to  | |
| used Trainer’s flag  | use DDP with  | |
| implemented  | port your logic to   | |
| implemented  | port your logic to   | |
| implemented  | port your logic to   | |
| used Trainer’s flag  | switch to   | |
| used Trainer’s flag  | implement particular offload logic in your custom metric or turn it on in  | |
| used Trainer’s flag  | overwrite  | |
| used Trainer’s flag  | use   | |
| relied on the  | switch to manual optimization | |
| relied on the  | switch to manual optimization | |
| were using  | switch to PyTorch native mixed precision (see the precision and FSDP sketch after this table) | |
| used Trainer’s flag  | use PyTorch native mixed precision | |
| used Trainer’s flag  | use PyTorch native mixed precision | |
| used Trainer’s flag  | use PyTorch native mixed precision | |
| used Trainer’s attribute  | use PyTorch native mixed precision | |
| used Trainer’s attribute  | use PyTorch native mixed precision | |
| used Trainer’s attribute  | use PyTorch native mixed precision | |
| used the  | consider using PyTorch’s native FSDP implementation or outsource the implementation into your own project | |
| used  | use native FSDP instead (see the precision and FSDP sketch after this table) | |
| used  | use native FSDP instead | |
| used  | use native FSDP instead | |
| used  | use native FSDP instead | |
| used  | use native FSDP instead | |
| used  | use native FSDP instead | |
| used  | pass this option via a dictionary of  | |
| used  | pass this option via a dictionary of  | |
| have customized loops  | implement your training loop with Fabric (see the Fabric sketch after this table). | |
| have customized loops  | implement your training loop with Fabric. | |
| have customized loops  | implement your training loop with Fabric. | |
| used the Trainer’s  | implement your training loop with Fabric | |
| used the Trainer’s  | implement your training loop with Fabric | |
| used the Trainer’s  | implement your training loop with Fabric | |
| used the Trainer’s  | implement your training loop with Fabric | |
| used the  | it is now marked as protected | |
| used  | use manual optimization (see the manual-optimization sketch after this table) | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used declaring optimizer frequencies in the dictionary returned from  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used  | use manual optimization | |
| used Trainer’s  | use manual optimization | |
| used  | | |
| used training integration with Horovod | install standalone package/project | |
| used training integration with ColossalAI | install standalone package/project | |
| used  | use Torch’s Quantization directly | |
| had any logic except reducing the DP outputs in   | port it to  | |
| had any logic except reducing the DP outputs in   | port it to  | |
| had any logic except reducing the DP outputs in   | port it to  | |
| used  | switch to general   | |
| used the automatic addition of a moving average of the  | use  | |
| rely on the  | access them via  | |
| need to pass a dictionary to  | pass them independently. | 
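
For the rows that point to PyTorch native mixed precision and to native FSDP, both are selected through plain Trainer arguments in 2.0 rather than through the flags or classes referenced in the table. A sketch under that assumption; the precision string, accelerator, and device counts are illustrative:

```python
import pytorch_lightning as pl

# Native AMP is requested through the precision argument alone.
amp_trainer = pl.Trainer(accelerator="gpu", devices=1, precision="16-mixed")

# PyTorch's native FullyShardedDataParallel, selected as a named strategy.
fsdp_trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="fsdp")
```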
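
Where the table directs custom loops and loop APIs to Fabric, the idea is to keep a plain PyTorch loop and let Fabric handle device placement, process launching, and the backward call. A minimal sketch; the model, dataloader, loss, and hyperparameters are placeholders:

```python
import torch
from lightning.fabric import Fabric


def train(model, dataloader, num_epochs=1):
    fabric = Fabric(accelerator="auto", devices=1)
    fabric.launch()

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # Fabric wraps the model/optimizer and moves batches to the right device.
    model, optimizer = fabric.setup(model, optimizer)
    dataloader = fabric.setup_dataloaders(dataloader)

    for _ in range(num_epochs):
        for x, y in dataloader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            fabric.backward(loss)  # replaces loss.backward()
            optimizer.step()
```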
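
Many rows above fall back to manual optimization. In 2.0 this means disabling automatic optimization and stepping the optimizer yourself inside `training_step`; a minimal sketch with an illustrative model and loss:

```python
import torch
import pytorch_lightning as pl


class ManualOptModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        # Opt out of automatic optimization to control stepping yourself.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        opt.zero_grad()
        self.manual_backward(loss)  # instead of loss.backward()
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```
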

Developer

| If | Then | Ref | 
|---|---|---|
| used the legacy  | it was removed | |
| used the generic method  | switch to a specific one depending on your purpose  | |
| used  | import it from  | |
| used  | import it from  | |
| used  | import it from  | |
| used  | import it from  | |
| used  | import it from  | |
| used  | import it from  | |
| used  | import it from  | |
| used  | switch it to  | |
| derived it from  | use Trainer base class | |
| used base class  | switch to use  | |
| set distributed backend via the environment variable  | use  | |
| used  | switch to   | |
| used  | switch to   | |
| used  | use  | |
| used  | rely on Torch native AMP | |
| used  | rely on Torch native AMP | |
| used Trainer’s attribute  | rely on loop constructor   | |
| used Trainer’s attribute  | it was removed | |
| derived from  | rely on  | |
| derived from  | rely on methods from  | |
| used Trainer’s attribute  | switch to the  | |
| used  | it was set as a protected method  | |
| used Profiler’s attribute   | it was removed | |
| used Profiler’s attribute   | it was removed | |
| used the   | | |
| used  | change it to (tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature:  | |
| used  | change it to (n_batches, tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature:  | |

| If | Then | Ref | 
|---|---|---|
| derived from  | derive from  | |
| derived from  | derive from  | |
| derived from  | derive from  | |

| If | Then | Ref | 
|---|---|---|
| passed the  | passed the (required)  | |
| used  | use DDP or DeepSpeed instead | |
| used  | use DDP or DeepSpeed instead | |
| called  | use DDP or DeepSpeed instead | |
| used or derived from  | use DDP instead | |
| used the  | use PyTorch native mixed precision | |
| used the  | switch to the  | |
| used the  | implement your training loop with Fabric | |
| used the  | implement your training loop with Fabric | |
| used the  | check the same using  | |
| used any function from  | switch to  | |
| imported functions from   | import them from  | |
| imported functions from  | import them from  | |
| imported functions from  | import them from  | |
| used any code from  | use the base classes | |
| used any code from  | rely on PyTorch’s native functions | |
| used any code from  | it was removed | |
| used any code from  | it was removed | |
| used any code from  | it was removed | |
| were using truncated backpropagation through time (TBPTT) with  | use manual optimization | |
| were using truncated backpropagation through time (TBPTT) with  | use manual optimization | |
| were using truncated backpropagation through time (TBPTT) and passing  | use manual optimization | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| used  | switch to using  | |
| derived from  | switch to PyTorch native equivalent | |
| used  | customize your logger | |
| derived from the mixin’s method  | rely on  | |
| used   | switch to  | |
| used  | implement your own logic with Fabric | |
| used or derived from public  | it is set as protected | |
| used the  | use manual optimization | |
| used the  | use manual optimization | |
| used the  | use manual optimization | |
| used  | use   | |
| used  | rely on Trainer precision attribute | |
| used   | pass the  | |
| relied on  | pass dataloaders directly | |
| relied on  | pass dataloaders directly | |
| used  | rename to  | |
| accessed  | rely on Trainer internal loops’ properties | |
| accessed  | rely on Trainer internal loops’ properties | |
| accessed  | rely on Trainer internal loops’ properties | |
| accessed  | rely on Trainer internal loops’ properties | |
| used  | rely on precision plugin | |
| used  | it was removed | |
| used  | it was removed |