When I train my PyTorch Lightning model on two GPUs in JupyterLab with strategy="ddp_notebook", only two CPU cores are used and both sit at 100% utilization. How can I overcome this CPU bottleneck?
Edit: I profiled the run with Lightning's PyTorchProfiler, and the bottleneck turned out to be the old SSDs in the server, not the CPUs.
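For anyone who lands here with a genuinely CPU-bound data path (rather than slow disks, as in my case), a common mitigation is to move sample loading into DataLoader worker processes so the two DDP main processes aren't the only ones doing CPU work. A minimal sketch with a toy in-memory dataset (the dataset, shapes, and worker count are placeholders, not my actual setup):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for the real one (hypothetical shapes).
dataset = TensorDataset(torch.randn(1024, 8), torch.randint(0, 2, (1024,)))

# num_workers spawns subprocesses for loading, so preprocessing is not
# serialized onto the two DDP main processes; pin_memory can speed up
# host-to-GPU copies. Neither helps if the disk itself is the bottleneck.
loader = DataLoader(dataset, batch_size=64, num_workers=2, pin_memory=True)

xb, yb = next(iter(loader))
print(xb.shape, yb.shape)
```

With DDP, each of the two GPU processes gets its own copy of the DataLoader, so total worker count is `devices * num_workers`.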