Using PyTorch Autograd for Backpropagation

torch.autograd is PyTorch’s automatic differentiation engine that powers neural network training. In this section, you will get a conceptual understanding of how autograd helps a neural network train.
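
Before the full setup below, here is a minimal sketch of the idea (the names w and loss are illustrative): every operation on a tensor created with requires_grad=True is recorded in a graph, and calling backward() walks that graph to deposit derivatives in each tensor's .grad attribute.

import torch

w = torch.tensor(3., requires_grad=True)  # leaf tensor tracked by autograd
loss = w ** 2                             # forward pass builds the graph
loss.backward()                           # backward pass: d(loss)/dw = 2 * w
w.grad
tensor(6.)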

import os

# Sanity check: confirm the CUDA runtime libraries bundled in the
# virtual environment are on the dynamic loader path, so torch.cuda works
os.getenv('LD_LIBRARY_PATH')
'/workspaces/artificial_intelligence/.venv/lib/python3.11/site-packages/nvidia:/workspaces/artificial_intelligence/.venv/lib/python3.11/site-packages/tensorrt_libs'
import torch

# Select the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Create the leaf tensors directly on the target device. Calling
# .to(device) on a tensor built with requires_grad=True returns a
# non-leaf copy whose .grad stays None after backward(); passing
# device= at construction avoids that pitfall.
x = torch.tensor([2.], requires_grad=True, device=device)
y = torch.tensor([6.], requires_grad=True, device=device)
x
tensor([2.], device='cuda:0', requires_grad=True)
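
With the inputs in place, a minimal sketch of the backward pass itself, assuming an illustrative polynomial Q = 3*x**3 - y**2 (any differentiable function of x and y would do): Q.backward() propagates gradients back through the recorded graph, so x.grad holds dQ/dx = 9*x**2 and y.grad holds dQ/dy = -2*y.

Q = 3 * x**3 - y**2  # forward pass records the graph; Q carries a grad_fn
Q.backward()         # backpropagate: fills in x.grad and y.grad
x.grad               # dQ/dx = 9 * x**2 = 36
tensor([36.], device='cuda:0')
y.grad               # dQ/dy = -2 * y = -12
tensor([-12.], device='cuda:0')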