AI Dynamics

Global AI News Aggregator

torch.compile uses Triton kernels under the hood for optimization

So if you're using torch.compile you're already using a lot of Triton under the hood; as far as I know, PyTorch picks and chooses whether to call CUDA kernels or Triton for different ops / settings. Triton is really awesome, but of course you're staying in the Python / torch universe. Which

→ View original post on X — @karpathy
