
Release of PyTorch 2.0 quickens open-source machine learning

The open-source PyTorch framework is one of the most popular ML technologies currently available.

PyTorch began at Facebook (now Meta) in 2016 and reached its 1.0 release in 2018. Meta transferred the project to the Linux Foundation-run PyTorch Foundation in September 2022. The framework’s developers have now announced the first experimental release of PyTorch 2.0, a significant milestone in the project’s evolution. The new version promises to shorten ML training and development times while remaining backward compatible with earlier versions of the framework.

In a recent interview with VentureBeat, Soumith Chintala, primary maintainer of PyTorch, said that the project had just implemented a new feature called torch.compile, which users can enable by adding a single line of code to their projects. “We believe it represents a major improvement for our users, therefore we’re labelling it ‘2.0,’” he said.

As an example, in 2021 there was some debate over whether PyTorch 1.10 should be considered version 2.0. According to Chintala, there weren’t enough substantial changes between versions 1.9 and 1.10 of PyTorch to justify a major version bump to 2.0.

Most recently, at the end of October, PyTorch 1.13 was made available to the public. IBM’s code contribution was a major factor in that release, enabling the machine learning framework to work better with commodity Ethernet-based networking for massively parallel workloads.

Chintala argued that the timing was ideal for PyTorch 2.0 because torch.compile introduces a new paradigm in the PyTorch user experience, delivering substantial speedups that were not available in the default eager mode of PyTorch 1.x.

He described how adding a single line of code to the PyTorch 2.0 development pipeline produced a roughly 43% speedup, with consistent performance across roughly 160 open-source models.
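As a sketch, that single-line opt-in looks like the following. The toy model and tensor names here are illustrative; torch.compile is the documented PyTorch 2.0 entry point, and wrapping an existing nn.Module is the only change required.

```python
import torch
import torch.nn as nn

# A small illustrative model; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# The single added line: wrap the model with torch.compile.
# Compilation is lazy -- it is triggered by the first forward call.
compiled_model = torch.compile(model)

# compiled_model is then used exactly like the original module.
```

Because compilation happens on the first call, existing training and inference code around the model does not need to change.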

With the release of PyTorch 2, Chintala says, “we expect that people will change the way they use PyTorch on a daily basis.”

He said that with PyTorch 2.0, developers would begin their experiments in eager mode and then switch to compiled mode for improved efficiency when it comes time to train their models over extended time periods.
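A minimal sketch of that workflow, assuming a toy model and random data (all names illustrative). Note that backend="eager" is passed here only so the sketch runs without a compiler toolchain; in practice the default backend ("inductor") is what delivers the speedups.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)  # toy model standing in for a real network
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
x, y = torch.randn(32, 8), torch.randn(32, 1)

# 1) Experiment in eager mode: ordinary PyTorch, easy to debug.
pred = model(x)
print("eager loss:", loss_fn(pred, y).item())

# 2) Switch to compiled mode for longer training runs.
#    backend="eager" keeps this sketch lightweight; omit it to use
#    the default inductor backend, which provides the real speedups.
compiled_model = torch.compile(model, backend="eager")
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(compiled_model(x), y)
    loss.backward()
    opt.step()
print("trained loss:", loss.item())
```

The compiled module shares parameters with the original, so gradients and optimiser state carry over between the two modes.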

According to Chintala, “data scientists will be able to accomplish with PyTorch 2.x the same things that they did with 1.x,” but with more speed and scalability. If your model was training for 5 days but now only takes 2.5 days thanks to 2.x’s compiled mode, you’ll have more time to try out new ideas or make a larger model that still trains in the same amount of time.

PyTorch 2.x will feature even more Python

The initial component of PyTorch’s name (Py) comes from the popular open-source language Python, which is utilised extensively in the field of data science.

However, parts of recent PyTorch versions are written in C++, so the framework is no longer written entirely in Python.

“We’ve relocated several components of torch.nn from Python into C++ over the years to get that extra bit of speed in the home stretch,” Chintala explained.

According to Chintala, the PyTorch project plans to re-introduce torch.nn-related functionality into Python in a later 2.x version (but not 2.0). He pointed out that C++ is usually faster than Python, but that the new compiler (torch.compile) actually ends up being quicker than executing the comparable code in C++.

“Moving these pieces back to Python enhances hackability and reduces the barrier for code contributions,” Chintala said.

PyTorch 2.0 development will continue for the next few months, with March 2023 projected as the earliest possible release date. In tandem with the development work, PyTorch is emerging from under Meta’s stewardship and becoming its own entity.

“The PyTorch Foundation is in its infancy, and you can expect to hear more about it in the not-too-distant future,” Chintala assured. “The organisation is now carrying out a number of handoffs and setting objectives.”