Disney, Blue River Technology and Datarock preferred PyTorch over Google’s TensorFlow deep learning framework because of its relative ease of use.
Disney has been using PyTorch since 2019 to recognize characters’ faces in its cartoons. Deep learning is a sub-category of machine learning that uses neural networks to automate historically complex tasks, such as image recognition and natural language processing. TensorFlow was released by Google in 2015 and has been widely used in research and production. But PyTorch, released by Facebook in 2016, quickly caught up thanks to community-driven improvements in ease of use and deployment, and to its ability to cover a wider range of scenarios.
PyTorch has been adopted particularly widely in autonomous driving, for example at Tesla and Lyft Level 5. The framework is also used to categorize and recommend content at media companies and to operate robots in industrial applications.
Disney: Identifying Faces in Animated Film and Cartoons
Since 2012, engineers and data scientists at media giant Disney have been working to build what the company calls the “Content Genome,” a knowledge graph that pulls together content data to power machine learning-based search and personalization applications across Disney’s huge content library. Disney invested heavily in content annotation, asking its data scientists to develop deep learning algorithms to identify millions of images of people, characters, and locations.
To start, Disney engineers tested different frameworks, including TensorFlow, but ultimately, in 2019, they settled on PyTorch. Trained on hundreds of faces, the new model was able to identify faces across all three types of content, and the object detector has been in production since January 2020.
To process such a large volume of video data and to train and run the model in parallel, the engineers needed expensive, high-performance GPUs to speed up training and model updates.
Additionally, distribution of results to Disney teams was sped up, and processing time for a feature film dropped from an hour to five or ten minutes. Adoption was simple and fast for the engineering team, and the very active PyTorch community helped Disney’s engineers whenever they ran into issues or bottlenecks with the framework.
Blue River Technology: Robotic Weed Killers
The amazing robot developed by Blue River Technology combines digital tracking, integrated cameras, and computer vision to spray weeds with herbicide in near-real time without damaging crops. It lets farmers save on the high cost of herbicide while conserving the environment. Cotton plants can be a challenge, as they can sometimes be mistaken for weeds.
Agronomists tagged the images, and the tagged images were then used to train a convolutional neural network (CNN) in PyTorch.
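Training a CNN on tagged images can be sketched in a few lines of PyTorch. This is a minimal illustration, not Blue River’s actual architecture: the class name, layer sizes, and the 64x64 RGB input with two classes (e.g. crop vs. weed) are all assumptions for the example.

```python
import torch
import torch.nn as nn

# Illustrative CNN classifier for 64x64 RGB images with two classes
# (e.g. crop vs. weed); all sizes here are assumptions, not Blue River's model.
class WeedClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))      # flatten all but the batch dim

model = WeedClassifier()
logits = model(torch.randn(4, 3, 64, 64))         # a batch of 4 tagged images
print(logits.shape)                               # torch.Size([4, 2])
```

In a real pipeline the tagged images would come from a DataLoader and the logits would feed a cross-entropy loss; the snippet only shows the model definition and a forward pass.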
What are the differences between TensorFlow and PyTorch?
Analyzing TensorFlow vs. PyTorch
In general, a simple neural network of this kind has three layers: the input is fed to an Embedding layer, followed by a Global Average Pooling layer, and the predictions are the output of a final Dense layer.
TensorFlow models are generally created with Keras, an open-source library that, unlike core TensorFlow, mostly exposes high-level APIs.
Subclassing, Functional API or Sequential model API can be used to develop models in Keras.
Subclassing – extending the keras.Model class enables you to develop completely customizable models: the forward pass is implemented in the call method, while the layers are defined in the __init__ method.
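A subclassing sketch of the three-layer network described above (Embedding, Global Average Pooling, Dense). The vocabulary size, embedding width, and sequence length are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow import keras

# Subclassing: layers are defined in __init__, the forward pass in call().
# Vocabulary size and embedding width are illustrative assumptions.
class SimpleNet(keras.Model):
    def __init__(self, vocab_size=10000, embed_dim=16):
        super().__init__()
        self.embedding = keras.layers.Embedding(vocab_size, embed_dim)
        self.pool = keras.layers.GlobalAveragePooling1D()
        self.dense = keras.layers.Dense(1, activation="sigmoid")

    def call(self, inputs):
        x = self.embedding(inputs)   # (batch, seq_len, embed_dim)
        x = self.pool(x)             # (batch, embed_dim)
        return self.dense(x)         # (batch, 1)

model = SimpleNet()
preds = model(tf.zeros((2, 20), dtype=tf.int32))  # batch of 2 sequences, length 20
print(preds.shape)                                # (2, 1)
```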
Functional API – a more user-friendly approach than Subclassing, and the one the developer community recommends. It requires less code, since each layer is called on the previous layer’s output as soon as it is defined, and the model is instantiated from input and output tensor(s).
Sequential model API – a shortcut to a trainable model built from a few common layers, and thus the most compact way to define a model. This approach performs extremely well for simple neural networks, but complex neural networks become very hard to express with it.
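The Sequential version of the same stack is just a list of layers, which is why it is the most compact of the three. Sizes remain illustrative assumptions:

```python
import tensorflow as tf
from tensorflow import keras

# Sequential API: the model is simply an ordered list of layers.
# Sizes are illustrative assumptions.
model = keras.Sequential([
    keras.layers.Embedding(10000, 16),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),
])

preds = model(tf.zeros((2, 20), dtype=tf.int32))  # batch of 2 sequences
print(preds.shape)                                # (2, 1)
```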
Compared with TensorFlow (Keras), PyTorch supports only two of these modeling approaches: sequential and subclassing.
Subclassing – done much as in TensorFlow (Keras): the layers are defined in the __init__() method, but the forward pass is implemented in the forward method instead of Keras’s call. One difference is that PyTorch’s average-pooling layer needs an explicit kernel size to act as global average pooling, since only a plain average-pooling layer is available.
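A PyTorch subclassing sketch of the same network. Note how nn.AvgPool1d takes a kernel size equal to the sequence length (20 here) to mimic global average pooling, as described above. Sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# PyTorch subclassing: layers in __init__, forward pass in forward().
# nn.AvgPool1d needs an explicit kernel size equal to the sequence length
# to act as global average pooling. Sizes are illustrative assumptions.
class SimpleNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=16, seq_len=20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.pool = nn.AvgPool1d(kernel_size=seq_len)  # "global" average pooling
        self.dense = nn.Linear(embed_dim, 1)

    def forward(self, x):
        x = self.embedding(x)             # (batch, seq_len, embed_dim)
        x = self.pool(x.transpose(1, 2))  # pool over the sequence dim -> (batch, embed_dim, 1)
        return torch.sigmoid(self.dense(x.squeeze(-1)))

model = SimpleNet()
preds = model(torch.zeros(2, 20, dtype=torch.long))
print(preds.shape)  # torch.Size([2, 1])
```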
Sequential – also very similar to how it is done in TensorFlow (Keras), through the nn.Sequential module.
The subclassing approach is widely recommended over the sequential approach for recurrent layers, viz. RNN and LSTM, which cannot be used with nn.Sequential in PyTorch.
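A minimal nn.Sequential sketch. Recurrent layers such as nn.RNN and nn.LSTM return (output, hidden-state) tuples, which is why they do not compose cleanly inside nn.Sequential; a plain feed-forward stack like this one does. Layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# nn.Sequential chains feed-forward layers; recurrent layers (nn.RNN,
# nn.LSTM) return tuples and so cannot be dropped into this chain.
# Layer sizes are illustrative assumptions.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)

preds = model(torch.randn(4, 16))  # batch of 4 feature vectors
print(preds.shape)                 # torch.Size([4, 1])
```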
Datarock also reported a 4x increase in inference performance with PyTorch and Detectron2 when running its models on GPUs, and a 3x increase on CPUs.
Truong cited the Python ecosystem’s growing community, well-designed interface, simplicity, and better debugging as reasons for switching to PyTorch, and noted that although the interfaces are quite different, TensorFlow knowledge is quite sufficient to make the switch, especially if you know Python.