Transfer Learning

By: Janani R | May 03, 2023, 11:30 AM | Technology

Machine learning models are typically domain-specific and struggle to generalize beyond the data they were trained on. This is a problem because real-world data is mostly unstructured, unlike curated training datasets, and the mismatch degrades the reliability of a trained model's predictions. Transfer learning addresses this: a model can reuse the knowledge learned on one task as a starting point for another, streamlining the overall deep learning process. In natural language processing, applying transfer learning significantly reduces the time and cost of training new NLP models.[1]

Transfer learning is a machine learning technique that lets data scientists benefit from the knowledge gained by a model previously trained on a similar task. It is inspired by the way humans transfer knowledge: if you learn how to ride a bicycle, you can learn to ride other two-wheeled vehicles more easily. Similarly, a model trained for autonomous driving of cars can be adapted for autonomous driving of trucks.[2]

Figure 1. Transfer learning

As Figure 1 shows, transfer learning involves using knowledge gained from one task or domain to improve performance on a different task or domain. A pre-trained model is used as the starting point for the new task, rather than training a new model from scratch.

The pre-trained model is typically trained on a large dataset and optimized for a specific task, such as image classification or natural language processing. By starting with a pre-trained model, the new model can leverage the knowledge learned by the pre-trained model to improve its performance on the new task, even if the new task has a different set of features or characteristics.
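The idea of reusing a pre-trained model's knowledge can be sketched in a few lines of NumPy. The "backbone" below is a toy stand-in for a pre-trained network (its weights are random here, purely for illustration; in practice they would come from training on a large source task). Only a new linear head is trained on the target task, while the backbone stays frozen:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "pre-trained" feature extractor: a fixed weight matrix.
# (In real transfer learning these weights come from a large source task.)
W_pretrained = rng.normal(size=(4, 8))          # maps 4 raw inputs -> 8 features

def extract_features(x):
    """Frozen backbone: raw inputs -> learned representation."""
    return np.maximum(x @ W_pretrained, 0.0)    # ReLU activation

# Hypothetical new task: a 2-class problem on 4-dimensional inputs.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a new linear "head" on top of the frozen features.
feats = extract_features(X)
w_head = np.zeros(8)
for _ in range(500):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head))) # sigmoid probabilities
    grad = feats.T @ (p - y) / len(y)           # logistic-loss gradient
    w_head -= 0.5 * grad

preds = (1.0 / (1.0 + np.exp(-(feats @ w_head))) > 0.5)
accuracy = (preds == y.astype(bool)).mean()
print(f"training accuracy with frozen backbone: {accuracy:.2f}")
```

Because the backbone's weights never change, training is cheap: only the 8 head parameters are updated, which mirrors how a frozen pre-trained network plus a small task-specific head works at full scale.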

There are several different types of transfer learning, including:
  1. Feature extraction: In this approach, the pre-trained model is used as a fixed feature extractor, and the output of the pre-trained model is fed into a new model that is trained on the new task.
  2. Fine-tuning: In this approach, the pre-trained model is modified by adding additional layers or modifying existing layers, and the entire model is then retrained on the new task.
  3. Domain adaptation: In this approach, the pre-trained model is adapted to a new domain or dataset, and the model is then fine-tuned on the new task.
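Fine-tuning differs from feature extraction in one key respect: the pre-trained weights are updated too, usually with a smaller learning rate so the learned knowledge is adjusted rather than overwritten. The sketch below uses the same toy setup as before (random weights standing in for a pre-trained backbone; all names are illustrative, not from the article) and backpropagates into the backbone as well as the head:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" backbone weights (stand-in for weights from a source task).
W = rng.normal(size=(4, 8))
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w_head = np.zeros(8)

def forward(X, W, w_head):
    h = np.maximum(X @ W, 0.0)                      # backbone: ReLU features
    return h, 1.0 / (1.0 + np.exp(-(h @ w_head)))   # head: sigmoid probability

for _ in range(500):
    h, p = forward(X, W, w_head)
    err = (p - y) / len(y)                          # logistic-loss error term
    grad_head = h.T @ err
    # Backpropagate into the backbone weights: this update is what
    # distinguishes fine-tuning from feature extraction (where W is frozen).
    grad_W = X.T @ (np.outer(err, w_head) * (h > 0))
    w_head -= 0.5 * grad_head
    W -= 0.05 * grad_W                              # smaller backbone learning rate

_, p = forward(X, W, w_head)
accuracy = ((p > 0.5) == y.astype(bool)).mean()
print(f"training accuracy after fine-tuning: {accuracy:.2f}")
```

The 10x smaller learning rate on `W` is a common fine-tuning heuristic: the backbone already encodes useful structure, so it should drift only slightly toward the new task while the freshly initialized head learns quickly.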

Transfer learning has become a popular technique in machine learning, particularly in deep learning, where it has been used in a wide range of applications, including computer vision, natural language processing, and speech recognition. By leveraging pre-trained models, transfer learning can help to reduce the amount of training data required for a new task, speed up the training process, and improve the performance of the model on the new task.

References:

  1. https://www.startus-insights.com/innovators-guide/natural-language-processing-trends/#transfer-learning
  2. https://research.aimultiple.com/transfer-learning/

Cite this article:

Janani R (2023), Transfer Learning, Anatechmaz, pp. 229
