Multi-task learning is a sub-field of deep learning. It is recommended that you familiarize yourself with the basics of neural networks before reading on, to understand what multi-task learning means.
What is Multi-Task Learning and how can it help you?
Multi-task learning is a subfield of machine learning that aims to solve multiple tasks simultaneously by taking advantage of the similarities between them. This can increase learning efficiency and also act as a regularizer, which we’ll discuss later. Formally, the setting involves N tasks (conventional deep learning approaches are limited to solving a single task with a particular model), each of which could be solved on its own with conventional deep learning techniques, where the tasks, or a subset of them, are related but not identical. Multi-Task Learning (MTL) uses the knowledge gained across all N tasks to improve the learning of each individual model.
The intuition behind Multi-Task Learning (MTL):
Deep learning models help us predict specific values by learning a good representation of the input data. Formally, optimizing for a function means training a model and tuning its hyperparameters until performance no longer improves.
MTL can improve performance by forcing the model to learn a representation that generalizes across tasks, since its weights are updated for all of them.
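As a minimal sketch of that idea, a multi-task model is typically trained by minimizing a weighted sum of per-task losses, so every weight update reflects all tasks at once. The loss values and weights below are made-up numbers purely for illustration:

```python
# Hypothetical per-task loss values from one training step.
task_losses = {"classification": 0.7, "regression": 1.3}
# Hypothetical task weights; in practice these are tuned or learned.
task_weights = {"classification": 1.0, "regression": 0.5}

# The model's parameters are updated against this single combined loss,
# so the shared representation must serve every task.
total_loss = sum(task_weights[t] * task_losses[t] for t in task_losses)
print(round(total_loss, 2))  # 1.0 * 0.7 + 0.5 * 1.3 = 1.35
```

How the task weights are chosen matters: a task with a much larger loss scale can otherwise dominate the shared representation.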
Humans learn in much the same way: learning multiple related tasks together is often more beneficial than focusing on one task at a time.
MTL as a regularizer:
In machine learning lingo, MTL can be seen as a way of introducing inductive bias. It is an inductive transfer technique that uses the multiple tasks to induce a bias favoring hypotheses that can explain all of the tasks at once.
As mentioned above, MTL regularizes the model through this inductive bias. It reduces the risk of overfitting and limits the model’s ability to accommodate random noise during training.
Let’s now discuss the most common and important techniques for using MTL.
Hard Parameter Sharing:
A common set of hidden layers is shared across all tasks, while several task-specific layers are retained towards the end of the network. This is a very useful technique: it lets us learn a representation for various tasks through the common hidden layers, and sharing those parameters makes overfitting less likely.
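A minimal NumPy sketch of this architecture (the layer sizes and the two example tasks are arbitrary choices for illustration) shows one shared hidden layer feeding two task-specific heads:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared ("hard") parameters: one hidden layer used by every task.
W_shared = rng.normal(size=(16, 8))   # input dim 16 -> hidden dim 8
# Task-specific heads: each task keeps its own output layer.
W_task_a = rng.normal(size=(8, 3))    # e.g. a 3-class classification head
W_task_b = rng.normal(size=(8, 1))    # e.g. a scalar regression head

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)  # shared ReLU representation
    return h @ W_task_a, h @ W_task_b  # one output per task

x = rng.normal(size=(4, 16))           # a batch of 4 examples
out_a, out_b = forward(x)
print(out_a.shape, out_b.shape)        # (4, 3) (4, 1)
```

During training, gradients from both heads flow back into `W_shared`, which is what forces the shared representation to serve both tasks.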
Soft Parameter Sharing:
Each task has its own model with its own weights and biases, and a regularization term on the distance between the models’ parameters encourages them to remain similar, so that each model still benefits from what the other tasks learn.
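One common way to implement that distance penalty is a squared L2 term between the two parameter sets, added to the combined task losses. This is a hedged sketch with arbitrary sizes and a made-up weight `lam`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Soft sharing: each task keeps its OWN copy of the hidden-layer weights...
W_a = rng.normal(size=(16, 8))  # task A's parameters
W_b = rng.normal(size=(16, 8))  # task B's parameters

def soft_sharing_penalty(W_a, W_b, lam=0.1):
    # ...and a squared L2 distance between the parameter sets is added
    # to the training loss, nudging the two models towards each other.
    return lam * np.sum((W_a - W_b) ** 2)

penalty = soft_sharing_penalty(W_a, W_b)
```

Unlike hard sharing, the penalty only *encourages* similarity: as `lam` grows the models converge towards a shared set of weights, and as it shrinks they train independently.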
Assumptions & Considerations:
MTL is only useful when the tasks are similar; if this assumption is not met, performance can be significantly degraded.
MTL techniques are used in many applications, for example:
- Facial recognition and object detection
- Self-driving cars: detecting pedestrians, stop signs, and other obstacles together
- Multi-domain collaborative filtering of web applications
- Stock Prediction
- Language Modelling and Other NLP Applications