In the context of loss functions, the single number that measures the deviation between a model’s predicted value and the actual value for a data point is called a scalar. These values are used to quantify the quality of a model’s predictions.
In a regression problem, where the aim is to predict a continuous value, Mean Squared Error (MSE) is a typical loss function. MSE is the average of the squared differences between the predicted and observed values. The computation yields a single number that quantifies the model’s performance on the dataset.
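As a minimal sketch, MSE can be computed directly from its definition; the target and prediction values below are illustrative assumptions:

```python
def mse(y_true, y_pred):
    # Average of squared residuals between observed and predicted values.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical observations and model predictions.
y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
print(mse(y_true, y_pred))  # (0.25 + 0.0 + 4.0) / 3 ≈ 1.417
```

Squaring the residuals makes the loss penalize large errors much more heavily than small ones.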
Classification problems, in which each data point must be assigned a label, often use cross-entropy loss. This function measures the discrepancy between the model’s output probabilities and the actual labels, reducing it to a single number.
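For a single data point, cross-entropy loss is the negative log of the probability the model assigned to the true class; a rough sketch, with made-up probabilities:

```python
import math

def cross_entropy(probs, true_label):
    # Negative log-probability assigned to the correct class.
    # Low when the model is confident and correct, high otherwise.
    return -math.log(probs[true_label])

# Hypothetical softmax output over three classes; true class is index 1.
confident = cross_entropy([0.1, 0.8, 0.1], 1)   # ≈ 0.223 (good prediction)
uncertain = cross_entropy([0.4, 0.2, 0.4], 1)   # ≈ 1.609 (poor prediction)
print(confident, uncertain)
```

A correct, confident prediction yields a loss near zero, while assigning low probability to the true class is penalized sharply.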
In both scenarios, minimizing this scalar value is the priority during training. Methods like gradient descent do this by iteratively adjusting the model’s parameters in the direction that reduces the loss.
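The idea can be sketched with gradient descent on a one-parameter linear model y = w·x, minimizing MSE; the data, learning rate, and step count are illustrative assumptions:

```python
# Toy data generated with a true weight of 2.0.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0          # initial parameter
lr = 0.05        # learning rate (assumed)
for _ in range(200):
    # Gradient of MSE with respect to w: (2/n) * sum((w*x - y) * x)
    grad = 2 * sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad   # step opposite the gradient to reduce the loss

print(round(w, 4))   # converges toward the true weight, 2.0
```

Each step moves the parameter against the gradient of the loss, so repeated updates drive the scalar loss toward a minimum.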
In short, the loss function is crucial to training a machine learning model because it guides the optimization process: by reducing the loss, the model improves its predictions on new data.