The validation loss is the loss computed on the validation set. To avoid overfitting, early stopping is implemented: training is halted once the validation loss stops decreasing. Continuing training beyond this point would increase the risk of overfitting to the training data, which may pose a privacy risk.
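The early-stopping rule described above can be sketched as follows. This is a minimal illustration, not the document's actual implementation; the class name, the `patience` parameter (how many epochs without improvement to tolerate), and the synthetic loss values are all assumptions.

```python
class EarlyStopping:
    """Stop training once the validation loss stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Example with synthetic losses: the validation loss plateaus after the third epoch.
stopper = EarlyStopping(patience=2)
for epoch, val_loss in enumerate([0.9, 0.7, 0.6, 0.61, 0.62, 0.63]):
    if stopper.step(val_loss):
        print(f"stopping at epoch {epoch}")
        break
```

A small `patience` value makes the stop more aggressive, which trades a little final accuracy for a lower chance of memorizing training examples.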