Pre-trained transformer-based language models (PLMs) have revolutionised text classification tasks, but their performance tends to deteriorate on data distant in time from the training dataset. Continual supervised...