Convolutional Neural Networks have shown remarkable success when operating in the spatial domain. However, we claim that long-range spatial modelling can be achieved with fewer parameters by shifting to the frequency domain.
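To make this claim concrete, below is a minimal sketch of a frequency-domain global convolution layer. Everything in it is an illustrative assumption rather than our actual architecture: the class name `SpectralConv2d`, the FNO-style truncation to the lowest `modes` frequencies, and all hyperparameters. The point it illustrates is that a pointwise product in the Fourier domain couples every spatial location in a single step, while the parameter count depends on the number of retained modes rather than on the feature-map resolution or a spatial kernel size.

```python
# Illustrative sketch only: an FNO-style spectral layer, assumed for
# exposition; not the architecture proposed in this work.
import torch
import torch.nn as nn


class SpectralConv2d(nn.Module):
    """Mixes all spatial locations at once via pointwise products in the
    Fourier domain; the receptive field is global, but the parameter
    count scales with the number of retained modes, not the resolution."""

    def __init__(self, channels: int, modes: int = 16):
        super().__init__()
        self.modes = modes
        # Complex weights for the lowest `modes` frequencies only:
        # channels * channels * modes * modes parameters, independent
        # of the spatial size of the input.
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes,
                                dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_ft = torch.fft.rfft2(x)          # (b, c, h, w//2 + 1), complex
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Convolution in space is a pointwise product in frequency, so
        # this single multiply couples every spatial position at once.
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight
        )
        return torch.fft.irfft2(out_ft, s=(h, w))


if __name__ == "__main__":
    layer = SpectralConv2d(channels=8, modes=12)
    img = torch.randn(2, 8, 64, 64)
    print(layer(img).shape)  # torch.Size([2, 8, 64, 64])
```

By contrast, matching this global receptive field with spatial convolutions would require a kernel as large as the feature map itself, whose parameter count grows with resolution.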
The concept of catastrophic forgetting has been foundational to continual learning; however, this phenomenon is typically attributed only to the generalization capabilities of the neural network. We hypothesize that there is a strong three-way relationship between catastrophic forgetting, generalization, and robustness.
To achieve effective lifelong learning without heavy retraining of large models, we propose a novel approach of unlearning aspects of the data the model was previously trained on.
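As one possible instantiation, and not the specific method proposed here, machine unlearning is often approximated by gradient ascent on the loss over the data to be forgotten. The function name, optimizer, and hyperparameters below are assumptions for illustration.

```python
# Illustrative sketch of gradient-ascent unlearning on a forget set;
# assumed for exposition, not the method proposed in this work.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader


def unlearn_by_gradient_ascent(model: torch.nn.Module,
                               forget_loader: DataLoader,
                               lr: float = 1e-4,
                               steps: int = 100) -> None:
    """Raise the loss on the forget set instead of retraining from
    scratch, nudging the model away from what it memorized about it."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    step = 0
    while step < steps:
        for inputs, targets in forget_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(inputs), targets)
            # Negating the loss turns gradient descent into ascent,
            # degrading the model's fit on this data only.
            (-loss).backward()
            optimizer.step()
            step += 1
            if step >= steps:
                break
```

In practice such a step is usually paired with a few fine-tuning passes on retained data, so that forgetting the target examples does not erase the rest of what the model has learned.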