We propose a novel blind super-resolution pipeline aimed at improving output consistency and reducing artefact generation.
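In the blind setting, the degradation operator is unknown to the restorer. A minimal sketch of the standard degradation model y = (x * k)↓s + n, which blind SR methods typically assume; the kernel size, sigma, and noise level here are illustrative choices, not values from the proposal:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    # Normalized 2-D Gaussian blur kernel (assumed, unknown to the restorer).
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(hr, kernel, scale=2, noise_std=0.01, seed=0):
    # Classical blind-SR degradation: blur, subsample by `scale`, add noise.
    rng = np.random.default_rng(seed)
    pad = kernel.shape[0] // 2
    padded = np.pad(hr, pad, mode="reflect")
    blurred = np.zeros_like(hr)
    h, w = hr.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(
                padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel)
    lr = blurred[::scale, ::scale]
    return lr + rng.normal(0.0, noise_std, lr.shape)

hr = np.random.default_rng(1).random((32, 32))
lr = degrade(hr, gaussian_kernel(), scale=2)
print(lr.shape)  # (16, 16)
```

A blind pipeline must restore `hr` from `lr` without access to `kernel` or `noise_std`, which is what makes consistency and artefact suppression difficult.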

The goal is, given a semantic input such as an image and a query sentence containing blanks that represent either objects or actions, to fill in those blanks.
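One plausible formulation of this task is candidate ranking in a joint image-text embedding space. The sketch below uses hand-made toy vectors in place of a real vision-language encoder (a CLIP-style model would be the usual choice); the vocabulary, embeddings, and `fill_blank` helper are all illustrative assumptions:

```python
import numpy as np

# Toy joint embedding space; in practice these vectors would come from a
# pretrained vision-language encoder. Hand-made here purely for illustration.
VOCAB_EMB = {
    "dog":  np.array([0.9, 0.1, 0.0]),
    "ball": np.array([0.1, 0.9, 0.0]),
    "run":  np.array([0.0, 0.2, 0.9]),
}

def fill_blank(image_emb, sentence, candidates):
    # Score each candidate by cosine similarity to the image embedding,
    # then substitute the highest-scoring word into the blank.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    best = max(candidates, key=lambda w: cos(image_emb, VOCAB_EMB[w]))
    return sentence.replace("[BLANK]", best)

image_emb = np.array([0.8, 0.2, 0.1])  # pretend this encodes a photo of a dog
result = fill_blank(image_emb, "A [BLANK] is playing in the park.",
                    ["dog", "ball", "run"])
print(result)  # A dog is playing in the park.
```

The same scoring step applies whether the blank is an object or an action; only the candidate set changes.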

We explore the regime of multi-domain, multi-modal continual learning frameworks, and aim to prove their robustness to domain-targeted and modality-targeted attacks.


The concept of catastrophic forgetting has been foundational to continual learning; however, this phenomenon is usually attributed solely to the generalization capabilities of the neural network. We hypothesize a strong three-way relationship between catastrophic forgetting, generalization, and robustness.
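Catastrophic forgetting itself is easy to exhibit in miniature. A self-contained sketch (toy data and a plain logistic regression, not the proposal's setup): train on task A, then on a task B whose labels conflict with A, and watch accuracy on A collapse:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    # Two linearly separable tasks on the same 2-D inputs whose labels
    # disagree: task B is task A with labels flipped, the worst case
    # for naive sequential training.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, epochs=200):
    # Plain full-batch logistic-regression gradient descent.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def acc(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))

Xa, ya = make_task(200, flip=False)
Xb, yb = make_task(200, flip=True)

w = train(np.zeros(2), Xa, ya)
acc_a_before = acc(w, Xa, ya)   # high: the model fits task A
w = train(w, Xb, yb)            # sequential training on conflicting task B
acc_a_after = acc(w, Xa, ya)    # collapses: task A has been forgotten
print(acc_a_before, acc_a_after)
```

The hypothesized link to generalization and robustness would ask how this collapse co-varies with the model's behavior off the training distribution and under perturbation, which this toy example does not attempt to show.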

To achieve effective lifelong learning without heavy retraining of large models, we propose a novel approach: selectively unlearning aspects of the data the model was previously trained on.
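The proposal gives no mechanism, but one family of models where retraining-free deletion is exact is those whose parameters are additive sufficient statistics. A toy nearest-centroid classifier along these lines (the class, method names, and data are all illustrative assumptions, not the proposed method):

```python
import numpy as np

class CentroidClassifier:
    # Nearest-centroid model stored as additive sufficient statistics
    # (per-class sum and count), so a training example's contribution
    # can be subtracted out exactly -- unlearning without retraining.
    def __init__(self, n_classes, dim):
        self.sums = np.zeros((n_classes, dim))
        self.counts = np.zeros(n_classes)

    def add(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

    def forget(self, x, y):
        # Exact unlearning: remove the example's contribution.
        self.sums[y] -= x
        self.counts[y] -= 1

    def predict(self, x):
        centroids = self.sums / self.counts[:, None]
        return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

clf = CentroidClassifier(2, 2)
clf.add(np.array([0.0, 0.0]), 0)
clf.add(np.array([1.0, 1.0]), 1)
clf.add(np.array([5.0, 5.0]), 0)          # an example we later want to forget
before = clf.predict(np.array([2.0, 2.0]))  # outlier pulls class-0 centroid
clf.forget(np.array([5.0, 5.0]), 0)
after = clf.predict(np.array([2.0, 2.0]))
print(before, after)  # 0 1
```

Deep networks lack such clean sufficient statistics, which is precisely the gap an approximate-unlearning approach for lifelong learning would need to close.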