a tiny research group on the edge of theoretical deep learning

Find out more

Convolutional Neural Networks have shown remarkable success while operating in the spatial domain. However, we claim that the long-range modelling capacity of spatial convolutions can be matched by shifting to the frequency domain with fewer parameters.
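A minimal sketch of the convolution theorem that motivates this shift (an illustration we provide here, not the project's actual model): circular convolution in the spatial domain is equivalent to a single element-wise product in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)  # input signal
k = rng.standard_normal(n)  # convolution kernel (same length, circular)

# Spatial-domain circular convolution, O(n^2)
spatial = np.array(
    [sum(x[(i - j) % n] * k[j] for j in range(n)) for i in range(n)]
)

# Frequency-domain equivalent: one element-wise product, O(n log n) via FFT
frequency = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

# Both routes produce the same result
assert np.allclose(spatial, frequency)
```

Because mixing in the frequency domain is a pointwise operation over the whole signal, every output position depends on every input position at once, which is the long-range interaction that large spatial kernels pay many parameters to approximate.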

Read more

The concept of catastrophic forgetting has been foundational to continual learning; however, this phenomenon is typically attributed only to the generalization capabilities of the neural network. We hypothesize that there is a strong trigonal relationship between catastrophic forgetting, generalization, and robustness.

Read more

To achieve optimal lifelong learning without heavy retraining of large models, we propose a novel approach of selectively unlearning aspects of previously trained-on data.

Read more

More Projects

Diganta Misra

Machine Learning Engineer, Weights & Biases

Himanshu Arora

Graduate Student, Mila, University of Montreal

Trikay Nalamada

Undergraduate Student, Indian Institute of Technology, Guwahati

Ajay Uppili Arasanipalai

Student, University of Illinois at Urbana-Champaign

Javier Ideami

CEO, Ideami Studios

Federico Andres Lois

Director of Quantitative Research, Epsilon Acquisition Services

Jaegul Choo (주재걸)

Associate Professor, Graduate School of Artificial Intelligence, KAIST

Sanghun Jung

Graduate Student, DAVIAN Lab, KAIST

Full team


In collaboration with:

Continual AI
Weights & Biases

Supported by contributions from:

OpenPOWER Foundation