Towards Robust Few-shot Class Incremental Learning in Audio Classification using Contrastive Representation

Riyansha Singh1, Parinita Nema2, Vinod K Kurmi2

1IIT Kanpur, 2IISER Bhopal

Accepted at INTERSPEECH 2024!

In machine learning applications, gradual data ingress is common, especially in audio processing, where incremental learning is vital for real-time analytics. Few-shot class-incremental learning addresses the challenges arising from limited incoming data. Existing methods often integrate additional trainable components or rely on a fixed embedding extractor after base-session training to mitigate catastrophic forgetting and the risk of overfitting. However, using cross-entropy loss alone during base-session training is suboptimal for audio data. To address this, we propose incorporating supervised contrastive learning to refine the representation space, enhancing its discriminative power and improving generalization, since it facilitates seamless integration of incremental classes as they arrive. Experimental results on the NSynth and LibriSpeech datasets (100 classes each), as well as the ESC dataset (50 and 10 classes), demonstrate state-of-the-art performance.
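To make the base-session objective concrete, below is a minimal PyTorch sketch that pairs cross-entropy with a supervised contrastive (SupCon-style) term computed on L2-normalized projections. This is an illustrative sketch, not the released implementation: the joint weighting lambda_con, the projection head, and hyperparameters such as temperature and embed_dim are assumptions made for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def supcon_loss(features, labels, temperature=0.07):
        """Supervised contrastive loss over L2-normalized embeddings.
        features: (N, D) normalized embeddings; labels: (N,) integer class ids."""
        n = features.size(0)
        sim = features @ features.t() / temperature                # pairwise similarities
        not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
        sim = sim - sim.max(dim=1, keepdim=True).values.detach()   # numerical stability
        exp_sim = torch.exp(sim) * not_self                        # exclude self-pairs
        log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
        pos_counts = pos_mask.sum(dim=1)
        valid = pos_counts > 0                                     # anchors with at least one positive
        mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1)[valid] / pos_counts[valid]
        return -mean_log_prob_pos.mean()

    class BaseSessionModel(nn.Module):
        """Audio encoder with a linear classifier (cross-entropy head) and a
        projection head (contrastive head). Backbone is any encoder mapping a
        batch of inputs to (N, feat_dim) features; this is an assumed interface."""
        def __init__(self, backbone, feat_dim, num_base_classes, embed_dim=128):
            super().__init__()
            self.backbone = backbone
            self.classifier = nn.Linear(feat_dim, num_base_classes)
            self.projector = nn.Sequential(
                nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, embed_dim))

        def forward(self, x):
            h = self.backbone(x)
            return self.classifier(h), F.normalize(self.projector(h), dim=1)

    def base_session_step(model, x, y, lambda_con=1.0):
        """Joint base-session objective: cross-entropy plus a weighted
        supervised contrastive term (lambda_con is an illustrative weight)."""
        logits, z = model(x)
        return F.cross_entropy(logits, y) + lambda_con * supcon_loss(z, y)

After base-session training with such a joint objective, the refined embedding space can be kept fixed while prototypes for incremental classes are added from the few available shots, which is the setting the abstract describes.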

Download

Code

GitHub

Dataset

Citation

If you use this work, please cite:

@misc{singh2024robustfewshotclassincremental,
    title={Towards Robust Few-shot Class Incremental Learning in Audio Classification using Contrastive Representation},
    author={Riyansha Singh and Parinita Nema and Vinod K Kurmi},
    year={2024},
    eprint={2407.19265},
    archivePrefix={arXiv},
    primaryClass={cs.SD},
    url={https://arxiv.org/abs/2407.19265},
}

Connect

Drop a query at VisDom