Abstract
Real-world images frequently exhibit multiple overlapping biases, such as textures, watermarks, gendered makeup, and scene-object pairings. Together these biases impair the performance of modern vision models, undermining both their robustness and fairness. Addressing them one at a time is inadequate, because mitigating one bias often leaves the others intact or even intensifies them.
GMBM is a lean two-stage framework that requires group labels only during training and mitigates bias at test time without any extra modules. First, Adaptive Bias-Integrated Learning (ABIL) explicitly captures the influence of known shortcuts by training a dedicated encoder for each bias attribute and integrating its features with the main backbone. Then Gradient-Suppression Fine-Tuning prunes those bias directions from the backbone's gradients, leaving a single compact network that ignores all known shortcuts.
We also introduce Scaled Bias Amplification (SBA), a robust test-time fairness metric that disentangles model-induced amplification from distribution shifts.

BibTeX
@inproceedings{rajeevr_ecai25,
  author    = {Dwivedi, Rajeev R and Kumar, Ankur and Kurmi, Vinod},
  title     = {Multi-Attribute Bias Mitigation via Representation Learning},
  booktitle = {ECAI},
  year      = {2025}
}
Model & Method

Stage 1 — Adaptive Bias–Integrated Learning (ABIL)
- ABIL trains a dedicated encoder for each known bias attribute to capture its spurious features. It computes cosine-similarity-based attention weights between these bias features and the main image representation, then fuses them. By presenting the classifier with both the clean and bias-accentuated signals, ABIL forces the model to recognize and disentangle spurious cues from task-relevant information, building a bias-aware backbone representation.
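
A minimal PyTorch sketch of the fusion step described above, assuming the backbone and every bias encoder emit D-dimensional features; the module and variable names (ABILFusion, feat_dim, bias_feats) and the post-fusion projection are illustrative, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ABILFusion(nn.Module):
    """Fuse the main image feature with per-attribute bias features
    using cosine-similarity attention weights (illustrative sketch)."""

    def __init__(self, feat_dim):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim)  # post-fusion projection (assumption)

    def forward(self, img_feat, bias_feats):
        # img_feat:   (B, D) clean backbone representation
        # bias_feats: list of (B, D) tensors, one per known bias attribute
        stacked = torch.stack(bias_feats, dim=1)                            # (B, K, D)
        # Cosine similarity between each bias feature and the image feature.
        sims = F.cosine_similarity(stacked, img_feat.unsqueeze(1), dim=-1)  # (B, K)
        attn = torch.softmax(sims, dim=1).unsqueeze(-1)                     # (B, K, 1)
        # Bias-accentuated signal: attention-weighted sum of bias features.
        bias_signal = (attn * stacked).sum(dim=1)                           # (B, D)
        # The classifier sees both the clean and the bias-accentuated signals.
        return self.proj(img_feat + bias_signal)

In Stage 1 the classifier head is trained on this fused representation, so the backbone learns which feature directions carry shortcut information before they are suppressed in Stage 2.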
Stage 2 — Gradient-Suppression Fine-Tuning
- After ABIL, the bias encoders are discarded, and the backbone is fine-tuned using only the clean image features. Each bias vector is projected onto a space orthogonal to the image feature, and the model is penalized for gradient components in these directions. This suppresses any residual reliance on spurious features, ensuring the final network remains invariant to all known biases while preserving meaningful semantic information.
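
A hedged sketch of the suppression penalty under stated assumptions: the per-sample bias vectors are cached from Stage 1 (the encoders themselves are discarded), the gradient is taken with respect to the image feature, and the weighting term lambda_supp and exact penalty form are illustrative rather than the paper's formulation.

import torch
import torch.nn.functional as F

def gradient_suppression_loss(task_loss, img_feat, bias_feats, lambda_supp=1.0):
    """Penalize task-loss gradient components that point along bias directions.

    task_loss:  scalar classification loss computed from img_feat
    img_feat:   (B, D) backbone features with requires_grad=True
    bias_feats: (B, K, D) bias vectors for the K known attributes
    """
    # Gradient of the task loss w.r.t. the features, kept in the graph
    # so the penalty itself can be backpropagated.
    grad_feat = torch.autograd.grad(task_loss, img_feat, create_graph=True)[0]  # (B, D)
    f = F.normalize(img_feat, dim=-1).unsqueeze(1)                              # (B, 1, D)
    # Component of each bias vector orthogonal to the image feature.
    ortho = bias_feats - (bias_feats * f).sum(dim=-1, keepdim=True) * f         # (B, K, D)
    dirs = F.normalize(ortho, dim=-1)                                           # (B, K, D)
    # Squared gradient components along those directions form the penalty.
    proj = (grad_feat.unsqueeze(1) * dirs).sum(dim=-1)                          # (B, K)
    penalty = (proj ** 2).sum(dim=1).mean()
    return task_loss + lambda_supp * penalty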
Scaled Bias Amplification (SBA)
- Scaled Bias Amplification (SBA) measures how much a model amplifies group attribute biases on unseen data using only test-set statistics. It compares predicted and actual co-occurrence proportions, applying a frequency-based scaling factor to balance rare and common subgroups. This design avoids train-test distribution shift issues, reduces noise from underrepresented groups, and produces a single, interpretable bias score that remains stable across imbalance levels.
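
An illustrative NumPy sketch of an SBA-style computation from test-set statistics alone; the square-root frequency weighting and the uniform averaging over subgroups are plausible choices for exposition, not the paper's exact formula.

import numpy as np

def scaled_bias_amplification(y_true, y_pred, groups):
    """y_true, y_pred: (N,) class labels; groups: (N,) bias-attribute labels."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    classes, attrs = np.unique(y_true), np.unique(groups)
    score = 0.0
    for c in classes:
        for a in attrs:
            # Actual vs. predicted co-occurrence proportion of (class c, attribute a).
            p_true = np.mean((y_true == c) & (groups == a))
            p_pred = np.mean((y_pred == c) & (groups == a))
            # Frequency-based scaling keeps rare subgroups from dominating the score
            # (sqrt weighting is an assumption; the paper defines the exact factor).
            score += np.sqrt(p_true) * abs(p_pred - p_true)
    return score / (len(classes) * len(attrs))

Because it uses only test predictions and test labels, the score can be computed for any model without reference to training-set statistics, which is what keeps it stable under train-test distribution shift.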

Key Contributions
- First practical, end-to-end multi-bias mitigation framework: handles multiple overlapping biases simultaneously using only group labels at training, with no extra modules at inference.
- Two-stage ABIL → Gradient-Suppression Fine-Tuning: ABIL learns to expose and integrate bias features, which are then suppressed via gradient orthogonalization, yielding a single robust backbone.
- Robust SBA metric: a test-only, subgroup-aware measure of bias amplification that remains stable under imbalance and distribution shifts.