KAT-ReID – VeRi-776
Model Details
- Model name: KAT-ReID (VeRi-776)
- Architecture: Kolmogorov–Arnold Transformer (KAT) with GR-KAN channel mixers
- Task: Vehicle Re-Identification
- Dataset: VeRi-776
- Framework: PyTorch
- License: MIT
- Paper: KAT-ReID: Assessing the Viability of Kolmogorov–Arnold Transformers in Object Re-Identification
This model replaces the MLP blocks of a ViT-based ReID backbone with Group-Rational Kolmogorov–Arnold Networks (GR-KAN) while retaining self-attention and ReID-specific architectural components.
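As a rough illustration of this swap, below is a minimal sketch of a GR-KAN-style channel mixer: a linear expansion, a learnable rational activation shared per channel group, and a linear projection. The module names, polynomial degrees, and group count here are simplified assumptions for exposition, not the released implementation.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational function P(x)/Q(x), shared by one channel group.

    The denominator is kept positive via abs() so the activation has no poles.
    Degrees (3, 2) are an illustrative choice, not necessarily the paper's setting.
    """
    def __init__(self, p_degree: int = 3, q_degree: int = 2):
        super().__init__()
        self.p = nn.Parameter(torch.randn(p_degree + 1) * 0.1)
        self.q = nn.Parameter(torch.randn(q_degree) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num = sum(c * x.pow(i) for i, c in enumerate(self.p))
        den = 1.0 + torch.abs(sum(c * x.pow(i + 1) for i, c in enumerate(self.q)))
        return num / den

class GRKANMixer(nn.Module):
    """Channel mixer replacing the ViT MLP: linear -> grouped rational activation -> linear."""
    def __init__(self, dim: int, hidden_dim: int, groups: int = 8):
        super().__init__()
        assert hidden_dim % groups == 0
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)
        self.groups = groups
        self.acts = nn.ModuleList([RationalActivation() for _ in range(groups)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc1(x)                                    # (B, N, hidden_dim)
        chunks = h.chunk(self.groups, dim=-1)              # one chunk per channel group
        h = torch.cat([act(c) for act, c in zip(self.acts, chunks)], dim=-1)
        return self.fc2(h)
```

In a standard pre-norm transformer block this mixer would take the place of the MLP after attention, e.g. `x = x + mixer(norm(x))`, while the self-attention and ReID-specific components stay unchanged.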
Model Description
The model is trained for vehicle re-identification, where the goal is to retrieve images of the same vehicle across different cameras and viewpoints.
Key architectural features:
- GR-KAN replaces standard MLP channel mixers
- Side-information embedding (camera/view conditioning)
- Local token rearrangement branch to preserve spatial cues
- Joint optimization with ID classification and metric learning losses (see the loss sketch after this list)
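The sketch below combines a cross-entropy ID loss with a batch-hard triplet loss over an identity-balanced batch, which is one common way to realize this joint objective. The margin value, equal weighting, and function names are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(features: torch.Tensor, labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """Batch-hard triplet loss over an identity-balanced batch (P IDs x K images)."""
    dist = torch.cdist(features, features)                   # pairwise L2 distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Hardest positive: farthest sample sharing the query's identity.
    hardest_pos = (dist * same_id.float()).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    dist_neg = dist + same_id.float() * 1e6                  # mask out positives
    hardest_neg = dist_neg.min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def reid_loss(logits: torch.Tensor, features: torch.Tensor,
              labels: torch.Tensor) -> torch.Tensor:
    """Joint objective: ID cross-entropy + triplet loss (equal weighting assumed)."""
    return F.cross_entropy(logits, labels) + batch_hard_triplet_loss(features, labels)
```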
Training Data
- Dataset: VeRi-776
- Identities: 776 vehicles
- Images: 49,357
- Cameras: 20
- Views: 8
Training follows the official dataset split and evaluation protocol.
Training Procedure
- Pretraining: ImageNet-1K
- Input resolution: 256 × 128
- Patch size: 16 × 16 (overlapping stride 12)
- Optimizer: SGD (lr=0.008, momentum=0.9)
- Batch size: 64 (16 IDs × 4 images)
- Losses: Cross-Entropy (ID) + Triplet loss
- Augmentations: Random flip, random erasing
- Mixed precision: Enabled (a minimal training-loop sketch follows this list)
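A minimal sketch of this setup (SGD with lr=0.008 and momentum 0.9, flip and random-erasing augmentation, mixed precision) is shown below. The `model`, `train_loader`, and `reid_loss` objects are placeholders, the ImageNet normalization statistics are an assumption, and this is not the released training script.

```python
import torch
from torchvision import transforms

# Augmentations listed above; normalization uses the usual ImageNet statistics (an assumption).
train_transform = transforms.Compose([
    transforms.Resize((256, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    transforms.RandomErasing(),
])

def train_one_epoch(model, loader, reid_loss, optimizer, scaler, device="cuda"):
    """One epoch of mixed-precision training; `model` is assumed to return (ID logits, embeddings)."""
    model.train()
    for images, labels in loader:                 # identity-balanced batches (16 IDs x 4 images)
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            logits, features = model(images)
            loss = reid_loss(logits, features, labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()

# Usage (model, train_loader, and reid_loss are placeholders):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.008, momentum=0.9)
# scaler = torch.cuda.amp.GradScaler()
# train_one_epoch(model, train_loader, reid_loss, optimizer, scaler)
```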
Evaluation Results
| Metric  | Score (%) |
|---------|-----------|
| mAP     | 59.5      |
| Rank-1  | 88.0      |
| Rank-5  | 95.8      |
| Rank-10 | 98.0      |
Results are reported under single-query evaluation without re-ranking.
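For reference, a simplified sketch of the single-query protocol without re-ranking is given below: gallery images of the same vehicle captured by the same camera as the query are excluded, and mAP and Rank-k are computed from the ranked distance lists. The array names are illustrative, and this is not the official evaluation code.

```python
import numpy as np

def evaluate(q_feats, g_feats, q_ids, g_ids, q_cams, g_cams, topk=(1, 5, 10)):
    """Single-query mAP and CMC (no re-ranking). Inputs are NumPy arrays."""
    dist = np.linalg.norm(q_feats[:, None] - g_feats[None, :], axis=-1)  # (Q, G)
    aps, cmc = [], np.zeros(max(topk))
    for i in range(len(q_feats)):
        order = np.argsort(dist[i])
        # Drop gallery images of the same vehicle seen by the same camera as the query.
        keep = ~((g_ids[order] == q_ids[i]) & (g_cams[order] == q_cams[i]))
        matches = (g_ids[order][keep] == q_ids[i]).astype(np.float32)
        if matches.sum() == 0:
            continue
        # Average precision for this query.
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
        # CMC: rank of the first correct match.
        first_hit = int(np.argmax(matches))
        cmc[first_hit:] += 1
    cmc /= len(aps)
    return float(np.mean(aps)), {k: float(cmc[k - 1]) for k in topk}
```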
Intended Use
This model is intended for:
- Academic research in ReID
- Benchmarking alternative transformer channel mixers
- Studying robustness under viewpoint variation
The model is not intended for real-world surveillance or production deployment without further validation.
Limitations
- Underperforms strong ViT baselines on globally discriminative vehicle benchmarks
- Training stability is sensitive to the rational activation parameterization
- Performance may vary with different camera distributions
Citation
@inproceedings{umair2025katreid,
title={KAT-ReID: Assessing the Viability of Kolmogorov--Arnold Transformers in Object Re-Identification},
author={Umair, Muhammad and Zhou, Jun and Musaddiq, Muhammad Hammad and Muhammad, Ahmad},
year={2025}
}