Shifting the Paradigm: A Diffeomorphism Between Time Series Data Manifolds for Achieving Shift-Invariancy in Deep Learning (ICLR 2025, Official Code)
Berken Utku Demirel, Christian Holz
Deep learning models lack shift invariance, making them sensitive to input shifts that cause changes in output. While recent techniques seek to address this for images, our findings show that these approaches fail to provide shift-invariance in time series. Worse, they also decrease performance across several tasks. In this paper, we propose a novel differentiable bijective function that maps samples from their high-dimensional data manifold to another manifold of the same dimension, without any dimensional reduction. Our approach guarantees that samples--when subjected to random shifts--are mapped to a unique point in the manifold while preserving all task-relevant information without loss. We theoretically and empirically demonstrate that the proposed transformation guarantees shift-invariance in deep learning models without imposing any limit on the shift. Our experiments on six time series tasks with state-of-the-art methods show that our approach consistently improves performance while enabling models to achieve complete shift-invariance without modifying or imposing restrictions on the model's topology.
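The flavor of the approach -- mapping all circularly shifted copies of a signal to one canonical point -- can be illustrated with a simplified sketch. This is not the paper's actual diffeomorphism; it is a toy canonicalization that undoes a circular shift by reading it off the phase of the signal's dominant frequency component:

```python
import numpy as np

def canonicalize_shift(x):
    """Map every circular shift of x to the same canonical signal
    by zeroing the phase of its dominant frequency component."""
    N = len(x)
    X = np.fft.rfft(x)
    # dominant non-DC bin (assumed unique and non-zero for this sketch)
    k0 = 1 + int(np.argmax(np.abs(X[1:])))
    # a circular shift by s changes this bin's phase by -2*pi*k0*s/N,
    # so the current phase tells us how far to roll back
    phi = np.angle(X[k0])
    s = int(round(phi * N / (2 * np.pi * k0))) % N
    return np.roll(x, s)
```

Note that when the dominant bin `k0 > 1`, the phase only determines the shift up to multiples of `N/k0`, so this toy version is unambiguous only when the fundamental dominates; the paper's transformation avoids such limitations.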
(Figure, panel a) An input signal in the time domain and the complex-plane representation of its decomposed sinusoids by frequency.
- Datasets
  - Activity recognition: UCIHAR, HHAR, USC
  - Heart rate prediction: IEEE SPC22, DaLiA
  - Cardiovascular disease (CVD) classification: CPSC2018, Chapman
  - Sleep stage classification: Sleep-EDF
  - Step counting: Clemson
  - Lung audio classification: Respiratory@TR
- After downloading the raw data, process it with the corresponding preprocessing scripts, where provided.
The command to run with the guidance network:

```shell
python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --controller
```
A straightforward run without any shift-invariance method:

```shell
python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0
```
With blurring (low-pass filtering):

```shell
python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --blur
```
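Blurring follows the anti-aliasing idea of BlurPool (Zhang, 2019): low-pass filter before subsampling so that downsampling commutes with shifts. A minimal 1-D sketch with a binomial kernel and circular padding (an illustration, not this repository's implementation):

```python
import numpy as np

def blur_pool_1d(x, stride=2):
    # binomial low-pass kernel [1, 2, 1] / 4 applied before subsampling
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    # circular padding, so the blur is a circular convolution
    padded = np.concatenate([x[-1:], x, x[:1]])
    blurred = np.convolve(padded, kernel, mode='valid')
    # subsample after blurring; shifting x by `stride` now just
    # shifts the output by one sample instead of changing its values
    return blurred[::stride]
```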
With polyphase sampling:

```shell
python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --aps
```
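Adaptive polyphase sampling (APS) instead selects, at each downsampling step, the polyphase component with the largest norm, so a one-sample shift no longer changes which samples survive. A toy 1-D version (an illustration, not the repository's code):

```python
import numpy as np

def aps_downsample(x, stride=2):
    # split x into its polyphase components
    phases = [x[p::stride] for p in range(stride)]
    # keep the component with the largest l2 norm; under a circular
    # shift the components permute (up to a roll), so the winning
    # component's content is unchanged
    norms = [np.linalg.norm(p) for p in phases]
    return phases[int(np.argmax(norms))]
```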
With canonicalization:

```shell
python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --cano
```
Without the guidance network but including the introduced transformation (one of the ablations in the paper):

```shell
python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --phase_shift
```
...
If you find our paper or code useful, please cite our work:
```
@inproceedings{demirel2025shifting,
  title={Shifting the Paradigm: A Diffeomorphism Between Time Series Data Manifolds for Achieving Shift-Invariancy in Deep Learning},
  author={Berken Utku Demirel and Christian Holz},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=nibeaHUEJx}
}
```
Canonicalization is adapted from the equiadapt library, which makes neural network architectures equivariant.