Shift-Invariant Deep Learning for Time Series

Shifting the Paradigm: A Diffeomorphism Between Time Series Data Manifolds for Achieving Shift-Invariancy in Deep Learning (ICLR 2025, Official Code)

Berken Utku Demirel, Christian Holz


Deep learning models lack shift invariance, making them sensitive to input shifts that cause changes in output. While recent techniques seek to address this for images, our findings show that these approaches fail to provide shift-invariance in time series. Worse, they also decrease performance across several tasks. In this paper, we propose a novel differentiable bijective function that maps samples from their high-dimensional data manifold to another manifold of the same dimension, without any dimensionality reduction. Our approach guarantees that samples, when subjected to random shifts, are mapped to a unique point in the manifold while preserving all task-relevant information without loss. We theoretically and empirically demonstrate that the proposed transformation guarantees shift-invariance in deep learning models without imposing any limits on the shift. Our experiments on six time series tasks with state-of-the-art methods show that our approach consistently improves performance while enabling models to achieve complete shift-invariance without modifying or restricting the model's topology.

Illustration of shift-invariant transformation

(a) An input signal in the time domain and the complex-plane representation of its decomposed sinusoid with frequency $\omega_0 = \frac{2\pi}{T_0}$ and phase angle $\phi_0$. (b) Guiding the diffeomorphism to map samples between manifolds. (c) The obtained waveform after a phase shift, calculated from the angle difference, is applied linearly to all frequencies without distorting the waveform. (d) The loss functions for optimizing networks: the cross-entropy and the variance over possible manifolds.
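The mechanism in panel (c) can be summarized briefly: circularly shifting a signal only adds a phase ramp, linear in frequency, to its Fourier coefficients, so removing the ramp implied by the angle of one reference frequency maps every shifted copy to the same waveform. Below is a minimal NumPy sketch of that idea; the function name, the choice of reference bin `k_ref`, and the assumption that the signal has energy at that bin are ours for illustration, not this repository's implementation.

```python
import numpy as np

def canonicalize_phase(x, k_ref=1):
    """Map every circular shift of x to one canonical waveform.

    Circularly shifting x by tau multiplies its k-th Fourier coefficient
    by exp(-2j*pi*k*tau/N): a phase ramp that is linear in k. Removing
    the ramp implied by the angle of a reference bin cancels that factor
    for all frequencies at once, so the output no longer depends on tau.
    Assumes x has energy at bin k_ref; k_ref=1 sidesteps any 2*pi
    phase-wrapping ambiguity.
    """
    N = len(x)
    X = np.fft.fft(x)
    phi = np.angle(X[k_ref])            # phase angle of the reference bin
    k = np.fft.fftfreq(N, d=1.0 / N)    # signed bin indices; keeps the output real
    return np.fft.ifft(X * np.exp(-1j * phi * k / k_ref)).real

# Sanity check: all circular shifts map to (numerically) the same waveform.
t = np.linspace(0.0, 1.0, 128, endpoint=False)
x = np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
assert np.allclose(canonicalize_phase(x), canonicalize_phase(np.roll(x, 17)))
```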

Contents

Datasets

  1. Datasets
  2. After downloading the raw data, process it with the corresponding scripts, where applicable.

Running

With the guidance network:

python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --controller

Plain baseline, without any additions:

python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0

With blurring (low-pass filtering):

python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --blur
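For reference, the `--blur` flag corresponds to anti-aliased downsampling in the spirit of Zhang (2019): low-pass filter before striding so that subsampling discards less shift-dependent aliasing. A minimal 1D sketch of the idea, our illustration rather than this repository's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool1d(nn.Module):
    """Low-pass (binomial) filter applied before strided downsampling."""

    def __init__(self, channels, stride=2):
        super().__init__()
        kernel = torch.tensor([1.0, 2.0, 1.0])                # binomial blur kernel
        kernel = (kernel / kernel.sum()).repeat(channels, 1, 1)
        self.register_buffer("kernel", kernel)                # (channels, 1, 3)
        self.stride = stride
        self.channels = channels

    def forward(self, x):                                     # x: (batch, channels, time)
        x = F.pad(x, (1, 1), mode="reflect")                  # keep length before striding
        return F.conv1d(x, self.kernel, stride=self.stride, groups=self.channels)
```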

With polyphase sampling:

python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --aps
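`--aps` refers to adaptive polyphase sampling (Chaman & Dokmanic, 2021), which keeps the downsampling grid whose samples have the largest norm, so the choice of grid tracks circular shifts of the input. A rough 1D sketch under our own naming, not this repository's implementation:

```python
import torch

def aps_downsample(x, stride=2):
    """Adaptive polyphase downsampling of x: (batch, channels, time).

    Each phase p gives a candidate downsampling x[..., p::stride];
    selecting the candidate with the largest norm makes the selection
    consistent under circular shifts. Assumes time % stride == 0.
    """
    phases = torch.stack([x[..., p::stride] for p in range(stride)])  # (stride, B, C, T//stride)
    norms = phases.flatten(2).norm(dim=-1)                            # (stride, B)
    best = norms.argmax(dim=0)                                        # best phase per sample
    return phases[best, torch.arange(x.shape[0])]                     # (B, C, T//stride)
```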

With canonicalization:

python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --cano
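`--cano` canonicalizes inputs before the backbone, in the spirit of equiadapt: a shift-equivariant network scores positions, and rolling the peak score to a fixed index picks one representative from each orbit of circular shifts. A hypothetical sketch; the `score_net` interface is an assumption, not the library's API:

```python
import torch

def canonicalize_shift(x, score_net):
    """Pick one representative per orbit of circular shifts.

    score_net: shift-equivariant network mapping (B, C, T) -> (B, T).
    Rolling each sample so its peak score lands at index 0 yields the
    same output for any circular shift of that sample.
    """
    tau = score_net(x).argmax(dim=-1)  # (B,) estimated shift per sample
    return torch.stack([torch.roll(xi, shifts=-int(ti), dims=-1)
                        for xi, ti in zip(x, tau)])
```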

With the introduced transformation but without the guidance network (one of the ablations in the paper):

python main_supervised_baseline.py --dataset 'ieee_big' --backbone 'resnet' --block 8 --lr 5e-4 --n_epoch 999 --cuda 0 --phase_shift
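Whichever variant is trained, shift invariance can be sanity-checked empirically by comparing predictions on rolled copies of the same inputs. A hypothetical helper, assuming a trained PyTorch classifier `model` over (batch, channels, time) tensors rather than this repository's API:

```python
import torch

def shift_consistency(model, x, n_shifts=8):
    """Fraction of random circular shifts whose predicted class matches
    the prediction on the unshifted input; 1.0 means the model is
    empirically shift-invariant on this batch."""
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=-1)
        hits = []
        for _ in range(n_shifts):
            tau = torch.randint(1, x.shape[-1], (1,)).item()
            shifted = torch.roll(x, shifts=tau, dims=-1)
            hits.append((model(shifted).argmax(dim=-1) == base).float())
    return torch.stack(hits).mean().item()
```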

TLDR

...

Citation

If you find our paper or codes useful, please cite our work:

@inproceedings{demirel2025shifting,
  title={Shifting the Paradigm: A Diffeomorphism Between Time Series Data Manifolds for Achieving Shift-Invariancy in Deep Learning},
  author={Berken Utku Demirel and Christian Holz},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=nibeaHUEJx}
}

Credits

Canonicalization is adapted from the equiadapt library, which makes neural network architectures equivariant.
