Publications by Axel Jantsch

Sorted by year

2026

[1] Jonas Janser, Matthias Wess, Dominik Dallinger, Matthias Bittner, Daniel Schnöll, and Axel Jantsch. Spring reverb emulation with hybrid gated convolutional networks and state space models. In IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain, May 2026. [ bib | .pdf ]
[2] Philipp Lehninger, Ardavan Elahi, Daniel Schnöll, Axel Jantsch, and Thilo Sauter. Hardware-efficient System State Detection for Embedded Condition Monitoring. IEEE Sensors Letters, pages 1--4, 2026. [ bib | DOI | .pdf ]
[3] Jan Timmer, Ardavan Elahi, Philipp Lehninger, and Axel Jantsch. A Low-Resource Hardware Design for Bearing Fault Detection using Support Vector Machines. IEEE Access, 2026. [ bib | DOI | .pdf ]
[4] Oscar Artur Bernd Berg, Eiraj Saqib, Axel Jantsch, Isaac Sánchez Leal, Irida Shallari, Silvia Krug, and Mattias O'Nils. BranchySplit: Dynamic Partitioning and Early Exits for Accelerated Edge Inference. IEEE Access, pages 1--1, 2026. [ bib | DOI ]
[5] Erik Bonek, Matthias Bittner, Daniel Hauer, Stefan Wilker, and Axel Jantsch. Benchmarking Recurrent Neural Networks for Efficient Load Forecasting in Low-Voltage Grids. In Gyu Myoung Lee and Pierluigi Siano, editors, Intelligent Technology for Power and Energy Systems, pages 47--58, Cham, 2026. Springer Nature Switzerland. [ bib | DOI | http ]
Accurate load forecasting is essential for efficient grid management, contributing to energy stability and the integration of renewable resources. Recurrent Neural Networks (RNNs) have become particularly popular due to their ability to capture temporal dependencies in load data. However, the optimal RNN architecture and hyperparameter choices for achieving both high accuracy and computational efficiency remain an open question. In this paper, we compare the performance of Long Short-Term Memories (LSTMs), Gated Recurrent Units (GRUs), State Space Models (SSMs), MinLSTM, and MinGRU models across hyperparameter settings ranging from 14 to around 365k parameters and test their performance on simulated load data for two low-voltage grid topologies. Results show that the LSTM performs best across all tested models. Furthermore, we observe that the benefits of increasing model size for the tested RNNs plateau relatively quickly after around 5k parameters. To further increase the parameter efficiency of trained SSM models, we apply model order reduction via balanced truncation. This reduces the parameter count of trained SSM models by up to 79% without retraining, while maintaining the same level of accuracy and, in some cases, even improving the test loss by up to 6%.

2025

2024

2023

2022

2021

2020

2019

2018

2017

2016

2015

2014

2013

2012

2011

2010

2009

2008

2007

2006

2005

2004

2003

2002

2001

2000

1999

1998

1997

1996

1995

1994

1993

1992


This file was generated by bibtex2html 1.99.

Saturday, 11 April 2026, 08:16:00