Dr. Christian Pehle

Short CV

I am a physicist, currently working as a postdoc in the STRUCTURES Cluster of Excellence at Heidelberg University. I love working on challenging, open-ended questions that can be approached theoretically but come with practical constraints.

I am broadly interested in both algorithmic and theoretical aspects of physical computing, in particular spiking neurons, neuromorphic computing, and quantum computing. My main research interests are:

  • Gradient-based learning in Spiking Neural Networks
  • Differentiable Simulation of Neuron Dynamics and other Dynamical Systems
  • Physical Computation and Learning in Physical Systems
  • Optimal Control Theory applied to (Machine) Learning

One main result I obtained during my PhD, in collaboration with Timo Wunderlich, is an event-based analog of the backpropagation algorithm for spiking neural networks. It computes exact gradients for arbitrary network topologies and a large class of loss functions, overturning the commonly held belief that spike discontinuities make it impossible to define exact gradients with respect to parameters.
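
Schematically (a sketch loosely following the notation of the EventProp paper; the full adjoint dynamics and jump conditions are derived there), adjoint variables are integrated backward in time between spikes, and the gradient with respect to a weight w_{ji} from neuron j to neuron i picks up contributions only at the presynaptic spike times t_k:

```latex
% Sketch only: \lambda_{I,i} is the adjoint of neuron i's synaptic current,
% integrated backward in time between spikes, with jumps at spike times.
% The weight gradient is then a finite sum over presynaptic spike events:
\frac{\partial \mathcal{L}}{\partial w_{ji}}
    = -\tau_{\mathrm{syn}} \sum_{t_k \,\in\, \mathrm{spikes}(j)} \lambda_{I,i}(t_k)
```

Because the gradient is a sum over discrete events rather than an integral over dense activity traces, it can be accumulated by an event-driven backward pass.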

In collaboration with Christof Wetterich, I worked on approximating quantum density matrices by correlations in classical spin systems. To do so, we used a novel gradient-based, end-to-end optimization approach to neural sampling with networks of spiking neurons that makes no assumption about an equilibrium distribution.

In my Master's thesis, supervised by Timo Weigand, I proposed a novel method to count massless matter in string theory (F-theory). My Bachelor's thesis, supervised by Arthur Hebecker, was on complex structure deformations and moduli spaces of elliptically fibered Calabi-Yau fourfolds.

For more details see my research statement.

A list of my publications and preprints can be found on Google Scholar.

Projects

Norse: ML library for Spiking Neural Networks

2019-today

Norse is a machine learning library I co-created for gradient-based optimization of spiking neural networks using PyTorch. It provides simple point-neuron primitives and abstractions for composing these primitives into networks. Norse was designed to be more accessible and reusable for machine-learning researchers and domain experts compared to previous work, which was often focused on abstractions familiar to computational neuroscientists or tied to a specific publication or model. For example, researchers at the European Space Agency are using Norse to train and evaluate a large number of spiking neural networks for satellite image processing. Norse has now been adopted by multiple research labs, and I am continuing to co-develop the library with Jens Pedersen.
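
For a flavor of the API, here is a minimal usage sketch (assuming the norse.torch module with its LIFCell and SequentialState primitives; the layer sizes are arbitrary):

```python
import torch
import norse.torch as norse

# Compose point-neuron primitives with standard PyTorch layers.
# SequentialState threads the neuron state through the composition.
model = norse.SequentialState(
    torch.nn.Linear(8, 16),
    norse.LIFCell(),          # leaky integrate-and-fire point neurons
    torch.nn.Linear(16, 2),
)

x = torch.randn(1, 8)
output, state = model(x)      # stateful modules return (output, state)
output.sum().backward()       # surrogate gradients flow through the spikes
```

Because the primitives behave like ordinary PyTorch modules, they slot directly into existing training loops, optimizers, and data pipelines.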

BrainScaleS-2: Analog Neuromorphic Hardware System

[Research Article] [Demos and Documentation]

Hardware Development: 2015-2018

As part of the design team for the BrainScaleS-2 analog neuromorphic hardware system at Heidelberg University's Electronic Vision(s) group, I worked on digital hardware design and verification, with a focus on the plasticity processing unit, an embedded microprocessor with single-instruction, multiple-data (SIMD) capabilities that enables programmable plasticity. I also maintained and extended the prototype field-programmable gate array (FPGA) interface, including the spike router and I/O unit, which enabled most of the experiments performed with the prototype systems. My extensions to the plasticity processing unit were key to enabling both hardware-in-the-loop gradient-based learning and on-chip automatic calibration of analog hardware parameters, which is crucial for scalability. In total, I was involved in three successful tapeouts (two prototypes and one full-scale system) of the design in TSMC's 65 nm technology.

Software & Experiments: 2018-today

More recently, I have contributed to the machine learning parts of the software stack, including the PyTorch-based API, and led the redesign of our lab course for graduate students interested in working with our neuromorphic hardware system.

I am also working, in collaboration with Luca Blessing, on implementing an event-based gradient estimation algorithm (pdf) for hardware-in-the-loop training. It is roughly three orders of magnitude more information-efficient than previous gradient estimators, which matters because those approaches would not have scaled to larger system architectures.
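
The overall hardware-in-the-loop structure looks roughly like the following sketch (illustrative only: run_on_hardware is a hypothetical stand-in for the real chip interface, and the finite-difference estimator is a placeholder for the event-based one, chosen here just so the example runs):

```python
import torch

def run_on_hardware(params: torch.Tensor) -> torch.Tensor:
    # Stand-in for the analog chip: from the host's point of view it is a
    # non-differentiable black box. Here it is a smooth toy map from
    # parameters to observed firing rates, so the example is self-contained.
    with torch.no_grad():
        return torch.sigmoid(params)

def loss(observations: torch.Tensor) -> torch.Tensor:
    # Toy objective: drive the mean observed rate toward a target.
    return (observations.mean() - 0.2) ** 2

params = torch.zeros(16)
eps, lr = 1e-2, 0.5
for step in range(100):
    base = loss(run_on_hardware(params))
    grad = torch.zeros_like(params)
    for i in range(len(params)):
        bumped = params.clone()
        bumped[i] += eps
        grad[i] = (loss(run_on_hardware(bumped)) - base) / eps
    params -= lr * grad  # host-side update; the chip never sees gradients
```

The key constraint this loop illustrates is that every bit of gradient information must be extracted from what the hardware emits, which is why an information-efficient, event-based estimator matters at scale.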

Neural Processing Elements

My long-term goal is to develop a framework, based on category theory, for describing self-optimizing "machines" that I call "Neural Processing Elements". These machines can be composed and nested to form larger machines. I describe this research direction in chapter 2 of my thesis.
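
To make the compositional flavor concrete, here is a toy sketch (entirely illustrative; the names Machine and compose are hypothetical and far simpler than the categorical formulation in the thesis): a machine pairs a parameterized forward map with a local update rule, and two machines compose into a machine of the same shape.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Machine:
    """Toy 'neural processing element': a parameterized map together
    with a local self-optimization (update) rule."""
    params: Any
    forward: Callable[[Any, Any], Any]       # (params, x) -> y
    update: Callable[[Any, Any, Any], Any]   # (params, x, feedback) -> params'

def compose(f: Machine, g: Machine) -> Machine:
    """Sequential composition is again a Machine: parameters pair up,
    forward maps chain, and feedback is applied to each part locally."""
    def forward(params, x):
        pf, pg = params
        return g.forward(pg, f.forward(pf, x))
    def update(params, x, feedback):
        pf, pg = params
        mid = f.forward(pf, x)
        return (f.update(pf, x, feedback), g.update(pg, mid, feedback))
    return Machine((f.params, g.params), forward, update)

# Usage: two scalar machines nudging their gain toward a feedback signal.
scale = lambda p, x: p * x
nudge = lambda p, x, fb: p + 0.1 * (fb - p * x) * x   # toy local rule
m = compose(Machine(1.0, scale, nudge), Machine(0.5, scale, nudge))
y = m.forward(m.params, 2.0)
m.params = m.update(m.params, 2.0, 1.0)
```

The point of the categorical framing is precisely that composition and nesting preserve the interface, so larger machines can be built and optimized from smaller ones.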

Address:

Kirchhoff Institute for Physics

Heidelberg University

Im Neuenheimer Feld 227

69120 Heidelberg

Email:

christian.pehle@kip.uni-heidelberg.de

GitHub:

github.com/cpehle