
Zander Labs integrates mobile brain–computer interface stack in NAFAS milestone

27 January 2026


An exclusive interview with Dr. Laurens R. Krol, Co-Founder & Research Director at Zander Labs

On January 8, 2026, Zander Labs announced a milestone in non-invasive brain–computer interface (BCI) technology: the successful demonstration of a fully integrated, mobile system capable of capturing brain signals, decoding mental states in real time, and keeping all neural data processing local.

The demonstration was carried out within NAFAS (Neuroadaptivity for Autonomous Systems), a multi-year research project funded by the German Agency for Innovation in Cybersecurity GmbH (often referred to as the Cyber Agency or Cyberagentur).

We interviewed Dr. Laurens R. Krol, Co-Founder and Research Director at Zander Labs, to understand what “fully integrated” means in practice, which technical barriers were overcome, and what this milestone enables next.


What’s new: an end-to-end passive BCI system

BCIs have existed for decades, but most remain confined to laboratories. Systems are often bulky, require expert setup, rely on conductive gels, and demand long calibration sessions before they work for a single user.

Zander Labs’ January demonstration does not claim to solve all of these challenges at once, but it does mark an important systems-level step: for the first time within NAFAS, all components required for a mobile, end-to-end passive brain–computer interface (pBCI) were shown working together.

A passive BCI differs from more familiar “active” BCIs. Rather than allowing a user to consciously issue commands by thinking, a pBCI continuously infers latent mental states—such as cognitive load—while the user performs normal tasks. Those states can then be used by software systems to adapt their behavior without explicit user input.

As Krol explains, “fully integrated” refers to functional completeness rather than a single physical device:

“We have developed all necessary components for a complete, end-to-end, mobile BCI system, starting from the electrodes that capture brain activity down to producing numeric output representing decoded mental states. Note however that not all of these operations are performed by the same device: there are multiple physical components that are seamlessly connected.”


How it works: from brain signals to decoded mental states

The NAFAS system demonstrated in January consists of three tightly connected physical components.

1. Zypher electrodes

Brain activity is measured using electroencephalography (EEG), a non-invasive technique that records tiny electrical signals generated by neural activity. Traditional EEG systems are often rigid, gel-based, and uncomfortable.

Zander Labs’ Zypher electrodes are flex-printed patches designed to be nearly imperceptible to the wearer while still enabling high-quality EEG signal acquisition.

2. Zypher amplifier

EEG signals are extremely weak and must be amplified and digitized before analysis. This is handled by the Zypher amplifier, described by Zander Labs as the world’s smallest 24-channel EEG amplifier. Its role is to amplify the analog signals from the electrodes and convert them into digital data suitable for processing, while remaining lightweight and mobile.

3. Passive BCI (pBCI) module

The digitized EEG data is transmitted wirelessly to a separate pBCI module. This processing unit runs Zander Labs’ decoding algorithms, translating ongoing brain activity into numeric representations of multiple mental processes in real time.

These outputs can then be consumed by downstream systems—for example, to adapt a user interface based on inferred cognitive load.
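To make that data flow concrete, the sketch below traces the three stages in Python. Every name in it (read_electrode_samples, amplify_and_digitize, decode_mental_states) and every number (sampling rate, gain, thresholds) is invented for illustration; Zander Labs has not published its interfaces.

```python
# Illustrative sketch of the three-stage NAFAS data flow described above.
# All names and parameters are hypothetical, not Zander Labs' actual API.

import random
from typing import Dict, List

N_CHANNELS = 24          # the Zypher amplifier digitizes 24 EEG channels
SAMPLE_RATE_HZ = 250     # assumed sampling rate, for illustration only

def read_electrode_samples(n_channels: int) -> List[float]:
    """Stage 1: analog microvolt-level signals from the electrodes
    (simulated here with Gaussian noise)."""
    return [random.gauss(0.0, 10e-6) for _ in range(n_channels)]

def amplify_and_digitize(analog: List[float]) -> List[int]:
    """Stage 2: the amplifier boosts each channel and quantizes it
    to integer counts suitable for digital processing."""
    gain = 1e6
    return [int(v * gain) for v in analog]

def decode_mental_states(window: List[List[int]]) -> Dict[str, float]:
    """Stage 3: the pBCI module turns a window of digitized EEG into
    numeric mental-state estimates (placeholder logic)."""
    mean_amplitude = sum(abs(s) for row in window for s in row) / (
        len(window) * len(window[0]))
    return {"cognitive_load": min(1.0, mean_amplitude / 50.0)}

# One second of data through the pipeline, end to end.
window = [amplify_and_digitize(read_electrode_samples(N_CHANNELS))
          for _ in range(SAMPLE_RATE_HZ)]
states = decode_mental_states(window)
print(states)  # e.g. {'cognitive_load': 0.16} -- consumable downstream
```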


What made this hard: wearability, mobility, and calibration

Each component addressed a long-standing obstacle in real-world neurotechnology.

Wearability has historically limited EEG to controlled environments. Bulky headsets, gel-based electrodes, and expert setup restrict comfort and mobility. Krol notes that developing the Zypher electrodes required rethinking materials, production methods, and form factors to maintain signal quality without those drawbacks.

Mobility posed similar challenges at the amplification stage. Conventional EEG amplifiers are often heavy and power-hungry. The task was to reduce size and weight while extending battery life, without compromising signal fidelity.

The most persistent challenge, however, lay in the decoding algorithms. While algorithms for decoding individual mental processes from EEG data have long existed, they typically require extensive user-specific calibration. According to Krol, recalibration can take up to an hour per mental process, effectively preventing practical multi-signal decoding.

“Our challenge here was to design and train algorithms that work in a ‘plug-and-play’ manner, allowing mental processes to be decoded from brain activity without requiring lengthy, individual calibration sessions.”

Zander Labs addresses this by training what it calls user-independent classifiers on thousands of EEG datasets collected as part of NAFAS. These classifiers are designed to generalize across users rather than being retrained from scratch for each individual.
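The principle behind such classifiers can be illustrated with a toy example: train on pooled data from many subjects, then evaluate on a subject the model has never seen. The sketch below uses synthetic features and an off-the-shelf linear discriminant; it stands in for the general idea of cross-subject generalization, not Zander Labs’ actual training pipeline.

```python
# Toy leave-one-subject-out illustration of "user-independent" decoding.
# Features, offsets, and the model are generic stand-ins.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 20, 100, 32

def simulate_subject():
    """Synthetic EEG features: two classes (low vs. high load) with class
    means shared across subjects, plus a subject-specific offset that
    mimics inter-individual variability."""
    offset = rng.normal(0.0, 0.5, n_features)
    half = trials_per_subject // 2
    X0 = rng.normal(0.0, 1.0, (half, n_features)) + offset
    X1 = rng.normal(0.8, 1.0, (half, n_features)) + offset
    return np.vstack([X0, X1]), np.array([0] * half + [1] * half)

subjects = [simulate_subject() for _ in range(n_subjects)]

# Pool all but the last subject for training; test on the held-out one.
X_train = np.vstack([X for X, _ in subjects[:-1]])
y_train = np.concatenate([y for _, y in subjects[:-1]])
X_test, y_test = subjects[-1]

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"held-out subject accuracy: {clf.score(X_test, y_test):.2f}")
# A classifier that generalizes this way needs no per-user calibration session.
```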


Why privacy is built into the architecture

Decoding mental states raises obvious privacy concerns. EEG data—raw or processed—can contain sensitive information. Rather than relying on cloud-based processing, Zander Labs chose a different architectural path.

“Rather than having our classifiers run on cloud servers, requiring brain data to be sent over the internet, we designed a piece of hardware that can run our classifiers locally and allows the user to decide exactly what information is provided to downstream applications.”

All neural processing in the demonstrated system occurs locally on the pBCI module. Raw EEG data does not need to leave the device, and users retain control over what decoded information is shared. Privacy is therefore not an add-on, but a design constraint embedded at the hardware level.
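A rough sketch of what such a gate might look like in software is shown below. The class and method names are hypothetical, since Zander Labs has not published the module’s interface; the point is the shape of the design, in which raw data stays inside the device boundary and only user-approved outputs cross it.

```python
# Sketch of the privacy model described above: raw EEG never leaves the
# module, and only user-whitelisted decoded states are exposed.
# All names here are illustrative, not Zander Labs' actual API.

from typing import Dict, List, Set

class LocalPBCIModule:
    def __init__(self, shareable: Set[str]):
        # The user decides which decoded states may leave the device.
        self.shareable = shareable
        self._raw_eeg_buffer: List[List[int]] = []   # never exposed

    def ingest(self, digitized_samples: List[int]) -> None:
        self._raw_eeg_buffer.append(digitized_samples)

    def _decode(self) -> Dict[str, float]:
        # Placeholder for the on-device classifiers running over the buffer.
        return {"cognitive_load": 0.42, "surprise": 0.07}

    def outputs_for_apps(self) -> Dict[str, float]:
        """Downstream applications see only what the user has allowed."""
        decoded = self._decode()
        return {k: v for k, v in decoded.items() if k in self.shareable}

module = LocalPBCIModule(shareable={"cognitive_load"})
module.ingest([0] * 24)
print(module.outputs_for_apps())  # {'cognitive_load': 0.42} -- no raw EEG
```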


What the system demonstrated—and what it hasn’t yet

During the January milestone demonstration, the integrated system detected levels of cognitive load and used that information to adapt a user interface in real time—illustrating how “brain-informed systems” could optimize human–computer interaction.
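What adapting an interface in real time can mean is easy to sketch: a downstream application reads the decoded load value and switches between interface modes, with hysteresis so the UI does not flicker around a single threshold. The thresholds and behavior below are invented for illustration, not taken from the demonstration.

```python
# How a downstream interface might consume the decoded cognitive-load
# signal. Thresholds and the adaptation itself are illustrative only.

def adapt_interface(load: float, simplified: bool) -> bool:
    """Switch into a simplified mode under high load, with hysteresis
    so the interface does not oscillate around one threshold."""
    if not simplified and load > 0.7:
        return True     # declutter: hide secondary panels, slow updates
    if simplified and load < 0.5:
        return False    # restore the full interface once load drops
    return simplified

simplified = False
for load in [0.30, 0.65, 0.75, 0.72, 0.45]:   # stream of decoded values
    simplified = adapt_interface(load, simplified)
    print(f"load={load:.2f} -> simplified={simplified}")
```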

From a technical standpoint, Krol reports that the system met performance benchmarks across sensing, signal quality, and end-to-end inference. The printed electrodes reliably recovered event-related potentials (ERPs)—characteristic EEG responses to specific events—across multiple standard paradigms, with statistically significant class separation.

The amplifier supported stable, low-noise recordings suitable for single-trial analysis, meaning mental states could be inferred without averaging across repeated trials. Outputs from the modular processing stack were functionally identical to those of established PC-based software pipelines.
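The distinction between trial-averaged ERPs and single-trial analysis can be made concrete with synthetic data: a weak, fixed response buried in noise is obvious after averaging a hundred trials, while single-trial inference must recover it from each noisy epoch on its own. The signal and numbers below are synthetic and purely illustrative.

```python
# Trial averaging vs. single-trial analysis, on synthetic epochs:
# a weak template response ("ERP") hidden in much larger noise.

import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 100, 200
erp = np.exp(-((np.arange(n_samples) - 80) ** 2) / 200.0)  # template response
epochs = 0.5 * erp + rng.normal(0.0, 1.0, (n_trials, n_samples))

# Classic ERP: averaging suppresses the noise and reveals the response.
grand_average = epochs.mean(axis=0)
print("peak of trial average:", grand_average.max().round(2))

# Single-trial estimate: match each noisy epoch against the template.
single_trial_scores = epochs @ erp / np.linalg.norm(erp)
print("first per-trial scores:", single_trial_scores[:3].round(2))
```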

At the same time, Krol is explicit about the current evaluation scope. While the end-to-end system was demonstrated as an integrated whole, testing so far has occurred at the component level across different environments. Core algorithms were validated under controlled laboratory and simulated conditions, while sensing hardware and acquisition pipelines were tested under mobile and long-duration conditions outside the lab.


Who built it: a distributed innovation effort

NAFAS is a collaborative project involving several specialized partners:

  • Fraunhofer-Institut für Digitale Medientechnologie IDMT (Oldenburg, Germany): electrode development

  • Sencure BV (Enschede, Netherlands): amplifier development

  • KiviCore GmbH (Dresden, Germany): hardware development of the pBCI module

Together with Zander Labs, these partners cover materials science, electronics, signal processing, and embedded systems engineering.


What this enables: implicit, real-time adaptation

According to Krol, the integrated system makes it technically feasible to use mental states as real-time, implicit control signals in mobile human–machine and human–AI systems.

Plug-and-play classifiers standardize outputs across users, while local, privacy-preserving hardware enables long, uninterrupted sessions during natural movement and real work. Instead of relying on delayed behavioral signals—such as errors or explicit feedback—systems can adapt continuously based on latent cognitive states.

Crucially, this can now happen outside tightly controlled environments and without storing raw physiological data.


What’s next for NAFAS

This milestone does not alter the NAFAS timeline. The project will continue for two more years, focusing on expanding technological capabilities and demonstrating efficacy across multiple use cases, including the integration of quantified aspects of human intelligence into novel forms of artificial intelligence.

“The next concrete technical milestone is to expand the library of robust, plug-and-play mental-state classifiers and tightly integrate them with AI-driven and human-in-the-loop use cases, demonstrating that latent cognitive variables can be reliably inferred and used as actionable system inputs in real time.”

Success, Krol emphasizes, will be measured comparatively—by showing that mental-state-aware systems enable behaviors or decisions that cannot be achieved using behavioral, environmental, or conventional physiological sensors alone. Examples include improvements in timing, safety, learning efficiency, or interaction quality in deployed scenarios.

For Zander Labs, the January 2026 demonstration marks a transition from isolated components to an integrated neurotechnology stack—one designed from the outset for mobility, generalization, and user control. Whether those decoded mental states will prove indispensable in real-world adaptive systems is the question NAFAS now sets out to answer.


Liked this article? You can support our independent journalism via our page on Buy Me a Coffee. It helps keep MoveTheNeedle.news focused on depth, not clicks.