Today we are releasing two separate but related technologies: the SciFi wireless headstage and the Synapse protocol, which is spoken by all of Science’s neural interface devices.
We make a wide range of neural interface devices, from in vitro multielectrode arrays to laser projection glasses for retinal stimulation. To support rapid experimentation and get the most out of these devices, they all need to work with a single, scalable toolchain, one that represents substantial development work in its own right.
To make this possible, we’ve developed Synapse, a standard API for communicating with these devices. Synapse is expressive enough to support a huge range of use cases, from closed-loop neuromodulation to retinal visual prostheses to motor or speech decoding.
At a high level, Synapse Devices contain Peripherals and are flexibly configured with a Signal Chain made up of Nodes. Peripherals are hardware like Nixel or Pixel chips, which can be used by Nodes such as Electrical Broadband, Spectral Filter and Stream Out to process and transmit the data.
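To make this concrete, here is a minimal sketch of how a Device's Signal Chain might be modeled: broadband data read from a peripheral, filtered, and streamed out. The class names, fields, and configuration keys are illustrative assumptions, not the actual Synapse API.

```python
# Hypothetical model of a Synapse signal chain (names and config keys are
# illustrative assumptions, not the real Synapse schema).
from dataclasses import dataclass, field

@dataclass
class Node:
    type: str            # e.g. "ElectricalBroadband", "SpectralFilter", "StreamOut"
    id: int
    config: dict = field(default_factory=dict)

@dataclass
class SignalChain:
    nodes: list
    connections: list    # (source_node_id, destination_node_id) pairs

# Broadband acquisition from a Nixel peripheral, band-pass filtered, then
# streamed out to the network.
chain = SignalChain(
    nodes=[
        Node("ElectricalBroadband", 1, {"peripheral_id": 0, "sample_rate_hz": 30000}),
        Node("SpectralFilter", 2, {"low_cutoff_hz": 300, "high_cutoff_hz": 6000}),
        Node("StreamOut", 3, {"multicast_group": "239.0.0.1"}),
    ],
    connections=[(1, 2), (2, 3)],
)
```

The key design point is that Nodes are composable: the same StreamOut node can sit downstream of any producer, and the Signal Chain is just a graph of node IDs.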
Synapse has three main elements:
- Control Plane: gRPC-based API for managing and configuring Synapse devices
- Streaming Data: low-latency communication using UDP multicast for online data with many subscribers
- Autodiscovery: uses UDP multicast to automatically find Synapse devices on the network without manually keeping track of IP addresses and other metadata
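The autodiscovery pattern is simple enough to sketch with the standard library: send a query to a well-known multicast group and collect replies until a timeout. The group address, port, and message format below are placeholder assumptions, not the actual Synapse discovery specification.

```python
# Illustrative multicast autodiscovery sketch. The group, port, and payload
# are assumptions for demonstration, not the real Synapse discovery protocol.
import socket

DISCOVERY_GROUP = "224.0.0.251"   # hypothetical multicast group
DISCOVERY_PORT = 6470             # hypothetical port

def make_discovery_socket(timeout_s: float = 1.0) -> socket.socket:
    """UDP socket configured to send a discovery query to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
    sock.settimeout(timeout_s)
    return sock

def discover(sock: socket.socket) -> list:
    """Broadcast a query, then collect (reply, address) pairs until timeout."""
    sock.sendto(b"DISCOVER", (DISCOVERY_GROUP, DISCOVERY_PORT))
    found = []
    try:
        while True:
            data, (addr, _port) = sock.recvfrom(1024)
            found.append((data.decode(), addr))   # device replies with its metadata
    except socket.timeout:
        pass
    return found
```

Because discovery is connectionless, a client needs no prior configuration: any device listening on the group can answer with its identity.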
Synapse devices are typically what we call headstages: the interface between the probe and the network. Since probe bandwidth is typically vastly higher than network bandwidth, Synapse supports a configurable, limited amount of compute at the edge, or, as an electrophysiologist might say, “on head.”
There are eight node types in the official Synapse 1.0 capability set: four transducers, two compute stages, and two data-flow nodes for input and output.
When a Synapse device is configured with a StreamOut node, the associated data stream is sent to a multicast group to be picked up by clients, which can enumerate these streams via the Info request.
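On the client side, subscribing to a StreamOut stream amounts to joining its multicast group. The sketch below uses the standard socket API; the group address and port would come from the Info request in practice, and the values here are purely illustrative.

```python
# Hypothetical client side of a StreamOut node: join the stream's multicast
# group and start receiving packets. Group/port values are illustrative.
import socket

def join_stream(group: str, port: int, iface: str = "0.0.0.0") -> socket.socket:
    """Return a UDP socket subscribed to a stream's multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # many subscribers per host
    sock.bind(("", port))
    # struct ip_mreq: 4-byte group address followed by 4-byte interface address
    mreq = socket.inet_aton(group) + socket.inet_aton(iface)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

This is what makes the many-subscriber model cheap: each new client joins the same group, and the device still sends each packet exactly once.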
Streaming neural data is sent using our Neural Data Transport Protocol (NDTP), which uses variable-length encoding to maximize lossless throughput over the available bandwidth.
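To illustrate the general idea behind variable-length encoding (though not NDTP's actual wire format, which is not shown here): consecutive neural samples tend to differ by small amounts, so delta-encoding followed by a varint packing spends fewer bytes on small values.

```python
# Sketch of varint delta encoding, the general technique behind lossless
# bandwidth reduction. This is NOT the NDTP wire format, just the idea.

def zigzag(n: int) -> int:
    """Map signed ints to unsigned so small-magnitude deltas stay small."""
    return (n << 1) ^ (n >> 63)

def encode_varint(n: int) -> bytes:
    """Pack an unsigned int into 7-bit groups, high bit = continuation."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_samples(samples: list) -> bytes:
    """Delta-encode consecutive samples, then varint-pack each delta."""
    out = bytearray()
    prev = 0
    for s in samples:
        out += encode_varint(zigzag(s - prev))
        prev = s
    return bytes(out)
```

For example, three nearby samples such as 1000, 1001, 999 encode to 4 bytes instead of the 6 bytes that fixed 16-bit samples would take, and the savings grow with slowly-varying signals.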
By building these abstractions over the neural interface hardware, our goal is to enable end users to think more at the level of their neuroscience and less in terms of implementation details like device communication and file formats. Idiomatic language clients for Python, TypeScript, and C++ make building on these technologies straightforward from an application developer’s perspective, and Synapse-compatible software like Nexus is designed to make many major workflows simply point-and-click.
The neural engineering technology stack is evolving fast, and our mission at Science is both to expand the frontiers of what’s possible and to ensure these technologies are widely available. Despite arguably being over fifty years old now, this field remains very young, and we are confident that what unfolds over the coming decade will be surprising. We couldn’t be more excited about these technologies and their potential to translate into patient impact and important new discoveries.
To learn more, see: