The Sensor Fusion Guide by Ryan Sternlicht

Jan. 24, 2023.

In this review, Ryan explains in detail the kind of data that different types of biosensors collect.

About the writer

Ryan Sternlicht


Ryan Sternlicht is a San Francisco-raised educator, researcher, advisor, and maker, who has advised a number of startups in the fields of AR, VR, self driving cars, flying cars, and 3D manufacturing. He is involved with the setup and organization of numerous makerspaces and hackerspaces, including San Francisco’s Noisebridge hackerspace.

Credit: Tesfu Assefa

This guide goes with this article “The World of Sensors” by Alex Milenkovic


Whether we know it or not, humanity is quickly entering an era in which biosensing, biometrics, and the quantified self are combining into something transformative. These techniques will give us profound insight into ourselves: how we do things, how we think, how we are similar, and how we differ. Have you ever wanted to control a robot with your mind, or interact with virtual characters that respond to your emotions? What about developing a health and exercise plan specific to you, on par with what professional athletes get from their trainers? Or being able to predict the future of your own health? Biosensor fusion can enable all of that and much more.

In our World of Sensors article, we learned about biosensors, their data, and how they will shape our world.  This accompanying device chart explains in detail the kind of data that all of the different types of biosensors collect, and also provides a comprehensive “device guide” for comparison.

So let’s take a look at how each of these sensors work individually.


Our bodies produce many different signals, each containing distinct markers that different sensors can measure. These sensors give us an understanding of our body that would otherwise be hidden from us.

An exciting recent development is that many of these sensors are starting to become available to consumers and developers, bypassing the need to spend significant amounts of money, be associated with a tech company, or be part of an educational institution.

Before we can understand biosensor fusion in detail, we should describe the biosensors themselves, including the pros and cons of each. We will discuss what each one brings to the table, the drawbacks it presents, how it is best used, and how the sensors can be combined with each other.

Technology: Eye Tracking sensors

Desktop camera based eye tracking
Desktop camera based eye tracking 2
Glasses or XR device camera based eye tracking

This sensor is probably the easiest to understand as well as to implement for analysis. It has been incorporated into a number of AR and VR headsets, and is available as an accessory for desktop computers as well. Most eye trackers use infrared emitters to shine light into your eyes: the pupils absorb the light while the rest of the eye reflects it, which allows the eye tracker to accurately calculate your eye position. Some eye trackers use regular cameras to measure and compute eye position, and although rare, you can also use an array of electrodes around the eye to track where it is looking, a technique called EOG (electrooculography).

Current expectations are that Valve/OpenBCI's Galea headset will have both EOG and built-in camera-based eye tracking, which would be an easy form of sensor fusion. EOG can have much higher temporal resolution than current eye-tracking cameras, which makes the data more accurate and precise, and the EOG data can further enhance the performance of foveated rendering, which is highly time-sensitive. This is coupled with the benefits of camera-based eye tracking, which offers decent accuracy in a passive collection method. Further, camera-based eye trackers can measure your pupil size and its changes, an extremely valuable measurement that can be used to estimate your current cognitive load. Camera-based eye trackers can also sit further away from the user, such as on a desk, attached to a monitor, mounted on a pair of glasses, or around a VR headset's lenses.
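
To make the EOG-plus-camera idea concrete, here is a minimal sketch of one way such a fusion could work: integrate the fast EOG-derived gaze velocity every tick, and nudge the estimate toward the slower but drift-free camera reading whenever a new frame arrives. The function name, sample rates, and blend factor are all hypothetical, not taken from any shipping product.

```python
def fuse_gaze_stream(camera_gaze, eog_velocity, dt, blend=0.5):
    """Fuse a low-rate camera gaze angle with a high-rate EOG velocity.

    camera_gaze: per-step gaze angle in degrees, or None when no new
                 camera frame arrived on that step.
    eog_velocity: per-step EOG-derived angular velocity in deg/s.
    dt: time step in seconds.
    Returns the fused gaze estimate for every step.
    """
    estimate = camera_gaze[0] if camera_gaze[0] is not None else 0.0
    fused = []
    for cam, vel in zip(camera_gaze, eog_velocity):
        estimate += vel * dt                  # fast path: integrate EOG velocity
        if cam is not None:                   # slow path: correct drift with camera
            estimate += blend * (cam - estimate)
        fused.append(estimate)
    return fused
```

Between camera frames the estimate moves at EOG speed, which is what a latency-critical consumer like foveated rendering would want; the camera then keeps the EOG integration from drifting.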

OpenBCI / Valve Galea Headset prototype
OpenBCI / Valve Galea (Render)

Eye tracking presents us with a great example of how sensor fusion can lead to an improved and more accurate picture of a person. HP recently released the HP Omnicept G2 VR headset, offering a combination of PPG-based heart rate tracking and camera-based eye tracking. This device offers a custom metric in the SDK: a cognitive load calculation. The metric represents how much of the user's cognitive capacity is in use at a given moment, on a scale of 0 to 100. It is computed from both pupil size and heart rate variability, which together provide a more accurate measure of cognitive load than either sensor alone, and relies on a machine learning model trained to predict the current cognitive load from that data.
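
HP's actual metric comes from a trained machine learning model, but the intuition behind combining the two signals can be sketched in a few lines: pupil dilation above a resting baseline and HRV suppression below baseline both push a toy 0-100 score upward. The weights here are arbitrary illustrations, not HP's.

```python
def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences, a common
    heart-rate-variability (HRV) statistic."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def toy_cognitive_load(pupil_mm, pupil_baseline_mm, hrv_ms, hrv_baseline_ms):
    """Toy 0-100 cognitive-load score. Pupil dilation above baseline and
    HRV suppression below baseline both raise the score. Weights (2.0 and
    0.5) are made up for illustration."""
    pupil_term = max(0.0, (pupil_mm - pupil_baseline_mm) / pupil_baseline_mm)
    hrv_term = max(0.0, (hrv_baseline_ms - hrv_ms) / hrv_baseline_ms)
    return 100.0 * min(1.0, 2.0 * pupil_term + 0.5 * hrv_term)
```

At baseline the score is zero; a dilated pupil plus suppressed HRV drives it up, which is the core idea behind fusing the two sensors.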

(More on the HP research can be found here.)

HP Reverb G2 Omnicept Edition
Varjo Aero
Varjo XR3

Technology: Heart Rate sensor

SparkFun Single Lead Heart Rate Monitor – AD8232
HP Reverb G2 Omnicept Edition
SparkFun Pulse Oximeter and Heart Rate Sensor – MAX30101 & MAX32664 (Qwiic)

This is probably the sensor you are most likely to be wearing right now. If you are wearing a recent Apple Watch, Fitbit, smartwatch, or any fitness tracker that claims heart rate functionality, then it probably has one of two types of sensors: a PPG (photoplethysmogram) sensor or an EKG (electrocardiogram) sensor. EKG measures electrical impulses on the skin, usually the chest, using two electrodes. PPG measures heart pulses by shining an LED into the skin and measuring the changes in the color of the reflected light as blood flows. You can check which sensor your device has: if light comes from the sensor, it is probably PPG; if there are small metal contacts, it is probably EKG. Each can tell you slightly different things on top of your basic pulse rate.

A PPG sensor can tell you a number of interesting things. SpO2 (blood oxygen saturation) is how well your blood is taking up oxygen, which can be very useful during exercise. PI (perfusion index) is the strength of the pulse at the location of the sensor, and is what gives form to the pleth waveform. PVi (pleth variability index) measures the variability of the perfusion index, and is currently being studied as an interesting indicator of hydration. And RRp (respiration rate from pleth) measures your breathing rate based on an analysis of the pleth waveform, which works because your breathing affects the pressures your heart experiences.
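
As a rough illustration (not a clinical algorithm), two of these metrics fall out of simple arithmetic on the raw pleth waveform: PI is the pulsatile (AC) swing as a percentage of the steady (DC) level, and PVi is the spread of PI values over a window such as a breathing cycle.

```python
def perfusion_index(pleth):
    """PI: pulsatile (AC) component of the pleth waveform as a
    percentage of its steady (DC) component."""
    ac = max(pleth) - min(pleth)
    dc = sum(pleth) / len(pleth)
    return 100.0 * ac / dc

def pleth_variability_index(pi_values):
    """PVi: spread of the perfusion index over a window, as a
    percentage of its maximum."""
    return 100.0 * (max(pi_values) - min(pi_values)) / max(pi_values)
```

A real oximeter does considerably more filtering and artifact rejection, but the definitions themselves are this simple.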

Polar Verity Sense optical PPG wearable (in wristband housing)
Oura Rings

An EKG sensor can tell you a lot about the heart's movement on top of just pulse rate. In particular, it can tell you if there are any issues with your heart: if the EKG waveform shows any out-of-the-ordinary patterns, your heart is probably doing something out of the ordinary. Generally you wear an EKG sensor around your chest. The problem is that outside of true clinical conditions these changes are hard to check and verify, though sensor fusion should definitely help with that in the future. EKG may eventually become one of the sensors most likely to save people's lives.
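
To give a feel for what EKG processing looks like, here is a deliberately naive R-peak detector and heart-rate calculation: find local maxima above a threshold, enforce a short refractory period so one beat isn't counted twice, then convert the average beat spacing to BPM. Real monitors use far more robust algorithms (Pan-Tompkins and successors); this is only a sketch on clean synthetic data.

```python
def detect_r_peaks(ecg, fs, threshold=0.6):
    """Naive R-peak detector: local maxima above threshold*max(ecg),
    separated by at least a 200 ms refractory period.
    ecg: list of samples, fs: sampling rate in Hz."""
    refractory = int(0.2 * fs)
    thr = threshold * max(ecg)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] >= thr and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(peaks, fs):
    """Average heart rate from R-peak sample indices."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))
```

The RR intervals this produces are also exactly what HRV metrics (and the cognitive-load fusion discussed earlier) are built on.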

Polar H10 wearable EKG chest strap
Common EKG signal pattern

All of these measurements are extremely valuable and often form the backbone for many of the other sensors, especially since your breathing and pulse affect almost every other sensor in some way. It is often critical to have this data to correlate with other sensor data. A great example is the combination with eye tracking discussed above. For a picture of what this might look like from the user's side, think of the "bullet circle" in the anime Sword Art Online, a targeting reticle that changes based on your heart rate. Another great use is telling whether someone is excited or scared: an elevated heart rate is good at detecting both, but understanding which one it is (fear or excitement) requires other sensors.

“Bullet circle” in Sword Art Online season 2, episode 2 and episode 4. Sword Art Online is owned by A-1 Pictures, Aniplex USA, and Reki Kawahara.
OpenBCI / Valve Galea Headset prototype
OpenBCI / Valve Galea (Render)


Technology: Galvanic Skin Response (GSR) sensor (a form of EDA (electrodermal activity) sensor)

Emotibit (sensor side up)
Emotibit worn on arm + data GUI
OpenBCI / Valve Galea Headset prototype
OpenBCI / Valve Galea (Render)

Galvanic Skin Response is a well established technology, and offers unique advantages and drawbacks. In terms of negatives, many GSR devices require a 10-20 minute wait for the sensors to accumulate sweat before they can measure changes. Additionally, while GSR can measure general levels of emotional intensity, it cannot distinguish whether the experience is positive or negative. Lastly, GSR tends to be slow to change and is mainly used to show trends over time. It is possible to measure spikes that denote interesting events, but it can be difficult to isolate which stimulus is driving the overall GSR readings. In terms of advantages, GSR measures emotional state in a fairly unique way; once correlated with a particular task, it can be a reliable indicator of emotional experience.
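
The spike-versus-trend distinction is easy to illustrate. One simple (and simplified) approach flags a skin-conductance response whenever the signal rises more than some threshold above a rolling baseline of recent samples; the window length and threshold below are arbitrary illustrative values.

```python
def detect_scr_events(gsr, window=5, rise=0.05):
    """Flag sample indices where skin conductance (in microsiemens)
    rises more than `rise` above the mean of the previous `window`
    samples -- a crude skin-conductance-response detector."""
    events = []
    for i in range(window, len(gsr)):
        baseline = sum(gsr[i - window:i]) / window
        if gsr[i] - baseline > rise:
            events.append(i)
    return events
```

Isolating *which* stimulus caused a flagged spike is exactly where fusion with other sensors (eye tracking, heart rate) earns its keep.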

As GSR is mostly correlated with emotional state, it can provide another very interesting window into your current state. This might sound familiar if you like spy or detective stories, as this is one of the technologies that made the polygraph (lie detector) work the way it did. Polygraphs were usually a combination of blood pressure, pulse, respiration, and galvanic skin response sensors. Although its stated purpose as a lie detector can easily be debated, the polygraph was a perfect example of a biosensor fusion device, and thanks to it, all of the sensors it used are quite well documented, as is how they behave in relation to each other.

Technology: Skin temperature sensor

Emotibit (sensor side up)
Emotibit worn on arm + data GUI

Skin temperature is another interesting signal that a few devices (such as the EmotiBit, Whoop, and Oura Ring) have started to use. Skin temperature and body temperature do not always line up, and that can lead to some interesting insights. Body temperature can be a very good indicator of your immune activity and overall health, and it can also be affected by your emotional state.

Skin temperature can be measured directly by having the sensor touch the skin, which is the typical method used. More advanced temperature sensors use far infrared (FIR) to measure the body state remotely, increasing reliability and reducing the need for another sensor to make contact with the user.

Local skin temperature can change very fast and quite often; think about when your hands are really hot or really cold. Skin temperature has a direct relationship with your internal body temperature, your overall metabolic activity, and your body's hemodynamics, as your skin is in essence your largest organ, with massive amounts of blood flowing through it and muscle right under it. Skin temperature trends are currently being studied for many different short- and long-term health metrics related to disease, sickness, recovery, and exercise.
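
Because local skin temperature swings fast while the health-relevant trend moves slowly, a common first processing step is to split the series into a slow trend and the fast fluctuations around it. Here is one minimal way to do that with an exponential moving average; the smoothing factor is an arbitrary illustrative choice.

```python
def split_fast_slow(temps, alpha=0.1):
    """Split a temperature series into a slow trend (exponential moving
    average) and the fast local fluctuations around that trend."""
    trend, fluctuations = [], []
    ema = temps[0]
    for t in temps:
        ema = alpha * t + (1 - alpha) * ema   # slow-moving baseline
        trend.append(ema)
        fluctuations.append(t - ema)          # fast residual
    return trend, fluctuations
```

The trend component is what devices like the Oura Ring report for overnight comparisons; the residual is what you would correlate with momentary events like stress or activity.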

Oura Rings

Technology: Humidity / temperature sensor (less common)

Emotibit (sensor side up)
Emotibit worn on arm + data GUI

When talking about the EmotiBit, we should mention that it also has a humidity / temperature sensor for measuring skin perspiration, which doubles as another way to measure body temperature. Perspiration is interesting because few biosensor devices currently measure it, yet it can yield some interesting data. Especially when you are active, hot, or cold, your perspiration changes quite a bit, and measuring it can help you understand how activities change how much you perspire.

Technology: EEG (electroencephalogram) sensor

OpenBCI / Valve Galea Headset prototype
OpenBCI / Valve Galea (Render)

Now for the sensor that probably attracts the most interest, both in biosensor fusion and in discussions of biosensors generally: EEG. When people talk about controlling things with their minds, this is usually the device they have in mind.

EEG is a technology that has been around for a while and allows the measurement of brain state through electrical signals. By placing electrodes on the scalp, the system generates an electrical reading of the brain activity present at that location. By processing that reading, or interpreting it directly, users and researchers can learn about the current brain state. There are a lot of subject-specific characteristics in EEG, so it can be hard to generalize: EEG gives consistent readings for the same subject doing the same task, but significant differences between subjects doing the same task, and those differences remain consistent on repeated tests. People's brain signals are unique to them, almost like a brain fingerprint.

Most current consumer EEG devices have only a few electrodes: the Muse has 4, and the low-cost OpenBCI Ganglion also has 4. This is starting to change, as the electronics used for amplifying and recording the electrodes' output are falling in price every year; soon consumer devices will have many more electrodes, giving much more accurate data and a better understanding of how the different parts of our brain work. Soon 8 and 16 channels will be quite common, and down the line many developers hope that 64 or more will become much less expensive. This increase in channel count matters because EEG over different parts of the brain gives insight into vastly different parts of what your body is doing. For example, EEG on the back of the head yields data related to your visual cortex (what you are currently seeing), while the top of the head yields data related to your motor cortex (how your body moves).
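
Most EEG analysis starts by breaking each channel into frequency bands (alpha, beta, and so on) and measuring the power in each. The sketch below does this with a deliberately naive discrete Fourier transform so it needs no libraries; real pipelines use an FFT plus windowing and artifact rejection.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes for frequency bins inside
    [f_lo, f_hi] Hz. Naive O(n^2) DFT -- fine for short windows,
    illustration only."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power
```

Feeding it one second of a pure 10 Hz "alpha" oscillation puts essentially all the power in the 8-12 Hz band and none in the 13-30 Hz "beta" band, which is the kind of ratio meditation headbands like the Muse report.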

Although EEG is really cool, it has a big issue: on its own it is often not very useful, because it is very hard to correlate the brainwave data with anything you are doing or thinking at the time. There is too much data from our body that our brain is processing, and it ends up overlapping; many things can interfere with or affect the signal. That means sensor fusion is often required to get good data from EEG, or AI and machine learning must be applied continually to improve accuracy. When EEG is fused with almost any other sensor, you can usually find quite interesting correlations in the data.

Neurosity Crown
Muse 2
Muse S
Starstim fNIRS
Unicorn Hybrid Black
Gten 200

OpenBCI Ganglion (4 channel board)
OpenBCI Cyton (8 channel board)
OpenBCI Cyton + Daisy (16 Channel board)

Technology: fNIRS (Functional Near Infrared Spectroscopy) 

Starstim fNIRS + EEG wearable (uses artinis Brite fNIRS package)
NIRX nirsport
NIRS working principle diagram

An interesting piece of technology that is slowly becoming accessible to consumers is fNIRS (functional near-infrared spectroscopy). This tech is all about imaging blood flow. In some respects it is similar to an array of PPG sensors, but it has a lot more complexity in how it works so that it can be used as an array and create images. A very common use right now is imaging brain blood flow, though it can also be used elsewhere on the body. For sensor fusion it works quite well alongside EEG, giving a better picture of what is currently happening in the brain; blood flow through the brain has repeatedly been shown to be linked with brain activity. Sometimes, though, the blood flow is connected with things other than brain activity, such as emotional activity.
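
The physics underneath fNIRS is the modified Beer-Lambert law: a change in detected light intensity becomes a change in optical density, which (given an extinction coefficient, the source-detector separation, and a differential pathlength factor) becomes a hemoglobin concentration change. A minimal sketch, with all parameter values as hypothetical placeholders:

```python
import math

def delta_od(intensity_baseline, intensity_now):
    """Change in optical density from two detected light intensities."""
    return -math.log10(intensity_now / intensity_baseline)

def delta_concentration(d_od, epsilon, distance_cm, dpf):
    """Modified Beer-Lambert law: concentration change from an
    optical-density change.
    epsilon: extinction coefficient of the chromophore at this wavelength,
    distance_cm: source-detector separation, dpf: differential pathlength
    factor accounting for light scattering in tissue."""
    return d_od / (epsilon * distance_cm * dpf)
```

Less light reaching the detector means more absorption, i.e. more oxygenated hemoglobin in the path, which is the activity signal an fNIRS array turns into images.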

A good low-cost version of one of these devices is the HEGduino, though it cannot really create images, as it is only a single sensor (basically an advanced PPG device). If you have $100k burning a hole in your wallet, you can buy 50 sensors to fill up a whole Kernel Flow device. And if you have university backing, there are many more options, like the Kernel Flux.

Kernel Flow

Technology: ultrasound imaging sensor

Another type of imaging that is getting easier to access is ultrasound imaging. There has been a lot of recent research into using ultrasound in combination with other biosensors, especially for brain imaging as TCD (transcranial Doppler ultrasound). It gives a much deeper image than fNIRS, but the technology is a bit less portable at the moment and often requires a large amount of area for the sensor, area that could instead fit a large number of other sensors.

Transcranial Doppler Holter

Technology: EIT (electrical impedance tomography) sensor

Spectra EIT kit

Out of all the imaging technologies, though, probably my favorite up-and-coming one is EIT. Although in its early stages, this real-time medical imaging technology has a lot of potential, especially as it has fewer downsides than MRI (bad temporal resolution, big device), CT scans (radiation), and ultrasound (currently hard to wear). It also has great temporal resolution; the problem is that it currently lacks spatial resolution. A really cool use case is cross-sectional analysis of a part of your body while you are doing an activity. From a cross section of your wrist, you could figure out the position of your fingers quite accurately. This might even offer better resolution and accuracy than EMG or myoelectric sensors, which are discussed later in this article.

Technology: CT scans and MRI 

Now for something a bit more absurd and excessive: CT scans and MRI can be very helpful for sensor fusion even if they can't be done in real time. Each person's body is different enough that bone and body structure can have a noticeable impact on the accuracy of different sensors, so having an MRI or CT scan lets you tune sensors to your particular body. This is extremely helpful for any brain-related sensor (fNIRS, EEG, etc.), as the shape of your skull and your brain's placement inside it considerably affect the signal the sensor receives. The hardest part about getting an MRI, other than the cost itself, is that for use with EEG you actually want a standing MRI scan, so that your brain sits in your skull the way it does in active daily life; your brain is in a different spot in your skull when you are lying down versus sitting or standing.

Technology: environmental sensors

Emotibit (sensor side up)
Emotibit worn on arm + data GUI

Another set of sensors only recently adopted into biosensor fusion are environmental sensors, which track the state of the environment around the user. These are useful for understanding how our body reacts to a changing environment. Although they are not in many devices yet, they are some of the easiest to add yourself, because many of them are quite cheap and easy to get hold of. You might even be able to use the temperature and humidity sensors on the EmotiBit this way, if you have a spare one and flip it around so the sensors are not against the skin.

Technology: Full body tracking

UltraLeap 170 hand tracker
Perception Neuron Studio

A somewhat common biosensor that people might have used is body motion tracking. Although not always considered a biosensor itself, it tracks your physical body in a way that other sensors can't, and it provides very important information when connected to other biosensors, since body movement can affect many aspects of your body, including heart rate, brain activity, metabolism, and skin temperature. If you have ever used a recent-generation VR headset, then you have experienced at least a bit of this through controller and headset tracking, which tracks your hand and head positions. If you have used Oculus or Windows Mixed Reality headsets, you have used camera + IMU based tracking; if you have used a SteamVR-tracked headset, you have used one tracked by swept-laser base stations + IMU.

In the near future, full body tracking will become much more common outside of the motion capture used in movies. A big part of this has been improvement in two things: camera-based movement analysis and IMU drift. Not long ago, identifying how a body moves in three dimensions from 2D camera images was considered really difficult, but thanks to AI and machine learning we can now quite accurately recover the position and movement of a person's body from a single camera or very few cameras. As for IMU drift: until about 5 years ago, IMU-based motion capture suits were considered unreliable, because after a very short period the sensor readings would drift out of alignment with your body's actual position, and with 20 or 30 IMU sensors in a suit, the drift forced frequent recalibration. Each approach has very good use cases: cameras do not need to be worn on the body, while IMU tracking suits can be used out in the everyday world.
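
The standard textbook remedy for IMU drift is a complementary filter: trust the gyroscope (smooth but drifting) for fast changes, and continuously pull the estimate toward the accelerometer's tilt angle (noisy but drift-free). A one-axis sketch, with the blend factor as an illustrative value:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro angular rates (deg/s, smooth but drifting) with
    accelerometer tilt angles (deg, noisy but drift-free) into one
    orientation estimate per step."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # mostly trust the integrated gyro, lightly correct toward accel
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        estimates.append(angle)
    return estimates
```

With a constant gyro bias, pure integration drifts without bound, while the filtered estimate settles at a small bounded offset, which is why IMU suits became usable once corrections like this (and fancier Kalman variants) were adopted.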

Perception Neuron Pro
OWO Suit
TeslaSuit Gloves
Rokoko SmartSuit Pro 2
SparkFun Qwiic Flex Glove Controller
SparkFun VR IMU Breakout – BNO080 (Qwiic)

Technology: EMG (Electromyography) and Myoelectric sensors

MyoWare Muscle Sensor

When talking about body movement, it is also important to consider why exactly our body is moving in the first place. A great sensor for that is the contact-based myoelectric sensor, which uses a pair of electrodes to measure the electrical impulses that trigger your muscles to contract.

There are usually three different focuses for detection with EMG. First is detecting the motor neurons firing, which send very quick spike signals to the muscle. This can be quite difficult depending on the size of the muscle, and some muscles are hard to reach without needle-based EMG sensors. It can be used to predict that you are about to move a part of your body; an example of a device built around this quick spike is the Brink Bionics Impulse neurocontroller. Second is detecting the muscle twitch that results from the motor neurons firing. Good examples of this in use are the Myo armband, and the armband Meta (Facebook) has recently shown, based on the hardware it got when acquiring CTRL-labs. Third is monitoring tetanic muscle activation from continued motor neuron firing. This is great for estimating how much power or force you are putting into a movement during full body tracking or sports, or how fatigued your muscles are after a workout.
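
For the third focus, sustained activation, the usual first processing step is a moving RMS envelope: rectify-and-smooth the raw EMG so the noisy burst becomes a smooth activation level you can map to force. A minimal sketch (window length is an illustrative choice):

```python
def emg_envelope(emg, window=4):
    """Moving RMS envelope of a raw EMG signal: the standard first step
    for turning a noisy burst into a smooth muscle-activation level."""
    envelope = []
    for i in range(len(emg)):
        segment = emg[max(0, i - window + 1): i + 1]
        envelope.append((sum(x * x for x in segment) / len(segment)) ** 0.5)
    return envelope
```

The envelope is near zero at rest and rises during a burst of activity, which is the signal you would feed into force estimation or fatigue tracking.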

Another interesting use case is mapping facial movement by placing the sensors on the face; the Galea will have EMG, though only for the face.

Brink Bionics Impulse neurocontroller (Render)
OpenBCI / Valve Galea Headset prototype

OpenBCI / Valve Galea (Render)

As we can see, the ability to measure people has grown to a capacity that was never possible before. We can passively observe what someone finds interesting in a given scene or encounter, measure the brain state that experience generates, and record the intensity and enjoyability of the experience. This will allow quantification of many things that are currently invisible to us. Imagine a couple recording their biometric responses to various TV shows, so that they can make better decisions on what to watch together based on how enjoyable it will be for both parties. Lecturers will be able to monitor student volunteers during presentations to see how it makes the students feel and tune their material accordingly. Car manufacturers will be able to passively observe driver fatigue and attention and design safer cars that notify the driver when their attention is lacking or needed to deal with an urgent issue. Companies will be able to design easier to use and understand packaging based on consumer response. A doctor could observe a patient’s reactions to their explanation of some diagnosis, note a place where confusion occurs, and revisit that aspect. All of these will finally allow us to get past the current difficulties presented by placebo, self report, and an inability to quantify and compare people’s experiences.

On a fundamental level, something worth noting is that many people don’t really trust or believe that people are having the experience they claim to have, as much of the time we don’t have a way to quantify or verify whether this is the case. If someone says they are scared, or angry, or happy, or frustrated, or confused – there’s often an element of doubt present as the only information available is what the person claims is happening to them. If someone says they enjoyed a movie, we might assume they are praising the movie for some other reason other than liking the film. If someone says they enjoyed a meal, perhaps they enjoyed the company and social setting but didn’t enjoy the food one bit. With sensor data, we’d be confident in what they were saying – or they could volunteer to share the data with others – which has profound implications for our society and the way we see each other. 

These sensors will provide a wealth of data about us, our lives, and our patterns. We will need strong laws and motivated legislatures to ensure the protections necessary to make these sensors safe and ubiquitous. 

