Building a Real-Time Neurofeedback Video Player in Python with OpenBCI

GitHub link for this project:
https://github.com/jackparsons93/OpenBCI_Projects/blob/main/gemini_video.py

Lately I’ve been really interested in building neurofeedback programs in Python that do more than just show raw EEG traces on a screen. I wanted something interactive, something where brain activity could directly affect an experience in real time instead of just being measured and logged.

That led me to build a neurofeedback video player using an OpenBCI headset, BrainFlow, Pygame, OpenCV, and VLC.

The basic idea is simple: I stream EEG data from OpenBCI into Python, calculate a live neurofeedback metric from the incoming brainwave data, and then use that metric to control the quality of a video while it is playing. When the signal moves in the direction I want, the video becomes clearer and more rewarding to watch. When the metric drops, the video quality degrades. It turns passive video watching into a closed-loop neurofeedback experience.

Why I built it

A lot of EEG and neurofeedback software feels either too clinical or too limited. Sometimes it is great for data collection, but not very engaging. Other times it looks nice but doesn’t give much flexibility if you want to experiment with your own training logic.

I wanted to make something that felt more like a real interactive system. I didn’t want to just stare at a bar graph that went up and down. I wanted my brain activity to actually control part of the environment.

Using video quality as the reward was appealing to me because it is immediate and intuitive. If the video gets sharper, I know I’m doing better. If it gets blocky and degraded, I know my neurofeedback metric has drifted away from the target.

Choosing a video and starting the session

The program starts by letting me choose a local video file. I like this because it makes the program flexible. I can swap in different videos without changing the rest of the software.

def choose_video_file() -> Optional[str]:
    try:
        import tkinter as tk
        from tkinter import filedialog

        root = tk.Tk()
        root.withdraw()
        root.update()
        path = filedialog.askopenfilename(
            title="Choose a video file",
            filetypes=[
                ("Video files", "*.mp4 *.avi *.mov *.mkv *.m4v *.wmv"),
                ("All files", "*.*"),
            ],
        )
        root.destroy()
        return path if path else None
    except Exception:
        return None

That is a small detail, but it makes the whole project feel more like an actual application instead of a hard-coded experiment.

Streaming EEG from OpenBCI with BrainFlow

The heart of the program is the OpenBCI client. It connects to the EEG board, starts the data stream, and pushes power-band information into a queue that the rest of the app can use.

class OpenBCIClient:
    def __init__(self, out_queue: queue.Queue, window_seconds: float = 1.0, step_seconds: float = 0.10):
        self.out_queue = out_queue
        self.real_serial_port = os.getenv("OPENBCI_SERIAL_PORT", "COM5")
        self.real_board_id = int(os.getenv("OPENBCI_BOARD_ID", str(BoardIds.CYTON_BOARD.value)))
        self.synthetic_board_id = int(BoardIds.SYNTHETIC_BOARD.value)

I also built in a synthetic fallback mode. That was important to me because I wanted to be able to test the whole neurofeedback pipeline even when the real headset was not connected.

def _start_synthetic(self):
    self.out_queue.put(("status", "Starting BrainFlow synthetic board"))
    board = self._make_board(self.synthetic_board_id, None)
    board.prepare_session()
    board.start_stream()

That makes development much easier. I can still test the UI, threshold logic, reward logic, and video playback without being blocked by hardware setup every single time.
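The fallback logic itself is simple to sketch. Here is a minimal, self-contained version of the idea; the function names are illustrative stand-ins for the script's own startup methods, not its actual API:

```python
def start_with_fallback(try_start_real, start_synthetic):
    """Try the real headset first; fall back to the synthetic board on failure.

    Both arguments are callables. Returns (board, mode) where mode records
    which path succeeded.
    """
    try:
        return try_start_real(), "real"
    except Exception:
        # Hardware missing, port busy, or driver error: use synthetic data
        return start_synthetic(), "synthetic"
```

The key design point is that the rest of the pipeline never needs to know which board is feeding it; only the `mode` string is carried along for display and artifact handling.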

Using named EEG sites instead of raw channel numbers

One thing I like in this program is that the channel map uses actual scalp-site names. That makes the rest of the code much easier to read.

self.channel_map = {
    "F3": 0,
    "Fz": 1,
    "F4": 2,
    "C3": 3,
    "Cz": 4,
    "C4": 5,
    "Pz": 6,
    "Oz": 7,
}

Instead of constantly thinking in terms of channel index 4 or channel index 1, I can reason about the code using real sites like Cz and F3.
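To show why this helps, here is a tiny illustrative helper (not from the script itself) that pulls a named site's samples out of a channels-by-samples array, which is the shape BrainFlow hands back:

```python
import numpy as np

# Same mapping as the script: scalp-site name -> row index in the EEG array
channel_map = {"F3": 0, "Fz": 1, "F4": 2, "C3": 3,
               "Cz": 4, "C4": 5, "Pz": 6, "Oz": 7}

def site_samples(eeg: np.ndarray, site: str) -> np.ndarray:
    """Return the row of a (channels x samples) array for a named scalp site."""
    return eeg[channel_map[site]]
```

Code like `site_samples(eeg, "Cz")` reads as an intention, while `eeg[4]` reads as a magic number.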

Computing EEG band powers in real time

Once the signal is streaming, the program computes band powers from short rolling windows of EEG. This is the signal-processing core of the whole application.

from scipy.signal import butter, filtfilt, welch

# Band-pass 1-45 Hz, then estimate the power spectral density
b, a = butter(4, [1.0 / (fs / 2.0), 45.0 / (fs / 2.0)], btype="band")
x = filtfilt(b, a, x)
freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs))

After that, the code integrates power inside different bands:

return {
    "theta": OpenBCIClient._bandpower(freqs, psd, 4.0, 8.0),
    "alpha": OpenBCIClient._bandpower(freqs, psd, 8.0, 12.0),
    "smr": OpenBCIClient._bandpower(freqs, psd, 12.0, 15.0),
    "betaL": OpenBCIClient._bandpower(freqs, psd, 15.0, 20.0),
    "betaH": OpenBCIClient._bandpower(freqs, psd, 20.0, 30.0),
    "gamma": OpenBCIClient._bandpower(freqs, psd, 30.0, 45.0),
}

This is where the raw EEG starts becoming something useful for neurofeedback instead of just a noisy signal trace.
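The `_bandpower` helper itself is not shown above, but a plausible implementation (this is my sketch, not necessarily the script's exact code) integrates the PSD between the band edges with the trapezoidal rule:

```python
import numpy as np

def bandpower(freqs: np.ndarray, psd: np.ndarray, f_lo: float, f_hi: float) -> float:
    """Integrate the PSD over [f_lo, f_hi) to get total power in the band."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    if mask.sum() < 2:
        return 0.0  # not enough frequency bins to integrate
    f = freqs[mask]
    p = psd[mask]
    # Trapezoidal rule, written out explicitly for portability
    return float(np.sum((p[1:] + p[:-1]) * np.diff(f)) / 2.0)
```

Because `welch` returns power spectral *density*, integrating over frequency gives power in the band, which is what the ratios downstream actually need.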

Turning EEG into a neurofeedback metric

I built the neurofeedback engine so I can switch between a few different modes instead of locking everything into one interpretation of “better.”

self.site = "Cz"
self.metric_mode = "theta_beta"
self.board_mode = "unknown"

The program supports three main modes:

  • theta / beta ratio
  • SMR ratio
  • focus blend

The actual formulas are built from the band powers for the selected site.

tbr = theta / (beta_for_tbr + eps)
smr_ratio = smr / (theta + self.beta_h_smr_penalty_weight * beta_h_capped + eps)
focus_blend = (smr + 0.35 * beta_l) / (
    theta + 0.6 * alpha + 0.5 * gamma + 0.25 * beta_h_capped + eps
)

I really like this part because it turns the program into a sandbox for trying different feedback ideas instead of a one-metric toy.
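Putting the three modes together, a simplified dispatcher might look like this. To be clear about my simplifications: the beta definition used for the theta/beta ratio, the 0.5 SMR penalty weight, and the omission of betaH capping are all assumptions here, not the script's exact values:

```python
def compute_metric(bands: dict, mode: str, eps: float = 1e-9) -> float:
    """Combine band powers into a single neurofeedback metric.

    Weights mirror the formulas above; eps guards against division by zero.
    """
    theta, alpha = bands["theta"], bands["alpha"]
    smr, beta_l = bands["smr"], bands["betaL"]
    beta_h, gamma = bands["betaH"], bands["gamma"]
    if mode == "theta_beta":
        # Lower is better: theta over total beta (assumed betaL + betaH)
        return theta / (beta_l + beta_h + eps)
    if mode == "smr":
        # SMR rewarded, theta and high beta penalized (assumed weight 0.5)
        return smr / (theta + 0.5 * beta_h + eps)
    # Default: the focus blend
    return (smr + 0.35 * beta_l) / (theta + 0.6 * alpha + 0.5 * gamma + 0.25 * beta_h + eps)
```

Having all three behind one function makes swapping training targets a one-string change.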

The neurofeedback loop

The part I find most exciting is the actual closed-loop behavior. The program is not just measuring EEG in the background. It is constantly asking whether the current brain state is moving in the desired direction.

If it is, the reward rises. If it is not, the reward drops. That reward then controls the visible quality of the video.

I also added smoothing so the experience does not feel jittery or random.

if not self.metric_initialized:
    self.metric_smoothed = float(raw_metric)
    self.metric_initialized = True
else:
    a = self.metric_ema_alpha
    self.metric_smoothed = (1.0 - a) * self.metric_smoothed + a * float(raw_metric)

That makes a big difference. Raw EEG features can bounce around a lot, so smoothing helps the reward feel intentional instead of chaotic.
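The same update, pulled out as a standalone function, makes it easy to see how the smoothing factor trades responsiveness for stability:

```python
def ema(values, alpha):
    """Exponential moving average matching the update above.

    Small alpha = heavy smoothing (slow, stable); large alpha = fast, twitchy.
    """
    out = []
    s = None
    for v in values:
        s = v if s is None else (1.0 - alpha) * s + alpha * v
        out.append(s)
    return out
```

With `alpha = 0.5`, a sudden jump from 1 to 5 only moves the smoothed metric halfway on the first step, which is exactly the "intentional, not chaotic" feel I wanted.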

Why adaptive thresholding matters

One of the main design decisions I cared about was adaptive thresholding. A static threshold often feels wrong pretty quickly. If it is too strict, the user rarely gets rewarded. If it is too loose, the reward becomes meaningless.

So I built the system to keep recalculating the threshold from recent performance data.

recent_target = self._threshold_target_from_recent_window(higher_is_better)
if recent_target is None:
    return
self.threshold = max(self.min_metric_for_threshold, recent_target)

That helps keep the challenge in a useful range instead of turning the session into either frustration or fake success.
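The recent-window target itself can be something as simple as a percentile of recent metric values. This sketch is my guess at the idea; the 60th percentile and the 20-sample minimum are illustrative defaults, not the script's actual numbers:

```python
import numpy as np

def threshold_from_recent(metrics, higher_is_better=True, pct=60.0, min_samples=20):
    """Recompute the reward threshold from a percentile of recent metrics.

    Returns None if there isn't enough history yet, mirroring the guard above.
    """
    if len(metrics) < min_samples:
        return None
    # If lower values are better, flip to the complementary percentile
    p = pct if higher_is_better else 100.0 - pct
    return float(np.percentile(metrics, p))
```

A percentile target has a nice property: the user succeeds on roughly a fixed fraction of recent windows, regardless of how the absolute metric drifts over a session.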

Signal quality and artifact awareness

One thing I’ve learned working with EEG is that the signal is never as clean as I wish it were. Eye movements, jaw tension, electrode contact issues, and general movement can all distort the data. So I wanted this project to have at least some awareness of artifacts and bad signal quality.

if self.board_mode == "synthetic":
    return False, ""
if site_vals["gamma"] > 10000.0:
    return True, "Extreme gamma / EMG blast"
if site_vals["beta_h"] > 10000.0:
    return True, "Extreme betaH / clipping"

It is not a perfect artifact rejection system, but it is an important step. A serious neurofeedback prototype should not blindly reward every fluctuation as if it were meaningful brain activity.
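A natural next step, which is not in the script as shown, would be a peak-to-peak amplitude check on the raw time-domain window before it ever reaches the band-power stage. The 200 µV limit here is an illustrative default, not a validated clinical value:

```python
import numpy as np

def looks_like_artifact(window_uv: np.ndarray, ptp_limit_uv: float = 200.0):
    """Flag a window whose peak-to-peak amplitude exceeds a plausible EEG range.

    Scalp EEG is typically tens of microvolts; big excursions usually mean
    movement, EMG, or electrode pops rather than brain activity.
    """
    ptp = float(np.ptp(window_uv))
    if ptp > ptp_limit_uv:
        return True, f"Peak-to-peak {ptp:.0f} uV exceeds limit"
    return False, ""
```

During flagged windows, the right behavior is usually to freeze the reward rather than punish the user for something that was never brain activity in the first place.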

Why I used OpenCV and VLC together

On the media side, I ended up using OpenCV for video frames and VLC for audio. That combination worked well because OpenCV gives me frame-level access to the image, while VLC handles audio playback much better than trying to do everything through one graphics stack.

self.cap = cv2.VideoCapture(video_path)
self.vlc_instance = vlc.Instance(
    "--no-video",
    "--aout=directsound",
    "--quiet",
)

The program uses VLC as the timing reference and OpenCV for fetching the matching frame, which keeps the audio and video synchronized while still letting me manipulate the visual quality.
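The core of that sync is just mapping the VLC audio clock to a frame index. A minimal version of the mapping (my sketch, assuming a constant frame rate) looks like:

```python
def frame_index_for_time(vlc_ms: int, fps: float) -> int:
    """Map the VLC audio clock (milliseconds) to the OpenCV frame to display."""
    return max(0, int(round(vlc_ms / 1000.0 * fps)))
```

In the playback loop, this target index gets compared against the capture's current position; the player grabs and discards frames to catch up when video falls behind the audio clock, which is cheaper and less jarring than seeking.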

Turning reward into visible video quality

This is one of my favorite parts of the whole project. Instead of giving feedback as just a number or bar graph, the program changes the quality of the video itself.

@staticmethod
def degrade_quality(frame_bgr: np.ndarray, quality_scale: float) -> np.ndarray:
    q = float(max(0.10, min(1.0, quality_scale)))
    if q >= 0.995:
        return frame_bgr
    h, w = frame_bgr.shape[:2]
    sw = max(2, int(w * q))
    sh = max(2, int(h * q))
    small = cv2.resize(frame_bgr, (sw, sh), interpolation=cv2.INTER_AREA)
    restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return restored

When the reward is high, the video looks clearer. When the reward falls, the video becomes more degraded and blocky. That makes the neurofeedback feel immediate and intuitive.

The interface and live feedback display

I also wanted the program to be usable in real time, not just technically functional. So I built a live overlay in Pygame that shows the current metric, smoothed metric, threshold, reward, video quality, artifact state, and band values.

lines = [
    f"File: {os.path.basename(video_path)}",
    f"Board mode: {nf_snap.get('board_mode', 'unknown')}",
    f"Site: {nf_snap['site']}",
    f"Metric: {nf_snap['metric_name']}",
    f"Metric value: {nf_snap['metric_value']:.4f}",
    f"Threshold: {nf_snap['threshold']:.4f}",
    f"Reward: {nf_snap['reward']:.3f}",
    f"Video quality: {100.0 * nf_snap['quality_scale']:.1f}%",
]

I like this because it gives me both the subjective experience of the video feedback and the objective numbers behind it at the same time.

Logging everything for later analysis

Another part I really wanted was logging. If I run a neurofeedback session and do not save what happened, I lose a lot of what makes the experiment useful.

The script has a CSV logger that saves metrics, thresholds, reward, quality, artifact state, band values, and session information over time.

class NFCSVLogger:
    FIELDNAMES = [
        "row_idx",
        "timestamp_iso",
        "session_time_s",
        "board_mode",
        "site",
        "metric_mode",
        "metric_name",
        "metric_formula",
        "phase",
        "metric_value",
        "metric_raw",
        "metric_smoothed",
        "threshold",
        "reward",
        "quality_scale",
    ]

That turns the program from a live demo into something I can actually study afterward. I can ask better questions later about whether the threshold adapted well, whether reward was stable, and whether certain sites or metrics worked better than others.
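Under the hood this kind of logging is a natural fit for `csv.DictWriter`. Here is a compact sketch with a deliberately shortened field list; the real logger writes the full set of columns shown above:

```python
import csv
import io

# Shortened field list for illustration only
FIELDNAMES = ["row_idx", "timestamp_iso", "metric_value", "threshold", "reward"]

def write_rows(f, rows):
    """Write session snapshots with a header row, one dict per sample."""
    w = csv.DictWriter(f, fieldnames=FIELDNAMES)
    w.writeheader()
    for i, row in enumerate(rows):
        w.writerow({"row_idx": i, **row})
```

Because every row carries the metric, the threshold, and the reward together, a later analysis can replay the whole adaptive loop offline from the CSV alone.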

What I like most about this project

What I like most is that it feels like a real system. It is not just a graphing program. It is not just a video player. It is not just an EEG script. It is all of those things connected together in a loop:

brain activity → signal processing → neurofeedback metric → reward logic → video quality → user experience

That loop is what makes neurofeedback interesting to me. The brain is influencing the software, and the software is shaping the experience in return.

Final thoughts

This program takes live data from an OpenBCI headset, processes it in Python, turns it into a neurofeedback score, and feeds that score back into a video experience in real time. That is exactly the kind of project I like: technical, experimental, imperfect, but very alive.

It also reminds me why I like Python so much for this kind of work. With the right libraries, I can combine hardware input, digital signal processing, visualization, media playback, and data logging in one environment. That makes it possible to move quickly from idea to experiment.

For me, this project is not just about building a neurofeedback player. It is about building tools that let me explore how brain activity can interact with software in a direct and meaningful way.

And honestly, that is where this starts to get really exciting.
