GitHub link for this project:
https://github.com/jackparsons93/OpenBCI_Projects/blob/main/gonogo.py
Lately I’ve been interested in combining EEG, neurofeedback, and simple cognitive tasks into one Python program instead of treating them as separate projects. I wanted something interactive that would let me measure attention, inhibitory control, reaction time, and EEG-derived features all in one session.
That led me to build a go / no-go task in Python using OpenBCI, BrainFlow, Pygame, NumPy, and SciPy.
The program presents a simple task where I sometimes need to click and sometimes need to withhold a click. At the same time, it streams EEG from an OpenBCI Cyton board, computes band power features in real time, turns those features into neurofeedback metrics, and logs everything out to CSV for later analysis.
What the program does
At a high level, this script combines four things into one system:
- real-time EEG streaming from OpenBCI
- a go / no-go behavioral task
- a neurofeedback engine that computes training metrics
- trial-by-trial logging and simple analysis
That combination is what makes it interesting to me. It is not just a reaction-time task, and it is not just a neurofeedback app. It is both at once.
The basic structure
The program is organized into a few main parts:
- OpenBCIClient handles EEG acquisition and band power extraction
- NFEngine computes neurofeedback metrics like theta/beta ratio and SMR ratio
- GoNoGoTask runs the actual task logic
- TrialLogger saves each trial and can do some quick analysis later
- main() ties the whole thing together in a Pygame window
A simple example of the task engine looks like this:
class GoNoGoTask:
    def __init__(self):
        self.reset()
That class is what controls the current trial state, timing, outcomes, and counters like hits, misses, false alarms, and correct rejects.
Streaming EEG from OpenBCI
The script uses BrainFlow to connect to an OpenBCI Cyton board. It reads the serial port from an environment variable, creates a board session, starts streaming, and then repeatedly pulls recent EEG data into a rolling analysis window.
A small part of that setup looks like this:
self.serial_port = os.getenv("OPENBCI_SERIAL_PORT", "COM3")
self.board_id = int(os.getenv("OPENBCI_BOARD_ID", str(BoardIds.CYTON_BOARD.value)))
I like this because it makes the script flexible. I do not have to hard-code one exact setup forever. I can change the board port through the environment if I need to.
The program also maps the incoming channels to named scalp locations:
self.channel_map = {
    "F3": 0, "Fz": 1, "F4": 2,
    "C3": 3, "Cz": 4, "C4": 5,
    "Pz": 6, "Oz": 7,
}
That makes the rest of the code much easier to read because I can work with meaningful names like F3 and Cz instead of just raw channel indices.
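As a tiny illustration of why the named map helps, here is how I would pull one site's signal out of a channels-by-samples array. The map matches the snippet above; the `site_signal` helper is my own illustrative wrapper, not a function from the script:

```python
import numpy as np

channel_map = {
    "F3": 0, "Fz": 1, "F4": 2,
    "C3": 3, "Cz": 4, "C4": 5,
    "Pz": 6, "Oz": 7,
}

def site_signal(eeg, site):
    # eeg has shape (n_channels, n_samples); return the row for a named scalp site
    return eeg[channel_map[site]]
```

Downstream code can then ask for `site_signal(eeg, "Cz")` instead of remembering that Cz happens to be row 4.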
Computing EEG band powers
One of the most important parts of the program is the EEG feature extraction. The code takes a recent chunk of signal, removes the mean, applies a bandpass filter, estimates the power spectral density with Welch’s method, and then integrates power across different frequency bands.
The bandpass part looks like this:
b, a = butter(4, [1.0 / (fs / 2.0), 45.0 / (fs / 2.0)], btype="band")
x = filtfilt(b, a, x)
Then it computes spectral power and divides it into bands such as theta, alpha, SMR, low beta, high beta, and gamma.
The code returns them in a simple dictionary:
return {
    "theta": OpenBCIClient._bandpower(freqs, psd, 4.0, 8.0),
    "alpha": OpenBCIClient._bandpower(freqs, psd, 8.0, 12.0),
    "smr": OpenBCIClient._bandpower(freqs, psd, 12.0, 15.0),
}
This is the core signal-processing layer that everything else depends on. Without this step, there is no meaningful neurofeedback metric to drive the task.
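Putting the pieces together, the whole extraction step can be sketched as one standalone function. This is my simplified reconstruction rather than the script verbatim: I assume a 250 Hz Cyton sampling rate, the theta/alpha/SMR edges match the snippet above, and the beta_l / beta_h / gamma edges are my guesses:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def compute_bandpowers(x, fs=250.0):
    """Demean, bandpass 1-45 Hz, Welch PSD, then sum power per band."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    b, a = butter(4, [1.0 / (fs / 2.0), 45.0 / (fs / 2.0)], btype="band")
    x = filtfilt(b, a, x)
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    df = freqs[1] - freqs[0]

    def bandpower(lo, hi):
        # rectangular integration of the PSD across the band
        mask = (freqs >= lo) & (freqs < hi)
        return float(psd[mask].sum() * df)

    return {
        "theta": bandpower(4.0, 8.0),
        "alpha": bandpower(8.0, 12.0),
        "smr": bandpower(12.0, 15.0),
        "beta_l": bandpower(15.0, 20.0),  # assumed band edges
        "beta_h": bandpower(20.0, 30.0),  # assumed band edges
        "gamma": bandpower(30.0, 45.0),
    }
```

Feeding it a pure 10 Hz sine should put almost all of the power into the alpha band, which is a handy sanity check on the whole chain.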
Turning EEG into neurofeedback metrics
The NFEngine class is where the raw band powers get turned into actual training metrics.
The program supports three modes:
- theta_beta
- smr_ratio
- focus_blend
That part is set up like this:
self.site = "Cz"
self.metric_mode = "theta_beta"
I like this design because it lets me switch between different EEG targets without rewriting the task. I can run the same go / no-go structure while experimenting with different neurofeedback ideas.
The code computes the actual metrics from the current site's band values. For example, the theta/beta ratio divides theta by a beta-derived denominator, while the SMR ratio rewards stronger SMR relative to slower activity, with an extra penalty term for high beta.
A small part of that logic looks like this:
tbr = theta / (beta_for_tbr + eps)
smr_ratio = smr / (theta + self.beta_h_smr_penalty_weight * beta_h_capped + eps)
That means the program is not just plotting EEG. It is transforming the signal into a feature that can be used as feedback.
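Standing alone, that metric step might look like the function below. The dictionary keys, eps, and penalty weight are illustrative, and I am leaving out the script's third focus_blend mode for brevity:

```python
def compute_metric(bands, mode="theta_beta", eps=1e-9, beta_h_penalty=0.5):
    """Turn a dict of band powers into a single trainable metric (sketch)."""
    theta = bands["theta"]
    if mode == "theta_beta":
        # lower theta/beta is the usual attention-training target
        beta = bands["beta_l"] + bands["beta_h"]
        return theta / (beta + eps)
    if mode == "smr_ratio":
        # reward SMR relative to theta, with a penalty for high beta
        return bands["smr"] / (theta + beta_h_penalty * bands["beta_h"] + eps)
    raise ValueError(f"unknown mode: {mode}")
```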
Adaptive thresholds and reward
One thing I wanted was for the neurofeedback to feel stable instead of random. Raw EEG changes a lot from moment to moment, so I built in smoothing, baseline history, threshold updates, and reward shaping.
For example, the program keeps a history of recent metric values:
self.raw_metric_buffer = deque(maxlen=10)
self.baseline_metric_history = deque(maxlen=480)
Then it updates the threshold over time instead of keeping it fixed forever:
self.threshold = (
    (1.0 - self.threshold_blend) * self.threshold
    + self.threshold_blend * candidate_threshold
)
That matters because a static threshold can quickly become too easy or too hard. The adaptive approach makes the training feel more personalized and less arbitrary.
The reward itself is smoothed as well:
self.reward = 0.82 * self.reward + 0.18 * target_reward
That keeps the feedback from flickering wildly every time the EEG shifts a little bit.
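Put together, the buffering, threshold drift, and reward smoothing look roughly like the class below. This is a rough reconstruction rather than the script's exact update rule; in particular, using the recent median as the threshold candidate is my assumption:

```python
from collections import deque

class AdaptiveFeedback:
    """Sketch of smoothed, adaptively thresholded neurofeedback reward."""

    def __init__(self, threshold=1.0, blend=0.05):
        self.raw_metric_buffer = deque(maxlen=10)
        self.threshold = threshold
        self.threshold_blend = blend
        self.reward = 0.0

    def update(self, raw_metric, lower_is_better=True):
        self.raw_metric_buffer.append(raw_metric)
        smoothed = sum(self.raw_metric_buffer) / len(self.raw_metric_buffer)
        # drift the threshold toward the recent median so difficulty stays calibrated
        candidate = sorted(self.raw_metric_buffer)[len(self.raw_metric_buffer) // 2]
        self.threshold = (
            (1.0 - self.threshold_blend) * self.threshold
            + self.threshold_blend * candidate
        )
        hit = smoothed < self.threshold if lower_is_better else smoothed > self.threshold
        target_reward = 1.0 if hit else 0.0
        # exponential smoothing keeps the reward from flickering
        self.reward = 0.82 * self.reward + 0.18 * target_reward
        return self.reward
```

Feeding it a consistently "good" metric should make the reward climb smoothly instead of snapping to 1 on the first sample, while the threshold slowly drifts toward the observed values.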
Artifact awareness
Another thing I cared about was not rewarding obvious junk signal. EEG is messy, and muscle activity can easily contaminate higher frequencies.
The code has a basic artifact check that flags suspicious high gamma relative to the rest of the signal:
if site_gamma > (1.25 * max(site_beta_total, eps)) and site_gamma > 0.20 * site_total_useful:
    return True, "High gamma / probable EMG"
That is not a perfect artifact-rejection system, but it is an important step. It means the task is at least trying to distinguish between meaningful EEG changes and probable muscle contamination.
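Wrapped as a standalone function (with the thresholds copied from the snippet above), the heuristic is easy to sanity-check:

```python
def is_artifact(site_gamma, site_beta_total, site_total_useful, eps=1e-9):
    """Flag windows where gamma dwarfs beta and carries a large share of total power."""
    if site_gamma > 1.25 * max(site_beta_total, eps) and site_gamma > 0.20 * site_total_useful:
        return True, "High gamma / probable EMG"
    return False, ""
```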
The go / no-go task itself
The behavioral side of the program is straightforward, which is part of why I like it. The task has a few different states:
- IDLE
- WAIT_PREP
- STIM
- FEEDBACK
The state changes happen automatically over time based on the Pygame clock. The code randomly chooses whether the next trial is a GO or NOGO trial:
def _choose_trial(self):
    return "GO" if random.random() < 0.70 else "NOGO"
That means most trials are GO trials, with fewer NOGO trials mixed in. When the stimulus appears, I either click or withhold clicking depending on the trial type.
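As a quick sanity check on that 70 / 30 split, a short simulation (mine, not part of the script) lands close to the expected GO fraction over many draws:

```python
import random

def choose_trial(rng=random):
    # ~70% GO, ~30% NOGO, matching the probability used in the script
    return "GO" if rng.random() < 0.70 else "NOGO"

random.seed(0)
draws = [choose_trial() for _ in range(10_000)]
go_fraction = draws.count("GO") / len(draws)
```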
The outcome logic is handled cleanly. If I click during a GO trial, it counts as a hit. If I click during a NOGO trial, it counts as a false alarm. If I fail to click on GO, it becomes a miss. If I successfully avoid clicking on NOGO, it becomes a correct reject.
The response handling looks like this:
if self.current_trial_type == "GO":
    self.hits += 1
    self.last_result = "HIT"
else:
    self.false_alarms += 1
    self.last_result = "FALSE ALARM"
I like this because it gives me a simple experimental structure that can still produce useful behavior data.
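That four-way mapping reduces to a small pure function, which is how I would sketch it for testing:

```python
def classify_outcome(trial_type, clicked):
    """Map a trial's type plus the response onto the four go/no-go outcomes."""
    if trial_type == "GO":
        return "HIT" if clicked else "MISS"
    return "FALSE ALARM" if clicked else "CORRECT REJECT"
```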
Audio neurofeedback
The script also uses music volume as a feedback signal. If a track is loaded successfully, the program continuously adjusts the playback volume based on the current reward value from the neurofeedback engine.
That part looks like this:
pygame.mixer.music.set_volume(0.04 + 0.55 * reward)
This is one of my favorite parts of the program because it makes the EEG feedback feel less abstract. Instead of just showing numbers, the software changes the audio environment in real time.
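The mapping itself is just a linear ramp from a faint floor (~0.04) up to about 0.59 at full reward; the clamp below is my own defensive addition rather than something I know the script does:

```python
def reward_to_volume(reward):
    """Map a reward in [0, 1] onto a comfortable music-volume range."""
    reward = min(max(reward, 0.0), 1.0)  # clamp before scaling
    return 0.04 + 0.55 * reward
```

Keeping the floor above zero means the music never fully drops out, so the feedback reads as "quieter" rather than "broken".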
Logging every trial
The TrialLogger class writes every trial to a CSV file called gonogo_openbci_log.csv. Each row includes not just the behavioral outcome, but also the EEG-derived features and neurofeedback values captured around the time of the stimulus.
The logger is initialized like this:
class TrialLogger:
    def __init__(self, filepath="gonogo_openbci_log.csv"):
        self.filepath = filepath
And rows are appended to disk immediately:
with open(self.filepath, "a", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=self.fieldnames)
That is important because it turns the task into more than just a live demo. It becomes a dataset generator. After a session, I can go back and inspect reaction times, accuracy, neurofeedback values, and EEG band power relationships.
Built-in analysis
The script even includes a basic analysis function that calculates summary statistics like:
- number of task trials
- overall accuracy
- number of GO hits
- number of NOGO false alarms
- mean GO reaction time
- correlations between RT and TBR
- correlations between RT and SMR ratio
- correlations between RT and reward
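Those RT correlations boil down to a Pearson coefficient over the per-trial columns, something like:

```python
import numpy as np

def rt_correlation(rts, feature):
    """Pearson correlation between GO-trial reaction times and a per-trial EEG feature."""
    rts = np.asarray(rts, dtype=float)
    feature = np.asarray(feature, dtype=float)
    return float(np.corrcoef(rts, feature)[0, 1])
```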
If scikit-learn is installed and there are enough balanced samples, it also trains a simple logistic regression classifier to separate faster and slower GO trials.
A small example of that looks like this:
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
acc = model.score(X, y)
I like that this is built into the same script. It makes it easy to go from data collection to quick first-pass analysis without switching into a completely different workflow.
The on-screen interface
The Pygame UI shows the current state of the task, reaction time, running accuracy, current EEG site, active neurofeedback metric, reward value, threshold, and a few band-power summaries. It also displays signal quality and any artifact warnings.
The help text at the bottom shows the controls:
"Left click=start/respond | 1=F3 2=F4 3=AVG 4=Fz 5=Cz | Q=TBR@Cz W=SMR@Cz E=Focus@Fz | R=reset M=analyze ESC=quit"
That means I can switch sites and training modes live while the task is running, which makes the program feel more like an EEG experiment platform than a fixed single-purpose app.
Why I built it this way
What I like most about this project is that it combines several things I’m interested in:
- cognitive-task design
- EEG signal processing
- neurofeedback
- real-time interaction
- post-session analysis
A lot of projects focus on only one of those pieces. This one pulls them together in a way that feels more complete.
Instead of just measuring reaction time, I can look at reaction time together with EEG-derived features. Instead of just showing neurofeedback, I can embed that feedback inside a behavioral task. And instead of just collecting data, I can immediately run some analysis on it afterward.
Final thoughts
For me, this script is a good example of why Python is so useful for neurotechnology experiments. In one file, I can handle hardware streaming, signal processing, task presentation, audio feedback, logging, and even a little machine learning.
The result is a go / no-go neurofeedback task that is simple on the surface but actually pulls together a lot of moving parts underneath. That is exactly the kind of project I like building: interactive, experimental, and flexible enough to grow into something bigger later.