Calcium-Bridged Temporal EEG Decoder (Vibe coded amateur stuff)
On GitHub: https://github.com/anttiluode/CalciumBridgeEEGConstraintViewer/tree/main
This project explores the idea of decoding EEG brain signals by modeling perception as a sequential process. Instead of treating the brain's response as a single event, this system breaks it down into distinct temporal windows, attempting to model the "chain of thought" as a visual concept crystallizes in the mind.
The project consists of two main components:
- A trainer (`pkas_cal_trainer_gemini.py`) that builds a novel neural network model using the Alljoined1 dataset.
- A viewer (`pkas_cal_viewer_gemini2.py`) that loads a trained model and provides an interactive visualization of its "thought process" on new EEG samples.
Core Concept: The "Vibecoded" System
The central idea of this project is a system inspired by neuromorphic computing and constraint satisfaction, which we've nicknamed the "vibecoded" system.
Here’s how it works, in simple terms:
- Thinking in Moments: The brain's response to an image (e.g., from 0 to 600 ms) is not analyzed all at once. It is sliced into four distinct "thinking moments", time windows based on known ERP components.
- A Solver for Each Moment: Each time window is processed by a special `CalciumAttentionModule`. This module's job is to look at the EEG clues in its slice and find the best explanation that satisfies all the "constraints" in the signal.
- The Calcium Bridge: This is the key. The "hunch" or "focus" (the calcium state) from one thinking moment is passed to the next. This creates a causal chain of thought, allowing the model to refine its predictions over time, from a general gist to a more specific concept. A minimal sketch of this chaining follows below.
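To make the chaining concrete, here is a minimal PyTorch sketch of the idea. The layer types, dimensions, window count, and module internals below are illustrative assumptions, not the project's actual code; the real `CalciumAttentionModule` lives in `pkas_cal_trainer_gemini.py`.

```python
# Minimal sketch of the calcium-bridge chaining, NOT the project's actual model.
# Layer types, dimensions, and window boundaries are assumptions for illustration;
# see pkas_cal_trainer_gemini.py for the real CalciumAttentionModule.
import torch
import torch.nn as nn

class ToyCalciumAttentionModule(nn.Module):
    """Encodes one temporal EEG window and updates the carried 'calcium' state."""
    def __init__(self, n_channels, window_len, state_dim):
        super().__init__()
        self.encode = nn.Linear(n_channels * window_len, state_dim)
        self.bridge = nn.GRUCell(state_dim, state_dim)  # passes the "hunch" forward

    def forward(self, eeg_window, calcium_state):
        # eeg_window: (batch, channels, window_len)
        features = torch.relu(self.encode(eeg_window.flatten(1)))
        return self.bridge(features, calcium_state)

class ToyCalciumBridgeDecoder(nn.Module):
    """One module per ERP-aligned window; the final state feeds a classifier."""
    def __init__(self, n_channels=64, window_len=75, state_dim=128, n_classes=26, n_windows=4):
        super().__init__()
        self.state_dim = state_dim
        self.moments = nn.ModuleList(
            ToyCalciumAttentionModule(n_channels, window_len, state_dim)
            for _ in range(n_windows)
        )
        self.classify = nn.Linear(state_dim, n_classes)

    def forward(self, eeg_windows):
        # eeg_windows: list of tensors, one per "thinking moment"
        state = eeg_windows[0].new_zeros(eeg_windows[0].size(0), self.state_dim)
        for moment, window in zip(self.moments, eeg_windows):
            state = moment(window, state)  # the hunch from one moment seeds the next
        return self.classify(state)        # logits over the target categories

# Example: four 75-sample windows from a batch of 64-channel epochs
windows = [torch.randn(8, 64, 75) for _ in range(4)]
logits = ToyCalciumBridgeDecoder()(windows)
print(logits.shape)  # torch.Size([8, 26])
```

The point of the sketch is only the data flow: each window gets its own solver, and a single recurrent state is threaded through all of them so later windows can build on earlier hunches.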
A Note on Data Filtering to Reduce Bias
A key challenge when decoding brain signals from natural images is that certain concepts are omnipresent. For example, `person` appears in a vast number of images in the COCO dataset.
To prevent the model from simply learning to predict these common categories and to create a more focused decoding task, this project intentionally filters the data:
- Selective Training: The model is only trained to recognize a specific subset of 26 object categories (defined in the code as `TARGET_CATEGORIES`). Common but potentially confounding categories like `person`, `cell phone`, or `book` were deliberately excluded from this training set.
- Clean Visualization: The viewer script (`pkas_cal_viewer_gemini2.py`) performs an additional, stricter filtering step: it only selects test images that contain at least one of the target objects and none of the excluded objects (see the sketch at the end of this section).
This ensures that when you visualize the model's performance, you are seeing its attempt to decode distinct object concepts rather than relying on the statistical likelihood of a person being in the scene.
The reasoning was that a person, a computer, and similar objects are constantly present in the lab where the Alljoined data was recorded, and their presence may bias the results.
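The filtering rule itself reduces to a few lines. The category names below are placeholders; the real 26-category `TARGET_CATEGORIES` list and the exclusions are defined in the project scripts.

```python
# Illustrative sketch of the viewer's image-selection rule. The category sets
# below are placeholders; the real TARGET_CATEGORIES list and the excluded
# categories are defined in the project scripts.
TARGET_CATEGORIES = {"horse", "elephant", "pizza"}      # stand-in for the real 26
EXCLUDED_CATEGORIES = {"person", "cell phone", "book"}

def keep_for_visualization(categories_in_image):
    """Keep an image only if it contains >= 1 target object and 0 excluded objects."""
    categories_in_image = set(categories_in_image)
    has_target = bool(categories_in_image & TARGET_CATEGORIES)
    has_excluded = bool(categories_in_image & EXCLUDED_CATEGORIES)
    return has_target and not has_excluded

print(keep_for_visualization({"horse", "fence"}))    # True
print(keep_for_visualization({"horse", "person"}))   # False (excluded object present)
print(keep_for_visualization({"traffic light"}))     # False (no target object)
```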
Requirements
- Python 3.x
- PyTorch
- `datasets` (from Hugging Face)
- `tkinter` (usually included with Python)
- `matplotlib`
- `pillow`

You can install the main dependencies with pip:

pip install torch datasets matplotlib pillow
Setup and Usage
1. Download Data and Model
Data:
- COCO Images: Download the 2017 training/validation images from the official COCO dataset site. You will need `train2017.zip` and/or `val2017.zip`. Unzip them into a known directory.
- COCO Annotations: On the same site, download the "2017 Train/Val annotations". You only need the `instances_train2017.json` file.
- Alljoined1 EEG Data: This will be downloaded automatically by the scripts on their first run (a minimal loading sketch follows this list).
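For reference, the automatic download goes through the Hugging Face `datasets` library along these lines. The repository id shown here is an assumption; check the scripts for the exact id they pass to `load_dataset`.

```python
# Rough sketch of how the EEG data is fetched on first run via Hugging Face.
# The repository id below is an ASSUMPTION -- check pkas_cal_trainer_gemini.py
# for the exact name the project actually uses.
from datasets import load_dataset

alljoined = load_dataset("Alljoined/05_125")  # downloaded once, then served from the local cache
print(alljoined)  # shows the available splits and fields (EEG epochs, image identifiers, ...)
```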
Pre-trained Model (Recommended):
- You can download the pre-trained V2 model directly from its Hugging Face repository. Click on `calcium_bridge_eeg_model_v2.pth` and then click the "download" button.
2. Viewing the Results (Using the Pre-trained Model)
Run the V2 viewer script:
python pkas_cal_viewer_gemini2.py
In the GUI:
- Select the COCO image and annotation paths you downloaded.
- Click "Load V2 Model" and select the
calcium_bridge_eeg_model_v2.pthfile you downloaded from Hugging Face.
Once the model is loaded, click "Test Random Sample" to see the model's analysis of a new brain signal.
3. Training Your Own Model (Optional)
Run the V2 training script:
python pkas_cal_trainer_gemini.py
In the GUI, select your COCO image and annotation paths.
Click "Train Extended Model (V2)".
A new file named `calcium_bridge_eeg_model_v2.pth` will be saved, containing the best-performing model from your training run. You can then load this file into the viewer.
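If you want to sanity-check the saved file outside the GUI, a PyTorch checkpoint can be inspected like this. Whether it holds a bare `state_dict` or a dict with extra metadata depends on how the trainer saves it, so treat this as a quick probe rather than a loader.

```python
# Quick sanity check of the saved checkpoint outside the GUI. Whether this is a
# bare state_dict or a dict with extra metadata depends on the trainer.
import torch

checkpoint = torch.load("calcium_bridge_eeg_model_v2.pth", map_location="cpu")
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])  # parameter names or metadata keys
```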
A Note on Interpretation
This is an exploratory research tool. The model's predictions should not be interpreted as literal "mind-reading."
Instead, the results reflect the complex statistical associations learned from the multi-subject Alljoined dataset. When the model associates a "horse trailer" with "horse," it is because this is a strong, common conceptual link found in the aggregate brain data. The viewer is a window into the "cognitive gestalt" of an "average mind" as represented by the dataset.
