
PROJECT ID: OSC-2025 // NEUROTECHNOLOGY

Real-Time Emotion Detection & Music Therapy

Built in 24 hours at the Heroes of The Brain 2025 Hackathon: a complete brain-computer interface platform that reads real-time EEG brainwave data, classifies emotional states using a weighted ensemble of deep learning and gradient boosting models, and automatically curates Spotify playlists to guide users toward calm. Designed for pre-operative anxiety reduction and therapeutic applications.

24h Hackathon · Brain-Computer Interface · Music Therapy
125Hz
EEG Sample Rate
5
Frequency Bands
4
EEG Channels
10Hz
Prediction Rate
[01]

System Overview

The Problem

Pre-operative anxiety affects up to 80% of surgical patients, leading to increased anesthesia requirements, longer recovery times, and worse outcomes. Traditional interventions like sedatives have side effects. We built a non-invasive, personalized solution using neurofeedback and music therapy.

Target Users Pre-operative patients, therapeutic applications, stress management
Input Real-time EEG brainwave data via BrainAccess headset
Output Emotion classification + adaptive music recommendations

Our Solution

Oscillate continuously monitors brain activity, detects emotional states in real-time, and responds with curated music designed to guide the user toward a neutral, calm state. The system creates a closed feedback loop: detect → intervene → measure → adapt.

Detection Ensemble AI model classifies one of 4 emotional states every 100ms
Intervention Spotify API delivers emotion-appropriate therapeutic playlists
Visualization 3D brain model with real-time emotion color mapping
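The detect → intervene → measure → adapt loop above can be sketched as a simple control cycle. This is an illustrative skeleton only: the function names and the emotion-to-playlist mapping are hypothetical stand-ins for the real classifier and Spotify integration.

```python
# Schematic closed feedback loop: detect -> intervene -> (measure/adapt on next cycle).
# All names below are hypothetical stubs, not the project's actual API.

def classify_emotion(eeg_features):
    """Stub: map a window of EEG features to one of 4 emotional states."""
    return "stressed"

def pick_playlist(emotion):
    """Stub: choose a playlist intended to counteract the detected state."""
    return {"stressed": "calm-piano", "sad": "uplifting",
            "happy": "ambient", "neutral": "ambient"}[emotion]

def run_cycle(eeg_features):
    emotion = classify_emotion(eeg_features)   # detect
    playlist = pick_playlist(emotion)          # intervene
    return emotion, playlist                   # the next cycle measures the effect

emotion, playlist = run_cycle([0.0] * 20)
print(emotion, playlist)  # stressed calm-piano
```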
[02]

Signal Processing Pipeline

EEG Acquisition

Raw brainwave data is captured from 4 electrode channels (AF3, AF4, O1, O2) at 125 Hz via the Lab Streaming Layer (LSL) protocol. The frontal channels capture emotional valence, while the occipital channels provide arousal indicators.
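Streaming acquisition like this typically pulls LSL chunks into a rolling analysis window. The sketch below shows only that windowing logic for 4 channels at 125 Hz; the LSL inlet itself is omitted, and the class name is illustrative rather than taken from the project.

```python
import numpy as np

FS = 125          # sampling rate (Hz)
N_CHANNELS = 4    # AF3, AF4, O1, O2

class EEGWindow:
    """Rolling buffer that yields fixed-length analysis windows."""

    def __init__(self, seconds=2.0):
        self.size = int(FS * seconds)
        self.buf = np.zeros((self.size, N_CHANNELS))
        self.filled = 0

    def push_chunk(self, chunk):
        """chunk: (n_samples, N_CHANNELS) array pulled from the LSL inlet."""
        chunk = np.asarray(chunk, dtype=float)
        n = len(chunk)
        self.buf = np.roll(self.buf, -n, axis=0)  # discard oldest samples
        self.buf[-n:] = chunk                     # append newest samples
        self.filled = min(self.size, self.filled + n)

    def ready(self):
        return self.filled >= self.size

    def window(self):
        return self.buf.copy()

w = EEGWindow(seconds=2.0)
w.push_chunk(np.random.randn(250, N_CHANNELS))  # 2 s of 125 Hz samples
print(w.ready())  # True
```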

Delta Band 0.5-4 Hz — Deep sleep, unconscious processing
Theta Band 4-8 Hz — Drowsiness, light meditation
Alpha Band 8-13 Hz — Relaxed wakefulness, calm
Beta Band 13-30 Hz — Active thinking, focus, anxiety
Gamma Band 30-45 Hz — High-level cognition, perception

Feature Engineering

Relative band powers are computed for each channel across all 5 frequency bands, yielding 20 base features. These undergo log transformation and polynomial expansion before standardization, creating a rich feature space for the ensemble model.

Base Features 4 channels × 5 bands = 20 relative band powers
Preprocessing log1p transformation + polynomial expansion
Smoothing Temporal averaging over 10 predictions (~1 second)
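The preprocessing steps listed above map naturally onto a scikit-learn pipeline, and the smoothing onto a short rolling average of class probabilities. The degree-2 polynomial expansion is an assumption (the text does not state the degree), as are the class and variable names.

```python
from collections import deque

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, PolynomialFeatures, StandardScaler

# 4 channels x 5 bands = 20 relative band powers per window.
pipeline = Pipeline([
    ("log", FunctionTransformer(np.log1p)),
    ("poly", PolynomialFeatures(degree=2, include_bias=False)),  # degree is an assumption
    ("scale", StandardScaler()),
])

X = np.random.rand(100, 20)          # 100 windows of 20 band-power features
X_feat = pipeline.fit_transform(X)
print(X_feat.shape)  # (100, 230): 20 linear terms + 210 degree-2 terms

class Smoother:
    """Average the last k probability vectors (~1 second at 10 Hz)."""

    def __init__(self, k=10):
        self.hist = deque(maxlen=k)

    def update(self, probs):
        self.hist.append(np.asarray(probs, dtype=float))
        return np.mean(self.hist, axis=0)
```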
[03]

Technical Architecture

Deep Learning

WideResNet Classifier

1024-dimensional hidden layers with 3 residual blocks and dropout regularization (0.3-0.4). Contributes 60% weight to the ensemble prediction.
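The core computation of a residual MLP block is y = x + f(x), where f is a small feed-forward transform. The NumPy forward pass below is only schematic (the actual model is presumably a trained PyTorch network, and dropout applies only at training time, so it is omitted here).

```python
import numpy as np

HIDDEN = 1024  # hidden width from the writeup

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """Residual MLP block: y = x + f(x), keeping the 1024-dim width.
    Train-time dropout (0.3-0.4) is omitted from this inference sketch."""
    h = relu(x @ W1 + b1)
    return x + (h @ W2 + b2)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, HIDDEN))              # batch of 8 feature vectors
W1 = rng.standard_normal((HIDDEN, HIDDEN)) * 0.01
W2 = rng.standard_normal((HIDDEN, HIDDEN)) * 0.01
y = residual_block(x, W1, np.zeros(HIDDEN), W2, np.zeros(HIDDEN))
print(y.shape)  # (8, 1024)
```

The skip connection means that even when f(x) contributes little, the input passes through unchanged, which is what makes stacking 3 such blocks stable to train.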

Classical ML

Gradient Boosting

Scikit-learn implementation with optimized hyperparameters. Provides robustness and contributes 40% weight to the final ensemble output.
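The 60/40 weighting described above amounts to a weighted average of the two models' class probabilities. A minimal sketch, assuming both models emit probability vectors over the 4 emotional states:

```python
import numpy as np

W_DEEP, W_GBM = 0.6, 0.4  # ensemble weights from the writeup

def ensemble_probs(p_deep, p_gbm):
    """Weighted average of the two models' class probabilities."""
    p = W_DEEP * np.asarray(p_deep) + W_GBM * np.asarray(p_gbm)
    return p / p.sum()  # renormalize (a no-op if both inputs sum to 1)

# Example with 4 emotional states:
p = ensemble_probs([0.7, 0.1, 0.1, 0.1], [0.4, 0.3, 0.2, 0.1])
print(p.round(2))  # [0.58 0.18 0.14 0.1 ]
```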

Backend

FastAPI + WebSocket

High-performance async Python server handling LSL stream ingestion, real-time inference, and WebSocket broadcasting at 10Hz.
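The FastAPI layer itself is not reproduced here, but the 10Hz broadcast pattern it describes can be sketched with stdlib asyncio: one queue per connected client, with each prediction fanned out to all of them. Names are illustrative.

```python
import asyncio

class Broadcaster:
    """Fan each prediction out to every connected client at a fixed rate."""

    def __init__(self):
        self.clients = set()  # one asyncio.Queue per WebSocket connection

    def subscribe(self):
        q = asyncio.Queue()
        self.clients.add(q)
        return q

    async def run(self, predictions, hz=10):
        for p in predictions:
            for q in self.clients:
                q.put_nowait(p)        # non-blocking fan-out
            await asyncio.sleep(1 / hz)  # pace the loop at `hz` messages/s

async def demo():
    bus = Broadcaster()
    q = bus.subscribe()
    await bus.run([{"emotion": "calm", "conf": 0.91}], hz=10)
    return await q.get()

print(asyncio.run(demo()))  # {'emotion': 'calm', 'conf': 0.91}
```

In the real server, each WebSocket handler would drain its own queue and forward messages to the browser, so one slow client never stalls the others.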

Frontend

React + Three.js

Interactive 3D brain visualization using React Three Fiber with real-time emotion-based color mapping and Spotify Web Playback SDK integration.

Research Team

Contributors

  • Iwo Smura
  • Iwo Wojtakajtis
  • Karina Leśkiewicz
  • Wiktoria Malinowska
  • Paweł Litwin