Using Brainwaves for creative output // PCOMP Midterm

Oct 20, 2024

Inspiration

Our project emerged from a curiosity about visualizing mental states and emotions. We were inspired by the notion of capturing a "portrait" of someone's mind, using technology to convert intangible feelings into a tangible, visual form. The core concept was to create an artistic representation of a person's internal mental state using EEG sensor data from the Muse 2 headset.

Turning EEG into Art – Refik Anadol’s “Melting Memories”, Brain Wave Drawing Machine - Juanifico

Designing the Experience

In creating this project, the environment became a key factor. The Muse 2 headset requires a quiet and calm space to accurately capture EEG values, so we carefully designed the experience to ensure minimal noise or distraction. We wanted participants to be fully engaged with their thoughts during the process.

Experience Flow

A person's experience follows these steps:

  1. The participant wears the Muse 2 headset.

  2. They read a series of questions focused on their past, present, and future.

  3. As they think about the questions, the headset records their EEG values for three minutes.

  4. At the end of the session, the system automatically saves an image representation of their mental states.

  5. When the participant presses a button, a printout is generated for them to take home.

This process allows participants to engage in deep, reflective thought while the EEG headset captures subtle shifts in their mental states. These variations in brain activity translate into a unique visual sketch.

Technical Flow

From a technical perspective, the process involves:

  1. EEG data collection from the Muse 2 headset

  2. Data processing in p5.js

  3. State determination using ML5, trained on Jason's dataset

  4. Sketch generation based on the determined mental state

  5. PNG download triggered by an Automator action

  6. Signal sent to the Arduino

  7. The Arduino communicates with the thermal printer

  8. Thermal printer produces the final receipt

Through this design process, we learned that while EEG captures brainwave patterns, it was important to focus on mental states—such as calm, focus, or meditation—rather than the raw data. These states represent an abstraction from the waves themselves, providing a more meaningful and interpretable output for the participant.
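
A quick way to see the difference is a minimal sketch, with made-up thresholds rather than Jason's actual classifier, of how relative band powers might be collapsed into one of those coarse state labels:

// Minimal sketch (hypothetical thresholds, not Jason's classifier):
// collapse the five band powers into a coarse, human-readable state label.
function labelFromBands(bands) {
  // bands = { delta, theta, alpha, beta, gamma }, each a relative power between 0 and 1
  const { theta, alpha, beta } = bands;
  if (alpha > beta && alpha > theta) return "calm";       // relaxed, alpha-dominant
  if (beta > alpha && beta > theta) return "focus";       // active concentration
  if (theta > alpha && theta > beta) return "meditation"; // drowsy or deeply meditative
  return "neutral";
}

// e.g. labelFromBands({ delta: 0.1, theta: 0.2, alpha: 0.4, beta: 0.25, gamma: 0.05 }) returns "calm"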

The Receipt Concept

Dan B O'Sullivan, a faculty member we consulted, introduced us to an alternative concept: instead of simply producing a sketch, what if the printout resembled a receipt—a kind of "accounting" of the participant's emotions? This added a playful element, blending the quantification of feelings with the formal structure of a receipt. The use of a thermal receipt printer for this purpose added humor to the notion of "auditing" someone's feelings.

This printed artwork would act as a keepsake, allowing participants to reflect on their mental state through abstract art. The tangible nature of the receipt provides a unique way for participants to take a piece of their experience with them, encouraging further reflection and introspection beyond the initial session.

By designing the experience in this way, we aimed to create a seamless interaction between the participant's thoughts, the technology capturing those thoughts, and the final artistic representation. This approach allows for a deeply personal and introspective experience, while also producing a tangible artifact that encapsulates a moment in the participant's mental journey.

Coding the Experience

We based our project on Jason Snell's MuseML tool, which simplifies the process of interpreting brainwave data from the Muse 2 headset. However, our journey to successfully implement this tool was not without challenges.

Jason Snell's Initial Code and Sketch
Understanding EEG Data Processing

Initially, we struggled with capturing and interpreting the EEG values. To overcome this, we arranged a meeting with Jason, who explained the concept behind the whole process.

Initial meeting with Jason

Jason walked us through how raw brainwave values (beta, theta, etc.) are mapped and then converted into mental states. This understanding was crucial for us to move forward with the code implementation.

1. Initialize EEG, PPG, and accelerometer sensors
2. Set up neural network for brainwave classification
3. Load trained model or train new model with JSON data
4. Main loop:
   a. Collect data from Muse headband (EEG, PPG, accelerometer)
   b. Process EEG data:
      - Convert time-domain signal to frequency spectrum using FFT
      - Calculate power in different frequency bands (delta, theta, alpha, beta, gamma)
   c. Process PPG data:
      - Detect heartbeats
      - Calculate heart rate (BPM)
   d. Classify brainwave state using neural network
   e. Update global state variables (noise, muscle, focus, clear, meditation, dream)
   f. Draw visualization:
      - EEG spectrum chart
      - PPG waveform
      - Display current brain state, EEG band powers, heart rate, and accelerometer data
5. Handle user input for adding new training data or saving the model
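
Step 4b was the part we leaned on Jason most to understand, so here is a simplified, spelled-out version of the idea in plain JavaScript: a naive DFT over one window of raw samples, averaged into the conventional bands. It assumes a 256 Hz sample rate and is only a sketch; Jason's code uses a proper FFT implementation.

// Simplified sketch of step 4b: naive DFT over one window of raw EEG samples,
// then averaging spectral power inside the conventional frequency bands.
// Assumes a 256 Hz sample rate; illustrative only, not Jason's actual code.
const SAMPLE_RATE = 256;
const BANDS = { delta: [1, 4], theta: [4, 8], alpha: [8, 13], beta: [13, 30], gamma: [30, 44] };

function bandPowers(samples) {
  const N = samples.length;
  const powers = { delta: 0, theta: 0, alpha: 0, beta: 0, gamma: 0 };
  const counts = { delta: 0, theta: 0, alpha: 0, beta: 0, gamma: 0 };

  for (let k = 1; k < N / 2; k++) {              // skip DC, keep positive frequencies
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phase = (2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(phase);
      im -= samples[n] * Math.sin(phase);
    }
    const freq = (k * SAMPLE_RATE) / N;          // frequency of this bin in Hz
    const power = (re * re + im * im) / N;
    for (const [name, [lo, hi]] of Object.entries(BANDS)) {
      if (freq >= lo && freq < hi) { powers[name] += power; counts[name]++; }
    }
  }
  for (const name in powers) {
    if (counts[name] > 0) powers[name] /= counts[name]; // average power per band
  }
  return powers; // { delta, theta, alpha, beta, gamma }
}
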
Code Implementation Challenges

After gaining a theoretical understanding, we began modifying the p5.js sketch. However, we encountered difficulties due to the multiple files within the sketch and variable naming conflicts. To address this, we requested a second code walkthrough with Jason.

During this session, Jason reviewed our code and helped us understand the interconnections between different parts of the sketch. To avoid variable conflicts, we adopted a naming convention that incorporated our names into the variables. For example:

Second meeting with Jason where he helped us write the code and implement changes
function suryaGetDominantState() {
  // Function implementation
}
Iterative Development with AI Assistance

To further refine our code and understand its nuances, we utilized Cursor, an AI-powered coding tool. This allowed us to provide context to the AI and receive real-time assistance as we iterated on the sketch. Through this process, we were able to resolve issues and finally get the sketch working as intended.

Final Implementation

Here's the pseudo-code for our final implementation:

Initialize global variables and constants

Function setup():
    Create canvas
    Set up Muse connection
    Initialize machine learning model
    Create UI buttons (Connect, Start Recording)

Function draw():
    If not connected to Muse:
        Display "Please connect to Muse" message
        Return
    
    If not recording and not ready to save:
        Display "Press Start Recording" message
        Return
    
    If recording:
        Update elapsed time
        If one second has passed:
            Update Muse value history
    
    Draw visualization elements:
        Draw oval
        Draw segment markers
        Display EEG data visualization
        Draw legend
    
    If recording time is complete:
        Set ready to save
        Update UI

Function connectToMuse():
    Attempt Bluetooth connection to Muse device
    If successful:
        Set up data streaming
        Enable "Start Recording" button

Function startRecording():
    If connected and not recording:
        Start recording
        Reset elapsed time
        Clear previous history
    Else if recording:
        Stop recording
    Else if ready to save:
        Save canvas as PNG
        Reset for new recording

Function updateMuseValueHistory():
    Get dominant mental state
    Add to history array
    Trim history if too long

Function processEEGData(raw_data):
    Decode raw data
    Apply FFT to get frequency spectrum
    Calculate average values for each brainwave type
    Update global EEG object

Function classifyMentalState(eeg_data):
    Use machine learning model to classify mental state
    Update global state object with classification results

Function displayEEGData():
    For each segment around the oval:
        Calculate position
        Get mental states for this segment
        Draw shapes representing mental states

Function drawShapesForState(state, position):
    Choose shape based on mental state
    Draw shape at given position

Function handleButtonClicks():
    If "Connect" clicked:
        Call connectToMuse()
    If "Start Recording" clicked:
        Call startRecording()

Main loop:
    Call draw() repeatedly
    Handle any button clicks
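
To make the pseudo-code a little more concrete, here is a hedged sketch of the once-per-second history update and the three-minute cut-off. It assumes `state` is the global object the classifier keeps updated; the variable and function names are illustrative, not our exact ones.

// Hedged sketch: once per second, record the currently dominant state;
// after 180 entries (3 minutes) flag the sketch as ready to save.
// `state` is assumed to look like { focus: 0.7, calm: 0.2, meditation: 0.1, ... }.
const RECORDING_SECONDS = 180;
let museValueHistory = [];
let recording = false;
let readyToSave = false;

function updateMuseValueHistory() {
  if (!recording) return;

  const dominant = Object.entries(state).sort((a, b) => b[1] - a[1])[0][0];
  museValueHistory.push(dominant);

  if (museValueHistory.length > RECORDING_SECONDS) {
    museValueHistory.shift(); // trim history if it grows too long
  }
  if (museValueHistory.length >= RECORDING_SECONDS) {
    recording = false;
    readyToSave = true; // draw() can now update the UI and enable saving
  }
}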

And here's a link to the actual code for the main visualization function: <LINK>

Visualization Evolution

Our visualization approach evolved significantly throughout the development process. We initially considered using a spiral graph with different stroke styles to represent different mental states:

Spiral graph visualisation idea iteration

However, we found this approach lacked legibility, especially given the small print size of our thermal printer. Inspired by the work of information designer Giorgia Lupi, we pivoted to using distinct symbols to represent different mental states:

Data visualisation precedents from Giorgia Lupi
This oval visualization tracks mental states over 3 minutes. Each third represents 1 minute, with vertical symbol sets showing 6-second intervals. Symbols indicate mental states per the legend. The oval shape suits thermal printer output, allowing easy reading of continuous data.

This symbol-based approach maintained a level of abstraction above language while still being interpretable, allowing users to connect with their own thoughts and feelings about the prompts.
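
For the curious, here is a hedged reconstruction of how the symbols end up around the oval: 180 seconds of history grouped into 6-second segments (30 around the ellipse), each drawn as the symbol for its dominant state. The shapes below are stand-ins, not our exact legend.

// Hedged reconstruction of the oval layout; shapes are stand-ins for our legend.
function displayEEGDataOnOval(history, cx, cy, rx, ry) {
  const SEGMENT_SECONDS = 6;
  const SEGMENTS = 30; // 3 minutes / 6-second intervals

  for (let i = 0; i < SEGMENTS; i++) {
    const slice = history.slice(i * SEGMENT_SECONDS, (i + 1) * SEGMENT_SECONDS);
    if (slice.length === 0) continue;

    // most frequent state inside this 6-second window
    const counts = {};
    slice.forEach((s) => (counts[s] = (counts[s] || 0) + 1));
    const dominant = Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0];

    const angle = map(i, 0, SEGMENTS, -HALF_PI, TWO_PI - HALF_PI); // start at the top
    drawSymbolForState(dominant, cx + rx * cos(angle), cy + ry * sin(angle));
  }
}

function drawSymbolForState(state, x, y) {
  if (state === "focus") rect(x - 3, y - 3, 6, 6);
  else if (state === "calm") circle(x, y, 6);
  else if (state === "meditation") triangle(x - 3, y + 3, x + 3, y + 3, x, y - 3);
  else line(x - 3, y, x + 3, y); // anything else: a short dash
}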

Through this iterative journey, we not only created a functional EEG visualization tool but also gained valuable insights into working with complex datasets, real-time data processing, and creating meaningful visual representations of abstract concepts.

Iterating with the Printer

We began by watching a YouTube tutorial on how to use a thermal printer with Arduino and the Arduino IDE. Our first task was to print simple text, and we wrote code that was supposed to print "Hello World" when the number 1 was pressed.

#include <SoftwareSerial.h>

SoftwareSerial mySerial(10, 11); // RX, TX

// Button pin
const int buttonPin = 2; // Change as needed

// Print barcode instruction, including information “DFR0503”
char bar_code[24] = {
  0x1b, 0x40,
  0x1d, 0x48, 0x02,  // Print address for HRI character
  0x1d, 0x68, 0x64,  // Set barcode height to 100
  0x1d, 0x77, 0x03,  // Set horizontal width to 3
  0x1d, 0x6b, 0x49, 0x09, 0x7B, 0x42, 0x44, 0x46, 0x52, 0x7B, 0x43, 0x05, 0x03
};

// Print QR code instruction, including information “www.dfrobot.com”
const char QRx[60] = {
  0x1b, 0x40,
  0x1d, 0x28, 0x6b, 0x03, 0x00, 0x31, 0x43, 0x05,  
  0x1d, 0x28, 0x6b, 0x03, 0x00, 0x31, 0x45, 0x30,
  0x1d, 0x28, 0x6b, 0x12, 0x00, 0x31, 0x50, 0x30,
  0x77, 0x77, 0x77, 0x2E, 0x64, 0x66, 0x72, 0x6F,
  0x62, 0x6F, 0x74, 0x2E, 0x63, 0x6F, 0x6D,
  0x1b, 0x61, 0x01,  // Center the graphic
  0x1d, 0x28, 0x6b, 0x03, 0x00, 0x31, 0x52, 0x30,
  0x1d, 0x28, 0x6b, 0x03, 0x00, 0x31, 0x51, 0x30
};

bool hasPrinted = false; // To track if the print action has been performed

void setup() {
  mySerial.begin(9600);
  Serial.begin(9600);  
  pinMode(buttonPin, INPUT_PULLUP); // Set button pin as input with pull-up resistor
}

void loop() {
  // Check if the button is pressed (LOW when pressed)
  if (digitalRead(buttonPin) == LOW && !hasPrinted) {
    mySerial.println();
    mySerial.print("---------------------------------"); 
    mySerial.println();
    
    // Send print barcode instruction to printer
    mySerial.write(bar_code, sizeof(bar_code));     
    delay(5);
    
    mySerial.println();
    // Send print QR code instruction to printer
    mySerial.write(QRx, sizeof(QRx));        
    mySerial.println();
    mySerial.print("---------------------------------"); 
    
    hasPrinted = true; // Set the flag to true to prevent further prints
    
    // Optional: add a short delay to avoid bouncing
    delay(500); // Adjust this delay if needed
  }
  
  // Reset the hasPrinted flag if the button is released
  if (digitalRead(buttonPin) == HIGH) {
    hasPrinted = false;
  }
}

However, when we pressed 1, the printer outputted the number 1 instead of the intended text. Uncertain why this was happening, we sought help from David Rios during office hours, and with his guidance, we managed to resolve the issue to some extent, even successfully printing a QR code.

Printing errors with our first test
Challenges with Image Printing

The real challenge arose when we attempted to print an image. We were saving the sketches from p5.js as PNG files, but printing a PNG through the Arduino setup with the thermal printer proved to be far more complicated than expected. Despite several attempts and different iterations of the code, we reached a dead end on this front.

Trouble with the printer
Pivoting and Adapting

Given the challenges we faced, we decided to pivot our approach. We focused instead on refining the button functionality within our p5.js sketch based on info from this Github page. We successfully implemented two key features:

  1. A button in the p5.js sketch that, when pressed, saved the generated sketch as a PNG.

  2. A second button that triggered the printer to output something when pressed.

Here are some snippets of the p5.js code for saving the sketch and sending it to the printer:

<script src="libraries/p5.js"></script>

    <script src="libraries/ml5.min.js"></script>
     <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/addons/p5.dom.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/p5.serialserver@latest/lib/p5.serialport.js"></script>

    <link rel="stylesheet" type="text/css" href="style.css">
    <meta charset="utf-8" />
serial = new p5.SerialPort();
  serial.open("/dev/cu.usbmodem141301"); // Replace with your Arduino's serial port name

  // Listen for serial port events
  serial.on("open", onSerialOpen);
  serial.on("data", onSerialData);
function onSerialOpen() {
  console.log("Serial Port Opened");
}

// When data comes back from the Arduino
function onSerialData() {
  let data = serial.readLine();
  console.log(data);
  if (data == "SAVE_IMAGE") {
    console.log("Received:", data);
    saveImage();
    // saveAndPrintImage();
  }
}

function saveImage() {
  // Save the image when the button press arrives over serial
  saveCanvas("eeg_visualization", "png");
  console.log("Image saved");

  // serial.write('printing image\n');
  let tmpImage = get();
  tmpImage.loadPixels();

  let imageData = [];

  for (let y = 0; y < tmpImage.height; y++) {
    let lineData = [];
    for (let x = 0; x < tmpImage.width; x++) {
      let index = (x + y * tmpImage.width) * 4;
      let r = tmpImage.pixels[index];
      let g = tmpImage.pixels[index + 1];
      let b = tmpImage.pixels[index + 2];
      let avg = (r + g + b) / 3;
      lineData.push(avg > 128 ? 0 : 1); // threshold to black and white
      // serial.write(avg > 128 ? 0 : 1);
    }
    imageData.push(lineData);
  }

  // Send the binary data to the Arduino
  serial.write(imageData);
  serial.write("\n");
  serial.write("image completed\n");
  console.log("Save Image signal sent");
}

Although the thermal printer wasn't able to print the actual PNG image, these buttons automated the process of capturing and printing, maintaining some of the interactivity we initially envisioned.
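
Looking back, one likely culprit is that ESC/POS-style thermal printers generally expect raster rows packed eight pixels per byte (most significant bit first), not one 0/1 value per pixel. A sketch of that packing on the p5.js side, which we did not get working in time, might look like this:

// Possible fix we never finished: pack each row of 0/1 values into bytes
// (8 pixels per byte, MSB first) before sending it over serial.
function packRowToBytes(lineData) {
  const bytes = [];
  for (let i = 0; i < lineData.length; i += 8) {
    let b = 0;
    for (let bit = 0; bit < 8; bit++) {
      if (lineData[i + bit]) b |= 0x80 >> bit; // 1 = black dot
    }
    bytes.push(b);
  }
  return bytes; // send row by row, e.g. serial.write(packRowToBytes(lineData));
}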

Final Solution

As a workaround to our printing challenges, we opted to use a larger printer for the final output. This allowed us to print the captured sketches accurately. While this change made the setup more imposing and resulted in A4-sized prints instead of small receipts, it allowed us to move forward with the project given our time constraints.

Since we could not get the Arduino to decode the image data, we changed approach and used the Mac's automatic printing instead, with the printer connected to the computer over USB. When a person presses the button, p5.js downloads the picture into a watched "picture" folder, and the computer's automatic printing action then sends the new file to the printer.

Printing with a large format colour printer

The button presses still automated the process of capturing the sketch in p5.js, saving it as a PNG, and sending it to the larger printer for the final printout. This iteration helped us refine the interaction process and provided a functional, albeit adjusted, version of our original idea.
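
Here is a minimal sketch of the p5.js side of that hand-off, assuming an Automator Folder Action is watching the download folder and printing whatever lands in it; the function name and timestamping are ours, not part of Jason's code:

// Minimal sketch: give every capture a unique name so the Folder Action
// watching the download folder prints each new file exactly once.
function saveForAutoPrint() {
  const stamp = nf(hour(), 2) + nf(minute(), 2) + nf(second(), 2); // e.g. "143207"
  saveCanvas("eeg_visualization_" + stamp, "png"); // the browser downloads the PNG
}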

While our final setup deviated from our initial vision of a small, mysterious box with a thermal printer, it still maintained the core concept of creating tangible artifacts from participants' mental states. The larger prints, while less portable, offered greater detail and clarity in the visualization, potentially enhancing the reflective experience for participants.

This iterative process taught us valuable lessons about hardware integration, the importance of flexibility in design, and how to adapt our vision to technical constraints while still preserving the essence of our concept. It also highlighted the importance of considering alternative solutions when facing technical challenges, as our shift to using the computer's built-in printing capabilities ultimately allowed us to realize our project's core functionality.

The Final Experience

The culmination of our project is best demonstrated through video documentation of the entire process.

Jason's final sketch

This video walkthrough showcases:

  1. The quiet, meditative environment we created

  2. A participant putting on the Muse 2 headset

  3. Interactions with the p5.js sketch mapping EEG values in real-time

  4. The final printing process

Each session lasted 3-5 minutes. Participant feedback often centered around how to interpret the data, highlighting a need for clearer guidance in future iterations.

Below are readings from our testing with Bethany, Julia & Callie -

Three additional users' final sketches

Learning & Takeaways

Ethical Considerations

As we look to expand this project, it's crucial to consider the ethical implications of using brainwave data for interactive experiences. Privacy concerns, data security, and the potential for misuse of such intimate information must be carefully addressed. There's also the question of how prolonged use of such systems might affect a person's self-perception or mental state. As we continue to develop these technologies, we must prioritize user well-being and informed consent.

Looking Ahead

This project marks the beginning of our exploration into how EEG data and mental states can serve as inputs for creative and interactive experiences. One of the key insights we gained was the distinction between brainwave diagnostic data—such as alpha, beta, delta, theta, and gamma waves—and the abstraction of these waves into states of mind that people can control. By mapping these states (like calm or focus) to interactive systems, we realized that thinking itself can become a form of input.

This opens up fascinating possibilities. With machine learning models, we could train neural data patterns to respond to specific thoughts or repeated mental actions. Imagine training your brain to repeatedly think "forward," using that mental action to control movement in a video game. The brain becomes not just a source of diagnostic data, but an active, trainable input mechanism.
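
As a rough sketch of what that training loop could look like, here is a hedged example using ml5's neural network (the same library we used for state classification), assuming the global `eeg` object from the main sketch holds the latest band averages. The inputs, labels, and function names are illustrative, and the callback signature follows the ml5 0.x API.

// Hedged sketch: train on repeated "forward" thoughts vs. rest, then use live
// classifications as a game input. Names and data shapes are illustrative.
const brain = ml5.neuralNetwork({ inputs: 5, outputs: 2, task: "classification" });

function addTrainingSample(bands, label) {
  // bands = { delta, theta, alpha, beta, gamma }; label = "forward" or "rest"
  brain.addData([bands.delta, bands.theta, bands.alpha, bands.beta, bands.gamma], { label });
}

function trainAndSteer() {
  brain.normalizeData();
  brain.train({ epochs: 60 }, () => {
    // once trained, classify live readings and steer the game
    brain.classify([eeg.delta, eeg.theta, eeg.alpha, eeg.beta, eeg.gamma], (err, results) => {
      if (!err && results[0].label === "forward") moveForwardInGame(); // hypothetical game hook
    });
  });
}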

One idea we’re eager to pursue is a cell simulation game where one cell’s behavior is dictated by the game, while the other is controlled by the user’s brain activity. When the user focuses, the cell behaves as expected, creating a simple yet powerful connection between the user's mental state and the game.

A cell simulation concept experiment

Looking ahead, we see a multitude of applications for EEG data in both artistic and functional realms. Beyond games, this technology could be used to investigate and reflect on emotions, where brainwaves are transformed into physical or digital artifacts. For example, by turning someone’s emotional readings into a tangible object—like a 3D-printed sculpture—we can give people something real to hold, representing their internal thoughts and feelings. This idea of making the intangible tangible is a powerful way to bring people closer to understanding their mental and emotional states.

Overall, this process has been both challenging and deeply rewarding. We’ve learned a great deal about working with EEG data, understanding brainwave states, and integrating these concepts into creative technology. Looking forward, we are excited to continue working with EEG as a tool for introspection, interaction, and expression, exploring more ways to turn thought into action, and brain activity into art.

References
  1. Jason Snell - for giving us his initial code, then helping us edit it (4 times). We really should list him as a contributor to this project.
  2. Nikolai
  3. ChatGPT
  4. Claude
  5. Cursor
  6. An Kai Cheng
  7. Haoren Zhong

©2019-2025 SURYA NARREDDI.