Abstract

Decoding of brain activity with machine learning has enabled the reconstruction of thoughts, memories and dreams. In this study, we designed a methodology for reconstructing visual stimuli (digits) from human brain activity recorded during passive visual viewing. Using the MindBigData EEG dataset, we preprocessed the signals to remove noise, muscular artifacts and eye blinks. Using the Common Average Reference (CAR) method and results from past studies, we reduced the available electrodes from 14 to 4, keeping only those containing discriminative features associated with the visual stimulus. A convolutional neural network (CNN) was then trained to encode the signals and classify the images, achieving 92% classification performance post-CAR. Three variations of an auxiliary conditional generative adversarial network (AC-GAN) were evaluated for decoding the latent feature vector with its class embedding and generating black-and-white images of digits. Our objective was to create an image similar to the presented stimulus through the previously trained GANs. An average reconstruction score of 65% was achieved by the AC-GAN without a modulation layer, 60% by the AC-GAN with a modulation layer and multiplication, and 63% by the AC-GAN with modulation and concatenation. Rapid advances in generative modeling promise further improvements in reconstruction performance.
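The Common Average Reference step mentioned in the abstract can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code; the function name `common_average_reference` and the 14-channel/256-sample shape are assumptions for demonstration (14 matches the electrode count stated above).

```python
import numpy as np

def common_average_reference(eeg):
    """Apply CAR: subtract the mean across all channels at each time point.

    eeg: array of shape (n_channels, n_samples)
    Returns the re-referenced signals with the same shape.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

# Synthetic example: 14 channels, 256 samples (random data, for illustration only)
rng = np.random.default_rng(0)
signals = rng.normal(size=(14, 256))
car = common_average_reference(signals)

# After CAR, the per-sample mean across channels is numerically zero
print(np.allclose(car.mean(axis=0), 0.0))  # True
```

CAR removes signal components common to all electrodes (e.g., reference drift), which helps isolate spatially localized activity before channel selection.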

Publication Date

2025-02-20

Event

BIOSTEC: 18th International Joint Conference on Biomedical Engineering Systems and Technologies; HEALTHINF: 18th International Conference on Health Informatics

Publication Title

Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 1: BIOSIGNALS: BIOSTEC 2025

ISBN

978-989-758-731-3

ISSN

2184-4305

Acceptance Date

2025-01-01

Deposit Date

2026-02-05

First Page

868

Last Page

877
