Lucid Dream
Experiment 5
Voice, Space, and Dream Control



In this experiment, we explored how voice commands and body rhythms could manipulate digital environments, simulating the surreal experience of lucid dreaming. Inspired by films like Inception and Ant-Man, we used speech recognition to alter scanned 3D spaces, dynamically distorting familiar environments like studios and bedrooms. 


Tools: Scanniverse, TouchDesigner, p5.js


p5.js Sketch: Experiment5_Audio_Recognition (Experience It Now!)






I was mainly responsible for implementing the speech recognition system in p5.js and for ensuring seamless video playback and responsive interaction between voice commands and visual transformations. To achieve this, I adapted and refined an initial speech recognition script provided by my supervisor, Andreas, modifying it to fit our project’s requirements.
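

Below is a minimal sketch of how the recognition layer can be set up in p5.js, assuming the p5.speech library (p5.SpeechRec, a wrapper around the browser’s Web Speech API). The handler name handleTotemWord() is an illustrative placeholder, not the script we actually used.

// Recognition layer: listen continuously and pass final phrases to the visual layer.
// Assumes p5.speech.js is loaded alongside p5.js; handleTotemWord() is a placeholder.

let speechRec;

function setup() {
  createCanvas(windowWidth, windowHeight);

  // Continuous recognition, final results only (no interim guesses).
  speechRec = new p5.SpeechRec('en-US', gotSpeech);
  speechRec.continuous = true;
  speechRec.interimResults = false;
  speechRec.start();
}

function gotSpeech() {
  if (!speechRec.resultValue) return;               // nothing usable was recognized
  const phrase = speechRec.resultString.toLowerCase();
  handleTotemWord(phrase);                          // hand the phrase to the visual layer
}

// Placeholder handler; the totem-word mapping further down fills this in.
function handleTotemWord(phrase) {
  console.log('heard:', phrase);
}

Most browsers prompt for microphone access when recognition first starts, and the Web Speech API is best supported in Chrome, so the sketch needs that permission before any totem word can be heard.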




Interactive Dreamscapes: Sound, Control, and Visual Transformation


By integrating subtle body rhythms—pulse, breath, and blink—we captured the subconscious movements that occur in dreams, driving immersive, surreal transformations. 

This experiment highlighted the deep connection between sound, control, and visual manipulation, offering insight into how abstract data can shape digital landscapes in real time.






Altering Familiar Environments 
/ Speech Recognition


We scanned real-world spaces such as bedrooms and design studios using Scanniverse and Polycam, then processed them in TouchDesigner to introduce dreamlike distortions based on voice commands:





/ Bedroom

Room · Scatter · Swell




/ Studio

Split · Distort · Surround · Studio · Noise




Each transformation was designed to reflect the fluid, shifting nature of lucid dreams, where control and perception blend seamlessly.

  • Scatter – Fragments the space, dispersing elements into floating clusters.
  • Swell – Expands the environment as if it’s breathing.
  • Split – Divides the room into multiple shifting sections.
  • Distort – Warps structures, twisting them into surreal forms.
  • Surround – Encloses the viewer, enhancing immersion.
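
As a rough illustration of how these totem words could drive the visuals on the p5.js side, the sketch below maps each word to a pre-rendered TouchDesigner clip and switches playback whenever a recognized phrase contains that word. The filenames and the handleTotemWord() hook (from the recognition sketch above) are placeholders, not our exact implementation.

// Visual layer: one pre-rendered clip per totem word; filenames are placeholders.

const totemWords = ['scatter', 'swell', 'split', 'distort', 'surround'];
let clips = {};
let activeClip = null;

function setup() {
  createCanvas(windowWidth, windowHeight);
  for (const word of totemWords) {
    clips[word] = createVideo('assets/' + word + '.mp4');  // exported TouchDesigner render
    clips[word].hide();                                    // draw onto the canvas, not the DOM
  }
  // (the p5.SpeechRec setup from the recognition sketch above would also go here)
}

// Called by the recognition layer with each recognized phrase.
function handleTotemWord(phrase) {
  for (const word of totemWords) {
    if (phrase.includes(word)) {
      if (activeClip) activeClip.stop();  // stop the previous transformation
      activeClip = clips[word];
      activeClip.loop();                  // loop the new clip while it is active
      break;
    }
  }
}

function draw() {
  background(0);
  if (activeClip) image(activeClip, 0, 0, width, height);
}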








Visual Sound Score

Expressing data as a narrative


I designed a Totem Phrases Chart to show which action each voice command triggers. The score maps each totem word to its visual change and serves as a guide for interaction.
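
As a rough idea of what the chart encodes, the same mapping can also be written as plain data. The structure below is illustrative only; the words and descriptions come from the transformation list above.

// An illustrative, plain-data version of the Totem Phrases Chart:
// each scanned space lists its totem words and the visual change each one triggers.
const totemScore = {
  bedroom: {
    scatter:  'fragments the space, dispersing elements into floating clusters',
    swell:    'expands the environment as if it is breathing',
  },
  studio: {
    split:    'divides the room into multiple shifting sections',
    distort:  'warps structures, twisting them into surreal forms',
    surround: 'encloses the viewer, enhancing immersion',
  },
};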





© 2025 Choi Yerin. ALL RIGHTS RESERVED.