These experiments explore how fragmented human memories can be translated through AI. Using ComfyUI and prompt weighting, images and videos were generated with subtle variations, reflecting the fluid and imperfect nature of memory.
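The "subtle variations" come from jittering prompt weights between runs. As a minimal sketch (the term list, jitter range, and helper name are illustrative assumptions, not the project's actual code), ComfyUI-style `(term:weight)` prompts can be varied like this:

```python
import random

def weight_prompt(terms, jitter=0.15, seed=None):
    """Build a ComfyUI-style weighted prompt, e.g. "(summer rain:1.08)".

    Each term's weight is nudged around 1.0, so repeated runs of the
    same fragment produce slightly different emphasis -- the drift that
    stands in for memory's imperfection.
    """
    rng = random.Random(seed)  # seedable for reproducible variants
    parts = []
    for term in terms:
        w = 1.0 + rng.uniform(-jitter, jitter)
        parts.append(f"({term}:{w:.2f})")
    return ", ".join(parts)

# Three variants of one memory fragment, each weighted a little differently
base = ["faded photograph", "summer rain", "grandmother's kitchen"]
for seed in range(3):
    print(weight_prompt(base, seed=seed))
```

Feeding each variant to the same text-to-image workflow yields near-identical images that drift apart in small, unpredictable ways.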
An exploration of building generative workflows in ComfyUI. By producing outcomes through text-to-image and image-to-video processes, this experiment reflects on how both human and machine memories shift, blur, and change over time. While machine outputs are built from datasets, they still reveal unexpected variations, hinting at a more complex and less rigid form of memory.
An interface was built within ComfyUI by treating it as an illustration tool, using node boxes and default colors to create visual guides. Canvases were organized like slides with sections such as Introduction, Input, Process, and Output, and navigation was simplified with arrow keys. A consistent visual language made the system intuitive for viewers without prior knowledge of the program.
An AI-generated short film built from conversations with friends and important figures in my life. Starting with dreams and early memories, the visuals merge fragments through the custom-built Collective Memory Engine. By embracing gaps, distortions, and flaws, the project focuses on human connection rather than polished AI outcomes.