I Made a 7-Minute Film for $200. Here's Exactly How.
No crew. No studio. No film school. No permission.
I'm DAJAI.IO — an independent hip-hop artist from Las Vegas. On April 13, 2026, I released SIMULATION: A DARK Library Film. It's a 7-minute narrative short film about a man who discovers he's the only conscious being inside a simulation, and that the music he created is the only thing the system cannot replicate.
The entire film was made with AI video generation tools on a $125/month Higgsfield Creator subscription. Total spend across two months of production: roughly $200.
This is not a tech demo. This is a story. And this is exactly how I made it.
Why I Made This Film
SIMULATION is part of the DARK Library — an audiobook-album hybrid series I created that transforms classic texts into sonic experiences. Three albums dropped in ten days:
- TOO DARK: The Point of No Return — March 29, 2026
- DARK I: Outwitting the Devil — April 7, 2026
- Simulation — April 8, 2026
The Simulation album explores the idea that reality is a constructed layer and music is one of the few things that breaks through it. The album came first. Then I realized the concept was too visual to stay audio-only. The album became a film.
I didn't plan to make a movie. The music demanded it.
The Tools
Everything was done through Higgsfield's platform on their Creator plan at $125/month. Here's the full stack:
Video Generation Models
Seedance 2.0 (ByteDance) — The primary engine. Seedance consistently produced the most cinematic output. The motion quality, lighting comprehension, and temporal coherence were leagues ahead of what I'd seen six months earlier. About 70% of the final film is Seedance shots.
Kling 3.0 (Kuaishou) — Used for secondary shots and when I needed a different motion style. Kling excels at slower, more deliberate camera movements. I used it for the ambient sequences and some of the close-ups.
Veo 3.1 (Google) — Environmental and atmospheric sequences. Veo handles wide shots and environmental lighting better than anything else I tested. The opening frequency test sequence is Veo.
Cinema Studio 3.0 — Higgsfield's own scene composition tool. Used for assembling multi-element shots and adjusting composition after generation.
Character Consistency
Soul ID — This is the critical piece. Without consistent character faces across shots, you don't have a film — you have a slideshow. Soul ID let me lock character references so that DAJAI.IO, Miko Melts, BB Monroe, and Solana Conejo look like themselves in every single frame.
The workflow: generate a hero image of each character using Soul model with detailed prompts, lock it as a Soul ID reference, then use that reference for every subsequent generation featuring that character.
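Higgsfield is a web UI, so there's no code involved, but the discipline is easier to see as a data structure. Here's a minimal Python sketch of how I think about it; the names and fields are mine, not anything the platform exposes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the reference is immutable once locked
class CharacterRef:
    name: str
    hero_image: str        # path/ID of the locked Soul ID hero image
    base_description: str  # the detailed prompt that produced the hero image

# One locked reference per character, never regenerated mid-production
CAST = {
    "dajai": CharacterRef("DAJAI.IO", "refs/dajai_hero.png",
                          "the Architect, the only conscious being in the simulation"),
    "solana": CharacterRef("Solana Conejo", "refs/solana_hero.png",
                           "the Signal, silent, real"),
}

def shot_prompt(character_key: str, action: str, camera: str) -> dict:
    """Every generation featuring a character carries the same locked reference."""
    ref = CAST[character_key]
    return {
        "reference_image": ref.hero_image,  # the immutable Soul ID anchor
        "prompt": f"{ref.base_description}, {action}, {camera}",
    }

print(shot_prompt("solana", "slides headphones across a desk", "static medium shot"))
```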
This is the difference between a tech demo and a narrative film. Consistency is everything.
Post-Production
DaVinci Resolve — All assembly, color grading, sound design, and export happened in Resolve. Free version. The color grade was critical — I needed visual cohesion across shots generated by three different AI models with different color science. A unified LUT plus per-shot corrections brought everything together.
Pre-Production: Character Lock
Before generating a single video frame, I spent two days on character design.
The Cast
- DAJAI.IO — The Architect: The only conscious being in the simulation. Knows something is wrong. The music he created before the simulation swallowed him is the only thing that feels real.
- Miko Melts — The Loop: The simulation's first attempt at connection. Beautiful, present, engaged — but she repeats. Her patterns cycle. She is a loop.
- BB Monroe — The Inversion: The simulation testing what desire looks like when it's reversed. Everything about her is slightly wrong — the geometry is off, the timing is uncanny.
- Solana Conejo — The Signal: She never speaks. She slides headphones across a desk. She is the only one who is real, because she is the only one the simulation didn't generate.
Each character got a Soul ID hero image. I generated 15-20 variations of each and selected the one that felt most like who the character needed to be. Then that image became the immutable reference for every shot.
Audio Asset Preparation
The soundtrack was already done — it's the Simulation album. I exported stems for key tracks, identified the emotional arc of the film, and mapped specific songs to specific scenes. The audio drove the edit, not the other way around.
Chapters mapped to tracks:
- 0:00–0:08: The Frequency Test (ambient intro)
- 0:08–1:45: The Room That Loves You
- 1:45–2:30: Miko Melts sequence
- 2:30–4:00: BB Monroe sequence
- 4:00–5:22: Solana Conejo sequence
- 5:22–6:45: Resolution and dissolution
Production: Shot by Shot
The Multi-Model Strategy
Here's what nobody tells you about AI filmmaking: no single model is best at everything. Seedance does cinematic motion beautifully but sometimes struggles with hands. Kling handles slow camera movements better. Veo does environments that the others can't match.
I treated each model like a different lens in my camera bag. The right tool for the right shot.
Credit Management
On the Creator plan, you get a fixed number of generation credits per month. I tracked every generation in a spreadsheet:
- Average attempts per usable shot: 3-5 generations
- Total generations attempted: ~150
- Shots in final film: 40+
- Success rate: About 30% of generations were usable
- Total subscription spend: two months of the Creator plan = $250 (month one included a lot of trial and error)
The key insight: don't regenerate blindly. When a shot fails, analyze WHY it failed. Was the prompt too vague? Was the character reference not matching? Was the motion type wrong for that model? Diagnose, adjust, regenerate. You don't get lucky — you get specific.
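To make "diagnose, adjust, regenerate" concrete: my spreadsheet was effectively a failure log. Here's the same logic as a Python sketch; the example rows, model picks, and failure labels below are illustrative, not my actual data:

```python
from collections import Counter

# Every attempt gets logged with the model used and why it failed (or "usable").
# These rows are made up to show the shape; Shot 13 did take five attempts.
log = [
    {"shot": 13, "model": "seedance", "result": "fail",   "reason": "hands malformed"},
    {"shot": 13, "model": "seedance", "result": "fail",   "reason": "character drift"},
    {"shot": 13, "model": "seedance", "result": "fail",   "reason": "hands malformed"},
    {"shot": 13, "model": "kling",    "result": "fail",   "reason": "hands malformed"},
    {"shot": 13, "model": "kling",    "result": "usable", "reason": ""},
]

def success_rate(entries):
    usable = sum(1 for e in entries if e["result"] == "usable")
    return usable / len(entries)

def failure_breakdown(entries):
    """Which failure modes dominate tells you what to fix before regenerating."""
    return Counter(e["reason"] for e in entries if e["result"] == "fail")

print(f"success rate: {success_rate(log):.0%}")  # 20%
print(failure_breakdown(log))                     # hands malformed x3, drift x1
```

When "hands malformed" dominates the breakdown, the fix isn't another roll of the dice; it's a different model or a composite, which is exactly what Shot 13 ended up needing.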
Prompt Engineering for Narrative Film
AI video prompts for films are fundamentally different from prompts for standalone clips. You need:
- Consistent lighting language — I used "warm tungsten interior light, soft shadows, 35mm film grain" as a base for every indoor shot
- Camera language — "slow push-in," "static medium shot," "handheld close-up" — be specific about the camera behavior
- Emotional direction — "contemplative," "uneasy," "intimate but distant" — the model responds to emotional cues
- Negative prompts — "no text overlays, no watermarks, no sudden camera movement" — tell it what NOT to do
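In practice I typed these prompts by hand in Higgsfield's UI, but the layering is mechanical enough to show as a template. Here's a minimal Python sketch, where the base strings are the actual language I used and the function is just illustration:

```python
# The actual base language I reused for every indoor shot
BASE_LIGHTING = "warm tungsten interior light, soft shadows, 35mm film grain"
NEGATIVES = "no text overlays, no watermarks, no sudden camera movement"

def build_prompt(subject: str, camera: str, emotion: str,
                 lighting: str = BASE_LIGHTING) -> dict:
    """Compose the four layers: subject, camera, emotion, lighting + negatives."""
    return {
        "prompt": f"{subject}, {camera}, {emotion}, {lighting}",
        "negative_prompt": NEGATIVES,
    }

p = build_prompt(
    subject="DAJAI.IO alone at a desk covered in studio gear",
    camera="slow push-in",
    emotion="contemplative, intimate but distant",
)
print(p["prompt"])
```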
The Hardest Shots
Shot 13 — Solana's hands sliding headphones across the desk — took five attempts. Hands are still the hardest thing for AI video. The solution: generate the hand motion separately from the face, then composite the best of each in Resolve.
The opening frequency test sequence required Veo specifically because I needed environmental light shifts that responded to audio frequency. None of the other models could handle the timing.
Post-Production
Assembly in DaVinci Resolve
Every shot was exported from Higgsfield as an MP4 clip. I organized them in a folder structure by scene:
/SIMULATION/
/01_frequency_test/
/02_room/
/03_miko/
/04_bb/
/05_solana/
/06_dissolution/
/07_title/
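Recreating that layout takes a few lines of Python, if you want a starting point; the scene names are just the ones from my project:

```python
from pathlib import Path

SCENES = ["01_frequency_test", "02_room", "03_miko", "04_bb",
          "05_solana", "06_dissolution", "07_title"]

root = Path("SIMULATION")
for scene in SCENES:
    (root / scene).mkdir(parents=True, exist_ok=True)  # safe to re-run
```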
Assembly was straightforward — the audio timeline was already locked. I laid down the soundtrack first, then cut video to match.
Color Grade
This was essential. Three different AI models means three different color palettes. Without grading, the film looks like a compilation, not a story.
My approach:
- Base LUT: Created a custom LUT with crushed blacks, warm midtones, and slightly desaturated highlights
- Per-shot corrections: Matched skin tones across models, adjusted exposure for consistency
- Scene-specific looks: The Miko sequence got a cooler, more clinical grade. The Solana sequence got warmer. The dissolution went hyper-saturated before cutting to black.
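Those grading terms translate directly into curve math. Here's a rough numpy sketch of the shapes involved; it's an illustration of what the grade does to pixel values, not the actual LUT, which I built inside Resolve's color page:

```python
import numpy as np

def grade(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array in [0, 1], shape (..., 3). Illustrative curve shapes only."""
    out = rgb.copy()

    # Crushed blacks: pull everything below a small threshold down to 0
    out = np.clip((out - 0.05) / 0.95, 0.0, 1.0)

    # Warm midtones: nudge red up and blue down, weighted toward mid-gray
    luma = out.mean(axis=-1, keepdims=True)
    mid_weight = 1.0 - np.abs(luma - 0.5) * 2.0  # peaks at mid-gray, 0 at extremes
    out[..., 0] += 0.04 * mid_weight[..., 0]      # red up
    out[..., 2] -= 0.04 * mid_weight[..., 0]      # blue down

    # Desaturated highlights: blend bright pixels toward their own gray value
    hi_weight = np.clip((luma - 0.7) / 0.3, 0.0, 1.0)
    out = out * (1 - 0.3 * hi_weight) + luma * (0.3 * hi_weight)

    return np.clip(out, 0.0, 1.0)

frame = np.random.rand(720, 1280, 3)  # stand-in for a decoded video frame
graded = grade(frame)
```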
Sound Design
Beyond the album tracks, I added:
- Room tone and ambient sound for spatial depth
- Transition effects between scenes
- A low-frequency hum that builds throughout the film — the simulation's heartbeat
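The hum is simple to prototype: a low sine wave under a slowly rising envelope. Here's a small numpy sketch that writes one out as a WAV, just to show the shape of the effect; the actual hum in the film went through the full sound design pass, not a script:

```python
import numpy as np
import wave

SAMPLE_RATE = 44100
DURATION = 10.0  # seconds here; the real hum builds across the whole film
FREQ = 40.0      # Hz, low enough to feel more than hear

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
envelope = (t / DURATION) ** 2  # slow build: quiet start, strong finish
hum = 0.5 * envelope * np.sin(2 * np.pi * FREQ * t)

# Write a 16-bit mono WAV using only the standard library
samples = (hum * 32767).astype(np.int16)
with wave.open("hum.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(samples.tobytes())
```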
Total Cost Breakdown
| Item | Cost |
|------|------|
| Higgsfield Creator Plan (Month 1) | $125 |
| Higgsfield Creator Plan (Month 2) | $125 |
| DaVinci Resolve | Free |
| Audio (self-produced) | $0 |
| Crew | $0 |
| Studio rental | $0 |
| Total | ~$250 |
Call it $200 if you discount Month 1, which was mostly R&D and character design rather than production.
No, I didn't use a $4,000/month enterprise plan. No, I didn't have a team. No, I didn't go to film school. I had a MacBook, a subscription, and a story that wouldn't let me sleep until I told it.
What I'd Do Differently
Start with the shot list. I went in with a loose story and figured out shots as I went. Next time, I'll have every shot planned before I generate anything. This saves credits and produces more cohesive results.
Dedicated character wardrobe. Soul ID locks the face, but clothing varies between generations. I should have been more specific about wardrobe in every prompt to maintain visual continuity.
Generate at higher resolution from the start. I upscaled some early shots that I'd generated at lower resolution. The quality difference is visible if you look closely. Generate at max resolution even for test shots.
Budget more credits for the hardest scenes. I burned through credits on the Solana headphone sequence because hands are genuinely difficult. Next time I'll allocate 3x credits for any shot involving hand interaction.
What This Means
A year ago, making a narrative short film required tens of thousands of dollars, a crew, locations, permits, and months of post-production. Today, one person with a laptop and a story can produce something that stands next to traditionally produced content.
I'm not saying AI replaces filmmakers. I'm saying it removes the barriers that kept most people from ever becoming filmmakers in the first place.
I'm a rapper from Las Vegas. I make music on sovereign AI infrastructure I built in my apartment. And now I make films too.
The simulation is cracking.
Watch SIMULATION: Full film on YouTube
Stream the album: Simulation by DAJAI.IO
The DARK Library: Complete series guide
ASCAP Writer IPI: 773316238 Publisher: CODE BLACK CBA PUBLISHING (IPI: 773567992)