Changelog
Latest product updates
Remotion Video Composition, Seedance 2.0 & ElevenLabs Voice Picker
The biggest single-day update in Wireflow history. We shipped Remotion video composition with distributed parallel rendering, added Seedance 2.0 (the highest-quality cinematic video model on the market), expanded the model catalog with the Flux LoRA family + Seedream V5, and built an ElevenLabs voice picker with the full 1000+ voice library. To stress-test it all, we built a complete short anime film inside one workflow — chaining Flux LoRA → Nano Banana Pro → Seedream V5 → Kling 3.0 → ElevenLabs → Remotion in a single canvas. <a href="/flow/cmnrwkj100009xgb6lyh4pjar" class="text-blue-400 hover:underline">Duplicate the anime workflow here</a> and remix it.
📦 New
- Remotion Video Composition Node — drop a "Compose Video" node, wire in any combination of generated images, videos, and audio, and render a finished branded MP4. Built on `@remotion/lambda` with distributed rendering across ~30 parallel Lambda workers. A 20-second 1080p video renders in ~80s. Live progress bar, render state persists across reloads, and a sidebar tab shows your full render history.
- Seedance 2.0 — ByteDance's new flagship video model, currently the highest-quality cinematic generator on FAL. Both endpoints supported: `image-to-video` ($0.30/s) with director-level camera controls (dolly, pan, tracking shots), and `reference-to-video` ($0.18/s) with multi-image tagging (`@image1`, `@image2`) for character consistency.
- Flux LoRA Family — five new image models added: Flux LoRA (text-to-image with custom LoRAs from CivitAI/HuggingFace), Flux LoRA i2i, Flux Kontext LoRA i2i (multi-ref editing), and Flux 2 LoRA Edit (multi-ref + LoRA combined). Use these to lock in custom styles like "90s anime," brand looks, or trained character LoRAs.
- Seedream V5 Lite Edit — reference-based image editing with strong style adherence. The best alternative to Nano Banana when you need to combine character references with style direction.
- ElevenLabs Voice Picker — searchable voice browser with audio preview. Type to search across the full 1000+ voice library (not just hardcoded presets), hit play to preview, click to select. Replaces the old generic dropdown.
- Render History — every Remotion render is now saved to your account. New "Renders" tab in the editor sidebar shows resolution, duration, timestamp, and a direct download link for every video you've ever made.
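The ~30-worker parallelism behind the Remotion node comes from splitting a render into contiguous frame ranges, one per Lambda. A minimal sketch of that split, assuming 30fps and Remotion-Lambda-style `framesPerLambda` chunking (the function below is illustrative, not Wireflow's actual code — `@remotion/lambda` does this internally):

```typescript
// Split a render into contiguous [firstFrame, lastFrame] ranges, one per worker.
function splitIntoChunks(
  durationInFrames: number,
  framesPerLambda: number,
): Array<[number, number]> {
  const chunks: Array<[number, number]> = [];
  for (let start = 0; start < durationInFrames; start += framesPerLambda) {
    // Clamp the final chunk so it never runs past the end of the video.
    chunks.push([start, Math.min(start + framesPerLambda, durationInFrames) - 1]);
  }
  return chunks;
}

// A 20-second video at 30fps = 600 frames; 20 frames per Lambda → 30 workers.
const chunks = splitIntoChunks(600, 20);
```

Each worker renders its range in parallel and the results are stitched into the final MP4, which is why a 20-second render finishes in roughly the time one chunk takes plus stitching overhead.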
🧠 Improved
- Auto-Duration Credit Settlement — for variable-duration models like Seedance, we now pre-charge the maximum and refund the difference once FAL returns the actual duration, saving 30-50% of credits on shorter generations.
- Distributed Rendering Architecture — switched from single-Lambda serial rendering to `@remotion/lambda` parallel chunks. Solves the previous "render times out at 5 min" issue and unlocks longer videos.
- Frame-Accurate Video Component — adopted `@remotion/media`'s new `Video` component (2026 best practice). Frame extraction on the server, buffered playback in the browser preview. No more black first frames in rendered output.
- Render State Persistence — start a render, close the tab, reload the page, reopen the editor — the progress bar picks up exactly where you left off. Same FAL-style pattern used elsewhere in the platform.
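The pre-charge/refund flow behind auto-duration settlement can be sketched in a few lines. All names, the per-model cap, and the integer "credit-cents" unit are assumptions for illustration; Wireflow's real billing code is not shown here:

```typescript
// Illustrative pre-charge/refund settlement for variable-duration models.
// Prices are in integer credit-cents to avoid floating-point drift.
const PRICE_PER_SECOND = 30; // assumed rate, e.g. Seedance image-to-video
const MAX_DURATION_S = 10;   // assumed per-model duration cap

// Hold the worst-case cost up front so a job can never overdraw mid-render.
function preCharge(balance: number): { balance: number; held: number } {
  const held = MAX_DURATION_S * PRICE_PER_SECOND;
  return { balance: balance - held, held };
}

// Once the provider reports the real duration, refund the unused portion.
function settle(balance: number, held: number, actualSeconds: number): number {
  const cost = Math.min(actualSeconds, MAX_DURATION_S) * PRICE_PER_SECOND;
  return balance + (held - cost);
}

const afterHold = preCharge(1000);                                 // holds 300
const finalBalance = settle(afterHold.balance, afterHold.held, 5); // refunds 150
```

The key property is that the hold equals the refund plus the actual cost exactly, so the user's balance is always conservative during the job and correct after it.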
🔄 Changed
- Fixed double-credit settlement race between webhook and poll routes — users were occasionally getting refunds twice on the same job.
- Remotion node aspect ratio now matches the actual scene graph dimensions instead of being hardcoded to 9:16 (was letterboxing 16:9 videos).
- Killed autoplay-with-audio on canvas video previews — they now show the first frame and require a click to play.
- Pricing fixes: Kling 3.0 Standard pricing was stale from February (now correct), Seedance/PixVerse use proper "seconds" unit, weekly cron now does safe price-only updates instead of destroying the registry.
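The webhook/poll double-refund fix boils down to making settlement idempotent: whichever route arrives first wins, and the second call is a no-op. A toy in-memory sketch (names hypothetical; in production the check-and-set must be atomic, e.g. a conditional database update):

```typescript
type Job = { id: string; settled: boolean; refund: number };

const jobs = new Map<string, Job>();

// Settle exactly once per job, no matter how many callers race here.
function settleOnce(jobId: string, refund: number): number {
  const job = jobs.get(jobId) ?? { id: jobId, settled: false, refund: 0 };
  if (job.settled) return 0; // already settled by the other route — no-op
  job.settled = true;
  job.refund = refund;
  jobs.set(jobId, job);
  return refund;
}

// Webhook and poll route both try to settle the same job:
const firstRefund = settleOnce("job_1", 150);  // credits actually refunded
const secondRefund = settleOnce("job_1", 150); // no-op, returns 0
```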
Compositor Text Layers, Workflow API & Video Intelligence
A massive creative + developer release. The Compositor now supports text layers with full typography controls — turning Wireflow into a design tool. The new Workflow API lets you call any workflow programmatically, and we shipped a Video Intelligence Analyzer that breaks down any TikTok frame-by-frame with AI vision.
📦 New
- Compositor Text Layers — add text directly onto your compositions with full control over font family, size, weight, color, alignment, letter spacing, line height, and opacity. Supports Google Fonts, drag positioning, and pixel-perfect preview. Similar to Ideogram's text layers but built for workflow automation.
- Workflow API (`/run` endpoint) — turn any visual workflow into a REST API. `GET /run` describes the inputs, `POST /run` executes with just the values you want to override, then poll for results. Build visually, call programmatically.
- Video Intelligence Analyzer — paste any TikTok URL and get a complete scene-by-scene breakdown. Extracts frames via scene detection (with a smart fixed-interval fallback), transcribes the audio, then runs Claude Vision on every frame for visual composition, content strategy, and recreation instructions.
- Magic Link Sign-In — email-based passwordless authentication alongside Google OAuth.
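A client for the `/run` endpoint follows a describe → override → poll shape. The helpers below are a hypothetical sketch — the field names (`status`, the defaults/overrides merge) are assumptions, so check `GET /run` on your own workflow for the real schema:

```typescript
// Merge the defaults reported by GET /run with only the values you override.
function buildRunPayload(
  defaults: Record<string, unknown>,
  overrides: Record<string, unknown>,
): Record<string, unknown> {
  return { ...defaults, ...overrides };
}

// Poll an injected status getter until the run settles or we give up.
// Assumed terminal statuses: "done" | "error".
async function pollRun(
  getStatus: () => Promise<string>,
  intervalMs = 2000,
  maxAttempts = 150,
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await getStatus();
    if (status === "done" || status === "error") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("workflow run timed out");
}

const payload = buildRunPayload(
  { prompt: "default prompt", seed: 1 },
  { prompt: "a red fox in watercolor" },
);
```

Injecting `getStatus` rather than hard-coding `fetch` keeps the polling logic testable and lets you add auth headers or retries in one place.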
🧠 Improved
- Execution Engine Refactor — the execute route shrank from 3,792 to 2,432 lines. Six modules extracted (node execution, iterator helpers, FAL polling, input resolution, workflow updates). A single completion authority via the `__executeInProgress` flag eliminates poll/resumer race conditions.
- Iterator Deferred Resolution — workflows with dynamic frame counts (frame extractor → iterator) now correctly discover item counts at runtime. Scene detection auto-retries with a fixed interval when fewer than 5 scenes are detected.
- Published App Frame Thumbnails — frame extractor output is now included in execution results, so published apps show correct frame thumbnails instead of stale editor data.
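The scene-detection fallback mentioned above is simple to state: if detection finds too few cut points, sample frames at a fixed interval instead. The 5-scene threshold comes from this changelog; the function name and interval are illustrative:

```typescript
const MIN_SCENES = 5; // below this, scene detection is considered unreliable

// Return timestamps (seconds) at which to extract frames from the video.
function frameTimestamps(
  detectedScenes: number[],
  durationS: number,
  intervalS: number,
): number[] {
  if (detectedScenes.length >= MIN_SCENES) return detectedScenes;
  // Fallback: one frame every `intervalS` seconds across the whole video.
  const stamps: number[] = [];
  for (let t = 0; t < durationS; t += intervalS) stamps.push(t);
  return stamps;
}

// Only 2 scenes detected in a 30s clip → fall back to every 3 seconds.
const stamps = frameTimestamps([0, 8.2], 30, 3);
```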
🔄 Changed
- Six new integration tests covering the published-app iterator pipeline — bare-key lookup, `data.result` extraction, scene-detection fallback, and upstream result sync.
AI Chat Agent, Streaming Generator & Content Safety
Meet the AI Chat Agent — a conversational assistant that lives right on your canvas. Describe what you want to build and watch it assemble nodes, wire edges, and configure models in real time. This release also adds a streaming workflow generator, content moderation, and collapsible sidebar.
📦 New
- AI Chat Agent — an in-canvas assistant that can build, modify, and explain your workflows through natural conversation. Ask it to add nodes, change models, or troubleshoot — it has full context of your canvas and tools.
- Streaming Workflow Generator — describe a workflow in plain English and watch nodes appear on the canvas one by one as the AI generates them. Includes real-time thinking feedback so you can see the reasoning behind each decision.
- Content Moderation — automatic NSFW filtering runs before AI execution, catching inappropriate prompts before they reach the model. Image uploads in the editor are also scanned.
- Sidebar Collapse — toggle the node sidebar with `Cmd+B`. Hover the collapsed bar to reveal edge handles. State persists across sessions via `localStorage`.
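The `localStorage` persistence behind the sidebar toggle is a small pattern worth showing. The sketch below injects a key-value interface so it runs outside a browser; the storage key and stored shape are assumptions, not Wireflow's actual values:

```typescript
// Minimal interface matching the part of window.localStorage we use.
interface KV {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const KEY = "sidebarCollapsed"; // illustrative key name

function saveCollapsed(kv: KV, collapsed: boolean): void {
  kv.setItem(KEY, JSON.stringify(collapsed));
}

// Default to expanded when nothing has been stored yet.
function loadCollapsed(kv: KV): boolean {
  const raw = kv.getItem(KEY);
  return raw === null ? false : (JSON.parse(raw) as boolean);
}

// In-memory stand-in for window.localStorage:
const mem = new Map<string, string>();
const kv: KV = {
  getItem: (k) => mem.get(k) ?? null,
  setItem: (k, v) => {
    mem.set(k, v);
  },
};
saveCollapsed(kv, true);
```

In the browser you would pass `window.localStorage` directly, since it satisfies the same `getItem`/`setItem` shape.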
🧠 Improved
- Smarter Edge Handles — handles now validate against hydrated port IDs and enforce proper in-/out- prefixes, reducing broken connections.
- Rate Limiting — bumped to 120 requests/hour after observing power users hitting the previous 60/hour cap.
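The 120 requests/hour cap can be enforced with a sliding-window counter. A toy in-memory version for illustration only — a production limiter would typically live in a shared store such as Redis, and these names are assumptions:

```typescript
const LIMIT = 120;                  // requests per window (the new cap)
const WINDOW_MS = 60 * 60 * 1000;   // one hour

const hits = new Map<string, number[]>(); // userId → request timestamps (ms)

// Allow the request if fewer than LIMIT requests landed in the trailing hour.
function allow(userId: string, nowMs: number): boolean {
  const recent = (hits.get(userId) ?? []).filter((t) => nowMs - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    hits.set(userId, recent); // keep the pruned log, reject the request
    return false;
  }
  recent.push(nowMs);
  hits.set(userId, recent);
  return true;
}
```

Unlike a fixed hourly bucket, the sliding window never lets a burst straddle a reset boundary, which matches how per-hour caps are usually expected to behave.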