What will you create?
Generate visuals, clips, and edits in one workspace. Choose a tool to begin.
Collaborate
Review the same assets live with pinned notes, approvals, and project rooms
Review
Move generated assets through a clean approval queue with one large stage and fast state changes
Canvas
Lay out a shared generation board with blocks, outputs, notes, and review steps
Timeline
Stack recent shots into a rough cut with video, still, and derived audio lanes
ComfyUI
Run node-based local workflows, queue jobs, and map custom generation graphs
Deliberate
Compare directions, weigh tradeoffs, and lock the strongest creative plan before generating
Image
Generate images from text prompts
Video
Create cinematic AI video sequences
Actor Capture
Assemble a reusable identity pack for a performer before you generate shots
Agent
Clarify the creative direction, shape the prompt, and stage the next Studio 2 action.
Intent to prompt
Turn the request, references, and target format into a ready prompt and production plan.
Brief
VideoPlan
StarterConversation
LiveReview
Triaging generated assets should feel separate from generation. Use this queue to approve, reject, and send strong work back into Home or Collaborate.
No assets ready
How this room works
Keep Home focused on creating. Use Review as the queue for asset decisions, then push the chosen prompt back into Home or hand the asset to Collaborate for notes.
0 available
Collaborate
Shared project rooms for live review, pinned notes, approvals, and simultaneous asset feedback.
Review the same asset at the same time, leave anchored notes, and move work through approval.
This room combines the strongest current collaboration patterns into one flow: Figma-style pinned comments on the asset, Frame.io-style review and approval lanes, Canva-style shared commenting, and Freepik-style project spaces for team work.
Review asset
Review Flow
Concurrent Review
Room Features
Best Practice
Notes & Reviews
0 Open
Timeline
Assemble recent generated shots into a rough sequence with picture and derived audio lanes.
Sequence 01
Recent Home outputs land here first so the edit pass can start without rebuilding a bin.
Sequence Controls
This first pass uses your latest generated assets as source material. Remove clips from the current cut, then reset the sequence when you want to rebuild it from history.
Recent Assets
Open any source asset directly from the bin.
ComfyUI
Build and run node-based local workflows, manage queue state, and route custom generation graphs into the rest of the studio.
Keep the node graph, queue, and outputs in one production surface.
Use ComfyUI when the work needs custom graphs, explicit control over samplers and models, or reusable workflow recipes. Treat it as the technical generation bench that can still feed outputs back into Studio 2, Canvas, and review.
Workflow Setup
Nodes
Queue & Runtime
Execution
Once the graph, inputs, and output targets are defined, the local run is ready to queue.
Workflow Preview
Graph
Next Actions
Pipeline
Deliberate
Compare competing directions, weigh tradeoffs, and lock a clearer brief before the team spends generation time.
Stress-test creative options before production begins.
Use Deliberate when the team has multiple viable directions and needs to reason through them. Treat it as the decision layer that compares options, captures pros and cons, and turns ambiguity into a clearer generation plan.
Directions
Compare
Decision Criteria
Reasoning
Once the strongest direction is chosen, the final prompt strategy can move directly into Studio 2.
Deliberation Output
Brief
Next Actions
Pipeline
Image Generation
Create and edit images with text prompts and reference photos.
Drop images here or click to browse
PNG, JPG, WebP — up to 14 reference images
Your generated image will appear here
Enter a prompt and click generate
Video Generation
Produce cinematic AI video sequences from text, images, or keyframes.
Drop image or click to browse
PNG, JPG, WebP — max 20 MB
Drop image or click to browse
PNG, JPG, WebP — max 20 MB
Drop images or click to browse
PNG, JPG, WebP — characters, objects, or style references
Your generated video will appear here
Enter a prompt and click generate
Actor Capture
Build a reusable performer pack with identity stills, motion references, and continuity notes before sending the actor into Studio or Video.
Lock the performer before you generate the scene.
Use Actor Capture to collect the clean views that keep one person consistent across image, video, and edit workflows. Treat it as the source-of-truth pack for face, wardrobe, silhouette, and motion.
Reference Uploads
8-12 assets
Session Notes
Continuity
Once this pack is complete, you can use it as the clean input set for actor-consistent generations.
Capture Preview
Actor pack
Next Actions
Pipeline
- Upload three clean portrait angles before building any scene variants.
- Add one full-body frame with visible footwear and hand shape.
- Attach a short motion reference if the actor will appear in video.
- Carry the same pack into Studio, Video, or Image Edit for identity consistency.
Prop Capture
Build a clean prop pack with hero stills, material references, scale notes, and handling coverage before the object goes into image, video, or edit workflows.
Lock the prop before it enters the shot list.
Use Prop Capture to define the exact object language you want to preserve: silhouette, wear, material finish, scale, and handling. This becomes the source-of-truth pack for product, set dressing, and hero-object continuity.
Reference Uploads
6-10 assets
Session Notes
Continuity
Once the pack is complete, the prop can be reused as a stable visual ingredient across generations.
Capture Preview
Prop pack
Next Actions
Pipeline
Environment Capture
Build a reusable location pack with spatial coverage, lighting references, material cues, and atmosphere notes before generating scenes inside that world.
Define the world before you stage the action.
Use Environment Capture to hold the structure of a place: architecture, light direction, set dressing, weather, and mood. This becomes the source-of-truth pack for scene consistency across stills, edits, and video shots.
Reference Uploads
8-14 assets
Session Notes
Worldbuilding
Once complete, the environment pack can anchor new angles and generated action inside the same world.
Capture Preview
Environment pack
Next Actions
Pipeline
Lighting Revision
Run portrait relighting directly inside Studio 2. This tab keeps the source frame, prompt, credits, and result in one place instead of sending you to the old relight page.
Source Frame And Lighting Prompt
Upload the approved still, describe the new lighting direction, and render the revised frame through the integrated relight engine.
Upload the still that should keep the same composition and performance while the lighting changes.
Credits And Relit Result
The current balance and the finished relight stay visible here without leaving Studio 2.
Exact Camera Re-Angle
Generate a controlled alternate camera position from one approved source image directly inside Studio 2. This uses the Studio 2 Nano Banana edit path instead of sending you to the old page.
Source, Viewpoint, And Framing
Upload one source image, adjust the virtual camera controls, then render a matching alternate angle while preserving the scene.
Upload the frame that should keep identity, texture, colors, and scene continuity.
Viewpoint Preview And Result
Studio 2 generates a guide image for the requested camera change, then runs the edit through the same backend image pipeline.
The guide image tells the edit model which viewpoint to adopt.
Re-format
Turn one approved asset into a full delivery set across aspect ratios, crops, and platform-safe layouts without losing the original composition intent.
Rebuild the same shot for every destination.
Use Re-format to take one image or video and generate controlled versions for landscape, portrait, square, stories, feeds, ads, and cutdowns. Treat it as the finishing layer that adapts the asset without drifting the core scene.
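The reframing described above boils down to one geometric rule: keep the largest centered region of the source that matches the destination aspect ratio. A minimal sketch of that rule, assuming a simple center crop (the function name and centering strategy are illustrative, not the product's actual implementation):

```python
def center_crop(width, height, target_ratio):
    """Largest centered crop of a width x height frame matching target_ratio (w/h)."""
    source_ratio = width / height
    if source_ratio > target_ratio:
        # Source is wider than the target: trim the sides.
        crop_w = round(height * target_ratio)
        crop_h = height
    else:
        # Source is taller than the target: trim top and bottom.
        crop_w = width
        crop_h = round(width / target_ratio)
    x = (width - crop_w) // 2
    y = (height - crop_h) // 2
    return x, y, crop_w, crop_h

# A 1920x1080 landscape master reframed for a 9:16 story slot:
print(center_crop(1920, 1080, 9 / 16))  # → (656, 0, 608, 1080)
```

In practice the crop anchor would also respect the safety rules mentioned below (subject position, platform-safe margins) rather than always centering.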
Input Asset
Source
Output Rules
Formats
Once the crop and safety rules are locked, the asset can be reformatted into a full delivery pack.
Format Preview
Variants
Next Actions
Pipeline
Resolution Finishing
Run image and video upscaling directly inside Studio 2. This panel now owns the upscale flow instead of sending you to a separate page.
Choose The Source And Target
Studio 2 uploads the source asset into shared storage, queues the upscale job, and saves the result back into history.
Upload the approved still that should be upscaled into a higher-resolution master.
Upscaled Result
Queue status and the final upscale output render here without leaving Studio 2.
Stitch
Combine multiple approved assets into one continuous sequence or panoramic composite while preserving continuity, motion direction, and scene logic.
Join separate pieces into one coherent output.
Use Stitch to merge shots, frames, plates, or environment segments into a single deliverable. Treat it as the composition layer that bridges seams, keeps subjects aligned, and smooths spatial or temporal transitions.
Input Sequence
2-8 assets
Blend Rules
Continuity
Once overlaps and alignment rules are defined, the asset set is ready for a seamless combined render.
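Bridging a seam between two overlapping clips is usually a weighted blend across the overlap region. A minimal sketch of a linear crossfade ramp, assuming frame-level blending (the function and weighting scheme are illustrative, not the product's actual blend rules):

```python
def crossfade_weights(overlap):
    """Per-frame (outgoing, incoming) weights across an overlap of `overlap` frames.

    The outgoing clip fades from full weight to zero while the incoming
    clip ramps up, so each blended frame's weights always sum to 1.
    """
    if overlap < 2:
        return [(0.0, 1.0)] * overlap
    steps = overlap - 1
    return [(1 - i / steps, i / steps) for i in range(overlap)]

for out_w, in_w in crossfade_weights(5):
    print(f"outgoing {out_w:.2f} / incoming {in_w:.2f}")
```

Spatial stitches (panoramas) follow the same idea with the ramp applied across overlapping pixel columns instead of frames.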
Stitch Preview
Composite
Next Actions
Pipeline
Compare
Review multiple versions side by side so teams can evaluate prompt changes, treatments, edit passes, and delivery choices without losing context.
Put competing outputs next to each other and decide fast.
Use Compare to line up image variants, video passes, or edit treatments in one review surface. This is where a team can judge framing, style, motion, and prompt quality before pushing a version forward.
Comparison Set
2-6 variants
Review Rules
Team sync
Once the candidate set is assembled, Compare becomes the fastest place to choose a winner and move it forward.
Side-by-Side Stage
A/B
Compare Workflow
4 steps
Sound Effects
Generate sound effects through FAL's ElevenLabs Sound Effects V2 model and save finished audio back into Studio 2 history and projects.
Describe The Sound
Write the sound effect as production direction: source, texture, movement, space, and intensity.
Generated Audio
Completed audio can be played, downloaded, filed into the active project, and synced to account history.
Talking Performance
Run standard and pro lip sync directly inside Studio 2. The old page is no longer the intended workflow surface for this tool.
Video, Audio, And Sync Mode
Upload the face clip and the audio driver here, then queue the job directly from inside Studio 2.
Upload the source clip with the face that should be driven by the audio.
Upload the voice, dialogue, or final mix that should drive the lips.
Synced Video
Progress and the final synced performance stay inside Studio 2.
Your creation will appear here
Activity
0 items
Your creation will appear here
Studio 2 is ready
Generate or upload media to populate this board. Drag cards into projects on the left once assets start landing here.
Archive this project?
Switching editing modes will discard changes that haven't been generated yet!
Profile & Billing
Studio 2 uses the unified account stack for auth, subscription, credits, payments, and history. This panel is the control surface for that system.
Guest
Sign in with your Studio 2 account to restore billing, credits, and history.
Plans And Credits
Checkout stays on the integrated billing stack. After Stripe returns, Studio 2 confirms the purchase here and refreshes the live account snapshot.
Pro
Business
Provider Usage
Estimated FAL/provider consumption for your account, filtered by time window, model, and project folder.
Shell Settings
These settings stay inside Studio 2 while your authenticated session remains active.
Restore Your Session
The login, signup, me, and logout endpoints all run through the same-origin Studio 2 adapter surface.
Payment History
Loaded directly from the live payment ledger.
Motion Control
Drive image-to-video animation with the integrated motion-control backend. This panel uses the same upload storage, queue tracking, and history persistence as the rest of Studio 2.
Source Image And Motion Video
Animate requires both an image source and a motion reference clip.
Upload the still frame that should inherit the motion.
Upload the clip that provides the motion pattern.
Result And Logs
Queue state and final output are pulled through the same-origin Studio 2 adapter.
Video Generation
Run Kling O1 directly from Studio 2. This panel exposes the supported Kling input modes instead of sending video work through the old stack.
Choose A Kling Input Mode
Different modes require different uploaded media. The panel uses the same shared upload storage and Kling O1 endpoint as the rest of the site.
Required for first / last frame mode.
Optional second frame for FLFV mode.
Required for video edit and video reference modes.
Upload one or more guide images for the reference-based Kling modes.
Generated Video
Kling O1 returns a completed result directly through the integrated backend.
FAL Model Runner
Search active FAL models, fetch the live schema through the integrated backend, upload inputs through the Studio 2 media adapter, and run jobs without leaving Studio 2.
Browse
Models are loaded from the live FAL adapter.
Inputs
Input widgets are built from the live schema returned by the FAL adapter.
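Building widgets from a live schema typically means mapping each declared property to a UI control kind. A minimal sketch, assuming the adapter returns a JSON-Schema-like dict (the widget names and the `format: uri` upload convention are illustrative assumptions, not the adapter's actual contract):

```python
def widgets_from_schema(schema):
    """Map each property in a JSON-Schema-like dict to a simple widget kind."""
    kind_by_type = {
        "string": "text",
        "integer": "number",
        "number": "number",
        "boolean": "checkbox",
        "array": "list",
    }
    widgets = {}
    for name, prop in schema.get("properties", {}).items():
        if "enum" in prop:
            widgets[name] = "select"      # fixed choices render as a dropdown
        elif prop.get("format") == "uri":
            widgets[name] = "upload"      # media URLs come from the upload adapter
        else:
            widgets[name] = kind_by_type.get(prop.get("type"), "json")
    return widgets

demo = {"properties": {
    "prompt": {"type": "string"},
    "image_url": {"type": "string", "format": "uri"},
    "steps": {"type": "integer"},
    "scheduler": {"type": "string", "enum": ["euler", "ddim"]},
}}
print(widgets_from_schema(demo))
# → {'prompt': 'text', 'image_url': 'upload', 'steps': 'number', 'scheduler': 'select'}
```

Unknown or nested types fall back to a raw-JSON widget, mirroring the raw-JSON fallback the result panel uses for unsupported payload shapes.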
Result
Completed job output is rendered here, with a raw JSON fallback for unsupported payload shapes.
Diffuse
Diffuse is now part of the Studio 2 navigation and ready for a dedicated diffusion-focused workspace.
Diffuse Surface
This tab has been added directly into Studio 2 instead of living outside the product flow.
Gallery
All your creations in one place.
No generations yet
Start creating to build your gallery
Image Edit
Edit and transform images with AI-powered tools.
Drop images here or click to browse
PNG, JPG, WebP
Edited result will appear here
History
All your image and video generations in one place.
No generations yet
Create something to see it here