Luma launches Luma Agents, powered by Unified Intelligence, for creative work
March 5, 2026

PALO ALTO, CA - Luma (www.lumalabs.ai), the builder of unified multimodal AI systems, has launched Luma Agents, a new class of AI collaborators capable of executing end-to-end creative work. Designed for agencies, marketing teams, studios and enterprise organizations, Luma Agents maintain full context from initial brief to final delivery, coordinating tools, models and iterations within one unified system.
 
In recent years, many AI systems have been assembled by chaining together separate models for language, vision, video and reasoning, stitching outputs together through orchestration layers. These systems can fragment context and require complex workflows to produce reliable creative results. 
 
Luma Agents replace fragmented, multi-model workflows with coordinated execution built on unified reasoning. Instead of switching between disconnected tools and rebuilding context at every step, teams work alongside agents that execute projects end-to-end while maintaining shared context across text, image, video and audio.

Luma Agents are built on Unified Intelligence, a new model architecture designed to move beyond the industry’s prevailing approach of assembling intelligence in pieces. Unified Intelligence trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.

The first model built on this architecture is Uni-1. Uni-1 is a decoder-only autoregressive transformer operating over a shared token space that interleaves language and image tokens, allowing both modalities to function as first-class inputs and outputs in the same sequence. This design enables the model to reason in language while imagining and rendering in pixels within the same forward pass. Rather than generating outputs step-by-step across disconnected systems, Uni-1 can plan, visualize and produce creative artifacts as part of a single coherent reasoning process. The result is a foundation where thinking and creation are tightly coupled.
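The idea of a shared token space can be sketched in a few lines. The following is an illustrative toy only, not Uni-1's actual configuration: the vocabulary sizes, the begin/end-of-image markers and the token ids are all assumptions made for the example. It shows how text ids and image-codebook ids can occupy disjoint ranges of one vocabulary, so a single autoregressive sequence can interleave both modalities.

```python
# Illustrative sketch only: how a shared token space can interleave text and
# image tokens in one autoregressive sequence. The vocabulary split, marker
# tokens, and sizes below are assumptions, not Uni-1's actual configuration.

TEXT_VOCAB_SIZE = 50_000                 # hypothetical text sub-vocabulary
IMAGE_VOCAB_SIZE = 8_192                 # hypothetical visual codebook size
BOI = TEXT_VOCAB_SIZE + IMAGE_VOCAB_SIZE # assumed begin-of-image marker
EOI = BOI + 1                            # assumed end-of-image marker

def image_token(code: int) -> int:
    """Map a visual codebook index into the shared token space."""
    assert 0 <= code < IMAGE_VOCAB_SIZE
    return TEXT_VOCAB_SIZE + code        # image ids live above the text range

def is_text_token(token_id: int) -> bool:
    return token_id < TEXT_VOCAB_SIZE

# One sequence mixing modalities: the model consumes text ids and image ids
# as a single stream, so reasoning (text) and rendering (image) share context.
sequence = [17, 942, 3]                  # e.g. "draw a cat" as made-up text ids
sequence += [BOI] + [image_token(c) for c in (5, 80, 311)] + [EOI]

print([is_text_token(t) for t in sequence])
# [True, True, True, False, False, False, False, False]
```

Because both ranges live in one vocabulary, the decoder's next-token distribution can place probability on either modality at any position, which is what lets reasoning in language and rendering in pixels happen in the same forward pass.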

Luma Agents can coordinate across leading AI models, including Ray3.14, Veo 3, Sora 2, Kling 2.6, Nano Banana Pro, Seedream, GPT Image 1.5, and ElevenLabs. They automatically select and route each task to the best model or capability for that step and maintain persistent context across assets, collaborators and creative iterations. Together, these capabilities allow Luma Agents to function as collaborative AI creatives capable of executing end-to-end creative work.
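Capability-based routing with a persistent project context can be illustrated with a minimal sketch. The routing table, data structures and function names below are hypothetical; only the model names are taken from the announcement, and the real system's selection logic is not public.

```python
# Hypothetical sketch of capability-based routing: an agent picks a model per
# step while one shared project context persists across steps. The routing
# table and API here are illustrative assumptions, not Luma's actual design.

from dataclasses import dataclass, field

ROUTES = {                        # assumed task-type -> model mapping
    "video": "Ray3.14",
    "image": "GPT Image 1.5",
    "audio": "ElevenLabs",
}

@dataclass
class ProjectContext:
    brief: str
    assets: list = field(default_factory=list)  # persists across steps

def route(task_type: str) -> str:
    """Select the model registered for this capability."""
    return ROUTES[task_type]

def run_step(ctx: ProjectContext, task_type: str, prompt: str) -> str:
    model = route(task_type)
    asset = f"{model}:{prompt}"   # stand-in for a real generation call
    ctx.assets.append(asset)      # the same context flows into the next step
    return asset

ctx = ProjectContext(brief="lipstick campaign")
run_step(ctx, "image", "hero shot")
run_step(ctx, "video", "15s spot")
print(ctx.assets)  # ['GPT Image 1.5:hero shot', 'Ray3.14:15s spot']
```

The point of the sketch is the separation of concerns: routing decides which model handles a step, while the context object carries the brief and accumulated assets forward, so no step starts from scratch.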

Luma recently showed Post a video demonstrating Agents’ capabilities. In the example, the tool was given a single reference image: a tube of lipstick. Through prompting and refinement, Agents then created a multi-platform beauty campaign in which four different women modeled the makeup at locations around the world. In addition to video elements, the tool created print assets, digital signage and social media content in a range of aspect ratios, as well as variations of the product itself. Luma COO Caroline Ingeborn suggested that a similar campaign might cost a million dollars with a traditional production, and could now be executed for only tens of thousands.

Publicis Groupe Middle East and Serviceplan Group are deploying Luma Agents across strategy, creative development and production workflows to increase throughput while maintaining brand consistency across markets.

“Luma is now part of our broader ‘house of AI’ ecosystem and integrated directly into our creative workflows,” says Alexander Schill, global CCO at Serviceplan Group. “It allows our teams across more than 20 countries to collaborate more smoothly and develop great work faster. For our clients, that means high-quality creative output delivered with greater speed and efficiency – without compromising craft.”