The tool sequences designers actually used, by discipline. See how the stack shifts as AI takes more of the execution.
Graphics designers in Q2 2026 batch-generated visual directions in Midjourney or Firefly — sometimes 50–100 images in a session — then compressed that into a tight 3–5 direction shortlist for client review. The craft value was in the curation and iteration: knowing which generation to extend, which to kill, and how to steer the model toward brand-consistent output rather than generic aesthetics. Motion work (Runway) was increasingly expected even in still-image briefs, as clients wanted social-ready loops alongside print deliverables.
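For teams that script part of this triage, here is a minimal sketch of what a pre-curation pass might look like, assuming generations are exported locally as PNGs; the brand palette values, directory layout, and scoring heuristic are all illustrative, not any tool's actual API:

```python
# Curation-triage sketch: rank a batch of exported generations by proximity
# to a brand palette before the human shortlisting pass.
# Assumes generations were saved locally as PNGs; palette RGBs are hypothetical.
from pathlib import Path
from PIL import Image

BRAND_PALETTE = [(20, 33, 61), (252, 163, 17), (229, 229, 229)]  # illustrative brand colours

def palette_distance(img_path: Path, sample_size: int = 64) -> float:
    """Mean distance from each sampled pixel to its nearest brand colour."""
    img = Image.open(img_path).convert("RGB").resize((sample_size, sample_size))
    total = 0.0
    for px in img.getdata():
        total += min(
            sum((a - b) ** 2 for a, b in zip(px, brand)) ** 0.5
            for brand in BRAND_PALETTE
        )
    return total / (sample_size * sample_size)

def shortlist(batch_dir: str, keep: int = 5) -> list[Path]:
    """Return the `keep` images closest to the brand palette for human review."""
    ranked = sorted(Path(batch_dir).glob("*.png"), key=palette_distance)
    return ranked[:keep]

if __name__ == "__main__":
    for path in shortlist("exports/q2_campaign", keep=5):
        print(path.name)
```

The score only pre-ranks the batch; the extend-or-kill judgment described above stays with the designer.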
44 synthesized months in the data layer. Stage breakdowns (Starter / Scaler / Titan) are available for 2026 only — earlier months show under the All segment but won't appear under stage filters until the design-context pipeline runs further back.
Graphics and generative-image designers ran prompt-to-asset pipelines through Midjourney v7 and Firefly 4 for hero imagery and illustration, then used Krea AI for real-time style refinement and Runway Gen-3 for short motion loops. The production ratio inverted: a single designer could output a week's worth of raw visuals in a morning, meaning curation, retouching for brand consistency, and motion post-production became the majority of billable hours. Buyers increasingly valued art-direction documentation — written prompt frameworks and style references — as a deliverable alongside the assets themselves.
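In practice, art-direction documentation can be a structured, versionable prompt framework rather than loose notes. A minimal sketch, assuming a model-agnostic handoff format; the field names and the `--avoid` flag syntax are hypothetical, not any vendor's schema:

```python
# Sketch of "art-direction documentation" as a structured artifact: a reusable
# prompt framework that can be versioned and handed off alongside the assets.
from dataclasses import dataclass, field

@dataclass
class PromptFramework:
    campaign: str
    subject: str                                          # what the image depicts
    style_refs: list[str] = field(default_factory=list)   # moodboard or asset IDs
    palette: str = ""                                     # e.g. "deep navy, amber accent"
    negative: list[str] = field(default_factory=list)     # things to steer away from

    def render(self) -> str:
        """Compose a model-agnostic prompt string from the framework fields."""
        parts = [self.subject, self.palette]
        parts += [f"in the style of {ref}" for ref in self.style_refs]
        prompt = ", ".join(p for p in parts if p)
        if self.negative:
            prompt += " --avoid " + ", ".join(self.negative)  # flag syntax is illustrative
        return prompt

hero = PromptFramework(
    campaign="spring-launch",
    subject="product hero shot on brushed steel",
    style_refs=["moodboard-v3"],
    palette="deep navy with amber accent",
    negative=["stock-photo gloss", "busy backgrounds"],
)
print(hero.render())
```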
Illustrators and generative artists in Q1 were running multi-model pipelines: Midjourney or Krea for initial image generation, Photoshop (now with conversational AI) for compositing and correction, and Runway for motion passes on still assets. The NVIDIA-Firefly partnership announcement signaled that commercially safe, enterprise-grade generative imagery was coming; for working graphics creatives, the short-term shift was that clients expected more options, faster. The edge was art direction: knowing which outputs were worth extending and why.
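One way to keep a multi-model pipeline like this legible is to treat each pass as a swappable stage over a working asset path. A minimal structural sketch; the stage functions are placeholders, not real APIs:

```python
# Sketch of a multi-model pass structure: each stage is a plain function over a
# working asset path, so generation, compositing, and motion steps can be
# swapped per brief. All stage bodies below are stand-ins.
from typing import Callable

Stage = Callable[[str], str]  # takes an asset reference, returns the next one

def generate(brief: str) -> str:
    # placeholder: would call an image model and save the selected pick locally
    return f"work/{brief}_base.png"

def composite(path: str) -> str:
    # placeholder: assisted Photoshop compositing/correction pass
    return path.replace("_base", "_composited")

def motion_pass(path: str) -> str:
    # placeholder: would submit the still to a video model for a short loop
    return path.replace(".png", ".mp4")

def run_pipeline(brief: str, stages: list[Stage]) -> str:
    asset = brief
    for stage in stages:
        asset = stage(asset)
    return asset

print(run_pipeline("spring_hero", [generate, composite, motion_pass]))
```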
Graphics and motion designers ran a generate-refine-animate pipeline in Q1 2026: Midjourney or Firefly for foundational image generation and brand moodboarding, Photoshop Generative Fill for client-asset cleanup and expansion, then Runway or Pika for motion layers—animating stills, adding transitions, or compositing short-form video for social. A typical week included 2–3 client briefs resolved as multi-format asset sets (static + motion), with the Midjourney→Pika pipeline being the dominant fast-turnaround route and Runway reserved for higher-fidelity production outputs. The February model rush meant designers ran bake-offs on the same brief across providers before committing a week's direction.
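A bake-off reduces to running one brief through each candidate provider and comparing results on equal footing. A minimal harness sketch, with stand-in generator callables where each vendor's own client library would go:

```python
# Sketch of a "bake-off" harness: run one brief through several provider stubs
# and tabulate wall-clock time so a direction can be picked before committing
# the week. Provider callables are stand-ins, not real endpoints.
import time
from typing import Callable

def bake_off(brief: str, providers: dict[str, Callable[[str], str]]) -> None:
    for name, generate in providers.items():
        start = time.perf_counter()
        output = generate(brief)
        elapsed = time.perf_counter() - start
        print(f"{name:>12}: {elapsed:6.2f}s -> {output}")

# Stand-in generators; in practice these would wrap each vendor's own client.
providers = {
    "provider_a": lambda brief: f"out/a/{brief}.png",
    "provider_b": lambda brief: f"out/b/{brief}.png",
}
bake_off("retro-poster-v1", providers)
```

Timing is only one axis; in practice the same harness would also log cost per generation and keep the outputs side by side for the human comparison.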
Graphics and generative-image designers in Q1 2026 were recalibrating around the open-weight moment — with models like GLM-Image closing the gap on complex in-image text rendering, the workflow moved toward self-hosted or fine-tuned pipelines for brand-sensitive work rather than defaulting to a vendor API. A typical week involved generating a batch of campaign imagery in Midjourney or Firefly, running legibility and brand-compliance checks in Photoshop, and increasingly using Runway for short motion treatments to make static assets feel alive. The new skill premium was prompt engineering for typography-heavy compositions, given that legible in-image text had been generative AI's most visible weakness.
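The legibility check itself is automatable: OCR the render and compare it against the intended copy before the human compliance pass. A sketch using Pillow and pytesseract (which requires a local Tesseract install); the 0.8 pass threshold is an assumption, not a standard:

```python
# Sketch of an automated legibility check for in-image text: OCR the render and
# score it against the intended copy before brand-compliance review.
import difflib

import pytesseract
from PIL import Image

def legibility_score(image_path: str, intended_copy: str) -> float:
    """Word-level similarity between OCR output and intended copy (1.0 = perfect read)."""
    ocr_text = pytesseract.image_to_string(Image.open(image_path))
    return difflib.SequenceMatcher(
        None, ocr_text.lower().split(), intended_copy.lower().split()
    ).ratio()

if __name__ == "__main__":
    score = legibility_score("renders/headline_v4.png", "Spring Launch Event")
    print(f"legibility {score:.2f}", "PASS" if score >= 0.8 else "REVIEW")  # threshold is a guess
```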