The breakthroughs just keep coming and coming. I’ve been busy the past couple of weeks neck-deep in video models, comparing and contrasting and testing, to get to the point where I can create a little proof-of-concept short film. I would be delighted to share some of what I’ve learned, comparing Kling with Runway, Minimax, Luma, Hedra, Veo2, SVD, AnimateDiff, Hunyuan, FramePack, LTXV, and - of course - Wan, which is truly amazing.
I could go into the various Wan workflows, models, optimizations, and the daily breakthroughs and accelerations. It’s been quite exciting, and there’s been so much to learn and develop expertise in. And now, what do you know, all of that is quite possibly obsolete as of today, because Google just dropped Veo3 (and a boatload of other goodies), and Veo3 appears to be a whole other animal.
I ‘hear’ Veo3 additionally creates the SOUND for each clip. And its prompt adherence seems to be off the charts.
As a researcher and tinkerer and artist and occasional AI craftsman, this is par for the course. At once miraculous and devastating. Discovering by tweet that my expertise has become obsolete. The cycle is tighter than ever. Wan2.1 just hit a couple of months ago. Now we’ll be demanding Wan3.
And I’m sure it won’t take long to get here. Hard not to wonder why bother testing and researching and bleeding to make any of these workflows work when something massively better, faster, and easier is always just a week away.
For now, the damn thing is only available in the US, and you need to pay $250/month for access. That too will change fast.
Will it replace stock footage? Will we be able to generate wide shots/action/stunt shots of our cast? Will I be able to get alternate performances, or change their appearance?
Will Veo4 just be generative Netflix?
We’ll know soon enough. Join me in browsing the amazing Veo3 clips on X: #veo