Partnered with Adobe research so we’re never going to get the actual model.
And if we do, it will be bundled in an AI suite with some license-checking software that crawls your whole computer to a crawl for no reason.
Crawls to a crawl is a very common phrase, I don’t know why people are saying it’s not.
Because it isn’t? Slows to a crawl is correct. “Crawls to a crawl” means nothing.
More mediocre images for everyone!
While I think the realism of some models is fantastic and the flexibility of others is great, it's starting to feel like we're reaching a plateau on quality. Most of the papers I've seen posted lately are about speed, or some alternate way of doing what ControlNet or inpainting can already do.
Well, when it’s fast enough you can do it in real time. How about making old games look like they looked to you as a child?
There’s way more to a game’s look than textures though. Arguably ray tracing will have a greater impact than textures. Not to mention, for retro games, you could just generate the textures beforehand, no need to do it in real time.
When the output of something is the average of its inputs, it will naturally be mediocre. It will always look like the output of a committee, by the nature of how it is formed.
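The "average of the inputs" intuition can be sketched with a toy example (hypothetical numbers, and of course a simplification of what generative models actually do): take a pixel-wise mean of several very different patches and the distinctive extremes wash out.

```python
# Toy illustration of the "averaging yields mediocrity" claim.
# Three "styles" as small grayscale patches (values 0-255);
# their pixel-wise mean loses the contrast that made each distinctive.

patches = [
    [0, 0, 255, 255],      # hard-edged, high contrast
    [255, 255, 0, 0],      # the opposite edge
    [128, 128, 128, 128],  # flat mid-gray
]

# Average each pixel position across the three patches
mean_patch = [sum(col) / len(patches) for col in zip(*patches)]
print(mean_patch)  # every pixel collapses toward mid-gray (~127.7)
```

Whether diffusion models really behave like this averaging is exactly what the reply below disputes; this is just the arithmetic behind the committee analogy.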
Certain artists stand out because they are different from everyone else, and that is why they are celebrated. M.C. Escher has a certain style that when run through AI looks like a skilled high school student doing their best impression of M.C. Escher.
Now as a tool to inspire, AI is pretty good at creating mashups of multiple things really fast. Those could be used by an actual artist to create something engaging. Most AI reminds me of photoshop battles.
Who says the output is an average?
I agree that narrow models and LoRAs trained on a specific style can never be as good as the original, but I also think that's the lamest, least creative way to generate.
Much more fun to use general-purpose models and to crack the settings to generate exactly what you want, the way you want.
That’s maybe because we’ve reached the limits of what the current architecture of models can achieve on the current architecture of GPUs.
To create significantly better models without a fundamentally new approach, you have to increase the model size. And if all the accelerators accessible to you only offer, say, 24 GB, you can't grow infinitely. At least not within a reasonable timeframe.
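The back-of-the-envelope math here is simple: weight memory is roughly parameter count times bytes per parameter (a rough sketch with assumed numbers; activations, optimizer state, etc. add more on top):

```python
# Rough VRAM estimate for holding model weights alone.
# Assumed numbers for illustration; real usage is higher.

def weight_vram_gb(params_billions: float, bytes_per_param: int) -> float:
    """GB needed just to store the weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 12B-parameter model in fp16 (2 bytes/param) already fills a 24 GB card
# before any activations are computed:
print(weight_vram_gb(12, 2))  # 24.0
```

So on consumer cards, scaling parameters quickly runs out of room unless you drop precision or shard the model.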
Will increasing the model size actually help? Right now we're dealing with LLMs trained on essentially the entire internet; it's difficult to grow that much further.
A better way to process that data would be a much more substantive achievement, so that when particular details are needed, it's not just random chance whether the model gets them right.
Please link original article and paper when posting
Article: https://news.mit.edu/2024/ai-generates-high-quality-images-30-times-faster-single-step-0321
These kinds of performance improvements have really cool potential for real-time image/texture generation in games. I've already seen some games do this, but they usually rely on generating the images online.
ASCII and low-graphics roguelikes have a lot of generation freedom, so they can create very unique monsters/items/etc. However, a lot of this flexibility is lost as you move to more polished games that require models and art assets for everything. This is also one of the many reasons old-style games are still popular: they often offer more variety and randomization than newer titles. I think generated art assets could be a cool way to bridge the gap, though, and let more modern games have crazy unique monsters/items with matching visuals.
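A quick frame-budget check shows why even a 30x speedup matters more for load-time asset generation than for per-frame use (all latencies below are hypothetical, just to make the arithmetic concrete):

```python
# Frame-budget sanity check for real-time generation (illustrative numbers).

target_fps = 60
frame_budget_ms = 1000 / target_fps        # ~16.7 ms available per frame

multi_step_ms = 2700   # hypothetical multi-step diffusion latency
speedup = 30           # the claimed single-step speedup
single_step_ms = multi_step_ms / speedup   # 90 ms per image

print(f"budget: {frame_budget_ms:.1f} ms, single-step: {single_step_ms:.0f} ms")
# 90 ms still blows a 16.7 ms frame budget, so per-frame generation is out;
# but generating unique assets asynchronously or at load time becomes practical.
```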
Pfft, I can do that, just run them on a computer that’s 30 times faster!