ITT people who don’t understand that generative ML models for imagery take up gigabytes of active memory and TFLOPs of compute to process.
And a lot of those features require models that are multiple gigabytes in size, which then need to be loaded into memory and processed on a high-end video card that would generate enough heat to ruin your phone’s battery if they could somehow shrink it to fit inside a phone. This just isn’t feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
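To put rough numbers on the memory argument (a back-of-the-envelope sketch; the parameter count and precision below are illustrative assumptions, not figures for any actual Google model):

```python
# Rough estimate of the RAM needed just to hold a model's weights.
# bytes_per_param: 4 for fp32, 2 for fp16, 1 for int8 quantization.
def weight_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

# Hypothetical 1-billion-parameter image model stored in fp16:
print(weight_memory_gb(1_000_000_000, 2))  # -> 2.0 (GB for weights alone)
```

And that’s before activations, the OS, and every other app competing for the same 8–12 GB of phone RAM, which is why aggressive quantization or cloud offload is usually the only option.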
You can, for example, run some upscaling models on your phone just fine (I mentioned the SuperImage app in the photography tips megathread). Yes, the most powerful and memory-hungry models need more RAM than your phone can offer, but it’s a bit misleading if Google doesn’t say that those are being run in the cloud.
So much for the brilliant AI-specialized Tensor processor
It’s basically just a mediocre processor that offloads interesting things to the mothership.
This really doesn’t surprise me.
Yeah, obviously. The storage and compute required to actually run these generative AI models is absolutely massive; how would that fit in a phone?
Fuuuuck that.
Using Google products has always been a “privacy nightmare” - it’s not like this is some mega open-source phone or anything; it’s literally Google’s flagship. Is this really surprising? Playing with fire gets you burned.
Even ignoring all the privacy issues with that, it’s kinda shit to unnecessarily lose phone features when you’ve got no signal.
That’s how phones have always worked. As long as they’re not automatically ingesting data from people without permission, I don’t see an issue with this. At least in this instance we have some choice in what we send or don’t. It’s no more a privacy nightmare than Midjourney, DALL-E, or any of the others.