Coffeezilla asks: “Is the LAM a Scam? Down the rabbit hole we go.”
Pretty much everything AI is a scam. I mean, it has its uses, but it isn't exactly as claimed yet. Pretty much every non-phone AI gadget I've seen so far definitely is a scam.
If you think that “pretty much everything AI is a scam”, then you’re either setting your expectations way too high, or you’re only looking at startups trying to get the attention of investors.
There are plenty of AI models out there today that are open source and can be used for a number of purposes: generating images (Stable Diffusion), transcribing audio (Whisper), audio generation, object detection, upscaling, downscaling, etc.
Part of the problem might be with how you define AI… It’s way more broad of a term than what I think you’re trying to convey.
I think it’s becoming fair to label a lot of commercial AI “scams” at this point, considering the huge gulf between the hype and the end results.
Open source projects are different due to their lack of commercialisation.
Sure, but don’t let that feed into the sentiment that AI = scams. It’s way too broad of a term that covers a ton of different applications (that already work) to be used in that way.
And there are plenty of popular commercial AI products out there that work as well, so trying to say that “pretty much everything that’s commercial AI is a scam” is also inaccurate.
We have:
- Suno's music generation
- NVIDIA's upscaling
- Midjourney's image generation
- OpenAI's ChatGPT
- Etc.
So instead of trying to tear down everything and anything “AI”, we should probably just point out that startups using a lot of buzzwords (like “AI”) should be treated with a healthy dose of skepticism, until they can prove their product in a live environment.
I mean, LLaMA is open-source and it's made by Facebook for profit, so there are grey areas. Imo though, any service that claims to be anything more than a fancy wrapper for OpenAI, Anthropic, etc. API calls is possibly a scam. Especially if they're trying to sell you hardware, or the service costs more than like $10/month; LLM API calls are obscenely cheap. I use a local frontend as an AI assistant that works by making API calls through a service called OpenRouter (basically a unified service that makes API calls to all the major cloud LLM providers for you). I put like $5 in it 3 or 4 months ago and it still hasn't run out.
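To show how thin that "fancy wrapper" layer really is, here's a minimal sketch of an OpenRouter-style chat-completion call. OpenRouter exposes an OpenAI-compatible endpoint; the API key and model name below are placeholders, and the request is built but deliberately not sent, so this runs without a real account.

```python
import json
import urllib.request

# OpenRouter's OpenAI-compatible chat-completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request.

    `model` uses OpenRouter's "provider/model" naming, e.g. a hypothetical
    "openai/gpt-4o-mini"; `api_key` would be your real key in practice.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("sk-or-placeholder", "openai/gpt-4o-mini", "Hello!")
# urllib.request.urlopen(req) would actually send it; skipped here so the
# sketch works offline with no real key.
```

That's essentially the whole trick: a frontend wraps this one HTTP call, and OpenRouter routes it to whichever provider hosts the model you named.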
Machine translation has been used by large organizations for years. Anyone saying AI is a scam doesn't realize it's been around, and useful, for quite a while.
I find there are 4 kinds of folks talking about AI.
There are folks who think it's as amazing as all the tech firms tell us:
- And we're all gonna die
Or
- And life will be amazing
Then there are folks who think AI is hype-whack bananas
- And think it's a scam.
And lastly,
- The folks who see that we've already changed life as we know it with AI. That there's still massive potential, but folks in categories 1, 2, and 3 are all kinda nuts.
4 gang.
There’s a 5th type - those of us who understand that the technology itself isn’t a scam and has valid uses (even if many “AI” startups actually are scams), but think there isn’t that much potential left with current methods due to the extreme amount of data and energy required (which seems to be supported by some research lately, but only time will tell).
This is because dedicated consumer AI hardware is a dumb idea. If it’s powerful enough to run a model locally, you should be able to use it for other things (like, say, as a phone or PC) and if it’s sending all its API requests to the cloud, then it has no business being anything but a smartphone app or website.
I can’t agree with that. ASICs can specialize to do one thing at lightning speeds, and fail to do even the most basic of anything else. It’s like claiming your GPU is super powerful so it should be able to run your PC without a CPU.
Investments in AI are in the billions. With that kind of money flying around, it's going to attract a lot of snake oil salesmen. It doesn't help that, for the general public and investors, any sufficiently advanced technology is indistinguishable from magic, and LLMs reached that point for many.
Just keep the hype cycle in mind. It’ll all go downhill after the point of inflated expectations. With AI, it always does.
It’s very much fake it till you make it.
Just go all out, and gamble that in 5 years the technology will be there to actually make it all function like you dreamt it would. And by then you're the de facto name within that space and can take advantage of that.
No, Google and Amazon were actually well run businesses with sensible business plans to meet needs in the market and did it well.
I am not surprised that it's just ChatGPT in a box lol, not at all.
Artificial, yes. Intelligent, no.
BUT THE LAM! People reported on the "large action model" like it was real. In this case, it always sounded like bullshit, even if they were selling ideas they feel are obvious and inevitable.
I dunno. It sounds like a somewhat feasible thing that could be kinda useful if done right; it just doesn't actually exist, which is the problem here. It doesn't sound too crazy, which is why people bought this thing. The part I struggle with conceptually is that a LAM would essentially weaponize bots, the same thing all these stupid captchas are meant to stop. It would also drive users away from websites and therefore away from ads. This would be all-out war, and the money (i.e. websites with ad revenue) would ultimately win, unfortunately.