“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”
Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.
Correction: LLMs being used to automate shit don’t generate any value. The underlying AI technology is generating tons of value.
AlphaFold 2 has advanced protein-folding research in biochemistry by multiple decades in just a couple of years, taking us from roughly 150,000 known protein structures to 200 million predicted ones in about a year.
Well sure, but you’re forgetting that the federal government has pulled the rug out from under health research and has therefore made it so there is no economic value in biochemistry.
How is that a qualification on anything they said? If our knowledge of protein folding has gone up by multiples, then it has gone up by multiples, regardless of whatever funding shenanigans Trump is pulling or what effects those might eventually have. None of that detracts from the value that has already been delivered, so I don’t see how they are “forgetting” anything. At best, it’s a circumstance that may factor in economically, but it says nothing about AI’s intrinsic value.
Thanks. So the underlying architecture that powers LLMs has application in things besides language generation like protein folding and DNA sequencing.
You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they share similar architectural components.
A Large Language Model is basically a translator: all it did was bridge the gap between us speaking normally and a computer understanding what we’re saying.
The actual decisions all these “AI” programs make come from machine learning algorithms, and those algorithms have not fundamentally changed since we created them and started tweaking them in the 90s.
“AI” is basically a marketing term that companies jumped on to generate hype because they made the ML programs able to talk to you. But they’re not actually intelligent in the sense people are, at least by the definitions set by computer scientists.
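To put a date on how old those decision-making algorithms really are: the perceptron goes back to the 1950s, and its error-driven update rule is still recognizable in modern training loops. A from-scratch toy sketch, learning logical AND (the learning rate and epoch count are arbitrary choices for illustration):

```python
# Toy perceptron: a 1950s-era statistical decision rule of the kind
# that still underlies modern "AI" pipelines.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])  # one weight per input feature
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:  # classic error-driven update
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
                b += lr * (y - pred)
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn logical AND from four examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
```

After training, `predict(w, b, x)` reproduces AND on all four inputs; the “intelligence” is nothing more than a learned linear threshold.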
Image recognition models are also useful for astronomy. The largest black hole jet was discovered recently, and it was done, in part, by using an AI model to sift through vast amounts of data.
https://www.youtube.com/watch?v=wC1lssgsEGY
This thing is so big, it travels between voids in the filaments of galactic super clusters and hits the next one over.
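For a sense of what that data-sifting looks like, here is a toy sketch of the flag-candidates pattern using plain numpy template matching rather than a real trained model (the blob “template”, the noise level, and the threshold are all made up for illustration):

```python
import numpy as np

def flag_candidates(image, template, threshold):
    """Score every window by similarity to a source template and
    flag top-left positions scoring above the threshold (toy matched filter)."""
    th, tw = template.shape
    t = template - template.mean()  # zero-mean template
    scores = np.zeros_like(image, dtype=float)
    for yy in range(image.shape[0] - th + 1):
        for xx in range(image.shape[1] - tw + 1):
            patch = image[yy:yy + th, xx:xx + tw]
            scores[yy, xx] = np.sum((patch - patch.mean()) * t)
    return np.argwhere(scores > threshold)

# Toy "sky": mostly noise, with one bright source planted at (9, 19).
rng = np.random.default_rng(0)
sky = rng.normal(0, 0.1, (32, 32))
blob = np.array([[0.5, 1.0, 0.5],
                 [1.0, 2.0, 1.0],
                 [0.5, 1.0, 0.5]])
sky[9:12, 19:22] += blob

hits = flag_candidates(sky, blob, threshold=1.4)
```

A real survey pipeline would use a trained model over enormous datasets, but the shape of the job is the same: score everything, surface the few candidates worth a human’s attention.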
AI is just what we call automation until marketing figures out a new way to sell the tech. LLMs are generative AI: hardly useful or valuable, but new and shiny, with a party trick that tickles the human brain in a way that makes people hand over their money. Machine learning and other forms of AI have been around longer, and most have value-generating applications, but they aren’t as fun to demonstrate, so they never got the traction LLMs have gathered.
Like all good sci-fi, they just took what was already happening to oppressed people and made it about white/American people, while adding a little misdirection by extrapolation from existing tech research. Only took about 20 years for Foucault’s boomerang to fully swing back around, and keep in mind that all the basic ideas behind LLMs had been worked out by the 80s, we just needed 40 more years of Moore’s law to make computation fast enough and data sets large enough.
Ah yes, same with Boolean logic: it only took a century for Moore’s law to pick up, with a small milestone along the way when the transistor was invented. All of computer science was already laid out by Boole from day one, including everything AI already does or ever will do.
/S
That is not at all what he said. He said that setting some arbitrary benchmark on the level or quality of the AI (e.g. it’s smarter than a 5th grader, or as intelligent as an adult) is meaningless. The real measure is whether value is created and put out into the real world. He also mentions a benchmark of global growth going up by 10%. He doesn’t provide data correlating that growth with the use of AI, and I doubt such data exists yet. Let’s not twist what he said into “Microsoft CEO says AI provides no value” when that is not what he said.
I think that’s pretty clear to people who get past the clickbait. Oddly enough, though, if you read through what he actually said, the takeaway is basically a tacit admission: he’s trying to reset expectations for AI without directly admitting that the strategy of massively investing in LLMs is going bust and delivering no measurable value, so he can deflect with “BUT HEY, CHECK OUT QUANTUM”.
It is fun to generate some stupid images a few times, but you can’t trust that “AI” crap with anything serious.
I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most if not all of the use cases companies want to use it for.
For example, I use the Hoarder app, a site-bookmarking program, and when I save any given site, it feeds the text into a local Ollama model which summarizes it, conjures up some tags, and applies them. This is useful for me, and if it generates a few extra tags that aren’t useful, it doesn’t really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any profit.
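For the curious, here’s a minimal sketch of that kind of local tagging pipeline. It assumes Ollama’s default local endpoint; the “llama3” model name is a stand-in, and Hoarder’s real prompt and parsing will certainly differ:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def summarize_and_tag(page_text, model="llama3"):
    """Ask a local Ollama model for comma-separated tags for a page.
    (Model name is an assumption; a real app's prompt will differ.)"""
    prompt = ("Suggest 3-5 short topic tags, comma-separated, "
              "for this page:\n\n" + page_text[:4000])
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        raw = json.load(resp)["response"]
    return parse_tags(raw)

def parse_tags(raw):
    """Normalize the model's free-form reply into clean, deduplicated tags."""
    tags = []
    for part in raw.replace("\n", ",").split(","):
        tag = part.strip().strip(".").lower()
        if tag and tag not in tags:
            tags.append(tag)
    return tags

# Usage (needs a running Ollama server):
#   summarize_and_tag("An article about self-hosting bookmark managers.")
```

Note the `parse_tags` cleanup step: exactly because the model’s output isn’t reliable, the glue code has to tolerate stray newlines, duplicates, and punctuation, which is also why a few junk tags slipping through is the expected failure mode.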
On the other end, you have Google’s Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through several results yourself. And like 90% of the time, it actually does a great job. The problem lies in the goal, which is to save you from having to check individual sources, combined with its reliability rate. If I google 100 things and Gemini answers 99 of them accurately but completely hallucinates the 100th, then all 100 times I have to check its sources and verify that what it said was correct. Which means I’m now back to just… you know… looking through the search results one by one, like I would have anyway without the AI.
So while AI is far from useless, it can’t currently be relied on for anything important, and maybe it never will be, and that’s where the money to be made is.
Even your manual search results may lead you to incorrect sources, selection bias toward what you want to see, heck, even AI-generated slop, so the AI-generated results are just another layer on top. Link-aggregating search engines are slowly becoming useless at this rate.
While that’s true, the thing that stuck out to me is not even that the AI was misled by finding AI slop, or by somebody falsely asserting something. I googled something with a particular yes-or-no answer: “Does X technology use Y protocol?” The AI came back with “Yes it does, and here’s how it uses it”, and upon visiting the reference page for that answer, it was documentation for that technology which explained very clearly that X technology does NOT use Y protocol, and then went into detail on why it doesn’t. So even when everything lines up and the answer is clear and unambiguous, the AI can give you an entirely fabricated answer.
Ironically, Google might be accelerating its own downfall as it tries to chase the “market”, considering LLMs are just a hole in its pocket.
I’ve been working on an internal project for my job: a quarterly report on the most bleeding-edge use cases of AI, and the stuff being achieved is genuinely really impressive.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The answer is the chatbot. If you have the technical nous to program machine-learning tools, they can accomplish truly stunning things at speeds not seen before.
If you don’t know how to do, e.g., a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault; not everyone needs that knowledge. But it does explain the gap between promise and delivery: AI can only help you do faster what you already know how to do.
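To make that concrete: actually running a Fourier transform is a one-liner, but knowing what the sampling rate and frequency bins mean is the real skill. A minimal numpy sketch (the 3 Hz signal and 100 Hz sampling rate are just example numbers):

```python
import numpy as np

# A 3 Hz sine wave sampled at 100 Hz for one second.
fs = 100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 3 * t)

# The FFT itself is trivial to call...
spectrum = np.abs(np.fft.rfft(signal))
# ...but mapping bins back to physical frequencies requires knowing
# the sampling interval, or the answer is meaningless.
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
dominant = freqs[np.argmax(spectrum)]  # 3.0 Hz
```

If you don’t know why `d=1/fs` matters, the tool happily hands you numbers you can’t interpret, which is exactly the promise-versus-delivery gap.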
Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
For coding it’s also useful for doing the menial grunt work that’s easy but just takes time.
You’re not going to replace a senior dev with it, of course, but it’s a great tool.
My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.
LLMs can be useful for translation between programming languages. I recently asked one for server code given client code in a different language, and the LLM-generated code was spot on!
Exactly - I find AI tools very useful and they save me quite a bit of time, but they’re still tools. Better at some things than others, but the bottom line is that they’re dependent on the person using them. Plus the more limited the problem scope, the better they can be.
Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can’t tell.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
Just that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.
Not this again… LLM is a subset of ML, which is a subset of AI.
AI is very, very broad, and all of ML fits into it.
What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. With undergrad-level statistics, calculus, and algebra, anyone can read them. You don’t need a qualification; you could just Google each term you’re unfamiliar with.
While I understand your objection to the nomenclature, in this particular context all major AI-production houses including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA) count LLMs as part of their AI collateral.
The machine-learning mechanism LLMs use, training on data, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable. Labeling this “AI” is misleading at best, and directly undermines democracy and freedom in practice, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.
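A toy illustration of that “statistics without understanding” point: a bigram model that predicts the next word purely from frequency counts. Real LLMs are vastly more sophisticated (transformers over tokens, not word pairs), but the predict-the-likely-continuation core is the same idea:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus for illustration.
corpus = ("the model predicts the next word the model predicts "
          "nothing about truth").split()

# Count which word follows which: pure frequency, zero comprehension.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation of `word`."""
    return follows[word].most_common(1)[0][0]
```

Here `most_likely_next("the")` returns `"model"` simply because that pairing occurred most often, not because anything was understood, which is exactly why statistically plausible output can still be flatly wrong.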
People who condone the use of the term “AI” for this kind of statistical approach are naive at best, snake oil vendors or straightout enemies of humanity.