41 points

At a beach restaurant the other night I kept hearing a loud American voice cut across all conversation, going on and on about “AI” and how it would get into all human “workflows” (new buzzword?). His confidence and loudness were only matched by his obvious lack of understanding of how LLMs actually work.

35 points

“Confidently incorrect” I think describes a lot of AI aficionados.

16 points

And LLMs themselves.

10 points

I would also add “hopeful delusionals” and “unhinged cultists” to that list of labels.

Seriously, we have people right now making their plans for what they’re going to do with their lives once Artificial Super Intelligence emerges and changes the entire world into some kind of post-scarcity, Star Trek world where literally everyone is wealthy and nobody has to work. They think this is only several years away. They’re not a tiny number either, and they exist on a broad spectrum.

Our species is so desperate for help from beyond, a savior that will change the current status quo. We’ve been making fantasies and stories to indulge this desire for millennia, and this is just the latest incarnation.

No company on Earth is going to develop any kind of machine or tool that will destabilize the economic markets of our capitalist world. A LOT has to change before anyone will even dream of upending centuries of wealth-building.

3 points

AI itself too, I guess. Also, I have to point this out every time, but my username was chosen way before all this shit blew up in our faces. I’ve used this one on every platform for years.

16 points

Some people can only hear “AI means I can pay people less/get rid of them entirely” and stop listening.

6 points

AI means C-level jobs should be on the block as well. The board can make decisions based on its output.

7 points

The whole ex-McKinsey management layer is at risk. Whole teams of people who were dedicated to producing pretty slides with “action titles” for managers higher up the chain to consume and regurgitate are now having their lunch eaten by AI.

2 points

Just wait until Elon puts AI in those new robots he invented!!!

/s for those who need it…

10 points

I’ve noticed that the people most vocal about wanting to use AI get very coy when you ask them what it should actually do.

5 points

I also notice that the ONLY people who can offer firsthand reports of how it’s actually useful in any way are in a very, very narrow niche.

Basically, if you’re not a programmer, and even then a very select set of programmers, then your life is completely unimpacted by generative AI broadly. (Not counting the millions of students who used it to write papers for them.)

AI is currently one of those solutions in search of a problem. In its current state, it can’t really do anything useful broadly. It can make your written work sound more professional and, at the same time, more mediocre. It can generate very convincing pictures if you invest enough time into trying to decode the best sequence of prompts and literally just get lucky, but it’s far too inaccurate and inconsistent to generate, say, a fully illustrated comic book or cartoon, unless you already have a lot of talent in that field. I have tried many times to use AI in my current job to analyze PDF documents and spreadsheets, and it’s still completely unable to do work that requires mathematics as well as contextual understanding of what that math represents.

You can have really fun or cool conversations with it, but it’s not exactly captivating. It is also wildly inaccurate for daily use. I ask it for help finding songs by describing the lyrics and other clues, and it confidently points me to non-existent albums by hallucinated artists.

I have no doubt that in time it’s going to radically change our world, but it’s going to require a LOT more baking before it’s done. Despite how excited a few select people are, nothing is changing overnight. We’re going to have a century-long “singularity” and won’t realize we’ve been through it until it’s done. As history tends to go.

1 point

“AI, how do I do <obscure thing> in <complex programming framework>”

“Here is some <language> code. Please fix any errors: <paste code here>”

These save me hours of work on a regular basis, and I don’t even use the paid tier of ChatGPT for it. Especially the first one, because I used to read half the documentation to answer that question. Results are accurate 80% of the time, and the other 20% is close enough that I can fix it in a few minutes. I’m not in an obscure AI-related field; any programmer can benefit from stuff like this.
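Those two prompt shapes are easy to wrap in a small helper so they stay consistent. This is only an illustrative sketch; the function names are made up, and actually sending the prompt to a model (ChatGPT, a local LLM, whatever) is deliberately left out:

```python
def fix_errors_prompt(language: str, code: str) -> str:
    """Build the second prompt shape from the comment above.
    Any client with a complete(prompt) -> str method would slot in after this."""
    return f"Here is some {language} code. Please fix any errors:\n\n{code}"

def obscure_thing_prompt(thing: str, framework: str) -> str:
    """Build the first prompt shape: 'how do I do X in Y'."""
    return f"How do I do {thing} in {framework}?"

prompt = fix_errors_prompt("Python", "print('hello'")
print(prompt.splitlines()[0])
# -> Here is some Python code. Please fix any errors:
```

The point of templating it is just that the 80/20 accuracy experience above depends a lot on phrasing the same way every time.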

4 points

Because as a social phenomenon it promises to decide for them what it should actually do.

3 points

Porn / AI girlfriends. That’s what 90% of AI power users do.

6 points

I really like the idea of an LLM narrowly configured to filter and summarize data that comes in an irregular/organic form.

You would have to run it multiple times in parallel, with different models and slightly different configurations, to reduce hallucinations (similar to sensor redundancy in industrial safety systems). But still… that alone is a game changer in “parsing the real world”. The problem is that the energy needed to do this right (>= 3x) gets cut by skipping the safety and redundancy, because the hallucinations only become apparent somewhere down the line, and only sometimes.

They poison their own well because they jump directly to the enshittification stage.

So people talking about embedding it into workflows… hi… here I am! =D
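The redundancy idea above can be sketched in a few lines: run the same document through several independently configured models and only trust an extracted value when a quorum agrees, exactly like 2-out-of-3 sensor voting. Everything here is hypothetical (the stand-in "models" are plain lambdas, and a quorum of 2 is assumed), just to show the shape:

```python
from collections import Counter

def majority_vote(answers, quorum=2):
    """Keep a value only if at least `quorum` independent runs agree on it;
    otherwise return None to flag it for human review."""
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= quorum else None

def extract_with_redundancy(document, models, quorum=2):
    # Each `model` is any callable mapping raw text to an extracted field.
    # In practice these would be different LLMs or different configurations.
    answers = [model(document) for model in models]
    return majority_vote(answers, quorum)

# Three stand-in "models": two agree, one hallucinates.
models = [
    lambda doc: "invoice total: 420.00",
    lambda doc: "invoice total: 420.00",
    lambda doc: "invoice total: 999.99",  # the hallucinating run
]
print(extract_with_redundancy("raw invoice text...", models))
# -> invoice total: 420.00
```

The cost of this is exactly the >= 3x energy mentioned above, which is what gets cut when products skip the redundancy.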

3 points

A buddy of mine has been doing this for months. As a manager, his first use case was summarizing the statuses of his team members into a team status. Arguably, hallucinations aren’t critical there.

5 points

I would argue that this makes the process microscopically more efficient and macroscopically way less efficient. That whole process is probably useless, and imagine wasting so much energy, water and computing power just to speed this useless process up and save a handful of minutes (I am a lead, and it takes me 2-3 minutes to put together a status of my team, and I don’t usually even request a status from each member).

I keep saying this to everyone in my company who pushes for LLMs for administrative tasks: if you feel like LLMs can do this task, we should stop doing it at all, because it means we are just going through the motions and pleasing a process without purpose. You will have people producing reports via LLM from a one-line prompt, the manager assembling them together with an LLM, and at best someone reading it will distill it once again with LLMs. It is all a great waste of money, energy, time and cognitive effort that doesn’t benefit anybody.

As soon as someone proposes to introduce LLMs into a process, counter by proposing to cut that process altogether. Let’s produce less bullshit, instead of producing more while polluting even more in the process.

135 points

A big issue that a lot of these tech companies seem to have is that they don’t understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

AI is legitimately interesting technology which definitely has specialized use-cases, e.g. sorting large amounts of data, or optimizing strategies within highly restrained circumstances (like chess or go). However, 99% of what people are pushing with AI these days as a member of the general public just seems like garbage; bad art and bad translations and incorrect answers to questions.

I do not understand all the hype around AI. I can understand the danger; people who don’t see that it’s bad are using it in place of people who know how to do things. But in my teaching, for example, I’ve never had any issues with students cheating using ChatGPT; I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can’t imagine any student would be inclined to use ChatGPT to cheat multiple times after the first grade comes in. (In this sense, it’s actually impressive technology - we’ve had computers that can do advanced math highly accurately for a while, but we’ve finally developed one that’s worse at math than the average undergrad in a gen-ed class!)

60 points

The answer is that it’s all about “growth”. The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).

To make share price go up like that, you have to do one of two things; show that you’re bringing in new customers, or show that you can make your existing customers pay more.

For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.

And that’s what “AI” looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI flavoured products. That’s why they’re shoving it into everything they can. If they put AI in notepad then they can claim that every time you open notepad you’re “engaging” with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show “engagement”, “interest”, which they can then use to promise that down the line huge piles of money will fall out of this pinata.

The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren’t just them setting huge piles of money on fire.

9 points

I know I’m an enthusiast, but can I just say I’m excited about NotebookLM? I think it will be great for documenting application development. Having a shared notebook that knows the environment and configuration and architecture and standards for an application, and can answer specific questions about it, could be really useful.

“AI Notepad” is really underselling it. I’m trying to load up massive Markdown documents to feed into NotebookLM to try it out. I don’t know if it’ll work as well as I’m hoping, because it takes time to put together enough information to be worthwhile in a format the AI can easily digest. But I’m hopeful.

That’s not to take away from your point: the average person probably has little use for this, and wouldn’t want to put in the effort to make it worthwhile. But spending way too much time obsessing about nerd things is my calling.

16 points

From a nerdy perspective, LLMs are actually very cool. The problem is that they’re grotesquely inefficient. That means that, practically speaking, whatever cool use you come up with for them has to work in one of two ways; either a user runs it themselves, typically very slowly or on a pretty powerful computer, or it runs as a cloud service, in which case that cloud service has to figure out how to be profitable.

Right now we’re not being exposed to the true cost of these models. Everyone is in the “give it out cheap / free to get people hooked” stage. Once the bill comes due, very few of these projects will be cool enough to justify their costs.

Like, would you pay $50/month for NotebookLM? However good it is, I’m guessing it’s probably not that good. Maybe it is. Maybe that’s a reasonable price to you. It’s probably not a reasonable price to enough people to sustain serious development on it.

That’s the problem. LLMs are cool, but mostly in a “Hey this is kind of neat” way. They do things that are useful, but not essential, but they do so at an operating cost that only works for things that are essential. You can’t run them on fun money, but you can’t make a convincing case for selling them at serious money.

4 points

Being able to summarize and answer questions about a specific corpus of text was a use case I was excited for even knowing that LLMs can’t really answer general questions or logically reason.

But if Google search summaries are any indication they can’t even do that. And I’m not just talking about the screenshots people post, this is my own experience with it.

Maybe you could run the LLM in an entirely different way, such that you enter a question and it tells you which part of the source text statistically correlates the most with the words you typed, instead of trying to generate new text. That way, in a worst-case scenario, it just points you to a part of the source text that’s irrelevant, instead of giving you answers that are subtly wrong or misleading.

Even then I’m not sure the huge computational requirements make it worth it over ctrl-f or a slightly more sophisticated search algorithm.
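That “point at the text instead of generating” mode is essentially retrieval: score the question against each passage and return the best match. As a crude stand-in for real embedding models, a bag-of-words cosine similarity already shows the shape of it (toy data, not a production search):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_passage(question, passages):
    """Return the passage whose word counts correlate most with the question.
    It can only ever point at existing source text, never invent new text."""
    q = Counter(question.lower().split())
    return max(passages, key=lambda p: cosine(q, Counter(p.lower().split())))

passages = [
    "The fox jumped over the lazy dog.",
    "Polynomial division runs in quadratic time.",
    "Cats sleep for most of the day.",
]
print(best_passage("how fast is polynomial division", passages))
# -> Polynomial division runs in quadratic time.
```

Which is also why the ctrl-f comparison above is fair: for a single document, this is only a slightly fuzzier grep, and the huge LLM-sized compute budget buys you fuzzier matching rather than correctness.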

-2 points

You’re using the wrong tool.

Hell, notepad is the wrong tool for every use case; it exists in case you’ve broken things so thoroughly on Windows that you need to edit a file to fix it. It’s the text editor of last resort, a dumb simple file editor always there when you need it.

Adding any feature (except possibly a hex editor) makes it worse at its only job.

1 point

The answer is that it’s all about “growth”. The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).

As you can see, this can’t go on indefinitely. Such unpleasantness is also well known after every huge technological revolution. Every time it was eventually resolved, and not in favor of those on the quick-buck train.

It’s still not a dead end. The cycle of birth, growth, old age, death, rebirth from the ashes and so on still works. It’s only the competitive, evolutionary, “fast” model that has been killed - temporarily.

These corporations will still die unless they make themselves effectively part of the state.

BTW, that’s what happened in Germany described by Marx, so despite my distaste for marxism, some of its core ideas may be locally applicable with the process we observe.

It’s like a worldwide gold rush IMHO, but not even really worldwide. There are plenty of solutions to be developed and sold in developing countries in place of what fits Americans and Europeans and Chinese and so on, but doesn’t fit the rest. Markets are not exhausted for everyone. Just for these corporations because they are unable to evolve.

Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

If only Sun survived till now, I feel they would have good days. What made them fail then would make them more profitable now. They were planning too far ahead probably, and were too careless with actually keeping the company afloat.

My point is that Sun could, unlike these corporations, function as some kind of “the phone company”, or “the construction company”, etc. Basically what Microsoft pretended to be in the 00s. They were bad with choosing the right kind of hype, but good with having a comprehensive vision of computing. Except that vision and its relation to finances had schizoaffective traits.

Same with DEC.

The point is to show “engagement”, “interest”, which they can then use to promise that down the line huge piles of money will fall out of this pinata.

Well, it’s not unprecedented for business opportunities to dry up. It’s actually normal. What’s more important, the investors supporting this are the dumber kind, and the investors investing in more real things are the smarter kind. So when these crash (for a few years, hunger will probably become a real issue, and not just in developing countries), those preserving power will tend to be rather insightful people.

1 point

If only Sun survived till now, I feel they would have good days

The problem is a lot of what Sun brought to the industry is now in the Linux arena. If Sun survived, would Linux have happened? With such a huge development infrastructure around Linux, would Sun really add value?

I was a huge fan of Sun also; they revolutionized the industry far above their footprint. However, their approach seemed more research-oriented or academic at times, and didn’t really work with their business model. Red Hat figured out a balance where they could develop open source while making enough to support their business. The Linux world figured out a different balance, where the industry is above and beyond individual companies and doesn’t require profit.

8 points

I’ve run some college hw through 4o just to see, and it’s remarkably good at generating proofs for math and algorithms. Sometimes it’s not quite right, but it’s usually on the right track to get started.

In some of the busier classes I’m almost certain students do this because my hw grades would be lower than the mean and my exam grades would be well above the mean.

2 points

I understand some of the hype. LLMs are pretty amazing nowadays (though closedai is unethical af so don’t use them).

I need to program complex cryptography code for university. Claude sonnet 3.5 solves some of the challenges instantly.

And it’s not trivial stuff, but things like “how do I divide polynomials, where each coefficient of that polynomial is an element of GF(2^128)?” Given the context (my source code), it adds the code seamlessly, writes unit tests, and it just works. (That is important for AES-GCM, the thing TLS relies on most of the time.)
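For the curious, the building block under that polynomial division is multiplying the coefficients themselves, i.e. elements of GF(2^128) with the AES-GCM reduction polynomial x^128 + x^7 + x^2 + x + 1. A straightforward sketch (not constant-time, and ignoring the reflected bit order real GHASH uses):

```python
# Elements are 128-bit ints whose bits are coefficients over GF(2).
R = (1 << 128) | 0x87  # x^128 + x^7 + x^2 + x + 1

def gf128_mul(a, b):
    """Carry-less multiply of a and b, then reduce mod R."""
    # Carry-less (XOR-based) multiplication
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # Reduce the up-to-255-bit product back below 2^128
    for i in range(p.bit_length() - 1, 127, -1):
        if p >> i & 1:
            p ^= R << (i - 128)
    return p

# Sanity check: x^127 * x = x^128, which reduces to x^7 + x^2 + x + 1 (0x87)
assert gf128_mul(1 << 127, 2) == 0x87
```

Division by a nonzero coefficient then reduces to multiplying by its inverse, computable as a^(2^128 - 2) via square-and-multiply, since the multiplicative group has order 2^128 - 1.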

Besides that, LLMs are good at what I call moving words around. Writing cute little short stories in fictional worlds given some info material, or checking for spelling, or re-formulating a message into a very diplomatic nice message, so on.

On the other side, it’s often complete BS shoehorning LLMs into things, because “AI cool word line go up”.

76 points

There is this seeming need to discredit AI from some people that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.

But generative AIs are really good at tasks I wouldn’t have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity’s current workflow. It’s not just hype.

The part that is over hyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn’t really fit, like AI enabled fridges and toasters.

69 points

The part that is over hyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn’t really fit, like AI enabled fridges and toasters.

This is literally the hype. This is the hype that is dying and needs to die. Because generative AI is a tool with fairly specific uses. But it is being marketed by literally everyone who has it as General AI that can “DO ALL THE THINGS!” which it’s not and never will be.

11 points

The obsession with replacing workers with AI isn’t going to die. It’s too late. The large financial company that I work for has been obsessively tracking hours saved in developer time with GitHub Copilot. I’m an older developer and I was warned this week that my job will be eliminated soon.

4 points

The large financial company that I work for

So the company that is obsessed with money that you work for has discovered a way to (they think) make more money by getting rid of you and you’re surprised by this?

At least you’ve been forewarned. Take the opportunity to abandon ship. Don’t be the last one standing when the music stops.

39 points

Even if they plateaued in place where they are right now it would lead to major shakeups in humanity’s current workflow

Like which one? Because it’s now been 2 years since we’ve had ChatGPT, and already quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

6 points

Computer programming has radically changed. It’s a huge help having LLM autocomplete and chat built in, plus IDEs like Cursor and Windsurf.

I’ve been a developer for 35 years. This is shaking it up as much as the internet did.

2 points

@remindme@mstdn.social 1 year. Let me know about the seachange of new 10x transform based programmers that have automated me out of a job.

34 points

I quit my previous job in part because I couldn’t deal with the influx of terrible, unreliable, dangerous, bloated, nonsensical, not-even-working code that was suddenly pushed into one of the projects I was working on. That project is now completely dead; they froze it on some arbitrary version.
When a junior dev makes a mistake, you can explain it to them and they will not make it again. When they use an LLM to make a mistake, there is nothing to explain to anyone.
I compare this shake more to an earthquake than to anything positive you can associate with shaking.

35 points

I hardly see that it has changed, to be honest. I work in the field too, and I can imagine LLMs being good at producing decent boilerplate straight out of documentation, but nothing more complex than that.

I often use LLMs to work on my personal projects, and - for example - Claude or ChatGPT 4o often spit out programs that don’t compile, use nonexistent functions, are bloated, etc. Possibly for languages with more training data (like Python) they do better, but I can’t see it as a “radical change”; it’s more like a well-configured snippet plugin and autocomplete feature.

LLMs can’t count, and can’t analyze novel problems (by definition) or provide innovative solutions… why would they radically change programming?

-4 points

Exactly this. Things have already changed and are changing as more and more people learn how and where to use these technologies. I have seen even teachers use this stuff who have limited grasp of technology in general.

1 point

I don’t know anything about the online news business, but it certainly appears to have changed. Most of it is dreck either way, and those organizations are not a positive contributor to society, but they are there, it is a business, and it has changed society.

2 points

I don’t see the change. Sure, there are spam websites with AI content that were not there before, but is this news business at all? All major publishers and newspapers don’t (seem to) use AI as far as I can tell.

Also, I would argue this is not much of a change, except maybe in the simplicity of generating fluff. All of this has existed for 20 years now, and it’s a byproduct of the online advertisement business (that, for sure, was a major change in society!). AI pieces are just yet another way to generate content in the hope of getting views.

-11 points

Review of legal documents.

15 points

Oh boy… what can possibly go wrong with documents where minutiae of wording can make a huge difference?

11 points

Goldman Sachs, quote from the article:

“AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.”

Generative AI can indeed do impressive things from a technical standpoint, but not enough revenue has been generated so far to offset the enormous costs. Like for other technologies, it might just take time (remember how many billions Amazon burned before turning into a cash-generating machine? And Uber has also just started turning a profit) + a great deal of enshittification once more people and companies are dependent. Or it might just be a bubble.

As humans, we’re not great at predicting these things, and that of course includes me. My personal prediction? A few companies will make money, especially the ones that start selling AI as a service at increasingly high costs; many others will fail, and both AI enthusiasts and detractors will claim they were right all along.

23 points

Computers have always been good at pattern recognition. This isn’t new. LLMs are not a type of actual AI. They are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have; they have no idea what a finger is.

The same goes for code completion. They will just generate something that fills the pattern they’re told to look for. It doesn’t matter if it’s right or wrong, because they have no concept of what is right or wrong beyond fitting the pattern. Not to mention that we’ve had code completion software for over a decade at this point. LLMs do it less efficiently and less reliably. The only upside is that sometimes they can recognize and suggest a pattern that those programming the other coding helpers might have missed. Outside of that - such as generating whole blocks of code or even entire programs - you can’t even get an LLM to reliably spit out a hello world program.

5 points

I never know what to think when I come across a comment like this one—which does describe, even if only at a surface level, how an LLM works—with 50% downvotes. Like, are people angry at reality, is that it?

15 points

With as much misinformation as is being spread about LLMs, going into anything more than a generalization would only lose more people’s comprehension.

The problem is people are being sold AGI. But ChatGPT and all these other tools don’t even remotely qualify for that. They’re really nothing more than a glorified Alice chatbot system on steroids. The one neat new trick in all this is that they’ve automated the training a bit. But these LLMs have no more comprehension of their output, or the input they were given, than something like the old Alice chatbot.

These tools have been described as artificial intelligence to laymen for decades at this point. It makes it really hard to change that calcified opinion. People would rather believe that it’s some magical thing, not just probability and maths.

2 points

Downvoting someone on the Internet is easier than tangentially modifying reality in a measurable way

2 points

“It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’”
- Pamela McCorduck

“AI is whatever hasn’t been done yet.”
- Larry Tesler

That’s the curse of the AI Effect.
Nothing will ever be “an actual AI” until we cross the barrier to an actual human-like general artificial intelligence like Cortana from Halo, and even then people will claim it isn’t actually intelligent.

8 points

I mean, I think intelligence requires the ability to integrate new information into one’s knowledge base. LLMs can’t do that, they have to be trained on a fixed corpus.

Also, LLMs have a pretty shit-tastic track record of being able to differentiate correct data from bullshit, which is a pretty essential facet of intelligence IMO

8 points

Well, at least until those who study intelligence and self-awareness actually come up with a comprehensive definition for it - something we don’t even have currently, which makes the situation even more silly. The people selling LLMs and AGNs as artificial intelligence are the P.T. Barnum of the modern era. This way to the egress, folks! Come see the magnificent egress!

1 point

Sometimes it seems like the biggest success of AI has been refining the definition of intelligence. But we still have a long way to go.

-5 points

Large-context-window LLMs are able to do quite a bit more than gap filling and completion. They can edit multiple files.

Yet, they’re unreliable, as they hallucinate all the time. Debugging LLM-generated code is a new skill, and it’s up to you to decide to learn it or not. I see quite an even split among devs. I think it’s worth it, though once it took me two hours to find a very obscure bug in LLM-generated code.

4 points

If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But since generated code is able to use tons of unknown side effects and other seemingly (to humans) random stuff to achieve its goal, I’d rather take the other approach, where it takes a human half an hour to write the code that some LLM could generate in seconds, and not have to learn how to parse random mumbo jumbo from a machine while still getting a working result.

Writing code is far from the longest part of the job, and you blithely decided that making the tedious part even more tedious is a great idea to shorten the already-short part of it…

1 point

Humans are notoriously worse at tasks that involve reviewing than at tasks that involve creating. Editing an article is more boring and painful than writing it; understanding and debugging code is much harder than writing it; observing someone cook to spot mistakes is more boring than cooking yourself; and so on.

This also conflicts with the attention those tasks require, which means a higher ratio of reviewing to creating tasks leads to lower-quality output, because attention is depleted at some point and mistakes slip in. All this with the additional “bonus” of having to pay for the tool AND the human reviewer, while also wasting tons of water and energy. I think it’s wise to ask ourselves whether this makes sense at all.

2 points

What is your favorite flavor of kool aid?

permalink
report
parent
reply
1 point
*

I have one of those at work now, but my experience with it is still quite limited. Copilot was quite useful for knocking up quick boutique solutions to particular problems (stitch together a load of PDFs sorted on a name heading), with the proviso that you might end up patching mismatched dependency versions and fixing broken syntax. I couldn’t trust it with big refactors of existing systems.

permalink
report
parent
reply
16 points

This is easy to say about the output of AIs… if you don’t check their work.

Alas, checking for accuracy these days seems to be considered old fogey stuff.

permalink
report
parent
reply
2 points

Like what outcome?

I have seen gains on cell detection, but it’s “just” a bit better.

permalink
report
parent
reply
3 points

See now, I would prefer AI in my toaster. It should be able to learn to adjust the cook time to what I want no matter what type of bread I put in it. Though is that really AI? It could be. Same with my fridge: learn what gets used and what doesn’t, then give my wife the numbers on that damn clear box of salad she buys at Costco every time, which takes up a ton of space and always goes bad before she eats even 5% of it. These would be practical benefits to the crap that is day-to-day life, and far more impactful than search results I can’t trust.
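A learning toaster wouldn’t even need an LLM; a simple running average of the user’s corrections, kept per bread type, would get there. A hypothetical sketch (the class and parameter names here are invented for illustration):

```python
class LearningToaster:
    """Toy model of a toaster that adapts cook time per bread type.

    An exponential moving average of the user's corrections slowly
    converges on their preferred time -- no "AI" required.
    """

    def __init__(self, default_seconds=120, alpha=0.5):
        self.default = default_seconds
        self.alpha = alpha   # how strongly each correction shifts the estimate
        self.times = {}      # bread type -> learned cook time (seconds)

    def suggest(self, bread):
        return self.times.get(bread, self.default)

    def feedback(self, bread, actual_seconds):
        # Blend the time the user actually wanted into the current estimate.
        old = self.suggest(bread)
        self.times[bread] = (1 - self.alpha) * old + self.alpha * actual_seconds


toaster = LearningToaster()
toaster.feedback("sourdough", 180)   # user kept re-toasting; wanted 3 minutes
toaster.feedback("sourdough", 180)
print(toaster.suggest("sourdough"))  # estimate has moved from 120 toward 180
```

The fridge case is the same couple of lines of arithmetic: a running tally of what gets eaten versus what gets tossed.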

permalink
report
parent
reply
10 points
*

There’s a good point here: something like 80% of what we’re calling AI right now isn’t AI, or even an LLM. It’s just algorithms, code, plain old math. I’m pretty sure someone is going to refer to a calculator as AI soon. “Wow, it knows math! Just like a person! Amazing technology!”

(That’s putting aside the very question of whether LLMs should even qualify as AIs at all.)

permalink
report
parent
reply
4 points

In my professional experience, AI seems to be just a faster way to generate an algorithm that’s really hard to debug. Though I’m DevOps/SRE, so I’m not as deep in it as the devs are.

permalink
report
parent
reply
6 points

You better believe that AI-powered toaster would only accept authorized bread from a bakery that paid top dollar to the company that makes them. To ensure the best quality possible and save you from inferior toast, of course.

permalink
report
parent
reply
2 points

Lol, enshittification should at least take a few months… I hope.

permalink
report
parent
reply
1 point
*

And I’m sure each slice will have an entirely necessary chip on it, legally protected from workarounds, to prevent you from using other brands or commodity bread and ensure the optimal experience.

permalink
report
parent
reply
1 point

or you go to make some toast and it spends 15 minutes downloading “updates” before you can use it

permalink
report
parent
reply
3 points
*

I agree with your wife: there’s always an aspirational salad in the fridge. For most foods I’m pretty good at not buying stuff we won’t eat, but we should always eat more veggies. I don’t know how to persuade us to actually do it, but step 1 is availability. Like that Reddit meme:

  1. Availability
  2. ???
  3. Profit by improved health
permalink
report
parent
reply
2 points

It’s been years… maybe we don’t need the Costco size, for the love of Pete.

permalink
report
parent
reply
3 points

See now, I would prefer AI in my toaster.

You really wouldn’t.

permalink
report
parent
reply
3 points

I was so hoping that was Toasty the Toaster! Waffles? How about a bagel?

permalink
report
parent
reply
57 points

“Built to do my art and writing so I can do my laundry and dishes” – Embodied agents are where the real value is. The chatbots are just fancy tech demos that folks started selling because people were buying.

permalink
report
reply
18 points

Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.

permalink
report
parent
reply
2 points

Too bad it actively makes all of your work lower quality via the “helping”.

permalink
report
parent
reply
6 points

Just like with any other coworker, it’s important to know which tasks they do well and where they typically need help.

permalink
report
parent
reply
10 points

Though the image generators are actually good. The visual arts will never be the same after this.

permalink
report
parent
reply
27 points
*

Compare it to the microwave. Is it good at something? Yes. But if you shove your fucking turkey in it at Thanksgiving and expect good results, you’re ignorant of how it works. Most people expect language models to do shit they aren’t meant to do. Most of it isn’t new technology either, just old tech that people slapped a label on. I wasn’t playing Soulcalibur on the Dreamcast against AI opponents… yet now they’re called AI opponents, with no requirement to be any different. GoldenEye on the N64 was man vs. AI. Madden 1995… AI. “Where did this AI boom come from!”

Marketing and mislabeling. Online classes? Call it AI. Photo editors? Call it AI.

permalink
report
parent
reply
2 points

I wasn’t playing Soulcalibur on the Dreamcast against AI opponents…

Maybe terminology differs by region, but I absolutely played against AI as a kid. When I set up a game of Command and Conquer or something, I’d pick the number of AI opponents. Sometimes we’d call them bots (more common in FPS) or “the computer” or “CPU” (esp in Civ and other TBS), but I distinctly remember calling RTS SP opponents “AI” and I think many games used that terminology during the 90s.

What frustrates me is the opposite of what you’re saying: people have changed the meaning of “AI” from a human-programmed opponent to a statistical model. When I played against “AI” 20-30 years ago, I was playing against something a human crafted and tuned. These days, I don’t play against “AI” because “AI” generates text, images, and video from a statistical model and can’t really play games. AI is something that runs in the cloud, with maybe a small portion on phones and Windows computers to do simple tasks where the network would add too much latency.

permalink
report
parent
reply
3 points

Sometimes I really regret having signed onto an instance that disables downvotes.

permalink
report
parent
reply
1 point

It’s easy to switch.

That said, I think the comment is constructive. It used to be that websites, textbooks, etc would pay artists or pay for stock photos (which indirectly pays artists), but now they can gen a dozen or so images and pick their favorite.

I’m not saying this is good or bad, but I do agree that art will never be the same.

permalink
report
parent
reply
1 point
*

I’ve been thinking about this a lot recently. No, we’re not there yet, and may never be. Compare what Jesar, one of my favorite artists, could do - and that was back in the oh-so-long-ago 2000s - with what an AI can do. It’s simply not up to the task. I do use AI a lot to create what is basically utility art, but it depends on pre-defined textual or visual inputs, whereas only an artist can have divine inspiration. AI is more of a sterile tool - interactive clipart, if you will.

permalink
report
parent
reply
1 point

I think “interactive clipart” is a great description. You are, I believe, totally correct that (at least for now) GenAI can’t do what professionals can do, but it can do better than many / most non-professionals. I can’t do art to save my life, and I don’t have the money to pay pros to make the mundane, boring everyday things that I need (like simple, uncluttered pictures for vocabulary cards). GenAI solves that problem for me.

Similarly, teachers used to try to rewrite complex texts for students at lower reading levels (such as English Learners). That took time and some expertise. Now, GenAI probably does it tens of thousands of times a day for teachers all over the USA.

I think, at least for the moment, that middle / lower level is where GenAI is currently most helpful - exactly the places that, in earlier times, were happy with clipart.

permalink
report
parent
reply
28 points

So you’re saying we won’t have any crowdsourced blockchain Web 2.0 AIs?

permalink
report
reply
11 points

Quantum! Don’t forget quantum, you filthy peasant.

permalink
report
parent
reply
5 points

Nope. No Crowdsourced Blockchain Web 3.0 VR+AR AI NFTs.

permalink
report
parent
reply
4 points

Please, keep up with the times. We’re at Web 6.0 already.

permalink
report
parent
reply
2 points

Where’s my federated open source AI that runs on Linux 😤

permalink
report
parent
reply
1 point

The drones doing airstrikes might!

permalink
report
parent
reply

Technology

!technology@lemmy.world
