As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

29 points

Good. It’s not even AI. That word is just used because ignorant people eat it up.

13 points

Call it whatever you want; if you worked in a field where it’s useful, you’d see the value.

“But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

Holy shit! So you mean… Like humans? Lol

3 points

I wasn’t knocking its usefulness. It’s certainly not AI though, and has pretty limited usefulness.

Edit: When the fuck did I say “limited usefulness = not useful for anything”? God the fucking goalpost-moving. I’m fucking out.

0 points

If you think its usefulness is limited, you don’t work in a professional environment that utilizes it. I find new uses every day as a network engineer.

Hell, I had it write me backup scripts for my switches the other day using a Python automation framework called Nornir. It walked me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as setting up the appropriate path. Then it wrote the damn script for me.
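For the curious, it came out roughly in this shape (a minimal sketch, not my exact code; it assumes the nornir-netmiko plugin is installed and an inventory is already defined in config.yaml, and the file names here are illustrative):

    from pathlib import Path

    from nornir import InitNornir
    from nornir_netmiko import netmiko_send_command

    # config.yaml points at the inventory (hosts.yaml / groups.yaml),
    # which holds each switch's hostname, platform, and credentials.
    nr = InitNornir(config_file="config.yaml")

    def backup_config(task):
        # SSH to the switch via netmiko and grab the running config.
        result = task.run(task=netmiko_send_command,
                          command_string="show running-config")
        Path("backups").mkdir(exist_ok=True)
        Path(f"backups/{task.host.name}.cfg").write_text(result[0].result)

    nr.run(task=backup_config)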

Sure, I had to tweak it to match my specific deployment, and there were a couple of things it was out of date on, but that’s the point, isn’t it? Humans using AI to get more work done, not AI replacing us wholesale. I’ve never gotten accurate information faster than with AI; search engines are like going to the library and skimming the shelves by comparison.

Is it perfect? No. Is it still massively useful, and will it overhaul data work and IT in the next decade the same way computers did in the ’90s/’00s? Absolutely. If you disagree, it’s because you’ve either been exclusively using it to dick around or you don’t work from behind a computer screen at all.

1 point

okay, you write a definition of AI then

12 points

“But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

Holy shit! So you mean… Like humans? Lol

No, not like humans. The current chatbots are relational language models. Take programming, for example: you can teach a human to program by explaining the principles of programming and the rules of the syntax, and they could then write a piece of code never having seen code before. The chatbot AIs are not capable of that.

I am fairly certain that if you take a chatbot that has never seen any code and feed it a programming book that doesn’t contain any code examples, it would not be able to produce code. A human could, because humans can reason and create something new. A language model needs to have seen something to be able to rearrange it.

We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.

1 point
Deleted by creator
16 points

It is indeed AI. Artificial intelligence is a field of study that encompasses machine learning, along with a wide variety of other things.

Ignorant people get upset about that word being used because all they know about “AI” is from sci-fi shows and movies.

13 points

Except that, for all the intents and purposes people keep talking about it for, it’s simply not. It’s not about technicalities; it’s about how most people are freaking confused. If most people are freaking confused, then by god do we need to re-categorize and come up with some new words.

0 points

The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype reigning supreme.

To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. Seems like it would instead be better to try to ensure that half-assed science journalism and science “popularizers” actually do their jobs.

18 points

“Artificial intelligence” is well-established technical jargon that’s been in use by researchers for decades. There are scientific journals named “Artificial Intelligence” that are older than I am.

If the general public is so confused they can come up with their own new name for it. Call them HALs or Skynets or whatever, and then they can rightly say “ChatGPT is not a Skynet” and maybe it’ll calm them down a little. Changing the name of the whole field of study is just not in the cards at this point.

9 points

I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.

There’s the concept of Artificial General Intelligence (AGI), or Artificial Consciousness, which people are somewhat obsessed with: the idea that we’ll create an artificial mind that thinks the way a human mind does.

But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.

I expect AI to be a very similar monster.

If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach, he did 3 amazing episodes with Lex.

-5 points

Current “AI” doesn’t solve problems. It doesn’t understand context. It can’t see fingers and say “those are fingers, make sure there’s only five”. It can’t tell the difference between a truth and a lie. It can’t say “well, that can’t be right!” It just regurgitates an amalgamation of things humans have shown it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.

0 points

So…it acts like a human?

4 points

You’re confusing AI with AGI. AGI is the ultimate goal of AI research. AI are all the steps along the way. Step by step, AI researchers figure out how to make computers replicate human capabilities. AGI is when we have an AI that has basically replicated all human capabilities. That’s when it’s no longer bounded by a particular problem.

You can use the more specific terms “weak AI” or “narrow AI” if you prefer.

Generative AI is just another step along the way, just like how the emergence of deep learning was a step some years ago. It can clearly produce stuff that previously only humans could make, which in this case is convincing text and pictures from arbitrary prompts. It’s accurate to call it AI (or weak AI).

-3 points

True, it’s not AI, but it’s doing quite an impressive job. Injecting fake money shouldn’t be allowed; these companies should have to generate sales, especially when they’re disrupting some human field, even if it is a fad.

Compete all you want, but use your own money and profits to cover your costs.

Yeah, I know, there’s something called “investment”

16 points

Where’s all the “NoOoOoO this isn’t like crypto it’s gonna be different” people at now?

12 points

I can derive value from LLMs. I already have. There’s no value in crypto. And if you tell me there is, I won’t agree. It’s bullshit. So is this, but to a lesser degree.

Mint some NFTs and tell me how that improves your life.

19 points

That’s an incredibly bad comparison. LLMs are already used daily by many people, saving them time in different aspects of their lives and work. Crypto, on the other hand, is still looking for its everyday use case.

2 points

Yeah, I assumed the general consensus was that the “bubble” in crypto is the “alt coins” and the outright scams. Ethereum and the initial projects that created the foundational technologies (smart contracts, etc.) are still respected and, I’d say, have a use case, even if they’re not exactly “production ready.” Likewise for AI/ML: the foundational LLMs, like LLaMA, Stability’s models, the GPTs, and Anthropic’s Claude, aren’t included in this bubble, since they aren’t built on top of each other but are separate implementations of a foundation. Anything a layer higher maybe is.

-2 points

Right, but how much time is it actually saving when you have to fact-check everything it gives you anyway?

11 points

We’re too busy automating our jobs.

Really though, this was never like crypto/NFTs. AI is a toolset used to troubleshoot and amplify workloads. Tools survive no matter what, whereas crypto/NFTs died because they never had a use case.

Just because a bunch of tech bros were throwing their wallets at a wall full of startups that’ll fail doesn’t mean AI as a concept will fail. That’s no different than saying that because of the dot-com bubble, websites and the Internet were going to be a fad.

Websites are a tool; just because everyone and their brother has one for no reason doesn’t mean actual use cases won’t appear (in fact they already exist, much like the websites that survived the internet bubble).

26 points

I mean, it is different than crypto, but that’s an incredibly low bar to clear.

37 points

AI is bringing us functional things though.

Dot-com was about making webtech to sell companies to venture capitalists, who would then sell the company on to a bigger company. It was literally about window-dressing garbage to make a business proposition.

Of course there’s some of that going on in AI, but there’s also a hell of a lot of deeper opportunity being made.

What happens if you take a well-done video college course, every subject, and train an AI that’s both good at working with people in a teaching frame and also properly versed in the subject matter? You take the course, and in real time you can stop it and ask the AI teacher questions. It helps you, responding exactly to what you ask, and then gives you a quick quiz to make sure you understand. What happens when your class doesn’t need to be at a certain time of the day or night? What happens if you don’t need an hour and a half to sit down and consume the data?

What if secondary education is simply one-on-one tutoring with an AI? How far could we get as a species if this was given to the world freely? What if everyone could advance as far as their interest let them? What if AI translation gets good enough that language is no longer a concern?

AI has a lot of the same hallmarks and a lot of the same investors as crypto and half a dozen other partially or completely failed ideas. But there’s an awful lot of new things that can be done that could never be done before. To me that signifies there’s real value here.

*dictation fixes

8 points

You got two problems:

First, ai can’t be a tutor or teacher because it gets things wrong. Part of pedagogy is consistency and correctness and ai isn’t that. So it can’t do what you’re suggesting.

Second, even if it could (it can’t get to that point, the technology is incapable of it, but we’re just spitballing here), that’s not profitable. I mean, what are you gonna do, replace public school teachers? The people trying to do that aren’t interested in replacing the public school system with a new gee whiz technology that provides access to infinite knowledge, that doesn’t create citizens. The goal of replacing the public school system is streamlining the birth to workplace pipeline. Rosie the robot nanny doesn’t do that.

The private school class isn’t gonna go for it either, currently because they’re ideologically opposed to subjecting their children to the pain tesseract, but more broadly because they are paying big bucks for the best educators available, they don’t need a robot nanny, they already have plenty. You can’t sell precision mass produced automation to someone buying bespoke handcrafted goods.

There’s a secret third problem which is that ai isn’t worried about precision or communicating clearly, it’s worried about doing what “feels” right in the situation. Is that the teacher you want? For any type of education?

-1 points

First, ai can’t be a tutor or teacher because it gets things wrong.

Since the current iteration, which is designed for general-purpose language modeling and trained widely on every piece of data in existence, can’t do exactly one use case, you can’t conceive that it could ever be done with the technology? GTHO. It’s not like we’re going to say “ChatGPT, teach kids how LLMs work”; it’d be some more structured program that uses something like ChatGPT for communication. This is completely reasonable.
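Something in roughly this spirit, say (a minimal sketch against OpenAI’s chat-completions API; the course, system prompt, and model choice are illustrative assumptions, not a real product):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM = ("You are a tutor for an intro networking course. "
              "Answer only from the lesson text provided; if unsure, say so, "
              "and finish with one short quiz question.")

    def ask_tutor(lesson_text: str, question: str) -> str:
        # The structured program supplies vetted course material as context,
        # so the model handles the communication, not the curriculum.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user",
                       "content": f"{lesson_text}\n\nStudent question: {question}"}])
        return resp.choices[0].message.content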

that’s not profitable.

A. It’s my opinion, but I think you’re dead wrong: it’s easily profitable. Even if it’s not up to Ivy League standards, it would certainly put community college out of business.

B. Screw profit. Philanthropic investment throws a couple billion into a nonprofit run by someone who wants to see it happen.

The private school class isn’t gonna go for it either,

You think an Ivy League school is above selling a light model of their courseware when they don’t have to pay anyone to teach the classes or grade the work? Check out Harvard University on edX. It’s not a stretch.

There’s a secret third problem which is that ai isn’t worried about precision or communicating clearly

Ohh a secret third problem, that sounds fun. I’ll let you in on another secret, AI isn’t worried because it’s a very large complicated math program. It doesn’t worry about communicating clearly, the people who pile on layer upon layer of LLM to produce output do that. It doesn’t give a damn about anything, but the people who work on it do.

You want clarity?

Let’s have GPT-4, here as it sits, clear up your complaint about my post:

"Here is a revised version of your text that sounds more educated:

There are two primary issues with the notion of using artificial intelligence as a tutor or teacher. Firstly, artificial intelligence is not infallible and can make mistakes. Consistency and correctness are integral components of pedagogy, and artificial intelligence is not yet capable of achieving these standards. As such, it is not suitable for the role you have suggested.

Secondly, even if artificial intelligence were capable of fulfilling this role (which it is not, due to technological limitations), it would not be a profitable endeavor. Replacing public school teachers with artificial intelligence would not align with the goals of those seeking to reform the public school system. The aim is to streamline the process from birth to the workplace, and an artificial intelligence nanny would not facilitate this.

Furthermore, the private school class would not be interested in such a proposal. They are currently opposed to subjecting their children to such technology and are willing to pay for the best educators available. Precision mass-produced automation cannot compete with bespoke handcrafted goods.

There is also a third issue to consider: artificial intelligence is not concerned with precision or clear communication. Instead, it focuses on what “feels” right in a given situation. Is this the type of teacher we want for any form of education?"

or perhaps you’d like to have it communicate clearly to younger people

"Sure, here is a revised version of the text written at a 5th-grade level:

There are two big problems with using computers to teach kids. The first problem is that computers can make mistakes. Teachers need to be right all the time, and computers aren’t always right. So, they can’t do the job you want them to do.

The second problem is that even if computers could do the job (they can’t, but let’s pretend), it wouldn’t make money. People who want to change schools don’t want to use computers to teach kids everything. They want to make it easier for kids to get jobs when they grow up. A computer nanny can’t do that.

Rich kids who go to private schools wouldn’t like it either. Their parents pay a lot of money for the best teachers. They don’t want a computer nanny. You can’t sell something cheap and easy to make to someone who wants something special and handmade.

There’s also a secret third problem. Computers don’t care about being right or explaining things clearly. They just do what they think is best at the moment. Is that the kind of teacher you want? For any kind of learning?"

2 points

Woof.

I’m not gonna ape your style of argumentation or adopt a tone that’s not conversational, so if that doesn’t suit you don’t feel compelled to reply. We’re not machines here and can choose how or even if we respond to a prompt.

I’m also not gonna stop anthropomorphizing the technology. We both know it’s a glorified math problem that can fake it till it makes it (hopefully), if we’ve both accepted calling it intelligence there’s nothing keeping us from generalizing the inference “behavior” as “feeling”. In lieu of intermediate jargon it’s damn near required.

Okay:

Outputting correct information isn’t just one use case, it’s a deep and fundamental flaw in the technology. Teaching might be considered one use case, but it’s predicated on not imagining or hallucinating the answer. Ai can’t teach for this reason.

If ai were profitable then why are there articles ringing the bubble alarm bell? Bubbles form when a bunch of money gets pumped in as investment but doesn’t come out as profit. Now it’s possible that there’s not a bubble and all this is for nothing, but read the room.

But let’s say you’re right and there’s not a bubble: why would you suggest community college as a place where ai could be profitable? Community colleges are run as public goods, not profit generating businesses. Ai can’t put them out of business because they aren’t in it! Now there are companies that make equipment used in education, but their margins aren’t usually wide enough to pay back massive vc investment.

It’s pretty silly to suggest that billionaire philanthropy is a functional or desirable way to make decisions.

edX isn’t for the people that go to Harvard. It’s a rent-seeking cash grab intended to buoy the cash raft that keeps the school in operation. edX isn’t an example of the private school classes using machine teaching on themselves, and certainly not on a broad scale. At best you could see private schools use something like edX as supplementary coursework.

I already touched on your last response up at the top, but clearly the people who work on ai don’t worry about precision or clarity because it can’t do those things reliably.

Summarizing my post with gpt4 is a neat trick, but it doesn’t actually prove what you seem to be going for because both summaries were less clear and muddy the point.

Now just a tiny word on tone: you’re not under any compulsion to talk to me or anyone else a certain way, but the way you wrote and set up your reply makes it seem like you feel under attack. What’s your background with the technology we call ai?

2 points

This weekend my aunt got a room at a very expensive hotel, and was delighted by the fact that a robot delivered amenities to her room. And at breakfast we had an argument about whether or not it saved the hotel money to use the robot instead of a person.

But the bottom line is that the robot was only in use at an extremely expensive hotel and is not commonly seen at cheap hotels. So the robot is a pretty expensive investment, even if it saves money in the long run.

Public schools are NEVER going to make an investment as expensive as an AI teacher, it doesn’t matter how advanced the things get. Besides, their teachers are union. I will give you that rich private schools might try it.

5 points

Essentially we have invented a calculator of sorts, and people have been convinced it’s a mathematician.

2 points

We’ve invented a computer model that bullshits its way through tests and presentations and convinced ourselves it’s a star student.

16 points

Dot-com brought us functional things. This bubble is filled with companies dressing up the algorithms they were already using as “AI” and making fanciful claims about their potential use cases, just like you’re doing with your AI example. In practice, that’s not going to work out as well as you think it will, for a number of reasons.

-2 points

Gentleman’s bet: there will be AI teaching college-level courses, augmenting video classes, within 10 years. It’s a video class that already exists, coupled with a helpdesk bot that already exists, trained against tagged text material that already exists. They just need more purpose-built non-AI structure to guide it all along the rails and oversee the process.

2 points

@linearchaos How can a predictive text model grade papers effectively?

What you’re describing isn’t teaching, it’s a teacher using an LLM to generate lesson material.

1 point

In the current state, people can take classes on, say, Zoom, formulate a question, and then type it into Google, which pulls up an LLM-generated search result from Bard.

Is there profit in building an LLM application on a much narrower set of training data and selling it as a pay-service competitor to an ostensibly free alternative? It would need to be pretty significantly more efficient or effective than the free alternative. I don’t question the usefulness of the technology, since it’s already in use, just the business-case feasibility amidst the competitive environment.

1 point

What happens if you take a well-done video college course, every subject, and train an AI that’s both good at working with people in a teaching frame and also properly versed in the subject matter? You take the course, and in real time you can stop it and ask the AI teacher questions. It helps you, responding exactly to what you ask, and then gives you a quick quiz to make sure you understand. What happens when your class doesn’t need to be at a certain time of the day or night? What happens if you don’t need an hour and a half to sit down and consume the data?

You get stupid-ass students because an AI producing word-salad is not capable of critical thinking.

0 points

It would appear to me that you’ve not been exposed to much in the way of current AI content. We’ve moved past the shitty news articles from 5 years ago.

0 points

Five years ago? Try last month.

Or hell, why not try literally this instant.

15 points

The Internet also brought us a shit ton of functional things too. The dot-com bubble didn’t happen because the Internet wasn’t transformative or incredibly valuable; it happened because for every company that knew what they were doing there were a dozen companies trying something new that may or may not work, and for every one of those companies there were a dozen companies that were trying but had no idea what they were doing. The same thing is absolutely happening with AI. There’s a lot of speculation about what will and won’t work, and many companies will bet on the wrong approach and fail, and there are also a lot of companies vastly underestimating how much technical knowledge is required to make ai reliable for production, which are going to fail because they don’t have the right skills.

The only way it won’t happen is if the VCs are smarter than last time and make fewer bad bets. And that’s a big fucking if.

Also, a lot of the ideas that failed in the dot-com bubble weren’t actually bad ideas; they were just too early, and the tech wasn’t there to support them. There were delivery apps in the early internet days, for example, but the distribution tech didn’t exist yet. It took smartphones to make them viable. The same mistakes are ripe to happen with ai too.

Then there are the companies that have good ideas and just underestimate the work needed to make them work. That’s going to happen a bunch with ai, because prompts make it very easy to come up with a prototype, but making it reliable takes seriously good engineering chops to deal with all the times ai acts unpredictably.

3 points

for every company that knew what they were doing there were a dozen companies trying something new that may or may not work,

I’d like some samples of that: a company attempting something transformative back then that may or may not have worked, and didn’t. I was working for a company that hooked ‘promising’ companies up with investors. No shit, that was our whole business plan: we’d redress your site in Flash, put some video/sound effects in, and help sell you to someone with money looking to buy into the next Google. Everything that was ‘throwing things at the wall to see what sticks’ was a thinly veiled grift for VC. Almost no one was doing anything transformative. The few things that made it (eBay, Google, Amazon) were using engineers to solve actual problems: online shopping, online auctions, natural-language search. These are the same kinds of companies that continued to spring into existence after the crash.

It’s the whole point of the bubble. It was a bubble because most of the money was going into pockets, not making anything. People were investing in companies that didn’t have a viable product and had no ambition beyond getting bought by a big dog and making a quick buck. There wasn’t all of a sudden this flood of inventors making new and wonderful things, unless you count new and amazing marketing cons.

9 points

There are two kinds of companies in tech: hard tech companies who invent it, and tech-enabled companies who apply it to real world use cases.

With every new technology you have everyone come out of the woodwork and try the novel invention (web, mobile, crypto, ai) in the domain they know with a new tech-enabled venture.

Then there’s an inevitable pruning period when some critical mass of mismatches between new tool and application run out of money and go under. (The beauty of the free market)

AI is not good for everything, at least not yet.

So now it’s AI’s time to simmer down and be used for what it’s actually good at, or continue as niche hard-tech ventures focused on making it better at those things it’s not good at.

5 points

I absolutely love how crypto (blockchain) works, but I have yet to see a good use case that’s not a pyramid scheme. :)

LLMs/AI will never be good for everything. But they’re damn good at a few things now, and they’ll probably transform a few more things before running out of tricks or actually becoming AI (if we ever find a way to make a neural network that big before we boil ourselves alive).

The whole quantum computing thing will get more interesting shortly, as long as we keep finding math tricks it’s good at.

I was around and active for dot-com; I think the tech right now is a hell of a lot more interesting and promising.

2 points

Crypto is very useful in defective economies, such as some in South America, to compensate for the flaws of a crumbling financial system. It’s also, sadly, useful for money laundering.

For these two uses, it should stay functional.

19 points

In the dot-com boom we got sites like Amazon, Google, etc. And AOL was providing internet service. Not a good service. AOL was insanely overvalued (like insanely overvalued, it was ridiculous), but they were providing a service.

But we also got a hell of a lot of businesses which were just “existing business X… but on the internet!”

It’s not too dissimilar to how it is with AI now really. “We’re doing what we did before… but now with AI technology!”

If it follows the dot-com boom-bust pattern, some companies will survive it and become extremely valuable in the future. But most will go under. This will result in an AI oligopoly among the companies that survive.

1 point

AOL was NOT a dot-com company; it was already far past its prime when the bubble was in full swing, still attaching CD-ROMs to blocks of Kraft cheese.

The dot-com boom generated an unimaginable number of absolute trash companies. The company I worked for back then had its entire schtick based on taking a lump sum of money from a given company, giving them a sexy Flash website, and connecting them with angel investors for a cut of their ownership.

Photoshop currently using AI to get the job done is more of an advantage than 99% of the garbage that was wrought forth and died on the vine in the early ’00s. Topaz Labs can currently take a poor copy of VHS video uploaded to YouTube and turn it into something nearly reasonable to watch in HD. You can feed rough drafts of performance reviews or apologetic letters through ChatGPT and end up with nearly professional-quality copy that states your points more clearly than you’d manage yourself with a few hours of review (at least it does for me).

The companies born around the dot-com boom that persist didn’t need the boom to persist; they were born from good ideas and had good foundations.

There’s still a lot to come out of the AI craze. Even if we stopped where we are now, upcoming advances in the medical field alone will have a bigger impact on human quality of life than 90% of those ’00s money grabs.

57 points

It’s starting to look like the crypto/NFT scam because it’s the same fucking assholes forcing this bullshit on all of us.

3 points

Crypto/NFT is shit though. At least “AI” has actual tech behind it.

23 points

What they are calling “ai” is usually not ai at all though…

1 point

Yeah, I know, hence the quotes around “ai”.

6 points

Crypto had real tech behind it too. The reason it was bullshit wasn’t that there wasn’t serious tech backing it, it’s that there was no use case that wasn’t a shittier version of something else.

2 points

Yeah, “real tech”. Crypto/NFT is not real in the sense that it’s useless, as of now. It is useful to criminals though.

2 points

A broken clock is right twice a day. The crypto dumbasses jump on every trend, so you still need to evaluate each one on its own merits. The crypto bros couldn’t come up with a compelling real-world use case over years and years, so that was obviously bullshit. Generative AI is just kicking off, and there are already tons of use cases for it.

8 points

I said the same thing; it feels like that. I wonder if there’s some sociological study behind what has been pushing “wrappers” at such high volume. Wrappers meaning that 90%+ of the companies aren’t incorporating intellectual property of any kind, just saturating the markets with re-implementations for quick income and then folding. I feel this is not a new thing, but for some reason it’s felt way more “in my face” the past 4 years.

35 points

As someone that currently works in AI/ML, with a lot of very talented scientists with PhD’s and dozens of papers against their name, it boils my piss when I see crypto cunts I used to know that are suddenly all on the AI train, trying to peddle their “influence” on LinkedIn.

10 points

never heard boil my piss before

6 points

NFTs in their mainstream form were the most cringe-worthy concept imaginable. A random artist makes a random ape which suddenly becomes a collectible, and all it happened to be was an S3 URL on a particular blockchain? Which could be replicated on another chain? How did people think this was a smart thing to invest in?! Especially the apes and rubbish?!

5 points

AI will follow the same path as VR IMO. Everybody will freak out about it for a while, tons of companies will try getting into the market.

And after a few years, nobody will really care about it anymore. The people that use it will like it. It will become integrated in subtle and mundane ways, like how AR/VR tech is used in TikTok filters, smartphone camera settings, etc.

I don’t think it will become anything like general intelligence.

4 points
Deleted by creator
4 points

Nah, this ain’t it.

So here’s the thing about AI: every company desperately wants their employees to be using it, because it’ll increase their productivity and eventually allow upper management to fire more people and pass the savings on to the C-suite. Just like with computerization.

The problem is that you can’t just send all of your spreadsheets of personal financial data to OpenAI/Bing, because from a security perspective that’s a huge black hole of data exfiltration which will make your company more vulnerable. How do we solve the problem?

In the next five to ten years you will see everyone from Microsoft/Google to smaller, more niche groups begin to offer on-premises or cloud-based AI models that are trained on a standardized set of information by the manufacturer/distributor, and then personally trained on company data by internal engineers or a new type of IT role completely focused on AI (just like how we have automation and cloud engineering positions today).
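Picture something like this running inside the company firewall (a minimal sketch using an open-weights model through Hugging Face’s transformers pipeline; the model name and prompt are illustrative assumptions, and you’d need serious hardware to run it):

    from transformers import pipeline

    # Weights live on company hardware, so prompts and the data inside
    # them never leave the network.
    assistant = pipeline("text-generation",
                         model="mistralai/Mistral-7B-Instruct-v0.2")

    reply = assistant("Summarize the anomalies in this internal sales sheet: ...",
                      max_new_tokens=200)
    print(reply[0]["generated_text"])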

The data worker of the future will have a virtual assistant that emulates everything we thought Google Assistant and Cortana were going to be, and will replace most data entry positions. Programmers will probably be fewer and further between, and the people that keep their jobs in general will be the ones who can multiply and automate their workload with the ASSISTANCE of AI.

It’s not going to replace us anytime soon, but it’s going to change the working environment just as much as the invention of the PC did.

8 points

That’s exactly what was said about the internet in 1990. We have no idea what the next step will be.

4 points

The problem with VR is the cost of a headset. It’s a high cost of entry. Few want to buy another expensive device unless it’s really worth it.

Generative AI has a small cost of entry for the consumer. Just log in to a site, maybe pay some subscription fee, and start prompting. I’ve used it to quickly generate Excel formulas, for example. Instead of looking for a particular answer on some website full of SEO garbage, I can get an answer immediately.

