
GreenKnight23

GreenKnight23@lemmy.world
0 posts • 222 comments

I have had multiple offers to work at AWS. I won’t because I know how shitty they treat their employees.

if they make a dude piss in a bottle shuttling boxes, imagine what they’re making higher paid engineers do. sleepless nights? impossible deadlines? insane scope creep?

no. thank you, but no.


and yet the crypto/AI bros swear that the second coming of AI Christ is here.


he’s the one who’s blurred the lines between the businesses.

taking funds from one to pay for the other regularly.

I’d say the EU has every right to do it this way.


the crypto scam ended when the AI scam started. AI conveniently uses the same/similar hardware that crypto used before the bubble burst.

that not enough? take a look at the Google Trends data that shows AI taking off right when interest in crypto died.

so yeah, there’s a lot more that connects the two than what you’d like people to believe.


not once did I mention ChatGPT or LLMs. why do AI bros always use them as an argument? I think it’s because you all know how shit they are, so you call it out first to disarm anyone trying to use them as proof of how shit AI is.

everything you mentioned is ML and algorithm interpretation, not AI. fuzzy data is processed by ML. fuzzy inputs, ML. AI stores data similarly to a neural network, but that does not mean it “thinks like a human”.

if nobody can provide peer reviewed articles, that means they don’t exist, which means all the “power” behind AI is just hot air. if they existed, just pop it into your little LLM and have it spit the articles out.

AI is a marketing joke like “the cloud” was 20 years ago.


Oxford defines plagiarism as:

Presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement.

I think that covers 100% of your argument here.

LLMs can’t provide reference to their source materials without opening the business behind it to litigation. this means the LLM can’t request consent.

the child, in this case, cannot get consent from the original authors whose content trained the LLM, cannot get consent from the LLM, and incorporated the result of the LLM’s plagiarism into their work, attempting to pass it off as their own.

the parents are entitled and enabling pricks and don’t have legal ground to stand on.


when I get an email written by AI, it means the sender doesn’t think I’m worth the time to respond to themselves.

I get a lot of email that I have to read for work. It used to be about 30 a day that I had to respond to. now that people are using AI, it’s at or over 100 a day.

I provide technical consulting and give accurate feedback based on my knowledge and experience on the product I have built over the last decade and a half.

if nobody is reading my email why does it matter if I’m accurate? if generative AI is training on my knowledge and experience where does that leave me in 5 years?

business is built on trust, AI circumvents that trust by replacing the nuances between partners that grow that trust.


those aren’t examples, they’re hearsay. “oh, everybody knows this to be true.”

You are ignoring ALL of the positive applications of AI from several decades of development, and only focusing on the negative aspects of generative AI.

generative AI is the only “AI”. everything that came before that was a thought experiment based on the human perception of a neural network. it’d be like calling a first draft a finished book.

if you consider the Turing Test AI then it blurs the line between a neural net and nested if/else logic.
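that “nested if/else logic” can be made concrete with a toy sketch: a purely rule-based responder that superficially holds a conversation with zero learning involved. everything here (keywords, replies) is hypothetical and just for illustration:

```python
# A toy rule-based "chatbot": nested if/else keyword matching, no AI at all.
# All keywords and canned replies are made up for illustration.

def respond(message):
    text = message.lower()
    words = text.split()
    if "hello" in words or "hi" in words:
        return "Hello! How are you today?"
    elif "?" in text:
        # deflect questions, ELIZA-style
        if "you" in text:
            return "Why do you ask about me?"
        else:
            return "That's an interesting question."
    elif "sad" in words or "tired" in words:
        return "I'm sorry to hear that. Tell me more."
    else:
        # generic prompt to keep the conversation going
        return "Go on."

print(respond("hi there"))      # Hello! How are you today?
print(respond("how are you?"))  # Why do you ask about me?
```

a handful of branches like this can feel conversational for a few exchanges, which is exactly why passing a casual chat test says so little about “intelligence”.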

Here is a non-exhaustive list of some applications:

  • In healthcare as a tool for earlier detection and prevention of certain diseases

great, give an example of this being used to save lives from a peer reviewed source that won’t be biased by product development or hospital marketing.

  • For anomaly detection in intrusion detection systems, protecting web servers

let’s be real here, this is still a golden turd and is more ML than AI. I know because it’s my job to know.

  • Disaster relief for identifying the affected areas and aiding in planning the rescue effort

hearsay, give a credible source of when this was used to save lives. I doubt AI could ever be used this way because it’s basic disaster triage, which would open ANY company up to litigation should their algorithm kill someone.

  • Fall detection in e.g. phones and smartwatches that can alert medical services, especially useful for the elderly.

this is dumb. AI isn’t even used in this and you know it. algorithms are not AI. falls are detected when a sudden gyroscopic speed/direction change is identified based on a set number of variables. everyone falls the same way when their phone is in their pocket. dropping your phone shows up differently due to the change in mass and spin. again, algorithmic, not AI.
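the kind of threshold logic described here can be sketched in a few lines. the thresholds, sample format, and window size below are all made up for illustration; real devices tune these against recorded sensor traces:

```python
# A minimal sketch of threshold-based fall detection: a brief free-fall
# (near-zero total acceleration) followed shortly by an impact spike.
# All constants are hypothetical, for illustration only.

FREE_FALL_G = 0.4   # total acceleration (in g) below this ~ free fall
IMPACT_G = 2.5      # spike above this right after free fall ~ impact
MAX_GAP = 10        # max samples allowed between free fall and impact

def detect_fall(accel_magnitudes):
    """accel_magnitudes: total-acceleration samples in g, at a fixed rate."""
    free_fall_at = None
    for i, g in enumerate(accel_magnitudes):
        if g < FREE_FALL_G:
            free_fall_at = i                 # device (and wearer) in free fall
        elif free_fall_at is not None and g > IMPACT_G:
            if i - free_fall_at <= MAX_GAP:  # impact shortly after free fall
                return True
            free_fall_at = None              # spike too late; reset
    return False

# a fall: normal ~1g readings, brief free fall, then a hard impact
print(detect_fall([1.0, 1.0, 0.2, 0.1, 0.3, 3.1, 1.0]))  # True
# ordinary handling: no free-fall-then-impact pattern
print(detect_fall([1.0, 1.0, 1.0, 1.0]))                  # False
```

no model, no training: just fixed thresholds over a window of sensor readings, which is the point being made above.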

  • Various forecasting applications that can help plan e.g. production to reduce waste. Etc…

forecasting is an algorithm, not AI. ML would determine how accurate an algorithm is based on what it knows. algorithms and ML are not AI.

There have even been a lot of good applications of generative AI, e.g. in production, especially for construction, where a generative AI can produce a functionally identical product with less material, while still maintaining the strength. This reduces the cost of manufacturing, and also the environmental impact due to the reduced material usage.

this reads just like the marketing bullshit companies promote to show how “altruistic” they are.

Does AI have its problems? Sure. Is generative AI being misused and abused? Definitely. But just because some applications are useless it doesn’t mean that the whole field is.

I won’t deny there is potential there, but we’re a loooong way from meaningful impact.

A hammer can be used to murder someone, that does not mean that all hammers are murder weapons.

just because a hammer is a hammer doesn’t mean it can’t be used to commit murder. dumbest argument ever, right up there with “only way to stop a bad guy with a gun is a good guy with a gun.”


If I just hand wave all the bad things and call them amazing, AI is everything but the bad things!

  • cryptobros AI evangelists