We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don’t always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, and inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may never have to incur.

Presented without comment.

41 points

It’s so wild how ChatGPT and this “style” of AI literally didn’t exist two years ago yet we’re all expected to believe it’s this essential, indispensable, irreplaceable tool that people can’t live without, and actually you’re the meanie for suggesting people do something the exact same way they would have in 2022 instead of using the environmental-disaster spam machine

20 points

This fractal sucks because it’s the voices of the underprivileged that need to be amplified. Using gen AI will smother them entirely.

27 points

This… sucks.

NaNoWriMo should know better than anyone that AI can’t compensate for bad writing. It’s not ableist to condemn it; it’s recognition that writing is a skill that needs to be honed. I know AO3 doesn’t condemn AI either, but that kind of just aligns with their ‘anything goes’ posting rules.

I was actually kind of excited to attempt NaNoWriMo this year, but now I’m not too sure :(

18 points

Free advice from a stranger on the Internet: Don’t let the assholes ruin your fun! If you want to try writing 50,000 words for the sake of having written 50,000 words, go for it. And I mean that quite sincerely!

4 points

that’s kind of what I think I’m gonna do anyway! I’m already a casual writer but I’d really love to turn it into something more. and actually forcing myself to get 50k words out in a specific time frame might be a fun step. I’ve already GOT complete stories rolling around in my head, it’s just the problem of getting them out on paper :p

32 points

I can at least understand the guys who are using the AI text conveyor belt to make a cheap buck. Do the hustle, get your bag, whatever. We live in a capitalist hellscape and if that’s how you choose to survive, then fuck you, but I get it.

I don’t understand these guys who think it’s actively good that people don’t write their own words. It’s just a level of misanthropy that doesn’t make sense for how inflated their egos are.

-3 points

People who hire writers don’t write their own words. You can say that human connection is a crucial part of the writing process, but I just honestly don’t think that’s true for the vast majority of things we write. But also, eventually AI will be indistinguishable from, if not better than, a human writer.

When we hit AGI, if we can continue to keep open source models, it will truly take the power of the rich and put it in the hands of the common person. The reason the rich are so powerful is that they can pay other people to do things. Most people only have the power to do what they can physically do in the world, but the rich can multiply that effort by however many people they can afford.

3 points

When we hit AGI, if we can continue to keep open source models, it will truly take the power of the rich and put it in the hands of the common person.

Setting aside the “and then a miracle occurs” bit, this basically seems to be “rich people get to have servants and slaves… what if we democratised that?”. Maybe AGI will invent a new kind of ethics for us.

But the rich can multiply that effort by however many people they can afford.

If the hardware to train and run what currently passes for AI was cheap and trivially replicable, Jensen Huang wouldn’t be out there signing boobs.

4 points

when my dick grows wings, it will truly therefore be a magical flying unicorn pony

-4 points

Do you not think AGI is possible?

18 points

I feel like this has to be built on a lack of appreciation for words as a facilitator of human connection. By finding means of expression and being understood we manage to link our brains together on a conceptual level. By building these skills communally we expand the possible bandwidth of connection and even the range and fidelity of our own thoughts.

This has to be motivated by a view of words as Authoritative Things that sit on shelves and bestseller lists and are authored by Smart And Successful People.

15 points

Exactly, it’s a natural conclusion of accepting commodification as the One True Path. Words don’t mean anything if they don’t make a profit, and clearly you’re a bozo who can’t Make It (and the bar of Making It is always rising because Number Go Up), so you should join the Borg and let my buddy Claude speak for you.

10 points

By the way, thank you Terry Pratchett for teaching me the use of Meaningful Capitalisation.

45 points

Doesn’t even mention the one use case I have a moderate amount of respect for, automatically generating image descriptions for blind people.

And even those should always be labeled, since AI is categorically inferior to intentional communication.

They seem focused on the use case “I don’t have the ability to communicate with intention, but I want to pretend I do.”

4 points

They added those at my work and they are terrible. A picture of the company CEO standing in front of a screen with text announcing a major milestone? “Man in front of a screen.” I could get more information from the image filename.

-4 points

AI and ML (and I’m not talking about LLMs specifically, but about those techniques in general) have many actual uses, often when the need is “you have to make a decision quickly, and there’s a high tolerance for errors or imprecision”.

Your example is a perfect illustration: it’s not as good as a human-written caption, and it can lack context or be wrong. But it’s better than the alternative of having nothing.

10 points

But it’s better than the alternative of having nothing.

I’d take nothing over trillions of dollars dedicated to igniting the atmosphere for an incorrectly captioned video

4 points

Oh yeah, I’m not arguing with you on that. AI has become synonymous with LLMs and with building the most generic models possible, which means siphoning (well, stealing, actually) stupid amounts of data and wasting a quantity of energy second only to cryptocurrencies.

Simpler models that are specialized in one domain instead don’t cost as much and are more reliable. Hell, spam filters have been partially based on ML for years.
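For what it’s worth, the kind of small, specialized model classic spam filters rely on can be sketched in a few lines. This is a toy naive Bayes scorer, purely illustrative; the training messages and word counts are made up, not from any real filter:

```python
# Toy naive Bayes spam scorer: an example of a small,
# domain-specialized ML model (illustrative only).
import math
from collections import Counter

def train(spam, ham):
    """Count word frequencies in labeled spam/ham messages."""
    spam_words = Counter(w for msg in spam for w in msg.lower().split())
    ham_words = Counter(w for msg in ham for w in msg.lower().split())
    return spam_words, ham_words

def spam_score(message, spam_words, ham_words):
    """Log-odds that a message is spam, with add-one smoothing."""
    s_total = sum(spam_words.values())
    h_total = sum(ham_words.values())
    vocab = len(set(spam_words) | set(ham_words))
    score = 0.0
    for w in message.lower().split():
        p_spam = (spam_words[w] + 1) / (s_total + vocab)
        p_ham = (ham_words[w] + 1) / (h_total + vocab)
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

spam_msgs = ["win free money now", "free prize claim now"]
ham_msgs = ["meeting notes attached", "lunch tomorrow?"]
sw, hw = train(spam_msgs, ham_msgs)
print(spam_score("free money", sw, hw))        # positive: leans spam
print(spam_score("meeting tomorrow", sw, hw))  # negative: leans ham
```

Real filters add much more (headers, phrase features, per-user feedback), but the core is this cheap, narrow statistical model rather than a general-purpose LLM.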

But all of that is irrelevant at the moment, because AI/ML is not treated as one possible solution among other solutions that are not based on ML. Currently it’s something that must be pushed as much as possible because it’s a bubble that attracts investors, and I’m so looking forward to it bursting.

11 points

I don’t accept that a wrong caption is better than no caption. I’m concerned that when you say “high tolerance for error”, what you really mean is that you think it’s unimportant.

-1 points

No, what I’m saying is that if I had vision issues and had to use a screen reader on my computer, and I had to choose between

  • the person who built the website didn’t think about accessibility, so sucks to be you, you’re not going to know what’s in those pictures; or
  • there’s no alt text, but your screen reader tries to describe the picture; you know it’s not perfect, but at least you probably know it’s not a dog,

I’d take the latter. Obviously the true solution would be to make sure everyone thinks about accessibility, but come on… Even here it’s not always the case, and the fediverse is the place where I’ve seen the most focus on accessibility.

Another domain I’d point to is preprocessing (a human still does the actual work) to make some tasks a bit easier, quicker, and less repetitive.
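The fallback I’m describing is roughly this (a minimal sketch; the function name and `model_caption` argument are hypothetical, standing in for whatever captioning model a screen reader might call):

```python
# Sketch of the alt-text fallback described above: prefer the
# author's intentional alt text; otherwise surface the machine
# guess, but clearly labeled as auto-generated.
def describe_image(alt_text: str, model_caption: str) -> str:
    if alt_text:
        # Author wrote intentional alt text: use it as-is.
        return alt_text
    if model_caption:
        # No alt text: fall back to the model, with a label.
        return f"Auto-generated description: {model_caption}"
    return "Image, no description available"

print(describe_image("CEO announcing the Q3 milestone", "man in front of a screen"))
print(describe_image("", "man in front of a screen"))
```

The labeling matters: the user knows when they’re getting an imperfect guess rather than intentional communication.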


TechTakes

!techtakes@awful.systems
