We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.
- Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
- Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.
- General Access Issues. All of these considerations exist within a larger system in which writers don’t always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination do not have to incur.
Presented without comment.
I swear if I hear “being against AI is ableist” one more time I’m gonna lose my shit. Disabled artists have existed for as long as art itself, and the only ableism here is AI-brained fuckwits using disabled people as an escape goat by suggesting they are unable to create things from their own effort and need spicy autocomplete to do so.
Edit: fuck it, I’m keeping the escape goat!
*scapegoat
I think autocorrect boned you there.
And I agree whole-heartedly.
The escape goat is the goat that is released by pressing the ESC key. It solves the problem of a frozen computer by eating the computer.
There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.
If you can’t see the issues in your own writing, you’re exactly who is most vulnerable to AI’s “syntactically valid but complete nonsense” output.
I don’t entirely agree, though.
That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren’t interested in my writing and that’s totes cool).
I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you are not a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, “I want /something/ accessible to be a rubber ducky” is valid.
My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out for the $$$ and likely its Silicon Valley connections. Wouldn’t it be nice if NaNoWriMo said something like, “Whatever technology tools exist today or tomorrow, we stand for writers’ essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process.”
Doesn’t even mention the one use case I have a moderate amount of respect for, automatically generating image descriptions for blind people.
And even those should always be labeled, since AI is categorically inferior to intentional communication.
They seem focused on the use case “I don’t have the ability to communicate with intention, but I want to pretend I do.”
AI and ML (and I’m not talking about LLM, but more about those techniques in general) have many actual uses, often when the need is “you have to make a decision quickly, and there’s a high tolerance for errors or imprecision”.
Yours is a perfect example: it’s not as good as a human-written caption, it can lack context, or be wrong. But it’s better than the alternative of having nothing.
I don’t accept that a wrong caption is better than no caption at all. I’m concerned that when you say “high tolerance for error”, what you really mean is that you think the task is unimportant.
No, what I’m saying is that if I had vision issues and had to use a screen reader to use my computer, if I had to choose between
- the person who did that website didn’t think about accessibility, so sucks to be you, you’re not gonna know what’s on those pictures
- there’s no alt, but your screen reader tries to describe the picture, you know it’s not perfect, but at least you probably know it’s not a dog.
I’d take the latter. Obviously the true solution would be to make sure everyone thinks about accessibility, but come on… Even here it’s not always the case and the fediverse is the place where I’ve seen the most focus on accessibility.
Another domain I’d see is preprocessing (a human will do the actual work) to make some tasks a bit easier or quicker and less repetitive.
But it’s better than the alternative of having nothing.
I’d take nothing over trillions of dollars dedicated to igniting the atmosphere for an incorrectly captioned video
Oh yeah I’m not arguing with you on that. AI has become synonymous with LLM, and doing the most generic models possible, which means syphoning (well stealing actually) stupid amounts of data, and wasting a quantity of energy second only to cryptocurrencies.
Simpler models that are specialized in one domain instead do not cost as much, and are more reliable. Hell, spam filters have been partially based on some ML for years.
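To make that point concrete, here’s a minimal sketch (not any specific product’s implementation) of the kind of small, single-purpose ML model spam filtering has long relied on: a naive Bayes classifier that trains on a handful of labeled messages and needs nothing beyond the standard library.

```python
import math

# Minimal naive Bayes spam filter: the kind of cheap, specialized,
# inspectable model the comment refers to -- no GPUs, no scraped corpus.
class NaiveBayesSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": {}, "ham": {}}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        for word in text.lower().split():
            counts = self.word_counts[label]
            counts[word] = counts.get(word, 0) + 1

    def predict(self, text):
        total = sum(self.doc_counts.values())
        # Vocabulary size across both classes, for add-one smoothing.
        vocab = len(self.word_counts["spam"]) + len(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # Log prior plus log likelihood of each word under this class.
            score = math.log(self.doc_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for word in text.lower().split():
                count = self.word_counts[label].get(word, 0)
                score += math.log((count + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

f = NaiveBayesSpamFilter()
f.train("win free money now", "spam")
f.train("claim your free prize", "spam")
f.train("meeting notes attached", "ham")
f.train("lunch tomorrow with the team", "ham")
print(f.predict("free money prize"))  # prints "spam"
```

A model like this is transparent (you can read the word counts that drive every decision), trains in microseconds, and makes mistakes in ways that are easy to audit — a useful contrast with general-purpose LLMs.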
But all of that is irrelevant at the moment, because AI/ML isn’t being treated as one possible solution among others that aren’t based on ML. Currently it’s something that must be pushed as hard as possible because it’s a bubble that attracts investors, and I’m so looking forward to it bursting.
It’s so wild how ChatGPT and this “style” of AI literally didn’t exist two years ago yet we’re all expected to believe it’s this essential, indispensable, irreplaceable tool that people can’t live without, and actually you’re the meanie for suggesting people do something the exact same way they would have in 2022 instead of using the environmental-disaster spam machine
A note for the unaware: NaNoWriMo also tried to cover up a scandal when one of their mods was found to be referring minors to an ABDL fetish site. To my knowledge NaNoWriMo never owned up to it, never even admitted anything was wrong until the FBI got involved, and still blocks any discussion of the situation.
https://xcancel.com/Arumi_kai/status/1760770617073082629
https://speak-out.carrd.co/
Reportedly they’re now shilling AI hard on their Facebook (I don’t have Facebook to check). I consider it 100% likely that, from this year on, everyone who uploads their 50k words to the organisation to prove completion will have their work promptly fed to the hungry algorithms.
At least one writer on the board has already resigned over the AI blog post https://xcancel.com/djolder/status/1830464713110540326