
ebu

ebu@awful.systems
0 posts • 71 comments

gay blue dog

https://lucario.dev/


The point is that even if the chances of [extinction by AGI] are extremely slim

the chances are zero. i don’t buy into the idea that the “probability” of some made-up cataclysmic event is worth thinking about as any other number because technically you can’t guarantee that a unicorn won’t fart AGI into existence which in turn starts converting our bodies into office equipment

It’s kind of like with the trinity nuclear test. Scientists were almost 100% confident that it wouldn’t cause a chain reaction that sets the entire atmosphere on fire

if you had done just a little bit of googling instead of repeating something you heard off of Oppenheimer, you would know this was basically never put forward as a serious possibility (archive link)

which is actually a fitting parallel for “AGI”, now that i think about it

EDIT: Alright, well this community was a mistake…

if you’re going to walk in here and diarrhea AGI Great Filter sci-fi nonsense onto the floor, don’t be surprised if no one decides to take you seriously

…okay it’s bad form but i had to peek at your bio

Sharing my honest beliefs, welcoming constructive debates, and embracing the potential for evolving viewpoints. Independent thinker navigating through conversations without allegiance to any particular side.

seriously do all y’all like. come out of a factory or something


as someone who only draws as a hobbyist, but who has taken commissions before, i think it would be very annoying to have a prospective client go “okay so here’s what i want you to draw” and then send over ai-generated stuff. if only because i know said client is setting their expectations for the hyper-processed, over-tuned look of the machine instead of what i actually draw


i couldn’t resist

at least when this rhetoric popped up around crypto and GameStop stocks, there was a get-rich-quick scheme attached to it. these fuckers are doing it for free


there were bits and pieces that made me feel like Jon Evans was being a tad too sympathetic to Eliezer and others whose track record really should warrant a somewhat greater degree of scepticism than he shows, but i had to tap out at this paragraph from chapter 6:

Scott Alexander is a Bay Area psychiatrist and a writer capable of absolutely magnificent, incisive, soulwrenching work … with whom I often strongly disagree. Some of his arguments are truly illuminatory; some betray the intellectual side-stepping of a very smart person engaged in rationalization and/or unwillingness to accept the rest of the world will not adopt their worldview. (Many of his critics, unfortunately, are inferior writers who misunderstand his work, and furthermore suggest it’s written in bad faith, which I think is wholly incorrect.) But in fairness 90+% of humanity engages in such rationalization without even worrying about it. Alexander does, and challenges his own beliefs more than most.

the fact that Jon praises Scott’s half-baked, anecdote-riddled, Red/Blue/Gray trichotomy as “incisive” (for playing the hits to his audience), and his appraisal of the meandering transhumanist non-sequitur reading of Allen Ginsberg’s Howl as “soulwrenching” really threw me for a loop.

and then the later description of that ultimately rather banal New York Times piece as “long and bad” (a hilariously hypocritical set of adjectives for a self-proclaimed fan of some of Scott’s work to use), and the slamming of Elizabeth Sandifer as an “inferior writer who misunderstands Scott’s work”, for, uh, correctly analyzing Scott’s tendencies to espouse and enable white supremacist and sexist rhetoric… yeah, it pretty much tanks my ability to take what Jon is writing at face value.

i don’t get how, after so many words spent being gentle but firm about Eliezer’s (lack of) accomplishments, he puts out such a full-throated defense of Scott Alexander (and the subsequent smearing of his “““enemies”””). of all people, why him?


it is a little entertaining to hear them do extended pontifications on what society would look like if we had pocket-size AGI, life-extension or immortality tech, total-immersion VR, actually-good brain-computer interfaces, mind uploading, etc. etc. and then turn around and pitch a fit when someone says “okay so imagine if there were a type of person that wasn’t a guy or a girl”


simply ask the word generator machine to generate better words, smh

this is actually the most laughable/annoying thing to me. it betrays such a comprehensive lack of understanding of what LLMs do and what “prompting” even is. you’re not giving instructions to an agent, you are feeding a list of words to prefix to the output of a word predictor

in my personal experiments with offline models, using something like “below is a transcript of a chat log with XYZ” as a prompt instead of “You are XYZ” immediately gives much better results. not good results, but better
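for what it’s worth, the difference between the two framings is just the prefix string you hand the word predictor — here’s a minimal sketch (the persona name “XYZ” and the chat format are placeholders, not tied to any particular model or library):

```python
# Two ways to frame a prompt for a base (non-instruction-tuned) local model.
# "XYZ" is a stand-in persona; no real model API is assumed here.
persona = "XYZ"

# Instruction-style framing: reads like an order given to an agent.
instruction_prompt = f"You are {persona}. Respond to the user."

# Transcript-style framing: the model is a next-token predictor, so text
# whose natural continuation *is* the persona's reply steers it more
# directly -- the model just keeps "writing the chat log".
transcript_prompt = (
    f"Below is a transcript of a chat log with {persona}.\n\n"
    f"User: hello\n"
    f"{persona}:"
)
```

the point being that the transcript version ends mid-turn, so the statistically likely continuation is the reply itself, rather than the model having to “obey” anything.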


syncthing is an extremely valuable piece of software in my eyes, yeah. i’ve been using a single synced folder as my google drive replacement and it works nearly flawlessly. i have a separate system for off-site backups, but as a first line of defense it’s quite good.


“rat furry” :3

“(it’s short for rationalist)” >:(


i think you’re missing the point that “Deepseek was made for only $6M” has been the trending headline for the past while, with the specific point of comparison being the massive costs of developing ChatGPT, Copilot, Gemini, et al.

to stretch your metaphor, it’s like someone rolling up with their car, claiming it only costs $20 (unlike all the other cars that cost $20,000), when come to find out that number is just how much it costs to fill the gas tank up once
