ArchRecord
SBF’s case was completely different, since the legality of his actions was much more easily provable as a crime. Not only was every transaction on the actual blockchain, which is immutable and couldn’t have possibly been faked, but his actions didn’t exactly have any nuance that could be argued in court. There were funds, they weren’t his, but he used them. Case closed.
Trump’s case involves not only a lot more possible statutes he could have violated, but also a lot of arbitrary actions that don’t perfectly fall into a rigid box of “this is legal” or “this is illegal.”
Plus, if you have more money to draw out legal fights, you can keep them going for longer, regardless of your case. SBF had most of his assets confiscated since they were almost entirely from the fraud, so he didn’t have the same luxuries.
Why doesn’t it apply to genocide? What’s the defining line?
Trump has not only supported the actions of the US in relation to Israel, but he’s very clearly heavily racist, an ethnostatist, and would like nothing more than to increase Israel’s power as a US ally by letting them genocide the Palestinian population completely regardless of any complaints by his constituents.
Genuinely, which side do you think is more likely to stop if pressured enough by the American people, or by international orgs? Trump, or Kamala? Because, at least personally, I doubt Trump would be more likely to stop it, let alone even just give it less support in general.
If we only have these two candidates to pick between, I’d rather go for the one that we at least have a chance of convincing to stop, rather than one that we know will likely just ignore the American people in favor of his own ideals.
There is no “harm reduction”
There most certainly is. If one side is worse than the other, voting for the one that does less harm reduces (but doesn’t eliminate or fix) the harm being done.
I’m not saying it’s a solution, it’s definitely a bandage on a bleeding wound, but a bandage is better than letting it bleed out.
can you imagine anything that would cause you to not vote for the democrats? If full throated support for genocide isn’t a bridge too far, I have to wonder if you have any absolute principles at all.
If the Democrats implemented policies that would cause greater overall harm than the Republicans, then I would vote the other direction, but that would imply a total switch in partisan policies. (for an example of some policies I support to give you a general idea of what I consider to be harm, I’m a socialist, utilitarian, I believe all lives have equal value, I’m pro-abortion, anti-fascist, I hope you get the gist.)
Voting for the greater evil never gives you a beneficial edge. Voting for nobody when the greater evil benefits from that won’t give you a higher likelihood of implementing positive policy in the future.
I absolutely don’t support the Democrat’s endorsement of a genocide, but acting as if they’re the only ones doing it is silly. Trump is very clearly even more genocidal, and would not only implement even worse policy with regard to the Palestinian people, but would also do numerous other genocidal acts here, and in other locations abroad.
Statistically speaking, the only thing that would give the genocide a higher likelihood of ending, when the only two possibilities in this election are Democrats or Republicans, is the Democrats, because they will likely do the least amount of genocide by comparison. If we want any hope of actually stopping the genocide, we first want the most sympathetic party to that idea in power.
But of course, if you don’t believe harm reduction as a concept even exists, then I wouldn’t expect this argument to convince you. It’s fine if you aren’t though. You’re absolutely entitled to your own opinion, however wrong I may think it to be.
Then I suppose you simply must reject the world we live in right now.
Both sides are going to continue the genocide, we know that, it’s their stated positions. The most we can do with our votes in the current election is take a stance of harm reduction, since that’s the only choice available. Anything else won’t make a change to the system of oppression facing the Palestinians today.
But will they, that’s the question.
They have the ability to, but if they won’t, then we still end up with the same two choices. And if picking the other side won’t make them change their mind, then whatever they can do is irrelevant in a conversation about what will produce the best tangible outcome.
That depends on how you define utilitarianism though.
That minority also factors into a utilitarian’s assessment of what will maximize happiness. If 10% of the population hates the 0.1% minority, but oppressing that minority would also harm them, then you have to factor in the relative harm caused to them as well, not just in raw %'s, but also in terms of if the value given to the 10% from their oppression would outweigh the harm done to those being oppressed.
Furthermore, I’d argue most utilitarians would argue that the very hate towards that minority in the first place is what causes harm, not the minority themselves. The best utilitarian action to take would be to reduce the hate for that minority, and increase their acceptableness, rather than oppress that minority to satisfy the 10%. Especially considering we know this tends to not just be a one-time thing, and that hate will likely continue, leading to further oppression over time, and harm not only to the minority, but also to the mental well-being of the 10%. Thus, the best course of action would be to eliminate the hate, not the minority.
Of course, utilitarians aren’t a monolith, but that’s at least how I would interpret the situation.
Computers are a fundamental part of that process in modern times.
If you were taking a test to assess how much weight you could lift, and you got a robot to lift 2,000 lbs for you, saying you should pass for lifting 2000 lbs would be stupid. The argument wouldn’t make sense. Why? Because the same exact logic applies. The test is to assess you, not the machine.
Just because computers exist, can do things, and are available to you, doesn’t mean that anything to assess your capabilities can now just assess the best available technology instead of you.
Like spell check? Or grammar check?
Spell/Grammar check doesn’t generate large parts of a paper, it refines what you already wrote, by simply rephrasing or fixing typos. If I write a paragraph of text and run it through spell & grammar check, the most you’d get is a paper without spelling errors, and maybe a couple different phrases used to link some words together.
If I asked an LLM to write a paragraph of text about a particular topic, even if I gave it some references of what I knew, I’d likely get a paper written entirely differently from my original mental picture of it, that might include more or less information than I’d intended, with different turns of phrase than I’d use, and no cohesion with whatever I might generate later in a different session with the LLM.
These are not even remotely comparable.
Assuming the point is how well someone conveys information, then wouldn’t many people better be better at conveying info by using machines as much as reasonable? Why should they be punished for this? Or forced to pretend that they’re not using machines their whole lives?
This is an interesting question, but I think it mistakes a replacement for a tool on a fundamental level.
I use LLMs from time to time to better explain a concept to myself, or to get ideas for how to rephrase some text I’m writing. But if I used the LLM all the time, for all my work, then me being there is sort of pointless.
Because, the thing is, most LLMs aren’t used in a way that conveys info you already know. They primarily operate by simply regurgitating existing information (rather, associations between words) within their model weights. You don’t easily draw out any new insights, perspectives, or content, from something that doesn’t have the capability to do so.
On top of that, let’s use a simple analogy. Let’s say I’m in charge of calculating the math required for a rocket launch. I designate all the work to an automated calculator, which does all the work for me. I don’t know math, since I’ve used a calculator for all math all my life, but the calculator should know.
I am incapable of ever checking, proofreading, or even conceptualizing the output.
If asked about the calculations, I can provide no answer. If they don’t work out, I have no clue why. And if I ever want to compute something more complicated than the calculator can, I can’t, because I don’t even know what the calculator does. I have to then learn everything it knows, before I can exceed its capabilities.
We’ve always used technology to augment human capabilities, but replacing them often just means we can’t progress as easily in the long-term.
Short-term, sure, these papers could be written and replaced by an LLM. Long-term, nobody knows how to write papers. If nobody knows how to properly convey information, where does an LLM get its training data on modern information? How do you properly explain to it what you want? How do you proofread the output?
If you entirely replace human work with that of a machine, you also lose the ability to truly understand, check, and build upon the very thing that replaced you.
Schools are not about education but about privilege, filtering, indoctrination, control, etc.
Many people attending school, primarily higher education like college, are privileged because education costs money, and those with more money are often more privileged. That does not mean school itself is about privilege, it means people with privilege can afford to attend it more easily. Of course, grants, scholarships, and savings still exist, and help many people afford education.
“Filtering” doesn’t exactly provide enough context to make sense in this argument.
Indoctrination, if we go by the definition that defines it as teaching someone to accept a doctrine uncritically, is the opposite of what most educational institutions teach. If you understood how much effort goes into teaching critical thought as a skill to be used within and outside of education, you’d likely see how this doesn’t make much sense. Furthermore, the heavily diverse range of beliefs, people, and viewpoints on campuses often provides a more well-rounded, diverse understanding of the world, and of the people’s views within it, than a non-educational background can.
“Control” is just another fearmongering word. What control, exactly? How is it being applied?
Maybe if a “teacher” has to trick their students in order to enforce pointless manual labor, then it’s not worth doing.
They’re not tricking students, they’re tricking LLMs that students are using to get out of doing the work required of them to get a degree. The entire point of a degree is to signify that you understand the skills and topics required for a particular field. If you don’t want to actually get the knowledge signified by the degree, then you can put “I use ChatGPT and it does just as good” on your resume, and see if employers value that the same.
Maybe if homework can be done by statistics, then it’s not worth doing.
All math homework can be done by a calculator. All the writing courses I did throughout elementary and middle school would have likely graded me higher if I’d used a modern LLM. All the history assignment’s questions could have been answered with access to Wikipedia.
But if I’d done that, I wouldn’t know math, I would know no history, and I wouldn’t be able to properly write any long-form content.
Even when technology exists that can replace functions the human brain can do, we don’t just sacrifice all attempts to use the knowledge ourselves because this machine can do it better, because without that, we would be limiting our future potential.
This sounds fake. It seems like only the most careless students wouldn’t notice this “hidden” prompt or the quote from the dog.
The prompt is likely colored the same as the page to make it visually invisible to the human eye upon first inspection.
And I’m sorry to say, but often times, the students who are the most careless, unwilling to even check work, and simply incapable of doing work themselves, are usually the same ones who use ChatGPT, and don’t even proofread the output.