image description (contains clarifications on background elements)

Lots of different seemingly random images in the background, including some fries, mr. krabs, a girl in overalls hugging a stuffed tiger, a mark zuckerberg “big brother is watching” poster, two images of fluttershy (a pony from my little pony), one of them reading “u only kno my swag, not my lore”, a picture of parkzer from the streamer “dougdoug”, and a slider gameplay element from the rhythm game “osu”. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM “disagrees” with you, that’s the training’s fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn’t cause too much hate. i just wanna know what u people and creatures think <3

0 points

I don’t see how AI is inherently bad for the environment. I know they use a lot of energy, but if the energy comes from renewable sources, like solar or hydroelectric, then it shouldn’t be a problem, right?

2 points

i kinda agree. currently many places still use oil for energy generation, so that kinda makes sense.

but if powered by cool solar panels and cool wind turbine things, that would be way better. then it would only be down to the production of GPUs and the housing.

2 points

Also cooling! Right now each interaction with ChatGPT uses roughly a bottle’s worth of water per 100 words generated (according to a research study from 2023). That was with GPT-4, so it may be slightly more or slightly less now, but probably more, considering their models have actually gotten more expensive for them to host (more energy used -> more heat produced -> more cooling needed).

Now consider how that scales with the number of people using ChatGPT every day. Even if the energy is clean, everything else about AI isn’t.
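Just to put rough numbers on that scaling (the per-100-words figure is the claim above; the user and word counts are made-up assumptions, purely for illustration):

```python
# Back-of-envelope sketch, not a measurement. All numbers are assumptions.
LITERS_PER_100_WORDS = 0.5       # "a bottle's worth" per 100 words (claim above)
DAILY_USERS = 100_000_000        # assumed daily users
WORDS_PER_USER = 500             # assumed words generated per user per day

daily_liters = DAILY_USERS * WORDS_PER_USER / 100 * LITERS_PER_100_WORDS
print(f"{daily_liters:,.0f} L/day")                            # 250,000,000 L/day
print(f"about {daily_liters / 2_500_000:,.0f} Olympic pools/day")  # ~100 pools
```

Even if every one of those assumptions is off by a factor of five in either direction, it’s still an absurd amount of water for a chatbot.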

2 points

The problem is that we only have a finite amount of energy. If all of our clean energy output is going toward AI, then yeah, it’s clean, but it means we have to use other, less clean sources of energy for things that are objectively more important than AI - powering homes, food production, hospitals, etc.

Even “clean” energy still has environmental downsides, like noise pollution (which impacts local wildlife), taking up large amounts of space (deforestation), using up large amounts of water for cooling, or emissions that aren’t greenhouse gases. Ultimately we’re still using unfathomably large amounts of energy to train and run a corporate chatbot built on all our personal data, and that energy use still has consequences even if it’s “clean”.

6 points

This list is missing: AI generated images are not art.

1 point

I disagree, but I can respect your opinion.


i also think that way, but it’s also true that generated images are being used all over the web already, so people generally don’t seem to care.

10 points

I’ll just repeat what I’ve said before, since this seems like a good spot for this conversation.

I’m an idiot with no marketable skills. I want to write, I want to draw, I want to do a lot of things, but I’m bad at all of them. GPT-like AI sounds like a good way for someone like me to get my vision out of my brain and into the real world.

My current project is a wiki of lore for a fictional setting, for a series of books that I will never actually write. My ideal workflow involves me explaining a subject as best I can to the ai (an alien technology or a kingdom’s political landscape, or drama between gods, or whatever), telling the ai to ask me questions about the subject at hand to make me write more stuff, repeating that a few times, then having the ai summarize the conversation back to me. I can then refer to that summary as I write an article on the subject. Or, me being lazy, I can just copy-paste the summary and that’s the article.

As an aside, I really like chatgpt 4o for lore exploration, but I’d prefer to run an ai on my own hardware. Sadly, I do not understand github and my brain glazes over every time I look at that damn site.
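Someone did once send me a llama-cpp-python snippet that supposedly skips github entirely (a pip install plus a model file you download separately); I can’t vouch for it, and the model path is just a placeholder, but apparently it’s roughly this:

```python
# Rough sketch of local lore-exploration with llama-cpp-python.
# "some-model.gguf" is a placeholder for whatever model file you download.
from llama_cpp import Llama

llm = Llama(model_path="models/some-model.gguf", n_ctx=4096)
resp = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are a worldbuilding assistant. Ask probing questions."},
    {"role": "user", "content": "My kingdom's two noble houses hate each other. Ask me why."},
])
print(resp["choices"][0]["message"]["content"])
```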

It is way too easy for me to just let the ai do the work for me. I’ve noticed that when I try to write something without ai help, it’s worse now than it was a few years ago. Generative ai is a useful tool, but it should be part of a larger workflow; it should not be the entire workflow.

If I was wealthy, I could just hire or commission some artists and writers to do the things. From my point of view, it’s the same as having the ai do the things, except it’s slower and real humans benefit from it. I’m not wealthy though, hell, I struggle to pay rent.

The technology is great, the business surrounding it is horrible. I’m not sure what my point is.

1 point

I’m sorry, but did you ever think of just trying? To write a story, you have to work on it and get better.

GPT and other llms can’t write a story for you, and if you somehow wrangle one into writing a story without losing its thread - then is it even your story?

look, it’s not going to be a good story if you don’t write it yourself. There’s a reason why companies want to push this: they don’t want writers.

I’m sure you can write something, but you have issues you need to deal with before you can delve into this. I’m not saying it’s easy, but it’s worth it.

Also read books. Read books to become a better writer.

PS: if you make an llm write it, you’ll run into issues copyrighting it, at least last I heard.

19 points

I honestly am skeptical about the medical stuff. Machine learning can’t even reliably do the stuff it should be good at, specifically identifying mushrooms / mycology in general.

3 points

From what little I know of it, what it does is sorta twofold:

  1. It looks through documentation across a patient record for patterns a doctor might miss. For example, a patient comes in complaining of persistent headaches/fatigue. A doctor might look at that in isolation and just try to treat the symptoms, but an AI might see some potentially relevant lab results in their history and recommend more testing to rule out a cancer diagnosis that the doctor might have thought unlikely without awareness of that earlier data.

  2. Doctors have to do a lot of busywork in their record keeping that AIs can help streamline: routine documentation, attestations, statements, etc. Since so much of it is very template-heavy already, an AI might be able to speed up the process as well as tailor it better to the patient. E.g. the record says “assigned male at birth” and an ER doctor defaults to he/him pronouns based only on that marker, but the patient is also being seen by a gender clinic where she is receiving gender-affirming treatment as a trans woman, and the AI surfaces that data so the documentation can be corrected and made more accurate and personalized.

In reality, I am sure that practices and hospital systems are just going to use this as an excuse to say “You don’t need to spend as much time on documentation and chart review now, so you can see more patients, right?” It’s the cotton gin issue: the efficiency gain gets absorbed as higher expected output rather than less work.

9 points

that is interesting. i know that there are plenty of plant recognition ones, and recently there have been some classifiers specifically trained on human skin to tell whether a spot is a tumor or not. that one is better than a good human doctor in their field, so i wonder what happened to that mushroom classifier. Maybe it is too small to generalize, or it was trained in too specific an environment.

7 points

I haven’t looked closely enough to know, but I recall medical image analytics being “better than human” well before the current AI/LLM rage. Like, those systems use machine learning, but in a more deterministic, more conventional algorithm sense. I think they are also less worried about false positives, because the algorithm is always assumed to be checked by a human physician, so my impression is that the real sense in which medical image analysis is ‘better’ is that it identifies smaller or more obscure defects that a human quickly scanning the image might overlook.

If you’re using a public mushroom identification AI as the only source for a life-and-death choice, then false positives are a much bigger problem.

3 points

yes, that is what i have heard too. there was a news thing a few days ago saying this “cancer scanner” will be available to all doctors in two years. so that’s great! but yes, we very much still need a human to watch over it, so its out-of-distribution predictions stay in check.

5 points

Do not trust AI to tell you if you can eat a mushroom. Ever. The same kind of complexity goes into medicine. Sure, the machine learning process can flag something as cancerous (for example), but it will always and forever need human review unless we somehow completely change the way machine learning works and speed it up by an order of magnitude.

5 points

yeah, we still very much need real humans to go “yes, this is indeed cancer”, but this ai cancer detection feels like a reasonable “first pass” to quickly get a somewhat good estimate, rather than no estimate at all where doctors are scarce.

3 points

Having worked with ML in manufacturing: if your task is precise enough and your input normalized enough, it can detect very impressive things. Identifying mushrooms as a whole is already too grand a task, especially as it has to deal with different camera angles, lighting… But ask it to differentiate between just a few species, and always offer pictures using similar angles, lighting, and background, and the results will most likely be stellar.
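For a sense of how small that kind of narrow task is these days, here’s a rough transfer-learning sketch (the folder layout, species count, and hyperparameters are all made-up placeholders, not anything from a real deployment):

```python
# Sketch: narrow "few species, controlled conditions" classifier via
# transfer learning. Assumes images are sorted into one folder per species.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("mushrooms/train", tf)   # e.g. 3 species, 3 folders
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                          # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                                   # a few epochs is plenty
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```

With normalized angles and lighting, something this simple can hit very high accuracy; point the same model at arbitrary forest photos and it falls apart.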

2 points

Like I said, I’m just skeptical. I know it can do impressive things, but unless we get a giant leap forward, it will always need extensive human review when it comes to medicine (like my mycology example). In my opinion, it is a tool for quick-and-dirty analysis in the medical field, one that may speed things up for human review.

2 points

In my experience, the best uses have been less fact-based and more “enhancement”-based. For example, if I write an email and I just feel like I’m not hitting the right tone, I can ask it to “rewrite this email with a more inviting tone” and it will do a pretty good job. I might have to tweak it, but it works.

Same goes for image generation. If I already know what I want to make, I can have it output the different elements I need in the appropriate style and piece them together myself. Or I can take a photograph that I took and use it to make small edits that would typically be very time-consuming.

I don’t think it’s very good or ethical to have it completely make stuff up that you then use 1:1. It should be a tool to aid you, not a tool to do things for you completely.

3 points

yesyesyes, i can see that completely. i might not be the biggest fan of using parts of generated images, but that still seems fine. using LLMs for fact-based stuff is like - the worst use case. You only get better output if you provide it with the facts, like in a document or a search result, so it’s essentially just rephrasing or summarizing the content, which LLMs are good at.
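as a rough sketch of what i mean by “provide it with the facts” (the openai client is just one example; the file name and model name are placeholders, and any chat-completion API works the same way):

```python
# Sketch: summarization over a provided document instead of asking for recall.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
document = open("meeting_notes.txt").read()  # placeholder source document

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided document."},
        {"role": "user", "content": f"Document:\n{document}\n\nSummarize the key points."},
    ],
)
print(resp.choices[0].message.content)
```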

