I know people here are very skeptical of AI in general, and there is definitely a lot of hype, but I think the progress in the last decade has been incredible.

Here are some quotes:

“In my field of quantum physics, it gives significantly more detailed and coherent responses” than did the company’s last model, GPT-4o, says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany.

Strikingly, o1 has become the first large language model to beat PhD-level scholars on the hardest series of questions — the ‘diamond’ set — in a test called the Graduate-Level Google-Proof Q&A Benchmark (GPQA). OpenAI says that its scholars scored just under 70% on GPQA Diamond, and o1 scored 78% overall, with a particularly high score of 93% in physics.

OpenAI also tested o1 on a qualifying exam for the International Mathematics Olympiad. Its previous best model, GPT-4o, correctly solved only 13% of the problems, whereas o1 scored 83%.

Kyle Kabasares, a data scientist at the Bay Area Environmental Research Institute in Moffett Field, California, used o1 to replicate some coding from his PhD project that calculated the mass of black holes. “I was just in awe,” he says, noting that it took o1 about an hour to accomplish what took him many months.

Catherine Brownstein, a geneticist at Boston Children’s Hospital in Massachusetts, says the hospital is currently testing several AI systems, including o1-preview, for applications such as connecting the dots between patient characteristics and genes for rare diseases. She says o1 “is more accurate and gives options I didn’t think were possible from a chatbot”.


Its energy consumption is absolutely unacceptable; it puts the Crypto market to utter shame regarding its ecological impact. I mean, Three Mile Island Unit 1 is being recommissioned to service Microsoft datacenters instead of the roughly 800,000 homes its 835-megawatt output could supply. This is being made possible by taxpayer-backed loans from the federal government. So Americans’ tax dollars are being funneled into a private energy company, to give a private tech company 835 megawatts of power for a service it is attempting to profit from, instead of providing clean, reliable energy to their households.
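A quick sanity check of the 835 MW versus 800,000 homes claim. The ~10,500 kWh/year average US household figure is my own assumption, not from the comment, so treat this as a rough sketch:

```python
# Back-of-envelope check: does 835 MW plausibly serve ~800,000 homes?
plant_output_mw = 835
homes = 800_000

kw_per_home = plant_output_mw * 1_000 / homes          # MW -> kW, per home
avg_household_draw_kw = 10_500 / (365 * 24)            # annual kWh -> continuous kW

print(f"{kw_per_home:.2f} kW available per home")       # ~1.04 kW
print(f"{avg_household_draw_kw:.2f} kW average US household draw")  # ~1.20 kW
```

So the claim is in the right ballpark: the plant's output is roughly the continuous average draw of that many households.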

Power consumption is only half of the ecological impact that AI brings to the table, too. The cooling requirements of AI text generation have been found to consume just over one bottle of water (519 milliliters) per 100 words — roughly the length of a brief email. In areas where electricity costs are high, these datacenters consume an insane amount of water from the local supply. In one case, The Dalles, Google’s datacenters were using nearly a quarter of all the water available in the town. Some of these datacenters use cooling towers where external air travels across a wet medium so the water evaporates. That means the cooling water is not recycled: it is consumed and removed from whatever water supply they are drawing from.
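Taking the quoted 519 ml per 100 words at face value, the cost scales with output length like this (a rough sketch of the quoted rate, not an independent measurement):

```python
# Scale the reported figure of ~519 ml of cooling water per 100 words
# of generated text (the figure quoted above).
ML_PER_100_WORDS = 519

def cooling_water_litres(words: int) -> float:
    """Estimated cooling water consumed generating `words` of text."""
    return words / 100 * ML_PER_100_WORDS / 1_000

print(cooling_water_litres(100))     # a brief email: 0.519 L
print(cooling_water_litres(10_000))  # a short report: 51.9 L
```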

These datacenters consume resources but often bring no economic advantage to the people living in the areas where they are constructed. Instead, those people are subjected to the noise of the cooling systems (if air-cooled), a hit to their property values, strain on their local electric grid, and often a massive draw on local water (if evaporatively cooled).

Models need to be trained, and that training happens in datacenters, sometimes taking months to complete. Training is an expense the company pays just to get these systems off the ground. So before any productive benefit can be gained from these AI systems, a massive amount of resources must be consumed just to train the models. According to the Washington Post, Microsoft’s datacenters used 700,000 liters of water while training GPT-3. Meta used 22 million liters of water training its open-source LLaMA-3 model.

And for what, exactly? As others have pointed out in this thread, and broadly outside this community, these models only succeed wildly when placed into a bounded test scenario. As commenters on this NYT article point out:

Major problem with this article: competition math problems use a standardized collection of solution techniques, it is known in advance that a solution exists, and that the solution can be obtained by a prepared competitor within a few hours.

“Applying known solutions to problems of bounded complexity” is exactly what machines always do and doesn’t compete with the frontier in any discipline.

Note in the caption of the figure that the problem had to be translated into a formalized statement in AlphaGeometry’s own language (presumably by people). This is often the hardest part of solving one of these problems.

These systems are only capable of performing within the bounds of their existing training content. They are incapable of producing anything new or unexplored. When one data scientist looked at the o1 model, he had this to say about the speed at which it constructed code that had taken him months to complete:

Kyle Kabasares, a data scientist at the Bay Area Environmental Research Institute in Moffett Field, California, used o1 to replicate some coding from his PhD project that calculated the mass of black holes. “I was just in awe,” he says, noting that it took o1 about an hour to accomplish what took him many months.

He makes these remarks with almost no self-awareness. The likelihood that this model was trained on his very own research is very high, so naturally the system was able to hand his solution back to him. The data scientist labored for months creating a solution that, presumably, did not exist beforehand; the o1 model simply internalized it, and when asked for that solution, reproduced it. This isn’t an astonishing accomplishment. It’s a complicated, expensive, and damaging search engine that will hallucinate an answer when you ask it to produce something outside the bounds of its training.

The vast majority of public use cases for these systems are not cutting-edge research. It’s writing the next 100-word email you don’t want to write, and sacrificing a bottle of water every time. It’s taking jobs held by working people and replacing them with a system that is often exploitable, costly, and inefficient at performing the job. These systems are a parlor trick at best, and at worst a demon whose hunger for electricity and water is insatiable.


See?

This is so much more credible than going “I hate AI, AI is shit”

Posters like UlyssesT make everyone look bad.


Hey, you’re talkin’ about my man UlyssesT all wrong, it’s the wrong tone. You do it again, and I’ll have to pull out the PPB. Still nothing to say though, I see. Do you not have much of a defense against the idea that the slop slot machine everyone worships is destroying communities and the ecosystem at large? I’m not sure how you can look at the comment I left and have so little to say about these truths. Do you believe the ends justify the means in some way? What is it?


Banned FrogPrincess @lemmy.ml from the community technology reason: slap fight expires: in 3 days

You’re going to have to wait a while.

Deleted by creator

Shut the fuck up. Nobody wanted to respond to you (except RedWizard, he must have the patience of a saint) because we’ve already done this topic to death, and leading with an ableist meme doesn’t exactly imply you’re acting in good faith.

Deleted by creator

You didn’t get a reply to your effortpost because

You said this less than 15 minutes after the good comment.

Removed by mod

The classic 🙉🙈 of the “futurist”. We are on our way to AI Utopia, and you won’t tell me otherwise!

Deleted by creator

it puts the Crypto market to utter shame regarding its ecological impact

Crypto is still worse on electricity usage. I haven’t seen actual stats for AI-only electricity usage, but crypto uses about 0.4 percent of the global electricity supply, compared to ~1.5 percent for all data centres, and I don’t think AI comprises a full third of that total. The AI hype crowd project that usage will increase significantly, but that would require much better “AI” and actual use cases before that kind of growth makes it a substantial issue.

Evaporative losses are vastly worse for agriculture with open irrigation. 22 million litres of water sounds like a lot, but it’s only 0.022 gigalitres. Google used a total of ~25 gigalitres across all their data centres, while Arizona alone uses about 8,000 gigalitres a year.

For The Dalles example, Google used ~1.3 gigalitres in a town of 15,000-25,000 people, so a quarter of the town’s supply going to a massive data centre is not unreasonable.

As you note, it’s junk, so it’s a waste of resources either way, but unless they manage to double the industry year on year (doubtful), it won’t be a huge issue.
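The unit conversions above check out; here is the arithmetic spelled out, using only the rough figures quoted in this thread:

```python
# Put the thread's water figures on a common scale (gigalitres).
LITRES_PER_GL = 1_000_000_000

llama3_training_gl = 22_000_000 / LITRES_PER_GL   # Meta, LLaMA-3 training
gpt3_training_gl = 700_000 / LITRES_PER_GL        # Microsoft, GPT-3 training
google_total_gl = 25                              # all Google data centres
arizona_annual_gl = 8_000                         # one US state, per year

print(f"LLaMA-3 training: {llama3_training_gl} GL")   # 0.022 GL
print(f"GPT-3 training: {gpt3_training_gl} GL")       # 0.0007 GL
print(f"Google vs Arizona: {google_total_gl / arizona_annual_gl:.2%}")  # 0.31%
```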


technology

!technology@hexbear.net

Create post

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Rules:

  • 1. Obviously abide by the sitewide code of conduct. Bigotry will be met with an immediate ban
  • 2. This community is about technology. Offtopic is permitted as long as it is kept in the comment sections
  • 3. Although this is not /c/libre, FOSS related posting is tolerated, and even welcome in the case of effort posts
  • 4. We believe technology should be liberating. As such, avoid promoting proprietary and/or bourgeois technology
  • 5. Explanatory posts to correct the potential mistakes a comrade made in a post of their own are allowed, as long as they remain respectful
  • 6. No crypto (Bitcoin, NFT, etc.) speculation, unless it is purely informative and not too cringe
  • 7. Absolutely no tech bro shit. If you have a good opinion of Silicon Valley billionaires please manifest yourself so we can ban you.

Community stats

  • 1.1K

    Monthly active users

  • 1.6K

    Posts

  • 20K

    Comments