
scruiser

scruiser@awful.systems
4 posts • 149 comments

Is that supposed to be an advertisement in favor of AI? (As opposed to stealth satire?) Seeing it makes me want to get off my computer and touch grass.

Wow, that is some skilled modeling. You should become a superforecaster and write prophecies about AI timelines; they're quite popular on LessWrong.

To elaborate on the other answers about AlphaEvolve: the LLM is only one component of the system, serving as the generator of random mutations in the evolutionary process. LLM promoters like to emphasize the model's involvement, but separated from the evolutionary algorithm guiding the process through repeated generations, an LLM is about as likely to write good code as a dose of radiation is to spontaneously mutate you into being able to breathe underwater.

And the evolutionary aspect requires a lot of compute. The whitepaper doesn't specify the population size or the number of generations, but it might be hundreds or thousands of attempted solutions repeated over dozens or hundreds of generations. That means running the LLM for thousands or tens of thousands of attempts, and testing each candidate against the evaluation function every time, just to produce one piece of optimized code. This isn't an approach that is remotely affordable, or even feasible, for general software development, even if you reworked your entire development process into something like test-driven development on steroids in order to write enough tests for the evaluation function (and you would probably get stuck on that step, because it outright isn't possible for most practical real-world software).
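To make the division of labor concrete, here's a minimal sketch of that kind of loop. The function names (`llm_mutate`, `evaluate`) and the toy scoring are hypothetical stand-ins, not AlphaEvolve's actual pipeline; the point is that selection against the evaluation function, not the LLM, supplies the direction, and that the total number of LLM calls is roughly population size × generations:

```python
import random

def llm_mutate(program: str) -> str:
    # Hypothetical stand-in for an LLM call that rewrites a candidate program.
    # In a real system this is an expensive model invocation; here we just
    # append a random tag so the sketch stays runnable and deterministic-ish.
    return program + f"\n# mutation {random.randint(0, 9999)}"

def evaluate(program: str) -> float:
    # Hypothetical evaluation function: in practice, a battery of tests or a
    # numeric objective. Here: a toy score favoring programs near 200 chars.
    return -abs(len(program) - 200)

def evolve(seed: str, population_size: int = 1000, generations: int = 100) -> str:
    # Plain generational loop: mutate, score, keep the best, repeat.
    # Total LLM calls = population_size * generations, which is why this
    # is so expensive relative to writing the code directly.
    population = [seed]
    for _ in range(generations):
        offspring = [llm_mutate(random.choice(population))
                     for _ in range(population_size)]
        population = sorted(offspring, key=evaluate, reverse=True)[:10]
    return population[0]
```

Even with the defaults above, that's 100,000 model calls (and 100,000 evaluation runs) for a single optimized artifact.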

AlphaEvolve's successes are all on very specific, very well-defined and constrained problems: finding particular algorithms, as opposed to general software development.

“You claim to like unions, but seem strangely hostile to police unions. Curious.”

  • Turning Point USA
Yep. If you're looking for a snappy summary of this situation, this reddit comment had one: an open-source LLM Pokemon harness/scaffold has 4.8k lines of Python and is still missing features essential to Gemini's harness, whereas an open-source Lua script to play Pokemon is 7.2k lines, was written in 2014, and consistently speedruns the game in under two hours.

That’s unfair.

Beaker deserves better than to get compared to a eugenicist cryptofascist.

Fellas it’s almost June in the year of the “agents” and frankly I don’t see shit.

LLM agents can beat Pokemon… if you give them enough customized tools and prompting that, with the same number of lines of instruction, you could just directly code a bot that beats Pokemon without an LLM in the first place. And that's if you don't mind the LLM agent playing much, much worse than literal children.

Yeah, I pretty much agree. Penrose compares favorably to other cases of Nobel disease only because the bar is so low (the Wikipedia page has examples of racism, eugenics, homeopathy, and astrology), not because his ideas about quantum consciousness are actually good. It's not good to cite Penrose as someone notable who disagrees with the possibility of AGI, because the reason he disagrees is that he believes in quantum mysticism and misunderstands Gödel's theorem and computer science.

Yeah, it's really not productive to engage directly.

I'd almost categorize Penrose as a borderline case of Nobel disease himself for the things he's said about quantum consciousness and, relatedly, the halting problem and Gödel's incompleteness theorem. But he actually has a proposed mechanism (involving microtubules) that is testable and falsifiable, and the physics half of what he's talking about is within his domain of expertise.

Stephen Hawking was already promoting AI doomerism in 2014, but he wasn't a Nobel Prize winner. Yoshua Bengio is a doomer, but no Nobel Prize either, although he is pretty decorated in other awards. So it looks like one actual winner, plus a few notable doomers who aren't Nobel Prize winners, somehow became "winners" plural in Scott's argument from authority. Also, considering the long list of examples of Nobel disease, I really don't think Nobel Prize winners' endorsements are a good way to gauge experts' attitudes or sentiment.
