25 points

The obvious easy solution would be to teach LLMs to guide the user through their “thinking process”, or whatever you want to call it, instead of answering outright. This is what people do too, right? They look back at what they thought and/or wrote, or they say “let’s test this”, like good teachers do. The problem is that this would require some sort of intelligence, which artificial intelligence ironically doesn’t possess.

1 point

There are chain-of-thought and tree-of-thought approaches, and maybe even more. From what I understand, the model generates the answer in several passes, and even with smaller models you can get better results this way.
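Roughly, chain-of-thought prompting just changes how the question is asked. A minimal sketch (the prompt wording and the `Answer:` convention here are made up for illustration, not any particular library’s API; the actual model call is left out):

```python
# Sketch of chain-of-thought prompting: instead of asking for the answer
# directly, the prompt asks the model to show intermediate steps first.

def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, then give the final "
        "answer on a new line starting with 'Answer:'."
    )

def extract_answer(completion: str) -> str:
    """Pull the final answer out of a step-by-step completion."""
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return completion.strip()  # model ignored the format; return everything
```

The point is that all the “reasoning” tokens land in the context before the final answer is sampled.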

However, it is funny how AI (LLMs) is heavily marketed as something that will make many jobs obsolete and/or take over humanity, yet to get any meaningful results people end up building whole pipelines around LLMs, probably even using several models for different tasks. I also read a little about retrieval-augmented generation (RAG), and apparently it has a lot of caveats in terms of what data can and cannot be successfully retrieved: data has to be chunked to fit into the context window while still retaining all the valuable information, and this problem has no “one size fits all” solution.
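You can see the chunking problem even in the most naive splitter. A toy sketch (fixed-size character windows with overlap; the sizes are arbitrary, and real pipelines usually split on semantic boundaries instead):

```python
def chunk(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows that overlap, so a
    sentence cut at one chunk boundary still appears whole in the next chunk."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Even this trivial version forces a trade-off: a bigger overlap duplicates data in the index, while a smaller overlap risks splitting exactly the fact you later want to retrieve.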

Overall it feels like someone made a black box (the LLM), someone else tried to use this black box to deal with existing complexity, failed, and started building another layer of complexity around the black box. So current AI adopters can ultimately find themselves with two complex systems on their hands. And I find that kind of funny.

4 points

Didn’t Wolfram Alpha do this like 10 years ago?

15 points

This is “chain of thought” (and there are a few other techniques based on it), and yes, it gives much better results. It’s a very common thing to train into a model; ChatGPT will do this a lot. I’m surprised it didn’t do that here. Only so much you can do, I suppose.

15 points

The very strategy of asking LLMs to “reason” or explain an answer tends to make them more accurate.

Because instead of the first token being “Yes” or “No”, it’s “That depends,” or “If we look at…”.

This increases the number of tokens that determine the answer from one to, theoretically, hundreds or more.
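A toy way to see the difference (splitting on whitespace as fake “tokens”; real tokenizers are more fine-grained): each generated token is conditioned on everything before it, so an answer that arrives after a reasoning preamble is conditioned on far more context than an answer emitted as the very first token.

```python
def tokens_before(completion: str, answer: str) -> int:
    """How many generated tokens precede the final answer token."""
    return completion.split().index(answer)

blunt = "No"
reasoned = "That depends ; checking the even case fails , so : No"

# The answer token in `reasoned` is conditioned on 11 prior generated
# tokens; in `blunt` it is the very first thing the model commits to.
```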

2 points

How does this impact speed and efficiency vs 1 token?

4 points

Those openings are only a few tokens either way, so the cost difference comes from what follows them, not the opening itself. Tokenization tends to be based on byte-pair encoding (BPE), a compression-style scheme that merges frequently recurring sequences into single tokens, so a very common phrase like “Once upon a time” compresses into only a few tokens.

“Yes” is almost always followed by an explanation of a single idea, while “It depends” is followed by several possible explanations.
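For the curious, here is what one merge step of byte-pair encoding (the scheme most modern LLM tokenizers build on) looks like. A toy sketch, not any real tokenizer’s implementation:

```python
from collections import Counter

def most_frequent_pair(tokens: list[str]) -> tuple[str, str]:
    """Find the adjacent token pair that occurs most often."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens: list[str], pair: tuple[str, str]) -> list[str]:
    """Replace every occurrence of `pair` with a single merged token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

corpus = list("banana")
pair = most_frequent_pair(corpus)   # ('a', 'n') occurs twice
merged = merge_pair(corpus, pair)   # ['b', 'an', 'an', 'a']
```

Training repeats this merge step thousands of times, which is how common words and word pieces end up as single tokens in the final vocabulary.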

6 points

So just tool being a tool.

42 points

It’s because “AI” is not actual AI; it’s just a marketing buzzword.

7 points

I would consider even LLMs to be actual AI. Even bots in video games are called AIs, no? But I agree that people are vastly overestimating their capabilities, and I hate the entrepreneurial bullshitting as much as everyone else.

Machine learning! That was the better term.

3 points

It is, but not in the way the marketing implies.

