• The Rabbit R1 AI box is actually an Android app on limited $200 hardware, running on AOSP without Google Play (a minimal sideloading sketch follows this summary).
  • Rabbit Inc. is unhappy that details of its tech stack are public and is threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware, as it provides the essential functionality without the need for Google Play.
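
Since the point of the summary is that the R1 software is just an ordinary Android app, it can in principle be sideloaded onto any AOSP emulator or device with stock adb tooling. A minimal sketch driving adb from Python; the APK filename is a hypothetical placeholder, as the app is not officially distributed:

```python
import subprocess

# Hypothetical placeholder -- supply your own APK and a running AOSP emulator/device.
APK_PATH = "rabbit_r1.apk"

def sideload(apk: str) -> None:
    """Install an APK onto the connected AOSP emulator/device via adb."""
    subprocess.run(["adb", "wait-for-device"], check=True)
    subprocess.run(["adb", "install", "-r", apk], check=True)  # -r: replace if already installed

if __name__ == "__main__":
    sideload(APK_PATH)
```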
103 points

Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I’ve gained nothing from asking this glorified e-waste something and then pulling out my phone to verify it anyway.

57 points

What I don’t get is why anyone would want to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

27 points

That’s exactly why, though: they can monetize hardware. They can’t monetize something a free app already does.

9 points

Plenty of free apps get monetized just fine. They just have to offer something people want to use that they can slather ads all over. The AI doo-dads haven’t shown they’re useful. I’m guessing the dedicated hardware strategy got them more upfront funding from stupid venture capital than an app would have, but they still haven’t answered why anybody should buy these. Just postponing the inevitable.

22 points

The answer is “marketing”.

They have pushed AI so hard over the last couple of years that they’ve convinced many people we are one year away from a Terminator travelling back in time to prevent the apocalypse.

6 points
  • Incredible levels of hype
  • Tons of power consumption
  • Questionable utility
  • Small but very vocal fanbase

s/Crypto/AI/

11 points

Because money, both from tech-hungry but not very savvy consumers, and from the inevitable advertisers who will pay for the opportunity to have their names ejected from these boxes as part of a “perfectly natural” conversation.

4 points

I have now heard of my first “AI box”, and I’m on Lemmy most days. Not sure how it’s an epidemic…

10 points

I haven’t seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the “Humane AI Pin”, which was utter garbage and even more expensive.

4 points
Deleted by creator
4 points

It’s not black or white.

Of course AI hallucinates, but not everything an LLM produces is garbage.

Don’t expect a “living” Wikipedia or Google, but it sure can help with things like coding or translating.

9 points

I don’t necessarily disagree. You can certainly use LLMs and achieve something in less time than without them. Numerous people here are talking about coding, and while I had no success with that, it can work for more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That’s not what this product is. It’s a standalone device you talk to, supposed to replace pulling out your phone to answer a question.
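
To make “verify the results” concrete: if you ask an LLM for a function, you can pin it down with a few known-answer tests before trusting it. A minimal sketch, where the `generated_code` string stands in for whatever the model returned:

```python
# Treat LLM output as untrusted until it passes known-answer tests.
generated_code = """
def slugify(title):
    return title.strip().lower().replace(" ", "-")
"""

namespace = {}
exec(generated_code, namespace)  # fine for a sketch; sandbox properly in real use
slugify = namespace["slugify"]

tests = {
    "Hello World": "hello-world",
    "  Rabbit R1  ": "rabbit-r1",
}
for arg, expected in tests.items():
    result = slugify(arg)
    assert result == expected, f"{arg!r} -> {result!r}, wanted {expected!r}"
print("generated function passed all checks")
```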

3 points

I quite like Kagi’s Universal Summarizer, for example. It lets me know whether a long-ass YouTube video is worth watching.

2 points

I use LLMs as a starting point to research new subjects.

The Google/DDG search quality is hot garbage, so an LLM at least gives me the terminology to be more precise in my searches.

2 points

There is a fuck-ton of money laundering coming from China nowadays, and they invest millions in any stupid tech-bro idea to dump their illegal cash.

1 point

I think it’s a delayed development reaction to Amazon Alexa from four years ago. Alexa came out, and voice assistants were everywhere. Someone wanted to cash in on the hype, but consumer product development takes a really long time.

So the product is finally finished (a mobile Alexa), and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.

5 points

Alexa and Google Home came out nearly a decade ago.

4 points

Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.

0 points

Alexa is a fundamentally different architecture from the LLMs of today.

Which is why I explicitly said they used AI (an LLM) instead of the harder-to-implement but more accurate Alexa method.

Maybe actually read the entire post before being an ass.

1 point

I just started diving into the space yesterday, from the local-model side. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really, really good at very specific things.

A biomedical model I saw was lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk levels, etc.

So although I agree they’re still giving well-phrased garbage for big general cases (and GPT-4 seems to be much more “savvy”), the specific use cases are getting much better, and I’m stoked to see how that continues.

-20 points

The most convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high; numerous tests show that. The models have also improved significantly (especially the paid versions) since their introduction just two years ago.
Of course, that does not mean they can be trusted as much as Wikipedia, but they are probably a better source than Facebook.

18 points

“Fairly high” is still useless (and doesn’t actually quantify anything; depending on context, both 1% and 99% could be “fairly high”). As long as these models hallucinate things, I need to double-check, which is what I would have done without one of these things anyway.

-5 points

Hallucinations are largely dealt with if you use agents. It won’t be long until it gets packaged well enough that anyone can just use it. For now, it takes a little bit of effort to get a decent setup.
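
For context, the “agents” being referred to here are roughly generate-then-verify loops around the model. A minimal sketch of the idea, assuming a hypothetical `llm(prompt)` completion function you supply yourself (this is illustrative, not any specific framework’s API):

```python
from typing import Callable

def answer_with_check(llm: Callable[[str], str], question: str, max_retries: int = 2) -> str:
    """Generate an answer, then have a second LLM pass critique it before returning.

    `llm` is whatever completion function you use (hypothetical; plug in your own).
    """
    answer = llm(f"Answer concisely: {question}")
    for _ in range(max_retries):
        verdict = llm(
            "Critique the answer below. Reply VERIFIED if every claim is supported; "
            f"otherwise list the problems.\nQ: {question}\nA: {answer}"
        )
        if verdict.strip().upper().startswith("VERIFIED"):
            return answer
        # Feed the critique back in and retry.
        answer = llm(f"Rewrite the answer, fixing these problems:\n{verdict}\nQ: {question}")
    return answer  # best effort after the retry budget is spent
```

Note the verifier is itself an LLM, so this reduces confident nonsense rather than eliminating it.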

-7 points

1% correct is never “fairly high”, wtf.

Also, if you want a computer that you don’t have to double-check, you are literally expecting software to embody the concept of God. This is fucking stupid.

5 points

An LLM has never generated a correct answer to any of my queries.

8 points

That seems unlikely, unless “any” means two.

2 points

I’ve asked GPT-4 to write specific Python programs, and more often than not it does a good job. And if the program is incorrect, I can tell it about the error and it will often manage to fix it for me.

-2 points

I don’t believe you

0 points

I think Meta hates your answer

-21 points

I just used ChatGPT to write a 500-line Python application that syncs IP addresses from asset management tools to our vulnerability management stack. This took about 4 hours using AutoGen Studio. The code just passed QA and is moving into production next week.

https://github.com/blainemartin/R7_Shodan_Cloudflare_IP_Sync_Tool

Tell me again how LLMs are useless?
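
For scale, the core of such a sync job is small. A rough sketch of the general shape in plain `requests`; the endpoints and token are hypothetical placeholders, not the actual vendor APIs the linked repo talks to:

```python
import requests

# Hypothetical placeholder endpoints -- NOT the real vendor APIs.
ASSET_API = "https://assets.example.com/api/ips"
VULN_API = "https://vulnscan.example.com/api/targets"
TOKEN = "changeme"  # load from env/secret store in real code

def fetch_asset_ips() -> set[str]:
    """Pull the current IP inventory from the asset management tool."""
    resp = requests.get(ASSET_API, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return {entry["ip"] for entry in resp.json()}

def push_targets(ips: set[str]) -> None:
    """Replace the vulnerability scanner's target list with the inventory."""
    resp = requests.put(
        VULN_API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"targets": sorted(ips)},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    push_targets(fetch_asset_ips())
```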

22 points

To be honest… that doesn’t sound like a heavy lift at all.

10 points

The dream of tech bosses everywhere: pay an intermediate dev for average-level senior output.

15 points

It’s a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career, I’d be very hesitant to rely on it, as it’s a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company’s product. In my workflow it slows me down, because the answers I get are often average or wrong; it’s never “I’d never have thought of doing it that way!” levels of amazing.

11 points

You used the right tool for the job, and it saved you hours of work. General AI is still a very long way off, and people expecting the current models to behave like one are foolish.

Are they useless? For writing code, no. For most other tasks, yes, or worse, as they will be confidently wrong about what you ask them.

11 points

I think the reason they’re useful for writing code is that there’s a third party (the parser or compiler) that checks their work. I’ve used LLMs to write code as well, and they didn’t always get me something that worked, but I was easily able to catch the errors.
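
That “third party” check is easy to see in miniature: Python’s own parser will reject syntactically broken generated code before you ever run it (a type checker or test suite would be the stronger next layer). A minimal sketch:

```python
import ast

def parses(generated_source: str) -> bool:
    """Use Python's parser as the impartial first check on LLM output."""
    try:
        ast.parse(generated_source)
        return True
    except SyntaxError as err:
        print(f"Rejected, line {err.lineno}: {err.msg}")
        return False

# A truncated snippet of the kind an LLM might emit:
print(parses("def add(a, b):\n    return a +"))    # False
print(parses("def add(a, b):\n    return a + b"))  # True
```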

-8 points

Are they useless?

Only if you believe most Lemmy commenters. They are convinced you can only use them to write highly shitty and broken code and nothing else.

4 points
Removed by mod
3 points

I don’t think LLMs are useless, but I do think little SoC boxes running a single application that will vaguely improve your life with loosely defined AI features are useless.

3 points

Who’s going to tell them that “QA” just ran the code through the same AI model and it came back “Looks good”?

:-)

0 points

The code is bad and I would not approve it. I don’t know why you think it’s a good example for LLMs.

1 point

The code looks like any other Python code out there.

-13 points

There’s no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, CrewAI, RAG, Opus… To them, generative AI is nothing more than the free version of ChatGPT from a year ago; they’ve not kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.

5 points
Removed by mod
-8 points

They aren’t trying to have a conversation; they’re trying to convince themselves that the things they don’t understand are bad and that they’re making the right choice by not using them. They’ll be the boomers who needed millennials to send emails for them. I’ve been through that, so I just pretend I don’t understand AI. I feel bad for the zoomers and Gen Alphas who will be running AI and futilely trying to explain how easy it is. It’s been a solid 150 years of extremely rapid invention and innovation of disruptive technology, but THIS is the one that actually won’t be disruptive.

