• The Rabbit R1 AI box is essentially an Android app shipped in limited $200 hardware, running on AOSP without Google Play.
  • Rabbit Inc. is unhappy that details of its tech stack are public and is threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware, as it provides the essential functionality without requiring Google Play.
241 points

It’s so weird how they’re just insisting it isn’t an android app even though people have proven it is. Who do they expect to believe them?

133 points

The same question was asked a million times during the crypto boom. “They’re insisting that [some-crypto-project] is a safe passive income when people have proven that it’s a ponzi scheme. Who do they expect to believe them?” And the answer is, zealots who made crypto (or in this case, AI) the basis of their entire personality.

24 points

In this case the same people made both, so they are already practiced

49 points

Their target audience is the most gullible tech evangelists in the world, the ones who think AI is magic. If there were a limit to the lies those people are willing to believe, they wouldn’t be buying the thing to begin with.

8 points

This will flop though. So will the stupid Humane pin.

Either there are very few people that gullible or that group isn’t quite as gullible as you think.

1 point

Oh 🦆, I was thinking this was the Humane thing and wondering why people were saying it’s only $200.

43 points

They are technically not wrong when they say that the whole experience isn’t made up of just an app.

They are intentionally dodging the ACTUAL question.

Anyways, here is a leak of their “LAM”, which is just Playwright for the most part. https://web.archive.org/web/20240424133441if_/https://pixeldrain.com/api/file/vYHXbUwP?download

With that, we have both components, yay?

6 points

You know, pairing an LLM with Playwright is actually a pretty great idea. But that’s something I can totally roll on my own.
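
For the curious, rolling your own can be surprisingly small. Below is a minimal sketch of an LLM-driven Playwright loop, assuming the openai Python client and Playwright’s sync API; the model name, prompts, and URL are placeholders for illustration, not anything pulled from Rabbit’s actual stack:

```python
# Sketch: let an LLM pick one browser action at a time from the page text.
# Assumes OPENAI_API_KEY is set; model name and prompts are placeholders.
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()

def next_action(page_text: str, goal: str) -> str:
    """Ask the model for a single CSS selector to click, or DONE."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"Goal: {goal}\nPage text:\n{page_text[:4000]}\n"
                "Reply with exactly one CSS selector to click, or DONE."
            ),
        }],
    )
    return resp.choices[0].message.content.strip()

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    for _ in range(5):  # hard cap so a confused model cannot loop forever
        action = next_action(page.inner_text("body"), "find the pricing page")
        if action == "DONE":
            break
        page.click(action)
    browser.close()
```

The real work is prompt design and error handling, but the skeleton really is just an LLM steering a browser-automation library.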

34 points

It’s the Juicero strategy.

“You can’t squeeze our juice packs! Only our special machine can properly squeeze our juice packs for optimal taste!”

14 points

Ahh, the good ol’ days, before we knew how batshit AvE was.

11 points

I’m assuming you’re talking about the YouTuber. I haven’t watched AvE since before the pandemic; what did he do?

4 points

Reviewer proceeds to squeeze more juice out with their hands than the machine managed.

8 points

Investors who don’t bother reading past the letters A and I in the prospectus.

4 points

They came up with a specific design for the device, with its own interaction modality, and created a product that is more than just software.

So I don’t get why people dismiss it as being just an app. Does it become worth less because it runs on Android? Many devices, e.g. e-readers, are essentially just Android apps as well. If it works, it works.

In this case it doesn’t, so why not focus on that?

26 points

The point being, they are charging 200 bucks for superfluous, low-end hardware wrapped around an incomplete software experience that could have been delivered as a plain app. The question is, are you going to give up your smartphone for this new device? Are you going to carry both? Probably not.

“It can do 10% of the shit your phone can do, only slower, on a smaller screen, with its own data connection, and inaccurately because you have to hope that our “AI” is sufficiently advanced to understand a command, take action on that command, and respond in a short amount of time. And that’s not to even speak about the horrible privacy concerns or that it’s a brick without connection!”

Everything about this project seems lackluster at best, other than maybe the aesthetic design from teenage engineering, but even then, their design work seems a bit repetitive. But that may be due to how the company is asking for the work. “We wanna be like Nothing and Playdate!!” “I gotchu fam!”

To address your point about e-readers, they have specific use cases: long battery life, large, efficient e-ink displays, and the convenience of having all your books, or a large subset, available to you offline! But when those things aren’t a concern, yeah, an app will do.

Like with most contemporary product launches, I simply find myself asking, “Who is this for?”

3 points

I mean I have an eReader but most of the time I’m too lazy to go find it and my Kindle app works just fine. I am eyeing those eink phones though…

3 points

They’ve said they are working on integration with other apps, and have said the ultimate goal is for the AI to create its own interface for any app. I dunno if that’s gonna happen, but if it did it would be closer to an actual assistant; imagine “rabbit, log onto my work schedule app and check my vacation hours” or “rabbit, compare prices for a SanDisk 256 gig memory card on Amazon, eBay, and Newegg”.

More than likely it’ll just fuck it all up but that’s the dream I think.

-9 points

It’s an experimental device, and by buying it you’re investing in R&D. It’s not meant to replace a smartphone as of now, but similar devices eventually will.

My point stands, because they are offering a completely new (but obviously lacking) experience with novel design solutions. What they made is a toy, which is not really unusual for teenage engineering. But if they do as they did with other devices in the past, this thing might actually rock in the future. They are not inexperienced, and they usually offer super long support for their devices.

TE is way older than Nothing and Playdate btw…

9 points

Why even try to sell me another device though?

Anything and everything this square does, my phone can do better already and has the added benefit of already being in my pocket and not a pain in the ass to use.

-8 points

Because, you know, technological development? Someone has to fund R&D, because it’s not cheap. And in 10 years everyone will have similar AI-enhanced devices. No one thought smartphones would make it back in the day either. And I’m already looking forward to the time when I don’t have to look down anymore to get information.

2 points

No, they’re not.

An e-reader is a piece of hardware with a distinct purpose that cannot be matched by other hardware (high-quality, high-contrast, low-power-draw static content). Some of them do run Android, and that’s a huge value add. But the actual hardware is the reason it exists.

This is just a dogshit Android phone. There is no unique hardware niche it’s filling. It’s an obvious scam, massively downgraded in value, utility, and performance by being forced onto separate hardware.

-15 points

My Honda is just Android software too, if that’s the only part you look at.

17 points

This is more like someone offering a “brand new method of personal travel” to replace your car, but it turns out that it’s just an old Honda with only one seat, a fuel tank that only holds 10L, and a custom navigation app. There’s nothing it does that your Honda can’t do better, and you won’t want to replace your Honda with this.

4 points

No it’s not. Your Honda has several different computers in it, only one of which is likely to be running Android.

-2 points

‘Android’ is a certification with requirements around preinstalled Google apps and home-screen links, so there’s that.

134 points

The AI boom in a nutshell. Repackaged software and content with a shiny AI coat of paint. Even the AI itself is often just repackaged ChatGPT.

9 points

Repackaging ChatGPT is arguably a very nice potential value add, because going to a website is not always very convenient. But it needs to be done right to convince users to use a new method to access ChatGPT instead of just using their website.

-7 points

What’s interesting about this device is that it (supposedly) learns how apps work and how people use them, so if you ask it something that requires using an app it could do it.

So while it might be “just an android app”, if it does what’s advertised that would be impressive.

10 points

Apps are designed to be easy to use. If this device works as advertised (and that’s a huge if), then it wouldn’t offer much in the way of convenience anyway. From what I’ve been reading, it doesn’t work well at all.

9 points

Reviewers are saying it is not able to do this, along with several other promised features.

-9 points

It’s Perplexity for this device. Still excited to get my pre-order, if only to add to my teenage engineering collection.

10 points

It certainly looks sleek. Too bad that’s all it has in its favor.

8 points

Must be a cool device to jailbreak and mess around with just for the sake of it, though.
It has a very unique form factor, after all.

7 points

Unless you have tons of money, why preorder? Just wait for the company to inevitably go under and people to start reselling their now-useless devices, and then scoop up as many as you want from eBay. Even if the company survives for a while, the functionality is so underwhelming that owners might start getting rid of them way sooner.

2 points

The company makes other things; I don’t think they will go under if this fails.

104 points

I heard someone even leaked the APK. LMAO, it’s hilarious that your 200-dollar product can literally be pirated.

48 points

You wouldn’t download a bunny…

4 points

I would stew a bunny…

7 points

Does the apk have unlimited access to Perplexity AI?

17 points

The bunny doesn’t have any subscription, does it?

103 points

Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I gain nothing from asking this glorified e-waste something and then pulling out my phone to verify it anyway.

57 points

What I don’t get is why anyone would want to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

27 points

That’s why though. Because they can monetize hardware. They can’t monetize something a free app does.

9 points

Plenty of free apps get monetized just fine. They just have to offer something people want to use that they can slather ads all over. The AI doo-dads haven’t shown they’re useful. I’m guessing the dedicated hardware strategy got them more upfront funding from stupid venture capital than an app would have, but they still haven’t answered why anybody should buy these. Just postponing the inevitable.

22 points

The answer is “marketing”

They have pushed AI so hard in the last couple of years they have convinced many that we are 1 year away from Terminator travelling back in time to prevent the apocalypse

6 points
  • Incredible levels of hype
  • Tons of power consumption
  • Questionable utility
  • Small but very vocal fanbase

s/Crypto/AI/

11 points

Because money, both from tech-hungry but not very savvy consumers, and from the inevitable advertisers that will pay for the opportunity to have their names ejected from these boxes as part of a perfectly natural conversation.

4 points

It’s not black or white.

Of course AI hallucinates, but not everything an LLM produces is garbage.

Don’t expect a “living” Wikipedia or Google, but it sure can help with things like coding or translating.

9 points

I don’t necessarily disagree. You can certainly use LLMs and achieve something in less time than without them. Numerous people here are speaking about coding, and while I’ve had no success with them, they can work with more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That’s not what this product is. It’s a standalone device which you talk to. It’s supposed to replace pulling out your phone to answer a question.

3 points

I quite like Kagi’s Universal Summarizer, for example. It lets me know if a long-ass YouTube video is worth watching.

2 points

I use LLMs as a starting point to research new subjects.

The Google/DDG search quality is hot garbage, so an LLM at least gives me the terminology to be more precise in my searches.

4 points

I have now heard of my first “AI box”. I’m on Lemmy most days. Not sure how it’s an epidemic…

10 points

I haven’t seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the “Humane AI Pin”, which was utter garbage and even more expensive.

2 points

There is a fuck ton of money laundering coming from China nowadays, and they invest millions in any stupid tech-bro idea to dump their illegal cash.

1 point

I just started diving into the space yesterday, from the locally-run side of things. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really, really good at really specific things.

A biomedical model I saw seemed lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.

So although I agree they’re still giving well-phrased garbage for big general cases (and GPT-4 seems to be much more ‘savvy’), the specific use cases are getting much better and I’m stoked to see how that continues.

1 point

I think it’s a delayed development reaction to Amazon Alexa from 4 years ago. Alexa came out, voice assistants were everywhere. Someone wanted to cash in on the hype, but consumer product development takes a really long time.

So the product is finally finished (a mobile Alexa), and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.

5 points

Alexa and Google home came out nearly a decade ago

4 points

Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.

0 points

“Alexa is a fundamentally different architecture from the LLMs of today.”

Which is why I explicitly said they used AI (an LLM) instead of the harder-to-implement but more accurate Alexa method.

Maybe actually read the entire post before being an ass.

-20 points

The best convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high; numerous tests show that. The models have also improved significantly (especially the paid versions) since their introduction just two years ago.
Of course that does not mean they can be trusted as much as Wikipedia, but they are probably a better source than Facebook.

18 points

“Fairly high” is still useless (and doesn’t actually quantify anything, depending on context both 1% and 99% could be ‘fairly high’). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.

-5 points

Hallucinations are largely dealt with if you use agents. It won’t be long until it gets packaged well enough that anyone can just use it. For now, it takes a little bit of effort to get a decent setup.

-7 points

1% correct is never “fairly high”, wtf.

Also, if you want a computer that you don’t have to double-check, you are literally expecting software to embody the concept of God. This is fucking stupid.

5 points

An LLM has never generated a correct answer to any of my queries.

8 points

That seems unlikely, unless “any” means two.

2 points

I’ve asked GPT4 to write specific Python programs, and more often than not it does a good job. And if the program is incorrect I can tell it about the error and it will often manage to fix it for me.

-2 points

I don’t believe you

0 points

I think Meta hates your answer

-21 points

I just used ChatGPT to write a 500-line Python application that syncs IP addresses from asset management tools to our vulnerability management stack. This took about 4 hours using AutoGen Studio. The code just passed QA and is moving into production next week.

https://github.com/blainemartin/R7_Shodan_Cloudflare_IP_Sync_Tool

Tell me again how LLMs are useless?
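
For anyone who has only seen the ChatGPT web page, the setup behind a workflow like this is roughly an assistant agent that writes code plus a user proxy that executes it and feeds errors back. A minimal sketch, assuming the older pyautogen 0.2-style API; the config and task prompt here are stand-ins for illustration, not the linked project:

```python
# Sketch of the assistant/executor pair that AutoGen Studio wires up behind
# its UI (pyautogen 0.2-style API; the task below is a stand-in example).
import os
import autogen

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

# The assistant proposes code; the user proxy runs it locally and reports
# output or tracebacks back, so the assistant can revise its own work.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Write a Python script that reads IP addresses from one CSV "
            "and prints the ones missing from a second CSV.",
)
```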

22 points

To be honest… that doesn’t sound like a heavy lift at all.

10 points

Dream of tech bosses everywhere. Pay an intermediate dev for average level senior output.

15 points

It’s a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career I’d be very hesitant to rely on it, as it’s a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company’s product. In my workflow it slows me down, because the answers I get are often average or wrong; it’s never “I’d never thought of doing it that way!” levels of amazing.

11 points

You used the right tool for the job, and it saved you hours of work. General AI is still a very long way off, and people expecting the current models to behave like one are foolish.

Are they useless? For writing code, no. For most other tasks, yes, or worse, as they will be confidently wrong about what you ask them.

11 points

I think the reason they’re useful for writing code is that there’s a third party - the parser or compiler - that checks their work. I’ve used LLMs to write code as well, and it didn’t always get me something that worked but I was easily able to catch the error.
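
That check can even be automated. Here is a minimal sketch, assuming the openai Python client (model name and prompts are placeholders): generate code, try to compile it, and feed any syntax error straight back to the model before a human ever looks at it.

```python
# Sketch: use the Python compiler as the "third party" that checks the
# LLM's work. Assumes OPENAI_API_KEY is set; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "Write a plain Python function fizzbuzz(n) returning a list of strings. Code only, no markdown."
code = generate(task)
for _ in range(3):  # a few repair rounds, then hand it to a human anyway
    try:
        compile(code, "<llm>", "exec")  # syntax-level check only
        break
    except SyntaxError as err:
        code = generate(f"{task}\nThis attempt failed with: {err}\n{code}\nFix it.")
print(code)
```

A syntax check is obviously the weakest form of verification; running an actual test suite plays the same role, just with a stronger referee.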

-8 points

“Are they useless?”

Only if you believe most Lemmy commenters. They are convinced you can only use them to write highly shitty and broken code and nothing else.

4 points

This is not really a slam dunk argument.

First off, this is not the kind of code I write on my end, and I don’t think I’m the only one not writing scripts all day. There’s a need for scripts at times in my line of work but I spend more of my time thinking about data structures, domain modelling and code architecture, and I have to think about performance as well. Might explain my bad experience with LLMs in the past.

I have actually written similar scripts in comparable amounts of time (a day for a working proof of concept that could have gone to production as-is) without LLMs. My use case was to parse JSON crash reports from a provider (undisclosable due to NDAs) and serialize them to my company’s binary format. A significant portion of that time was spent on deciding what I cared about and which JSON fields I should ignore. I could have used ChatGPT to find the command line flags for my Docker container, but it didn’t exist back then, and Google helped me just fine.

Assuming you had to guide the LLM throughout the process, this is not something that sounds very appealing to me. I’d rather spend time improving on my programming skills than waste that time teaching the machine stuff, even for marginal improvements in terms of speed of delivery (assuming there would be some, which I just am not convinced is the case).

On another note…

There’s no need for snark, just detailing your experience with the tool serves your point better than antagonizing your audience. Your post is not enough to convince me this is useful (because the answers I’ve gotten from ChatGPT have been unhelpful 80% of the time), but it was enough to get me to look into AutoGen Studio which I didn’t know about!

3 points

I don’t think LLMs are useless, but I do think little SoC boxes running a single application that will vaguely improve your life with loosely defined AI features are useless.

3 points

Who’s going to tell them that “QA” just ran the code through the same AI model and it came back “Looks good”?

:-)

0 points

The code is bad and I would not approve this. I don’t know how you think it’s a good example for LLMs.

1 point

The code looks like any other Python code out there.

-13 points

There’s no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, Crew AI, RAG, Opus… To them, generative AI is nothing more than the free version of ChatGPT from a year ago; they’ve not kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.

5 points

Thing is, if you want to sell the tech, it has to work, and what most people have seen by now is not really convincing (hence the copious amount of downvotes you’ve received).

You guys sound like fucking cryptobros insisting crypto will totally replace fiat currency next year. Trust me bro.

-8 points

They aren’t trying to have a conversation, they’re trying to convince themselves that the things they don’t understand are bad and that they’re making the right choice by not using it. They’ll be the boomers that needed millennials to send emails for them. Been through that, so now I just pretend I don’t understand AI. I feel bad for the zoomers and Gen Alphas that will be running AI and futilely trying to explain how easy it is. It’s been a solid 150 years of extremely rapid invention and innovation of disruptive technology. But THIS is the one that actually won’t be disruptive.

81 points

I’m confused by this revelation. What did everybody think the box was?


Magic

In all reality, it is a ChatGPTitty “fine”tune on some datasets they cobbled together for VQA and Android app UI driving. They did the initial test finetune, then apparently the CEO or whatever was drooling over it and said “lEt’S mAkE aN iOt DeViCe GuYs!!1!” after their paltry attempt to racketeer an NFT metaverse game.

Neither this nor Humane do any AI computation on device. It would be a stretch to say there’s even a possibility that the speech recognition could be client-side, as they are always-connected devices that are even more useless without Internet than they already are with.

Make no mistake: these money-hungry fucks are only selling you food cans labelled as magic beans. You have been warned and if you expect anything less from them then you only have your own dumbass to blame for trusting Silicon Valley.


If the Humane could recognise speech on-device, and didn’t require its own data plan, I’d be reasonably interested, since I don’t really like using my phone for structuring my day.

I’d like a wearable that I can brain dump to, quickly check things without needing to unlock my phone, and keep on top of schedule. Sadly for me it looks like I’ll need to go the DIY route with an esp32 board and an e-ink display, and drop any kind of stt + tts plans


“Sadly for me it looks like I’ll need to go the DIY route with an esp32 board and an e-ink display, and drop any kind of stt + tts plans”

Latte Panda 2 or just wait a couple years. It’ll happen eventually because it’s so obvious it’s literally unpatentable.

16 points

I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that’s worse than a $150 phone in almost every way.

1 point

I don’t know how much work they put into customizing it, but being derived from Android does not mean it isn’t custom. Ubuntu is derived from Debian; that doesn’t mean it isn’t a custom OS. The fact that you can run the APK on other Android devices isn’t a gotcha. You can run Ubuntu .deb files on other Debian distros too. An OS is more of a curated collection of tools; you should not be going out of your way to make applications for a derivative OS incompatible with other OSes derived from the same base distro.

1 point

The Rabbit OS is running server side.

-2 points

I would expect bespoke software and an OS in a $200 device to be way less impressive than what a multi-billion-dollar company develops.

13 points

Without thinking too hard about it, I would have expected some more custom hardware, some on-device AI acceleration happening. For someone to go out and purchase the device, it should have been more than just an Android app.

13 points

The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.

No small startup is going to revolutionize this space unless some kind of new physics is discovered.

3 points

I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general-purpose chip by several orders of magnitude, at least for inference. And I don’t even think it would be difficult to convert an NN design into a chip, or that it would need to be made on a bleeding-edge node to get that much more performance. The trade-off would be that it could only run a single NN (or any NN that single one could be adjusted to behave identically to; e.g., to remove a node you could just adjust the weights so that it never triggers).

So I’d say it’s more accurate to put it as “the easiest/cheapest way to do an AI device is to use a standard SoC”, but the best way would be to design a custom chip for it.

1 point

The hardware seems very custom to me. The problem is that the device everyone carries is a massive superset of their custom hardware, making it completely wasteful.

10 points

Custom hardware and software I guess?

1 point

Running the Spotify app and dozens of others on a custom software stack?

1 point

Qualcomm is listed as having $10 billion in yearly profits (Intel has ~$20B, Nvidia ~$80B), while the news articles I can find about Rabbit say it’s raised around $20 million in funding ($0.02 billion). It takes a lot of money to make decent custom chips.

7 points

Same. As soon as I saw the list of apps they support, it was clear to me that they’re running Android. That’s the only way to provide that feature.

5 points

Most of the processing is done server side though.

4 points

Isn’t Lemmy supposed to be tech-savvy? What do people think the vast majority of Linux OSes are? They’re derivatives of a base distribution. Often they’re even derivatives of a derivative.

Did people think a startup was going to build an entire OS from scratch? What would even be the benefit of that? Deriving from Android is the right choice here. The R1 is dumb, but this is not why.

3 points

It could have been a local AI with some special AI chip not found in every Android phone, but since it runs in the cloud, privacy is a real problem.
