It’s a neat idea, but frankly, I don’t want or need other people hearing my business. It needs to pair with (smart?) earbuds.
I hate it. I don’t want to speak to people who are wearing one.
I’m going to actively avoid people doing so, and I feel like others will as well.
If someone walks up to you and they’re filming you on their phone, how would most people react?
For some reason this is making me wish Jobs were still around.
I’d hope he’d have some subtle burns about this product … maybe about how we’re visual animals and you can’t just throw out decades of progress on screen tech and call that innovation. Maybe something about how we’ve got one voice but 10 fingers and two eyes.
I am calling bullshit on all of their points.
- No screen, but a projector to project on your hand? WTF? So not only will it show far less information, but it will be a pain to use…
- Voice commands? Meaning I will need to tell everyone around me what I am doing? Also calling bullshit on getting them to work in a busy area.
- No it can’t; there is no way to detect nutrition from a picture of a piece of food.
- Privacy? Yeah, get back to me in 20 years when it has been proven to not sell, leak, or have data stolen, then I’ll be impressed.
In conclusion, this is as real as the Skarp laser razor is.
No it can’t; there is no way to detect nutrition from a picture of a piece of food.
Why not? At least to the extent that a human can. Some AI model recognizes the type of food, estimates the amount, and calculates nutrition based on that (hopefully verified with actual data, unlike in this demo).
All three of these functions already exist, all that remains is to put them together.
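The pipeline described above (recognize the food, estimate the amount, look up nutrition) can be sketched in a few lines. This is purely illustrative: `classify_food` stands in for a hypothetical image-recognition model, and the lookup values are rough USDA-style approximations, not data from any real product.

```python
# Rough per-100 g nutrition for a few foods (approximate reference values).
NUTRITION_PER_100G = {
    "apple":  {"kcal": 52, "protein_g": 0.3},
    "banana": {"kcal": 89, "protein_g": 1.1},
}

def classify_food(image_bytes):
    """Placeholder for an image model: returns (food label, estimated grams)."""
    return "apple", 182  # pretend the model recognized a medium apple

def estimate_nutrition(image_bytes):
    label, grams = classify_food(image_bytes)
    per_100g = NUTRITION_PER_100G[label]
    scale = grams / 100
    # Scale the per-100 g reference values to the estimated portion size.
    return {k: round(v * scale, 1) for k, v in per_100g.items()}

print(estimate_nutrition(b"..."))  # {'kcal': 94.6, 'protein_g': 0.5}
```

The hard parts are hidden inside `classify_food`, which is exactly where the thread's disagreement lies.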
Ok, if you take any book, keep it closed, how many times do the letters s, q, d and r appear in the book?
There is no way to know without opening the book and counting. Sure, you could make some statistical analysis based on the language used, but that doesn’t take into account the font size and spacing, nor the number of pages.
Since the machine only has a photo to analyze, it can only give extremely generic results, making them effectively useless.
You would need to open the food up and actually analyze a part of the inside with something like a mass spectrometer to get any useful data.
I agree with you, but disagree with your reasoning.
If you take 1lb of potatoes, boil and mash them with no other add-ins, you can reasonably estimate the nutritional information through visual inspection alone, assuming you have enough reference to see there is about a pound of potatoes. There are many nutrition apps out there that utilize this, and it’s essentially just lopping off the extremes and averaging out the rest.
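The "visual inspection" estimate above is simple arithmetic once you assume a reference value. As a back-of-envelope sketch, taking plain boiled potato at roughly 87 kcal per 100 g (an approximate reference figure, not a measured one):

```python
# Back-of-envelope calorie estimate for 1 lb of plain mashed potatoes,
# assuming ~87 kcal per 100 g for plain boiled potato (approximate value).
POUND_G = 453.6          # grams in one pound
KCAL_PER_100G = 87       # assumed reference value

kcal = POUND_G / 100 * KCAL_PER_100G
print(round(kcal))  # 395
```

Which is exactly the kind of "lop off the extremes and average" number those nutrition apps produce, and exactly what the add-ins below break.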
The problem with this is, it’s impossible to accurately guess the recipe, and therefore the ingredients. Take the aforementioned mashed potatoes. You can’t accurately tell what variety of potato was used. Was water added back during the mashing? Butter? Cream cheese? Cheddar? Sour cream? There’s no way to tell visually, assuming uniform mashing, what is in the potatoes.
Not to mention, the pin sees two pieces of bread on top of each other… what is in the bread? Who the fuck knows!
If I had a big list or directory of well-known books and how many times s, q, d and r appear in them, then sure, I could make a very good estimate from just looking at the cover, with slight variance depending on the edition. Almost like how a specific type of food will likely have a certain amount of protein, fibre, etc., with slight variations based on how the cook prepared it.
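The analogy above is essentially a lookup: once you have a catalogue of known books and their letter counts, the "cover" is just a key. A minimal sketch, with invented titles and counts purely for illustration:

```python
# Hypothetical catalogue mapping a book (identified from its cover) to
# pre-counted letter frequencies. All numbers here are invented.
CATALOGUE = {
    "Moby-Dick": {"s": 61000, "q": 1600, "d": 37000, "r": 52000},
}

def estimate_counts(cover_title):
    # Exact match against the catalogue; a real system would have to
    # tolerate edition-to-edition variance, as the comment notes.
    return CATALOGUE.get(cover_title)

print(estimate_counts("Moby-Dick")["q"])  # 1600
```

The food version is the same idea: the photo picks the catalogue entry, and the accuracy is bounded by how well the entry matches what's actually on the plate.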
Another tech that is going to be utterly irrelevant as AR glasses become a mainstream reality, like it or not.
as AR glasses become a mainstream reality like it or not.
Just like the segway was going to change the way we build cities?
Or how cryptocurrencies will replace national fiat money?
Or how google glasses have already become a mainstream reality?
Or how everyone will watch TV in 3D all the time?
Or how everyone will play games in VR?
Just because a technology is possible it does not mean it will become ubiquitous.
VR has already happened. Are you referring to “deep diving” or whatever word choice you want, where you don’t need to use controllers?
I fully agree, but to think XR is not going to be the next computing platform is a little surprising. Only a little, as I remember a lot of people saying they would never carry a computer around with them either.
VR is a fancy display tech for niche games. It’s at best another lane in the console war.
AR, actual AR with light fields, is not feasible. The tech will never get there. It’s just too computationally expensive, and the optics don’t pan out.