

nightsky
sometimes a dragon
Seeing a lot of talk about OpenAI acquiring a company with Jony Ive and he’s supposedly going to design them some AI gadget.
Calling it now: it will be a huge flop. Just like the Humane Pin and that Rabbit thing. Only the size of the marketing campaign, and maybe its endurance due to greater funding, will make it last a little longer.
It appears that many people think that Jony Ive can perform some kind of magic that will make a product successful. I wonder if Sam Altman believes that too, or maybe he just wants the big name for marketing purposes.
Personally, I haven’t been impressed with Ive’s design work for many years now. Well, I’m sure the thing is going to look very nice, probably a really pleasingly shaped chunk of aluminium. (Will they do a video with Ive in a featureless white room where he can talk about how “unapologetically honest” the design is?) But IMO Ive long ago lost touch with designing things to be actually useful; at some point he went all in on heavily prioritizing form over function (or maybe he always did, I’m not so sure anymore). Combine that with the overall loss of connection to reality from the AI true believers, and I think the resulting product could turn out to be actually hilarious.
The open question is: will the tech press react with ridicule, like it did for the Humane Pin? Or will we have to endure excruciating months of critihype?
I guess Apple can breathe a sigh of relief though. One day there will be listicles for “the biggest gadget flops of the 2020s”, and that upcoming OpenAI device might push Vision Pro to second place.
If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.
The article title and that part remind me of this quote from Charles Babbage in 1864:
On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
It feels as if Babbage had already interacted with today’s AI pushers.
Re the GitLab marketing: what does it mean, what toolchains are they referring to, and what is “native AI”? Does that even mean anything, or is it just marketing gibberish to impress executives?
*scrolls down*
GitLab Duo named a Leader in the Gartner® Magic Quadrant™ for AI Code Assistants.
[eternal screaming]
Oh god, so many horror quotes in there.
With a community of 116 million users a month, Duolingo has amassed loads of data about how people learn
…and that’s why I try to avoid using smartphone apps as much as possible.
“Ultimately, I’m not sure that there’s anything computers can’t really teach you,”
How about common sense…
“it’s just a lot more scalable to teach with AI than with teachers.”
Ugh. So terrible. Tech’s obsession with “scaling” is one of the worst things about tech.
If “it’s one teacher and like 30 students, each teacher cannot give individualized attention to each student,” he said. “But the computer can.”
No, it cannot. It’s a statistical model, it cannot give attention to anything or anyone, what are you talking about.
Duolingo’s CFO made similar comments last year, saying, “AI helps us replicate what a good teacher does”
Did this person ever have a good teacher in their life
the company has essentially run 16,000 A/B tests over its existence
Aaaarrgh. Tech’s obsession with A/B testing is another one of the worst things about tech.
Ok, I’ll stop here now. There’s more; almost every paragraph contains something horrible.
Maybe this is a bit old woman yells at cloud
Yell at cloud computing instead, that is usually justified.
More seriously: it’s not at all that. The AI pushers want to make people feel that way – “it’s inevitable”, “it’s here to stay”, etc. But the threat to learning and maintaining skills is real (although the former worries me more than the latter – what has been learned before can often be regained rather quickly, but what if learning itself is inhibited?).
My opinion of Microsoft has gone through many stages over time.
In the late 90s I hated them, for some very good reasons but admittedly also some bad and silly reasons.
This carried over into the 2000s, but in the mid-to-late 00s there was a time when I thought they had changed. I used Windows much more again, I bought a student license of Office 2007 and I used it for a lot of uni stuff (Word finally had decent equation entry/rendering!). And I even learned some Win32, and then C#, which I really liked at the time.
In the 2010s I turned away from Windows again to other platforms, for mostly tech-related reasons, but I didn’t dislike Microsoft much per se. This changed around the release of Win 10 with its forced spyware privacy violation telemetry since I categorically reject such coercion. Suddenly Microsoft did one of the very things that they were wrongly accused of doing 15 years earlier.
Now it’s the 2020s and they forcefully push GenAI on users, and then they align with fascists (see link at the beginning of this comment). I despise them more now than I ever did before, and I hope the bursting of the AI bubble will bankrupt them.
“For sure, there are some legitimate uses of AI” or “Of course, I’m not claiming AI is useless” like why are you not claiming that.
Yes, thank you!! I’m frustrated by that as well. Another one I have seen way too often is “Of course, AI is not like cryptocurrency, because it has some real benefits [blah blah blah]”… uhm… no?
As for the “study”, due to Brandolini’s law this will continue to be a problem. I wonder whether research about “AI productivity gains” will eventually become like studies about the efficacy of pseudo-medicine, i.e. the proponents will just keep making baseless claims that an effect is present, and that science is simply not yet advanced enough to detect or explain it.
If this is real it would be doubly infuriating, not just because of the AI nonsense, but also because just 3 days ago SAP went bootlicker and announced ending diversity programs.