
HedyL
The only reason the tool supposedly has value is that the websites are made bad on purpose so that they make more money.
Yes, and because, apparently, AI occasionally ingests content from some of the better websites out there. However, without sources, you’ll be unable to check whether that was the case for your specific query. At the same time, it is getting more and more difficult for us to access these better websites ourselves (see above), and sadly, the incentives for creators to post this kind of high-quality content appear to be decreasing as well.
For me, everything increasingly suggests that the main “innovation” here is the circumvention of copyright regulations. With possibly very erroneous results, but who cares?
From the original article:
Crivello told TechCrunch that out of millions of responses, Lindy only Rickrolled customers twice.
Yes, but how many of them received other similarly “useful” answers to their questions?
FWIW, years ago, some people who worked for a political think tank approached me for expert input. They subsequently published a report that cited many of the sources I had mentioned, but their recommendations in the report were exactly the opposite of what the cited sources said (and what I had told them myself). As far as I know, there was no GenAI at the time. I think these people were simply betting that no one would check the sources.
This is not to defend the use of AI; on the contrary, I think this shows quite well what sort of people would use such tools.
It’s also worth noting that your new variation of this “puzzle” may be the first one that describes a real-world use case. This kind of problem is probably being solved all over the world all the time (with boats, cars and many other means of transportation). Many people who don’t know any logic puzzles at all would come up with the right answer straight away. Of course, AI also fails at this because it generates its answers from training data, where physical reality doesn’t exist.
It is admittedly only tangential here, but it recently occurred to me that at school, there are usually no demerit points for wrong answers. You can therefore - to some extent - “game” the system by guessing as much as possible. However, my work is related to law and accounting, where wrong answers can, of course, have disastrous consequences. That’s why I’m always alarmed when young coworkers confidently use chatbots whenever they cannot answer a question themselves. I guess in such moments they are just treating their job like a school assignment. I can well imagine that this will only get worse in the future, for the reasons described here.
Reportedly, some corporate PR departments “successfully” use GenAI to increase the frequency of meaningless LinkedIn posts they push out. Does this count?
This is particularly remarkable because - as David pointed out - being a pilot is not even one of those jobs that nobody would want to do. There is probably still an oversupply of suitable people who would pass all the screening tests and really want to become pilots. Some of them would probably even work for a relatively average salary (as many did in the past outside the big airlines). The only problem for the airlines is probably that they can no longer count on enough people being willing (and able!) to take on the high training costs themselves. Therefore, airlines would have to hire somewhat less affluent candidates and pay for all their training. However, AI probably looks a lot more appealing to them…
Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.