I just tried it in ChatGPT. Here is the answer: “No, 3307 is not a prime number. It can be divided by 7 (3307 ÷ 7 = 471)”
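For what it's worth, a quick trial-division check (plain Python, nothing fancy) shows the bot is wrong on both counts: 3307 is prime, and 7 × 471 is 3297, not 3307.

```python
def is_prime(n: int) -> bool:
    """Trial division: test every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(3307))  # → True: no divisor up to sqrt(3307) ≈ 57.5
print(7 * 471)         # → 3297, not 3307, so the "division" it cites doesn't even check out
```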
Having used Copilot, I can say this is a pervasive problem.
It will frequently kneecap itself by writing an incorrect function header and then spiraling into nonsense. Or, if you ask it to complete something and your start is wrong, it'll just keep generating different incorrect answers.
It’s very useful for boilerplate stuff, but even then I’ve been bitten: mistype a variable name once and it keeps using the wrong name over and over, in believable ways, because it generates believable-looking code.
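A made-up toy example of that failure mode (names invented, not real Copilot output): one mistyped name at the top, and every plausible-looking line after it reuses the typo consistently, so nothing jumps out on a skim.

```python
# Typo: meant "received_items" — but every completion below anchors on the typo,
# so the code runs fine and the wrong name quietly spreads through the file.
recieved_items = []

def log_item(item: str) -> None:
    # looks reasonable, reuses the typo'd name consistently
    recieved_items.append(item)

log_item("widget")
print(recieved_items)  # the code "works"; the typo is now load-bearing
```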
just did this test with the “assistant” at my job (different AI) and it did the exact same thing lmao. anyway we NEED to use ai, it’s the future
capitalism is the best possible system
me when I skim a wikipedia article for 5 seconds before diving into a comment section and acting like a subject matter expert
We have successfully recreated human intelligence because AI refuses to acknowledge it is wrong and develops entire belief structures rather than revisit its earlier assumptions.