Not always truthfully, but it does answer. Sometimes it's quite confidently incorrect.
Imagine ChatGPT believing trolls in its training data and suggesting that users run sudo rm -rf /*
They’ve gone overboard in preventing troll behavior in this version. It constantly apologizes, refuses to say anything even slightly controversial, and spews morality lessons. But most importantly, it understands the context of what it suggests, so it wouldn’t recommend that unless you were actually trying to nuke your system. Even then, it would probably refuse and instead lecture you on why what you’re doing is destructive.
It works better as a conversation than as straight question-answering. The prompts you give it can also drastically alter its accuracy.
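To illustrate that prompt sensitivity, here's a minimal Python sketch comparing a vague prompt against a context-rich one, using the OpenAI Python SDK; the model name and both example prompts are hypothetical placeholders, not anything from the thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt: the model has to guess the language, input format, and style.
vague = "write a function to parse a date"

# A context-rich prompt: pinning down language, format, and failure behavior
# tends to produce noticeably more accurate answers.
specific = (
    "Write a Python function that parses an ISO 8601 date string "
    "(e.g. '2023-01-15') into a datetime.date, returning None on invalid input."
)

for prompt in (vague, specific):
    resp = client.chat.completions.create(
        model="gpt-4",  # hypothetical model name; substitute whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```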
I use it at work frequently instead of the docs nowadays.
Me too. It sometimes saves me hours and writes code that is better than I would write. Other times it recommends code that doesn’t actually compile, but insists that it should. Often it provides working code that is about three times more complicated than it needs to be. But overall it is an amazing tool that massively improves productivity. If you use it for help with complex subjects you already understand well, it is a bad-ass advisor.
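As a concrete (hypothetical) example of that "three times more complicated" pattern, here is the kind of verbose but working answer it might hand back for deduplicating a list while preserving order, next to the simpler idiomatic Python version:

```python
# The over-engineered (but correct) style an LLM might produce:
def dedupe_verbose(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# The simpler idiomatic version; dicts preserve insertion order in Python 3.7+.
def dedupe(items):
    return list(dict.fromkeys(items))

assert dedupe_verbose([3, 1, 3, 2, 1]) == dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
```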