It just feels too good to be true.
I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly on its own, but if I give it the bulk of the info, it makes them pretty af.
Also, just talking and asking for advice about the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.
Are these concerns valid?
Yeah, those concerns are valid. It’s not running on your machine and it’s not FOSS.
How is this able to run without a GPU? Is it that the models are small enough that only a CPU is needed?
You might already be aware, but there have been instances of information leaks in the past. Even major tech companies restrict their employees from using such tools due to worries about leaks of confidential information.
If you’re worried about your personal info, it’s a good idea to consistently clear your chat history.
Another big thing is AI hallucination. When you inquire about topics it doesn’t know much about, it can confidently generate fictional information. So, you’ll need to verify the points it presents. This even occurs when you ask it to summarize an article. Sometimes, it might include information that doesn’t come from the original source.
I was not aware there have been leaks. Thank you. And oh yeah, I always verify the technical stuff I tell it to write. It just makes it look professional in ways that would take me hours.
My experience asking it for new info has been bad, so I don’t really do it anymore. But honestly, it’s not needed at all.
The issue would be if you’re feeding your employer’s intellectual property into the system. Someone then asking ChatGPT for a solution to a similar problem might then be given those company secrets. Samsung had a big problem with people in their semiconductor division using it to automate their work, and have since banned it on company devices.
Given that they know exactly who you are, I wouldn’t get too personal with anything, but it is amazing for many otherwise time-consuming problems like programming. It’s also quite good at explaining concepts in math and physics, and is capable of reviewing and critiquing student solutions. The development of this tool is not miraculous or anything - it uses the same basic foundation that all machine learning does - but it’s a defining moment in terms of expanding the capabilities of computer systems for regular users.
But yeah, I wouldn’t treat it like a personal therapist, only because it’s not really designed for that, even though it can do a credible job of interacting. The original chatbot, ELIZA, simulated a “non-directive” therapist, and it was kind of amazing how people could be drawn into intimate conversations even though it was nothing like ChatGPT in terms of sophistication - it just parroted back what you said in a way that made it sound empathetic. https://en.wikipedia.org/wiki/ELIZA
Just check everything. These things can sound authoritative when they are not. They really are not much more than a parrot reciting stuff back without understanding it. The shocking thing is that they are quite good, until suddenly they are not.
As far as leaks go: do not put confidential info into outside sites like ChatGPT.
Most of the time it either says complete bullshit or vague and imprecise things, so be careful.
I noticed that this isn’t just an issue with this particular tool. I’ve been experimenting with GPT4All (an alternative that runs locally on your machine - the results are worse, though still impressive, but you get complete privacy) and the models available for it do the exact same thing.