It just feels too good to be true.

I’m currently using it for formatting technical texts, and it’s amazing. It doesn’t generate them well on its own, but if I give it the bulk of the info it makes them pretty af.

Also just talking and asking for advice about the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

Yeah, those concerns are valid. It’s not running on your machine, and it’s not FOSS.

Are there any viable alternatives?

Check out Meta’s Llama 2. Not FOSS, but source-available and self-hostable.

GPT4All: it’s open source and you can run it on your own machine.

How is this able to run without a GPU? Is it that the models are small enough that only a CPU is needed?

You might already be aware, but there have been instances of information leaks in the past. Even major tech companies restrict their employees from using such tools due to worries about leaks of confidential information.

If you’re worried about your personal info, it’s a good idea to consistently clear your chat history.

Another big thing is AI hallucination. When you inquire about topics it doesn’t know much about, it can confidently generate fictional information. So, you’ll need to verify the points it presents. This even occurs when you ask it to summarize an article. Sometimes, it might include information that doesn’t come from the original source.

I was not aware there have been leaks. Thank you. And oh yeah, I always verify the technical stuff I tell it to write. It just makes it look professional in ways that would take me hours.

My experience asking it for new info has been bad. I don’t really do it anymore. But honestly, it’s not needed at all.

The issue would be if you’re feeding your employer’s intellectual property into the system. Someone later asking ChatGPT for a solution to a similar problem might then be given those company secrets. Samsung had a big problem with people in their semiconductor division using it to automate their work, and has since banned it on company devices.

Given that they know exactly who you are, I wouldn’t get too personal with anything, but it is amazing for many otherwise time-consuming problems like programming. It’s also quite good at explaining concepts in math and physics, and is capable of reviewing and critiquing student solutions. The development of this tool is not miraculous or anything - it uses the same basic foundation that all machine learning does - but it’s a defining moment in terms of expanding the capabilities of computer systems for regular users.

But yeah, I wouldn’t treat it like a personal therapist, if only because it’s not really designed for that, even though it can do a credible job of interacting. The original chatbot, ELIZA, simulated a “non-directional” therapist, and it was kind of amazing how people could be drawn into intimate conversations even though it was nothing like ChatGPT in terms of sophistication - it just parroted back what you asked it in a way that made it sound empathetic. https://en.wikipedia.org/wiki/ELIZA

Ha, I spent way too much time typing stuff into the ELIZA prompt. It was amazing for the late 80s.

Just check everything. These things can sound authoritative when they are not. They really are not much more than a parrot reciting meaningless stuff back. The shocking thing is that they are quite good, until of course they suddenly are not.

As for leaks: do not put confidential info into outside sites like ChatGPT.


Most of the time it either says complete bullshit or vague, imprecise things, so be careful.


I noticed that this isn’t just an issue with this particular tool. I’ve been experimenting with GPT4All (an alternative that runs locally on your machine; the results are worse, though still impressive, but you get complete privacy), and the models available for it do the exact same thing.


That’s the inherent problem with large language models.

Technology

!technology@beehaw.org
