It just feels too good to be true.

I’m currently using it for formatting technical texts, and it’s amazing. It can’t generate them from scratch properly, but if I give it the bulk of the info, it makes them pretty af.

Also just talking to it and asking for advice about the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

9 points

The big problem that I see is people using it for way too much. Like “hey, write this whole application/business for me.” I’ve been using it for targeted code snippets, mainly grunt-work stuff like “create me some Terraform” or “a bash script using the AWS CLI to do X,” and it’s great. But ChatGPT’s skill level seems to be lacking for really complex things or things that need creative solutions, so that’s still all on me. Which is kinda where I want to be anyway.
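To give a feel for the grunt work I mean: a minimal sketch in Python that templates out a Terraform stanza. The resource name, bucket name, and tag values are made-up illustration values, not anything from a real project.

```python
# Sketch: templating the kind of boilerplate Terraform that's tedious to type
# by hand. All names and values below are made up for illustration.
TEMPLATE = '''resource "aws_s3_bucket" "{name}" {{
  bucket = "{bucket}"

  tags = {{
    Environment = "{env}"
  }}
}}
'''

def render_bucket(name: str, bucket: str, env: str) -> str:
    """Fill in the Terraform template for one S3 bucket."""
    return TEMPLATE.format(name=name, bucket=bucket, env=env)

if __name__ == "__main__":
    print(render_bucket("logs", "example-logs-bucket", "dev"))
```

Asking ChatGPT for this kind of fill-in-the-blanks boilerplate works well precisely because there’s one obviously correct shape for the output.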

Also, I had to interview some DBAs recently and I used it to start my interview-questions doc. Went to a family BBQ in another state and asked it for packing ideas (almost forgot bug spray because there aren’t a lot of bugs here). It’s great for removing a lot of cognitive load when working on mundane stuff.

There are other downsides, like it’s proprietary and we don’t know how the data is being used. But AI like this is a fantastic tool that can make you way more effective. It’s definitely better at reading AWS documentation than I am.


Yeah, those concerns are valid. It’s not running on your machine and it’s not FOSS.

2 points

Are there any viable alternatives?


Check out Meta’s Llama 2. Not FOSS, but the source is available and it’s self-hostable.

16 points

GPT4All — it’s open source and you can run it on your own machine.

1 point

How is this able to run without a GPU? Is it that the models are small enough that only a CPU is needed?
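Largely, yes — locally run models are usually quantized, meaning the weights are stored at a lower precision, so they fit in ordinary RAM and run (slowly) on a CPU. A rough sketch of the arithmetic, using a commonly cited 7-billion-parameter size and typical bit widths (these are ballpark assumptions, not GPT4All specifics):

```python
# Back-of-the-envelope memory math for why quantized models fit on a CPU.
# 16-bit means 2 bytes per weight; 4-bit quantization means ~0.5 bytes.

def model_size_gib(n_params: float, bits_per_weight: int) -> float:
    """Approximate in-memory size of the weights alone, in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

seven_b = 7e9  # a commonly cited small-model parameter count
print(f"7B @ 16-bit: {model_size_gib(seven_b, 16):.1f} GiB")  # ~13 GiB
print(f"7B @  4-bit: {model_size_gib(seven_b, 4):.1f} GiB")   # ~3.3 GiB
```

At 4-bit, the weights fit comfortably in the RAM of a typical laptop, which is why CPU-only inference is feasible at all.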

41 points

You might already be aware, but there have been instances of information leaks in the past. Even major tech companies restrict their employees from using such tools due to worries about leaks of confidential information.

If you’re worried about your personal info, it’s a good idea to consistently clear your chat history.

Another big thing is AI hallucination. When you inquire about topics it doesn’t know much about, it can confidently generate fictional information. So, you’ll need to verify the points it presents. This even occurs when you ask it to summarize an article. Sometimes, it might include information that doesn’t come from the original source.

4 points

I was not aware there have been leaks. Thank you. And oh yeah, I always verify the technical stuff I tell it to write. It just makes it look professional in ways that would take me hours.

My experience asking it for new info has been bad, and I don’t really do that anymore. But honestly, I don’t need that from it at all.

1 point

The issue would be if you’re feeding your employer’s intellectual property into the system. Someone later asking ChatGPT for a solution to a similar problem might then be given those company secrets. Samsung had a big problem with people in their semiconductor division using it to automate their work, and has since banned it on company devices.

16 points

I won’t touch the proprietary junk. Big tech “free” usually means street-corner data whore. I have a dozen FOSS models running offline on my computer, though. I also have text-to-image and text-to-speech, am working on speech-to-text, and will probably build my Iron Man suit after that.

These things can’t be trusted, though. An LLM is just a next-word statistical prediction system combined with a categorization system. There are ways to make an LLM more trustworthy, but they involve offline databases and prompting for direct citations, which is different from a chat-style prompt structure.
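The “next-word statistical prediction” part can be sketched in a few lines: the model scores every candidate token, the scores are turned into probabilities, and one token is sampled. The vocabulary and scores below are toy values I made up, not anything from a real model.

```python
# Minimal sketch of next-token sampling: scores -> probabilities -> one sample.
# The vocabulary and "logits" are toy illustration values.
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(vocab, scores, rng):
    """Sample one token in proportion to its probability."""
    probs = softmax(scores)
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["cat", "dog", "the", "runs"]
scores = [2.0, 1.0, 0.5, 0.1]  # pretend scores for the next position
rng = random.Random(0)
print(next_token(vocab, scores, rng))
```

Nothing in that loop checks facts — it only picks statistically plausible continuations, which is exactly why citations and external databases are needed for trustworthiness.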

2 points

A lot of people are covering the privacy aspect (like you mention in your post) better than I could, so I wanted to share the main issue I’ve had with ChatGPT: it’s an idiot. It can’t follow basic instructions and will just repeat the same mistake over and over after you point it out. It’s uninspired and uncreative and will spit out lame, great-value-brand names like “The Shadow Nexus,” “The Cybercenter,” “The Datahaven.” I used to be able to make it give good names by feeding it example names, but that doesn’t work anymore.

I’m writing cyberpunk fic, and I needed help with a hacker group name; it once came up with “the Binary Syndicate,” which is pretty good. Now it comes up with “Hacker Squad,” “The Hacker Elite,” “The Hackers.” I don’t want it to write an entire book for me, but sometimes I need help with scenes that require more technical knowledge than I have. Its prose was really good when you fine-tuned it a little. Now it’s flat, bland, and boring. I asked it to write a scene about someone defusing a bomb, and it produced a two-sentence scene that explained nothing about how he defused it. I asked it to make it longer and explain the defusal, and it said: “He opens the case and utilizes a technique known as ‘wire tracing’. He traces the wire and cuts it and the bomb is defused. The hacker is so relieved.” See how flat that is? How mechanical? I use Claude for creative writing, but it’s not much better.

Claude is so censored that if you write anything that sounds even nanoscopically criminal, it freaks the hell out and lectures you about being ethical. For instance, it wouldn’t help me write a scene about a digital forensics analyst at the FBI wiping a computer (because that “encourages harm”). So you can only imagine how it reacted when I asked for help writing about my vigilante hacker character, or my archeologist posing as a crime-lord smuggler while secretly dismantling black-market trades in the Middle East. You have to jailbreak it (which is only a little less hard than hacking the Pentagon!), and eventually it goes all love-guru on you and starts monologuing about light and darkness and writing inspiring, uplifting tales, blah blah blah.

Honestly, what I’m saying is that ChatGPT has been pretty dumbed down, but I’ve heard from a lot of people who’ve noticed no difference. You could be one of them. If you’re using it for creative writing, use Claude, and good luck with the prompt engineering to jailbreak it.

