It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It doesn’t generate them properly on its own, but if I give it the bulk of the info it makes them pretty af.
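
If you’d rather script that workflow than paste into the web UI, here’s a rough sketch of the same idea using OpenAI’s Python SDK. The model name, the example notes, and the prompt wording are placeholders I made up, not anything specific or official.

```python
# Rough sketch: turn rough notes into a polished write-up via the API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

rough_notes = """
- deploy window: Sat 02:00-04:00 UTC
- rollback: redeploy the previous image tag
- owner: infra team
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's notes as a clear, well-formatted technical "
                "document. Do not add any facts that are not in the notes."
            ),
        },
        {"role": "user", "content": rough_notes},
    ],
)

print(response.choices[0].message.content)  # the formatted draft; still needs human review
```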

Also just talking to it and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

51 points
  • it’s expensive to run; OpenAI is subsidising it heavily and it will come back to bite us in the ass soon
  • it can be both intentionally and unintentionally biased
  • the text it generates has a certain style to it that can be easy to pick up on
  • it can mix made-up information with real information
  • it’s a black box
20 points

Did we mention that it is a closed-source, proprietary service controlled by only one company that can dictate the terms of its usage?

2 points

LLMs as a whole exist outside OpenAI, but ChatGPT does run exclusively on OpenAI’s services. And Azure I guess.

3 points

Exactly. ChatGPT is just the most prominent service using an LLM. I’d be less concerned about the hype if all the free training data from thousands of users went back into an open system.

Maybe AI is not stealing our jobs, but if you end up depending on it to keep doing your job competitively, it would be good if it were not controlled by a single company…

41 points

You might already be aware, but there have been instances of information leaks in the past. Even major tech companies restrict their employees from using such tools due to worries about leaks of confidential information.

If you’re worried about your personal info, it’s a good idea to consistently clear your chat history.

Another big thing is AI hallucination. When you inquire about topics it doesn’t know much about, it can confidently generate fictional information. So, you’ll need to verify the points it presents. This even occurs when you ask it to summarize an article. Sometimes, it might include information that doesn’t come from the original source.

4 points

I was not aware there had been leaks. Thank you. And oh yeah, I always verify the technical stuff I tell it to write. It just makes it look professional in ways that would take me hours.

My experience asking it for new info has been bad. I don’t really do it anymore. But honestly, it’s not needed at all.

1 point

The issue would be if you’re feeding your employer’s intellectual property into the system. Someone asking ChatGPT for a solution to a similar problem might then be given those company secrets. Samsung had a big problem with people in their semiconductor division using it to automate their work, and has since banned it on company devices.

29 points

These types of uses make ChatGPT for the non-writer the same as a calculator for the non-mathematician. Lots of people are shit at arithmetic, but need to use mathematics in their everyday life. Rather than spend hours with a scratch pad and carrying the 1, they drop the numbers into a calculator or spreadsheet and get answers.

A good portion of my life is spent writing (and re-writing) technical documents aimed at non-technical people. I like to think I’m pretty good at it. I’ve also seen some people who are very good, technically, but can’t write in a cohesive, succinct fashion. Using ChatGPT to overcome some of those hurdles, as long as you are the person doing final compilation and organization to ensure that the output is correct and accurate, is just the next step in spelling, usage, and grammar tools. And, just as people learning arithmetic shouldn’t be using calculators until they understand how it’s done, students should still learn to write without the assistance of ML/AI. The goal is to maximize your human productivity by reducing tasks on which you spend time for little added value.

Will the ML company misuse your inputs? Probably. Will they also use them to make your job easier or more streamlined? Probably. Are you contributing to the downfall of humanity? Sure, in some very small way. If you stop, will you prevent the misuse of ML/AI and substantially retard the growth of the industry? Not even a little bit.

7 points

I like the calculator comparison.

28 points

The first downside comes from using it exactly the way you’re using it. In this case, a company may decide they don’t actually need technical writers, just a low-paid editor who feeds tech specs into a prompt, gets a response, and tidies it up. How many skilled jobs are lost because of this?

Think of software devs. Feed a project spec into the prompt: “Give me a Django backend and Vue frontend to build an online calendar” and then you have just a QA dev who debugs and tests and maybe cleans up a bit. Now, instead of a team of software devs working to make sure you have a robust, secure and properly architected app, you have one or two low-paid testers who don’t understand the full architecture, can only fix bugs, and don’t understand the security issues inherent in the minimally viable code the bot spat out.

Think of writers. Just ignore actual creatives. Plug an “idea” into the prompt and then have an editor clean up any glaring strangeness and get it out the door. It can flood the market with absolute drivel, and already is, driving actual human creatives out. Look at the current writers’ strike. The Hollywood execs are fucking champing at the bit to just replace them all with an LLM and say to hell with the writers.

The core issue is: the people at the top with money only care about money. They don’t care if the product is good. Quality is irrelevant if they can crank it out at a tenth of the cost and at 1000x the volume. And every time you use it, you’re giving it training data. You’re justifying its use. And its use is destroying, and will continue to destroy, entire industries, ruining web search, creating mis- and disinformation, and endangering the sharing of actual human creativity.

11 points

You’re not selling me here, specifically because using ChatGPT in the role you are talking about is exactly what software developers have been doing for years - putting humans out of work. To use your own description, I could ask a software team to “Give me a calendar app,” and a team of software devs, testers, and QA would go about making sure you have a robust, secure and properly architected app, which would then make obsolete thousands upon thousands of secretaries across the world. They were fully employed making intelligent decisions about their bosses’ schedules, managing conflicts, and coordinating with other humans to make sure things ran smoothly - and you caused nearly all of them to be fired and replaced with one or two low-paid data entry clerks who don’t understand the business or why certain meetings and people have priority over others.

We can go on. Bank tellers? Most of them fired thanks to automated machines. Copywriters? Some lazy programmer puts a dictionary in Word and all of a sudden 90% of misspellings are gone. Usage errors? Yup - getting rid of most of those too. We can go back further, to when telephone switchboards were automated and there was no need to talk to someone to make your connection. Sure, those people are dead now, but they wouldn’t have jobs if they were alive. And all of those functions were automated to mimic, and then exceed the utility of, humans who used to do that work. Everything from the cotton gin and mechanical thresher to the laser welder and 5DOF robotic assembly station has eliminated jobs. Artists fearing losing their jobs to ML generation? Welcome to the world of old school photography. Modern photography, of course, is digital and has destroyed hundreds of thousands, or millions, of analog photography jobs.

The only difference this time is that it’s you, or people of your intellectual station, who are in the crosshairs.

12 points

But this isn’t what’s happening here. It’s not replacing menial bullshit jobs. It’s trying to replace skilled jobs and creative jobs, something that only soulless grifters and greedy capitalists want. It’s a solution in search of a problem.

Artists fearing losing their jobs to ML generation? Welcome to the world of old school photography. Modern photography, of course, is digital and has destroyed hundreds of thousands, or millions, of analog photography jobs.

No, it didn’t. The only jobs lost were menial jobs in film production and development. Creatives didn’t lose their jobs. The medium just changed.

The only difference this time is that it’s you, or people of your intellectual station, who are in the crosshairs.

This is veering really close to the “creatives have been gatekeeping art and AI will ‘democratize’ it” bullshit

9 points

So, it’s okay to replace jobs which seem like menial bullshit to you, but not jobs you deem to be “creative.” We’re taking a bell curve of human ability and simply drawing the line of “obsolete human” in a different place and you’re disappointed that you’re way closer to it than you were a decade ago.

NB: I sat in a room with 200 other engineers this summer and they all scoffed at the idea that a computer could take their place. But I’m absolutely certain that what we do could be - is being - automated, even as we claim to be the intelligent ones who are in no fear of replacement. My job is just the learned sum of centuries of human knowledge, honed year after year, which has to be taught, wholecloth, to every new human in my profession. There are people who will say I’m the smartest guy in the room (for a small enough room ;-) but 90% of what I do is just applying a set of rules based on inputs and boundary conditions. We feel like this shouldn’t happen to us because we’re smart. We think independently. We have special abilities which set us apart from ML-generated outputs. We’re also full of shit. There are absolutely areas where ML/AI will not surpass our value in the system for quite some time, but more and more of our expertise will be accomplishable by applying distilled large data sets.

4 points

This is veering really close to the “creatives have been gatekeeping art and AI will ‘democratize’ it” bullshit

Ugh, that BS makes me want to blow up my own head with mind powers. Anyone can learn how to make art! It is not ‘democratizing’ art to make a computer do it and then take credit for the keywords you fed it! Puke-worthy stuff. I appreciate you speaking out against that crap far better than I ever could. There’s enough of that BS on Reddit; can’t we just leave it there?

2 points

And it won’t ever hit programmers. Because once we have strong AI we will simply become AI psychologists.

23 points

It’s not conscious or self-aware. It’s just putting words together that don’t necessarily have any meaning. It can simulate language, but meaning is a lot more complex than putting the right words in the right places.

I’d also be VERY surprised if it isn’t harvesting people’s data in the exact way you’ve described.

6 points

You don’t need to be surprised; it’s written pretty plainly in their ToS that anything you write to ChatGPT will be used to train it.

Nothing you write in that chat is private.

