Just out of curiosity. I have no moral stance on it, if a tool works for you I’m definitely not judging anyone for using it. Do whatever you can to get your work done!

117 points

High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).

The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.

Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.

45 points

I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)

12 points

How has this been received?

I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source

14 points

I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.

ChatGPT is the same. I would never use the content it generates without verifying it first.

5 points

Is it not already? I’ve found it to be far less reliable than Wikipedia.

23 points

Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?

16 points

I had professors do different wordings for questions throughout college, I never encountered a professor or TA that wouldn’t clarify if asked, and, generally, the amount of confusing questions evened out across all of the versions, especially over a semester. They usually aren’t doing it to trick students, they just want to make it harder for one student to look at someone else’s test.

There is a risk of it negatively impacting students, but encouraging students to ask for clarification helps a ton.

5 points

My professors would randomize the order of the questions instead.

1 point

Sure, it could, but the same issue is present with a single question: some students will get the wording or find it easy, others may not. Giving a test in multiple versions to limit cheating is very common and never led to any problems, as far as my anecdotal evidence goes.

1 point

You’re increasing the odds by changing the wording. I don’t see why it’s necessary; just randomizing the order of the questions would suffice.

5 points

I’m a special education teacher and today I was tasked with writing a baseline assessment for the use of an iPad. Was expecting it to take all day. I tried starting with ChatGPT and it spat out a pretty good one. I added to it and edited it to make it more appropriate for our students, and put it in our standard format, and now I’m done, about an hour after I started.

I did lose 10 minutes to walking round the deserted college (most teachers are gone for the holidays) trying to find someone to share my joy with.

4 points

I wish I had that much opportunity to write (or fabricate) my own teaching material. I’m in a standardized testing hellscape where almost every month there’s yet another standardized test or preparation for one.

1 point

It’s one of the fascinating paradoxes of education that the more you teach to standardized tests, the worse test results tend to be. Improved test scores are a byproduct of strong teaching - they shouldn’t be the only focus.

Teaching is every bit as much an art as it is a science, and straitjacketing teachers with canned curricula only results in worse test scores and a deteriorated school experience for students. I don’t understand how there are admins out there who still operate like this. The failures of No Child Left Behind mean we’ve known this for at least a decade.

0 points
Removed by mod
88 points

I don’t have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.

43 points

Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I’m being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn’t matter. I’m paid for results.

80 points

A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted “I can’t come in to work today because I feel unwell” but now I can’t take this person quite so seriously any more.

28 points

Classic over explaining to cover up a lie.

I never send anything other than “I’ll be out of the office today” for every PTO notice.

6 points

Exactly, and let’s be honest: your coworkers don’t want to hear about your explosive diarrhea problems or the weird mole on your butt.

12 points

Ask yourself why they felt the need to generate an AI sick note instead of being honest 👌

13 points

I dunno, I’d consider it a moral failing on the part of the person who couldn’t be honest and direct, even if there’s a cultural issue in the workplace.

9 points

Dunno, everyone else seems to be happy sending a one-liner 👌

-5 points

Exactly. If they’re too lazy to write a fake sick note, then they’re certainly too lazy to work. Either send them in for remediation or terminate them; either way, they shouldn’t be in the workplace.

68 points

I had a coworker come to me with an “issue” he learned about. It was wrong, and it wasn’t really an issue, and then it came out that he got it from ChatGPT and didn’t really know what he was talking about, nor could he cite an actual source.

I’ve also played around with it and it’s given me straight up wrong answers. I don’t think it’s really worth it.

It’s just predictive text, it’s not really AI.

16 points

I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. That is the reason behind the numerous errors it tends to generate, and its lack of self-review prior to generating responses is the clearest indication that it is not an AI. You can identify instances where ChatGPT provides incorrect information, correct it, and within five seconds of asking again it repeats the same inaccurate information in its response.

25 points

It’s definitely not artificial general intelligence, but it’s for sure AI.

None of the criteria you mentioned are needed for it be labeled as AI. Definition from Oxford Libraries:

the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

It definitely fits in this category. It is being used in ways that previously, customer support or a domain expert was needed to talk to. Yes, it makes mistakes, but so do humans. And even if talking to a human would still be better, it’s still a useful AI tool, even if it’s not flawless yet.

4 points

It just seems to me that by this definition, the moment we figure out how to do something with a computer, it ceases to be AI because it no longer requires human intelligence to accomplish.

5 points

i think learning where it can actually help is a bit of an art - it’s just predictive text, but it’s very good predictive text - if you know what you need and get good at giving it the right input, it can save a huge amount of time. you’re right though, it doesn’t offer much if you don’t already know what you need.

3 points

Can you give me an example? I keep hearing this, but every time somebody presents something, be it work-related or not, it feels like at best it would serve as better lorem ipsum.

4 points

I’ve had good success using it to write Python scripts for me. They’re simple enough I would be able to write them myself, but it would take a lot of time searching and reading StackOverflow/library docs/etc since I’m an amateur and not a pro. GPT lets me spend more time actually doing the things I need the scripts for.
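To give a sense of scale, here’s a hypothetical example of the kind of throwaway script I mean (names and details invented for illustration, not something GPT actually wrote for me): bulk-renaming a folder of files to a tidy, zero-padded scheme.

```python
# Hypothetical example: map arbitrary file names to a
# photo_001.ext-style numbering scheme, preserving extensions.
import os


def renumber(names, prefix="photo", width=3):
    """Return a dict mapping each original name to its new name."""
    mapping = {}
    # Sort so the numbering is deterministic across runs.
    for i, name in enumerate(sorted(names), start=1):
        ext = os.path.splitext(name)[1]  # keep ".jpg", ".png", etc.
        mapping[name] = f"{prefix}_{i:0{width}d}{ext}"
    return mapping


if __name__ == "__main__":
    print(renumber(["b.jpg", "a.png", "c.jpg"]))
```

Nothing here is hard, but stitching it together from StackOverflow and the `os.path` docs takes an amateur a while; GPT can get a draft of this sort of thing out in seconds, and then you just verify it.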

3 points

More often than not, you need to be very specific and have some knowledge of the stuff you’re asking about.

However, you can guide it to give you exactly what you want. I feel like knowing how to interact with GPT is becoming similar to being good at googling stuff.

1 point

Isn’t that what humans also do and it’s what makes us intelligent? We analyze patterns and predict what will come next.

44 points

I’ve played around with it for personal amusement, but the output is straight up garbage for my purposes. I’d never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company’s information security officer, because anything you input automatically joins their training database, and you’re exposing your company to liability when, not if, OpenAI suffers another data breach.

16 points

The very act of sharing company information with it can land you and the company in hot water in certain industries, regardless of whether OpenAI is broken into.
