Does anybody know how I can get them permanently deleted, or am I powerless against it?

7 points

it was randomly-generated letters and numbers. it would be impossible to divine what the original comment was. I did this over and over, 10 times, so the edit history was overwritten with blocks of randomized text.

what you suggest would just spit out more garbage, or, at best, completely fake comments.
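For reference, the overwrite trick described above boils down to something like this sketch. `edit_comment` is a placeholder for whatever your API client actually provides, not a real library function:

```python
import random
import string

def random_garbage(length=80):
    """Generate a block of random letters and digits."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))

def scramble_comment(edit_comment, comment_id, passes=10):
    """Overwrite a comment repeatedly so every stored revision is noise.

    edit_comment: callable(comment_id, new_body) supplied by your API
    client -- hypothetical here.
    """
    for _ in range(passes):
        edit_comment(comment_id, random_garbage())
```

Whether this actually scrubs anything depends entirely on what the server keeps, which is the point of dispute in this thread.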

1 point

You misunderstood my comment. Reddit probably has every version of your edits, so all they need to do is run all your past revisions through ChatGPT or something, in descending order by time. The first sensible one gets accepted. In some sense, that’s just how a person would do it. This way, they don’t have to deal with individual approaches to obfuscating or messing with their data.
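The recovery pipeline being described is a short loop. A sketch, assuming Reddit keeps every revision; `looks_sensible` stands in for whatever model or heuristic does the judging:

```python
def recover_comment(revisions, looks_sensible):
    """Walk revisions newest-to-oldest and return the first one that
    passes the classifier, mimicking how a person would skim the history.

    revisions: list of comment bodies, oldest first.
    looks_sensible: callable returning True for non-garbage text
    (hypothetical; could be an LLM call or a cheap heuristic).
    """
    for body in reversed(revisions):  # descending order by time
        if looks_sensible(body):
            return body
    return None  # every stored revision was garbage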

I was gonna just wait till this whole fiasco dies down, let it sit for a couple of months to a year, before going ahead and slowly removing my comments over time. It’s easy to build per-user triggers to detect attempts at mass editing or mass deletion of comments, after all, which may trigger some process in their systems. Doing it the low-profile way is likely the best way to go.

7 points

the amount of cost and resources for all of that would be profound. when they’re already complaining about profitability, I doubt they’d dump huge amounts of additional funds into a project like that. they clearly have at least one level of backups, and I wouldn’t be shocked if they had 2 or 3 revision backups, but anything past that - let alone what you’re suggesting - would be too much of a cost to manage.

0 points

It’s hard to say that without knowing what their infrastructure’s like, even if we think it’s expensive. And if they built their stack with OLAP being an important part of it, I don’t see why they wouldn’t have our comment edit histories stored somewhere that’s not a backup, and maybe they just toss dated database partitions into some cheap cold storage that allows for occasional, slow reads. They’re not gonna make a backup of their entire fleet of databases for every change that happens. That would be literally insane.

Also, tracking individual edit and delete rates over time isn’t expensive at all, especially if they just keep incremental day-by-day (or more or less frequent) change counts. Or just slap a counter for edits and deletes in a cache, reset it every day, and if either goes above some threshold, look into it. There are probably many cheap ways to achieve something similar.
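The counter idea could be as simple as the following sketch. A plain dict stands in for whatever cache (e.g. Redis) would really be used, the threshold is an invented example value, and `reset()` would be run daily by a scheduler:

```python
from collections import defaultdict

class EditRateMonitor:
    """Per-user daily edit/delete counter with a flag threshold."""

    def __init__(self, threshold=50):  # threshold is an arbitrary example
        self.threshold = threshold
        self.counts = defaultdict(int)  # user_id -> edits/deletes today
        self.flagged = set()            # users who crossed the threshold

    def record(self, user_id):
        """Count one edit or delete; flag the user once they exceed the limit."""
        self.counts[user_id] += 1
        if self.counts[user_id] > self.threshold:
            self.flagged.add(user_id)

    def reset(self):
        """Clear the daily counters (flags persist for later review)."""
        self.counts.clear()
```

That’s a handful of cache increments per write, which is why the “it would cost too much” objection doesn’t really apply to this part.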

And ChatGPT is just an example. I’m sure there are already other out-of-fashion-but-totally-usable language models or heuristics that are cheap to run and easy to use. Anything that can give a decent amount of confidence is probably good enough.
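Even without a language model, a cheap heuristic can separate random alphanumeric noise from real sentences, e.g. checking for common English function words. A minimal sketch (the word list and threshold are invented, not tuned):

```python
import re

# A handful of very common English function words; illustrative only.
COMMON_WORDS = {"the", "a", "to", "is", "it", "of", "and", "i", "you", "that"}

def looks_like_language(text):
    """Crude gibberish detector: random letter/digit blocks contain no
    spaces and no common function words; real comments almost always do."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words or " " not in text:
        return False
    hits = sum(1 for w in words if w in COMMON_WORDS)
    return hits / len(words) > 0.05  # illustrative threshold
```

Something this simple would already defeat the overwrite-with-random-characters approach upthread; the LLM is only needed for cleverer obfuscation.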

At the end of the day, the actual impact on their business from the API fiasco is limited to a subset of power users and tech enthusiasts, which is vanishingly small. I know many that still use Reddit, some begrudgingly, despite knowing the news pretty well. Why? Cause the content is already there. Restoring valuable content is important for Reddit, so I don’t see why they wouldn’t want to sink some money into ensuring they keep what makes em future money. It’s basically an investment. There are some risks, but the chance of earning it back, with returns on top of the cost, is high.

