61 points

Don’t you people have a development environment?

81 points

There’s that old saying: ‘Everyone has a development environment. Some people are lucky enough to have a separate production environment, too.’

7 points

I get you’re making a meme, but I’ve never worked anywhere in the last 10 years that only has one environment.

92 points

The P in Prod stands for “It’ll be Pfine”

8 points

The letter you want after the P is an H.

6 points

I have several times insisted that a migration be done via an ad hoc endpoint, partly because I’m a jerk, but also because it’s much easier to test that way, and no one has to YOLO-connect directly to prod.

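For illustration, here’s roughly what such an ad hoc migration endpoint can look like. This is a minimal sketch, not the commenter’s actual code: Flask and sqlite3 stand in for whatever framework and database the real service uses, and the route, table, and column names are made up.

```python
# Minimal sketch of a one-off data migration exposed as an HTTP endpoint.
# Flask and sqlite3 are stand-ins for the real service's stack.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "app.db"  # assumption: the connection the service already uses


@app.route("/admin/migrations/backfill-display-names", methods=["POST"])
def backfill_display_names():
    """Backfill empty display names; dry-run by default."""
    dry_run = request.args.get("dry_run", "true").lower() != "false"
    conn = sqlite3.connect(DB_PATH)
    try:
        cur = conn.execute(
            "UPDATE users SET display_name = username "
            "WHERE display_name IS NULL OR display_name = ''"
        )
        affected = cur.rowcount
        if dry_run:
            conn.rollback()  # preview only, nothing is committed
        else:
            conn.commit()
        return jsonify({"dry_run": dry_run, "rows_affected": affected})
    finally:
        conn.close()
```

Deployed through the normal pipeline, something like this gets called once (behind whatever auth the service already has) and then deleted, so nobody ever opens a shell against prod.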
1 point

Endpoint? Why the fuck is a migration using an endpoint? If you want testability, a script will do just fine.

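For comparison, a sketch of the standalone-script approach this commenter means, with the same made-up backfill and an explicit dry-run default; the database path, table, and flag names are assumptions.

```python
# Sketch of the standalone-script alternative: same backfill, run from
# the command line, dry-run unless --execute is passed.
import argparse
import sqlite3


def backfill_display_names(conn: sqlite3.Connection, dry_run: bool) -> int:
    cur = conn.execute(
        "UPDATE users SET display_name = username "
        "WHERE display_name IS NULL OR display_name = ''"
    )
    if dry_run:
        conn.rollback()
    else:
        conn.commit()
    return cur.rowcount


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Backfill empty display names")
    parser.add_argument("--db", default="app.db", help="path to the database file")
    parser.add_argument("--execute", action="store_true",
                        help="actually commit; the default is a dry run")
    args = parser.parse_args()

    conn = sqlite3.connect(args.db)
    try:
        rows = backfill_display_names(conn, dry_run=not args.execute)
    finally:
        conn.close()
    verb = "updated" if args.execute else "would update"
    print(f"{verb} {rows} rows")
```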
2 points

Because I didn’t want someone to YOLO-connect to production, and we don’t have infrastructure in place for running arbitrary scripts against production. An HTTP endpoint takes very little time to write and lets you take advantage of the CI/CD/test infrastructure that’s already in place.

This was for a larger, more complicated change. Smaller ones can go in as regular data migrations in source control, but those still go through code review and get deployed to dev before going out.

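The payoff being pointed at here is that the migration rides the existing test suite. A sketch of such a test, assuming the endpoint sketched above lives in a hypothetical module named `migrations_app` and that pytest is the runner:

```python
# Sketch of a test for the migration endpoint above.
import sqlite3

import migrations_app  # hypothetical module containing the Flask app above


def test_backfill_fills_empty_display_names(tmp_path):
    # Build a tiny fixture database with one broken and one healthy row.
    db = tmp_path / "test.db"
    conn = sqlite3.connect(db)
    conn.execute("CREATE TABLE users (username TEXT, display_name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', NULL), ('bob', 'Bob')")
    conn.commit()
    conn.close()

    migrations_app.DB_PATH = str(db)  # point the app at the test database
    client = migrations_app.app.test_client()

    resp = client.post("/admin/migrations/backfill-display-names?dry_run=false")

    assert resp.status_code == 200
    assert resp.get_json()["rows_affected"] == 1
```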
1 point

You’re definitely overcomplicating it and adding unnecessary bloat/tech debt. But if it works for you, then it works.

105 points

If you can fuck up a database in prod, you have a systems problem caused by your boss. Getting fired for that shit would be a blessing, because that company sucks ass.

10 points

Small companies often allow devs access to prod DBs. It doesn’t change the fact that it’s a catastrophically stupid decision, but you often can’t do anything about it.

And of course, when they inevitably fuck up, the blame will be on the IT team for not implementing the necessary restrictions.

Frequent snapshots ftmfw.

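A sketch of what “frequent snapshots” can amount to in practice: a small script a scheduler runs hourly. It assumes Postgres with `pg_dump` on the PATH and credentials already configured via `.pgpass`; the paths, database name, and retention count are placeholders.

```python
# Sketch of scheduled snapshots: run this hourly from cron (or any scheduler).
import datetime
import pathlib
import subprocess

SNAPSHOT_DIR = pathlib.Path("/var/backups/appdb")
DATABASE = "appdb"
KEEP = 48  # keep the 48 most recent snapshots


def take_snapshot() -> pathlib.Path:
    SNAPSHOT_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    out = SNAPSHOT_DIR / f"{DATABASE}-{stamp}.dump"
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(out), DATABASE],
        check=True,
    )
    return out


def prune_old_snapshots() -> None:
    # Timestamped names sort lexicographically, so the oldest come first.
    dumps = sorted(SNAPSHOT_DIR.glob(f"{DATABASE}-*.dump"))
    for old in dumps[:-KEEP]:
        old.unlink()


if __name__ == "__main__":
    print(f"wrote {take_snapshot()}")
    prune_old_snapshots()
```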
51 points

What if you’re the one who was in charge of adding safeguards?

40 points

Never fire someone who fucked up (again: it isn’t their fault anyway). They know more about the system than anyone. They can help fix it.

14 points

This is usually the way, but some people just don’t learn from their mistakes…

5 points

If you are adding guardrails to production… it’s the same story.

The boss should purchase enough equipment to have a staging environment. Don’t touch prod: redeploy everything on a secondary with the new guardrails, load a read-only export from prod, and cut over services to the secondary when complete.

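A sketch of the “read-only export from prod, rebuild on a secondary” step, assuming Postgres tooling; the hostnames, roles, and dump path are placeholders, and the actual service cutover is out of scope here.

```python
# Sketch of cloning prod into a secondary from a read-only export.
import subprocess

PROD_DSN = "host=prod-db dbname=appdb user=readonly_export"  # read-only role
STAGING_DSN = "host=staging-db dbname=appdb user=deploy"


def clone_prod_to_staging(dump_path: str = "/tmp/prod.dump") -> None:
    # Export with a read-only account so this job physically can't write to prod.
    subprocess.run(
        ["pg_dump", f"--dbname={PROD_DSN}", "--format=custom", "--file", dump_path],
        check=True,
    )
    # Rebuild the secondary from the dump; --clean drops existing objects first.
    subprocess.run(
        ["pg_restore", "--clean", "--if-exists", f"--dbname={STAGING_DSN}", dump_path],
        check=True,
    )


if __name__ == "__main__":
    clone_prod_to_staging()
```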
9 points

Sorry, not in budget for this year. Do it in prod and write up the cap-ex proposal for next year.

11 points

Makes me think of what happened to GitLab.

45 points

If he had recognized his typo with the space after the D:\ in his restore command, he could have been saved at the bargaining stage. I am so glad I don’t work with this stuff anymore.


Programmer Humor

!programmerhumor@lemmy.ml

Post funny things about programming here! (Or just rant about your favourite programming language.)

Rules:

  • Posts must be relevant to programming, programmers, or computer science.
  • No NSFW content.
  • Jokes must be in good taste. No hate speech, bigotry, etc.
