74 points

You should have rolling log files of limited size and limited quantity. The issue isn’t that it’s a text file, it’s that they’re not following pretty standard logging procedures to prevent this kind of thing and make logs more useful.

Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, deleting the oldest if there are more log files than your configured limit.

This prevents runaway logging like this, and also lets you store more logging info than you could easily open and go through in one document. If you want to store 20 GB of logs, having all of that in one file will make it difficult to go through. Ten 2 GB log files is much easier. That’s not so much a consumer issue, but that’s the gist of it.
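The scheme described above can be sketched with Python’s stdlib `RotatingFileHandler` — the size and count below are toy values for illustration, not recommendations:

```python
# Rolling log files: a new file is started once the current one hits
# maxBytes, and the oldest is deleted once backupCount is exceeded.
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

handler = logging.handlers.RotatingFileHandler(
    log_path,
    maxBytes=2048,   # roll over once the current file reaches ~2 KB
    backupCount=3,   # keep at most 3 old files; older ones are deleted
)
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(500):
    logger.info("event %d: something happened", i)

files = sorted(os.listdir(log_dir))
print(files)  # app.log plus rotated copies app.log.1 .. app.log.3
```

Total disk usage stays bounded at roughly `maxBytes * (backupCount + 1)` no matter how long the app runs — which is exactly what prevents the 200 GB situation.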

10 points

As a sysadmin there are few things that give me more problems than unbounded growth and timezones.

1 point

Printers. Desk phones. The WMI service crashing at full core lock under the guise of svchost.

15 points

Fully agree, but the way it’s worded makes it seem like log being a text file is the issue. Maybe I’m just misinterpreting intent though.

25 points

200GB of a text log file IS weird. It’s one thing if you had a core dump or other huge info dump, which, granted, shouldn’t be generated on their own, but at least they have a reason for being big. 200GB of plain text logs is just silly

8 points

No, 200 GB of plain text logs is clearly a bug. I run a homelab with 20+ apps in it, and all their logs together wouldn’t add up to that in years, even without log rotation. I don’t understand the poster’s decision to blame this on “western game devs” when it’s just a bug by whoever created the engine.

1 point

It could be a matter of storing non-text data in an uncompressed text format. All files are ultimately just 0s and 1s; binary data could be “logged” as a massive text representation instead of its original compressed format.
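A quick illustration of how much that inflates things — hex doubles the byte count, and base64 adds roughly a third (the exact encodings a hypothetical logger would use are an assumption here):

```python
# Encoding binary data as text inflates it: hex is 2 characters per
# byte, base64 is ~4/3 the original size.
import base64
import os

blob = os.urandom(1000)          # 1000 bytes of "binary" data
as_hex = blob.hex()              # 2 text chars per byte -> 2000 chars
as_b64 = base64.b64encode(blob)  # ceil(1000/3)*4 -> 1336 chars

print(len(blob), len(as_hex), len(as_b64))  # 1000 2000 1336
```

And that’s before repetition: dumping the same binary payload into a text log on every frame or request multiplies the waste again.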

5 points

Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, ~~deleting~~ archiving the oldest

FTFY
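A sketch of that archiving variant, using the `rotator`/`namer` hooks on Python’s stdlib `RotatingFileHandler` to gzip rotated files rather than keep them as plain text (note that `backupCount` still caps how many archives are retained):

```python
# Archive rotated logs as gzip instead of plain-text copies.
import gzip
import logging
import logging.handlers
import os
import shutil
import tempfile

log_dir = tempfile.mkdtemp()
handler = logging.handlers.RotatingFileHandler(
    os.path.join(log_dir, "app.log"), maxBytes=1024, backupCount=5
)

def gzip_rotator(source, dest):
    # Called on rollover: compress the just-closed log into dest,
    # then drop the uncompressed original.
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

handler.rotator = gzip_rotator
handler.namer = lambda name: name + ".gz"  # app.log.1 -> app.log.1.gz

logger = logging.getLogger("archive-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(400):
    logger.info("event %d", i)

gz_files = sorted(f for f in os.listdir(log_dir) if f.endswith(".gz"))
print(gz_files)
```

For true long-term archiving you’d move the compressed files to separate storage instead of letting `backupCount` eventually delete them, but the hook point is the same.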

3 points

Sure! Best practices vary with your application. I’m a dev, so I’m used to configuring stuff for local env use. In prod, archiving is definitely nice so you can trace back even through heavy logging. Though, tbh, if your application is getting used by that many people, a DB logging system is probably just straight up better.
