121 points

Before nginx was a thing, I worked with a guy who forked Apache httpd and wrote this blog in C. Like, he literally embedded the HTML and CSS inside the server, so when he made a typo or added another post he had to recompile the source code. The performance was out of this world.

32 points

There are a lot of solutions like that in Rust. You basically compile the template into your code.
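A minimal stdlib-only sketch of the idea, where the template string and the `render` function are hypothetical stand-ins (real engines like askama instead generate the rendering code from the template at compile time):

```rust
// The template text lives in the binary itself; `include_str!("post.html")`
// would pull it in from a file at compile time instead.
const TEMPLATE: &str = "<h1>{title}</h1><p>{body}</p>";

/// Naive stand-in for the code a real template engine would generate.
fn render(title: &str, body: &str) -> String {
    TEMPLATE
        .replace("{title}", title)
        .replace("{body}", body)
}

fn main() {
    let page = render("Hello", "Compiled in, no template file read at runtime.");
    assert!(page.starts_with("<h1>Hello</h1>"));
    println!("{page}");
}
```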

8 points

yeah, templates can be parsed at compile time, but these frameworks are not embedding whole fucking prerendered static pages/assets

8 points

They are nowadays. Compiling assets and static data into Rust and delivering the virtual DOM to the browser over a WebSocket is the new cool kid in the corner.

Have a look at dioxus

2 points

Compiling all assets into the binary is trivial in Rust. When I have a small web server that generates everything in code, I usually compile the favicon into the binary.
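A sketch of that, using a hypothetical 4-byte stand-in for the icon so it is self-contained; in a real project `include_bytes!("favicon.ico")` would embed the actual file at compile time:

```rust
// `include_bytes!` would embed the file's bytes into the binary at compile time:
// const FAVICON: &[u8] = include_bytes!("favicon.ico");
// Hypothetical stand-in bytes so this sketch compiles without the file:
const FAVICON: &[u8] = &[0x00, 0x00, 0x01, 0x00];

/// Build the HTTP response for GET /favicon.ico entirely from memory,
/// with no file I/O at request time.
fn favicon_response() -> Vec<u8> {
    let mut resp = format!(
        "HTTP/1.1 200 OK\r\nContent-Type: image/x-icon\r\nContent-Length: {}\r\n\r\n",
        FAVICON.len()
    )
    .into_bytes();
    resp.extend_from_slice(FAVICON);
    resp
}

fn main() {
    let resp = favicon_response();
    assert!(resp.starts_with(b"HTTP/1.1 200 OK"));
    println!("{} bytes on the wire", resp.len());
}
```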

15 points

Does a file lookup really take that long? I'd say the trick was to have just plain old HTML with no bloat and you're golden.

29 points

Blog content was stored in memory and served zero-copy to the socket, so yeah, it's way faster. This was before the days of php-fpm and opcache that we use now. Back then things were deployed and talked over TCP sockets (TCP to Rails, Django, or PHP) or read from disk, when the best HDDs were 5600rpm, and even those were rare to find on shared hosting.

6 points

Couldn’t the HTML be loaded into memory at the start of the program and then served whenever? I understand that reading from disk will be slow, but that only happens once, at the beginning.

8 points

The answer is no. The more a file is used, the longer it sits in the kernel filesystem cache. Getting a file from the cache versus having it in process memory is a few function calls away, all of which take a few microseconds, which is negligible compared to network latency and the other small issues that might be present in the code.

On a few of our services we decided, on purpose, to store client configuration in JSON files instead of running some sort of database. Accessing the config is insanely fast, and the kernel makes sure the file is cached, so when reading it you always get a fast and up-to-date version. That service currently handles around 100k requests a day, but we've had peaks in the past of almost a million requests a day.

Besides, when it comes to human interaction and web sites, you only need to get first contentful paint within one second. Anything above 1.5s will feel sluggish, but below 1s it feels instant. That gives you on average around 800ms to send data back. Plenty of time, unless you have a dependency nightmare and parse everything all the time.
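The read-the-config-file-per-request pattern above can be sketched like this; the file name and the `rate_limit` key are hypothetical, and the temp-file setup just makes the sketch self-contained:

```rust
use std::fs;
use std::path::Path;

/// Read the client configuration straight from disk on every request.
/// The kernel keeps the hot file in its page cache, so repeated reads
/// cost a few syscalls rather than actual disk I/O, and always return
/// the latest version after the file is replaced.
fn read_config(path: &Path) -> std::io::Result<String> {
    fs::read_to_string(path)
}

fn main() -> std::io::Result<()> {
    // Hypothetical config; a real service would deploy this file separately.
    let path = std::env::temp_dir().join("client_config.json");
    fs::write(&path, r#"{"rate_limit": 100}"#)?;

    for _ in 0..3 {
        // The second and third reads are served from the page cache.
        let cfg = read_config(&path)?;
        assert!(cfg.contains("rate_limit"));
    }
    println!("config served from page cache");
    Ok(())
}
```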

13 points

This reminds me of one of my older projects. I wanted to learn more about network communications, so I started working on a simple P2P chat app. It wasn’t anything fancy, but I really enjoyed working on it. One challenge I faced was that, at the time, I didn’t know how to listen for user input while handling network communication simultaneously. So, after I had managed to get multiple TCP sockets working on one thread, I thought, why not open another socket for HTTP communication? That way, I could incorporate a fancy web UI instead of just a CLI.

So, I wrote a simple HTTP server, which, in hindsight, might not have been necessary.

11 points

Ah, you met fefe.

2 points

Fefe uses an LDAP server as the backend, not Apache.

3 points

He uses his own HTTP server called gatling, and an LDAP server instead of a database.

3 points

He also uses his own HTTP server, which in turn queries the LDAP server solely for the articles. The rest is compiled into the HTTP server binary.

1 point

Nothing good old caching can’t solve. Compile the JS and CSS, bundle the CSS with the main HTML file, and send it in batches, since HTTP/2 supports chunking your output. HTTP prefers one big stream over multiple smaller ones anyway. So that guy was only inviting trouble for himself.

5 points

You’re telling me about compiling JS, in reply to a story that old… I had to check, and yes, JS did exist back then. HTTP/2? Wasn’t even planned. This was back when IRC communities weren’t sure whether the P in LAMP stood for Perl or PHP, because both were equally popular ;)

2 points

I’m just saying, compiling your content into Apache is overkill. But I guess if Apache was old enough that doing so wasn’t much of a chore, sure thing. I still think an Apache module would have been simpler.
