I prefer simplicity and would go with the first example, but I’d be happy to hear other options. Here are a few examples:
HTTP/1.1 403 POST /endpoint
{ "message": "Unauthorized access" }
HTTP/1.1 403 POST /endpoint
Unauthorized access (no JSON)
HTTP/1.1 403 POST /endpoint
{ "error": "Unauthorized access" }
HTTP/1.1 403 POST /endpoint
{
  "code": "UNAUTHORIZED",
  "message": "Unauthorized access"
}
HTTP/1.1 200 (🤡) POST /endpoint
{
  "error": true,
  "message": "Unauthorized access"
}
HTTP/1.1 403 POST /endpoint
{
  "status": 403,
  "code": "UNAUTHORIZED",
  "message": "Unauthorized access"
}
Or your own example.
Giving back a 200 for an error always makes me bristle. Return correct codes, people. “But the request to the web server was successful!”
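For what it’s worth, the two things aren’t mutually exclusive: you can send the real status code and a machine-readable JSON body. Here’s a minimal sketch using Python’s standard-library http.server; the /endpoint path and the code/message envelope are just borrowed from the examples above, not anybody’s real API.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/endpoint":
            # Pretend authorization failed: send a 403, not a 200.
            self.send_json_error(403, "UNAUTHORIZED", "Unauthorized access")
        else:
            self.send_json_error(404, "NOT_FOUND", "No such endpoint")

    def send_json_error(self, status, code, message):
        body = json.dumps({"status": status, "code": code, "message": message}).encode()
        self.send_response(status)  # the real HTTP status line
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()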
I use this big, expensive simulator called Questa, and if there’s an error during the simulation it prints “Errors: 1, Warnings: 0” and then exits with EXIT_SUCCESS (0)! I tried to convince them that this is wrong, but they’re like “but it successfully simulated the error”. 🤦🏻‍♂️
We end up parsing the output, which is very dumb but also seems to be standard practice in the silicon industry, unfortunately (hardware people are not very good at software engineering).
That’s when you use different exit codes: say, 1 for a failure during the simulation run itself, 2 for a simulation that ran but reported errors.
Shame they wouldn’t listen.
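Since the vendor won’t budge, the workaround ends up being exactly that wrapper: run the tool, scrape the “Errors: N” summary out of the log, and re-derive a meaningful exit code. A rough sketch, assuming the log format quoted above and the 1-vs-2 convention suggested here (the actual simulator command line is whatever your flow normally uses):

import re
import subprocess
import sys

def main():
    # e.g. python wrap_sim.py vsim -c -do run.do  (invocation is illustrative)
    cmd = sys.argv[1:]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    sys.stdout.write(proc.stdout)
    sys.stderr.write(proc.stderr)

    if proc.returncode != 0:
        return 1  # the tool itself failed during simulation

    match = re.search(r"Errors:\s*(\d+)", proc.stdout)
    if match is None or int(match.group(1)) > 0:
        return 2  # simulation ran, but reported errors (or printed no summary)
    return 0  # genuinely clean run

if __name__ == "__main__":
    sys.exit(main())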
I worked on a product that was only allowed to return 200 OK, no matter what.
Apparently some early and wealthy customer was too lazy to check error codes in the response, so we had to return 200 or else their site broke. Then we’d get emails from other customers complaining that our response codes were wrong.
I don’t necessarily disagree, but I have spent considerable time on this subject and can see merit in decoupling your own error signaling from the HTTP layer.
No matter how you design your API, if your traffic passes through additional layers like load balancers and CDNs, you no longer have full control over every response your clients receive; those layers can produce their own status codes before a request ever reaches your backend. At that point it may be viable to always signal a successful backend connection with a 200, even if the processing itself resulted in a failure.
Going further, your API may include partial-success scenarios (think batch processing), where the result is a mix of successes and failures that doesn’t map onto a single HTTP status code.
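To make the batch case concrete, a partial-success body might look something like this; the field names are invented for illustration, and the point is simply that the client has to look inside the payload rather than at a single status code.

# Overall HTTP exchange: 200. Success/failure lives per item in the body.
batch_response = {
    "status": "partial",  # overall outcome: "ok" | "partial" | "failed"
    "results": [
        {"id": "a1", "ok": True},
        {"id": "a2", "ok": False, "code": "UNAUTHORIZED", "message": "Unauthorized access"},
        {"id": "a3", "ok": True},
    ],
}

# The client handles outcomes item by item; no single HTTP status code
# could have carried this information.
for item in batch_response["results"]:
    if not item["ok"]:
        print(f"item {item['id']} failed: {item['code']}: {item['message']}")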
You could even argue that there is really no reason to couple your API so tightly to a concept from the transport layer it happens to use.
You should consider whether you really want to integrate your application this tightly with the HTTP protocol. Will it always be used exclusively over a RESTful HTTP API that you control, with exactly one hop to the client, or only hops that can be trusted never to alter the HTTP metadata significantly? In that case you can afford to make HTTP status codes semantically relevant to your app.
But maybe you need to pass data through multiple different kinds of layers and mechanisms (socket protocols, pub-sub, file storage, etc.). In that case you want all your semantics to be independent of any particular transport.
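Taken to its conclusion, that means keeping the outcome in a small, transport-agnostic envelope that serializes the same way regardless of how it travels. A sketch of what that could look like; the type and field names here are made up, not a standard.

import json
from dataclasses import dataclass, asdict
from typing import Any, Optional

@dataclass
class Outcome:
    ok: bool
    code: str  # application-level code, e.g. "UNAUTHORIZED"
    message: str = ""
    data: Optional[Any] = None  # payload on success

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "Outcome":
        return cls(**json.loads(raw))

# The same bytes can be an HTTP response body, a pub-sub message, or a line
# in a results file; any HTTP status around it only says "the transport worked".
wire = Outcome(ok=False, code="UNAUTHORIZED", message="Unauthorized access").to_json()
print(Outcome.from_json(wire))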