They both have their place. I just recently discovered a bug in a Lemmy bot I wrote, where the Lemmy API module raises an exception if login fails (response status code != 200). That feels extremely out of place, as the error/status code does matter in that case.
Other times exceptions make more sense as Phillip pointed out.
It’s easier to ask for forgiveness than permission, after all.
One problem with exceptions is composability.
You have to rely on good and up-to-date documentation or you have to dig into the source code to figure out what exceptions are possible. For a lot of third party dependencies (which constitute a huge part of modern software), both can be missing.
Error types are a mitigation, but you are still free to e.g. panic in Rust if you think the error is unrecoverable.
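In Rust the choice is visible at every call site: the error comes back as a value, and the caller either handles it or explicitly declares it fatal. A minimal sketch (the `load_config` function and its error type are made up for illustration):

```rust
use std::fmt;

// Hypothetical error type, just for this example.
#[derive(Debug)]
struct ConfigError(String);

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "config error: {}", self.0)
    }
}

// A fallible function returns the error as an ordinary value.
fn load_config(path: &str) -> Result<String, ConfigError> {
    if path.is_empty() {
        return Err(ConfigError("empty path".into()));
    }
    Ok(format!("loaded {path}"))
}

fn main() {
    // The caller can handle the error as a value...
    match load_config("") {
        Ok(cfg) => println!("{cfg}"),
        Err(e) => eprintln!("{e}"),
    }
    // ...or declare it unrecoverable and panic with a message.
    let cfg = load_config("app.toml").expect("config must exist");
    println!("{cfg}");
}
```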
A third option is to have effect types like Koka, so that all possible exceptions (or effects) can be checked at type level. A similar approach can be observed in practical (read: non-academic) languages like Zig. It remains to be seen whether this style can be adopted by the mainstream.
They make valid points, and maybe it makes sense to always prefer them in their context.
I don’t think exceptions always lead to better error handling and messages though. It depends on what you’re handling.
An exception is detailed and carries a lot of info, but it often lacks context and a concise, obvious error message. When you catch in outer code and then get an “inaccessible resource” exception, it tells you nothing. You have to go through the stack trace and analyze which cases could be covered.
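Error values have the same gap to bridge, and the usual mitigation in Rust is to wrap the low-level error with context as it propagates, so the outer handler sees more than “inaccessible resource”. A sketch with hypothetical names:

```rust
use std::fmt;

// Hypothetical low-level failure, analogous to "inaccessible resource".
#[derive(Debug)]
struct ResourceError;

impl fmt::Display for ResourceError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "inaccessible resource")
    }
}

// A wrapper that adds the context the outer handler is missing.
#[derive(Debug)]
struct Contextual {
    context: String,
    source: ResourceError,
}

impl fmt::Display for Contextual {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.context, self.source)
    }
}

fn fetch_avatar(user: &str) -> Result<Vec<u8>, Contextual> {
    // The bare low-level error tells the caller nothing,
    // so attach what this layer knows before returning it.
    Err(Contextual {
        context: format!("fetching avatar for user {user}"),
        source: ResourceError,
    })
}

fn main() {
    if let Err(e) = fetch_avatar("alice") {
        eprintln!("{e}");
    }
}
```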
If explicit errors don’t lead to good handling I don’t think you can expect good exception throwing either. Both solutions need adequate design and implementation to be good.
Having a top-level handler (in a server context, per request or connection) that handles and discards one context while the program continues to run for the others is certainly simple. Not having to propagate errors simplifies the code. But it also hides error states and possibilities across the entire stack between the outer catch and the deepest possible throw.
In my (C#) projects I typically make conscious decisions between error states/results on one hand and truly exceptional exceptions on the other, where basic assumptions are violated or programming errors exist.
The guy keeps on picking on Go, which is infamous for having terrible error handling, and then he has the nerve to even pick on the UNIX process return convention, which was designed in the 70s.
The few times he mentions Rust, for whatever reason he keeps on assuming that .unwrap() is the only choice, whose use is decidedly discouraged in production code.
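For reference, `.unwrap()` is only the bluntest of several options; the hypothetical `parse_port` helper here just wraps `str::parse`:

```rust
use std::num::ParseIntError;

fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

// `?` propagates the error to the caller instead of panicking.
fn bump(s: &str) -> Result<u16, ParseIntError> {
    Ok(parse_port(s)? + 1)
}

fn main() {
    // Pattern-match and handle both cases explicitly:
    match parse_port("8080") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // Fall back to a default on error:
    let port = parse_port("oops").unwrap_or(8080);
    println!("port = {port}");

    // Or let `?` pass the error up the stack:
    if let Ok(p) = bump("80") {
        println!("bumped to {p}");
    }
}
```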
I do think there is room for debate here. But error handling is a hellishly complex topic, with different needs depending on, among other things:
- short- vs. long-running processes
- API vs. user-facing
- small vs. big codebase
- library vs. application code
- prototyping vs. production phase
And even if you pick out a specific field, the two concepts are not clearly separated.
Error values in Rust usually have backtraces these days, for example (unless you’re doing embedded where this isn’t possible).
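A sketch of how an error value can carry its own backtrace using only the standard library (`std::backtrace` is stable since Rust 1.65; whether a real trace is captured depends on `RUST_BACKTRACE`):

```rust
use std::backtrace::Backtrace;
use std::fmt;

// A hypothetical error value that records where it was created.
#[derive(Debug)]
struct MyError {
    message: String,
    backtrace: Backtrace,
}

impl MyError {
    fn new(message: &str) -> Self {
        MyError {
            message: message.to_string(),
            // Captures a backtrace if RUST_BACKTRACE=1 is set,
            // otherwise stores a cheap "disabled" placeholder.
            backtrace: Backtrace::capture(),
        }
    }
}

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.message)
    }
}

fn main() {
    let err = MyError::new("something failed");
    eprintln!("{err}");
    eprintln!("{}", err.backtrace);
}
```

Crates like `anyhow` do essentially this for you, but the mechanism itself is plain std.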
Or Java makes you list exceptions in your function signature (except for unchecked exceptions), so you actually can’t just start throwing new exceptions in your little corner without the rest of the codebase knowing.
I find it quite difficult to properly define the differences between the two.
The handling is enforced by one while the other may be unknown to the person who calls the function. I think that’s a pretty clear difference.
Does the performance cost of error checking/result types they discovered in C++ apply to languages that have native result and option types like Rust?
I would hope they were able to find efficient, performant implementations, and that branch prediction picks the expected non-error branch in most cases.
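On the representation side at least, Rust’s niche optimization means the wrapper type often costs nothing in size: `None` is encoded in a bit pattern the payload can never have, so `Option<&T>` is pointer-sized. A quick check (this says nothing about branch cost, only layout):

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // References and Boxes can never be null, so None reuses
    // the null bit pattern and the wrapper adds no bytes.
    assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());
    assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<Box<u64>>());

    // Same trick with NonZero integers: zero encodes None.
    assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());

    println!("ok");
}
```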