In Ada, the overflow behaviour is determined by the type signature. You can also sometimes use SPARK to statically guarantee the absence of overflow in a program. In Rust, as I understand it, you can control the overflow behaviour of a particular arithmetic operation by wrapping a function or macro call around it, but that is ugly and too easy to omit.
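Concretely, the per-operation wrappers I mean are the checked/wrapping/saturating methods on Rust's integer types; a minimal sketch of how they differ (if I have the API right):

```rust
fn main() {
    let a: i32 = i32::MAX;

    // Plain `a + 1` panics in a debug build and wraps in a release build.
    // The explicit per-operation variants pick one behaviour at the call site:
    let checked = a.checked_add(1);    // Option<i32>: None on overflow
    let wrapped = a.wrapping_add(1);   // two's-complement wraparound: i32::MIN
    let clamped = a.saturating_add(1); // clamps at i32::MAX

    println!("{:?} {} {}", checked, wrapped, clamped);
}
```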
For ordinary integers, an arithmetic overflow is similar to an OOB array reference and should be trapped, though you might sometimes choose to disable the trap for better performance, similar to how you might disable an array subscript OOB check. Wraparound for ordinary integers is simply incorrect. You might want it for modular arithmetic, and that is fine, but in Ada you get it by saying so in the type declaration. Also in Ada, you can specify the min and max bounds, or the modulus in the case of modular arithmetic. For example, you could have a “day of week” integer ranging from 1 to 7 that traps if it ever goes outside that range.
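For comparison, the closest you get in Rust is a hand-rolled newtype with a runtime check on construction (the DayOfWeek type below is just a sketch I made up, not anything built in); the bounds never appear in the type declaration the way they do in Ada:

```rust
// A hand-rolled approximation of Ada's `range 1 .. 7` idea: the bounds live in
// a runtime check on construction, not in the type declaration itself.
#[derive(Debug, Clone, Copy)]
struct DayOfWeek(u8);

impl DayOfWeek {
    fn new(n: u8) -> Option<DayOfWeek> {
        if (1..=7).contains(&n) {
            Some(DayOfWeek(n))
        } else {
            None // Ada would raise Constraint_Error here
        }
    }
}

fn main() {
    println!("{:?}", DayOfWeek::new(3)); // Some(DayOfWeek(3))
    println!("{:?}", DayOfWeek::new(9)); // None
}
```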
GNAT imho made an error of judgment by disabling the overflow check by default, but at least you can turn it back on.
The RISC-V architecture designers made a harder-to-fix error by making all integer arithmetic wrap around, with no flags or traps to catch unintentional overflow, so you have to generate extra code for every arithmetic op to detect it.
It sounds like you’re talking about dependent typing, then, at least for integers? That’s certainly a feature Rust lacks that seems like it would be nice, though I understand it’s quite complicated to implement and would probably make Rust compile times much slower.
> For ordinary integers, an arithmetic overflow is similar to an OOB array reference and should be trapped, though you might sometimes choose to disable the trap for better performance, similar to how you might disable an array subscript OOB check.
That’s exactly what I described above. By default, trapping on overflow/underflow is enabled for debug builds and disabled for release builds. As I said, I think that is sensible behavior. But in addition to per-operation explicit handling, you can turn the global trapping behavior on or off in your build profile.
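The profile switch I’m referring to is `overflow-checks`; for example, to keep trapping on in release builds:

```toml
# Cargo.toml: keep overflow trapping enabled even in optimized release builds.
[profile.release]
overflow-checks = true
```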
In Ada? There are no dependent types; you just declare how to handle overflow, much like declaring int16 vs int32 or similar. Dependent types mean something entirely different, and they are checked at compile time. SPARK uses something more like Hoare logic. Regular Ada uses runtime checks.
Whatever you want to call them, my point is that most languages, including Rust, don’t have a way to define new integer types that are constrained by user-provided bounds.
Dependent types, as far as I’m aware, aren’t defined in terms of “compile time” versus “run time”; they’re just types that depend on a value. It seems to me that constraining an integer type to a specific range of values is a clear example of that, but I’m not a type theory expert.