Safe Numerics
1. Is this really necessary? If I'm writing the program with the requisite care and competence, the problems noted in the introduction will never arise. Should they arise, they should be fixed "at the source" and not with a "band aid" to cover up bad practice.
This surprised me when it was first raised, but some of the feedback I've received makes me think that it's a widely held view. The best answer is to consider the examples in the Tutorials and Motivating Examples section of the library documentation. I believe they convincingly demonstrate that any program which does not use this library must be assumed to contain arithmetic errors.
2. Can safe types be used as drop-in replacements for built-in types?
Almost. Replacing all built-in types with their safe counterparts should result in a program that compiles and runs as expected. Occasionally compile-time errors will occur and adjustments to the source code will be required; typically these result in code which is more correct.
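As a rough sketch of what "drop-in" means in practice (the header and namespace follow the library documentation; the exact diagnostic produced is an assumption here):

    #include <boost/safe_numerics/safe_integer.hpp>
    #include <exception>
    #include <iostream>
    #include <limits>

    int main() {
        try {
            // replace "int" with "safe<int>"; the surrounding code is unchanged
            boost::safe_numerics::safe<int> x = std::numeric_limits<int>::max();
            boost::safe_numerics::safe<int> y = x + 1; // undefined behavior with
                                                       // plain int; detected here
            std::cout << y << '\n';
        } catch (const std::exception & e) {
            std::cout << "error: " << e.what() << '\n';
        }
        return 0;
    }

The only source change is the type of x and y; the expression syntax, I/O and control flow are untouched.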
3. Why are there special types for literal values such as safe_signed_literal<42>?
By defining our own "special" type we can simplify the interface. Using such a literal type, the library knows at compile time not just the type of the operand but its exact value - and therefore its exact range - which is what permits range arithmetic to be carried out at compile time.
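A minimal sketch (the header name is taken from the library documentation; treat the details as illustrative): because the value is carried in the type itself, its exact range is available to the library at compile time.

    #include <boost/safe_numerics/safe_integer_literal.hpp>

    int main() {
        // only the value is specified; the library chooses a suitable type
        constexpr boost::safe_numerics::safe_signed_literal<42> answer;
        static_assert(answer == 42, "value is available at compile time");
        return 0;
    }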
4. Why is safe...literal needed at all? What's the matter with const safe<int>(42)?
The type of const safe<int>(42) is just safe<int> - a type whose range is [INT_MIN, INT_MAX]. So when an operation is performed, the range of the result is calculated from [INT_MIN, INT_MAX] rather than from [42, 42].
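A sketch of the difference (the exact result types are library implementation details):

    #include <boost/safe_numerics/safe_integer.hpp>
    #include <boost/safe_numerics/safe_integer_literal.hpp>

    using namespace boost::safe_numerics;

    int main() {
        const safe<int> a = 42;     // value known only at run time:
                                    // range is [INT_MIN, INT_MAX]
        safe_signed_literal<42> b;  // value fixed by the type: range is [42, 42]

        int x = a + a;  // converting the result back to int generally needs a
                        // runtime check, because the compile-time range of
                        // a + a is far wider than [84, 84]
        int y = b + b;  // provably 84 at compile time; no runtime check needed
        (void)x; (void)y;
        return 0;
    }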
5. Are safe type operations constexpr?
Yes. safe type construction and calculations are all constexpr.
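For example (a sketch assuming the default policies; the point is only that the whole expression is a constant expression):

    #include <boost/safe_numerics/safe_integer.hpp>

    using boost::safe_numerics::safe;

    // construction and arithmetic are constant expressions, so the result can
    // be used where a compile-time constant is required
    constexpr safe<int> x{10};
    constexpr safe<int> y{32};
    static_assert(x + y == 42, "evaluated - and range checked - at compile time");

    int main() { return 0; }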
6. Why define safe_literal? Isn't it effectively the same as std::integral_constant?
Almost, but there are still good reasons to create a different type. For example, std::integral_constant requires the type to be specified along with the value, and arithmetic on std::integral_constant operands simply decays to the underlying built-in type, so the results are neither range-tracked nor checked.
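A sketch of that difference (illustrative; the exact result types are implementation details):

    #include <boost/safe_numerics/safe_integer_literal.hpp>
    #include <type_traits>

    int main() {
        std::integral_constant<int, 42> a;
        auto x = a + a;   // both operands convert to int: x is a plain,
                          // unchecked int equal to 84
        static_assert(std::is_same<decltype(x), int>::value, "decays to int");

        boost::safe_numerics::safe_signed_literal<42> b;
        auto y = b + b;   // y is a safe type whose range is known at compile
                          // time to be [84, 84]
        (void)x; (void)y;
        return 0;
    }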
7. Why is Boost.Convert not used?
I couldn't figure out how to use it from the documentation.
8. Why is the library named "safe ..." rather than something like "checked ..."?
I used "safe" in large part because this is what has been used by other similar libraries. Maybe a better word would have been "correct", but that would raise similar concerns. I'm not inclined to change this. I've tried to make clear in the documentation what problem the library addresses.
9. Given that the library is called "numerics", why is floating point arithmetic not addressed?
Actually, I believe that this can/should be applied to any type T which satisfies the type requirement Numeric as described in the library documentation, so in principle there could be safe versions of floating point and other numeric types. The current implementation, however, addresses only the built-in integer types.
10. Isn't putting a defensive check just before any potential undefined behavior often considered a bad practice?
By whom? Is leaving code which can produce incorrect results better? Note that the documentation contains references to various sources which recommend exactly this approach to mitigate the problems created by this C/C++ behavior. See [Seacord].
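For reference, this is the kind of check such sources recommend writing by hand before a signed addition - and which the library generates automatically:

    #include <limits>
    #include <stdexcept>

    // hand-written defensive check preceding a potentially undefined operation
    int checked_add(int a, int b) {
        if ((b > 0 && a > std::numeric_limits<int>::max() - b) ||
            (b < 0 && a < std::numeric_limits<int>::min() - b)) {
            throw std::overflow_error("int addition would overflow");
        }
        return a + b;   // now guaranteed not to overflow
    }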
11. It looks like the implementation presumes two's complement arithmetic at the hardware level. So this library is not portable - correct? What about other hardware architectures?
As far as is known as of this writing, the library does not presume that the underlying hardware is two's complement. However, this has yet to be verified in any rigorous way.
12. According to the C/C++ standards, unsigned integers cannot overflow - arithmetic on them is modular and simply wraps around. Why does the library treat such wrap-around as an error?
The guiding purpose of the library is to trap incorrect arithmetic behavior - not just undefined behavior. Although a savvy user may understand and keep in mind that an unsigned integer is really a modular type, the plain reading of an arithmetic expression conveys the idea that all operands are common integers. Also, in many cases unsigned types are chosen for reasons other than a desire for modular arithmetic - to represent counts or array indices, for example - so a result which cannot be represented is still an error from the programmer's point of view. Within this library, therefore, an unsigned integer is treated as representing a subset of the integers, and results which fall outside that subset are trapped.
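A sketch of the distinction (how the error is reported is a policy detail):

    #include <boost/safe_numerics/safe_integer.hpp>
    #include <exception>
    #include <iostream>

    int main() {
        unsigned int a = 2, b = 3;
        std::cout << a - b << '\n';   // well-defined modular wrap-around:
                                      // prints 4294967295 for 32-bit unsigned

        try {
            boost::safe_numerics::safe<unsigned int> x = 2, y = 3;
            boost::safe_numerics::safe<unsigned int> z = x - y; // the true result,
                                                                // -1, cannot be
                                                                // represented
            std::cout << z << '\n';
        } catch (const std::exception & e) {
            std::cout << "error: " << e.what() << '\n';
        }
        return 0;
    }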
13. Why does the library require C++14?
The original version of the library used C++11. Feedback from CPPCon, the Boost Library Incubator and the Boost developers' mailing list convinced me that I had to address the issue of run-time penalty much more seriously. I resolved to eliminate or minimize it. This led to more elaborate meta-programming. But this wasn't enough. It became apparent that the only way to really minimize the run-time penalty was to implement compile-time integer range arithmetic - a fairly elaborate sub-library in its own right. By doing range arithmetic at compile time, I could skip runtime checking on many/most integer operations. While C++11 constexpr was not powerful enough to support this, C++14 constexpr is - hence the requirement.
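As an illustration of what compile-time range arithmetic buys (a sketch; whether a particular check is actually elided is up to the implementation):

    #include <boost/safe_numerics/safe_integer.hpp>
    #include <cstdint>

    using boost::safe_numerics::safe;

    // each operand is known at compile time to lie in [-128, 127], so the sum
    // lies in [-256, 254]; that interval fits in an int, so neither the
    // addition nor the conversion to int needs a runtime overflow check
    int sum(safe<std::int8_t> a, safe<std::int8_t> b) {
        return a + b;
    }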
14. This is a C++ library - yet you refer to C/C++. Which is it?
C++ has evolved way beyond the original C language, but it is still (mostly) compatible with C, so most C programs can also be compiled as C++. The problems of incorrect arithmetic afflict both C and C++. Suppose we have a legacy C program designed for some embedded system. Because such a program can usually be compiled as C++, it can be instrumented with safe types without disturbing its logic. This illustrates how this library, implemented with C++14, can be useful in the development of correct code for programs written in C.
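A sketch of the idea - the macro name and the typedef are purely illustrative, not part of the library:

    /* legacy C translation unit, now compiled as C++ */
    #ifdef TEST_WITH_SAFE_NUMERICS            /* hypothetical test-build flag */
        #include <boost/safe_numerics/safe_integer.hpp>
        typedef boost::safe_numerics::safe<int> quantity_t;   /* checked */
    #else
        typedef int quantity_t;                               /* original code */
    #endif

    quantity_t scale(quantity_t value, quantity_t factor) {
        /* overflow here is undefined behavior with plain int, but is detected
           and reported when the safe typedef is selected */
        return value * factor;
    }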
15. Some compilers (including gcc and clang) include builtin functions for checked addition, multiplication, etc. Does this library use these intrinsics?
No. I attempted to use these, but they are currently not constexpr, so they cannot be used in the compile-time portions of the library.
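For reference, this is the sort of gcc/clang intrinsic in question, shown here outside the library:

    #include <cstdio>

    int main() {
        int result;
        // reports overflow through its return value and writes the (wrapped)
        // result through the pointer
        if (__builtin_add_overflow(2000000000, 2000000000, &result))
            std::puts("overflow detected");
        else
            std::printf("%d\n", result);
        return 0;
    }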
16. Some compilers (including gcc and clang) include a builtin function for detecting constants. This seemed an attractive way to eliminate the requirement for the safe_literal type.
Alas, these builtin functions are defined as macros. Constants passed through functions down into the safe numerics library cannot be detected as constants. So the opportunity to make the library even more efficient by moving more operations to compile time doesn't exist - contrary to my hopes and expectations.
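The limitation can be seen with gcc/clang's __builtin_constant_p (illustrative only; the exact result depends on optimization and inlining):

    #include <cstdio>

    // a constant passed through an ordinary function parameter is, in general,
    // no longer recognizable as a constant inside the callee
    void report(int x) {
        std::printf("constant? %d\n", __builtin_constant_p(x));  // typically 0
    }

    int main() {
        std::printf("constant? %d\n", __builtin_constant_p(42)); // 1
        report(42);                                              // typically 0
        return 0;
    }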