Codelet: A better saturate/clamp01 function
Let’s talk about clamping a float between 0.0 and 1.0.
(If you’re wondering “What’s a codelet and why is it in the title of this blog post?”, it’s a silly name I’m using for my less involved (nominally shorter, but, err…) blog posts that are on subjects not interesting or important enough to get their own post, like this function).
Let’s assume you need to clamp a float between +0.0 and +1.0. The function for this is conventionally named something like `saturate` or `clamp01`, and it’s frequently performed before quantizing the float to an integer (more on that in a bit).
This is quite easy to write, especially if you don’t particularly care about the behavior in edge cases, but a bit subtle if you do. There are two edge cases:
- `NaN`s, such as the result of `0.0/0.0`. These are “infectious” numbers which return false for all comparisons.
- `-0.0` (yes, floating point numbers have a sign for zero) will generally behave identically to zero, but is less than `0.0` when using either: a. the total ordering of floats, b. the ordering implied by `nextafter`/`nexttowards` (if this were a more serious blog post, I’d double-check that these are, in fact, distinct orderings). Both cases are demonstrated in the snippet just below.
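To make those two cases concrete, here’s a quick demonstration (not from the original post; it uses the stable `f32::total_cmp` as a stand-in for the total ordering):

```rust
use std::cmp::Ordering;

fn main() {
    // NaN: produced by 0.0 / 0.0; every comparison involving it is false.
    let nan = 0.0_f32 / 0.0;
    assert!(nan.is_nan());
    assert!(!(nan > 0.0) && !(nan < 1.0) && !(nan == nan));

    // -0.0: equal to 0.0 under `==`, but ordered before it by the
    // total ordering (here via `f32::total_cmp`).
    let neg_zero = -0.0_f32;
    assert!(neg_zero == 0.0);
    assert_eq!(neg_zero.total_cmp(&0.0), Ordering::Less);
}
```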
Now, the naive function (e.g. `some_float.clamp(0.0, 1.0)` from the Rust standard library) will likely return NaN when `self` is NaN, and seems to return `-0.0` when `self` is `-0.0`.
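For the record, here’s roughly what that looks like with the stdlib `clamp`. The NaN case is documented behavior; the `-0.0` case is just what the current implementation appears to do, so treat the second assert as an observation rather than a guarantee:

```rust
fn main() {
    // Documented: `clamp` returns NaN if the input is NaN.
    assert!(f32::NAN.clamp(0.0, 1.0).is_nan());
    // Observed (not promised by the docs): -0.0 passes through unchanged.
    assert!((-0.0_f32).clamp(0.0, 1.0).is_sign_negative());
}
```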
This is kind of annoying for some cases, but also arguably fine1.
- For the first case, if you’re going to continue working with the number, there’s an argument that it’s good behavior to produce a NaN when the input is a NaN, so that you avoid hiding bugs.
- For the `-0.0`, it probably doesn’t matter much (even if it is a bug), but there are good reasons to use total order comparisons in some sensitive places2, in which case caring about `-0.0` is not totally pointless.
```rust
/// Clamps `v` between `0.0` and `1.0`, mapping
/// `NaN`s and `-0.0` inputs to a `0.0` output.
#[inline]
pub fn robust_saturate(v: f32) -> f32 {
    // This check is carefully phrased to return false
    // for our special cases as well.
    if v > 0.0 {
        if v < 1.0 {
            v
        } else {
            1.0
        }
    } else {
        0.0
    }
}
```
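And a quick sanity check of what the doc comment claims (my own additions, not part of the original post):

```rust
fn main() {
    assert_eq!(robust_saturate(0.25), 0.25);
    assert_eq!(robust_saturate(-3.0), 0.0);
    assert_eq!(robust_saturate(7.5), 1.0);
    // Both special cases fail the `v > 0.0` check and fall into the
    // `else` branch, coming out as a plain +0.0.
    assert_eq!(robust_saturate(f32::NAN), 0.0);
    assert!(robust_saturate(-0.0).is_sign_positive());
}
```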
Should Rust’s Stdlib Do This?
Now, the obvious question is: “if this is so good, shouldn’t the Rust stdlib be implementing it this way?”
Uh, debatable, honestly. So, IEEE-754 defines two sets of min and max functions: `minimum`/`maximum`, which propagate NaN, and `minimumNumber`/`maximumNumber`, which swallow NaN (… all of these functions are defined to treat `-0.0` as less than `+0.0`, which Rust fails to do 😅).
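For illustration, here’s roughly what those two flavors look like spelled out by hand for `min` (a sketch of the spec’s behavior as I understand it, not the stdlib’s implementations; the names just mirror IEEE-754’s):

```rust
// Sketch of IEEE-754's `minimum`: propagates NaN, and treats -0.0
// as less than +0.0.
fn ieee_minimum(a: f32, b: f32) -> f32 {
    if a.is_nan() || b.is_nan() {
        f32::NAN
    } else if a == b {
        // The zeros compare equal, so pick the negative one explicitly.
        if a.is_sign_negative() { a } else { b }
    } else if a < b {
        a
    } else {
        b
    }
}

// Sketch of IEEE-754's `minimumNumber`: same ordering, but a NaN
// operand is ignored in favor of the other argument.
fn ieee_minimum_number(a: f32, b: f32) -> f32 {
    if a.is_nan() {
        b
    } else if b.is_nan() {
        a
    } else {
        ieee_minimum(a, b)
    }
}
```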
Rust (in theory) defines `clamp` in terms of `minimum` and `maximum`. This is probably fine (makes errors show up sooner, supported in hardware), and while I think there’s a real case to be made for using the NaN-swallowing ones here (since it actually bounds the number to the range), it’s probably too situational…
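For what it’s worth, you can approximate the NaN-swallowing flavor today with the existing `f32::max`/`f32::min`, which ignore a NaN argument. A minimal sketch (my own, and note it makes no particular promise about the sign of zero):

```rust
// Saturate built from the NaN-ignoring `f32::max`/`f32::min`:
// a NaN input falls out as 0.0, but the -0.0 handling is unspecified.
fn saturate_via_min_max(v: f32) -> f32 {
    v.max(0.0).min(1.0)
}
```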
Another elephant in the room is the question of performance: `robust_saturate` ends up branching for these checks (in practice these will be exceptionally predictable, but they will confuse the compiler’s autovectorization if nothing else), whereas `some_f32.clamp(0.0, 1.0)` is likely to use hardware min/max instructions3.
So, does that mean `f32::clamp` is definitely faster? Well, on its own, yeah, probably. If we win anywhere, it’s in a quantization case like `f32_saturate_quantize_u8` below, particularly when using `to_int_unchecked` for the result. This essentially avoids doing another clamp after the one we already did, and can be measurable4.
```rust
// An example quantization routine that gets performance
// benefits from `robust_saturate`.
#[inline]
fn f32_saturate_quantize_u8(f: f32) -> u8 {
    // centered quantization: follows gpu behavior for unorm,
    // and (imo) behaves more nicely than floored quantization
    let v = robust_saturate(f) * 255.0 + 0.5;
    // Note: if we used `f.clamp(0.0, 1.0)` it would
    // be unsound to use `to_int_unchecked` here.
    unsafe { v.to_int_unchecked::<u8>() }
}
```
That said, you should probably be using SIMD for this anyway if you have enough floats for the performance to matter. And ideally, you should know in advance that you have no NaNs, and be using smarter float2int casting…
All that together probably comes out to: Nah, it’s a bit too situational.
Huh. That wasn’t as short as I thought it would be (although, it being a codelet frees me from the responsibility of writing a benchmark suite to go along with this to back up my claims).
(Now, time to alert someone about the min/max issue…)
- … Well, although I think the `-0.0` behavior is actually a violation of IEEE-754, at least if `clamp` is defined in terms of `minimum`/`maximum`, which seems likely. ↩
- Physics and Rendering are both places where you can benefit from a radix/bucket sort, which for floats will tend to mean a total order sort (if you do it correctly). Now, I don’t per-se encourage using total order sorts in these places, but Rust kind of is a pain about it, and the performance boost from this kind of sort is often extremely worth it, if you’re going to sort a lot of things (more on that another time). ↩
- The hardware will almost certainly implement `minimum`/`maximum` from above; there’s not generally hardware support for `minimumNumber`/`maximumNumber`. ↩
- Although, if there are two clamps happening here… perhaps there’s a way to push them in the other direction, so that all the clamping is done by the other clamp. ↩