VOID-in, NULL-out convention vs. LOGIC!-returning actions

We're now in a situation where VOID is rejected as input by most routines. In this world, "VOID-in, NULL-out" has proven to be an excellent replacement for what was long discussed under the heading of none propagation. I summarize the theory of its benefits here, and why routines following "NONE!-in, NONE!-out" were a dangerous idea.

But it can't be followed blindly. As an obvious-if-you-think-about-it case, LOGIC!-returning routines have to handle void a different way.

Getting Tricked By Inverse LOGIC!

I wanted to write the following:

if exists? maybe some-dir: get-env 'SOME-DIRECTORY [

If GET-ENV returns null, the MAYBE turns it to void for EXISTS? to process. But...what if the routine were called DOESN'T-EXIST?, and it followed VOID-in, NULL-out? It would make it look like void inputs did exist, if you were just checking the result for truthiness or falseyness. :frowning:

This seems like a pretty solid proof that functions returning LOGIC! should not conflate their answers with NULL. (Note: I know exists? is currently conflated with FILETYPE OF, so it doesn't actually return a LOGIC!, but that's just a bug that hasn't been tended to. The point stands.)
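The hazard can be sketched in Python terms (a hypothetical analogy, not actual Ren-C code — `doesnt_exist` is an invented name): if a predicate follows a "None-in, None-out" convention and its caller only checks truthiness, the inverted predicate silently gives the wrong answer, because the propagated "no value" is falsey.

```python
import os

def doesnt_exist(path):
    """Hypothetical predicate following "None-in, None-out" (the
    analogue of VOID-in, NULL-out). Returning None on null input
    is exactly the conflation being warned about."""
    if path is None:
        return None  # propagate "no value"... but None is falsey!
    return not os.path.exists(path)

# Analogue of `get-env 'SOME-DIRECTORY` when the variable is unset:
missing_dir = None

# A caller testing only truthiness reads the propagated None as if it
# were False -- i.e. as if the (nonexistent) path *did* exist.
if not doesnt_exist(missing_dir):
    print("looks like it exists -- wrong conclusion for a null input")
```

The fix in the analogy is the same as the rule being proposed: a boolean-returning predicate must either decide true/false or raise, never hand back a falsey "no answer".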

But what about NaN handling?

I had a theory that VOIDs and NULLs could act as the quiet NaN and signaling NaN forms of "not-a-number" (NaN). The goal of this is to allow math handling to be more graceful, without needing to set up TRAPs and such--you can be selective about which operations you are willing to have fail, and supply code to fill in such cases.

Wikipedia has a little table about how NaNs work with comparisons:

Comparison between NaN and any floating-point value x (including NaN and ±∞)

  • NaN ≥ x => Always False
  • NaN ≤ x => Always False
  • NaN > x => Always False
  • NaN < x => Always False
  • NaN = x => Always False
  • NaN ≠ x => Always True

Look at that last case. If VOID is the quiet NaN, you can't have that comparison returning NULL, because it would be falsey instead of truthy.
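That IEEE 754 behavior is easy to confirm in any language with native floats; here is a quick Python check of the table:

```python
import math

nan = math.nan

# Every ordered/equality comparison involving NaN is False...
assert not (nan >= 1.0)
assert not (nan <= 1.0)
assert not (nan > math.inf)
assert not (nan < -math.inf)
assert not (nan == nan)

# ...except inequality, which is always True. So a quiet-NaN-style
# VOID fed to != has to come out *truthy* -- a falsey NULL result
# would invert the IEEE semantics.
assert nan != 1.0
assert nan != nan
```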

When these routines get VOID they have to decide whether to return true or false, case by case. It's a close analogy to how EXISTS? and DOESN'T-EXIST? must use their discretion on void input.

However, the math operations that normally return numbers and feed into these situations DO follow VOID-in, NULL-out. This is the proposed behavior:

>> square-root -1  ; Note: `square-root void` is also null
== ~null~  ; isotope

>> maybe square-root -1
; void

>> 1 + square-root -1
** Error: + doesn't accept NULL for its value2 argument

>> 1 + (square-root -1 else [10])  ; selective handling
== 11

>> 1 + maybe square-root -1  ; propagation
== ~null~  ; isotope

>> 10 != (1 + try square-root -1)
** Error: != doesn't accept NULL for its value2 argument

>> 10 != (maybe 1 + maybe square-root -1)
== ~true~  ; isotope

So that demonstrates a bit of nuance involved in the "VOID-in, NULL-out" rule. LOGIC!-bearing routines should still only return LOGIC!, and if for some reason they can't make a reasonable call one way or the other, they need to raise an error rather than ever returning NULL.


Using NULL for refinements puts a somewhat unfortunate spin on the idea of NULL being the signaling form.

 some-operation: func [... /parameter [decimal!]] [...]

Now imagine you write some-operation/parameter ... square-root -1. While math operations don't take NULL, refinements are revoked by it. You actually get the opposite of what you want... the parameter acts as if it wasn't supplied at all, whereas if you used MAYBE you would get an error because the parameter does not take VOID.
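The problem can be mocked up in Python (again a hypothetical analogy — `some_operation` and `sqrt_or_none` are invented names): treat an optional parameter whose value is `None` as a "revoked refinement", and the failed math quietly erases the caller's intent instead of stopping it.

```python
def some_operation(value, parameter=None):
    """Hypothetical analogue: parameter=None behaves as if the
    /parameter refinement was never supplied at all (revoked)."""
    if parameter is None:
        return value  # refinement silently dropped
    return value * parameter

def sqrt_or_none(x):
    """Analogue of SQUARE-ROOT under VOID-in, NULL-out: a bad
    input yields None instead of raising."""
    return None if x < 0 else x ** 0.5

# The failed sqrt revokes the refinement instead of erroring:
result = some_operation(10, parameter=sqrt_or_none(-1))
# result == 10 -- the bad number vanished rather than raising
```

This is the opposite of the "signaling" behavior one would want from a signaling-NaN stand-in: the error site is silently skipped rather than flagged.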

I'd welcome review from anyone who can throw in thoughts on these matters.

  • I know I like failed conditionals being VOID, e.g. fully non-valued. if false [...] => VOID

  • I'm fairly confident I like the void state for revoking refinements. That is to say append/dup [a b c] [d e] if false-condition [5]

  • NULL out for operations that pick items feels rigorous. If you write select [a 10 b 20] 'c and get NULL back you know it didn't succeed.
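The rigor of that last point has a familiar Python parallel (illustrative only — `dict.get` is the analogue of SELECT here): a miss comes back as a distinguished "no value" rather than some in-band default.

```python
colors = {"a": 10, "b": 20}

# Analogue of SELECT's NULL-out: a failed lookup is unambiguous.
assert colors.get("c") is None   # miss -> the "null" signal
assert colors.get("b") == 20     # hit  -> the actual value
```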

I've been conflicted because it's just counter to my intuitions to have NULL be so darn friendly that you can get it out of variables without error, and pass it to any refinement to take that refinement away, etc. But it seems that's what it needs to do, and it's like a more rigorous way of getting at what NONE! was going for initially...giving you the choice to decide when you care about value-ness or not.

Whatever the other implications, my proposal of using NULL for a signaling NaN pretty much won't work here. square-root -1 needs to be at least as unfriendly as a NONE! if it's not going to raise an error in the operation itself, because there are more places you pass bad numbers to than just other math operations.