The benefits of a falsey NULL (any major drawbacks?)

There's been an ongoing "shift in attitude" about NULL.

What has not changed: null is still "not a value". You can't put it in a block. And a variable can't "be null", it can only be unset...which you can find out about via not set? 'varname. GET gives you null as a way of signaling the unset-ness, but that's different from saying the variable "holds a null value".

x: null
if x = null [ ;-- errors on access of x
    print "if a variable could actually 'be null', this would be legal"
]
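
To make that concrete, here's a minimal sketch of how the unset-ness surfaces, going by the behavior just described (results shown as comments, not verbatim console output):

x: null       ;-- doesn't store a "null value", it unsets x
set? 'x       ;-- false, which is what not set? 'varname is checking for
get 'x        ;-- null, GET's way of signaling the unset-ness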

What has changed: increased acceptance of null to mean "don't append anything" or "don't compose anything" or "don't print anything". So it's not the extreme "hot potato" that it was at one point aiming to be.

But it's still a slightly warm potato: it differs from BLANK! in the sense that operations won't generally accept it as a data source. Non-mutating operations take blanks in and give nulls out. (e.g. select _ 'foo is null, while select null 'foo is an error--this helps keep error locality vs. collapsing entire chains without a deliberate TRY)
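
A minimal sketch of that convention, reusing the examples above (and assuming the TRY of this era, which converts a null into a blank so a chain can deliberately keep going):

select _ 'foo          ;-- null: blank comes in as a data source, null comes out
select null 'foo       ;-- error: null isn't accepted as a data source
select try null 'foo   ;-- null: the deliberate TRY turns the null into a blank first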

And there's a pretty big stigma NULL will never avoid, by virtue of not being able to fetch it from a variable via "normal" variable access.

But what good is it to not let NULL be falsey?

I just tried implementing the proposal for making APPEND or COMPOSE of BLANK! a no-op unless you used /ONLY. This has the nice property of compose [1 (any [...]) 2] being able to vaporize the ANY expression on failure. But it also got me to thinking, why isn't ANY returning null in the first place when it doesn't find a truthy thing?
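
(For concreteness, a hypothetical sketch of that vaporization property, with the results as the proposal describes them rather than a claim about what shipped:)

compose [1 (any [1 = 2  3 = 4]) 2]
;-- would give [1 2], the failed ANY's result vaporizing rather than
;-- leaving anything behind in the block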

The "big reason" was that people write if any [...] [...], and null is "neither true nor false", so that would be an error should it return NULL. But in this day and age, what's the great argument for why null shouldn't be falsey? When it shifted from name/conception of "void" to the more fitting "null", that makes the suggestion more palatable. Certainly NULL is falsey in many languages, C included.

When the question of truthy/falsey of UNSET! was debated in the CureCode days, people looked at the behavior of:

 all [
     1 < 2
     print "got here"
     3 < 4
 ]

There was a desire to not interrupt the ALL while injecting debug output. Since PRINT returned an UNSET!, it wasn't counted.

I thought the example was a bit contrived. A change to PRINT (or whatever) so that it returned something (perhaps the PORT! it printed to, or the data it printed) would throw this off. Or if you were using some other routine, you'd have to say:

 all [
     1 < 2
     ((some-diagnostic-function ...) ())
     3 < 4
 ]

With Ren-C there is ELIDE, which can be used generically in these situations:

 all [
     1 < 2
     elide (some-diagnostic-function ...)
     3 < 4
 ]

The existence of ELIDE...and wanting to be careful to not blindly proceed in the case of things like failed selects...were incorporated into an argument for why ALL began treating nulls as errors. But what if NULL was just plain old falsey, as far as conditionals were concerned?

It would bring back the casual-use scenarios which people liked, with the twist that you don't always wind up with a set variable (you'd still have to use TRY to get that):

if x: select data item [
    ;-- x is known set and not blank or false
] else [
    ;-- x may be unset, blank, or false
]
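
And if you did want x reliably set for testing later, the deliberate TRY covers it (again assuming a TRY that converts the null into a blank):

x: try select data item   ;-- x is always set: blank if the SELECT failed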

NULL could be the result of a failed ALL or ANY. Again, you'd still want to throw in a TRY if you were going to put it in a variable you wanted to test later. But by having these constructs return null on failure, you could use ALL and ANY with ELSE, ALSO, !!...
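
For instance, a hypothetical sketch under this proposal (data, 'a, and 'b are placeholder names):

any [find data 'a  find data 'b] else [print "neither was found"]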

It would make the current hacks which allow nulls for DID, NOT, OR and AND not-hacky.
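
e.g. these would just work by the ordinary falsey rules, rather than by special dispensation (a sketch, assuming the FIND-returns-null behavior described later in this post):

did find [a b c] 'b   ;-- true: DID gives the LOGIC! form of a truthy result
did find [a b c] 'q   ;-- false: the null from the failed FIND is simply falsey
not find [a b c] 'q   ;-- true: likewise NOT needs no special-casing for null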

The One Sacrifice

The balance of null tolerance in other places has shown us that the "safety" aspects aren't really viable. Nulls happen. Where safety comes in is when you read from variables, or when you put in asserts or ENSUREs, or add type annotations to parameters for your functions.

But there was a sort of an idea that by not blessing NULLs as falsey, there could be an established "tristate" in the system. Unlike the case of PRINT inside the ALL above...which always wants to "opt-out"...what if you had some MAYBE-VOTE function that wanted to sometimes return truthy, sometimes return falsey, and sometimes abstain from a vote via NULL? ELIDE doesn't cover that.

Well, that's kind of weird, and none of these actually exist. In fact, the one case I had that did exist stopped working, because I didn't want NULLs to be the "no vote"; I wanted them to be effectively the "only falsey value" that could break an ALL-like construct, with all other values treated as "truthy".

Just to further the point on the subjectivity of this: it's come into question why, if BLANK! is just "the reified form of null", the two should differ in their conditional behavior at all. What makes BLANK! so "falsey"? Why isn't it "neither-true-nor-false" like null?

It's hard to really see the downside

  • I know that treating null as falsey will mean simpler code inside the core.

  • I am nearly certain that treating null as falsey will simplify user code.

  • Failures of the "opt-out" voting model for nulls led to pushing for errors in ANY and ALL, which has caused cluttered usages of DID and TRY.

Can anyone speak up for the last time an error on a null really helped out?

Looking at things holistically, I think this is on the right track.

...and it means that a lot of APIs that used to return BLANK! as their "out of band" value should be returning NULL instead. This is particularly nice for the API, when things like MATCH and FIND react this way:

REBVAL *pos = rebRun("find", data, item, END);
if (not pos) {
    /* was NULL, so FIND failed... no rebRelease() needed... */
}

Because NULLs arose from trying to rethink UNSET! (and were originally called 'void'), they've carried over that stigma of being "armed" and difficult to work with. Whereas BLANK!s came from NONE! and were considered "nice". Plus with the _ form they became even more comfortable than #[none] (which in historical Rebol often confusingly displayed as just the word none).

But BLANK!s aren't necessarily ideal as a failure result. They each have identity. You could have a block with 1000 blanks in it...each one uniquely addressed by a unique pointer, each with its own distinct NEW-LINE flag, etc.

Yet NULL isn't a value, and can be distinguished with no false positives from ANY-VALUE! by routines like ELSE and ALSO. Nulls don't need unique addresses, so C's NULL is used to represent them, which can be tested for from the API without a separate function call.

So what's not to like about NULL? Isn't it punishment enough that non-type-annotated function parameters by default don't accept it, and it's not an ANY-VALUE! ?

When routines with "full-band" return results have to use NULL to distinguish from an arbitrary ANY-VALUE!, why should routines with "some-band, but not BLANK!" break rank and use BLANK!, just because they can? It's more consistent to standardize on NULL, and the API will thank you for it.

But here's the catch...

You knew there'd be a catch. :slight_smile:

As NULLs step up to the plate as the "light" failure signal we all need... evaporated by COMPOSEs, considered falsey by IFs and ALLs, vanishing in PRINTs... there may be one more thing you can't do with them (besides not being able to put them in blocks)...

You shouldn't be able to assign them to variables.

This will look a bit like a step back to historical Rebol, because it means x: () would not be legal. But by that token, x: all [... false] would not be legal, nor would x: find [a b] 'c be legal! You could still use direct assignments of this form, but you'd have to be confident that it would succeed...(and not succeeding will be automatically tested for you, by virtue of the error you'll get in the assignment if it doesn't).

(In other words, if you wrote x: all [...] it's effectively saying x: assert [all [...]]. Because if you weren't sure they were all going to be truthy, you'd have had to handle the null result somehow, e.g. x: try all [...] or x: all [...] else [...] or x: all [...] !! 10. Note again that if all [...] [...] is perfectly fine, as it's not trying to store that result in a variable, or an array slot.)
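
Spelled out with a concrete (if contrived) condition, and reading the operators as the paragraph above describes them:

x: try all [1 < 2  3 > 4]         ;-- the ALL comes up null, TRY makes it storable (a blank)
x: all [1 < 2  3 > 4] else [10]   ;-- ELSE's branch supplies the value on null, so x gets 10
x: all [1 < 2  3 > 4] !! 10       ;-- !! supplies a plain fallback value on null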

So if you weren't sure of success, all of these cases would require being handled in some way. Either stick on a DID or a TRY... hook on an ELSE, or an OR, or whatever. But you won't be able to put nulls into variables.

There are even more cool things you might not have thought about, e.g. MAYBE. MAYBE is like the anti-DEFAULT: it doesn't bother to do an assignment if the right hand side is null or blank. x: maybe find ... will leave x alone, for instance, if the FIND comes back null, rather than trying to do the assignment. It's really all rather clever. :slight_smile:
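
A quick sketch of that behavior (hypothetical data, with FIND returning null on failure as discussed above):

x: 10
x: maybe find [a b c] 'd   ;-- FIND comes back null, assignment is skipped, x is still 10
x: maybe find [a b c] 'b   ;-- FIND succeeds, x gets the position [b c]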

Hence NULL's "hot potato" is "make a decision before storing".

This brings up some fairly existential questions of how you'd go about unsetting variables, and I have bigger thoughts on that. One thought is that perhaps BLANK! is not so friendly after all... maybe variables that hold BLANK! give back NULL when you access them, and you can only get at the blank itself with a GET-WORD!, or else there's an error. Because if they didn't hold blank they couldn't be fetched at all (you'd get a not-bound error), you could assume that a NULL coming back from a variable access means it held BLANK!?
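
In code form, that speculative model would look something like this (purely illustrative, none of it is implemented):

x: _    ;-- x holds a BLANK!
x       ;-- ordinary access would give back null
:x      ;-- GET-WORD! access would be how you get at the blank itself

y       ;-- never set: can't be fetched at all, you'd get a not-bound error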

The mechanics are still in the works. But I'm trying to outline the path of how it's getting there.