There's been an ongoing "shift in attitude" about NULL.
What has not changed: null is still "not a value". You can't put it in a block. And a variable can't "be null", it can only be unset...which you can find out about via `not set? 'varname`. GET gives you null as a way of signaling the unset-ness, but that's different from saying the variable "holds a null value".
```
x: null
if x = null [ ;-- errors on access of x
    print "if a variable could actually 'be null', this would be legal"
]
```
What has changed: increased acceptance of null to mean "don't append anything" or "don't compose anything" or "don't print anything". So it's not the extreme "hot potato" that it was at one point aiming to be.
But it's still a slightly warm potato: it differs from BLANK! in the sense that operations won't generally accept it as a data source. Non-mutating operations take blanks in and give nulls out. (e.g. `select _ 'foo` is null, while `select null 'foo` is an error--this helps keep error locality, vs. collapsing entire chains without a deliberate TRY)
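A quick sketch of that asymmetry, using the behaviors just described (results are per this post's description, not guaranteed current output):

```
data: [foo 10 bar 20]
select data 'baz     ;-- null, the "not found" signal
select _ 'baz        ;-- null, the blank input opts out quietly
select null 'baz     ;-- error! null input is caught at the call site
```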
And there's a pretty big stigma NULL will never avoid, by virtue of not being able to fetch it from a variable via "normal" variable access.
But what good is it to not let NULL be falsey?
I just tried implementing the proposal for making APPEND or COMPOSE of BLANK! a no-op unless you used /ONLY. This has the nice property of `compose [1 (any [...]) 2]` being able to vaporize the ANY expression on failure. But it also got me to thinking: why isn't ANY returning null in the first place when it doesn't find a truthy thing?
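The vaporization that proposal enables might look like this (a sketch of the described behavior, with made-up conditions):

```
compose [1 (any [1 > 2  3 > 4]) 2]   ;-- == [1 2], the failed ANY vaporizes
compose [1 (any [1 > 2  10 + 20]) 2] ;-- == [1 30 2], a truthy result stays
```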
The "big reason" was that people write if any [...] [...], and null is "neither true nor false", so that would be an error should it return NULL. But in this day and age, what's the great argument for why null shouldn't be falsey? When it shifted from name/conception of "void" to the more fitting "null", that makes the suggestion more palatable. Certainly NULL is falsey in many languages, C included.
When the question of truthy/falsey of UNSET! was debated in the CureCode days, people looked at the behavior of:
```
all [
    1 < 2
    print "got here"
    3 < 4
]
```
There was a desire to not interrupt the ALL while injecting debug output. Since PRINT returned an UNSET!, it wasn't counted.
I thought the example was a bit contrived. A change to PRINT (or whatever) so that it returned something (perhaps the PORT! it printed to, or the data it printed) would throw this off. Or if you were using some other routine, you'd have to say:

```
all [
    1 < 2
    ((some-diagnostic-function ...) ())
    3 < 4
]
```
With Ren-C there is ELIDE, which can be used generically in these situations:

```
all [
    1 < 2
    elide (some-diagnostic-function ...)
    3 < 4
]
```
The existence of ELIDE...and wanting to be careful to not blindly proceed in the case of things like failed selects...were incorporated into an argument for why ALL began treating nulls as errors. But what if NULL was just plain old falsey, as far as conditionals were concerned?
It would bring back the casual-use scenarios which people liked, with the twist that you don't always wind up with a set variable (you'd still have to use TRY to get that):
```
if x: select data item [
    ;-- x is known set and not blank or false
] else [
    ;-- x may be unset, blank, or false
]
```
NULL could be the result of a failed ALL or ANY. Again, you'd still want to throw in a TRY if you were going to put it in a variable you wanted to test later. But by having these constructs return null on failure, you could use ALL and ANY with ELSE, ALSO, !!...
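Put together, failure handling would compose without extra ceremony (a sketch assuming the proposal, with TRY converting a null to a blank for storage as described above):

```
any [
    select data 'alpha
    select data 'beta
] else [
    print "neither key was found"
]

x: try all [1 < 2  select data 'gamma]  ;-- TRY turns a null result into a
                                        ;-- blank, so x is always set
```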
It would make the current hacks which allow nulls for DID, NOT, OR and AND not-hacky.
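With null simply falsey, those operators would need no special dispensation (illustrative, under the proposal):

```
did select data 'missing   ;-- false: the null is just another falsey value
not select data 'missing   ;-- true, with no hack required
```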
The One Sacrifice
The balance of null tolerance other places has shown us that the "safety" aspects aren't really viable. Nulls happen. Where safety comes in is when you read from variables or when you put in asserts or ENSUREs or add type annotations to parameters for your functions.
But there was a sort of an idea that by not blessing NULLs as falsey, there could be an established "tristate" in the system. Unlike the case of PRINT inside the ALL above...which always wants to "opt-out"...what if you had some MAYBE-VOTE function that wanted to sometimes return truthy, sometimes return falsey, and sometimes abstain from a vote via NULL? ELIDE doesn't cover that.
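To make the abstention scenario concrete, here is a hypothetical MAYBE-VOTE along the lines described (no such function exists; the name and the tristate idea come from the paragraph above):

```
maybe-vote: function [issue [word!]] [
    case [
        issue = 'pro-issue [true]    ;-- votes in favor (truthy)
        issue = 'anti-issue [false]  ;-- votes against (falsey)
        true [null]                  ;-- abstains, the "tristate" case
    ]
]
```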
Well, that's kind of weird, and none of these exist. In fact the one case I had that did exist stopped working, because I didn't want NULLs to be the "no vote", I wanted them to be effectively the "only falsey value" that could break an ALL-like construct, with all other values treated as "truthy".
Just to further the point on the subjectivity of this: it's been asked that if BLANK! is just "the reified form of null", why would the two differ in their conditional behavior? What makes BLANK! so "falsey"? Why isn't it "neither-true-nor-false" like null?
It's hard to really see the downside
I know that treating null as falsey will mean simpler code inside the core.
I am nearly certain that treating null as falsey will simplify user code.
Failures of the "opt-out" voting model for nulls led to pushing for errors in ANY and ALL, which has caused cluttered usages of DID and TRY.
Can anyone speak up for the last time an error on a null really helped out?