I think the preponderance of evidence is on the side of unsetting.
Substitution principle
We know we want you to be able to say things like print [... if false [...] ...], or otherwise have complex expressions evaluate to null and errorlessly signal an opt-out. But when this can be an arbitrarily complex expression, shouldn't you be able--without thinking about how to rewrite it--to factor it out?
    print [
        ...
        some complex expression returning null
        ...
    ]

=>

    sub: some complex expression returning null
    print [... :sub ...]
It seems it shouldn't be harder than that. If you try to accomplish the same thing with TRY and OPT, then by definition you are losing information...since you conflated nulls with blanks just for the sake of getting things into a variable. That's an opportunity to screw this up--and it's more typing/code.
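To see the information loss concretely, here's a sketch (DATA is a placeholder name, and it assumes the TRY of this design turns null into blank):

    data: [a 1 b _]           ; B is deliberately mapped to a blank

    x: try select data 'b     ; SELECT found a blank => X is _
    y: try select data 'c     ; SELECT found nothing (null) => TRY makes Y a _ too

    x = y                     ; "found a blank" and "not found" are now indistinguishable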
Safety injection requirement may make code LESS safe
For instance, imagine a world where null assignments unset:
    all [
        foo: select some-data item
        bar: any [whatever whatever-else]
    ] then [
        do stuff with foo and bar
    ]
    ; if foo or bar caused a failure, they'll be unset and trigger errors
This kind of pattern gets you into the THEN with the knowledge that FOO and BAR are not null (in this case, you also know they're not false or blank). But a null that caused the THEN not to run will leave whatever variable was involved in a state where accessing it gives an error.
Now think about the rote addition of TRY to dodge mandatory errors from set-word assignment:
    all [
        foo: try select some-data item
        bar: try any [whatever whatever-else]
    ] and [
        do stuff with foo and bar
    ]
    ; but if foo or bar caused a failure, they contain a "safe" blank now
Firstly, you can't use a THEN anymore...because you're not testing for value-ness. You need AND to test for truthiness, so that a blank won't count as a reason to run the clause. To use THEN you'd have to get even hairier, with opt foo: try select ...
Plus, the TRY made the situation worse after it. Pursuant to some of the arguments about why "blankification" is dangerous for branches (and hence voidification is better), NULL causing "unsetification" in assignment is better than having people manually blankify with TRY. Being unset is a more ornery state for a variable, and ornery is good here.
(Note: Phrasing is important here, variables cannot "hold null". So "unsetification" is not such a ludicrous term.)
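For contrast, the "even hairier" THEN-preserving version mentioned above would look something like this sketch (assuming OPT turns the TRY's blank back into null for the ALL's test, while the variable itself still holds the blank):

    all [
        opt foo: try select some-data item
        opt bar: try any [whatever whatever-else]
    ] then [
        do stuff with foo and bar
    ]
    ; THEN works again, but every assignment needs both an OPT and a TRY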
Will seem more natural to Rebol2 users, vs. needing to "junk things up"
You don't need a "fancy" example like the ALL above, with multiple assignments, to see how it looks more polluted. From Rebol2, people are used to writing if pos: find data item [...]. Telling them they need a TRY to do so would likely seem like a step back, and you don't want to force them to use a THEN if that's not how they want to write it. Having lots of choices is the goal.
This way, they'll only need to throw on the TRY if they have some reason to read POS later. And getting an error that tells you POS was never a valid position has merit.
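As a sketch of how that plays out (DATA and ITEM are placeholder names), assuming null assignments unset the variable:

    if pos: find data item [
        print ["found:" mold pos]    ; POS is a real series position here
    ]
    probe pos    ; if FIND failed, POS is unset...this errors loudly

    pos: try find data item    ; only needed if you want POS readable
                               ; (as a blank) on the failure path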
It's easier than ever to trigger your own failures in conditionals
I brought up the idea that a switch statement that doesn't match could error by default:
    num: switch x [
        <bar> [1 + 2]
        <baz> [3 + 4]
    ]
But switch is evaluative now, with an evaluative DEFAULT mechanism:
    num: switch x [
        <bar> [1 + 2]
        <baz> [3 + 4]
        default [fail "switch didn't match"]
    ]
But it goes further than that, because FAIL's argument is optional...it will just report an error where it occurs if you say default [fail]. And even further than that, you don't need the DEFAULT at all...just FAIL if you get there:
    num: switch x [
        <bar> [1 + 2]
        <baz> [3 + 4]
        fail
    ]
That actually pinpoints the error better: an error raised outside the switch, at the point of assignment, doesn't tell you what happened (e.g. did the switch fail to match, or did one of the branches return void?). This applies to CASE too, and anywhere else you like (at the end of a conditional branch, for instance...a plain FAIL with no further arguments is fine).
SET-WORD! behavior can't be customized to be more lax
Programming constructs like SWITCH can be modified arbitrarily to error in more situations. You could, for instance, change your SWITCH to require a special refinement or flag before it allows falling through with no match. Or you could make a SWITCH which didn't match any condition return a VOID! value, whose sole purpose in life is to be a pain and cause errors on assignments or on tests for conditional truth and falsehood.
I think in the grand scheme of things, if you really notice you're having a problem, the language has tools to shape around that. But the behavior of SET-WORD! is part of the evaluator. It's a strictness you wouldn't be able to remove.
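As a hypothetical sketch of that kind of shaping (STRICT-SWITCH is an invented name, and the particular use of ENCLOSE and ELSE here is an assumption about the available tools):

    ; wrap SWITCH so that matching nothing is an error, by catching
    ; the null that a non-matching SWITCH evaluates to
    strict-switch: enclose 'switch function [f [frame!]] [
        do f else [fail "STRICT-SWITCH had no matching case"]
    ]

    num: strict-switch x [
        <bar> [1 + 2]
        <baz> [3 + 4]
    ]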
VOID! assignments are disallowed and cover several classic cases
While VOID! has nothing in particular to do with variables being unassigned, it comes up in other "no value" situations:
    >> x: do []
    ** Script Error: x: can't be VOID! (use TRY, OPT, or SET*)

    >> x: print "hi"
    hi
    ** Script Error: x: can't be VOID! (use TRY, OPT, or SET*)

    >> x: if true [print "hi"]
    hi
    ** Script Error: x: can't be VOID! (use TRY, OPT, or SET*)
And GROUP!s don't synthesize values, so x: () is prohibited as well (albeit with a bad error message; coming up with a clever way to improve it without slowing down the evaluator is low priority).
These days, allowing NULL in an assignment to unset the variable is the closest analogue to how Rebol2 casually allowed NONE! assignments. It still provides a bit more rigor: if you use the variable on some code path where it wasn't set, you'll find out the moment you try...vs. silently having it accessible.
Gives a syntax to unset map keys
I already covered this, and how Red wrote up trouble with it. Maps may not be the only type where conveying an interest in "unsetification" is worthwhile.

And it's a nicer syntax for just unsetting variables: foo: null seems a pretty clear way to do it, as opposed to unset 'foo. It may confuse people that they can't follow up with if foo = null [...], and need to say if :foo = null [...] or if unset? 'foo instead. But if someone can't get past that, they probably aren't going to be very successful using the language.
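A sketch of both uses under the proposed behavior (M and FOO are placeholder names):

    m: make map! [a 1 b 2]
    m/b: null        ; removes the B key from the map
    select m 'b      ; null...the key is gone

    foo: 10
    foo: null        ; unsets FOO, akin to UNSET 'FOO
    unset? 'foo      ; true
    if :foo = null [...]    ; GET-WORD! needed; plain FOO would error here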
Unsetting is better for Code Golf
While I don't want to ruin the usability of the language for the sake of code golf, removing the need for TRY clearly makes for shorter programs.
Seems "unsetification" of null assignment is the winner
I've mentioned that unsetting variables on nulls was the original behavior in the design of null. It was useful and didn't really cause any problems. The main thing I wasn't happy about was that things like x: print "Hello" didn't error, because print wasn't supposed to return a result...and null was the only way to do that at the time.
VOID! values came along and picked up the responsibility for triggering those kinds of errors, while null took its special non-value status on to greater and greater duties. The existence of constructs like ELSE made it seem like maybe it was good to increase the safety by erroring on SET-WORD!s that weren't actually sets, so null became errors.
But I think the arguments above--especially the first two--show it didn't necessarily get safer overall. You're not improving safety if you're forcing people to generate values they then have to turn around and transform into the values they actually wanted...especially when that transformation loses information (conflating blanks and nulls). This is why blankification was changed to voidification; manual blankification done by the user at callsites has all the same downsides, plus it junks up the code.
(As it happens, the R3-MAKE we are currently bootstrapped to still has the null-unset convention. So it's a good thing to have this decided before committing a new r3-make)