BLANK!-in, NULL-out, more universal (on mutating operations...)?

Today I hit something that was a bit of an annoyance: I couldn't use our fun "BLANK!-in, NULL-out" convention with SET.

So instead of:

set try my-variable-that-may-be-null value

I had to write:

if my-variable-that-may-be-null [
    set my-variable-that-may-be-null value
]

How irritating this is depends on the length of the variable name, though it's irritating even if it's just one letter.

The reason we have this situation is that we are honoring a concept put forth early on, in the days of so-called "none propagation". BrianH and others believed that mutating operations should not let you "opt out": there was too much potential for confusion if you wrote append block [1 2 3] and it silently didn't append, because block was a NONE!:

r3-alpha>> block: none
r3-alpha>> append block [1 2 3]
** Script error: append does not allow none! for its series argument

red>> block: none
red>> append block [1 2 3]
*** Script Error: append does not allow none! for its series argument

Operations without side effects were deemed to be harmless by comparison:

r3-alpha>> block: none
r3-alpha>> select block 'a
== none

red>> block: none
red>> select block 'a
== none

But even that had a kind of nasty property. You could write a long chain of non-mutating operations, and at the end of the pipe get a NONE! without knowing where in that pipe the error occurred. (This was compounded by the fact that a NONE! could literally have been the thing that was selected, so your opt-out value could have been an actual selection result!)
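To make the "error locality" problem concrete outside of Rebol, here is a small Python analogue. It is only a sketch of the idea, not anything from the actual codebase; the helper names (select_tolerant, select_strict, try_) are hypothetical, standing in for none-propagating SELECT, fail-fast SELECT, and TRY respectively.

```python
# Hypothetical Python analogue of the error-locality problem: if every
# operation silently tolerates a None input and returns a None output, a
# long chain can yield None without revealing which step actually failed.

def select_tolerant(series, key):
    """NONE-propagation style: silently opt out when the series is None."""
    if series is None:
        return None
    return series.get(key)  # None also means "not found" -- ambiguous!

def select_strict(series, key):
    """Fail-fast style: a None series is an error at the failing step."""
    if series is None:
        raise TypeError("select requires series argument to not be None")
    return series.get(key)

def try_(value, fallback=None):
    """Explicit opt-out, loosely analogous to TRY converting NULL to BLANK!."""
    return fallback if value is None else value

data = {"a": {"b": 1}}

# Tolerant chain: both yield None, with no clue *which* select failed.
assert select_tolerant(select_tolerant(data, "x"), "b") is None
assert select_tolerant(select_tolerant(data, "a"), "y") is None

# Strict chain: the failure is localized to the step that received None.
try:
    select_strict(select_strict(data, "x"), "b")
except TypeError as e:
    print(e)

# Tolerance must be requested explicitly, documenting intent at the call site.
assert select_strict(try_(select_strict(data, "x"), {}), "b") is None
```

The point of the strict version is that the error surfaces at the exact step that received the bad input, rather than an ambiguous sentinel trickling out the end of the pipe.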

NULL as "Soft Failure" Provided Error Locality

This gave rise to Ren-C's BLANK!-in, NULL-out philosophy. It offers better error locality by making opt-out-tolerant constructs give back something that they themselves wouldn't take as valid input. Then TRY could be used to convert those "soft failure" NULLs into BLANK!s, putting documentation and intent on where tolerance was desired:

>> block: blank
== _

>> select block 'a
; null

>> select (select block 'a) 'b
** Script Error: select requires series argument to not be null

>> try select block 'a
== _

>> select (try select block 'a) 'b
; null

The convention was even considered as a possible way of dealing with soft failures in arithmetic, in places where "Not-a-Number" had propagated before.

This created a fairly significant paradigm shift, in which it no longer made sense to use BLANK!s in common places...such as unused refinements. A BLANK! could carry meaning: a conscious desire to opt out. That was different from a NULL.

Does "friendly" NULL reframe things by making BLANK! rarer?

If BLANK!s are relatively rare, and not used by people willy-nilly where NULL should be used...then what are we saving by not letting operations like SET and APPEND opt-out with BLANK!?

One thing to consider, on the concrete issue of working with BLOCK!s: you can't put a NULL in a block, only a BLANK!:

 obj: make object! [x: 10 y: null z: 'word]
 data: [x: 10 y: _ z: word]

It's all well and good to imagine saying append obj/y [a b c] and getting an error. But the block doesn't have that luxury. So it's sketchy to imagine saying append data/y [a b c] and having it silently keep running without warning you.

But when I talk about issues like parity in object and path picking, it might be that parity comes from "scant" evaluation, e.g. everything is processed with one quote level removed, unless it has no quotes.

 obj: make object! [x: 10 y: null z: 'word]
 data: [x: 10 y: ' z: 'word]

So when you ask for data/y you get null back, data/z gives you word back, and data/x gives 10.
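To pin down what this "scant" picking rule would mean, here is a minimal Python sketch. It assumes the semantics described above: a pick removes exactly one quote level, a lone apostrophe (a quote wrapping nothing) unquotes to null, and unquoted inert values like 10 pass through as-is. The Quoted class and scant_pick function are hypothetical illustrations, not part of any real implementation.

```python
# Hypothetical sketch of "scant" picking: drop one quote level on pick,
# with a bare quote (quoting nothing) yielding null, modeled here as None.

from dataclasses import dataclass

@dataclass
class Quoted:
    value: object       # the quoted payload; None stands for "nothing"

def scant_pick(block, key):
    item = block[key]
    if isinstance(item, Quoted):
        return item.value   # drop one quote level: 'word -> word, ' -> None
    return item             # inert values (like 10) pass through unchanged

# Analogue of: data: [x: 10 y: ' z: 'word]
data = {"x": 10, "y": Quoted(None), "z": Quoted("word")}

assert scant_pick(data, "x") == 10          # data/x gives 10
assert scant_pick(data, "y") is None        # data/y gives null
assert scant_pick(data, "z") == "word"      # data/z gives word
```

Under this rule, the same pick operation behaves uniformly on objects and blocks, which is the parity being aimed at.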

But that single apostrophe looks pretty odd. Note that we could decide that a single apostrophe renders differently... it likely shouldn't be #[null], as that is misleading (you can't have a null in a block). But it could be #[nullify] or something of that sort, to say that if it gets evaluated it will become a null.

But we could spin this around and say that it is BLANK! itself that evaluates to null, but a variable that is blank evaluates to blank.

>> x: _
; null

>> x
; null

>> x: '_
== _

>> x
== _

>> x: blank
== _

>> x
== _

That would make _ a shorthand for a single tick ' when used in evaluative contexts. This might make the "scant evaluation" I describe more appealing, which may help with the OBJECT! <=> BLOCK! parity initiative.

(This proposal should not be mistaken for a previously panned "wacky" proposal, in which looking up a variable that held a BLANK! evaluated to NULL. We don't believe in "decay" of that form, e.g. the infamous "lit-word decay" from Rebol2 and R3-Alpha, which Red also got rid of.)

Just Thinking Aloud...

As I said at the top of the post, I think it would be nice if we could opt-out of APPEND or SET in a chain of operations...as we do with non-mutating operations. I just want to look at it from all the angles, as this jigsaw puzzle comes together.

Note that I did not originate the term "scant" evaluation; it comes from R3-Alpha's CONSTRUCT:

r3-alpha>> help construct
USAGE:
    CONSTRUCT block /with object /only

DESCRIPTION:
    Creates an object with scant (safe) evaluation.
    CONSTRUCT is a native value.

ARGUMENTS:
    block -- Specification (modified) (block! string! binary!)

REFINEMENTS:
     /with -- Default object
         object (object!)
     /only -- Values are kept as-is