When has Stopping VOID! (or old "UNSET!") Assignments Helped You?

I think it's good that VOID! is not conditionally true or false:

>> if print "This error seems good" [<does it not?>]
This error seems good
** Script Error: VOID! values are not conditionally true or false
** Where: if console
** Near: [... print "This error seems good" [<does it not?>] ~~]

And I think it's good that you don't get it accepted as a function argument by default...nor can you dereference a variable containing it by default.

But with NULL being used to unset variables, I kind of wonder:

>> value: print "how much do we gain by stopping this assignment?"
how much do we gain by stopping this assignment?
** Script Error: value: is VOID!
** Where: console
** Near: [value: print "how much do we gain by stopping this assignme...

For access, you can say :VALUE and get it. But for writing, you have to turn the assignment into a SET/ANY. That's inconvenient.
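To make the asymmetry concrete, here is an illustrative console session (exact console rendering and error text may differ from whatever build you're on):

>> set/any 'value print "some output"
some output

>> void? :value
== #[true]

The GET-WORD! read works fine, but the equivalent plain SET-WORD! write would have errored.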

Any time you call DO on arbitrary CODE, you have to switch away from using a SET-WORD!...

switch type of x: do code [
    void! [print "What a pain you can't get here without SET/ANY 'X DO CODE"]
    ...
]
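The workaround splits the assignment out into a SET/ANY and reads back through a GET-WORD! (a sketch of the pattern being complained about, not a recommendation):

set/any 'x do code
switch type of :x [
    void! [print "reachable, but only by abandoning the SET-WORD!"]
    ...
]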

How often has this actually helped anyone vs. just being a PITA?

I'm sure it's probably helped catch some mistake somewhere. But my memories of it doing so are few and far between, while my memories of having to use SET/ANY when I didn't want to are pretty hard to count.

The issue is that if we take this away, we take away a tool for error locality. Most notably things like:

 x: if 2 = 1 + 1 [select [a 10 b 20] 'c]  ; null gets VOID!-ified

You won't find out until you try to read X that you didn't get the value you meant, because NULL had to be reserved for the signal that the branch did not run.
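That is: today the complaint shows up right at the write (illustrative; error text may differ):

>> x: if 2 = 1 + 1 [select [a 10 b 20] 'c]
** Script Error: x: is VOID!

If plain SET-WORD! assignment of VOID! were legal, that error would be deferred to wherever X is next read as a plain WORD!.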

This is all explained pretty well in "Why VOID! is not like UNSET! (and why it's more ornery)". But quoting myself:

A void! is a means of giving a hot potato back that is a warning about something, but you don't want to force an error "in the moment"...in case the returned information wasn't going to be used anyway.

Is the assignment itself an important "hot potato" moment, or is that not one of the cases? Do we really need to know that the X: assignment didn't go well right then? What if the variable is never accessed (or if you have an if void? x: ...something... to handle it)?

Looking for any anecdotes...

All I can think about is how annoying it is that I can't write generic code using X: and :X and trust the premise of symmetry and substitution. That's the main anecdote I have. I dislike what SET/ANY does to the elegance of my generic code that is trying to work with any value...including VOID!. And in a language where safety is not the raison d'être, seeing that elegance lost is frustrating.

Taking away this mechanism would take away the ability to stop a plain SET-WORD! assignment of anything...they would all be legal...including NULLs. But you could still stop it a lot of other places (function arguments by default, variable reads, and SET and GET without /ANY). With the objectives of the language being what they are, should we just go ahead and allow it?

Tangential thought: Maybe we can make [X]: have a different property, e.g. enforcing non-null values? You can't put nulls in blocks. So you'd have to say [X]: try null and get [_] as the result, vs. an error on [X]: null. Or maybe [X]: null sets X to VOID! and evaluates to a block containing a void value?

(This would be going with the idea that SET-BLOCK! was part of some multi-return-value scheme, instead of a pure synonym for SET of a BLOCK! as written today (which could go under another name, e.g. assign [a b] [1 2] giving a as 1 and b as 2). Then SET could mean something else entirely when applied to blocks!)

Continuing the general question of competitive analysis with JavaScript, let's look at what they do:

> function nothing() { return; }

> let x = nothing()
<- undefined

> if (x) { console.log("it doesn't error even on dereference"); }
<- undefined

> if (!x) { console.log("because it's falsey"); }
because it's falsey
<- undefined

That's even more forgiving. (I definitely like the neither-truthy-nor-falsey status of VOID!, and the falsey status of NULL, so those are unlikely to change.)

But does this point to the idea that it's better to make all assignments work, to get a solid "substitution principle"? e.g. wherever you could say:

  some-func (some expression)

You could alternately say:

 sub: (some expression)
 some-func :sub

Is the value of being able to take this for granted more than the value of having a datatype which defies SET-WORD! assignment, and requires special handling e.g. SET/ANY?

I really am leaning toward saying that prohibiting assignments of VOID! values via SET-WORD! is probably not all it's cracked up to be. Conscientious programmers should be trying things like:

 >> x: ensure integer! add 1 2
 == 3

That's even stronger, and if you said x: ensure integer! print "Hello" you'd catch the problem if you were so concerned. Or even x: add 1 2 matched integer! if you want to say it in another order.

Similar reasoning led to the allowance for NULL to unset variables. I feel like going all the way, making SET-WORD! act as SET/ANY and GET-WORD! act as GET/ANY, and then letting plain WORD! access catch the unset variables and voids, is the better answer.
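Under that rule, the opening example would go through silently, and the complaint would move to the plain WORD! read (a hypothetical sketch of the proposed behavior, not what any current build does):

>> x: print "how much do we gain by stopping this assignment?"
how much do we gain by stopping this assignment?

>> void? :x
== #[true]

>> x
** Script Error: x is VOID!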
