NULL, BLANK, TRASH, VOID: History Under Scrutiny

UPDATE 2022: This thread covers issues that are also summarized in Shades of Distinction of Non-Valued Intents. Ultimately these should be refactored into one sort of "user guide" covering the modern state, and then one that helps explain the history.

To try and make this 2019 thread semi-comprehensible in modern terminology, I've sanitized the history to make it easier to absorb. e.g. NULL was initially called "void"...but for simplicity let's pretend it was just always called null. And for a time TRASH used the name "void" and for a time reused "none", so I've retconned it as if it was always trash. I'll try not to interrupt the flow by calling out such points inline.

Rebol historically had two "unit types": NONE! and UNSET!. Instances of both were values that could be put into blocks.

Ren-C started out with two parallel types: a simple renaming of NONE! called BLANK! (to match its updated appearance as _), and NULL (which I initially conceived as being a purified form of UNSET! which could not be put in a block).

The original NULL was "ornery" like an UNSET!. It was neither true nor false (it caused an error in conditionals)...and it was the same state used to poison words in the user context against misspellings and cause errors. With NULL being so mean, BLANK! was the preferred state for "soft failures": it was still the outcome of a failed ANY or ALL, or a failed FIND.

But as the point was rigorous technical consistency, something like SELECT could distinguish a found BLANK! from a failed lookup, returning BLANK! for the former and NULL for the latter:

>> select [a 10 b 20 c _] 'c
== _

>> select [a 10 b 20] 'c
; null

This provided an important leg to stand on for the operations that needed it (crucial to those trying to write trustworthy mezzanines). While casual users might not have cared, or might have been able to work around it, writing reliable usermode code was quite difficult without routines that could make this distinction. Too many things had to be expressed as natives, because otherwise the "correct" usermode forms would be circuitous and overworked.
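The same distinction shows up in other languages. As a rough analogy (in Python, not Ren-C; the `MISSING` sentinel and `lookup` helper are hypothetical names of my own): `dict.get` with a default of `None` cannot distinguish a stored `None` (playing the role of BLANK!) from an absent key, so a unique sentinel is needed to play the role of NULL.

```python
MISSING = object()  # hypothetical sentinel playing the role of NULL

def lookup(mapping, key):
    """Like SELECT: returns the stored value even when it is None
    (playing the role of BLANK!), or MISSING when the key is absent."""
    return mapping.get(key, MISSING)

data = {"a": 10, "b": 20, "c": None}

print(lookup(data, "c") is None)     # True: found, and the value is the placeholder
print(lookup(data, "d") is MISSING)  # True: not found at all
```

Without the out-of-band sentinel, the two cases above would be indistinguishable, which is exactly the ambiguity the BLANK!/NULL split removes.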

Along with this, failed conditionals were made to return NULL too. This provided a well-reasoned form of the pleasing behavior that made it into my non-negotiables list:

>> compose [<a> (if false [<b>]) <c>]
== [<a> <c>]

There was no ambiguity, as there was in Rebol2 (and as there continues to be today in Red). They don't do my "non-negotiable" behavior, and have no way to put an UNSET! value into a block with COMPOSE...even though it's a legitimate desire, and you will get one with REDUCE:

red>> data: compose/only [(none) (do []) 1020]
== [none 1020]

red>> data: reduce [(none) (do []) 1020]
== [none unset 1020]

red>> compose [<a> (if false [<b>]) <c>]
== [<a> none <c>]  ; as with Rebol2, R3-Alpha, etc.

NULL's Prickliness Runs Up Against a Growing Popularity

As mentioned: by not being able to be put in blocks, NULL became the clear "right" mechanical answer to things like a failed SELECT. But it caused some friction:

>> if x: select [a 10 b 20] 'c [print "found C"]
** Error: cannot assign x with null (use SET/ANY)

Even if you had been able to assign null to variables with plain SET-WORD! in those days, it would have been neither conditionally true nor false. You'd have to write something like if value? x: select ... or if x: try select ... or if try x: select ...

The rise of the coolness of ELSE also made it tempting to use NULL in more and more places. Those sites where BLANK! had seemed "good enough" (since they didn't technically need to distinguish "absence of value") were not working with ELSE: ANY, ALL, FIND. Attempts to reason about why or how ELSE could respond to BLANK! in these cases fell apart--and not for lack of trying. This gave way to the idea of a universal protocol of "soft failure": returning NULL.

NULL was seeming less like the pure form of UNSET! and more like the pure form of NONE!. Its role in the API as actually translating to C's NULL (pointer value 0) became a critical design point.

The writing seemed to be on the wall that this non-valued state had to become conditionally false. Otherwise it would break every piece of historical code like:

if any [...] [...]
if find [...] ... [...]
if all [...]

This would start developing tics like:

if try any [...] [...]
if value? find [...] ... [...]
if ? all [...]  ; one proposed synonym for VALUE?, still a "tic"

NULL Becomes Falsey, TRASH Becomes the New Meanie

It felt like Ren-C was offering rigor for those who wanted it. You now knew when something really didn't select an item out of a block. All functions had a way of returning something that really couldn't mean any value you'd want to append anywhere.

But with a popular NULL that required GET-WORD! access to read it, the illusion of greater safety was starting to slip. :my-mispeled-variable was NULL too.

A path of reasoning led to the argument of resurrecting a state which was not NULL that would be prickly. That took on the name TRASH. It's more or less like an UNSET!, but terminologically makes more sense...because variables are tested for unsetness, not values themselves.

TRASH became the preferred choice for when branches accidentally became null. "Trashification" replaced the previous "blankification", which had made it harder to catch when anything unexpected happened, because blanks were so innocuous:

>> if false [null]
; null

>> if true [null]
; trash

Then NULL Stopped Causing Errors When Read via WORD!

This brings things to a point where the "soft failures" of NULL are nearly as quiet as the "soft failure" of NONE! was historically. The key difference is that NULL can't be put in blocks.

Hence it's a good choice for variables which you want to easily test if they contain a meaningful value or not, but avoid accidentally treating as if they are meaningful by getting appended to blocks.

(Shixin had particularly harrowing experiences in Rebmake when he used BLANK! for yet-to-be-assigned variables, because they were always accidentally winding up in collections. It was easier to filter the blanks out at arbitrary moments than it was to find all the places where they could be accidentally appended.)


As many objections to null stem from the foo: select [] 'word convention, what if foo: null was shorthand for unsetting a word? That way you still get some meanie behaviour when you subsequently try to use foo.

Creative ideas always welcome. But bending mechanics in such a way has pretty serious costs. I tried what I thought was a cool zany idea involving blanks turning into nulls...and quickly realized that any benefits it had were eclipsed by pulling the rug out from under people.

While it may seem that writing a REPL is "easy", it's actually pretty complex because it deals in the full bandwidth of the language. And you realize that if you can't say:

 value: do usercode
 print-value :value

Then it's a pretty big challenge to your system. I feel it's a slippery slope if you can't get an accurate reading out of that, and it would sacrifice a lot of the rigor that NULL brings to the table.

I'm still not settled in my understanding of this concept. The reasoning appears to hold until I try it in practice.

If I can try to distill it another way: it's as if BLANK! has split into two, with a measure of "trashing" a positive branch that returns NULL for whatever reason (e.g. if true [null]), the purpose of the latter being to enable ELSE/THEN.

I recall one other reason for a NULL/BLANK split not mentioned was using BLANK! as MAP! values without the associated key disappearing.

I'd be curious if we could further enumerate the ways in which the NULL/BLANK distinction offers more rigour as on the other hand, the finessing of these values does introduce additional complexity.

I should say that as of now, I don't have a favourable opinion of the ELSE/THEN idiom (on a stylistic/comprehension basis) and don't think they're worth the extra complexity alone, I'm looking to understand other circumstances where the distinction is critical. It may be too that I'm lazy and don't want to explicitly do the NULL <-> BLANK conversions where BLANK was implied before, e.g. value: any [first case | second case] ; or blank —that is to say, I'm willing to check my own long-hewn biases if that is indeed a bad practice.

It'd be conceivable in a TRASH/BLANK-only world to imagine select [] 'c returning TRASH as an indicator of no-match, similarly my-map/non-existent-key. In a world where val: trash does not bomb, then val: my-map/non-existent-key wouldn't in itself be a showstopper.

I accept being skeptical of trashification, which seems to me the crux of your complaint. I'm willing to look at concrete challenges and imaginative mechanics which may make this less painful.


I'd be curious if we could further enumerate the ways in which the NULL/BLANK distinction offers more rigour as on the other hand, the finessing of these values does introduce additional complexity.

NULL is simply non-negotiable. In contrast to my eagerness to delve into possible better ideas for trashification... I'm not particularly enthusiastic at this moment about sinking more time into an extended defense of NULL, and feel that at some point you have to take my word for something I've written about for years.

But... :man_shrugging: It started back with feeling I must be able to say compose [<a> (if false [<b>]) <c>] and have it unambiguously give back [<a> <c>], without concern that there is some meaningful variant I'm missing which I might have wanted. The failed control structure must return a non-thing, otherwise this is in question. We've discussed SELECT needing an unambiguous signal for "not there" vs "was there and was nothing". Maps not having a value, etc. etc.

NULL is a requirement at an API level, and a semantic level, and countless features will not work practically without it. Its existence is simply not possible to revisit. If you like features in Ren-C that are cool, then whether you know it or not, you really like NULL, a lot.

OTOH... we could question whether BLANK! itself is truly necessary. Does there really need to be a falsey placeholder type? Could people use the tag <nothing> or an empty string, etc.? But if such a beast does exist--and I think you'd want it to--there's interesting options for how it can be used as a sort of "reified twin" to NULL. Right now the only real way in which the system treats it special (beyond its falsey status) is the convention of letting it signal opting out of arguments, and getting a nice form of error locality in chains.

But back to the real issue...

Your valid complaint here is trashification, and whether it's worth the cost of distorting the output of control constructs for a feature you dislike.

Firstly, understand this feature goes beyond enabling ELSE and THEN. It's an answer to the question of "can someone from the outside of a control construct unambiguously know if it took a branch or not?" and "can someone from the outside of a control construct unambiguously know if it did a BREAK or not?"

Being able to glean this knowledge from the outside is important even for those who do not care to use ELSE and THEN. Imagine you are writing a wrapper for a loop construct out of two other loops, and you want that construct to be able to implement BREAK. With the NULL being a signal that unambiguously means BREAK, you can do so.
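The composed-loop scenario can be sketched as a rough analogy (in Python, not Ren-C; all names here are hypothetical): a unique sentinel plays the role of the unambiguous NULL break signal, letting a wrapper built from two loops propagate a break cleanly.

```python
BROKE = object()  # sentinel playing the role of NULL: "loop was broken"

def my_for_each(items, body):
    """A loop construct: returns BROKE if the body signals a break,
    otherwise the last body result (or None if the loop never ran)."""
    result = None
    for item in items:
        result = body(item)
        if result is BROKE:
            return BROKE
    return result

def my_nested_loop(outer, inner, body):
    """A wrapper composed of two loops that still honors a break:
    because BROKE is unambiguous, it propagates out cleanly."""
    return my_for_each(outer, lambda o: my_for_each(inner, lambda i: body(o, i)))

# Breaking out of the inner loop breaks the whole composite loop:
hits = []
res = my_nested_loop("ab", "xy",
                     lambda o, i: BROKE if (o, i) == ("a", "y") else hits.append((o, i)))
print(res is BROKE)  # True
print(hits)          # [('a', 'x')]
```

If ordinary values could masquerade as the break signal, the wrapper could not tell "body produced this value" from "body broke", which is the ambiguity the NULL signal avoids.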

We could make NULL branches preserve falseyness but not nullness via "blankification" instead of "trashification". But I think trashification gave you more of an alert that you hit a distorted case and have to use another method.

As we're talking about falsey situations, note that falsey cases have always faced "distortion":

rebol2>> all [true true false]
== none

You lost the false, though not the falsey-ness. So blankification has that going for it.

I'd really like to look at concrete examples--as you know that I do not like it when there's not a "good" answer to a complaint. I'm really thinking about this idea of being able to get undistorted branches out:

 >> case [true [null]]
 == ~  ; trash

 >> case [true :[null]]
 ; null

It seems like a bit of a waste of GET-BLOCK!, which I had hoped would mean REDUCE. But that is only a shorthand, and not actually that helpful...since it would only work if the block you wanted to reduce was literal. This may be its higher calling.

Although the existence of such an escape would complicate wrapping (e.g. the wrapping of control constructs or loops to mirror out whether branches were taken or whether a BREAK was hit). Maybe this is acceptable in the sense of "not everything is going to work easily"...wrappers that want to support GET-BLOCK! branches will need to be more clever. Or maybe you simply understand that if you use a GET-BLOCK! and a null is given, you are effectively un-signaling that the branch was taken...even though it was. But that's the meaning you understand you intend.

But overall: My own instincts are that it's probably on balance done mostly right, and that by merely converting code you're only seeing the needed workarounds as burdensome without appreciating the benefits.

I guess this is a big problem for me when it comes to it: handling blanks becomes burdensome, and having lost their utility, they seem only to perform a placeholder role (a proxy for NULL within data structures, if you will) that has to be managed.

I need to give your response better consideration, but this part still jumps out at me

 >> flag: false

 >> print ["To me, this is" (if not flag ["not"]) "negotiable."]
 To me, this is not negotiable.

 >> compose [This is also (if not flag ['not]) negotiable.]
 == [This is also not negotiable.]

When you go down the road of BLANK! ("NONE!") trying to serve as both thing-and-not-thing...people will complain if there's no way to literally compose blanks into stuff.

When we talk about complexity that needs to be managed, you're going to have to manage it somewhere. The NULL/BLANK! distinction provides leverage in the implementation and puts the responsibility on you to know which you mean, at the right time. Otherwise you're just pushing the complexity into less obvious places.

It's not necessarily intrinsic that BLANK! needs to be falsey, if your complaint is about not preserving the value precisely...

>> any [null first [_]]
== _

>> any [null second [_]]
; null

But historical Rebol has a LOGIC! type, and a false, whose intent would be lost.

>> any [null first [#[false]]]
; null

That's just something you're used to. But it makes inhibiting BLANK!'s use in compatibility layers seem not worth it...you could have just used <null> if you wanted something truthy and placeholdery.

We have to be looking at more than just one annoyance. The number of creative solutions afforded is high, and one has to appreciate the benefits in a holistic sense to be willing to think it's worth it to rethink how old code is written. But the new way for that old code might be even better. Can't know unless I'm reading it too.

I will point out this debate rages on even as we speak--today, in Red Gitter.

Because they lack invisibles, they face an uphill battle with trying to make return results that opt out. So an already poor idea like making UNSET! truthy leads to questions of why things wouldn't return UNSET! so they could just opt out of an ANY, while they would end an ALL. Generic ELIDE becomes a perfect tool for this once you get comfortable with it.

Because they lack a NULL state, they only have the option of a reified UNSET!...which due to its reified nature, may be seen when enumerating blocks. Writing mechanically correct code is nigh impossible.



I get that, that's why I'm asking. I have THEN/ELSE, the vanishing NULL—which I get and appreciate that to go about it from the legacy BLANK-only approach might get messy—and some other items (SELECT BLANK vs. nothing), and the fidelity of CASE/SWITCH/IF/etc.. However I see the BLANK/NULL split as messy too and I'm trying to justify all the ways it is necessary, because I'm having a difficult time balancing the pros/cons of each and wondering if one set of inconveniences do indeed outweigh the other. I'm as sincere as I can consciously be in approaching this with an open mind and apologies if this has indeed been enumerated elsewhere.

I've read this thread over and I feel I get the basic concepts. It just starts to hit me when I start to actually use them.


A post was split to a new topic: How to Subvert Voidification?

So I started with being sympathetic to the idea that trashification is a real sticking point in the user experience...

And your particular situation raises an important point about the complexity cost... that it's affecting even people who aren't using the feature.

Why would someone writing the following be affected by the needs of ELSE?

 pos: case [
     x > 0 [find [a b c] 'd]
     true [find [d e f] 'g]
 ]

"They need to be trained in case one day they use ELSE and get confused" didn't convince you. And having seen the epicycles of the problem, I've moved from empathy to agreement. In particular because it seems once you start using ELSE the mechanics of working around this guarantee you'd be confused regardless.

The separation of concerns is wrong; and this is bothersome enough that another mechanic is needed.

"Heavy NULL Isotopes": Measurably Better

Having done some fairly convincing tests, and putting the big picture together...I'm ready to embrace a concept that I had initially thought of as "too weird", which I'm putting under the label "NULL isotopes". That is to say that there are "boxed nulls" which embed a signal informing the reactions of things like ELSE and THEN.

e.g. an unboxed null means "there is no answer" and a boxed one means "there is an answer, and it is null". The box vanishes upon assignments to variables.

The analogy with atomic isotopes is pretty good, in that there is a "common form" to which the NULLs will decay (e.g. each WORD! or GET-WORD! fetch will lose the isotope status). Hence handling the isotopes is a somewhat delicate matter for those constructs involved with it. But the isotope form is, to the average observer, indistinguishable from the common form...you usually don't have to care, unless you have a reason to care.

It meets the criteria that this complexity need not concern people who aren't using the features. Beyond just pleasing a reluctant audience for said features, it also hints that the design is separating concerns correctly. Trashification has a fully parallel set of concerns when you look at the whole picture, but lacks this critical advantage.
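A minimal sketch of the boxed-null idea, as a rough analogy in Python (not Ren-C; all names are hypothetical): a one-element tuple stands in for the "box", so an unboxed None means "no answer" while a boxed None means "an answer that happens to be null", and only ELSE-like constructs need to peek at the box.

```python
NO_ANSWER = None          # plain null: "there is no answer"
HEAVY_NULL = (None,)      # boxed null: "there is an answer, and it is null"

def case_(condition, branch):
    """A CASE-like construct: when a branch runs and produces None,
    box it so an ELSE-like construct can tell the branch ran."""
    if not condition:
        return NO_ANSWER
    result = branch()
    return HEAVY_NULL if result is None else result

def decay(result):
    """The 'box vanishes': unwrap a boxed null back to a plain None,
    as happens on assignment to a variable."""
    return None if result is HEAVY_NULL else result

def else_(result, branch):
    """ELSE runs its branch only when there was no answer at all."""
    return branch() if result is NO_ANSWER else decay(result)

print(else_(case_(False, lambda: None), lambda: "no branch ran"))  # no branch ran
print(else_(case_(True, lambda: None), lambda: "no branch ran"))   # None
```

The second call shows the point: the branch ran and produced null, so ELSE stays quiet, yet after decay the caller sees an ordinary None and never has to know the box existed.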

Sounds Scarier Than It Is

Having been able to rig it up in a matter of minutes, including the feel-good purge of the if foo @[null] branch cases... I was actually surprised by how clean it is.

One thing making it easier to swallow is NULL's already ephemeral nature. Since it can't be stored in blocks, you're not getting "values" with a sneaky bit on them that could bear data.

I really like that architectural aspect; that we're not talking about a generic bit that suddenly has to apply to all values. Contrast with if the proposal was that there was an isotope form of 7 that would trigger an ELSE, and a plain 7 that wouldn't. That would be a disaster (and reminiscent of the infamously bad LIT-WORD! decay.)

Even with only a little bit of experience with it...I think it's safe to say this idea is stronger than trashification. So the better solution is at least this, if not something better. I'd say trashification is all but dead.


It's not clear what this means in practice—can you explain it in some examples?

What I have called "heavy null" should--if it works correctly--not concern you, as someone who doesn't care about ELSE or THEN.

>> compose [a (case [true [null]])]
== [a]

That CASE returned a "heavy null", and not a "plain null". You don't need to care. It answered yes to NULL?.

If you look at the rest of the implications of the design, it has pushed any concern about NULL variations upon those who want to manage NULL-reactivity to THEN and ELSE. It does so by damping down the existence of heavy null as much as it can.


It's not that it bothers me, I'm just still trying to better understand the intersection of TRASH/NULL/BLANK/LOGIC as it pertains to values and flow—I don't yet have a settled story myself about why this is a necessary improvement over BLANK! as flow currency.

I'm not saying that's not my responsibility, but if there isn't a THEN or ELSE, is NULL still essential? Can't issues of discerning BLANK! as a value (say, as a MAP! value) be addressed with better use of TRASH?

I'm also still curious how this drives THEN/ELSE as it pertained to my own misunderstanding on how I tried to use ELSE once—would need to look up the example, but I think it was something like:

is-more-than-five: func [value][
    return case [
        value < 6 [null]
        value > 5 [value]
    ]
]

is-more-than-five 1 else [print "Else"]

I'm not trolling here, I'm trying to understand, under the terms of the topic title, what essential problems breaking NULL out of BLANK/TRASH solves. I'm not intentionally being dense either; I believe that to really understand this, I need to hash out all the flow cases and enumerate the advantages, a daunting prospect.

UPDATE 2023: Shortly after Chris performed his test, an automatic feature of decay was added to RETURN, unless you use a refinement to suppress it. It's different now, so I retconned my reply to describe the current behavior.

In your example, you did not have a RETURN: specification in your function spec that indicates whether you want to return "heavy nulls" or not. And you did not call DECAY explicitly. So it passed through the result as-is, meaning the ELSE behaved as if it were inside the body of the function. e.g. the null case would still trigger else.

>> is-more-than-five: func [value] [
        return case [
            value < 6 [null]
            value > 5 [value]
        ]
    ]

>> is-more-than-five 7 then [print "Else"]

>> is-more-than-five 4 then [print "Else"]

One way to get decay is to narrow the type spec to exclude heavy nulls, by only returning "normal" values:

>> is-more-than-five: func [return: [any-value?] value] [
        return case [
            value < 6 [null]
            value > 5 [value]
        ]
    ]

>> is-more-than-five 7 then [print "Else"]
; null

>> is-more-than-five 4 then [print "Else"]

Another way would be to explicitly call the DECAY function:

>> is-more-than-five: func [value] [
        return decay case [
            value < 6 [null]
            value > 5 [value]
        ]
    ]

>> is-more-than-five 7 then [print "Else"]
; null

>> is-more-than-five 4 then [print "Else"]

This addresses the subtlety and "radioactivity", and is why, after heavy consideration of fitness for purpose, I've pulled everything together here.

I see that in an older version it's this (which is what tripped me up before):

>> compose [a (case [true [null]])]
== [a ~]

I guess it's at this point where I don't understand the layering. I'll try and reread to understand what's going on.

I'm not saying you're wrong, I just don't understand why you're right

The elevator pitch for the addition of NULL is that you can add the new language constructs THEN/ELSE and that NULL vapourizes when reduced.

I don't have any code to show you because I'm reluctant to invest the effort to produce that code unless I can appreciate the reasons why it's worth doing so (besides attempting to kick the tires)—it's a non-trivial change to the way the language has worked. Can anyone else chime in on where the addition of NULL has clarified their intent?

This website is filled with long, lucid explanations. So I'm reluctant to respond to "I don't get it, can you please write more" unless specific points of misunderstanding are cited for me to react to...

>> compose [a (case [true [null]])]
== [a ~]  ; trashification, a behavior that existed solely to please ELSE

This is the old behavior, which I was calling "bad" (and believed you to be calling "bad") that is now fixed. And ELSE still works. I'm not sure how much clearer I can get.

If you are still stuck on not considering this good:

>> compose [a (case [true [null]])]
== [a]

...as offering an axis of flexibility vs. being able to put a reified thing there:

>> compose [a (case [true [_]])]
== [a _]

Then you are mistaken. That's an important axis. It's important in the internals (you can't write coherent block manipulation code without knowing the difference, or it wouldn't compile and run so you could use it). That importance in the internals reflects any time you want to write mezzanines.

>> third [1 _ _]
== _

>> third [1 _]
; null

I'm of the belief you shouldn't need to break into C code to know which situation you have. The part I don't understand is that being controversial.

Places which wish to gloss the difference have an easy mechanism for doing so:

>> null-to-blank third [1 _ _]
== _

>> null-to-blank third [1 _]
== _

I'm glad you've been persistent in questioning non-ergonomic behaviors and terminology. I think it has greatly helped aim the design toward meeting both the needs of system coherence (my specialty) and idiomatic Rebol user scenario aesthetics (your specialty). As I've said--if I've been irritable about it, mostly it's been that I can't solve a problem of you not relating to what I write by writing more of the same.

But things are different now, and I think it's mostly hammered out.

So the story to me all pulls together like this:

  • VOID - An intent of absolutely nothing, a desire for a no-op or vaporizing value. If you pass to a function there may be a sensible return result, and if not you may return another signal (probably an ornery one and not another void, in order to get errors in sensible spots vs. generating a confusing chain that opts itself entirely out with no sign of where the trigger was)

    >> (comment "hi")
    >> 1 + 2 void
    == 3
    >> append [a b] void
    == [a b]
    >> append void 10
    == ~null~  ; isotope
  • TRASH - The state of an unset variable, where you wish for it to cause an error merely by referencing a word. You thus can't test an unset variable like if asdfasdf [...] because there will be an error on the asdfasdf reference. But even if you get a TRASH, it is still logically neither true nor false and will generate an error.

    >> if asdfasdf [print "never reached"]
    ** Error: asdfasdf is unset
    >> get/any 'asdfasdf
    == ~  ; isotope
              ^-- trash is technically the isotopic state of void
    >> if get/any 'asdfasdf [print "never reached, either"]
    ** Error: ~ isotopes are neither conditionally true nor false
  • NULL - A signal of soft failure, such as a SELECT or PICK that did not succeed. It does not cause errors to fetch it from a word, and it is logically false. It's a good choice for initializing a variable that you plan to test in various places to see if it has been assigned yet or not. Most routines will error if you try to pass a null to an argument that expects a value, so you're supposed to use that convenient testability to avoid branches which don't apply:

    >> third [a b]
    == ~null~  ; isotope
    >> if third [a b] [print "Never reached"]
    ; void
    >> append [d e] third [a b]
    ** Error: APPEND cannot add a ~null~ isotope to a block

None of The Above States Can Be Put In Blocks

VOID opts out of being appended, and TRASH and NULL are isotopes. Isotopes (stable ones, e.g. not errors or parameter packs) are states that variables can have, but can't be in blocks.

Yet each of these cases reasonably needs a "proxy value" for representing their intent in a block. The idea behind these proxy values is that under evaluation, each will produce the state.

There is QUOTED VOID, which when evaluated drops the quote level and becomes void.

>> quote void
== '

>> '

>> 1 + 2 '
== 3

>> append [a b] '
== [a b]

There is QUASI VOID, which when evaluated becomes isotopic void (which is what I'm calling TRASH). Assigning it to a variable will unset that variable.

>> quasi void
== ~

>> ~
== ~  ; isotope

>> foo: ~
== ~  ; isotope

>> foo
** Error: FOO is unset

There is the QUASI-WORD! WITH THE SPELLING "NULL", which when evaluated becomes the isotopic word form of ~null~. (The choice to make this literal English word isotope correspond to the null state is a pragmatic one, because something like ~_~ for isotopic blank being designated as null doesn't really offer appreciable benefit and would be a tax to explain.)

>> quasi 'null
== ~null~

>> ~null~
== ~null~  ; isotope

Mechanically, these proxy values are all as friendly as any other value in block manipulation. There's nothing special about them when you're TAKE-ing or APPEND-ing or INSERT-ing them. But functions can react differently to them in terms of the intent they are proxying.

>> append [a b] second [d ']
== [a b ']

>> append [a b] unreify second [d ']
== [a b]

>> append [a b] second [d ~]
== [a b ~]

>> append [a b] unreify second [d ~]
** Error: Cannot APPEND ~ isotopes to blocks

BLANK! Fills A Niche The Proxy Values Do Not Cover

The idea behind blank as a placeholder is that it can stand in for values and then routines can fold its handling in as part of that routine's natural handling of no value.

I believe for example that blank should report TRUE back from EMPTY?, and that it should react to SPREAD the same as an empty block would:

 >> data: [Foo [10 20] Baz _ Bar [30]]

 >> map-each [name stuff] data [spread stuff]
 == [10 20 30]

(By contrast, I'd think that SPREAD of reified ~ or ~null~ should pretty clearly be errors. A SPREAD of just ' might reasonably be argued to act like void, but I'm not sure, so it's an error for now and you'd have to at least DECAY it for it to act like a void.)
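The "folds in as no value" idea can be sketched as a rough analogy (in Python, not Ren-C; the spread helper is a hypothetical name, with None standing in for BLANK!):

```python
def spread(value):
    """BLANK!-style opt-out: None spreads like an empty block,
    while a block (list) spreads to its items."""
    if value is None:
        return []
    return list(value)

# Mirrors the MAP-EACH example: the blank slot contributes nothing.
data = {"Foo": [10, 20], "Baz": None, "Bar": [30]}

result = [x for stuff in data.values() for x in spread(stuff)]
print(result)  # [10, 20, 30]
```

The point is that callers never write a special case for the placeholder; its handling folds into the routine's natural handling of emptiness.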

In any case--this isn't to say BLANK! is synonymous with empty blocks...because it's not. It's equally likely a placeholder for an empty string, or a slot which could be a WORD! or a TAG! or an INTEGER!.

So if its behaviors seem too "friendly" by answering a lot of basic questions and allowing FOR-EACH enumerations to be skipped without complaint, and you don't want such a friendly slot, then ~ may be a better choice for a reified nothing.

Cross-Cutting Behaviors Are Better Than Making BLANK! False

I've uncovered some pretty deep value from being able to say with confidence that anything you can pick out of a block or group is "truthy" from IF's point of view.

When you take a step out from mechanical block construction into the world of what's legal states for variables, then you get into isotopes... where we have isotopic ~null~ and ~false~ and so there are falsey states there. Things are very pleasing.

I think the much more interesting things to say are these other properties like that BLANK! is EMPTY? or that it SPREADs equivalently with an empty block, rather than that it is falsey. Those properties are where the leverage comes from. And the leverage of saying everything reified in a block is truthy is tremendous.

However, I am going to want Redbol emulation to work...even if I think reified things all being truthy is the better answer. I'll have to look into how to hook a TRUTHY? function used by a sort of "evaluative sub-universe"...maybe on a module level, and what hope that has vs. needing to rewrite every construct that acts on a concept of testing for truth and falsehood.

(You lose a lot of interesting correctness going against the all-block-reified-items are truthy rule... while [item: try take block] [...] is just the tip of the iceberg.)

Distortion Of Return Results For THEN/ELSE Is Cleaned Up

I won't rewrite it all here, but the idea of differentiating ~null~ isotopes signaling "no result" vs. being able to signal "there was a result and it was a null isotope" is folded in with the multi-return mechanics.

A parameter pack containing one null isotope will unpack that into a SET-WORD! or first multi-return in a SET-BLOCK!, and for almost any parameter it will act identically. Only some routines like THEN and ELSE will be sensitive to a difference between these two. Conditional constructs that wish to return null intentionally simply must box it up in a parameter pack to count as "an answer that is null" vs. returning a null in isolation.

Long story short: the various aspects of brokenness that were introduced by distorting return results in service of THEN and ELSE have been finessed, folding into a bigger story of multi-returns, which by leveraging isotopes can be handled gracefully in compositions.

Pleasingly, This Is Only THREE Types (if you think of it that way).

What you're seeing here is just VOID, BLANK!, and WORD!.

It just so happens that all datatypes now have isotopic forms, quasiforms, and quoted forms.

But there's full regularity in the behaviors:

  • all quoted values evaluate by removing one quote level
  • all quasiforms evaluate by producing the isotopic form
  • no isotopic forms can be put in blocks
  • attempting to put a void in a block is a no-op
  • all non-isotopic forms are legal in blocks and act the same w.r.t. APPEND, TAKE, etc.
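These regularity rules can be sketched as a toy model (in Python, not Ren-C; every name here is hypothetical and only illustrates how few rules are involved):

```python
from dataclasses import dataclass

@dataclass
class Quoted:
    value: object   # 'x : one quote level around a value

@dataclass
class Quasi:
    value: object   # ~x~ : reified proxy form

@dataclass
class Isotope:
    value: object   # unstable form; cannot be put in blocks

VOID = None         # None plays the role of VOID here

def evaluate(v):
    """All quoted values drop one quote level; all quasiforms
    produce the isotopic form; everything else is left as-is."""
    if isinstance(v, Quoted):
        return v.value
    if isinstance(v, Quasi):
        return Isotope(v.value)
    return v

def append_block(block, v):
    """No isotopic forms in blocks; appending a void is a no-op;
    every other (reified) value appends normally."""
    if v is VOID:
        return block
    if isinstance(v, Isotope):
        raise ValueError("cannot put isotopes in blocks")
    return block + [v]
```

For example, `evaluate(Quoted("x"))` yields the plain value, `evaluate(Quasi("null"))` yields an isotope that `append_block` rejects, and appending `VOID` leaves the block untouched.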

There are nuances here, but, also some very strong anchor points for supporting the flow of engineering decision-making...
