Definitional Errors as BLANK!-in-NULL-out Alternative

In traditional Redbol, if you wrote a random chain like first select block second options, there was the question of how you would manage the situation where any of those steps failed.

People would request lenience... saying more operations should assume that if they got a NONE! input, they should just return a NONE! output. But this would give those chains no error locality... you'd get a NONE! at the output and not know what failed. The FIRST? The SELECT? The SECOND...?

You can see DocKimbel's response to a request that INDEX? NONE be NONE:

"There are precedents for built-in functions on series which yield none for none."

"Yes, but the less of them we have, the better, as they lower the robustness of user code, by making some error cases passing silently. The goal of such none-transparency is to be able to chain calls and do nothing in case of none, avoiding an extra either construct. In the above case, it still requires an extra conditional construct (any), so that is not the same use-case as for other none-transparent functions (like remove)."

But I Didn't Give Up So Easily...

The strategy cooked up for Ren-C has been called "BLANK!-in-NULL-out"...leveraging a new meaning of the short word "TRY".

There was an asymmetry created, in which certain functions that didn't have a meaning for BLANK! as input would accept BLANK!, but then just return NULL out.

Then TRY would turn nulls into BLANK!.

As long as functions would generally reject NULL as an input, this created a dynamic where you'd put as many TRYs in a chain as you felt were warranted, if you expected any steps could fail. So perhaps first try select block try second options. A reader could tell which operations could potentially fail using this method.
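To make that dynamic concrete, here's a hedged sketch (BLOCK and OPTIONS are hypothetical variables; it assumes an out-of-range SECOND quietly returns NULL, which is what motivates the chain, and the exact error text is illustrative):

>> options: []

>> second options  ; out of range, quietly returns NULL
; null

>> select block second options  ; ...but SELECT rejects NULL inputs
** Error: SELECT doesn't accept NULL for its VALUE argument

>> select block try second options  ; TRY gives BLANK!, SELECT gives NULL out
; null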

The Achilles Heel: What if a Function Has Meaning for BLANK! ?

Think about something like PICK and POKE. It's legitimate to put a mapping from BLANK! to something in a MAP!. So consider these mechanics:

 my-map.(key)
     => pick my-map key

 my-map.(key): value
     => poke my-map key value

You can't "opt out" of this with BLANK!-in-NULL-out...because blank is one of the legitimate keys.

This is unfortunate, because it would be nice to be able to use TRY with PICK.
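To illustrate the conflict (a hedged sketch; the MAP! contents are made up):

>> m: make map! [_ "blank maps to this"]

>> pick m _  ; BLANK! is a legitimate key here, not an opt-out request
== "blank maps to this"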

An Atomic-Age Solution :atom_symbol: Let TRY Defuse Isotopic Errors

My new idea, which we might call NULL-In-TRY-Out, looks like a conventional error case:

>> first null
** Error: FIRST doesn't accept NULL for its SERIES argument

But this isn't a typical type-checking error (which would happen in the evaluation engine and hence not qualify for being "definitional").

The parameter convention would basically say that the function returned the isotopic error as its output, making it interceptable. In other words: the evaluator actually did accept the NULL parameter and call the function...but then the only action the function took was to immediately put an isotopic error in its output cell.

And one of the constructs that would trap it would be TRY.

>> try first null
; null

Will People Notice A Difference?

It flips things around, so that you put the TRY on the operation that's supposed to tolerate the NULL, not on the NULL being tolerated.

So before you would write something like:

adjust-url-for-raw try match url! :source

That would say "if MATCH returns NULL, morph it into a 'less ornery' BLANK!, so ADJUST-URL-FOR-RAW knows not to complain"

But now:

try adjust-url-for-raw match url! :source

What you're saying is "I know ADJUST-URL-FOR-RAW can't use that NULL, but it expects that I might give it one, so keep the isotopic error it returns from being raised."

Each has its merits.

  • In the old way, you were pointing a finger right at which operation could potentially return NULL.

  • In the new way, you're pointing right at the operation that might be bypassed due to parameter conventions and not run.

All things being equal you could argue for either. But things aren't equal: it's much cleaner to untangle this behavior from BLANK!, and push it out-of-band...freeing BLANK! to take its natural meanings.

On that note, another "nothing of value was lost" casualty: it used to be that someone could deliberately put BLANK! in a variable to signal opting out of something, without needing to use a TRY on the first step.

>> var: _

>> find "abc" var
; null

>> if (find "abc" var) [print "This used to work"]
This used to work

>> if (first find "abc" var) [print "Here's where you got your error"]
** Error: ...

>> if (first try find "abc" var) [print "Only one TRY needed"]
Only one TRY needed

Under the new model there'd be no such option, because there's no state besides NULL being used for the opt-out behavior. So you start off with var: null and then have to say try find "abc" var. You can't "pre-game your TRY".

Who cares. :man_shrugging: This is better.

We Might Be Able To Loosen Up About Mutating Operations...

Hey, why not TRY an APPEND?

>> series: null

>> append series generate-data
** Error: APPEND doesn't accept NULL for its argument

>> try append series generate-data
; null

The conventional wisdom historically was that it would be too risky to let APPEND allow you to pass a none/blank/null as input, because you wouldn't find out your modification didn't take. But if you have to put a TRY on to avoid the error--and you can cleanly know the error came from that APPEND (not from some weird problem deep in GENERATE-DATA)--you've got some safety!

Just Thought of it, So I haven't Implemented it Yet...

I'm sure there'll be some quirks to come up.

My first instinct is to say that TRY will only respond to some narrow errors (like "NULL given for TRY-able input" or similar), and require you to use ATTEMPT if you want to suppress a wide range of problems. But it's early yet to know.

But I think this is a winner.


It was a compelling enough idea that I stayed up til 5am looking into it. I got a system to boot and be able to run all the tests within 4 hours.

(Being able to try these things so quickly speaks to the acrobatic-ness of the codebase...)

"I'm sure there'll be some quirks to come up."

Always. :face_with_raised_eyebrow:

Will Have To Proxy Multi-Returns

When TRY was on the arguments, it didn't get between SET-BLOCK! and the function with multi-returns:

[a b c]: something try arg1 arg2

If you put it on the operation as a whole, then it disconnects that...since TRY doesn't have multi-returns:

[a b c]: try something arg1 arg2

This isn't any particularly new concern, as things like APPLY are going to have to proxy arguments as well. But it at least needs a short-term solution to keep things working that worked before. Not hard.

Different Interactions with ELSE and THEN

Deferred enfix takes one expression on the left:

something arg else [print "This would work"]
(something arg) else [print "This would work"]

So adding a TRY has tended to mess that up:

something try arg else [print "Binds to the try"]
something (try arg) else [print "Binds to the try"]

Under the new rules, you get something else you probably don't want:

try something arg else [print "Binds to the something"]
try (something arg else [print "Binds to the something"])

If you write it this way, it actually won't run any ELSE or THEN branches...they'll just pass the error coming out of SOMETHING through, and TRY will suppress it:

>> try something null then [print "Won't run"] else [print "Won't run"]
; null

Maybe that's useful sometimes, but also maybe the old way was useful sometimes. But what you meant was most likely:

(try something arg) else [print "You probably wanted this"]

Some Functions Want TRY to Give Non-NULL

So there's an interesting aspect to how things like APPEND used to work: they would take in BLANK! values but not use the <blank> annotation for BLANK!-in-NULL-out. Instead they'd process the blank manually and return some value.

APPEND behaved as "BLANK!-in, head of series out"

>> append "abc" _
== "abc"

>> append "abc" try null  ; would produce BLANK!
== "abc"

So what I experimented with was making the error that cues TRY more expressive, by adding an argument. It's called "Try If Null Was Meant", and if you use TRY it will evaluate to the error's argument.
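So under that scheme, APPEND's NULL-rejecting error would carry the head of the series as its "Try If Null Was Meant" argument, and TRY would evaluate to it...preserving the old convenience. A sketch of the intent (exact error text illustrative):

>> append "abc" null
** Error: APPEND doesn't accept NULL for its VALUE argument

>> try append "abc" null  ; TRY evaluates to the error's argument, the head
== "abc"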

Overall, It's Looking Good

It turns things around:

 keep try match integer! foo
 try keep match integer! foo

And it's cleaner. You don't have to involve BLANK! to explain what's going on: it's a NULL on the input with TRY as the referee saying that's ok...with the ability to make the output be whatever (though often NULL).


This kind of makes me wonder if the "I know this can fail" nature of ELSE (and THEN) should imply TRY, for convenience?

When you think about the point of TRY, it's "are you sure? this could mean you're not getting a result...are you cool with that?"

It's to stop you from thinking everything is hunky-dory and proceeding along:

keep [type:]
keep select type-map foo  ; if this select gives NULL, don't want silent no-op

Where TRY used to come in was to silence KEEP from raising an error on its input type being NULL, so you'd say keep try. Now it routes the error to the output definitionally, so you say try keep.

But if you've got an ELSE there, things aren't at risk of disappearing into the void with no handling. You're saying "I'm putting in explicit handling for the case of NULL."

It might be a little aggressive to say ELSE means you know you "meant" it. For example, ELSE also reacts to pure void...which happens when a loop never runs. Which means ELSE is already doing double-duty with BREAK:

 for-each item data [print mold item] else [
     print "The body never ran... or the loop body did BREAK"
     print "or... data was NULL, if ELSE has implicit TRY?"
 ]

Silently conflating the DATA variable being NULL with the body never running or executing a BREAK is a bit excessive.

This conflation would be a problem for any function that can legitimately return NULL for reasons other than NULL input. For instance SELECT. In that case it only conflates with one thing (the data was there and no selection could be made) but still maybe a problem.

It's probably not a good idea to do the conflation, but I thought I'd mention it.


There could be additional words, e.g. otherwise which conflates, or on-null which reacts only to null.

With refinements not being conflated with member selection via tuple dots, we may actually be able to have refinements on enfix functions. I've been considering narrowing refinements for ELSE, like ELSE/NULL or ELSE/VOID. There could also be ELSE/TRY...

On another related thought, I think MAYBE should turn these errors into VOID. Some code from HELP:

parse parameters of :action [
    args: opt across some [word! | meta-word! | get-word! | lit-word!]
    refinements: opt across some path!  ; as mentioned, these are at tail
]
; ... some code
print [
    _ _ _ _ (uppercase mold topic)
        maybe form args, maybe form refinements
]

So what's nice here is that ARGS and REFINEMENTS wind up being either NULL or a BLOCK!, so you can test for the null vs. having to use EMPTY?.

If MAYBE didn't suppress the complaint from FORM about receiving NULL input, you'd have to write maybe try form args to get the void for the form to vanish.

But I think just saying maybe form args should probably be enough, putting it in the same league as TRY.
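That would make the intended behavior look something like this sketch (the exact error text and void display are illustrative):

>> form null
** Error: FORM doesn't accept NULL for its VALUE argument

>> maybe form null  ; MAYBE defuses the error, vanishing to void
; void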


For completeness purposes, I'll mention there's a possible plan of attack that could keep the TRY call on the argument...using an inversion of the mechanism.

This would be if the TRY generated a definitional error that was intercepted by the <try> parameters.

>> try null
** Error: Tried variable was NULL but a <try> parameter can intercept this

>> collect [keep null]
** Error: KEEP does not accept NULL for its VALUE argument (unless you use TRY)

>> collect [keep try null]
== []

It's kind of instantly off-putting, just from the idea that TRY generates errors instead of suppressing them. But it does technically accomplish its goal out-of-band without having to repurpose something like BLANK! in the process.

You get the advantage that it doesn't interfere with enfix if you are trying something other than the last argument in a function:

for-each x try data [...] else [...]

But I think that's the only advantage. And the enfix will need a general solution anyway.

I'm already favoring try keep value over keep try value... it's more coherent to think of yourself as "trying the operation".

Doesn't seem a good idea to bring back the idea of "trying a value" and turning it into "something weird that is accepted by an argument". Error suppression for TRY is an easier idea to grok than error generation (and connects a bit with the historical use of TRY here and in other languages).

However... there may be other cases where the idea of an operation intentionally generating a specific error to be intercepted as a parameter would be interesting. It just seems a confusing approach here, when a clearer one is available.