In traditional Redbol, if you wrote a chain like `first select block second options`, there was the question of how you would handle any of these steps failing.
People would request lenience... to say more operations should assume that if they got a NONE! input, they should just return a NONE! output. But this would give those chains no error locality... you'd get a NONE! at the output and not know what failed. The FIRST? The SELECT? The SECOND...?
You can see DocKimbel's response to a request that INDEX? NONE be NONE:
meijeru:

> "There are precedents for built-in functions on series which yield none for none."

DocKimbel:

> "Yes, but the less of them we have, the better, as they lower the robustness of user code, by making some error cases passing silently. The goal of such none-transparency is to be able to chain calls and do nothing in case of `none`, avoiding an extra `either` construct. In the above case, it still requires an extra conditional construct (`any`), so that is not the same use-case as for other none-transparent functions (like `remove`)."
But I Didn't Give Up So Easily...
The strategy cooked up for Ren-C has been called "BLANK!-in-NULL-out"...leveraging a new meaning of the short word "TRY".
An asymmetry was created, in which certain functions that didn't have a meaning for BLANK! as input would accept BLANK! anyway, but then just return NULL.

Then TRY would turn NULLs into BLANK!.
As long as functions generally rejected NULL as an input, this created a dynamic where you'd put as many TRYs in a chain as you felt were warranted, wherever you expected a step could fail. So perhaps `first try select block try second options`. A reader could tell which operations could potentially fail using this method.
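To make the dynamic concrete, here's a rough Python model of BLANK!-in-NULL-out. All the names here (`BLANK`, `try_`, the Python `select`/`first`) are invented stand-ins for illustration, not Ren-C's actual implementation:

```python
# Rough Python model of "BLANK!-in-NULL-out" (a sketch; BLANK, try_, etc.
# are invented stand-ins, not Ren-C's real implementation).

BLANK = object()  # stand-in for Ren-C's BLANK! value

def try_(value):
    """TRY turns null (None) into BLANK!, so the next step tolerates it."""
    return BLANK if value is None else value

def select(block, key):
    """No meaning for BLANK! input, so: blank in, null out."""
    if key is BLANK:
        return None
    if key is None:
        raise TypeError("SELECT doesn't accept NULL")  # null input is a hard error
    return dict(block).get(key)

def first(series):
    if series is BLANK:
        return None  # blank in, null out
    if series is None:
        raise TypeError("FIRST doesn't accept NULL")
    return series[0]

# Modeling: first try select block try second options
# ...where the upstream step (SECOND) produced null:
block = [("a", [1, 2]), ("b", [3, 4])]
upstream = None                                  # e.g. SECOND found nothing
result = first(try_(select(block, try_(upstream))))
assert result is None                            # chain yields null, no error

# Without the TRYs, the failure would surface immediately:
# first(select(block, upstream))  -> TypeError: SELECT doesn't accept NULL
```

The reader-visible property the post describes falls out of the model: each `try_` marks exactly the step that is allowed to "fail" by producing null.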
The Achilles Heel: What if a Function Has Meaning for BLANK!?
Think about something like PICK and POKE. It's legitimate to put a mapping from BLANK! to something in a MAP!. So consider these mechanics:

```
my-map.(key)          =>  pick my-map key
my-map.(key): value   =>  poke my-map key value
```
You can't "opt out" of this with blank-in-null-out...because blank is one of the legitimate keys.
This is unfortunate, because it would be nice to be able to use TRY with PICK.
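A quick sketch of the conflict, using invented Python stand-ins (not Ren-C code):

```python
# Sketch of why blank-in-null-out conflicts with PICK (invented names):
# BLANK! can itself be a legitimate MAP! key, so a blank input to PICK is
# ambiguous -- "opt out of the pick" vs. "look up the BLANK! key".

BLANK = object()  # stand-in for Ren-C's BLANK! value
my_map = {BLANK: "blank-maps-to-this", "a": 1}

def pick(mapping, key):
    # If PICK treated blank as "opt out" (always returning None), the
    # legitimate BLANK!-keyed entry would become unreachable:
    return mapping.get(key)

assert pick(my_map, BLANK) == "blank-maps-to-this"  # must NOT be None
assert pick(my_map, "a") == 1
```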
An Atomic-Age Solution
Let TRY Defuse Isotopic Errors
My new idea, which we might call NULL-In-TRY-Out, looks like a conventional error case:
```
>> first null
** Error: FIRST doesn't accept NULL for its SERIES argument
```
But this isn't a typical type-checking error (which would happen in the evaluation engine and hence not qualify for being "definitional").
The parameter convention would basically say that the function returned the isotopic error as its output, making it interceptable. In other words: the function actually did accept the NULL parameter and was called...but then the only action it took was to immediately put an isotopic error in its output cell.
And one of the constructs that would trap it would be TRY.
```
>> try first null
; null
```
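One way to picture the mechanic is this Python sketch. The names (`Raised`, `try_`, `consume`) are invented for illustration; Ren-C's actual isotopic errors work differently under the hood:

```python
# Python sketch of "NULL-In-TRY-Out" (invented names, not Ren-C's real
# implementation). A function given NULL doesn't raise immediately: it
# returns a "raised" error as its output, which TRY alone may defuse.

class Raised:
    """Stand-in for an isotopic (definitional) error in an output cell."""
    def __init__(self, message):
        self.message = message

def first(series):
    if series is None:
        # The function WAS called with the NULL; its only action is to
        # put an error in its output cell.
        return Raised("FIRST doesn't accept NULL for its SERIES argument")
    return series[0]

def try_(value):
    """TRY intercepts the raised error and yields null (None)."""
    return None if isinstance(value, Raised) else value

def consume(value):
    """Any other consumer escalates an undefused error for real."""
    if isinstance(value, Raised):
        raise RuntimeError(value.message)
    return value

assert try_(first(None)) is None     # >> try first null  ; null
assert consume(first("abc")) == "a"  # normal calls are unaffected
try:
    consume(first(None))             # undefused: becomes a real error
except RuntimeError as e:
    print(e)
```

Because the error is a value the function *returned*, it's interceptable at exactly one call boundary, which is what makes it "definitional" rather than a generic type-checking failure.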
Will People Notice A Difference?
It flips things around: you put the TRY on the operation that's supposed to tolerate the NULL, not on the expression producing the NULL that gets tolerated.
So before you would write something like:
```
adjust-url-for-raw try match url! :source
```
That would say "if MATCH returns NULL, morph it into a 'less ornery' BLANK!, so ADJUST-URL-FOR-RAW knows not to complain"
But now:
```
try adjust-url-for-raw match url! :source
```
What you're saying is "I know ADJUST-URL-FOR-RAW can't use that NULL, but it expects that I might give it one, so keep the isotopic error it returns from being raised."
Each has its merits.
- In the old way, you were pointing a finger right at which operation could potentially return NULL.

- In the new way, you're pointing right at the operation that might be bypassed due to parameter conventions and not run.
All things being equal you could argue for either. But things aren't equal: it's much cleaner to untangle this behavior from BLANK!, and push it out-of-band...freeing BLANK! to take its natural meanings.
On that note, another "nothing of value was lost" casualty: people would deliberately put BLANK! in variables to signal opting out of something, so they wouldn't need a TRY on the first step.
```
>> var: _

>> find "abc" var
; null

>> if (find "abc" var) [print "This used to work"]
This used to work

>> if (first find "abc" var) [print "Here's where you got your error"]
** Error: ...

>> if (first try find "abc" var) [print "Only one TRY needed"]
Only one TRY needed
```
There'd be no such option here, because there's no state besides NULL being used for the opt-out behavior. So you start off with `var: null` and then have to say `try find "abc" var`. You can't "pre-game your TRY".
Who cares. This is better.
We Might Be Able To Loosen Up About Mutating Operations...
Hey, why not TRY an APPEND?
```
>> series: null

>> append series generate-data
** Error: APPEND doesn't accept NULL for its argument

>> try append series generate-data
; null
```
The conventional wisdom historically was that it would be too risky to let APPEND accept a none/blank/null input, because you wouldn't find out your modification didn't take. But if you have to put a TRY on to avoid the error--and you can cleanly know the error came from that APPEND (not from some weird problem deep in GENERATE-DATA)--you've got some safety!
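A self-contained sketch of why that safety argument holds (again with invented Python names, not Ren-C internals): the raised error is definitional to the APPEND itself, so defusing it can't mask a failure inside the argument expression.

```python
# Sketch (invented names, not Ren-C internals): the raised error belongs
# to THIS append, so TRY on the append can't hide an unrelated failure
# from deep inside the argument expression.

class Raised:
    def __init__(self, message):
        self.message = message

def append_(series, value):
    if series is None:
        return Raised("APPEND doesn't accept NULL for its argument")
    series.append(value)
    return series

def try_(value):
    return None if isinstance(value, Raised) else value

def generate_data():
    # A crash inside the argument still propagates normally; the TRY on
    # the append never gets a chance to swallow it.
    raise RuntimeError("some weird problem deep in GENERATE-DATA")

assert try_(append_(None, 42)) is None       # opted-out append: just null
data = [1, 2]
assert try_(append_(data, 3)) == [1, 2, 3]   # normal append unaffected
try:
    try_(append_(None, generate_data()))     # argument evaluates first...
except RuntimeError as e:
    print(e)                                 # ...and its error is not hidden
```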
Just Thought Of It, So I Haven't Implemented It Yet...
I'm sure there'll be some quirks to come up.
My first instinct is to say that TRY will only respond to some narrow errors (like "NULL given for TRY-able input" or similar), and require you to use ATTEMPT if you want to suppress a wide range of problems. But it's early yet to know.
But I think this is a winner.