The proxying technique had shown clear advantages for function authors: they could assign outputs directly, instead of needing to remember to always SET the named variable passed in.
But under the hood, the names of the variables were still inputs to the lower-level function. This went as far as trying to stay compatible with the variable-passed-by-refinement trick that historical Redbol used for multiple returns.
Among the many problems was trying to write something like an ENCLOSE. An enclosing function could only influence the primary output, unless it went through some particularly convoluted work:
- It would have to capture the variable name passed in before delegating, because the name would be repurposed as the proxied value slot
- The delegated function would be called, and would update the variable as part of its RETURN
- The encloser would then have to read back this written variable with GET to see it, and use SET to update it... again
Here's an example of what you might think you'd have to do:
multi-returner: func [
    return: [integer!]
    extra: [integer!]
][
    extra: 20  ; by this point, the variable passed as /EXTRA has been hidden
    return 10  ; stowed /EXTRA variable is written back using EXTRA's value of 20
]
wrapper: enclose :multi-returner func [f [frame!]] [
    let extra-var: f.extra  ; capture the variable before DO moves it aside
    let result: do f  ; callee proxies the input variable during its RETURN
    set extra-var (get extra-var) + 1  ; read the newly written value, update it
    return result + 1
]
>> [a b]: wrapper
== 11
>> a
== 11
>> b
== 21
The situation was actually even worse than that. All the complex logic for filling proxy slots with variable WORD!s was done during frame building. So EXTRA-VAR wasn't the hypothetical "parameter passed to EXTRA before it got shifted into a hidden variable slot by FUNC"... it was already the unset slot to be filled by the callee. And the actual variable name was private, known only to the enclosed function.
It gets worse still if you want to preserve the ability to have behavior depend on how many outputs are requested, because there may be no variable at all... or an "opt in to the feature without a variable" placeholder. Correct code would be much more convoluted, if meaningful code could be written at all.
The headaches went deeper. Copying a frame and running it multiple times introduced semantic and ordering problems around when these additional outputs would be written!
Simply Put: Variable Names As Inputs Make Poor Outputs
All of this pointed to the inconvenient truth:
Implementing a function's conceptual outputs by passing named variables as inputs which are imperatively written during the function's body--anywhere, even at the end--is something that will break composition.
It's also horrible for atomicity: a SET of an output variable may happen, but then an error occurs before the final return result can be produced... so any multi-return function working this way is either broken, or bearing an undue burden of doing its own "transaction management" (which is to say, probably also broken).
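To make the atomicity hazard concrete, here's a hedged sketch in that historical style, where the secondary result is written through a passed-in variable name. (PARSE-PAIR is a made-up function for illustration, not an actual library routine.)

parse-pair: func [rest [word!] input [block!]] [
    set rest next input  ; secondary "output" written through the variable immediately
    if not integer? first input [
        fail "first item must be an integer"  ; ...but REST was already written
    ]
    return first input
]

>> parse-pair 'pos [<tag> 20]
** Error: first item must be an integer

>> pos  ; the partial output escaped, even though the call failed
== [20]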
Of course we knew this. But to get the desired effects (single return unless you use a SET-BLOCK!), there's no other choice, right?
The idea of making an ANY-VALUE! which tried to bundle values was nixed in the beginning. If you declare some new datatype to represent a multi-return pack that decays to its first value when assigned to a variable, you enter a catch-22... like this early puzzle, from when @[...] was being considered to denote multi-returns:
multi-return: func [] [
    return @[10 20]  ; assume RETURN is "magic" and returns @[10 20] vs. 10
]
>> x: multi-return
== 10
>> [x y]: multi-return
== 10
>> x
== 10
>> y
== 20
The problems are apparent on even a trivial analysis. These "highly reactive" @[...] values would wreak havoc in a general system. If you walked across a block and encountered one, trying to store it in a variable would introduce distortions on assignment, when it "decayed" to its first element:
for-each x [foo @[10 20] bar] [
    if integer? x [...]  ; INTEGER? sees @[10 20] as just 10
]
...but gee... if only there were some variation of BLOCK! which you could be guaranteed not to encounter when enumerating other blocks, and which couldn't be stored in variables... along with a method for transforming it into and out of reified states so you could work with it...
Hey, waitaminute...
September 2022: Core Multi-Return via Antiform BLOCK!
Generalized Isotopes made longstanding problems seem to fall like dominos... like block splicing and error handling. And they could be applied here too, solving several profound problems:
- As with all antiforms, you wouldn't be able to put BLOCK! antiforms in blocks... alleviating many conceptual problems.
- Even more severely, you wouldn't be able to put BLOCK! antiforms in variables: antiform BLOCK!s would decay to their first value when assigned to any variable.
  - This turned out to be the only antiform "decay" mechanism required, subsuming prior concepts like decaying "heavy null" to "light null". Heavy null would simply be a null representation in an antiform block.
- Representing things like null would be possible, since ^META values would be used in the multi-return convention... affording the multi-return of antiforms themselves.
- Fitting in with the general rules of isotopes, a QUASI-BLOCK! would evaluate to an antiform block:

      >> ~['10 '20]~
      == ~['10 '20]~  ; anti

      >> x: ~['10 '20]~
      == 10
- Functions that might be interested in the antiform state would need to take their parameter as a ^META parameter, in which case they would receive the QUASI-BLOCK! instead of the decayed value (see the console sketch after this list).
  - A good example of a function that would want this is RETURN, in order to have a forwarding mode that returns the antiform result rather than its decayed first value.
- This doesn't rule out building generators that proxy named output variables. In fact it fixes their problems, by limiting the relevance of the proxying to the interior of the function, and making its externals speak the antiform block protocol. If you want a proxying FUNC you can have it... just define your own RETURN variation and wire it up.
- It also opens the door to many other conceptions of how to abstract the multi-return process.
- SET-BLOCK! assignments would have a special understanding of how to decompose antiform blocks and assign the component variables.
- This would break the uneasy "backchannel" between caller and callee of variable names.
  - The most obvious sign this had been a problem was that mere parenthesization would break historical multi-assignment:

        >> [a b]: multi-return    ; would work
        >> [a b]: (multi-return)  ; would act like `(a: multi-return)`
- Now any expression that doesn't store an intermediate variable can act as a pass-thru (such as a conditional). And if a variable wanted to capture the multi-return character temporarily, it could META it... potentially manipulate the QUASI-BLOCK!... and UNMETA it back.
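Putting several of these rules together, here's a quick console sketch of how the pieces interact. (SPY is a hypothetical function, the outputs are extrapolated from the rules above rather than captured from a real session, and it assumes RETURN's forwarding mode is in effect for the antiform result.)

>> [a b]: ~['10 '20]~  ; SET-BLOCK! decomposes the antiform block
== 10

>> b
== 20

>> m: meta ~['10 '20]~  ; META reifies the pack into a plain QUASI-BLOCK!
== ~['10 '20]~

>> [a b]: unmeta m  ; UNMETA restores its multi-return character
== 10

spy: func [^x] [  ; ^META parameter receives the QUASI-BLOCK!, not a decayed 10
    print ["spy saw:" mold x]
    return unmeta x  ; forward the antiform result along
]

>> [a b]: spy ~['10 '20]~
spy saw: ~['10 '20]~
== 10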
Casualties of Composability
One casualty of this was the feature of being able to make a function's behavior depend on how many outputs were requested. That can still be achieved with an enfix function which quotes its left-hand side and manages the assignment itself... it's just no longer something the core attempts to generalize.
Another casualty was legacy compatibility with passing in variable names via refinement. But again, this could be achieved by AUGMENT-ing the function with the refinement, then ENCLOSE-ing that with something that writes the multi-return's output to the variable passed in via that augmented refinement:
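Here's a rough sketch of what such a shim might look like, assuming a MULTI-RETURN whose RETURN forwards the pack ~['10 '20]~. (The /OUTVAR name and the UNQUASI/UNQUOTE picking details are my guesses at the mechanics, not a vetted recipe.)

compat: enclose (augment :multi-return [/outvar [word!]]) func [f [frame!]] [
    let var: f.outvar  ; variable name passed via the augmented refinement
    let result: meta do f  ; META the pack into an inspectable QUASI-BLOCK!
    if var [  ; write the pack's second item through the caller's variable
        set var unquote second unquasi result
    ]
    return unmeta result  ; forward the pack for SET-BLOCK! callers as well
]

>> x: compat/outvar 'b
== 10

>> b
== 20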
But there's really no competition here. As I've hopefully made clear, passing in a named variable via refinement is simply not in the same league as a mechanism which legitimately makes additional outputs.
As usual with these things, I'll admit it may not be simple or obvious at first glance, but the results are speaking for themselves!