In bringing back a modernized positional APPLY as the default, there are some really cool possibilities at hand. We can do something people have wanted for a while, e.g.:
apply append [series value /dup n + 1 /part skip series 4]
Generalized quoting gives us an interesting ability because we can actually quote refinements in such a model:
apply append [series '/append-me /dup n + 1 /part skip series 4]
Since APPLY (without /ONLY) evaluates its arguments, the APPLY operation can tell the difference between a refinement meant literally and one not. (You might have done this with a GROUP! before, but that's noisier and also more costly in the evaluator, while this is practically free.)
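For contrast, here's roughly what the GROUP! workaround mentioned above would look like next to the quoted form (hypothetical sketches of the proposed evaluation, not working code today):

```rebol
; with generalized quoting, the quote evaluates away, leaving a PATH!
; that APPLY sees as a plain value rather than as a refinement marker
apply append [series '/dup]

; the older GROUP!-based escape forces evaluation the noisy way,
; and costs an extra group evaluation in the process
apply append [series (quote /dup)]
```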
That's pretty "exciting". But it got me to thinking about skippable parameters like predicates, or the label to COMPOSE with. Those things are looking like they're going to be a very big deal--paradigm shifts. So how could you specify those?
Even though they're not refinements, you could specify them by their parameter name expressed as a refinement:
block: [(1 + 2) (<*> 1 + 2)] apply compose [block /label <*>]
So inside of COMPOSE, label won't be a separate refinement variable...it will be the actual TAG! <*>.
This got me to wondering...
Why Do Refinements Need More Than One Value Anyway?
I'm always frustrated trying to name refinement arguments. If the function takes a /PART, why can't the variable be called PART? What's this other name for? Isn't that what BLANK!s ("none!s") are for in the first place? To indicate the absence of a value?
This might make things a little harder if PART's argument could be a LOGIC! or an INTEGER!, since you don't have as easy a test for discerning whether it was supplied. But why should refinements be any easier than the entire rest of the system on this matter? And how often does this actually happen?
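Under the proposal, checking a refinement would just be an ordinary test on the single variable. A hypothetical sketch (PART here is assumed to be blank when the refinement wasn't used, and to hold the argument directly when it was):

```rebol
; hypothetical sketch of the proposed single-variable model
foo: function [series [block!] /part [integer!]] [
    either blank? part [
        print "no /part supplied"
    ][
        print ["limit is" part]  ; PART *is* the integer argument
    ]
]
```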
Functions with refinements have historically been pretty confusing, and having a refinement that takes more than one argument is extremely rare. If you really need multiple arguments to a refinement for some reason, there's blocks and paths and such.
Having the function's refinements be their own arguments has been an interesting experiment. And it's very useful for refinements that don't have any arguments, because then there's no argument that can serve as the "present or not" status.
But it's not like it would be that hard to write something like this:
>> used: function ['refinement [path!]] [
       try if something? refinement [  ; not null or blank
           refinement
       ]
   ]

>> foo: func [/a /b] [print [used /a used /b]]

>> foo/a
/a

>> foo/a/b
/a /b
(I think @IngoHohmann made something of the sort a while ago.) In any case, my point is that I think we can live without a separate "status of the refinement" and value.
How it would look in practice
Imagine this function interpreted under the new understandings:
foo: function [
    arg1 [block!]
    /ref1
    arg2 [string!]
    /ref2 [integer!]
][...]
What this would actually be saying is that you have a /ref1 refinement whose only value is its use or disuse. This would be like any refinement without an argument today. It would be blank if not used, and for good measure we could make it hold /ref1 as its value if used (that seems better than making something else up, and it actually has applications for 0-arg refinements).
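So under this reading, a no-argument refinement's variable would behave something like the following (a hypothetical sketch of the proposed behavior):

```rebol
foo: function [/ref1] [print mold ref1]

foo        ; ref1 is blank => prints _
foo/ref1   ; ref1 holds the PATH! /ref1 => prints /ref1
```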
But then, arg2 is just another normal argument that comes after it. And ref2 is a refinement with an INTEGER! argument--but that integer argument would arrive in the ref2 variable itself, or it would be a blank.
So what this function actually is doing would be like the following in today's world:
foo: function [
    arg1 [block!]
    arg2 [string!]
    /ref1
    /ref2 ref2arg [integer!]
][
    ref2: ref2arg
    unset 'ref2arg
    ...
]
Already you can see that it wouldn't be that different from today. And Ren-C already has tricks up its sleeve for doing legacy emulations...the old behavior of getting multiple arguments would be emulated one way or another, without doing too much extra work. (The simplest emulation would allow the same notation for single-argument refinements, and error with more than one argument--and that is likely sufficient.)
It would save space and speed the system up
Right now when you have a refinement with an argument, that's two frame cells to fulfill. Collapsing it to one is obviously more efficient.
But saving on storage is only part of it. There's a lot of evaluator complexity trying to keep the state and worrying about there being more than one argument...looping, checking. A ton of complexity just vanishes with this.
The "refinement revocation" methods of today are more complex than they need to be as well. You can get into dicey situations where you've revoked one argument and not another. Specialization has to cover cases where you set the refinement to false but the value to true. The fact that you can always make a parameter a BLOCK! if you really want it to carry multiple values seems to solve a lot of problems.
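For the rare case where a refinement genuinely wants several values, a single BLOCK! argument covers it. A hypothetical sketch under the single-value model (CLIP and its /range refinement are made up for illustration):

```rebol
; instead of `/range lo [integer!] hi [integer!]`, take one block
clip: function [value [integer!] /range [block!]] [
    if range [  ; blank when the refinement wasn't used
        value: max value range/1
        value: min value range/2
    ]
    value
]

clip/range 50 [0 10]   ; => 10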
You could put normal arguments after refinement arguments
I show in the example above putting an ordinary argument after a refinement argument. That may not look all that useful to you. But maybe it would help in putting related parameters together without worrying about whether they were optional or not...kind of letting you express things in the flow of your thought.
But there's a really compelling reason to do this mechanically for deriving functions that add new arguments:
Because of the way frames work positionally, you can't derive one function from another in a way that reorders its arguments. This means that if you try to derive from a function that has two normal arguments and one refinement, you can't add more normal arguments today, because everything after that point is implied to be a refinement or a refinement argument. Once you've entered the "refinement zone", it's a point of no return.
This would correct that weirdness and permit extending functions with more parameters, either regular or refinement, and not run the risk of a regular refinement getting picked up as an argument to something it didn't intend.
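Concretely, the kind of derivation this unlocks might look like this (hypothetical specs, assuming the proposed rules):

```rebol
; base function's spec ends with a refinement...
foo: function [arg1 [block!] /only] [...]

; ...yet a derived function could still append a NORMAL argument,
; because a plain parameter after /only would no longer be ambiguous
foo-plus: function [arg1 [block!] /only extra [string!]] [...]
```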
You could write your arguments in any order in APPLY
It helps make sense of the "the refinement names the argument you are about to give" idea. But why not go further, and let you put refinements anywhere in an APPLY?
>> block: [1 2 3]

>> apply append [/dup 2 /value <x> block]
== [1 2 3 <x> <x>]
The current refinement processing mechanics would be much easier to rationalize and simplify under this model and likely make such reimaginations possible--as well as other forms of lightweight skinning that let you reorder function arguments on a whim.
I haven't tried writing it yet, but...
When I think of all the various parts of the system that get bent out of shape over edge cases, I have to say I think this sounds like it may well be a winner.
For a while we could disable the ability to put normal parameters after a refinement, and just raise an error if you do that. So you'd know to convert /foo bar [integer!] to just /foo [integer!]. In the future, though, cases like bar would start working as normal parameters.
The only casualties I can think of are using blanks as refinement arguments, and being able to do partial refinement specialization inside of an APPLIQUE. So you couldn't specialize like this:
applique 'append [part: true ...]
That would assume you wanted PART to be the value true. For a partial specialization (e.g. one that says you get the behavior as if you'd written APPEND/PART at the callsite, getting a refinement as a normal arg) you'd have to say:
applique 'append/part [...]
I can think of some other mechanical complications, but nothing overwhelming off the top of my head.