There are two modes of partial specialization that Ren-C has supported.
One form takes a refinement like APPEND's :DUP and fixes it to a value:
>> append2: specialize append/ [dup: 2]
>> append2 [a b c] <d>
== [a b c <d> <d>]
Then there's a trickier kind of specialization, which is to ask for the parameter but not specify it... thus just increasing the arity:
>> appenddup: append:dup/
>> appenddup [a b c] <d> 4
== [a b c <d> <d> <d> <d>]
The second form is pretty unusual in the language world. There doesn't seem to be much prior art in "the conversion of optional parameters to required parameters" (at least I don't know of any, and the AIs I asked don't either).
Since it's pretty different from what people think of as partial specialization, let's call it "refinement promotion".
Refinement Promotion Is Tricky
Something that has been non-negotiable in the design is the straightforward array of parameters and locals that specifies a function's interface.
Refinements are defined in some order in that array. But you are not required to use them in that order.
So consider a definition like:
/foo: func [a [integer!] :b [integer!] :c [integer!]] [...]
Someone might do refinement promotion of this as foo:c:b/ ...which makes it look to the caller like a function originally written with the spec:
[a [integer!] c [integer!] b [integer!]]
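That is, mirroring the APPENDDUP example above, a call would gather A first and then C and B in the order they were promoted:
>> foo:c:b 1 2 3   ; a = 1, c = 2, b = 3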
The techniques so far have tried to mimic the way that refinements work. So get $foo:c:b would produce a function accompanied by a [c b] array, which would get pushed when running the promoted function. The elements in that array would not just be the words; those words would carry binding information saying what index they were found at in the parameter array.
But since the underlying array is left the same, this means every time you want to know something like "what's the first unspecialized normal argument?" you have to mimic the refinement-gathering process. That convoluted the code quite a lot, and really went against the idea of the implementation being "simple".
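Roughly, the picture was (just an illustration of the internal idea, not a user-visible structure):
foo/       ; parameter array [a b c], used as-is
foo:c:b/   ; same parameter array, plus an out-of-band [c b] block, where
           ; C and B are each bound to their index in the original array
...and any code asking questions about the interface has to consult both.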
Q: Is This Really Required To Support? (A: Yes)
This may not seem like a super-common need. But if you're implementing a dialect that wants to support calling functions with refinements, it's pretty important.
Let's say you're implementing something like the feature in UPARSE that lets you call functions:
>> data: copy ""
>> parse ["a" "b"] [some [/append:dup (data) text! (2)]]
>> data
== "aabb"
Basically, if (get $/append:dup) can come back with a function that you can query for its parameters and get answers just as if it were any other function, then support for refinements comes basically for free.
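In other words, the dialect implementation can just do something along these lines (a rough sketch of the idea, not UPARSE's actual code):
f: make frame! (get $/append:dup)   ; reflect it like any other function
; ...fill F's unspecialized slots, left to right, from the parse rules...
eval f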
Should The VarList Just Be Rewritten?
If you look at what a modern SPECIALIZE followed by AUGMENT can do, they can hide parameters...and then add back parameters with the same name. Which parameters are visible depends on the "phase" of the frame.
So why couldn't refinement promotion be done just by making a new function interface that removes the argument as a refinement, and adds it back as a regular argument... then has a dispatch phase that moves the argument data to its old position for the subsequent phases?
It's not particularly "cheap" to do that, space-wise. You'd need a new VarList* and a new Phase*, and the Phase would have to remember the new and old positions to do the rewrite. But it would make parameter enumeration blunt and simple, because you'd really just be enumerating the parameters in order.
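At the user level, that rewrite-and-proxy idea is roughly what you'd get by composing with ENCLOSE and AUGMENT (a sketch; the COUNT name here is just for illustration):
appdup: enclose (augment append/ [count [integer!]]) lambda [f] [
    f.dup: f.count  ; move the promoted argument into the refinement's old slot
    eval f
]
>> appdup [a b c] <d> 2
== [a b c <d> <d>]
...the point being that the rewritten-VarList approach would do that proxying with fast internal mechanics, instead of an interpreted wrapper.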
There'd be some cases where the position of the refinement would allow it to just be naturally rewritten to be a regular argument, and that could be optimized for.
What About When You Have Lots of <local>?
This is kind of the dark side of the simple FRAME! model: if you use it to create a lot of local variables, then operations like SPECIALIZE and AUGMENT that do VarList manipulation have to make copies of everything for the new VarList...including a bunch of locals that aren't changing at all in each new form.
/foo: func [x [text!] y [tag!] <local> a b c d e f g h i j k l m n o p] [
... 19 frame cells (includes RETURN) ...
]
/bar: augment foo/ [z [integer!]] ; z is last item in new 20 item frame
Refinement promotion would become another one of these situations that would do seemingly unnecessary duplication.
It would be possible in cases like this to create smaller frames and then proxy the results into larger ones, essentially simulating what a user might do to manually call FOO from a new function BAR which had a frame with 3 elements.
/bar: lambda [x [text!] y [tag!] z [integer!]] [
foo x y ; imagine doing this, but with faster internal mechanics
]
Some calculation could be done to decide when the size of the frame justified it. I have a feeling that the frame would have to be reasonably large before a technique like this would be beneficial.
What About Refinements At Head?
Well, in that case, you would have to build a new VarList* that extracts the arguments, and then proxies them into position for the new interface.
At least one wouldn't be worried about the "bloated copies of locals" situation.
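As a concrete (hypothetical) illustration of that case:
/baz: func [:times [integer!] value [text!]] [...]
Promoting with baz:times/ would present a calling order of VALUE then TIMES, but TIMES sits first in the parameter array... so the parameter's class can't just be flipped in place. The promoted function would have to behave like:
/baz-times: lambda [value [text!] times [integer!]] [
    baz:times value times
]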
So...Use The Auxiliary Array Simulating Refinements?
The code for simulating refinements when asking simple questions like "what's the first unspecialized normal arg" is unappealingly complex.
Making it further unappealing: when you have this array of refinements "off to the side" but still allow people to fill in frame slots to specialize out arguments, you end up needing "reconciliation"...because frame slots referenced by the out-of-band array are no longer part of the refinement promotion.
>> f: make frame! append:dup:part/ ; has auxiliary [dup part]
>> f.dup: 3 ; what cleans up [dup part] to just [part] ?
I've talked about not knowing at what "moment" to do these kinds of fixes, and I'm increasingly looking for ways to avoid there being any such moment. If the physical experience of the frame were that DUP and PART were ordinary parameters and not refinements, then it "just works".
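In other words, the hoped-for experience is something like (a sketch of the desired behavior, not of what happens today):
>> f: make frame! append:dup:part/   ; DUP and PART appear as ordinary parameters
>> f.dup: 3                          ; fills a plain slot, nothing out-of-band to fix up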
The "Dumb" Mechanical Answer Is Likely Best
I sometimes forget just how much I take for granted in Ren-C, regarding the ability to compose functions together.
The "inefficient" idea of making a new parameter list and then proxying the arguments into position would be more efficient than having to create and evaluate an interpreted function that had to manually copy the parameters.
There's a huge tax created by having to compose an off-to-the-side parameter reordering list in with the frame variables, and that tax is paid by any code that wants to interpret the list. It's just too big a tax to pay.
It pains me a bit to delete it, because it was hard to write and seemed clever at the time. But techniques have advanced...and while the auxiliary list may have seemed somewhat optimal for storage, it's no longer the right choice.