When you supplied a function spec block in R3-Alpha, it would preserve that spec as a copy of whatever was passed to the low-level MAKE FUNCTION! call. You could get it back with SPEC-OF. (If your code was abstract and the FUNC call was generated by code, this spec might not look familiar to you--since it wouldn't line up directly with what you'd written in source.)
This spec was preserved as commentary only: used by SOURCE and HELP. The MAKE FUNCTION! operation would create a second array as a cache, which was a summary of just what the evaluator needed to know: the order, types, and conventions of the parameters (normal argument, quoted argument, refinement). R3-Alpha called this the "args" array of the function, but Ren-C named the summary array the paramlist.
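To make that distinction concrete, here is a small R3-Alpha-flavored illustration (output approximate): SPEC-OF hands back the block as it was given, while WORDS-OF is answered from the cached paramlist.
r3-alpha>> foo: func ["Add one" n "number to increment" [integer!]] [n + 1]
r3-alpha>> spec-of :foo
== ["Add one" n "number to increment" [integer!]]
r3-alpha>> words-of :foo
== [n]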
Redundant Interpretations of the Spec
If you wanted to know the order of parameters and their conventions in R3-Alpha, you could use WORDS-OF:
r3-alpha>> words-of :append
== [series value /part length /only /dup count]
This information was extracted from the cache (the "paramlist"), not from the spec block. The same went for getting the accepted types with TYPES-OF, which is not very readable when an ANY-TYPE! renders expanded as every type it contains, for each parameter...!
r3-alpha>> types-of :append
== [make typeset! [binary! string! file! email! url! tag! bitset! image! vector!
block! paren! path! set-path! get-path! lit-path! map! object! port! gob!]
make typeset! [unset! none! logic! integer! decimal! percent! money! char!
pair! tuple! time! date! binary! string! file! email! url! tag! bitset! image!
vector! block! paren! path! set-path! get-path! lit-path! map! datatype!
typeset! word! set-word! get-word! lit-word! refinement! issue! native!
action! rebcode! command! op! closure! function!...
The description strings--however--were skipped over by MAKE FUNCTION!. It only looked at ANY-WORD!s for the parameter names, and BLOCK!s for the typeset definitions. So anyone who wanted to extract help would have to do it from the saved SPEC-OF block.
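As a rough sketch of what that kind of extraction looked like, here is a hypothetical helper (not the actual HELP code) that assumes the description string directly follows the parameter word in the saved spec:
spec-note-of: func [
    "Sketch: find the doc string following a parameter word in a spec"
    action [any-function!]
    word [word!]
    /local pos
][
    pos: find spec-of :action word
    all [pos string? second pos second pos]
]
r3-alpha>> foo: func [x "the input" [integer!]] [x]
r3-alpha>> spec-note-of :foo 'x
== "the input"
Note the assumption baked in: the string has to come right after the word, which is exactly the kind of guess that the ambiguity below makes unreliable.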
This was made more dubious by the lack of a formal specification for what exactly the legitimate formats were:
func [arg [type!] {doc string}] [...]
func [arg {doc string} [type!]] [...]
func [arg {doc string1} {doc string2} [type!]] [...]
func [arg {doc string1} [type!] {doc string2}] [...]
Ren-C had some ambitions to define this, and even to let help information that was broken up into multiple strings be merged together into a single string, so you could say things like:
func [
    {Some description you write}
    {that spans multiple lines}
    arg1 "description of arg1"
    ...
][
    ...
]
This led to the idea of centralizing the code for function spec analysis so that the block was turned into structured information in one place. It wouldn't just be the argument names and types that would be processed at function creation time. Each parameter would be mapped to a completed description string as well.
And then the spec block wouldn't be stored at all. All queries about the function would go through to the structured information. If it wasn't part of what the evaluator needed (and hence didn't wind up in the paramlist), it would have to be stowed away in an object associated with the function known as the META.
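The exact shape of that META object has shifted over time, but as a sketch (the field names here are illustrative, not a fixed contract) it would hold roughly what HELP wants and the evaluator doesn't:
meta: make object! [
    description: {Some description you write that spans multiple lines}
    parameter-notes: make object! [
        arg1: "description of arg1"
    ]
    parameter-types: make object! [
        arg1: [integer!]
    ]
]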
What's Been Good About It?
The best thing I think has come out of this is the idea that the core focuses on function mechanics, and is relatively unconcerned with how much or how little information is maintained for HELP. This has empowered a large number of function derivation operations that are critical to the operation of the system today.
In fact, I think saying the core focuses on mechanics should be pushed even further, to establish that the core does not ever see a function SPEC BLOCK! at all. So no description strings ever get passed in. Instead, whatever we think of as MAKE ACTION! takes a distilled parameter order and convention definition...essentially a FRAME!.
I've written about an idea approaching this concept in "Seeing All ACTION!s as Variadic FRAME!-Makers". And I think something like this is basically the future.
So in this coming future, not only are FUNC and FUNCTION synonyms, but the logic which transforms BLOCK!s into parameter lists and conventions lives in those generators. If you want to write a generator that uses the spec block logic, you build it on top of FUNCTION. Because if you are going to make a raw ACTION! on your own, you will need to speak a more "core" protocol...and you'll also be responsible for managing the information tied to HELP, which the core does not care about at all.
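As a sketch of that layering (written R3-Alpha-style for familiarity, but the shape is the same): a generator that wants the spec-block conveniences just composes a spec and delegates to FUNC, instead of speaking the raw ACTION! protocol:
verbose-func: func [
    "Sketch: like FUNC, but adds a /verbose refinement to every spec"
    spec [block!]
    body [block!]
][
    func append copy spec [/verbose] body
]
r3-alpha>> foo: verbose-func [x] [if verbose [print "called foo"] x + 1]
r3-alpha>> foo/verbose 3
called foo
== 4
The generator never touches MAKE FUNCTION! directly, so whatever HELP information the spec carries gets handled the same way it would be for FUNC itself.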
What's Been Bad About It?
I mentioned that there has always been a problem with using higher level generators that translate down to calls to FUNC, as R3-Alpha would only preserve what it got at the FUNC level:
Imagine you come up with a relatively simple function generator that takes a WORD! and gives you two variables corresponding to that one WORD!:
r3-alpha>> var-doubling-func: func ['var body] [
    func compose [(to word! join var 1) (to word! join var 2)] body
]
r3-alpha>> f: var-doubling-func 'x [
    print ["x1 is" x1]
    print ["x2 is" x2]
]
r3-alpha>> f 10 20
x1 is 10
x2 is 20
But SOURCE will not give you back your "source" as written with the VAR-DOUBLING-FUNC. It will just give you what MAKE FUNCTION! ultimately saw:
>> source f
f: make function! [
    [x1 x2]
    [print ["x1 is" x1] print ["x2 is" x2]]
]
That's the tip of the iceberg. But because Ren-C is factoring so much out into a generalized "language construction kit", the idea of being able to get at "source" is becoming more difficult. People who are doing what they used to feel were "simple" things are actually doing complex things.
There are some tools we can use here; one of the best is mapping back to real source in files, via line numbers and file names. This would mean that in the f: var-doubling-func example above, the generated function would be able to locate that invocation and point you back at it. There might be dozens of functions generated by an abstraction, all pointing back to that origin, but this could still be very helpful if you are inspecting one of them to see where it all kicked off from.
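Purely as a hypothetical, such a facility might be asked for something like this (neither a SOURCE/ORIGIN refinement nor this output format exists today):
>> source/origin :f
; hypothetical: generated at %my-script.reb, line 12
f: var-doubling-func 'x [
    print ["x1 is" x1]
    print ["x2 is" x2]
]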
Another problem has been that as Ren-C's sophistication as an evaluator has gone up...with things like <skip>-able parameters and variadics and such...the ability to detect and interpret that from the outside has gone down.
Even a simplifying change like the refinements-are-their-own-arguments now makes things a little harder for those doing spec analysis. Compare:
r3-alpha>> words-of :append
== [series value /part length /only /dup count]
ren-c>> parameters of :append
== [series value /part /only /dup /line]
Previously there was a bit of effort involved in parsing out which refinements took arguments and which didn't. But at least you could tell from the word list alone, without consulting the types. Now there's no way of knowing.
I stand by the change of unifying refinements and their names 100%. But maybe there are moves that should be made here to distinguish the argument-taking refinements? Random brainstorms:
>> parameters of :append
== [series value part/ /only dup/ /line]
>> parameters of :append
== [series value /part/ /only /dup/ /line]
>> parameters of :append
== [series value /[part] /only /[dup] /line]
I don't know. But maybe this is kind of a losing battle, and really the only meaningful piece of information you should be concerned about is words of make frame! :append, with the parameter conventions being strictly between the evaluator and HELP.
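For reference, that query already gives you the argument-slot view without any refinement decoration (output approximate):
ren-c>> words of make frame! :append
== [series value part only dup line]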
This is more thinking-out-loud than anything...
But the central point, I think, is that the interpreter core and "function spec blocks" have an increasingly distant relationship. I think that's a good thing, and I foresee something more like the post I made, where parameter descriptions and HELP information are completely divorced from the language used to ask the core to make a function.
It's just going to require a much different way of thinking about SOURCE. It might mean more tricks like what @IngoHohmann was looking into, but do bear in mind my VAR-DOUBLING-FUNC above...which leads me to think that rather than snapshotting a certain level of abstraction, backtracking through generated code to the "actual source" might be the more useful direction long term. But you might also make requests, e.g.:
>> snapshot f: 1 + 2
== 3
>> get-snapshot 'f
== "1 + 2" ; expression that made f