A few times I've talked about the potential of making it possible to call a COMBINATOR function from outside of PARSE.
This is to say that if some PARSE-specific parameter was missing (e.g. the "parse state") there'd be a mode in the guts of the COMBINATOR mechanic which cooked up something like a temporary parse session just for the input you passed in.
Would It "Combinate" Parsers For You?
The situations I had in mind weren't really combinators that take parsers as parameters. And now that I look at it, I think that suggests that... no, you probably shouldn't call these kinds of combinators outside of PARSE.
Here's one way to imagine calling a combinator like SOME directly:
>> [value rest]: some "aaaabbb" [repeat 2 "a"]
== "a"
>> value
== "a"
>> rest
== "bbb"
This exposes how SOME is actually not arity-1. Though it takes a "combinated parser" as a parameter, it also takes an INPUT...but that's usually implicit...specialized in by PARSE. When called directly from normal code, it could gather that parameter normally.
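To make the arity-2 nature concrete, here's a minimal sketch in Python (purely illustrative — the names `some`, `repeat`, and `literal` are hypothetical stand-ins, not Ren-C internals) where a SOME-like combinator takes both a parser and an explicit input, returning the last match's value plus the remainder:

```python
def literal(ch):
    """Parser matching one character; returns (value, rest) or None on failure."""
    def parser(inp):
        return (ch, inp[1:]) if inp.startswith(ch) else None
    return parser

def repeat(n, parser):
    """Parser matching `parser` exactly n times in a row."""
    def repeated(inp):
        value = None
        for _ in range(n):
            result = parser(inp)
            if result is None:
                return None
            value, inp = result
        return value, inp
    return repeated

def some(parser, inp):
    """Match `parser` one or more times; hand back last value and remainder."""
    result = parser(inp)
    if result is None:
        return None  # SOME requires at least one match
    while result is not None:
        value, inp = result
        result = parser(inp)
    return value, inp

print(some(repeat(2, literal("a")), "aaaabbb"))  # → ('a', 'bbb')
```

This mirrors the REPL transcript above: the parser argument is "combinated", while the input is an ordinary second argument.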
It doesn't feel that compelling, since you're just getting a synonym for:
parse "aaaabbb" [some repeat 2 "a"]
But also, why would it take that interpretation instead of:
parse "aaaabbb" [some ([repeat 2 "a"])]
One point of view would say it makes more sense to treat the block argument as the product of an evaluation, since the other arguments would presumably be evaluative:
>> [value rest]: some "aaaabbb" append [repeat 2] "a"
== ??? ; infinite loop?
But this would make rule-taking combinators nearly useless.
It Was Suggested For Sharing "Decoding", not "Combinating"
Seeing how SOME isn't a good example for this, maybe the right way to say what I'm getting at is that there's some category of functions we might call "decoders"...and PARSE would be willing to call these.
They'd fit a standard format regarding things like taking an input series and giving back an amount of progress or an error. But they would not be passed something like the parser stack or have any automatic composition of parsers as arguments.
Plain decoding operations--like ENBIN and DEBIN--were the motivating cases:
>> debin #{FEFFFF} [le + 3]
== 16777214
>> parse #{FEFFFFFEFFFF} [collect [keep debin [le + 3]]]
== [16777214 16777214]
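For clarity, what `[le + 3]` asks for is a 3-byte little-endian unsigned decode. A minimal Python sketch of that arithmetic (the function name here is made up for illustration, not DEBIN's implementation):

```python
def debin_le_unsigned(data: bytes, size: int) -> int:
    """Decode `size` bytes of `data` as a little-endian unsigned integer."""
    if len(data) < size:
        raise ValueError("not enough input")
    return int.from_bytes(data[:size], byteorder="little", signed=False)

print(debin_le_unsigned(b"\xFE\xFF\xFF", 3))  # → 16777214
```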
The idea here was that you could write one version of DEBIN, and it would be able to implicitly pick up the INPUT when used in PARSE.
But because the input is an implicit parameter that all "decoders" get automatically, without extra information it would have to go at either the beginning or the end of the parameter list. Above it's at the beginning, which differs from how DEBIN was originally defined:
>> debin [le + 3] #{FEFFFF} ; original DEBIN design took dialect block first
== 16777214
(Note: I have a post about parameter ordering which questions the series-first model.)
We could say that "decoders" have to manually mention their input parameter somewhere, and position it in the order it would be consumed when used outside of PARSE...which would allow customization of this process. It could default to being the first parameter if not positioned explicitly. Not an idea-killer, in any case.
If All The Input Wasn't Consumed, It Would Error
One idea for calling these decoders on arbitrary input is that if the end of the input was not reached, they would raise an error:
>> debin [le + 3] #{FEFFFF00} ; asking for 3 bytes of decode, passed 4
** Error: DEBIN did not consume all input, request remainder if intentional
Asking for a remainder could prevent the error:
>> [value rest]: debin [le + 3] #{FEFFFF00}
== 16777214
>> rest
== #{00}
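The whole proposed contract — decode, hand back a remainder on request, and error on leftover input otherwise — can be sketched in a few lines of Python (hypothetical names, a sketch of the idea rather than an actual implementation):

```python
def debin_le(data: bytes, size: int):
    """Decode `size` bytes as little-endian unsigned; return (value, rest)."""
    if len(data) < size:
        raise ValueError("not enough input")
    value = int.from_bytes(data[:size], byteorder="little")
    return value, data[size:]

def decode_fully(decoder, data, *args):
    """Call a decoder, erroring unless it consumed all of the input."""
    value, rest = decoder(data, *args)
    if rest:
        raise ValueError("decoder did not consume all input; "
                         "request remainder if intentional")
    return value

# Asking for the remainder avoids the error:
value, rest = debin_le(bytes.fromhex("FEFFFF00"), 3)
# value == 16777214, rest == b"\x00"
```

The same `debin_le` could then serve as the PARSE-side behavior too: PARSE would supply the input implicitly and use the returned remainder as the new parse position.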
So this is kind of where the motivation is. Once you've written the decoder version of DEBIN, you have everything you need to run a DEBIN operation inside or outside of PARSE. So why should you need to write a separate combinator and non-combinator form?
As usual, more thought needed.