Calling Combinators (Decoders?) as Normal Functions

A few times I've talked about the potential of making it possible to call a COMBINATOR function from outside of PARSE.

This is to say that if some PARSE-specific parameter was missing (e.g. the "parse state"), there'd be a mode in the guts of the COMBINATOR mechanic that cooked up something like a temporary parse session, just for the input you passed in.

Would It "Combinate" Parsers For You?

The situations I had in mind weren't really combinators that take parsers as parameters. And now that I look at it, I think that suggests the answer is... no, you probably shouldn't call these kinds of combinators outside of PARSE.

Here's one imagination of calling a combinator like SOME:

>> [value rest]: some "aaaabbb" [repeat 2 "a"]
== "a"

>> value
== "a"

>> rest
== "bbb"

This exposes how SOME is actually not arity-1. Though it takes a "combinated parser" as a parameter, it also takes an INPUT... but that's usually implicit, specialized in by PARSE. When called directly from normal code, it could gather that parameter normally.
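To make that hidden arity concrete, here's a minimal Python sketch (all names hypothetical, not any actual PARSE implementation) of a SOME-like combinator that takes its input explicitly and returns both the value and the remainder:

```python
# Sketch: a combinator that is explicitly arity-2 -- it takes a
# parser AND the input, returning (value, rest) instead of having
# PARSE supply the input implicitly.

def literal(s):
    """Parser matching a fixed string; returns (matched, rest) or None."""
    def parser(inp):
        if inp.startswith(s):
            return s, inp[len(s):]
        return None
    return parser

def repeat(n, parser):
    """Apply parser exactly n times; value is the last match's value."""
    def repeated(inp):
        value = None
        for _ in range(n):
            result = parser(inp)
            if result is None:
                return None
            value, inp = result
        return value, inp
    return repeated

def some(parser, inp):
    """Match parser one or more times; return (last value, remainder)."""
    result = parser(inp)
    if result is None:
        return None
    value, inp = result
    while (result := parser(inp)) is not None:
        value, inp = result
    return value, inp

value, rest = some(repeat(2, literal("a")), "aaaabbb")
# value is "a" (product of the last "a" matched), rest is "bbb"
```

The explicit `inp` parameter here is exactly what PARSE would otherwise specialize in for you.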

It doesn't feel that compelling, since you're getting a synonym for:

parse "aaaabbb" [some repeat 2 "a"]

But also, why would it take that interpretation instead of:

parse "aaaabbb" [some ([repeat 2 "a"])]

One point of view would say the second reading makes more sense, treating the rule as the product of an evaluation, because arguments in a normal function call would presumably be evaluated:

>> [value rest]: some "aaaabbb" append [repeat 2] "a"
== ??? ; infinite loop?

But this would make rule-taking combinators nearly useless.

It Was Suggested For Sharing "Decoding", not "Combinating"

Seeing how SOME isn't a good example for this, maybe the right way to say what I'm trying to say is that there's some category of functions we might call "decoders"... and PARSE would be willing to call these.

They'd fit a standard format regarding things like taking an input series and giving back an amount of progress or an error. But they would not be passed something like the parser stack or have any automatic composition of parsers as arguments.
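A rough Python sketch of that contract (shapes and names hypothetical, simplified to ignore the sign flag): a decoder gets the input series plus its own parameters, and reports either a value along with how far it advanced, or an error. It never sees a parser stack and has nothing composed for it.

```python
# Sketch of the "decoder" contract: input in, (value, progress) out,
# or an error.  A caller like PARSE would advance its position by
# the returned progress; outside PARSE you'd call it directly.

def debin_decoder(inp: bytes, spec: tuple) -> tuple:
    """Decode per an (endianness, size) spec, e.g. ("le", 3)."""
    endian, size = spec
    if len(inp) < size:
        raise ValueError("input too short for requested decode")
    order = "little" if endian == "le" else "big"
    value = int.from_bytes(inp[:size], byteorder=order)
    return value, size  # progress = number of bytes consumed
```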

Plain decoding operations--like ENBIN and DEBIN--were the motivating cases:

>> debin #{FEFFFF} [le + 3]
== 16777214

>> parse #{FEFFFFFEFFFF} [collect [some keep debin [le + 3]]]
== [16777214 16777214]
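The arithmetic can be sanity-checked in any language; in Python, three unsigned little-endian bytes decode as 0xFE + 0xFF*256 + 0xFF*65536:

```python
# #{FEFFFF} as a 3-byte unsigned little-endian integer
data = bytes([0xFE, 0xFF, 0xFF])
value = int.from_bytes(data, byteorder="little", signed=False)
assert value == 0xFE + 0xFF * 256 + 0xFF * 65536 == 16777214
```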

The idea here was that you could write one version of DEBIN, and it would be able to implicitly pick up the INPUT when used in PARSE.

But because the input is an implicit parameter that all "decoders" get automatically, without extra information it would have to go at either the beginning or the end of the parameter list. Above it's at the beginning, which differs from how DEBIN was originally defined:

>> debin [le + 3] #{FEFFFF}  ; original DEBIN design took dialect block first
== 16777214

(Note: I have a post about parameter ordering which questions the series-first model.)

We could say that "decoders" have to mention their input parameter explicitly, positioning it where it would be consumed when used outside of PARSE... which would allow customization of this process. It could default to being the first parameter if not positioned explicitly. Not an idea-killer, in any case.

If All The Input Wasn't Consumed, It Would Error

One idea for calling these decoders on arbitrary input could be that if the end of the input was not reached, it would give an error:

>> debin [le + 3] #{FEFFFF00}  ; asking for 3 bytes of decode, passed 4
** Error: DEBIN did not consume all input, request remainder if intentional

Asking for a remainder could prevent the error:

>> [value rest]: debin [le + 3] #{FEFFFF00}
== 16777214

>> rest
== #{00}

So this is kind of where the motivation is. Once you've written the decoder version of DEBIN, you have everything you need to run a DEBIN operation inside or outside of PARSE. So why should you need to write a separate combinator and non-combinator form?

I Was Thinking About This Now Because of TRY

I'm putting together some thoughts on the question: if TRY is the way you say it's okay for a combinator not to succeed in PARSE, what would the behavior be outside of PARSE?

Up until now we've said that NULL is the reserved result for combinator failure. But I've put forth the idea that NULL might be a legitimate combinator result, and so definitional errors could be used to say that an operation did not meet its requirements.

So what about these "decoders"? Should a decoder run in isolation raise an error if it doesn't work, or should it just return NULL?

It seems clear that at least some operations should raise errors instead of passively returning NULL. If you ask to decode an invalid stream, it should be noisy about that failure.

I believe that some of these errors shouldn't be defused by TRY. For instance, debin #{FEFFFF} [banana + 3]... that reflects an inability to understand what you're even asking. To me this is on par with typos or passing invalid types; they're a sign of downright incorrect code, not the kind of thing that something like ATTEMPT should be able to ignore... much less something TRY should quiet.
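The distinction can be sketched in Python (hypothetical exception classes, not actual Ren-C mechanics): a "definitional" failure that TRY-like handling may defuse, versus a spec error that signals incorrect code and should stay noisy:

```python
# Sketch: soft, defusable failure vs. a hard error about the
# request itself (compare: debin #{FEFFFF} [banana + 3]).

class DecodeFailure(Exception):
    """The input couldn't be decoded -- an expected, defusable failure."""

def debin(spec, inp):
    endian, size = spec
    if endian not in ("le", "be"):
        # Not a failed decode -- the request itself is nonsense.
        raise ValueError(f"unknown endianness: {endian!r}")
    if len(inp) < size:
        raise DecodeFailure("input too short")
    return int.from_bytes(inp[:size], "little" if endian == "le" else "big")

def attempt(thunk):
    """TRY-like helper: defuses only DecodeFailure, not spec errors."""
    try:
        return thunk()
    except DecodeFailure:
        return None

assert attempt(lambda: debin(("le", 3), b"\xfe")) is None  # defused
# attempt(lambda: debin(("banana", 3), b"...")) would propagate ValueError
```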

So...What Should Count As Things TRY Defuses?

At first, the TRY error was specifically tied to the idea of a special error raised when inputs were NULL. This required special code in the FUNC typechecking, because by default a typechecking error does not produce a definitional error that can be trapped with something like EXCEPT.

Now I've expanded the idea: the word was short and useful enough that it might be used for any case where there are no adverse side effects and a function wants to say it simply couldn't do the thing it was asked to do.

Might we say that all definitional errors fit under this category? That you shouldn't do return raise [...] unless the error can be swept under the rug by a TRY? Does this suggest a "value surrogate" for TRY is a generic thing that any error can carry ("if you TRY me, convert to this value")? Or is that just a characteristic of errors that TRY is willing to defuse?

As usual, more thought needed. :face_with_head_bandage: