Enfix Arity 0: A Puzzle From 2017, Now Believed to be Solved!

Here's the kind of question @salotz may find informative, in light of the question of "how can we model the evaluator in a minimal way"...where putting the evaluator on one sheet of paper, like a lambda calculus or a Lisp, is simply not the objective. Maybe this helps illustrate the difference?


In 2017, while trying to formalize some code, I faced a question: how should the evaluator deal with a function that is marked as getting its first argument from the left (i.e. "enfix") if that operation takes no arguments?

R3-Alpha only had arity-2 infix functions, and didn't allow you to make your own. Red lets you make your own, but keeps the rule:

red>> foo: func [x] [print mold x]
== func [x][print mold x]

red>> bar: make op! :foo
*** Script Error: making an op! requires a function with only 2 arguments

But Ren-C took a different tack, trying to generalize the mechanism so that "enfix" functions could take any number of arguments...the enfix property only spoke to how they got their first one. You might make a ternary operator where the condition was on the left and the branches on the right. Or you might make a postfix operator which took a single argument--punctuating an evaluation.
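
For instance, here's a hypothetical arity-1 postfix operator built with the same ENFIXED pattern shown later in this post. The name DOUBLED and the exact transcript are my own sketch of how such an operator would behave, not output taken from the test suite:

>> doubled: enfixed func [x] [x * 2]

>> 1000 + 20 doubled
== 2040  ; the completed left-hand expression (1020) is what DOUBLED receives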

We have some of those today. SO is an interesting variation on ASSERT that just validates its left-hand expression:

>> 2 + 1 = 3 so print "No problem."
No problem.

>> 1020 - 304 = 421 so print "Math is broken"
** Script Error: assertion failure: [#[false] so]
** Where: so console
** Near: [... = 421 so ~~ print "Math is broken"]

But what--if anything--should happen when you make an enfix function which takes no arguments? How is that different from a non-enfix function with no arguments?

I remember where I was when writing down that issue. It was at one location of a coffee chain called "Koffe"--this one in South Palm Springs. And it has sat around as Issue #581 for a long while.

But with the passage of time, I think the answer has become clear: an enfix 0-arity function is simply sequenced into the same evaluator step as the left-hand side.

If you want to see some cool tests that show the nuances, look here. But the issue sums up the general gist:

>> bar: func [] [<bar>]
>> enbar: enfixed func [] [<enbar>]

>> evaluate @var [1020 bar 304]
== [bar 304]  ; one step, did not run bar yet, still pending

; Note: I'm not totally thrilled with the @var skippable method of telling
; EVALUATE where to put the value if you care.  I'm thinking we probably should
; be biting the bullet and figuring out a smarter multiple-return-value idea.

>> var
== 1020

>> evaluate @var [1020 enbar 304]
== [304]  ; one step, ran enbar during that same step despite being arity-0

>> var
== <enbar>
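
To complete the picture on the non-enfix side: feeding the remainder of the first example back in for another step should run BAR then. (This continuation is my own sketch of what the semantics above imply, not output copied from the tests.)

>> evaluate @var [bar 304]
== [304]  ; this step runs bar, since it was not sequenced with the 1020

>> var
== <bar>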

One nuance is how enfix functions whose left-hand argument is marked <skip> will effectively degrade to non-enfix behavior and run in the next step. Without that, the clever DEFAULT operation would not work. I'm open to the idea that there might be a use case for a left-hand-skipping operation that runs in the same step as the left even when it skips. But since I can't think of any--and I know DEFAULT needs the opposite--I'll wait until someone points out a need before getting too concerned about it.
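
For reference, DEFAULT's common shape looks something like this--an enfix operation whose <skip>-able left argument is the SET-WORD!. Treat the transcript as an illustrative sketch rather than verified output:

>> x: 10

>> x: default [20]  ; x already has a value, so the branch doesn't run
== 10

; if x had no value yet, DEFAULT would run the block and assign the result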


These edge cases really shed light on the big picture. I've said time and again that you don't do yourself any favors in language design by throwing yourself softballs. You should be looking for pathological cases, and if you can't solve them, build a rationale for why not--and articulate the shape of the things you can solve.

On Ren-C I've been chipping away at these issues a little at a time, and it feels good to see this old one get closed. I'm pretty sure that people using the evaluator to invent cool things will appreciate that it bends (to the extent coherence will allow it to!).
