Parameter Order in Rebol

With the advent of Point-Free expression, things that were wordy using SPECIALIZE are now succinct. You don't have to name the parameters...which saves typing and also keeps you from having to look them up:

next: specialize 'skip [offset: 1]
;
; ...becomes
;
next: (<- skip _ 1)
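
Either way, the resulting NEXT behaves the way you'd expect on a block (just a quick sanity-check sketch):

>> next [a b c]
== [b c]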

Though that placeholder is a little bit unsatisfying, which gets one looking at questions like "in Rebol, why isn't the offset to skip by the first parameter?" In the narrow case of this specialization, putting it first would be a bit briefer and seem more natural...

next: (<- skip 1)

And to play devil's advocate, if Rebol is to be English-inspired instead of traditional OOP inspired...wouldn't you say:

"Skip 10 pages ahead"

Instead of the more awkward:

"Skip pages ahead by 10"

It's almost like the underscore in the partial specialization is playing the role of "by". But, anyway--is there some kind of universal rule to guide such decisions?

Most-or-Least Important Thing First

Since this looks very Haskell-y, here's a good summary paragraph from a StackOverflow answer on why Haskell users would favor (<- skip 1) over (<- skip _ 1):

"It's common practice in Haskell to order function parameters so that parameters which "configure" an operation come first, and the "main thing being operated on" comes last. This is often counter intuitive coming from other languages, since it tends to mean you end up passing the "least important" information first. It's especially jarring coming from OO where the "main" argument is usually the object on which the method is being invoked, occurring so early in in the call that it's out of the parameter list entirely!"

One thing about the least important parameter is that it's often briefer to express. Consider the example we hit with DELIMIT, back when the data to act on came first:

delimit [
    "here" "is" "my"
    long block of code stuff
    "and we go on for a while"
] ","

By the time you get to the comma, you might have forgotten what you were doing. It's pretty rare for the thing you're delimiting with to come from a long and complex expression, while the thing you're operating on may well be a giant expression (like a COLLECT).

So @johnk and I thought it seemed better to say:

delimit "," [
    "here" "is" "my"
    long block of code stuff
    "and we go on for a while"
]
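
The benefit is clearest when the block is itself produced by a bigger expression, such as a COLLECT (just a sketch):

delimit "," collect [
    keep "first thing"
    keep "second thing"
]

The short, fixed delimiter stays in view at the start, and the potentially long expression trails off at the end.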

So that was changed...and as a consequence we get natural-looking specializations now like:

spaced: (<- delimit space)
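
Just to show that in action (a quick sketch):

>> spaced ["Hello" "World"]
== "Hello World"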

What does this mean for old-school institutions like SKIP? I don't know. But I do know it's getting faster and clearer to bend the system to anything you want it to be...


I think it has been mentioned somewhere before:

If refinements are their own parameters, couldn't parameters also be their own refinements?

This would mean you could change the order of parameters at the callsite.

It feels a little weird that you would have to add a parameter as a refinement to the call in order to move it to the end.

>> f: func [a b] [print spaced [a b]]

>> f 1 2
1 2

>> f/a 1 2
2 1

There's a little bit of a mechanical reason why this is hard, but yes, I did say... I intend to make this happen... !

I Formulated This Question

If you were to look at it with fresh eyes, as someone who had never programmed before (but spoke English), how would you perceive these different orderings of parameters to some array mutation operations... would one seem preferable over the other?

>> append 'd [a b c]
== [a b c d]

>> append [a b c] 'd
== [a b c d]

>> insert 'a [b c d]
== [a b c d]

>> insert [b c d] 'a
== [a b c d]

Claude.AI Says...

Let me analyze these parameter orderings from a fresh, beginner's perspective.

The first thing I notice is there's a subtle but important pattern in these operations. If we look at the append and insert examples, there are two variations:

  1. append 'd [a b c] and insert 'a [b c d]
  2. append [a b c] 'd and insert [b c d] 'a

From a newcomer's perspective, I'd argue that the first set (1) feels more intuitive and readable. Here's why:

  • The target collection (the array/list) comes after the element being added
  • This reads more like natural language: "append 'd to [a b c]"
  • It feels like you're specifying "what" first (the element), then "where" second (the collection)

The alternative ordering (2) requires a bit more mental gymnastics. When you say append [a b c] 'd, you're essentially saying "append this whole list to this single item", which feels counterintuitive.

The same logic applies to the insert operations. insert 'a [b c d] reads more naturally as "insert 'a into [b c d]" compared to insert [b c d] 'a.

So if I were completely new to programming, I would find the first set of orderings (append 'd [a b c] and insert 'a [b c d]) more:

  • Readable
  • Predictable
  • Aligned with how I might describe the operation in spoken language

This suggests the design prioritizes clarity and cognitive ease for the programmer, which is crucial when learning a new programming language or concept.


I'm wondering: if we didn't have our preconceptions, would it obviously be the case that the reverse order is more natural...or not? :face_with_diagonal_mouth:

It is true that it takes some gymnastics to write it out:

append [a b c] 'd
=>
append to [a b c] the value 'd

vs.

append 'd [a b c]
=>
append 'd to [a b c]

But along with that, this idea of the "less important" (configuring) parameters coming first has really gotten into my head...the reverse-of-OOP argument from Haskell. I've now made it possible for generics to dispatch on any parameter, so technically it's no longer an issue which position gets chosen for dispatch.


In Haskell, a lot of collection libraries define two operations, such as these ones from text:

cons :: Char -> Text -> Text
snoc :: Text -> Char -> Text

(The name snoc comes from being the reverse of cons… geddit?)

So really this parameter order is so intuitive that it overrides considerations of ‘less/more important’. If someone wants to specify the second parameter first they can always write flip cons or flip snoc.
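
(In Ren-C terms, such a flip would just be a small wrapper function... a sketch, using a hypothetical name:)

flip-append: func [value series] [append series value]

>> flip-append 'd [a b c]
== [a b c d]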

Some thoughts here

It's really hard, though, to see this systemically with fresh eyes.

Today, for instance, REPLACE takes the thing you're doing the replacement in first:

>> replace [a b b a] 'b 'c
== [a c c a]

But if this philosophy were extended that would become either of:

>> replace 'c 'b [a b b a]
== [a c c a]

>> replace 'b 'c [a b b a]
== [a c c a]

It's hard to say which makes more sense. If the "how you would write it out in English" argument comes up, it seems briefest as:

replace 'b <with> 'c <in> [a b b a]

I'm not sure how that would hammer out in the other case:

replace 'c <wherever you find> 'b <in> [a b b a]

But once you move away from the "obvious-seeming" cases, the whole thing stops seeming obvious. There's a lot of inertia behind the historical choices...this is tough to reason about.

I think that considerations of parameter order in Haskell go more or less like this:

  1. If there’s an obviously intuitive ordering, use that.
  2. Otherwise, if there’s an ordering which simplifies the definition, use that.
  3. Otherwise, if there’s no other deciding factor, order them according to what is most useful for partial application.

I think to some extent these criteria are less helpful for Ren-C, because it doesn’t make as much use of eta-reduction, currying or partial application. But maybe there’s some other part of the language which is affected by parameter order. (Refinements? Not sure…)

Wanting to move refinements is actually a pretty big thinking point at the moment, if you saw this:

Sure, but that isn’t relevant to the order of the mandatory parameters, which is what you’re talking about here… right?