Custom Function Generator Pitfalls

I commonly write function generators that put in some boilerplate to make some variables and service routines available to the generated function. But these frequently have weaknesses, and I thought I'd write up an example to illustrate them...and explain some ways to mitigate the problems.

Example: PROMISE Generator With RESOLVE and REJECT

Imagine I want to make a function variation that's like a JavaScript promise, with a RESOLVE and REJECT...which are defined per-promise.

Let's say the first cut of the new generator looks something like this:

promise: func [spec body] [
    body: compose [
        let resolve: lambda [result] [  ; lambdas lack their own return
            some code here
            return whatever  ; intending return of generated FUNC
        ]
        let reject: lambda [error] [
            more code here
            return whatever
        ]
        (spread body)  ; code here should see RESOLVE and REJECT 
    ]
    return func spec body
]

When the frame is created for the new function, it will run this body that's been extended with some boilerplate. But that frame's arguments could share a name with any of the functions used in the bodies of RESOLVE or REJECT. e.g. what if I said foo: promise [code /more] [...]? The implementations of RESOLVE and REJECT would be disrupted, because the words they used would no longer mean what their author intended.
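To make the collision concrete: suppose the "some code here" inside RESOLVE called PRINT (PRINT is just a stand-in here for any function the boilerplate relies on), and a user happened to name an argument PRINT:

foo: promise [value print] [resolve value]

; The generated function's frame now has a PRINT argument.  Since the
; composed-in RESOLVE body runs bound to that frame, its PRINT looks up
; to the argument--whatever the caller passed--rather than the PRINT
; function the generator author meant.
foo 10 "not a function"  ; misbehaves (or errors) inside RESOLVE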

Once you notice this, you might think the solution is to pre-compute more things:

promise: func [spec body] [
    body: compose [
        let resolve: ^(lambda [result] [
            some code here
            return whatever
        ])
        let reject: ^(lambda [error] [
            more code here
            return whatever
        ])
        (spread body)  ; code here should see RESOLVE and REJECT 
    ]
    return func spec body
]

(Note use of ^META group in order to turn the isotopic frame produced by FUNC into a quasi frame, so that under evaluation in the function body it becomes isotopic again. The compose would fail if you tried to compose the isotopic frame in directly.)
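As a standalone sketch of that META mechanic in the console (the renderings shown are approximate):

>> block: compose [x: ^(lambda [n] [n + 1])]
; the composed slot holds a quasiform of the frame, something like
; [x: ~#[frame! [n]]~]

>> do block  ; evaluating the quasiform turns it back into the action isotope

>> x 10
== 11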

That's a bit better in terms of insulating the boilerplate code from stray bindings coming from the user-supplied spec (though there's still the weakness of LET if the user wrote something like promise [let [integer!]] [...], which if you cared you could address by composing :LET in as its literal function value).

But it does too good a job: the COMPOSE runs at PROMISE fabrication time, and so the notion of RETURN used by RESOLVE and REJECT is the return for the PROMISE generator itself... not the produced FUNC as intended. This is true of anything else you need to have picked up from the instance (let's say REJECT was implemented in terms of RESOLVE, or needed some other local).

One way of addressing this would be to slip the instance RETURN in as a parameter, e.g. via specialization of the precomputed code:

promise: func [spec body] [
    body: compose [
        let resolve: specialize ^(lambda [result ret] [
            some code here
            ret whatever
        ]) [ret: :return]
        let reject: specialize ^(lambda [error ret] [
            more code here
            ret whatever
        ]) [ret: :return]
        (spread body)  ; code here should see RESOLVE and REJECT 
    ]
    return func spec body
]

There you've got an added assumed term which can break things, e.g. promise [let [integer!] specialize [block!]] [...] or similar. But at least some code here and more code here are running under the understanding that the PROMISE generator author had of what those implementations meant.

Once you've separated out that which can be precomputed vs. that which can't, there's no need to make the precomputed part every time:

promise-resolve*: lambda [result ret] [
    some code here
    ret whatever
]

promise-reject*: lambda [error ret] [
    more code here
    ret whatever
]

promise: func [spec body] [
    body: compose [
        let resolve: specialize :promise-resolve* [ret: :return]
        let reject: specialize :promise-reject* [ret: :return]
        (spread body) 
    ]
    return func spec body
]

Could Some Kind of COMPILE Operation Help?

Weaknesses due to redefinitions of things like LET and SPECIALIZE make me wonder if situations like this could be helped by an operation that would replace words that look up to functions with references to those functions. The cell can retain the symbol for the word, which can make debugging and errors more tolerable.
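A rough sketch of what such an operation might look like (COMPILE is hypothetical, and this glosses over details like preserving the symbol in the cell and handling isotopic actions):

compile: func [words [block!] template [block!]] [
    return map-each item template [
        case [
            all [word? item, find words item] [get item]  ; hard reference
            block? item [compile words item]  ; recurse into nested blocks
        ] else [item]  ; GROUP!s and other material pass through untouched
    ]
]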

promise: func [spec body] [
    body: compose (compile [let specialize] [
        let resolve: specialize :promise-resolve* [ret: :return]
        let reject: specialize :promise-reject* [ret: :return]
        (spread body) 
    ])
    return func spec body
]

Merging with COMPOSE would be more efficient, and could help cue that you want to avoid compiling the things in GROUP!s. Maybe it could assume you wanted to compile references to actions unless you threw in some kind of escaping:

promise: func [spec body] [
    body: compile [
        let resolve: specialize :promise-resolve* [ret: $ :return]
        let reject: specialize :promise-reject* [ret: $ :return]
        (spread body) 
    ]
    return func spec body
]

I've thought about this kind of thing for a while, but never got around to writing it.

Paranoia Plus Efficiency: Body As GROUP! vs. Spliced

One improvement to this code is to splice the body as a group instead of spreading it itemwise in a block.

To see why this matters, consider something like:

func-with-a-as-one: func [spec body] [
    return func spec compose [
        let a: 1
        (spread body)
    ]
]

Now let's say someone wrote:

>> test: func-with-a-as-one [x] [+ 9, return a + x]

>> test 1000
== 1010  ; not 1001

Accidentally or intentionally, the function was defined as:

func [x] [
   let a: 1
   + 9, return a + x
]

You can avoid this by quarantining the body, using (as group! body) instead of (spread body) in the COMPOSE.

func [x] [
   let a: 1
   (+ 9, return a + x)
]

As an added benefit, the AS alias is cheaper memory-wise than copying the elements in item-wise (though it adds one extra GROUP! evaluation step to the function).
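For illustration (AS makes a new view of the same underlying array, rather than copying its elements):

>> body: [+ 9, return a + x]

>> as group! body
== (+ 9, return a + x)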

Another Loophole: What If RESOLVE/REJECT Are Args?

If you use LET, currently that will override whatever definition is in play. So if someone were to write promise [x y reject] [...] they'd not be able to see the REJECT argument, and wouldn't get an error.
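Spelled out, the generated function in that case would look roughly like:

func [x y reject] [
    let resolve: ...  ; boilerplate
    let reject: ...   ; silently shadows the user's REJECT argument
    user code here    ; sees the LET's REJECT, never the argument
]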

You can force an error by dropping the LETs, and expanding the specification to include definitions.

promise: func [spec body] [
    body: compile [
        resolve: specialize :promise-resolve* [ret: $ :return]
        reject: specialize :promise-reject* [ret: $ :return]
        (as group! body) 
    ]
    return func compose [(spread spec) <local> resolve reject] body
]

So that's just sort of a peek into the effort it would take to make a relatively hygienic function generator. Some things like worrying about taking SPECIALIZE as an argument might be beyond the concerns of the average one-off task. But if you write a bunch of indiscriminate boilerplate using arbitrary words to refer to functions, it's very easy to get bitten when an argument reuses those words.

As a Haskeller, my immediate thought here is: currying!

promise-resolve*: lambda [ret] [
    lambda [result] [
        some code here
        ret whatever
    ]
]

promise-reject*: lambda [ret] [
    lambda [error] [
        more code here
        ret whatever
    ]
]

promise: func [spec body] [
    body: compose [
        resolve: promise-resolve* :return
        reject: promise-reject* :return
        (as group! body) 
    ]
    return func spec body
]

Is there any reason why this wouldn’t work?

EDIT: yes, I’ve tested this and it does seem to work:

>> test*: lambda [ret] [lambda [result] [ret result + 1]]
== ~#[frame! {test*} [ret]]~  ; isotope

>> test: func [spec body] [body: compose [xxx: test* :return (as group! body)] return func spec body]
== ~#[frame! {test} [spec body]]~  ; isotope

>> apply (test [x] [xxx x]) [10]
== 11

For that part in this case, yes, that does work and is an improvement...

I’ve managed to get something even better working (I think):

with-return: func [name spec body] [
    let passthru: do compose/deep [
        lambda [return] [
            lambda (spec) (body)
        ]
    ]
    return spread compose [(setify name) passthru :return]
]

So with this one can do:

promise-add-resolve*: with-return 'resolve [result] [
    some code here
    return whatever
]

promise-add-reject*: with-return 'reject [error] [
    more code here
    return whatever
]

promise: func [spec body] [
    body: compose [
        (promise-add-resolve*)
        (promise-add-reject*)
        (as group! body) 
    ]
    return func spec body
]

To be quite honest, I’m not entirely sure how this works… somehow, that :return seems to correctly refer to the RETURN of the promise, while passthru is bound to the function defined in WITH-RETURN. I still have this horrible feeling that I’m doing something completely wrong, and the small example I tested only worked by accident. (That’s what happened while I was writing this thing, when I forgot the LET and accidentally made PASSTHRU global…!)


Seems good. As you've perhaps absorbed by now, RETURN in Rebol and Red is not smart enough to do this, as it just climbs the stack and returns from the first non-native it finds...meaning a whole lot of things aren't possible.

Historical discussion of "definitional return": Proposed RETURN alternatives (archive.org)

I think a "strict mode" (parallel to JavaScript's) should be the default in modules, where you can't create new module-level declarations from non-module-level "scopes", at least without some special operator. I've called that operator EMERGE in the past.

Today only the special modules of LIB and SYSTEM use a kind of "strict mode", and it catches a lot of errors there.

Not sure what the implications would be for the console... there's a cost when it doesn't act the same as code in a script, but also people have come to expect certain things to work.

do compose/deep [
    lambda [return] [
        lambda (spec) (body)
    ]
]

You can do this more succinctly by moving the COMPOSE:

lambda [return] compose [
    lambda (spec) (body)
]

In the micro-optimization culture of historical Rebol, people would avoid COMPOSE when they could and write something less obvious with REDUCE:

lambda [return] reduce ['lambda spec body]

I myself find that harder to grok, but in interpreted languages these micro-optimizations can sometimes make a non-trivial difference.


Yes, I think something like this is very important.

I don’t know so much… I actually find it slightly easier to read. And it’s a lot more concise, which I like in and of itself.

So that becomes:

with-return: func [name spec body] [
    let passthru: lambda [return] reduce ['lambda spec body]
    return spread compose [(setify name) passthru :return]
]

I like it!

with-return: func [name spec body] [
    let passthru: lambda [return] reduce ['lambda spec body]
    return spread compose [(setify name) passthru :return]
]

This example illustrates a case of the binding contention I'm trying to stress that exists.

This FUNC has a RETURN. And so, that :RETURN used inside the COMPOSE is by default bound to it, i.e. WITH-RETURN's RETURN.

You are then splicing this code into a body where it's supposed to intend the RETURN of the function generated by the promise. And there's clearly a mixture of intent here. You want PASSTHRU to stick regardless of definitions in the place where it's spliced, but you want :RETURN to pick up the new definition.

If you try to institute a rule that bindings "stick" unless overridden, it might suggest you'd need:

 with-return: func [name spec body] [
     let passthru: lambda [return] reduce ['lambda spec body]
     return spread compose [(setify name) passthru (unbind ':return)]
 ]

Perhaps you'd think you could avoid this here by using a lambda (and throwing in the SET-GROUP! in COMPOSE trick, and the even briefer GET-BLOCK! for REDUCE).

with-return: lambda [name spec body] [
    let passthru: lambda [return] :['lambda spec body]
    spread compose [(name): passthru :return]
]

But that :RETURN is still bound (to a dummy library RETURN that says "hey you called RETURN but there's not one in effect").

Today's rule is that FUNC will override the binding for all args and locals (including RETURN) in the body. So if you're considering a mechanic based on "things that have bindings have those bindings stick, and unbound things get bound", it's going to need at least one exception of function args and locals "overbinding" things that already have binding... but then you have to ask when (if ever) the "sticky" concept of binding that doesn't override existing bindings would be applicable.

Basically you have to think about all the various intents. Like what if I did want that :RETURN to refer to WITH-RETURN's RETURN?

Good work going after these non-trivial explorations, in any case! You're the kind of user I've hoped would come along at some point...


Well, yes, that’s what I thought would happen… but when I test it in the REPL, it does somehow seem to work:

>> with-return: func [name spec body] [let passthru: lambda [return] reduce ['lambda spec body] return spread compose [(setify name) passthru :return]]
== ~#[frame! {with-return} [name spec body]]~  ; isotope

>> add-addone*: with-return 'addone [result] [return result + 1]
== ~(addone: passthru :return)~  ; isotope

>> promise-ish: func [spec body] [body: compose [(add-addone*) (as group! body)] return func spec body]
== ~#[frame! {promise-ish} [spec body]]~  ; isotope

>> instantiated: promise-ish [x] [print "got to 1" addone x print "got to 2"]
== ~#[frame! {instantiated} [x]]~  ; isotope

>> instantiated 10
got to 1
== 11

So what’s going on here? How does :return get bound to promise-ish’s RETURN, even though it should by rights be bound to with-return’s?

Well, not quite: my idea was that all words are unbound by default, and inherit their bindings from the environment of the containing block. So really you’d want to do almost the opposite:

 with-return: func [name spec body] [
     let passthru: lambda [return] reduce ['lambda spec body]
     return spread compose unbind [(setify name) (bind-to-current 'passthru) :return]
 ]

(Where bind-to-current is a hypothetical function which binds the given word to the currently active environment. You could probably implement it using bind plus a new native function get-current-environment.)
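For concreteness, a sketch of how those two hypotheticals might fit together (both names, and the argument order, are guesses):

bind-to-current: lambda [word [word!]] [
    bind get-current-environment word
]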

Conceptually, the unbind here expresses the intent: ‘these words should not be interpreted in the currently active environment, they should be interpreted wherever they land up’. This code makes an exception for passthru, which individually gets bound to its current environment.

It’s probably also helpful to step through the mechanics of that last line in more detail:

  • The block [(setify name) (bind-to-current 'passthru) :return] is created with a binding to the currently active environment. (It’s important to do this by default, since otherwise returned blocks wouldn’t retain the bindings they were created with.)
  • unbind removes that default binding of the block. The words within it must now be understood in the environment of the next highest enclosing block. (Which in this case happens to be the same environment.)
  • compose runs the groups:
    • setify name: produces an unbound SET-WORD! from the given name
    • bind-to-current 'passthru: produces a WORD! passthru, bound to the currently active environment (i.e. the environment of with-return’s body).
    • :return is not a group, so it remains unchanged (i.e. an unbound GET-WORD!)
  • spread turns it into an isotopic GROUP!, i.e. a splice. I haven’t yet worked out the interaction between splices and this kind of binding, but my hunch is that it should rebind the splice’s environment to each element of the splice when inserted into another block. In this case, it’s passed an unbound block, so that would be a no-op.
  • The unbound splice gets returned.

Then you would rebind that one too:

 with-return: func [name spec body] [
     let passthru: lambda [return] reduce ['lambda spec body]
     return spread compose unbind [(setify name) (bind-to-current 'passthru) (bind-to-current :return)]
 ]

Of course, this would get unwieldy if you wanted to bind nearly all words, except for one or two. But that situation is better handled by the caller anyway, since the caller knows which words it needs to rebind.

Essentially, there are four situations:

  • If the callee wants to maintain control over all the words in a block: this is the default, since the block is already bound to the current environment. (The caller might still override things.)
  • If the callee wants the caller to control the words in the block: unbind, so the words are interpreted in whichever context they might be passed to. (The caller might still rebind certain words it wants to stay the same.)
  • If the caller wants the callee to control the words in the block: do nothing, this is the default. (The caller might already have relinquished control over the block using unbind.)
  • If the caller wants to override certain words: rebind the block to a new environment, which contains the new meanings of the desired words. (The callee might already have bound individual words so this doesn’t rebind them.)

Plus, of course, you can still do a deep traverse of the code and bind every word individually, if that’s really what you want to do. There’s just not much need for it in this model.

As I mention, the current rule for binding of arguments and locals in FUNC (including RETURN) is to override any existing bindings in the body at the time the function is created.

So when I've suggested thinking about adding an UNBIND, that's just playing along with the proposed idea that bindings would stick by default--a concept I did mention in "Rebol and Scopes..."

There I also put forth the idea that code would start unbound, with the binding flowing along from the top. We are talking about similar premises--but the devil's in the details. Glad to be talking through them.

COMPOSE is receiving just one block argument, and here we're saying that block is unbound. Inside the implementation of COMPOSE, where is it getting environment information from? Are you suggesting that when a function like COMPOSE is called, it receives an additional environment parameter of whatever its containing block had in effect at the callsite? Or is UNBIND doing something strange to tunnel the binding anyway, somehow?

This notion of "currently active environment" is an even newer beast than letting blocks carry along some kind of scope with them.

For this to do any good, COMPOSE has to apply that environment to any evaluated groups. Otherwise it doesn't know what SETIFY is, or NAME is, or anything else. In which case, BIND-TO-CURRENT is superfluous, right? It could just be ('passthru).

Then if you didn't put ('passthru) in a group, would it have been assumed that it should be left unbound? Is it only GROUP!s that have these applications of the secret slipstreamed environment? If not, why not just write PASSTHRU?

  • setify name: produces an unbound SET-WORD! from the given name

When thinking along these lines, I've generally felt that something like SETIFY NAME should preserve the binding as it's easy enough to SETIFY UNBIND NAME (or UNBIND SETIFY NAME) if you want it unbound. It's harder to get the original binding back if you want it.

  • unbind removes that default binding of the block. The words within it must now be understood in the environment of the next highest enclosing block. (Which in this case happens to be the same environment.)

If your goal is just to flip off a block's outermost "my bind is complete" bit, it seems it might be better to have an operation that does only that... maybe call it BINDABLE.

with-return: func [name spec body] [
    let passthru: lambda [return] reduce ['lambda spec body]
    return spread bindable compose [
        (unbind name): passthru (unbind ':return)
    ]
]

Hmm, I think I see… does this mean that with-return would suddenly stop working if the body of promise-ish were to define a variable passthru?

Indeed, there would have to be a notion of the ‘currently active environment’ in which unbound words are looked up. Apologies for not making that clear.

That being said, I hesitate to call it ‘new’ when it’s already been used in nearly every other programming language in existence. Besides, surely specifiers require the same kind of concept already?

Indeed… but it’s quite sufficient for COMPOSE to evaluate the groups within that environment. It doesn’t have to rebind anything in the process. So this:

…isn’t correct, the way I’m thinking about it: ('passthru) would still construct an unbound word, since all words would start out unbound, and continue that way until explicitly bound to something. If you want the word to be bound individually to an environment (rather than inheriting the same environment it’s evaluated in), you have to bind it yourself.

(There’s an interesting duality here, now that I think about it: all words start out unbound but can be bound to environments, whereas all blocks start out bound but can have their bindings removed.)

Yeah, you’re right. I think I had a mental blind spot there, where I thought that a quoted name can’t take any bindings at all. (Yet more Lispiness creeping into my thoughts, I suspect…!) So indeed it would need to be setify unbind name.

I don’t see how this is any different to my UNBIND. In the text you quoted, my meaning was that removing the binding of a block requires its elements to be interpreted in some other context. (Because their containing block no longer has a bound environment in which they can be looked up!)

To be clear, let me describe how I see this being evaluated:

  • The block is bound to the current environment when created.
  • COMPOSE evaluates the groups in that environment:
    • (unbind name): looks at name, UNBINDs it and SETIFYs it, making a SET-WORD! which is not bound to any environment in particular (and thus will be looked up wherever the splice ends up going).
    • passthru is a word, which like all newly created words is unbound, and it’s not in a GROUP! so it stays the same.
    • (unbind ':return) is the same story as unbind name: it creates a GET-WORD! which is not bound to any environment.
  • The result is a block in which all three words are unbound. You might as well have written [name: passthru :return] — that would have done the same thing.
  • Then BINDABLE removes the binding from that block.
  • Eventually that block gets spliced into the promise, and it all blows up when evaluated because passthru isn’t bound to anything and the current environment doesn’t have any definition for it either.

Now that I think of it, there’s an interesting wrinkle with COMPOSE (and REDUCE, etc.) which I hadn’t fully considered:

inner: func [] [
    local: 5
    return [(local)]
]

outer: func [] [
    local: 10
    return compose inner
]

In my model, how would outer evaluate? Firstly, local would be created in outer’s environment. Then, inner is called. It would create local in its own environment, then create the BLOCK! [(local)]. This BLOCK! would be bound to the current environment, whereas the WORD! local within it starts out unbound. (I suspect that GROUP!s would have to start out unbound, too.) This returned BLOCK! is then composed… but which environment is the GROUP! evaluated in? To get the semantics we want, compose would have to enter the environment bound to the BLOCK!, so that local is looked up in inner’s environment, not outer’s. To evaluate it in outer’s environment, we would have to get rid of the binding to inner’s environment by running compose unbind inner.

Incidentally, func itself needs to do something similar, though more subtle. Clearly, each function invocation must create a new environment to run itself in. Equally clearly, the parent of that environment needs to be set correctly to ensure that the body can access global variables. But… the function could be called from anywhere, so how does func find the environment to set as parent?

The answer is quite obvious: func should use the environment bound to the BLOCK! which is its body! When the function is defined at the top level, that BLOCK! gets created in the top-level scope, and bound to the same. If it’s a nested function, same idea — the BLOCK! ends up bound to the current environment, and the function will create a new scope descending from that environment.
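Under that rule, a nested function works out like this (a sketch of the described model, not something I’ve tested):

outer: func [x] [
    ; INNER's body block is created while OUTER runs, so it's bound to
    ; OUTER's environment; each invocation of INNER then chains a frame
    ; containing Y onto that environment.
    let inner: func [y] [return x + y]
    return inner 10
]

outer 1  ; would give 11 under this model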

Yes.

Discussion moved to new topic:

Functions That Capture the Current "Evaluation Environment"

Overall this was very useful to see worked through!

The original point I started the thread to make is now made more convincingly in "What Dialects Need From Binding".

I don't know about "obvious", but...

...do take note that's how I described it, walking through the steps in "Rebol and Scopes: Why Not".

"The virtual bind chain starts out with a module context that has global, x, and foo in it. This is all there is to stick on the BLOCK!s that gets passed to FUNC. So the spec and body are blocks with a module as the specifier."

"FUNC stows the body block away in an ACTION! that it generates. Later when it gets invoked, it creates a FRAME! with return and x in it...and puts that in a chain with the module context. So upon entry to the function body, that body is being executed with a specifier that looks in the frame first (would find that x) and then in the module second (would find global and foo). This compound specifier is what the evaluator state initially has for that body block."

So the concept of quotes shielding material from binding under evaluation does seem to be a winner! Good instincts, there.

(Though as I mentioned, that too has come up before. See "Breaking MAKE OBJECT! Into Component Operations". I wasn't "all in" on it yet, due to the impacts it would have on existing programming style. But as this has unfolded, breaking the existing expectations seems inevitable if binding is going to become more hygienic.)

But I think when all is taken into consideration here, you have to write it like:

return spread compose [
    (in [] 'let) (name): (in [] 'passthru) :return
]

Presumed rule is that the block COMPOSE produces at the end will have the same environment its argument came in with.

GROUP!s are evaluated using the environment of the outer block, while other material is left as-is.

Using LET means you don't have to UNBIND the name you get in, because we assume LET ignores the binding. But generally speaking, you probably can't trust that callers don't pass you bound words, even if that binding is not meaningful for the usage.

You want PASSTHRU with its definition in the function. Or you can compose in PASSTHRU as a FRAME! (I'd like to make that as friendly as possible for debugging... it does cache the word of the name it was fetched from, but still, it will never be quite as friendly as a word).

I've added that for LET as well, which is presumably needed here... and to be safe about the calling environment you could protect it this way, or trust it's not redefined and just say LET and let it be unbound, to pick up the binding in the spliced environment.

You probably don't need spread bindable because it probably ignores the outer binding on splices.
The implications of losing the binding on things you SPREAD are not entirely clear.

As these patterns become more understood, dialect tools should be able to help make the more common cases briefer. A complement to COMPOSE which assumes you want hardened references to the existing environment with relatively few "escaped" slots--what I was calling "COMPILE"--is probably useful.


What I was thinking here was that SPREAD could take the binding of the block and reapply it to each individual element of the block. But the interaction of splices with binding is something I haven’t fully figured out yet.


I have hit the first case in practice where SPREAD not applying the binding is a problem. It's essentially like this:

spaced collect [
    ...
    keep spread ["HTTP/1.1" response.status status-codes.(response.status)]
    ...
]

The SPACED operation does evaluation (strings are inert).

Reducing in advance could yield problems with double-reduction (e.g. if something evaluated to a WORD!, SPACED would then see the word and try to evaluate it again vs. consider it for its spelling).

>> term: 'something

>> spaced collect [keep spread reduce ["The term is:" term]]
** Error: something word is not bound to a context

Seems unavoidable. If you don't want it to happen, you'll have to unbind before you spread (or pass it quoted material so it doesn't get bound).

But an odd aspect of this is that it brings back something that was previously being done everywhere... e.g. structural operations were concretizing binding as they went. Most all of those structural operations have been changed to ignore specifiers, so this is an anomaly. It sort of straddles a line of being "binding preservation", I guess...

So this presents a bit of a conundrum, because if you let it apply to all elements then it applies to QUOTED!s too, and you may have explicitly put that quote on to avoid getting a binding.

If you follow the evaluator's rules for binding then it's kind of removing the agnosticism you would expect SPREAD to have about what you plan to do with the code.

Hence I think... probably it's a bad idea for SPREAD to do this.

One way this particular case could avoid breaking would be for COLLECT to give back a block which has the same specifier as its input block by default, instead of unbound? :-/

I don't think I like that (though I'm not sure exactly what makes it different from COMPOSE in this respect). I'd probably prefer:

spaced in [] collect [
    ...
    keep spread ["HTTP/1.1" response.status status-codes.(response.status)]
    ...
]

That might be more palatable than having COLLECT producing blocks with attached environments.

It really is looking like those who build code programmatically need to be actively involved in binding...

...it feels a little sad to see examples like this becoming "more complicated". But as I've said before, the reason that Rebol examples seemed to "just work" at times is because nothing tricky was being tried. Once you do try something beyond the most basic of cases, you'll get bit by the assumptions.

At least if you find yourself repeating any pattern enough times you can generate an abstraction for that pattern. Automatic behaviors probably hurt more than they help.