Semantics of Predicates

Predicates are IMNSHO an extremely clever notational trick, taking advantage of the inert TUPLE! forms that lead with BLANK!. It's already out there for a few functions, but now I'm going to add it to ANY:

>> any .not.even? [2 4 6 7 10]
== 7

This hinges on the idea that ANY can be confident you weren't trying to pass a value to act as the BLOCK! to be ANY'd, because .NOT.EVEN? is otherwise inert:

>> .not.even?
== .not.even?  ; inert (i.e. it evaluates to itself)

>> to block! .not.even?
== [_ not even?]  ; BLANK! is inert, and inert-headed TUPLE! are also inert

>> 3.also.inert
== 3.also.inert  ; Same rule would apply for things like INTEGER!

So a skippable callsite parameter can pick it up, and run it on each value as the test (vs. the default test just for logical trueness).

But...What Does it Mean?

In my ANY example, the predicate is determining which values "pass" or "fail" consideration.

But notice we return the input to that process, not the result of the test. If we'd returned the result, we'd have gotten the result of the NOT EVEN? call, i.e. a LOGIC!:

>> any .not.even? [2 4 6 7 10]
== #[true]  ; hypothetical: if the predicate's result were returned instead

That's not very useful in this case. Though we could have said you're supposed to pass in some kind of MATCH function, which serves as both the test -and- the result.

>> any .match.tag! [1 "two" <three> #four]
== <three>  ; the result of `match tag! <three>`

But given that ANY is returning a single result derived from its contained items, is it really necessary to fold a potential transformation in with the act of picking? Can't you just apply the transformation after you've picked?

It seems that the best role of the predicate with ANY is to focus on picking, and have the understanding that its result will be either one of the items from the container or NULL.
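To make that "picking" contract concrete, here's a rough sketch in Python (the name `any_pred` and the whole function are my own analogy, not anything in Ren-C):

```python
def any_pred(pred, items):
    """Analogy for ANY with a predicate: pred only decides which item
    "passes"; the item itself (not pred's result) is returned, or
    None when nothing passes."""
    for item in items:
        if pred(item):
            return item  # return the input item, not the test result
    return None

print(any_pred(lambda x: x % 2 != 0, [2, 4, 6, 7, 10]))  # 7
print(any_pred(lambda x: x > 100, [2, 4, 6]))            # None
```

Returning the item (or None) keeps the transformation step out of ANY entirely, which is the point being argued above.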

But Wait--Generally Speaking, it's the REDUCED Item...

In the simple example I show, all the items in the ANY are inert. But they don't have to be.

This raises a question I've wondered about for a long time: What if you want to do an ANY operation on a block of stuff, but you don't want to reduce it?

In Ren-C all items have a quoted form, so you could MAP-EACH the block to a quoted version.

>> block: [1 + 2]

>> q-block: map-each item block [quote item]
== ['1 '+ '2]

>> any .word? q-block
== +

Seems a bit roundabout. But, at least it's possible.

But this might suggest an alternate axis for parameterizing ANY. What if instead of changing the test, you changed the evaluator, or took out the behavior of the evaluator altogether?

>> any .identity [1 + 2]
== +  ; or something like this

I Just Wanted To Introduce The Questions...

I don't know if we want to bend ANY all out of shape to get it to work on inert data just as well as evaluated data. Maybe that's a job for another function?

But this is shaping up to be an important design issue, and it's one that every operation is going to have to deal with.

Something odd occurred to me about this idea of "before-or-after" results, realizing that even IF could have a predicate. That might seem useless, but conditionals pass the condition's result to the branch... what if you could split your test out from what the branch receives?

>> value: 1020

>> if even? value (x -> [print ["x is" x]])
x is #[true]  ; IF received result of (EVEN? VALUE) as single argument

>> if .even? value (x -> [print ["x is" x]])
x is 1020  ; IF got VALUE but ran EVEN? on it, then still had VALUE

Whether you think IF needs such a feature or not, the decision of whether the branch gets the result of the predicate or the value that was tested is something that has to be answered for things like CASE.

case/all .not.even? [
    4 [print "This won't print"]
    7 [print "This will print"]
    9 (x -> [print ["x is" x]])
]

It seems pretty clear that you'd rather have x be 9, and not #[true].
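As a rough analogy of that choice in Python (`case_all` here is my own sketch, not Ren-C's CASE): each branch that takes an argument receives the tested value itself, not the predicate's LOGIC! result:

```python
def case_all(pred, pairs):
    """Sketch of CASE/ALL with a predicate: run every branch whose
    condition value passes pred, handing the branch the value itself
    rather than the (boolean) test result."""
    for value, branch in pairs:
        if pred(value):
            branch(value)

seen = []
case_all(lambda x: x % 2 != 0, [
    (4, seen.append),  # skipped: 4 fails the "not even?" test
    (7, seen.append),  # runs, receives 7
    (9, seen.append),  # runs, receives 9 (not True)
])
print(seen)  # [7, 9]
```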

Anyway, I think this is all very cool...and I've managed to fiddle things around so that predicates are much faster. They no longer generate functions, they actually just splice the TUPLE! array directly into the evaluator feed. It's really neat stuff, so I hope people are looking forward to applying it.


:clap: Very impressive. The bag of tricks keeps expanding-- there's a whole 'nother level of Rebol to learn.

Where this is aiming is to pull together a model where you can really combine all the techniques and have it work.

For instance, with stackless:

 g: generator [
    yield 1, yield 2, yield 3  ; aren't COMMA!s great?
 ]

 case/all .equal?.g [
      1 [print "matched!"]  ; acts as if clause was EQUAL? G 1
      2 [print "this matched too!"]  ; EQUAL? G 2
      3 [print "this matched as well!"]  ; EQUAL? G 3
 ]

You start to raise all kinds of questions, however. Even the above has an interesting hidden issue... the generator is not finished. It has yielded three values, but it's sitting at the end of YIELD 3 and is waiting to be called a fourth time so it can give its NULL result. Until it does, that generator will keep its state hanging around forever.

The space taken up by the generator isn't much to be concerned about. More concerning would be if it were in mid-loop over something, and held a lock on that something indefinitely.
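Python's generators have the same lifetime wrinkle, which may help picture it (Python doesn't actually lock the underlying list the way Ren-C's FOR-EACH does, but the suspended frame is held in just the same way):

```python
import inspect

def gen(data):
    for item in data:
        yield item

data = [1, 2, 3]
g = gen(data)
assert [next(g), next(g), next(g)] == [1, 2, 3]

# All three values are out, but the generator is NOT finished: it is
# suspended after the last yield, holding its frame (and its iteration
# state over `data`) until a fourth call raises StopIteration.
assert inspect.getgeneratorstate(g) == "GEN_SUSPENDED"

g.close()  # explicitly releasing it drops that state early
assert inspect.getgeneratorstate(g) == "GEN_CLOSED"
```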

 data: [1 2 3]

 g: generator [
    for-each item data [yield item]
 ]

 case/all .equal?.g [
      1 [print "matched!"]  ; acts as if clause was EQUAL? G 1
      2 [print "this matched too!"]
      3 [print "this matched as well!"]
 ]

append data [4 5 6]  ; !!! g has DATA locked by the FOR-EACH

Worrying over this is kind of where I'm at.

One thing that would make life somewhat easier would be if these locks pointed back to the frame locations that locked them. Then you could navigate from the error you got on not being able to append to DATA to find the loop in the generator, and at least know where the lock is.

Still, it feels to me that a language which isn't using scope at all loses one of the big tools for cleaning things up.

I wonder what the opportunities for scoping are. For example:

foo: func [value] [
   local o: make object! [x: value]
   print ["inside function, o/x is" o/x]
   return o
]

>> o-ret: foo 10
inside function, o/x is 10

>> o-ret/x
** Error: OBJECT! was freed due to being out of scope (see LOCAL)

bar: func [value] [
   let o: make object! [x: value]
   print ["inside function, o/x is" o/x]
   return o
]

>> o-ret: bar 10
inside function, o/x is 10

>> o-ret/x
== 10

So maybe it could be a mixture. I don't know, just thinking out loud here.


While cool...there are a few sticking points to be aware of that I'm not quite sure what to do about yet. I'll mention a few.


I'd wanted to make it so that you could say things like:

>> any .(<- match _ 10) [tag! integer! block!]
== #[datatype! integer!]

This funny-looking concept is letting you do a pointfree specialization that fixes the value at 10, and asks if the ANY finds anything that plugs into the type slot to fit it. So the datatypes are getting plugged in where the BLANK! is.
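A loose Python rendering of that pointfree idea (`match`, `match_10`, and `find_first` are all hypothetical names for illustration):

```python
def match(type_, value):
    """Like MATCH: return value if it fits the type, else None."""
    return value if isinstance(value, type_) else None

# Fix the value at 10, leaving the type slot open: roughly what
# .(<- match _ 10) expresses with its BLANK! placeholder.
match_10 = lambda t: match(t, 10)

def find_first(pred, items):
    """Pick the first item the predicate passes, else None."""
    return next((x for x in items if pred(x) is not None), None)

types = [str, int, list]
print(find_first(match_10, types))  # <class 'int'>
```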

Now that the predicate mechanism is a direct splice, that doesn't execute the function without a reevaluate. This is kind of like how (func [x] [print [x]]) 10 won't print anything. The function definition just gets dropped on the floor, and then the 10 gets dropped on the floor. You need to say reeval (func [x] [print [x]]) 10 to get it to work.

When that's applied to predicates, you get something less satisfying:

>> any .reeval.(<- match _ 10) [tag! integer! block!]
== #[datatype! integer!]

So I'm leaning toward saying that predicates are smart enough to assume that if the first thing is a GROUP! it should be reevaluated. There's not a lot of point to running code in a GROUP! that generates a value and then tossing it.

Predicates and Refinements

Because PATH!s can contain TUPLE! and not vice versa, there's an unambiguous meaning that .foo/bar.baz is a PATH! with a tuple on the left.

That would mean if you splice this into the execution stream, you get [.foo bar.baz], and not [foo/bar baz]

Predicates won't accept these, since they're PATH!s; it would just error and say that's not a TUPLE!. So I think these cases mean falling back on passing in a function.

With the above tweak on actions, you'd still have .(<- foo/bar baz) as an option which is not that bad.

Also, you could use APPLY...which would let you still put it at the front:

apply :whatever [/predicate (<- foo/bar baz), arg1 arg2 /other-refinement]

Or not.

apply :whatever [arg1 arg2 /other-refinement
                 /predicate (<- foo/bar baz)]

Enfix Functions and Positioning

Originally, I'd thought it would be interesting to put predicates between arguments, like:

 switch x .> [
     10 [print "X is Greater than 10"]
 ]

But whether that makes sense depends on whether you're using enfix or not:

switch .greater? x [
    10 [print "X is Greater than 10"]
]

The parameter isn't designed to be picked up from more than one spot, and the enfix version looks a bit "weirder", so I'd lean to the prefix position.

Multiple Parameter Ordering

Predicates that take multiple parameters (as in SWITCH) are a bit hard to keep straight.

One thing that seems really useful to be able to do in a switch is to MATCH on criteria, but you'd generally put the types in the switch block and switch on the value. That's not what MATCH's parameter ordering gives you:

 switch .match integer! [
      10 [print "this is less common to need..."]
 ]

Contrast that with the parameter ordering for PARSE, which makes more sense. Though it runs up against SWITCH not wanting to allow BLOCK!s for anything but code, so you currently have to quote the blocks:

 switch .parse data [
     '[integer! integer!] [print "Your data is two integers"]
     '[block! path!] [print "Your data is a block and a path"]
 ]

Anyway, my point is just that this is a bigger design space than you can shake a stick at. It's pretty overwhelming, so it'll probably just kind of phase in one piece at a time.