Why Isn't LOGIC! (e.g. the Result of 10 > 20) Storable in a BLOCK!?

Rebol made an unusual choice in deciding that all things that "look like" words would be words, and reassignable. So true and false followed the general rules of words... including the rule that all words are truthy.

>> code: [flag: false]

>> second code
== false

>> if second code [print "the word false is truthy"]
the word false is truthy

There was a LOGIC! type, and it could be made via the #[true] and #[false] notation:

rebol2>> code: [flag: #[false]]

rebol2>> if not second code [print "the literal #[false] is falsey"]
the literal #[false] is falsey

So the default definitions are true: #[true] and false: #[false]. But the rendering would conflate the LOGIC! value with the word, despite its not being a word:

rebol2>> code: [flag: #[false]]

rebol2>> code
== [flag: false]

rebol2>> type? second code
== logic!

rebol2>> true
== true

rebol2>> word? true
== false

There was some puzzling over what a better notation for LOGIC! literals might be. Considerations included things like $true and $false, among others.

I wanted to see $word $(gr o up) $[bl o ck] $tu.p.le $pa/th become additional pieces in the box of parts... so sacrificing $ for this wasn't appealing to me.

Hence for a long time, Ren-C just rendered them as #[true] and #[false].

Rebol/Red's Bad Rendering Reveals a Subliminal Truth

As it turns out, in a lot of places where you're building up structures, you don't want an ugly literal (however it looks). For a lot of scenarios you want to reconstitute the word.

When isotopes came on the scene, they afforded the interesting choice of saying that the logic-reactive ~true~ and ~false~ antiforms couldn't be put into blocks... and would have to be triaged.

>> false
== ~false~  ; anti

>> append [flag:] false
** Error

>> append [flag:] meta false
== [flag: ~false~]  ; evaluates to the right thing under DO

>> append [flag:] logic-to-word false
== [flag: false]

All BLOCK! Items Truthy, Out-of-Bounds NULL

This gave another benefit: the null returned from out-of-bounds array access becomes the unique falsey result in various enumerations. For example:

>> block: [a b ~false~ c]

>> while [value: try take block] [print mold value]
a
b
~false~
c

Or:

>> block: [a b ~false~ c]

>> third block
== ~false~

>> if third block [print "There's a third element in block"]
There's a third element in block

>> fifth block
== ~null~  ; isotope

>> if not fifth block [print "No fifth element in block"]
No fifth element in block

These kinds of scenarios present classic problems in Rebol and Red, because people will write code assuming that they can use conditional logic to decide if a value is there... but then one day they hit a LOGIC! or a NONE! literal and it breaks. Having nothing that's actually in a block be falsey is a good thing.
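
The same pitfall exists outside Rebol whenever truthiness stands in for presence. A quick Python sketch (my own contrast illustration, not from this thread; the names are hypothetical) shows a lookup that "works" until a falsey value is actually stored:

```python
# A container that happens to hold falsey values.
settings = {"verbose": False, "retries": 0}

# Broken presence test: relies on the stored value being truthy.
# dict.get returns False here, so this wrongly concludes the key is absent.
if settings.get("verbose"):
    found = True
else:
    found = False

# Correct presence test: ask about membership explicitly.
assert "verbose" in settings
assert found is False  # the truthiness-based test gave the wrong answer
```

Ren-C's choice sidesteps this entire class of bug by guaranteeing that nothing actually stored in a block is falsey.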

Eventually, ~false~ => ~null~ and ~true~ => ~okay~

While antiform LOGIC showed benefits, there turned out to be little benefit in having a separate ~false~ type distinct from ~null~. The real value of TRUE and FALSE came from their being the recognizable words themselves... which weren't necessarily better than other choices (like ON and OFF, or YES and NO).

So the system migrated to something called "Flexible Logic". This introduced a new antiform complement to NULL called OKAY, and focused on the conversion between these forms and words when block representations were necessary.

No Answer is Perfect, But This Has Solid Benefits

The need to store things in blocks that are themselves directly testable as falsey isn't all that valuable in practice. And it frequently led to broken code when people were assuming a conditional test could be used to know whether an element was in a block or not.

Encouraging discipline in triaging whether you want a word or a meta-representation of a logic (which evaluates to something with the "branch triggering" or "branch inhibiting" property) has, in my opinion, turned out to be a net benefit.

Hmm. So the issues seem to stem from the fact that booleans are not the only truthy/falsey values in Rebol… it’s like other dynamically typed languages in having a bunch of others. Except it’s not entirely like other dynamically typed languages, since you want all values stored in a block to be truthy. So that implies that if we have a falsey value, it must be isotopic!
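
For contrast, a short Python sketch (my own illustration, not from the thread) of what "a bunch of other truthy/falsey values" looks like in a typical dynamically typed language:

```python
# Python treats many non-boolean values as falsey in conditionals,
# not just False itself.
falsey_examples = [False, None, 0, 0.0, "", [], {}, set()]

# Meanwhile, non-empty containers and non-zero values are truthy,
# even the *string* "false".
truthy_examples = [True, 1, "false", [0], {"a": 1}]

assert not any(bool(v) for v in falsey_examples)
assert all(bool(v) for v in truthy_examples)
```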

This approach is logical, but doesn’t really appeal to me. For one thing, it means that REDUCE can only handle a subset of code. It makes no sense that reduce [1 + 1] should work, but reduce [1 = 1] should error out.

This also makes composition harder. Thus, for instance, AND conceptually does two tasks: it REDUCEs the block it’s passed, and then it inspects the results for falseyness. I’d much prefer to split those tasks into two different functions… except I can’t do that, because you can’t make such blocks in the first place.

(You may argue that there’s not much need to run the equivalent of AND without first REDUCEing. But that’s just one example. You could also imagine a function which, say, counts the number of truthy values in a block. This is a useful function to have… yet at the moment, I can’t see any way to implement it without also making it reimplement REDUCE. This is not great for code reuse, to say the least!)
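
In a language where booleans can sit in ordinary lists, the two tasks do compose directly. A Python sketch (my own, for contrast with the Rebol situation being described):

```python
# Step 1: evaluate the expressions eagerly, storing the boolean results.
# This is the "reduce" half of the composition.
results = [1 == 1, 1 == 0, 2 == 3, 3 == 3]

# Step 2: count the truthy ones as a separate, reusable step --
# no need to re-implement the evaluation machinery.
truthy_count = sum(1 for v in results if v)

assert truthy_count == 2
```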

Besides, I don’t see why it’s so immensely desirable to use a conditional to test for existence. I’d much prefer using an explicit check: if null? fifth block […]. And nulls can’t be in blocks anyway, so that check would work just as well. For that matter, I’d naturally tend to write this in a completely different way: if 5 <= length? block […].

Do note that this being an option is a by-product of Ren-C's groundwork in having isotopes/null in the first place.

R3-Alpha/Red don't have this:

red>> alpha: [1 2 #[none]]

red>> beta: [1 2]

red>> none? third alpha
== true

red>> none? third beta
== true  ; can't distinguish a stored NONE! from out-of-range access

Only handling a subset of code is by design in Ren-C. It also doesn't allow reduce [pick [a b] 3] or reduce [:append :insert :change] or reduce [get/any 'somethingundefined]

People can make (and have made) arguments that allowing those things is good. It takes experience to know that they are bad.

You learn how bad they are by watching errors and catastrophes go by that preventing them would have stopped. Then you look at what you need to do to accomplish the (few) cases where they represented some valid intent... and see it can be solved a better way.

I can empathize that it's not in your mindset at this moment to think of a logic literal as "gnarly" or problematic in the way the state of null or an unset variable would be. But in Rebol, due to its idioms, and the weird choice to not take true and false for it...it winds up being gnarlier than you think.

I can imagine a lot of things.

But I also deal with the largest and most tricky codebases of Rebolese that anyone has ever confronted. And I have a good sense of what problems are more and less prevalent... and where elegance is more in need vs. not.

The duality between true and false and #[true] and #[false] is a constant thorn, which is eased by not letting the non-words percolate too widely.

Counting the number of truthy values in a block simply comes up less often and is less useful than being able to trust that all items in blocks are truthy.

I'd say it's equally likely (if not more likely) that I'd want to count the number of red words in a block that contained instances of red or green or blue.

If your work involves a lot of boolean processing, and you wind up having to do a level of work that resembles enum processing for the words true and false (or quasi-words ~okay~ and ~null~, or integers 0 and 1), I don't think it turns out to be as big a problem as you might expect. Let's imagine a DIGITAL function:

>> digital: lambda [x] [either x '1 '0]

>> digital 1 = 1
== 1

>> digital 1 = 0
== 0

>> reduce/predicate [1 = 1, 1 = 0, 2 = 3, 3 = 3] :digital
== [1 0 0 1]

>> sum: 0, reduce-each x [1 = 1, 1 = 0, 2 = 3, 3 = 3] [sum: me + digital x]
== 2

(Introducing a few features in my examples as we go, so you can see the use of quoted values as branches instead of blocks... e.g. if 1 = 1 '[a] is a synonym for if 1 = 1 [[a]]. Also the left-quoting construct ME, which evaluates to the prior value of the SET-WORD! on its left. It's more useful when the variable name is longer, e.g. my-long-variable-name: me + 1)

You seemed to prefer brevity, e.g. with REDUCE over COMPOSE, and brevity is served better when you can fold the acquisition of a value in with the check:

 value: any [pick block index, <out-of-range>]

This kind of code is simply what idiomatic Rebol code is like historically. LOGIC! and NONE! resident in blocks have broken it, and people just kind of ignore that and hope that they don't hit them. Fixing it is good in my view, as I like the style.

And I'm just going to have to maintain that the systemic benefit of being able to write things like while [value: try take block] [...] vs. while [not null? value: try take block] [...] adds up to being more significant than the impositions on logic handling, when you consider the kinds of problems that Rebol is applied to.


This is true, and I quite like isotopes in general. It’s just that I find it difficult to conceptualise boolean values, of all things — the most basic data type in existence! — being so ‘gnarly’ that they must be made isotopic.

Fair enough.

This is the argument which speaks to me the most: it’s a trade-off, and you just have to pick what things you value the most. You happen to like the idioms which are enabled by making all non-isotopic values truthy, so you consider this choice to be a good one. I don’t particularly like those idioms, so I prefer a design where booleans can be non-isotopic. Ultimately, it’s subjective, and I can accept the reasons you made this decision.

OK, I didn’t know /predicate existed. That immediately makes working with isotopes a lot easier.

OK, that is quite nice. I don’t think it’s necessarily worth sacrificing non-isotopic booleans, but like I said, it’s a subjective decision.
