My Summary of the Big Picture

I wrote this to someone in a GitHub issue and thought it was pretty salient:

What has drawn people to Rebol historically has varied. But a large number of those who praised it and used it were less interested in the language itself than in the properties of the executable. It was small, you could run it on any OS without installing anything, and it came with a GUI built in.

But when serious language theorists look at Rebol, they notice it is riddled with design holes. The language itself wasn't composable the way one might like languages to be: mixing constructs in new ways that weren't specifically accounted for never worked. It was more like a "scriptable app" that had a few features that pleased its userbase...and had to be extended by the designer every time a new need came up.

So put briefly: if you don't understand what these holes are, then you won't appreciate the many issues that Ren-C is trying to solve. Starting from scratch without that understanding inevitably repeats the same mistakes.

Once you know that historical Rebol was fundamentally broken, there are basically 3 choices:

  1. Inventory and address the holes one at a time and try to fix or mitigate them
  2. Ignore the holes and just hope that if you add enough features and integration no one will notice
  3. Turn away and run from the crackpots using it, and work with a more solidly designed language

(1) is Ren-C's hard-chosen path. Energy is spent on identifying certain patterns in source that users must be able to write and have work, if the language is to justify its existence at all. While it would be nice if stack traces were beautiful and if building the sources was 100% easy, all of that would be meaningless if the punch line was "oh, and the language this is all supporting doesn't actually work."

(2) is chosen by people like Red and Oldes's branch of R3-Alpha, as well as some clones that have popped up over the years.

(3) is probably the most sensible choice, but if I didn't think there was some promise in the language I wouldn't be pursuing (1).


Option 3 also means dealing with the shortcomings of the alternatives, such as ugly syntax and dependency hell.

Besides, there are holes everywhere; in other languages they are just more hidden, or plugged with another library you do not control.

And most important, there is no fun in option 3 and nothing to learn from it, which is also a justification for following option 2. We need to remember the state original Rebol was in before the Red project started: it was not open sourced, and it was abandonware.

The effort of Oldes is, imo, mainly about making it work for personal use, like the World language and Boron.

Kaj's Meta is an interesting alternative, bringing Rebol back to some computing roots to get the basics right, without needing to fix the R3 or Red designs.


Not disputing this overview per se; however, these statements aren't self-evident without some digging. Is there a succinct citation for each of them?

  • When serious language theorists look at Rebol, they notice it is riddled with design holes

  • The language itself wasn't composable the way one might like languages to be

Would echo (I think) @iArnold's sense on 3—which more solidly designed language used by non-crackpots are people more likely to opt for vs. popular languages which may have the same or worse design issues but have the security blanket implied by that popularity?

I'd guess the sort of competition in the homoiconic space I'd have in mind would be people using things like Clojure (or ClojureScript). But I think if you look at things like Julia or Go, there are some real strengths there. And Rust and Haskell are good projects for those who want to step away from popularity but get rigor as a benefit.

The language itself wasn't composable the way one might like languages to be

I think if you look at the FOR-BOTH example in Ren-C, it really tells a story that isn't there in historical Rebol.

for-both: func ['var blk1 blk2 body] [
    unmeta all [
        meta for-each :var blk1 body
        meta for-each :var blk2 body
    ]
]
It's just across the board more solid. Not only does definitional RETURN mean that a return in the body will act as the person using the loop would expect, but BREAK and CONTINUE will behave correctly...returning the aggregate loop result.

>> for-both x [1 2] [3 4] [print ["x is" x], x * 10]
x is 1
x is 2
x is 3
x is 4
== 40

>> for-both x [1 2] [3 4] [print ["x is" x], if x = 2 [break]]
x is 1
x is 2
; null

>> for-both x [] [] [print ["x is" x], <result>]
== ~none~  ; isotope

All the ideas come together here. With META, ~unset~ isotopes and ~none~ isotopes etc. become plain BAD-WORD!s, which are truthy; NULL goes to NULL; everything else (including plain BAD-WORD!) becomes one level more quoted than it was; and all QUOTED! values (including a quoted #[false]) are truthy. NULL is reserved uniquely as the signal of BREAK, and so everything... works.
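To make those META rules concrete, here is a sketch of what each case would look like at the console. The outputs are inferred from the rules as just stated, not verified against an actual Ren-C build:

```rebol
>> meta null          ; NULL passes through as NULL
; null

>> meta ~none~        ; a ~none~ isotope becomes a plain BAD-WORD!, which is truthy
== ~none~

>> meta 1020          ; ordinary values gain one quote level
== '1020

>> meta first ['x]    ; already-quoted values gain one more level
== ''x
```

Since every non-NULL case METAs to something truthy, the ALL in FOR-BOTH only bails out when a FOR-EACH signals BREAK with NULL; UNMETA then peels the quote level back off the aggregate result.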

Hmmm, I realize there is a deficiency here...

>> for-both x [1 2] [] [print ["x is" x], x * 10]
x is 1
x is 2
== ~none~  ; isotope

What I want is for that to be 20.

For that to happen, the second FOR-EACH would have to vaporize somehow.

This means either the second clause would be invisible -or- something about the nature of ALL would choose to throw out a product.

There is NONE-TO-VOID, but then you fall into the trap of: what if the second loop ran but intentionally produced a ~none~? I guess we could say that ~none~ is skipped, which would manifest something along the lines of:

>> for-each x [1 2] [
       if x = 2 [~none~] else [x * 10]
   ]
== 10

Basically saying that "none isotopes don't count". If that were the policy then turning nones into voids so they vanish in aggregating constructs would be considered acceptable.
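Under that policy, a hypothetical revision of FOR-BOTH might run each FOR-EACH result through NONE-TO-VOID so that an empty (or all-none) loop drops out of the ALL aggregate. This is only a sketch of the idea, assuming NONE-TO-VOID composes in this position and that a voided expression vanishes through META and ALL:

```rebol
for-both: func ['var blk1 blk2 body] [
    unmeta all [
        meta none-to-void for-each :var blk1 body
        meta none-to-void for-each :var blk2 body
    ]
]
```

If those assumptions held, the problem case above would give the desired 20, because the second loop's ~none~ would vaporize rather than becoming the overall result.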

I still maintain this is getting closer. Just have to keep looking at it.