Ugly Types: Less Ugly Than History, Can We Do Better?

The Limited and Ambiguous Historical Idea

People are used to being able to do things like:

 x: 10
 switch type? x [
     integer! [print "It's an integer"]
     block! [print "It's a block"]
 ]

 assert [parse [1 [second] 'foo] [integer! block! lit-word!]]

 assert [find any-word! (type? first [x:])]

But the historical DATATYPE! and TYPESET! were strange.

  • DATATYPE! rendered as a WORD! but was really wrapping an integer from 0 to 63

  • TYPESET! was a 64-bit bitset, one bit for each type (this is where the 64 types limit came from)

    • it lost its meaning in rendering (it kept no record of what the set actually was...just dumped words for each bit)

    • not preserving the name from a fixed list of typesets was based on the concept you could make your own or UNION/INTERSECT them

So it looked like this:

red>> type? 1
== integer!

red>> type? type? 1
== datatype!

red>> print mold any-word!
make typeset! [word! set-word! lit-word! get-word!]

red>> print mold any-type!
make typeset! [datatype! unset! none! logic! block! paren! string! file! url!
    char! integer! float! word! set-word! lit-word! get-word! refinement! issue!
    native! action! op! function! path! lit-path! set-path! get-path! routine!
    bitset! object! typeset! error! vector! hash! pair! percent! tuple! map!
    binary! time! tag! email! handle! date! port! money! ref! point2D! point3D!
    image! event!]
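
That ad-hoc nature is why the rendering couldn't keep a name around: typesets could be built and combined freely with set operations. A rough sketch of the historical behavior (output shapes may vary slightly across Rebol2, R3-Alpha, and Red):

red>> my-types: make typeset! [integer! float!]  ; ad-hoc typeset, never given a name
== make typeset! [integer! float!]

red>> union my-types make typeset! [block!]
== make typeset! [integer! float! block!]

red>> find my-types type? 1  ; typesets answered membership tests
== true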

The TYPE-XXX! Approach

So Ren-C attacked the ambiguity and extensibility problems with a new word type, TYPE-WORD!. Typesets were then replaced by TYPE-GROUP! and TYPE-BLOCK!, which reference functions that act as type-testing predicates, using groups for intersections and blocks for unions:

>> type of 1
== &integer

>> type of type of 1
== &type-word

>> print mold any-word!
&(any-word?)

>> print mold any-value!
&(any-value?)

This gives a realistic axis of extensibility, and distinguishable entities that can trigger behaviors in PARSE when something looks up to a TYPE-XXX!. (This shows why using WORD! or URL! or ISSUE! wouldn't work: the type intent has to be carried by what e.g. INTEGER! looks up to.)
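
For instance, here's a hedged sketch of how a historical-looking PARSE rule keeps working, because the word INTEGER! now looks up to a TYPE-WORD! rather than to a wrapped integer:

>> integer!
== &integer

>> parse [1 2 3] [some integer!]  ; still matches by type, as it did historically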

Calling functions to implement type checks, rather than testing bits in a bitset, is a difficult performance point, especially when an array of functions has to be called to check every parameter of every function call.

Intrinsics and other magic are employed to rein it in. It's not particularly simple...but finding ways to speed up function calls where you can has systemic benefit.

New Consequence: FIND Must Find TYPE-WORD! Normally

Because a datatype is a legitimate value that can be stored in a block, some of the historical interpretations of datatypes by functions like FIND were problematic:

red>> block: reduce ["hello" integer! 1]
== ["hello" integer! 1]

red>> find block 'integer!
== none  ; rendering was a lie

red>> find block integer!
== [1]

You couldn't find a literal datatype in a block. Ren-C is approaching this by saying FIND has to find the TYPE-WORD! (as it does for all non-antiforms), but that you can use antiform actions as predicates.

>> block: reduce ["hello" integer! 1]
== ["hello" &integer 1]

>> find block integer!
== [&integer 1]

>> find block :integer?
== [1]

There was some thought that maybe you could create antiform TYPE-XXX! and call them "matchers", passing them to FIND.

  • But this is an isotope for each TYPE-XXX!, so it's not even like there would be one "matcher"

  • It also would be the only instance of antiforms of types with sigils, which doubles the sigil to make ~&integer~, which I find kind of displeasing

I feel that antiform actions cover it for FIND, and if you have higher level needs you should use something like PARSE which has richer options and isn't beholden to quite the "mechanical" answer that a series primitive like FIND has to abide by with its limited parameterization.

New Annoyance: TYPE OF Quotes And Antiforms

When there were only two datatypes with quotedness, the quote was part of their datatype:

red>> type? first ['a]
== lit-word!

red>> type? first ['a/b]
== lit-path!

red>> lit-word! = type? first ['a]
== true

red>> parse ['a 'a/b] [lit-word! lit-path!]
== true

Ren-C's approach affords the ability to make type constraints that carry forward the PARSE behavior. But the TYPE OF all quoted values is the same... &QUOTED.

>> lit-word?!
== &(lit-word?)

>> lit-word?! = type of first ['a]
== ~false~  ; anti

>> type of first ['a]
== &quoted

So perhaps you see the motivation to decorate as ?! instead of just ! for the type constraints. People need to know that these aren't fundamental types. You have to use e.g. MATCH with them:

 >> match lit-word?! first ['a]
 == 'a

 >> match lit-word?! 10
 == ~null~  ; anti

 >> match [lit-word?] first ['a]  ; alternative as 1st slot known "typelike"
 == 'a
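
The same constraints can serve as PARSE rules, carrying the old LIT-WORD! matching behavior forward (a hedged sketch; which ?! constraints are predefined may vary):

 >> parse ['a 'b] [some lit-word?!]  ; matches, as [some lit-word!] did historically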

This is something of a pain point, and I'm not entirely settled on whether it would be good to delve into some kind of ambiguity where we are actually allowed to get back constraint functions as the answer to TYPE OF, and make that the fundamental:

>> type of 1
== &integer?

>> type of spread [a b]
== &splice?

>> type of ~true~
== &logic?

>> type of first ['a]
== &quoted? 

So I don't think this is a good idea for the quoted types, but for the antiforms it might be a narrow enough thing that it provides "what the people want".

>> switch type of true [
     splice! [...]
     logic! [...]
     integer! [...]
  ]

Barring that, what we have to do today is flip SWITCH over into a MATCH mode (currently called SWITCH/TYPE but should probably be SWITCH/MATCH... or maybe it should take the MATCH name):

>> switch/type true [
     splice?! [...]
     logic?! [...]
     integer! [...]
  ]

Note that the ?! distinction is a new idea which hasn't made it to all type constraints yet, e.g. ANY-VALUE! is still as it was. But because parameters use what is effectively a TYPE-BLOCK! you can say any-value? or splice? in them instead of going through the extra step.
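
For instance, a minimal sketch of a function spec that uses the constraints directly (EXAMPLE and its parameter are just illustrative names):

example: func [
    value [any-value? splice?]  ; constraint functions usable directly in the spec
][
    print "value accepted"
]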

Should TYPE Be A Bigger Concept?

One thing that has nagged me is whether, when we ask for the fundamental "cell type" of something, we should avoid using the word "TYPE" for that at all...

Maybe there's some bigger idea in an object/class system where TYPE is meaningful for saying something more than "this is an object": it could say "this is a book", and you could also ask "is a book readable", etc.

Or maybe TYPE can be parameterized:

 >> type of matrix
 == &[matrix 10x10]

So this would mean there's a smaller question about the fundamental type, maybe call it "KIND":

>> kind of [a b c]
== &block

>> kind of matrix
== &object

It would be nice to just be able to say "64 types is enough for anyone" and say "there, it's done". I'd be happy to do that if I felt that it was enough. It wasn't, even when thinking along fairly limited lines that don't go in these fancier directions.

I don't think any near-term system will actualize on bigger visions of what TYPE might be, but it would help to know if that should be ruled out or not, just in order to pick the term KIND or TYPE! But even that question is murky.

Some Related Reading: %types.r

The dialected table used to construct the type testing macros and other things is kind of neat, though some comments are out of date and parts of it need updating (it's getting upgraded in an upcoming commit which finally breaks the 64-type barrier and introduces the $ types):

See %types.r

From a Haskeller’s perspective, this is the obvious solution. 'a could be &[quoted word], and spread [1 2 3] could be &[isotope group] (or even &[isotope group integer]), and so on. The elements of these series could simply be ordinary words, left unevaluated and unbound: one could test 'isotope = first type of spread [a b c], and so on.

But I’m not convinced a Haskell-like type system is a great fit for Rebol. The biggest issue is that we want to have union and intersection types, and there’s no easy way to integrate them into a system like this. One could possibly make it more ergonomic using type synonyms, but then you’d have to handle those as well when testing types, and it becomes more complicated than it should.

I much prefer your alternative suggestion of making constraint functions fundamental… but with some small changes. If type of ''a is &quoted?, then I feel that type of spread [a b] should be &isotope?. But then one could have other basic predicates too. I think it’s particularly important to have a set of types &any-word?, &any-block?, and so on, which would match ‘under’ isotopes and quotes (and other sigils). From these, it should be possible to create other types by combining the existing predicates: for instance, a splice would be a type which is both &isotope? and &any-group?.

This does leave me uncertain about precisely how those combinations should be accomplished. The best idea I can come up with is to allow constraints to take arguments, like so:

>> splice!: &all [&isotope? &any-group?]
== &all [&isotope? &any-group?]

; or equivalently:
>> splice!: &all [isotope! any-group!]
== &all [&isotope? &any-group?]

>> match splice! spread [1 2 3]
== ~true~

>> match splice! [1 2 3]
== ~false~

; another demonstration, with more combinators:
>> series!: &all [
     &not isotope!
     &any [any-block! any-group! any-path! any-tuple! string!]
   ]
== &all [&not &isotope? &any [&any-block? &any-group? &any-path? &any-tuple? &any-string?]]

>> match series! "foobar"
== ~true~

>> match series! '[a b c]:
== ~true~

>> match series! spread [1 2 3]
== ~false~

I’m not sure how feasible this is to implement, though. As I recall, the key innovation which allowed constraints-as-types was implementing them as intrinsics. Is there some way of getting these combinators &all/&any/&not to construct new intrinsics at runtime? I really don’t know. Possibly, these type constraints may need to have their own internal representations, different to that of ordinary Rebol functions.


Prior to antiforms, TYPE-BLOCK!s were used, though the representation tried to compress things by putting the quote levels on the word in the block:

>> type of first [a]
== &[word]

>> type of first ['a]
== &['word]

>> type of first [''a]
== &[''word]

Even that starts to incur performance penalties when you ask for a type, because it's synthesizing an array on the spot to answer the question--not a huge deal, but it's something. (The only "stock" blocks were the unquoted forms, &[word] etc.)

When quasiforms/antiforms came along, the scheme ran afoul of representation questions, with weird answers like having to use meta forms inside the block:

>> type of first [(a)]
== &['group]

>> type of first [''(a)]
== &['''group]

>> type of spread [a b c]
== &[~group~]

>> type of first [~(a)~]
== &['~group~]

That was rejected as too obfuscating. (Meta forms work all right for things like storing arbitrary values in PACKs, but the above sucks.)

With my performance-blinders on, I don't recall if I ever suggested attacking this via plain words that convey parameterized types as per your Haskell-like suggestion:

>> type of first [(a)]
== &[group]

>> type of first [''(a)]
== &[quoted quoted group]

>> type of spread [a b c]
== &[antiform group]  ; or &[splice], if e.g. &[logic] narrows &[antiform word]

>> type of first [~(a)~]
== &[quasi group]

It's tough here, though, to assume the arity of each type is known. If we imagine this generalizing, it might be better to have it structured, where only terminal types aren't in blocks:

>> type of first [''(a)]
== &[quoted [quoted group]]

>> type of quote matrix
== &[quoted [matrix 10x10]]

That adds another performance penalty to grapple with, but it seems important if you're going to say that array destructuring is the method of type destructuring.

If the answer from TYPE OF came back immutable, then magic might be able to compress that behind the scenes. It only would work if it wasn't assumed you could change one reference to the result of TYPE OF and see that in another place.

>> t: type of first ['a]
== &[quoted word]

>> t2: t
== &[quoted word]

>> take t
== quoted

>> t2
== &[word]  ; only if cells for t and t2 variables point to common allocation   

To point to a common allocation, there has to be an allocation, which subverts some levels of optimization (at least, when one is trying to be competitive with code that does no allocations).

For a similar problem that's been solved, see: PATH! and TUPLE! compression, explained

I May Like The Parameterized Type Direction

I was already aiming to flip things back so that the &word and &tuple and &path could be used as prettier impromptu type constraints:

parse [1 3 5] [some &odd?]

parse [...] [some &tester?/refinement]

parse [...] [some &obj.tester?]

I think this needs to be done regardless. But if it is done, then switching around to TYPE-BLOCK! for the parameterized types would be available.

The idea of making terminal types equivalent to their predicates might be good or bad. Don't know.

>> group!
== &group?

>> type of first [''(a)]
== &[quoted [quoted &group?]]

Certainly some food for thought here.

Why do you say this? I like the idea of structuring, but it seems to go against conventional Rebol style.

I agree that the output of TYPE OF should be immutable.

(Though note that in the paragraph to which you replied, I was referring to the idea of generating new constraints at runtime, rather than using TYPE-BLOCK!s.)

It’s not necessarily about ‘good’ or ‘bad’, as such… I just don’t really see any alternatives, if everything is a constraint.

The direction of the proposal does accommodate either of:

 >> type of false
 == &[logic]

 >> type of false
 == &logic?

I'm leaning toward thinking that TYPE OF always gives back a TYPE-BLOCK! (or null for null input, if you indicate that's intentional).

Coding style and in dialects, yes. But this is more on the "data" side of the spectrum than it is "code". You won't be writing it out in source very much, I don't think. Just analyzing it.


The direction of saying TYPE OF always returns a TYPE-BLOCK!, and that it is a kind of "broad answer" you can destructure into parts, makes it seem like it could give a good baseline behavior... namely that if you ask for the TYPE OF two things, the answers won't be equal unless they're equivalent to the maximum level of specificity that is known.

I'm happy if this means--for example--that things with different quoting levels aren't considered as having the same type...or things at the same quoting level of different types aren't the same:

>> (type of first [''a]) = (type of first ['''a])
== ~false~  ; anti

>> (type of first ['a]) = (type of first ['1])
== ~false~  ; anti

But you could pick this apart: somewhere in the type of both of them is the notion that they wrap words, and if that were interesting you could suss it out. The information is there in the answer to TYPE OF.
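
For example, a hedged sketch assuming the flat &[quoted quoted word] shape, and that a TYPE-BLOCK! can be picked apart like an ordinary block:

>> last type of first [''a]
== word

>> (last type of first [''a]) = (last type of first ['''a])
== ~true~  ; anti  (both ultimately wrap a WORD!)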

If this extends further, and you ask whether an object that is a book is equal to an object that is an animal, these wouldn't come back as equal either.

But would the TYPE OF an animal and a book come back with some component of the TYPE-BLOCK! mentioning they're both objects?

>> type of book
== &[book object]  ; or [object book] ?

>> type of animal
== &[animal object]

Or is the fact that they're both objects not part of the answer to TYPE OF, but something you have to find out from a different test?

If OBJECT is what's parameterized, that would make it easier to destructure two things that answered the TYPE OF question and see that they're both objects. But it suggests an object with no subclass would have to fill in that parameter as something like a blank (or quasivoid or whatever):

 >> type of {x: 10 y: 20}
 == &[object ~]

None of this has any design concept, and I'm certainly willing to borrow or steal from elsewhere if it can be made to work in the paradigm with everything else.

But it does impact questions like:

switch type of x [
    block! [...]
    object! [...]   ; does this mean only plain &[object ~] ?
]

Just some rambling there, but it points to my sticking point of whether we should be writing switch kind of x everywhere instead of switch type of x for common code. But I don't like that being common, so I really want a way for type of to be the go-to.

I tend to agree with this. Then we could get rid of TYPE-WORD! and all the rest.

I’m not quite sure what else it would come back with.

OCaml is probably the closest to these ideas that I’m aware of. It may be worth having a look at its class types.

On this point, it’s worth noting that ‘kind’ already has a well-established meaning in type theory. (Specifically, it’s the type of a type.) So it’s probably a good idea to choose some other word.

With TYPE-BLOCK! serving as the "declarative expanded types", there's still going to be a desire for some way to do type constraints, in the spirit of:

parse [a: $b c] [some any-word!]

Decorating constraint functions is an avenue of accomplishing this:

any-word!: &any-word?  ; confusing to make it look like a datatype
parse [a: $b c] [some any-word!]

parse [a: $b c] [some &any-word?]  ; coherent, fewer definitions, faster

One could say that it's PARSE's job to have a MATCH keyword when it means "do a constraint". However, BLOCK!s already have a meaning in PARSE, and ACTIONs have meanings, so this is a bit dicey... the MATCH combinator would have to quote its argument, which is not impossible but I think is the wrong idea:

parse [a: $b c] [some match [any-word?]]  ; BLOCK! usually means parse rule here

So I do not see the notation for type constraints as something that needs to go away.

Indeed, TYPE-BLOCK!s make type constraints difficult (unions, intersections, etc.). Which is precisely why my original suggestion was that they might not be a good fit for Rebol, and we should focus more heavily on type constraints as the fundamental building blocks for types.

That is to say: if we’re relying on a system of type constraints in any case, then it makes sense to me that TYPE OF should return a type constraint too. I especially like the idea of this invariant holding:

>> match type of x x
== ~true~ (for all x)

(Where match specifically takes a type constraint.)

Note that MATCH returns the value:

 >> match integer! 1
 == 1

 >> match integer! "abc"
 == ~null~  ; anti

Other than that (and the exception for null as match try type of x, where the result is a "heavy" then-triggering null)...yes, I'd agree with the invariant...

But I don't see a problem with MATCH being willing to take either a TYPE-BLOCK! (in which case it looks for exact equality of the type) or a type constraint (in which case it calls the function)...or to take a BLOCK! (in which case it assumes you want to treat it as you would a function spec block).

>> match [even? text!] 2
== 2

>> match [even? text!] 1
== ~null~  ; anti

>> match [even? text!] "abc"
== "abc"

The existence of type blocks doesn't mean type constraints can't exist too. They may just be different parts of the solution.

Historical Rebol had DATATYPE! and TYPESET!, and for the datatype purposes I'm thinking TYPE-BLOCK! may work, while for the typeset purposes the type constraints are used.

Ah, didn’t realise that. But it looks like you understood me anyway.

Possibly… but I prefer to avoid multiplying entities when possible. (“A designer achieves perfection when there is nothing left to take away”, and all that.) I dislike the idea of two kinds of ‘type-describing things’ if one suffices.

Hmm... I was just reminded that the nature of type predicates is such that they can only check against instances; they can't check for relationships between the types a predicate describes.

So if you have a question like "Hey, I have a type T. Is it in the category ANY-WORD?" there's no way to know.

Things like function specs and PARSE and such always have value instances to operate on, so it's not a problem there. But it comes up in other code.

Maybe it suggests that functions like ANY-WORD? that can meaningfully be applied to either might need a refinement to help with that:

 >> any-word? first [x:]
 == ~true~  ; anti

 >> any-word?/type set-word!
 == ~true~  ; anti

This would avoid saturating the universe with duplicate functions like ANY-WORD-TYPE?.

It's relatively rare to need to do this, but it comes up sometimes...and it's something typesets could do that we can't do easily at present.

Hmm. Interesting point.

Although it does remind me of one of my suggestions above:

If that internal representation does something along the lines of storing the set of types directly, it would become easy to test if one predicate is a subset of another.
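
Something along these lines, where SUBSET-OF? is a hypothetical name for such a test (no such function exists today):

>> subset-of? &set-word? &any-word?  ; hypothetical: does one predicate imply the other?
== ~true~  ; anti

>> subset-of? &any-word? &any-series?  ; hypothetical: words aren't series
== ~false~  ; anti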

I don't know if it's a good thought or not, but the narrowing could come from the idea that antiform is an arity-2 parameterized type... which adds the subtype if applicable:

>> type of spread [a b c]
== &[antiform &[group] #splice]

>> type of true
== &[antiform &[word] #logic]

And then:

>> logic!
== &[antiform &[word] #logic]

>> logic! = type of true
== ~true~  ; anti

Maybe messy, but a definite improvement over saying you can't get the type of a logic. :-/

I'm thinking that when types appear as parameters to other types, they should appear in their whole form (decoration and block), just so it makes more sense to decompose them.

>> t: type of first ['1]
== &[quoted &[integer]]

>> all [t.1 = 'quoted, t.2 = integer!] then [print "quoted integer"]
quoted integer

Maybe. I guess I'd have to see in practice how it panned out.

I only have vague ideas about how to practically implement this so it isn't dog slow, but it does seem like a positive direction.

Ok... well, I guess that if you want to test the "kind", then maybe the whole thing should be set up so that the kind really is the first element in the type of whatever you have.

It feels a bit strange to say that the type of whatever object subclasses wind up becoming starts with object... but... well, I guess it makes sense.

Then OBJECT! could mean "just a plain object with no further elaboration".

>> object!
== &[object]

So if you SWITCH TYPE OF and check against OBJECT! then you wouldn't match against fancier things than the base untyped object. You'd have to SWITCH FIRST TYPE OF, and then you wouldn't be able to use things like SPLICE! or LOGIC! in what you test against because you'd be dealing with words, not types.
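
A hedged sketch of what that would look like (BOOK is assumed to be an object instance; note the cases become plain words):

>> switch first type of book [
       'object [print "some flavor of object"]
       'block [print "a block"]
   ]
some flavor of object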

I'm not sure how you'd elaborate objects to get something knowingly distinct. Could go the Java route and use URLs when you create your classes. :frowning:

 &[object http://hostilefork.com/project1/book]

 &[object http://hostilefork.com/project2/animal]

As these are runtime concepts, maybe making a new object class would get a runtime ID.

 &[object @animal.17]

 &[object @book.32302]

That's less oppressive-seeming.

 >> book!: class @book [...]
 == &[object @book.32302]  ; e.g. class creates a type, not an instance

 >> b: make book! [title: "Ren-C (Ab)user's Guide"]
 == ...{@book title: "Ren-C (Ab)user's Guide"}...

 >> type of b
 == &[object @book.32302] 

Well, it's a thought.

This feels quite redundant to me. A group antiform is always a splice. A word antiform is… OK, sometimes it’s a logic, sometimes it’s something else, but I don’t think TYPE OF is the right tool to be testing value-level properties like that.

Do we even have to? Plenty of languages (most notably JavaScript and Lua) use prototype-based objects which are not distinguished at runtime. I see no problem with Rebol taking the same approach.

(Apologies, this post has ended up somewhat long and rambly. TL;DR: we should think much more carefully about how useful TYPE OF really is in practice.)

In trying to sort out my thoughts on this topic, I’ve come to think that the key question we should be asking is: precisely what do we want to use types for?

Starting with the most basic things, one very important use is within the interpreter itself. This is the HEART_BYTE, which (as I understand it) defines how to interpret the bytes making up a Rebol value. (Previously @hostilefork has called this the ‘kind’.) Obviously, this is vital to making the interpreter work. It’s also fairly limited, albeit less now than it used to be.

A second usecase is if you have some arbitrary value and want to find out what you can do with it — what Rebol calls TYPE OF. In most dynamically-typed programming languages, including historical Rebol, this gives you back the internal interpreter type… but there’s no reason it couldn’t yield something more generic, as we’ve been discussing.

A third usecase is if you want to match a value against some criterion. Rebol highlights this quite prominently: in average Rebol code, types are used most frequently to establish preconditions for function arguments. They’re used similarly in PARSE, amongst other places.

(In other languages, the most prominent use of types is to enable static analysis during compilation. This is an extremely useful capability, and in many modern languages, the type system is explicitly designed to make it tractable to check as many properties as possible before the program is run. But Ren-C isn’t compiled, and Rebol more broadly isn’t hugely amenable to static analysis anyway, so this isn’t a concern for us at all.)

Most dynamically-typed languages cover all three of these usecases with a single notion. Each value is stored alongside some type descriptor, which is returned when the programmer asks for typeof(value) (or whatever it might be). Then, you can check that against another type using an ordinary if expression, the same as checking any other condition.

Historical Rebol took much the same approach. It has a fairly unorthodox implementation of supertyping (using typesets), but otherwise, there’s one notion of ‘type’ which covers all usecases. The main wrinkle is that dialects can use special syntax for matching against types, most notably in function parameters.
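
Concretely, in historical Rebol the same DATATYPE! value covered reflection, conditional testing, and parameter matching:

red>> if integer! = type? 1 [print "an integer"]
an integer

red>> f: func [x [integer! block!]] [x]  ; the same datatypes reused as a parameter spec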

Ren-C has already diverged from this approach, by recognising that ‘things you can match against’ is a broader category than ‘things the interpreter needs to know about’. Thus, it’s gradually extended the language to accept functions (a.k.a. ‘type constraints’) in places where it previously only accepted types. We’re now at a point where all type-like things, aside from the primitive ‘kinds’, are consistently represented as functions. And I think we’ve agreed that this is a good idea. By separating ‘types the interpreter knows about’ from ‘types we match against’, we free up the interpreter to support a lot more basic types, while giving function definitions a greater ability to express arbitrary preconditions.

But of course, that doesn’t cover all the places types pop up in Rebol. They also appear as the return value of TYPE OF… which, I think, is where our disagreement lies. I’ve been leaning towards unifying it with that idea of ‘types we match against’, meaning that users only have to deal with a single notion of ‘type constraint’. On the other hand, you want to make it a more structured system, focussed around those primitive types known by the interpreter.

However, thinking along these lines has led me to pose a slightly different question: how does TYPE OF get used in practice? I think the answer to this question should significantly influence what we choose it to return.

As a first step towards answering it, I did a quick search of the Ren-C source code. As far as I can see, it's not used very often. Indeed, I can only find three occurrences:

  • Two in UPARSE, where it's used in the TYPE-BLOCK! combinator to test a value against a type in an if expression. In my opinion, this should really be replaced with MATCH, allowing it to deal with filter actions as well.
  • One in test/datatypes/varargs.test.reb, where it’s again used to match a value against a type, albeit in a significantly more convoluted way which I don’t understand.

(For comparison, when I search for MATCH, I count >40 occurrences in the mezzanine alone.)

At least to me, this suggests that TYPE OF is of significantly limited use in actual code. I take this as a sign that we shouldn’t waste our time thinking up elaborate schemes to encode information in its return value… rather, we should just make it as simple as possible.

Along those lines, maybe TYPE-BLOCK! isn’t such a good choice for its return type after all, and it should be returning a single TYPE-WORD!. On the other hand, that doesn’t work so well with my conviction that we should only have one variety of TYPE-*. I feel sure that there’s some better design waiting to be discovered for this.


Nothing's too long for me to read here! :slight_smile:

Feel free to write long things and edit them down later for clarity, or if you just decide parts of it were distractions and aren't relevant anymore (and it doesn't break the continuity of the thread).

The reason for that is that most uses of TYPE OF had been SWITCH TYPE OF. Many of those would no longer work, because e.g. TYPE OF TRUE was an antiform and not LOGIC!, etc.

So all the switch type of instances (except for two, apparently) were robotically changed to switch/type where the values you're switching on are constraints (or types).

switch/type x [
    &logic? [...]
    integer! [...]
    &splice? [...]
]

But given the choice, people would prefer switch type of and being able to think of type as being a value instead of a constraint. And with the TYPE-BLOCK! answer, there's some inkling of a direction of going that way:

switch type of x [
    logic! [...]  ; wouldn't be locked in a 1:1 heart:type ratio
    integer! [...]
    splice! [...]
]

With more complex types, you'd have to use something like DESTRUCTURE to get at what you were looking for. And maybe that would be interesting. But of course that's a lot of hand-waving right now.

In the end, a good "realistic" choice might well need to be about coming to terms with something kind of simple. That may just be how it is.

64 types wasn't going to cut it for me, so I had to break that barrier for starters.

Now that the barrier is broken, it's a good time to let it simmer a bit. It's not going to resolve overnight--but I'm quite glad that you're thinking about it, and that you are able to take initiative and grok these problems well.

On the plus side, there's a lot of other fun things to work on right now while this sorts out...empowered by the new types (and some other realizations that are falling into place)...

I'm pleased and terrified that you're reading the source, but... there are some files that have decent organization and comments, which can give you your bearings faster than reading something like an R3-Alpha or a Red.

This is motivating me to start pushing through some changes that had been on the back burner for a while, e.g. the death of REBVAL:

REBVAL => Value renaming · metaeducation/ren-c@71459d6 · GitHub

If you do have questions about things you see, feel free to ask them (or send PRs of things that are clearly just wrong or outdated).


In a way, I think this just goes to prove my point. Yet again, we see the most common thing to do with types is to match against them. And, in your words, when we’re matching types it’s nice if we’re not ‘locked in a 1:1 heart:type ratio’. To me, that again implies that it should be able to take constraints, like SWITCH/TYPE does… and you can’t do that simply by SWITCHing against TYPE OF.

I suppose the point I'm making is that TYPE OF is limited in a very fundamental sense: it can only ever return one value. Perhaps you can try to predict ahead of time what people will want to test, and add things like LOGIC! to the output of TYPE OF… but then comes someone who wants to test if a number is EVEN?, and TYPE OF just doesn't give you that information. That's why I think it's a good idea to keep TYPE OF a direct reflection of the heart-byte, and use other constructions for code which needs other things.
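
A rough sketch of that contrast, assuming SWITCH/TYPE accepts arbitrary constraints as described above:

>> switch/type 2 [
       &even? [print "even"]  ; a test SWITCH TYPE OF could never surface
       integer! [print "some other integer"]
   ]
even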

I’ll put my support strongly behind this. Pattern-matching is inordinately useful.

This is really where my expertise lies. I’m a Haskeller — I spend a long time thinking about types and type systems. Additionally, over the past year or two I’ve been learning a lot about structural type systems (i.e. ones which allow union and intersection), including making my own. So when I say things like ‘a Haskell-like type system isn’t a great fit for Rebol’, there’s some intuition behind those statements.

Sure, but I just wanted to get some statistics quickly, and Ren-C itself is the largest Ren-C codebase that I know of. (Also, sometimes I get curious how things are implemented internally.)
