TYPESET! took advantage of 64-bit integers--and a cap of 64 fundamental types--to fit one bit flag per datatype into a single value cell representing a typeset. Besides the obvious lack of extensibility, this has a number of problems. One is that typesets render in a pretty ugly way...a seemingly simple concept like ANY-TYPE! expands out fairly monstrously:
r3-alpha>> print mold any-type!
make typeset! [unset! none! logic! integer! decimal! percent! money! char!
pair! tuple! time! date! binary! string! file! email! url! tag! bitset! image!
vector! block! paren! path! set-path! get-path! lit-path! map! datatype!
typeset! word! set-word! get-word! lit-word! refinement! issue! native!
action! rebcode! command! op! closure! function! frame! object! module!
error! task! port! gob! event! handle! struct! library! utype!]
red>> print mold any-type!
make typeset! [datatype! unset! none! logic! block! paren! string! file!
url! char! integer! float! word! set-word! lit-word! get-word! refinement!
issue! native! action! op! function! path! lit-path! set-path! get-path!
routine! bitset! object! typeset! error! vector! hash! pair! percent! tuple!
map! binary! time! tag! email! handle! date! port! image! event!]
People are used to typing in variables--like arrays or objects--and seeing that what they look up to is a large amount of data. But a seemingly simple type class like ANY-TYPE! isn't expected to have this "explosive" character...and it impedes readability.
If you look closely you'll see another problem sneaking in, hinted at by R3-Alpha's never-implemented UTYPE! (for "user-defined type"). Had it been implemented, this design would suggest that all user-defined types be considered equivalent in a TYPESET!. If you took one user-defined type as a parameter, you would have to accept them all, and filter after the fact.
(In fact, extension types in Ren-C--which do exist--have this very problem. If you pass a GOB! to a native routine that expects a VECTOR! or a STRUCT!, it will currently crash. This hasn't really come up yet because few are working with those types. But it's something that a real answer to typesets would have to address--which is part of why I'm mentioning all this.)
This doesn't even touch upon the idea of "type-classes"...
e.g. if you decide to make a base object with something like book!: make object! [...] and later instantiate it with make book! [...], this "book!" is the kind of thing some languages would consider a class. You might want to write a routine like library-checkout: function [b [book!]] [...]. But there is no facility for this.
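Just to make the gap concrete, here's a minimal hand-rolled approximation of what you're forced to do today (BOOK?, LIBRARY-CHECKOUT, and the fields are made up for illustration): a predicate function plus a manual check inside the body, instead of a [book!] annotation the type system actually understands.

book!: make object! [title: "" author: ""]

book?: func [x] [
    did all [object? :x  in x 'title  in x 'author]  ; crude structural check
]

library-checkout: function [b] [
    if not book? :b [fail "LIBRARY-CHECKOUT expects a BOOK!"]
    print ["checked out:" b/title]
]

Instances made with make book! [...] pass the check, but nothing ties the predicate to the prototype--which is exactly the relationship a real type-class facility would capture.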
But... "derived binding" in Ren-C laid some groundwork for understanding derivation. It was done for efficiency: the system tracks the relationships so that references to base-class members can be forwarded down to the instance. That avoids having to deep-copy every member function of an object each time a new instance is made...just so those functions can refer to the derivation's variables. Yet the relationship that has to be encoded to accomplish this could also serve as a type test, to check whether something comes from a given type hierarchy.
So the mechanics are there...and it seems it would be cool to implement. But again, that depends on a notion of what a "typeset" actually is, which is the limiting factor.
And what about "type tests"...?
Still another question comes up for tests that are basically functions. How about even-integer!, or block-2! for a block containing exactly two elements? These seem very useful, though potentially dangerous if the function has side effects... which leads one to wonder if there should be a PURE annotation for functions that promise not to have side effects: they would take all their parameters as CONST, call only other PURE functions in their implementations, and not look at any variables besides their parameters unless those variables have been permanently LOCK'd.
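As a rough sketch of the direction (the names are invented here, and nothing below actually enforces purity--these are ordinary functions standing in for PURE ones):

even-integer?: func [x] [
    did all [integer? :x  even? x]  ; ALL short-circuits, so EVEN? never sees a non-integer
]

block-2?: func [x] [
    did all [block? :x  2 = length of x]
]

If parameter specs could name such predicates, even-integer! and block-2! would just be the inert forms that look these functions up--the side-effect question is what makes PURE (or something like it) matter.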
I actually think "type test" may be the fundamental thing to be looking at, instead of some nebulous TYPESET! construct. If type tests can be held onto by name, and that name looks up a PURE function (approximated in the near term by a regular function with a pinky promise to keep it pure), then perhaps that is better?
This might point in a direction more like:
integer!: &[integer] ; fundamental
any-value!: &(any-value?) ; type function
You could then write a native whose implementation was something along the lines of:
set-typeset: func [name [word!] types [block!]] [
    types: reduce types  ; evaluate the words to the type values they hold
    m: make map! length of types
    for-each t types [
        if not sym-block? :t [fail [:t "is not a datatype"]]
        m/(t): true  ; mark this type as a member of the set
    ]
    set name func [t] [m/(:t)]  ; the named type test is just a map lookup
    return reduce &(name)  ; hand back the &(...) form, e.g. &(any-scalar?)
]
Then imagine you said something like:
>> any-scalar!: set-typeset 'any-scalar? [integer! decimal! ...]
== &(any-scalar?)
You'd end up with an ANY-SCALAR! definition that was not entirely illegible...an inert value that looks up to a type-checking function. Internally, the system could optimize this... imagine a generic MAPCHECKER function dispatcher that the evaluator recognizes, so it doesn't even need to call the function and can go straight to the MAP!.
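For what it's worth, here's a hedged sketch of why that recognition is plausible (MAKE-MAP-CHECKER is a hypothetical helper, and plain words stand in for the &[...] type values): if every type test generated this way shares one body shape over a MAP!, an evaluator could learn to spot that shape and probe the map directly instead of making the call.

make-map-checker: func [m [map!]] [
    func [t] [did select m :t]  ; every checker shares this exact shape
]

any-scalar?: make-map-checker make map! reduce ['integer! true 'decimal! true]

A call like any-scalar? 'integer! is then answerable purely from the map--the function is just the user-visible name for it.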