What Should TYPE OF an Antiform Be?

At this exact moment (Sep 2022)... TYPE OF any antiform is an error, while both TYPE OF NULL and TYPE OF VOID give back NULL.

OF is a generic operation (the enfix form of REFLECT, which quotes the word on its left). It may be desirable to honor the void-in-null-out convention for all the other reflectors that aren't TYPE... and it's desirable to error on NULL more generically.

>> label of null
** Error: You didn't give anything

>> label of maybe null  ; MAYBE NULL is VOID 
; null

So if TYPE OF follows the same pattern as the other XXX OF reflectors, we'd surmise that you don't use TYPE OF to discern NULL from VOID. It errors on NULL input, and gives you back NULL if you MAYBE it.
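Following that pattern through, the hypothetical behavior would look like this (a sketch of the convention being surmised, not current behavior):

 >> type of null
 ** Error: You didn't give anything

 >> type of maybe null  ; MAYBE NULL is VOID
 ; null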

But what happens when you ask:

>> spread [d e]
== ~(d e)~  ; anti

>> type of spread [d e]
???

The Original Plan Was That No Arguments Received Antiforms

In the original conception, function frames weren't capable of holding antiforms in the arguments. You physically could not receive a parameter that was an antiform.

I was also looking at the idea that some antiforms--such as raised ERROR!--would be completely impossible to get into a variable, ever.

The only workaround was if a function used the ^META parameter convention, in which case an antiform would come in as a QUASI! form of the value... while normal values would come in quoted one level higher than they were:

 >> detector: func [^x] [print ["Meta of X:" mold x]]

 >> detector [d e]
 Meta of X: '[d e]

 >> detector spread [d e]
 Meta of X: ~(d e)~

Ultimately I backed down on this, instead allowing you to use type predicates to narrow which antiforms you'd be willing to accept:

>> splicetaker: func [x [any-value! splice?]] [
       append [a b c] :x
   ]

>> splicetaker [d e]
== [a b c [d e]]

>> splicetaker spread [d e]
== [a b c d e]

A primary driver behind this change was that operations which wanted to do things like ADAPT a function frame were having to become sensitive to whether a parameter was ^META or not. It seemed that standardizing the frame in a way that permitted antiforms as currency made more sense than having arguments be sometimes-meta'd, sometimes not.
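To illustrate the kind of sensitivity meant here: generic frame-manipulating code would have needed per-parameter branching. This is only a sketch of the shape of the problem, assuming a hypothetical META-PARAMETER? test (not a real function) and illustrative frame-slot assignment:

 ; hypothetical: poking a value into a frame slot generically
 poke-argument: func [f [frame!] param [word!] value] [
     either meta-parameter? f param [  ; META-PARAMETER? is made up
         f.(param): ^ value  ; a ^META slot would store the meta'd form
     ][
         f.(param): value  ; a normal slot would store the value as-is
     ]
 ]

With all frame slots standardized to hold antiforms directly, no such branching is needed.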

(Note: A later driver of this was that LOGIC became implemented with antiforms, and needing to make a parameter meta to take logic was another bridge-too-far.)

What if OF (REFLECT) Didn't Take Antiforms?

So we could say that if you think you have an antiform in your hand, you're responsible for ^META-ing it yourself:

>> metatyper: func [x [any-value! splice?]] [
       print ["Metatype of X is" type of ^x]
   ]

>> metatyper [d e]
Metatype of X is &['block]  ; TYPE OF received a QUOTED!, so the answer incorporates the quote

>> metatyper spread [d e]
Metatype of X is &[~block~]  ; TYPE OF received a QUASI!, so the answer incorporates the quasi

On the plus side of such an approach, we don't have to invent any type representations for antiforms.

Devil's advocacy here 👺 would say that if parameter types need to be able to take antiforms, then the type mechanics have to be able to represent the "antiform types" somehow.

And if there's a representation, there would be uses in usermode for it. But it would have to be a special operator, and what it returned probably shouldn't be "a type" in the conventional sense.

For instance: It would be disruptive to do something like suddenly say all the types are actually "one level quoted up"...just to get this representation at quote level 0. This would give antiforms the &[~xxx~] representation, but at the cost of making everything else skew with an extra quote:

BAD WAY:

>> type* of first [(1020)]
== &['group]

>> type* of first [''(1020)]
== &['''group]

>> type* of first [~(1020)~]
== &['~group~]

>> type* of spread [d e]
== &[~group~]

That's pretty awful and misleading, just to be able to get that last representation...which you generally wouldn't use for anything beyond comparison anyway.

What if the result were itself a QUASI! of the type?

If only used for comparison and never for matching, then is there a reason why TYPE*'s answer has to be an actual TYPE-BLOCK! in the case of antiforms?

>> type* of first [(1020)]
== &[group]

>> type* of first [''(1020)]
== &[''group]

>> type* of first [~(1020)~]
== &[~group~]

>> type* of spread [d e]
== ~&[~group~]~

That's some fairly outside-the-box thinking (pun intended), and it keeps the other types looking appropriate for their input. It has the nice touch that while all the real datatypes are unevaluative, the quasiform would evaluate into something fairly nasty, showing that you're probably confused. So while it isn't an antiform (and you wouldn't want it to be), it has that going for it, while still allowing its raison d'être of comparison:

splice!*: '~&[~group~]~
activation!*: '~&[~action~]~

switch type* of :x [
   splice!* [...]
   integer! [...]
   activation!* [...]
]

There I'm spitballing the idea that the antiform "types" are made to stand out by having an asterisk after them.

These quasi forms wouldn't actually be datatypes...

This would mean typical things designed to react to TYPE-BLOCK! wouldn't react to these.

But is that a bad thing? Consider trying to use it in PARSE:

>> splice!*: '~&[~group~]~

>> parse [? ? ?] [some splice!*]
; ** What were you expecting to do with it if it WAS a datatype? **

You can't match it against anything you'd find in a block, only evaluative products. No harm done in PARSE by not being a "real type".

On the other hand, you might imagine using it with something like MATCH:

 >> match splice!* spread [d e]
 == ~(d e)~  ; anti

So MATCH could be written to say it takes these forms (let's say we call them "antiTYPES"):

match: native [
    test [type-block! antitype? block!]
    value [any-value! any-antiform!*]
]

Should it go this far? I don't know.

I'm reluctant to see [any-value! any-antiform!*] typesets popping up everywhere. Part of the whole religion of antiforms is that they only show up in arbitrary outputs, and anything that takes antiforms in will only take very specific ones...as an alternative to trying to slipstream that state into a refinement or something.

But...I've struggled with some convolutions you get when you try to write code without being able to branch on types including antiform ones. And as I mentioned, typesets kind of need it too in order to work internally.


How Might TYPE* Inform TYPE OF VOID and TYPE OF NULL?

Above I suggest an XXX OF consistency, with TYPE OF NULL as an error and TYPE OF VOID as NULL.

If that were so, and TYPE* OF VOID is not NULL, then VOID would be the only case where TYPE* OF would change an answer from what TYPE OF would say.

Does that suggest not allowing plain TYPE OF either NULL or VOID, and requiring you to use TYPE*, to get some kind of reified answer? (This would introduce null!* and void!*)

Or does it suggest allowing NULL and not VOID? (This would introduce null! with no *, and void!*)

>> type of null
== &[null]

>> type* of null
== &[null]

>> type of void
** Error: Use TYPE* to take type of antiforms if intentional

>> type* of void
== ~&[void]~

But I've Always Been Uncomfortable With NULL!

When discussing why I made the <opt> tag instead of null!, I said:

I Dislike The Idea Of TYPE* Changing Answers

Imagine you had code like this:

 switch type of maybe (...) [
    null [...]  ; input was void, so void in null out
    integer! [...]
 ]

But one day you realize the input could be a splice or something, and so TYPE* gets swapped in:

 switch type* of (...) [  ; special form, distinguishes void and null
    null!* [...]
    void!* [...]  
    integer! [...]
    splice!* [...]
 ]

It seems to me that the meaning of NULL should not change, all things being equal.

🤯

This Is Very Aggravating... but...

I think I like TYPE OF NULL being NULL.

  • I like it even if it breaks a pattern in XXX OF where VOID input means NULL out, and NULL input is an error

  • TYPE feels like a fairly special kind of question, so it's probably okay

Internally we need a way to put nulls and voids into typesets, and I can imagine usermode cases where this is useful as well.

  • If TYPE* is able to give a reified answer of NULL!* and VOID!* this would provide some uses

  • This would imply that TYPE* OF NULL and TYPE OF NULL would be different answers

So what I'm going to try is a world where function parameters are powered by this TYPE* concept, give the same concept to usermode, and see what emerges from that.

I suspect I'm missing something here. But as with most things, I guess I just have to kind of try things and feel them out.

I believe what I was missing is that switch type of x [...] is really broken because it goes through a middleman... some concept of "type". You have to get a good answer from TYPE OF for everything, which you then switch on.

Better is something like switch/type x [...] where type constraints (compositions of arbitrary functions) can power the process with access to the full value.
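A sketch of what that might look like, combining ordinary types with constraint functions (SWITCH/TYPE as described here is a proposal, not an existing refinement):

 switch/type x [
     integer! [print "an integer"]
     &splice? [print "a splice antiform"]
     &any-odd? [print "an odd number"]  ; arbitrary function as constraint
 ]

Each test gets access to the full value, rather than switching on a pre-digested "type" answer.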

Type constraints are done with TYPE-WORD! (or TYPE-TUPLE! or TYPE-PATH!), and are just a way of referring to functions that perform the type test.

 >> parse [1 3 5] [some &any-odd?]
 == 5

There is no special "type" for a block that contains a single word... it's just "a block". Similarly, I'd say there is no special "type" for an antiform of a group... it's just "an antiform".

>> type of 10
== &[integer]

>> type of [a]
== &[block]

>> type of first ['a]
== &[quoted]

>> type of true  ; it's an antiform of the word TRUE
== &[antiform]

>> type of null  ; it's an antiform of the word NULL
== &[antiform]

Things like logic constraints should not be written as LOGIC!

I think this would be a bad idea:

 logic!: &logic?

Right now in my prototype, integer! is defined as &[integer], so you can use it in a switch statement as was done classically. But this is likely to lead to problems when people try to extend that to something named LOGIC!

 switch type of x [
     logic! [...]  ; can't work, logic! is &logic?, not &[antiform]
     even-integer! [...]  ; more obviously couldn't work
 ]

So I propose just not defining these ending-in-! aliases for type constraints. If you're writing code in a function spec, just use the predicate function directly:

 something: func [param [integer! logic?]] [...]

And if you're writing a PARSE rule or a MATCH, just use the extra & character. It adds a small amount of noise, but I quickly became comfortable with it.

>> parse [a b: @c] [some &any-word?]
== @c

This can be made backwards compatible in bootstrap Ren-C, because it lets you use ampersands in word names. So the workaround for bootstrap would be:

ren-c-old>> &integer?: integer!
== #[datatype! integer!]

We May Want to Call This KIND OF vs. TYPE OF

Over the long term, I'm not sure what sort of type system a Redbol-type language could have. We could imagine TYPE OF giving back a rich answer, as an &[object] describing the value's heritage.

KIND OF is a little narrower, not implying some elaborate type system. It would leave TYPE OF free for future use.


I should point out that there will still be functions that test for concrete types, like integer?. So people can use those functions in type specs, e.g. foo: func [x [integer?]] [...] instead of the type integer!. That's a bit of an annoying, uninteresting degree of freedom, but they can be made to perform equivalently.

So I'm a little conflicted here. Because if null has an answer to TYPE OF, then I like that answer to be NULL. But I don't like the idea of casually giving back an answer to TYPE OF for null, because it's supposed to be a kind of failure signal and draw attention to itself.

The idea of having a special "TYPE-BUT-IT-CAN-BE-NULL" question (like type* of) is one possibility for cases when you are sure that you intentionally want to tell if you have a null in your hand.

But I had what is probably a better idea...

What If We Used Definitional Errors, and TRY?

Let's say every value gives an answer to TYPE OF, except for the one case of NULL. But if you ask for it, TYPE OF raises a definitional error. Then you can TRY it to defuse the error... giving you back NULL!

 >> thing: null

 >> type of thing
 ** Error: TYPE OF NULL not legal (use TRY TYPE OF NULL if intended)

 >> try type of thing
 == ~null~  ; antiform

So you can write:

 switch try type of thing [
     null [... not a type, but you react to null ...]
     void! [...]
     splice! [...]
     logic! [...]
     etc.
 ]

You'd be guided by the TRY, and the fact that NULL! does not exist, to use the odd-man-out of NULL here.

That's actually pretty satisfying. But what if you're trying to get at the FIRST of the TYPE OF?

Well, that suggests that TYPE OF VOID has to be &[void]. Then if you write first maybe try type of (...), the TRY turns the error into null, the MAYBE turns the null into a void, and FIRST gives back null based on void-in-null-out. So VOID in gives you the word VOID out in that case, and NULL in gives you NULL out.
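Spelled out step by step (a sketch, assuming the behaviors proposed above rather than current behavior):

 >> first maybe try type of void
 == void  ; TYPE OF VOID is &[void], and FIRST of that is the word VOID

 >> first maybe try type of null
 ; null  ; TRY defuses the error to NULL, MAYBE makes that VOID, FIRST of VOID is NULL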

It's a bit of a mouthful, but nulls are supposed to be "ornery". If people don't like it, they can test for null independently of the switch, which is probably what a normal person would do.

(There's some little nagging part of me that wants type of void to be something like &[]. I've always had some kind of urge to avoid reifying the types of voids and nulls... as too much of "something from nothing". But it might be irrational. Though not wanting to be too casual about allowing TYPE OF NULL is not irrational, I don't think.)