Isotopes started small, but are now an integral part of the design. (If you are skeptical of weird new things, give a close read to "A Justification Of Generalized Isotopes" until you realize their importance.)
They're showing amazing results, but they do raise a number of questions.
One thing is set in stone about them: you can't store them in arrays.
Sometimes that's because storing operations will interpret them as a kind of "instruction". For instance, isotopic GROUP!s are interpreted as "splices":
    >> spread [d e]
    == ~(d e)~  ; isotope

    >> append [a b c] spread [d e]
    == [a b c d e]
Isotopic BLOCK!s are "packs"... which resolve to their first item, unless used in contexts that specially understand how to unpack them (such as in combination with a SET-BLOCK!).
    >> pack [d e]
    == ~['d 'e]~  ; isotope

    >> append [a b c] pack [d e]
    == [a b c d]

    >> [x y]: pack [d e]
    == d

    >> x
    == d

    >> y
    == e
But a generically weird isotope that the callee does not understand just gets rejected:
    >> unmeta '~something~
    == ~something~  ; isotope

    >> append [a b c] unmeta '~something~
    ** Error: APPEND doesn't know what to do with a ~something~ WORD! isotope
First Question: Should Isotopes Be Assignable?
There's a nice aspect to being able to void out variables like this:

    my-var: ~

That's prettier than unset 'my-var, and it is more obviously an assignment.
But what if you'd put a QUASI! of an isotopic block there?
    my-var: ~[d e]~
The idea is that QUASI! isn't inert, but that it actually evaluates to an isotopic form:
    >> ~[d e]~
    == ~[d e]~  ; isotope

    >> pack [d e]
    == ~[d e]~  ; isotope
So the assignment would be like if you'd written:

    my-var: pack [d e]

...which pretty clearly should set MY-VAR to D... not the isotope ~[d e]~.
Right now, the trick is that a special exception is made when a literal quasiform is to the right of a SET-WORD!, and it's meant as a SET/ANY that doesn't do this kind of decay.
I've found this trick is brittle and difficult to abstract. If you write something like the EXPORT keyword, and you want

    export var: ~foo~

to be compatible with

    var: ~foo~

then it becomes difficult to inherit that tolerance. You can't get the SET-WORD! var, evaluate the right-hand side, and then expect a SET to tolerate the isotopic assignment.
Going Further... What About Type Tests...or Equality?
If we look at historical Redbol, we can see there was some tolerance of "UNSET!" values by the type tests:
    rebol2>> type? print "HI"
    HI
    == unset!

    rebol2>> integer? print "HI"
    HI
    == false
This tolerance did not extend to things like equality operators:
    rebol2>> (print "HI") = (print "HI")
    HI
    HI
    ** Script Error: Operator is missing an argument
Red and R3-Alpha deviate on this, allowing "unsets" to be compared for equality:
    r3-alpha/red>> (print "HI") = (print "HI")
    HI
    HI
    == true
In the Ren-C world, permitting isotope comparisons is a slippery slope. To see why, consider an isotopic block representing a parameter pack: the semantics of the comparison should apply to the first item only.
    ren-c>> (pack [1 2]) = (pack [1 3])
    == #[true]
My terminology would be "comparison forces decay". If you don't want that you need to ^META the things you are comparing, so you're comparing non-isotopic quasiforms:
    ren-c>> ^(pack [1 2])
    == ~[1 2]~

    ren-c>> ^(pack [1 3])
    == ~[1 3]~

    ren-c>> ^(pack [1 2]) = ^(pack [1 3])
    == #[false]
It seems that if this kind of comparison lacks fidelity for one kind of isotope, it is suspect for other isotopic forms as well. So comparison would not take its parameters as ^META, and would thus error on non-decaying forms... although...
The Isotopic LOGIC! Proposal Shakes All This Up
I've proposed the idea that ~true~ and ~false~ isotopes are actually relatively friendly... as isotopes go. They would not cause errors when accessed from WORD!s, they would be legal to assign via SET-WORD!, and we'd assume they would have to compare as well. But their inability to be put in BLOCK!s or similar would be an asset, stopping accidental leakage of the "ugly" forms into data interchange... forcing resolution to WORD!s (or at least, purposefully, to the quasi-words ~true~ and ~false~).
It (probably?) doesn't seem like a great idea to make an exception for just those two WORD! isotopes... better to generalize the friendliness to any WORD! isotope. They'd keep the array unfriendliness, but also gain a type-checking unfriendliness by not being in the ANY-VALUE! class... only functions that asked for them would take them.
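To make the proposal concrete, here's how a friendly WORD! isotope might behave under it. (This is a purely hypothetical transcript sketching the proposal, not current behavior; the error message is invented.)

    >> flag: ~true~       ; assignment would be legal
    == ~true~  ; isotope

    >> flag               ; WORD! access would not error
    == ~true~  ; isotope

    >> flag = ~true~      ; comparison would be allowed
    == #[true]

    >> append [a b] flag  ; ...but array storage stays off-limits
    ** Error: Cannot put ~true~ WORD! isotope in a BLOCK!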
And Don't Forget ERROR!s...
There's been a huge improvement from definitional error isotopes, with the impacts felt on a daily basis. But of course, they throw some more curveballs in.
Assignment via SET-WORD! of an error isotope that hasn't been ^META'd needs to raise that error.
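As a sketch of that rule (hypothetical transcript; RAISE is assumed here as the way of producing a definitional error isotope, and the rendering of the meta'd error is elided):

    >> e: raise "oops"    ; un-^META'd error isotope reaches the SET-WORD!
    ** Error: oops        ; ...so the assignment itself raises the error

    >> e: ^ raise "oops"  ; ^META gives an inert quasiform, safe to store
    == ~make error! [...]~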
How To Tame This Zoo?
One idea I came up with was a notation for isotopic assignments: a quasi-word inside a block on the left-hand side (the [~var~]: pattern used in the examples below).

That looks good at first... but what it's really saying is that if the right-hand side was a pack with an isotope inside it, then the inner isotope would be tolerated in the assignment.
It gets a little more elaborate, but you can use this to represent any isotopic assignment. Let's say that assigning a splice directly wasn't legal: you could put the splice in a pack, and then assign through that:

    [~var~]: ~[~(d e)~]~
That may be convoluted, but it's better than not being able to solve the representation problem at all!
We wouldn't see such things that often if we decided to cave and say that voiding variables or WORD! isotopes were going to be legal. Much more common would be things like this:
    var1: ~
    var2: ~true~
You'd only see them when "problematic" isotopes were held in variables, e.g. assigned via SET/ANY (or assigned via a pack to begin with).
It's just slightly dismaying to think of people trying to write truly general code...ending up having to hammer out that ugly pattern:
    compose/deep [
        [~(name)~]: ~[(meta ...whatever...)]~
    ]
I think that's just life. And really, it's the tip of the representational iceberg...we already know that you can't round-trip functions or objects or other things in this way. It's exaggerated for me because I try to think of how MAKE OBJECT! [...] renders in a general sense. But it's probably not the case that many people are going to need (or want) to do this kind of thing.
Does That Answer Anything About Type Tests?
What actually triggered me to write about this was dealing with some code like this:
    if null? do code [...]
The code was returning VOID...which is the isotopic state of NULL. And the NULL? test was erroring, because it was only designed to operate on non-meta values.
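A minimal reproduction of that situation might look like the following. (Hypothetical transcript; the error text is invented, and VOID is assumed as a way to produce the void state.)

    >> code: [void]

    >> null? do code      ; DO returns void, the isotopic state of null...
    ** Error: NULL? doesn't accept its argument being a VOID isotope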
Let's not consider the NULL? question as special--let's imagine it could have just as easily been INTEGER? or somesuch.
There is a bit of a problem with any function that takes an isotope as a parameter. So think of:
    >> foo: func [x] [return pack [x + 1 x + 2]]

    >> pack? foo 10
    == #[true]

    >> integer? foo 10
    == #[true]

    >> x: foo 10
    == 11

    >> y: ^ foo 10
    == ~[11 12]~
Does it make sense for there to be something like PACK? which has this isotopic X-ray vision? Or should PACK? only exist as a question you can ask of a meta-value ("metapack?")?
Having more of the functions like INTEGER? and NULL? trigger alarms on more isotope types helps create a spectrum of warnings that guide generic code that it's likely asking dangerous questions.
But it seems to me that anything which is tolerated in plain assignment without error should also be tolerated in type testing without error.
It looks to me like ~ and ~word~ fit into that set.
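Under that rule, type tests might behave along these lines. (Hypothetical transcript sketching the proposal rather than current behavior; the error message is invented.)

    >> integer? ~             ; void is tolerated in assignment, so tolerated here
    == #[false]

    >> integer? ~true~        ; likewise for WORD! isotopes
    == #[false]

    >> integer? spread [d e]  ; splices aren't assignable, so this errors
    ** Error: INTEGER? doesn't accept a ~(d e)~ GROUP! isotope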
We know that packs decay, and errors will actively fail on those errors...without the "pack inside a pack" or "error inside a pack" loopholes. But is it worth it to stop other isotope classes, or should it be something that's only handled on the getting side?
A lot of this likely ties into the answer for how function isotopes pan out. So that needs revisiting.
However, I'm glad for the singular pack solution to the edge cases!