Compatibility Idea: IS and ISN'T replacing == and !==, not = and <>

I think it's untenable for == and != to be in the language when they aren't actually a pairing (the complement of == is !==, not !=). And I've said before that == does not look literate; it reads too much like a divider.

What I had proposed was having = be strict equality, and is be lax equality. The wordishness of is made its fuzziness seem more appealing, and that appeal would probably be to the types of people who would like laxness in their comparisons. Those who prefer the symbol might (?) prefer strictness as a default. (This is a hypothesis, not tested.)

But having written Making the Case for Caselessness, I argue that case insensitivity has some legs. Maybe enough that we need not be ashamed to have = be a "lax" equality operator, in the sense of comparing case-insensitively...anyway.

Which raises a question. What if is and isn't were the strict equality operators, and =, !=, and <> remained lax?

It still kills off == and !==. And it's much more compatible with history: with = used implicitly as the comparison operator in things like FIND and such, it can be argued that laxness takes primacy for the = symbol.
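In hypothetical console terms (none of this is implemented; it just spells out the arrangement being proposed, assuming = stays case-insensitive and IS requires an exact match):

```
>> "abc" = "ABC"     ; lax equality, tolerates casing differences
== #[true]

>> "abc" is "ABC"    ; strict equality, casing must match
== #[false]

>> "abc" isn't "ABC"
== #[true]
```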

All things being equal (ha ha), I would probably prefer it the other way around. Math explains what "equality" is, and it doesn't have a "stricter" form. But "is-ness" is something more vague to define.

But there's code and expectations out there. So perhaps this is the way to go. Once you've taken == and !== out of the picture, I think it may become one of those too-close-to-break-even changes. And Beta/One philosophy is that we just stick to status quo in that case.

When I looked at the examples on chat, this way looks much better to me.


Looking at how it reads, is seems to imply an exact match more than = does; i.e. the value either is or isn't ("to be or not to be") the thing it's being compared to.


It's hard to say whether I'd see it that way if I were looking at it from first principles or not.

Given that I can't really tell, the path of least resistance is to keep = lax, make is strict.

This has been tumbling around in my mind a while, and I guess it's a winner. The only gray area is that other languages have taken is to mean what SAME? means (identity).

I prefer the prefix forms being something like is? and isn't? instead of strict-equal? and strict-unequal?. That seems clearer than commandeering same? and different? for strict equality and then having to find some other name for today's SAME? (like ALIAS-OF?).
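A sketch of how the names would hypothetically line up (again, proposed rather than implemented):

```
>> is? "abc" "abc"       ; prefix form of strict equality
== #[true]

>> isn't? "abc" "ABC"    ; strict, so the casing difference matters
== #[true]

>> strict-equal? "abc" "abc"   ; the wordier spelling it would replace
== #[true]
```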

How lax should lax = be?

People keep wanting to say that 1 = 1.0 ... despite those being two different datatypes historically (vs. a unified NUMBER!). There has also been talk of letting "A" = first "abc", where a single character string is convenient to compare. (&A = first "abc" would be another possibility with my pet HTML entity for & proposal)

But even though I have made the case for caselessness, I am skeptical that = should treat words/strings as equal when they differ in type as well as casing. foo: can equal Foo: and FOO:, but I am not pleased with:

rebol2>> (first [foo:]) = (first [:FOO])
== #[true]

It's a bit of a pain to have to canonize types to match for the cases when this is what you want to ask. Perhaps another concept that is not equality is needed here.

We could have x like y for this, and maybe it shortens to x ~ y for greater brevity. Then have UNLIKE and !~. So perhaps the only thing that is lax about plain EQUAL? is its tolerance for the human concept of casing?
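As a hypothetical sketch of that division of labor (assuming = stays case-lax but type-strict, while LIKE disregards both):

```
>> (first [foo:]) = (first [:FOO])      ; different word datatypes
== #[false]

>> (first [foo:]) like (first [:FOO])   ; LIKE overlooks type and case
== #[true]

>> (first [foo:]) ~ (first [:FOO])      ; proposed shorthand for LIKE
== #[true]

>> (first [foo:]) unlike (first [:FOO])
== #[false]
```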

Predicates may make it easy

If we remember the concept of predicates, you could get this pretty easily if you felt it was important:

>> find [a: 10 b: 20] 'b
; null

>> find [a: 10 b: 20] /like 'b
== [b: 20]

Remember that these slots are hard-literal <skip>-ables which only match PATH! (or whatever winds up being used), so evaluations, including quotes, can defeat it:

>> find [a: /like b: 20] '/like
== [/like b: 20]

So it could be LIKE that thinks of 1 and 1.0 as being the same despite their different datatypes, and likewise FOO and fOo:.
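In this hypothetical arrangement, the numeric case would come out as:

```
>> 1 = 1.0        ; INTEGER! vs. DECIMAL!, so type-strict = says no
== #[false]

>> 1 like 1.0     ; LIKE overlooks the datatype difference
== #[true]

>> 'FOO like (first [fOo:])   ; likewise for word type and casing
== #[true]
```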