I was trying to make some fairly generic routines that would accept either a WORD!, a TEXT!, a CHAR!, or an INTEGER!, and give you back either a WORD! or an INTEGER!.
At some point I ran up against this:
>> to integer! "1"
== 1
>> to integer! #"1" ; I wanted 1...
== 49 ; ...but I got a codepoint (the ASCII value for 1)
That is the status quo. But I thought to compare this with another possibility:
>> to integer! "1"
== 1
>> to integer! #"1"
== 1
>> codepoint of #"1"
== 49
With this, you could imagine passing either a TEXT! or a CHAR! as input to TO INTEGER!...not knowing which type you had...and either way getting a result with a consistent representational meaning.
But the status quo is unlikely to ever have a useful invariant like that. Even considering just these two cases, TO INTEGER! becomes a bizarre operation. If it had an honest name, it would be something like convert-decimal-string-to-integer-value-unless-char-in-which-case-codepoint.
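For comparison, most languages keep these two operations separate rather than fusing them into one polymorphic conversion. A quick sketch of the distinction in Python terms (just an analogy, not a claim about Rebol internals):

```python
# Decimal-string parsing and codepoint extraction are distinct operations,
# and Python exposes them under different names:
print(int("1"))   # parse "1" as a decimal number -> 1
print(ord("1"))   # get the codepoint of the character "1" -> 49
```

The proposal above amounts to giving each operation its own name (TO INTEGER! vs. CODEPOINT OF) instead of having the input's type silently pick between them.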
No one wants that operation. So of course you see it used in cases where people already know which type they have... something like to integer! my-string or to integer! my-char. All that's happening is that the short word TO is being leveraged to get at a frequently used integer property.
Yet it's not even that "short" once you have to add a type onto it. As CODEPOINT OF demonstrates, might there be clearer ways to say it at little or no extra cost?
TO INTEGER!
CODEPOINT OF ; just one character longer... and more explanatory
If you look at the difference ENBIN and DEBIN have made versus trying to pick arbitrary TO conversions, I think it tells a similar story.
So might the TO conversions be studied in such a way that there's some actual chance that accepting multiple types as input could be an asset instead of a liability?
I think this case of TO INTEGER! of #"1" is a good talking point for that.