R3-Alpha had a lot of redundant code. If you said `clear series` you were effectively doing the same thing as `change/part series tail series`. And if you said `remove/part series 4` you were doing the same thing as `change/part series 4`.
(Note: today the latter could also be said more "obviously" as `change/part series null 4`.)
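To make the overlap concrete, here is a sketch of the equivalences being described, using a block as the series. (The null forms rely on the "null-specialization hack" discussed below, so treat this as illustrative rather than guaranteed behavior.)

```rebol
data: copy [a b c d e f]

clear data  ; empties from the current position to the tail

; ...which is effectively the same operation as:
;
;     change/part data null tail data

data: copy [a b c d e f]

remove/part data 4  ; removes the first 4 items

; ...which is effectively the same operation as:
;
;     change/part data null 4
```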
When you have redundant code, slightly different things happen on each path. That makes it harder to test, and it means any optimizations you apply don't get applied evenly. For instance: CLEAR would reset a series's "bias", but a REMOVE or CHANGE that effectively emptied the series would not.
I tried redefining REMOVE + CLEAR in terms of CHANGE, but...
...and doing so pointed out some problems with the idea of taking narrower operations and defining them in terms of more general ones.
As an example: using the null-specialization hack I described, I tried defining CLEAR to be CHANGE to NULL with a /PART that is the TAIL of the input. Yet that doesn't work for MAP!, which has no notion of a position to take the tail of. Similar problems exist for types like BITSET!, and for operations like REMOVE when defined in terms of CHANGE.
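A rough sketch of what that attempted definition looks like, composed with SPECIALIZE and ADAPT (the argument names `value`, `part`, and `series` are assumptions, and the null specialization is the hack described above):

```rebol
; CLEAR as "CHANGE to null over everything from here to the tail"
my-clear: adapt (specialize :change [value: null]) [
    part: tail series  ; breaks for MAP!/BITSET!: no position, so no TAIL
]

; REMOVE has the same shape of problem when built from CHANGE:
; /PART says how much to remove, but the value is still fixed at null
my-remove: specialize :change [value: null]
```

The composition itself is the easy part; the sticking point is that TAIL only makes sense for types with a position.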
It's possible to shuffle things around so that these types only support CHANGE with no /PART and assume you mean "change everything". But that's fairly inconsistent.
We wind up saying: some things can be "cleared" but cannot be "changed". That gets shaky, because these words are being reused and applied to things that do not obey a common "series interface".
If sharing can be achieved in such an environment, it seems like there needs to be some kind of decision tree of fallbacks. For example, a datatype could try to understand CLEAR directly, and if it doesn't know how (or lacks a specific optimization for it), the system would then see if CHANGE to NULL was available. Or the CLEAR-to-CHANGE translation could be tied to an operation common to ANY-SERIES!, with other types hooking in differently.
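A purely hypothetical picture of that fallback tree (`supports?` and `dispatch-generic` do not exist, they just stand in for whatever dispatch machinery would do this):

```rebol
generic-clear: func [target] [
    case [
        supports? target 'clear [  ; type understands CLEAR natively
            dispatch-generic target 'clear
        ]
        supports? target 'change [  ; fall back to CHANGE-to-null
            change/part target null tail-of target
        ]
        true [
            fail ["No way to CLEAR a" type of target]
        ]
    ]
]
```

Even in sketch form you can see the wrinkle: the CHANGE fallback still assumes TAIL-OF works, so types like MAP! would need a different hook.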
I really don't know. What I do know is that it looks pretty complicated, and we don't have great answers at the moment. On the other hand: it's nice that specialization is working and I can try things like this, but it doesn't feel like enough questions have been answered yet about the bigger semantic model of how these generics work.
(If you missed my post on user-defined types and related issues, e.g. how ADD currently tries to "generalize" but fails, see User-Defined-Type Scenarios Solicited)
So I'm backing off on the shared specialization attempt for the moment. It's good experience, and every time we try it we get to see how well SPECIALIZE is holding up, etc. But it's not the most important thing to be doing right now, and I've already failed at one other thing I tried today... so... time to do something more feel-good.