(Update 2022: This circa 2019 meandering thread started with a simple question, that led to a lot of experiments that may have confused matters more than clarified them. So read at your peril. The current state of the art proposal involves "isotopic groups" and is a completely new direction.)
The following Redbol-ism long seemed unnatural to me: defaulting to breaking paths apart unless you say /ONLY:
rebol2>> append [a b c] first [d/e/f g/h/i]
== [a b c d e f]
rebol2>> append/only [a b c] first [d/e/f g/h/i]
== [a b c d/e/f]
This raises the question of why blocks should default to splicing, which confuses many a newcomer... and which I was opposed to when I first saw it:
rebol2>> append [a b c] first [[d e f] g/h/i]
== [a b c d e f]
Ultimately, using Rebol in practice made me feel splicing was the natural default for many block operations. But something long remained uncomfortable about this pattern, which applies to other routines with this kind of /ONLY (FIND, for instance): it's hard to tell at the callsite what's going to happen when you are talking about data indirectly:
rebol2>> item: [a b c]
;... then much later, callsite might suddenly stop working generically
;... when you suddenly switch item to a block from something else:
rebol2>> find reduce [1 + 2 item 3 + 4] item
== none
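For contrast, here is a sketch of how the same callsite behaves in Rebol2 when ITEM holds a non-block value, which is why the breakage only shows up once ITEM is switched to a block:

```rebol
rebol2>> item: 10
rebol2>> find reduce [1 + 2 item 3 + 4] item
== [10 7]  ; the integer is matched directly

rebol2>> item: [a b c]
rebol2>> find reduce [1 + 2 item 3 + 4] item
== none  ; the block's *contents* are hunted for, not the block value itself
```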
The known existing schools of thought
1. "splicing should be the special operation" - you should have to ask for append/splice to get it, with all other appends defaulting to being /ONLY. More generically the term might be something like /multi, so it works e.g. with find/multi.

2. "there's something special about BLOCK!" - this could be thought of as reinforced logographically by the distinction of using brackets in the [o], that makes it work, so you just know they are weird.

2b. "there's something special about datatypes where space is the delimiter" - so BLOCK! and GROUP! both count.

3. "it's too late to change any of it, so leave it as is"
I was convinced #1 was probably not in Rebol's best interest. But then I disliked auto-splicing of PATH! enough that I rejected #3 as a least-favorite option. So at one point Ren-C was switched to using #2b. But...
Informing this further with a new use case
I just ran into a problem with my proposed usage of @[...] to be an irreducible capture of a datatype, that can also carry a quoting level. That doesn't work if @[...] blocks are considered a container that needs /ONLY for things like FIND:
find reduce [integer! quoted-word!] quoted-word!
Becomes:
find [@[integer] @['word]] @['word]
Today it assumes the @[...] is to be handled the same as a regular block. So this is morally equivalent to:
find [@[integer] @['word]] ['word]
Which acts the same as:
find [@[integer] @['word]] lit 'word
That doesn't do what's intended, and doesn't match the datatype. Yet it feels ever more haphazard to just pick a special-case rationale for each datatype.
What it makes me feel is that there's just something fundamentally wrong being glossed over.
Concept: expect [...] always, splice always, leverage :[...]?
It feels like "at the source level", you want to be able to see whether what you're passing along is going to be treated atomically or not. This is a parallel to other problems, like what caused "Backpedaling on non-block branches". That was mitigated with soft quoted branching, which carries some other advantages.
One idea could be that APPEND, INSERT, FIND, etc. always take BLOCK! and always presume splicing semantics.
>> append [a b c] [1 + 2 10 + 20]
== [a b c 1 + 2 10 + 20] ; compatible with history
>> append [a b c] 3
** Error: APPEND does not accept INTEGER! for its value argument
; Not compatible, but at least an error and not random new behavior
>> append [a b c] [3]
== [a b c 3] ; compatible with history
Also allow GET-BLOCK! to ask for reduction along with your splicing:
>> append [a b c] :[1 + 2 10 + 20]
== [a b c 3 30]
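If I'm reading the proposal right, the GET-BLOCK! form would just be shorthand for reducing before the splice, e.g. the same result in two separate steps (a sketch):

```rebol
>> append [a b c] reduce [1 + 2 10 + 20]
== [a b c 3 30]  ; same as the :[...] form, but REDUCE runs as its own step
```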
Blocks passed in as parameters (the only type which would be tolerated) would be spliced--since, as mentioned, the operation always takes a block, and blocks are always spliced:
>> items: [1 [2 + 3] 4]
>> append [a b c] (second items)
== [a b c 2 + 3]
But you could slip past this by using a GET-BLOCK! that has your expression in it...thus it would reduce and get spliced, but leaving the original alone...effectively an /ONLY:
>> items: [1 [2 + 3] 4]
>> append [a b c] :[second items]
== [a b c [2 + 3]]
Further radicalization - soft quote the second argument?
If you'd be willing to write append [a b c] (second items) always instead of append [a b c] second items, then all of the above is compatible with soft-quoting. You could then use literal material as-is, which could work for BLOCK! and other things:
>> append [a b c] '[1 + 2 10 + 20]
== [a b c [1 + 2 10 + 20]]
>> append [a b c] '3
== [a b c 3]
As with branching, non-quoted things would complain if you didn't give it a BLOCK!:
>> item: 10
>> append [a b c] (item)
** Error: APPEND only takes BLOCK!, GET-BLOCK!, or QUOTED!
>> append [a b c] [item]
== [a b c item]
>> append [a b c] :[item]
== [a b c 10]
>> append [a b c] '10
== [a b c 10]
>> item: [1 + 2 10 + 20]
>> append [a b c] (item)
== [a b c 1 + 2 10 + 20]
Just as IF's rule applies only to its branch and not to its condition, the rule here would apply only to the thing being appended, not to the series:
>> target: "abcd"
>> append target ["efg"]
== "abcdefg"
>> append target '{hij}
== "abcdefghij"
Pros
- You aren't confused when you see append x (y) about what Y is going to look up to, because if it weren't a block, that would be an error. Being introduced from day one to APPEND+INSERT+CHANGE as operations that expect a block of things to be appended... and FIND+SELECT as taking a block of things to be found... might seem strange to us now, but I think the net complexity drops compared to /ONLY and the problems it causes.

- Kills off the idea of /ONLY and all the mire that accompanies it. It is so easy to make mistakes with it, no matter how experienced in Rebol you are. I don't feel any satisfying solution has ever been articulated for it.

- Has a good alignment with the [o] meaning BLOCK! is a special datatype. Sets up a psychological basis for working coherently, and hopefully not making mistakes down the road. Suggests people systemically use blocks to represent groups of parameters as arguments to functions, instead of single items... a better principle to embrace than expecting them to realize they should design every routine with an /ONLY option... which was a bit like the /INTO virus.

- Can cover REPEND-style cases where the block being appended is source-level with expressions (likely the most common case). If the second argument is quoted, the operation could blend the evaluator in, even if that evaluation is just to reduce a variable name to the item to append. There's been some concern about performance when REDUCE and APPEND are separate natives executed in two steps, whereas a reducing APPEND that saw the source GET-BLOCK! could build a right-sized block more efficiently.

- Single-element blocks are pretty efficient; more so than refinements. I've mentioned how the design has been set up such that :[a] fits in a single series node, so it's not horrible to need to say append data :[item] instead of append data item... it's actually better than append/only data item. And quoted things are efficient too, so append data '10 costs the same as append data 10.
And across the range of allowed inputs, it would be compatible with historical code. The Redbol emulation would be fairly trivial:
redbol-append: function [series value /only] [
    append series case [
        only [:[value]]  ; reduces to a BLOCK! holding VALUE, spliced as one item
        any-array? value [as block! value]  ; force path/group to block for splicing
        default [:[value]]  ; put anything else into a block
    ]
]
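If the emulation above works as intended, it would reproduce the historical defaults shown at the top of this thread (a sketch of expected behavior, not tested):

```rebol
>> redbol-append [a b c] first [d/e/f g/h/i]
== [a b c d e f]  ; path forced to block, then spliced

>> redbol-append/only [a b c] first [d/e/f g/h/i]
== [a b c d/e/f]  ; path wrapped in a block, appended as one item

>> redbol-append [a b c] 3
== [a b c 3]  ; non-array wrapped in a block, appended as one item
```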
Cons
Obviously asking people to write find data [<x>] instead of find data <x> seems blocky, and find data :[var] instead of find data var is uglier. Plus append string ["stuff"] feels a bit wordy, and append string '{stuff} makes you change your string delimiter. This might be mitigated some by being able to say append string 'stuff and allowing WORD!-based appends.
...Or perhaps being more lenient when the target is a string type, so the /ONLY distinction wouldn't exist in the first place...so allow any type? e.g. enforce the "must be a block rule" for the argument only when modifying or searching in arrays?
While the soft quoting is not integral to the proposal, it has that problem of making you say append data (second items) instead of append data second items. Regardless of whether one thinks the soft quoting is a good idea, there's much more information there... you know the second thing in items is a block and it will be spliced. If you saw append data :[second items] you don't know what the thing is but you know it's not going to be spliced.
For those not going with Redbol emulation, it could be a fresh start
Killing off /ONLY feels like a noble cause to me.
This line of thinking reminds me a bit of the train of thought behind saying that if you have an argument that can take ANY-VALUE!, then you do not use the blank-in-null-out convention for such arguments...you have to switch to thinking of null as some kind of nothing, or accept erroring on null.
So maybe we could consider this a variant of 2 above:
2c. BLOCK! has a special use by convention in the language as a "generic container of N things". You should generally not make a parameter take ANY-VALUE! if that argument intends to use this meaning of BLOCK!. Instead you should always take a block, even if just to provide a single thing, for clarity at the callsites.
2c.1. If your use case fits it, you can use soft-quoting to allow QUOTED! values to indicate single items as a shorthand--at the cost of having to put expressions producing BLOCK!s into GROUP!s.
2c.???. (maybe?) If you have a span of target cases where multiple items cannot be handled differently from single items (e.g. find of a TEXT! in a TEXT! has no distinct meaning from find of a BLOCK! with that TEXT! in it, where an /ONLY refinement would have no meaning) then these cases may be tolerated as missing the BLOCK! container for single elements.