In one of my Philadelphia talks, I quoted Paul Graham's essay "The Hundred-Year Language":
"I learned to program when computer power was scarce. I can remember taking all the spaces out of my Basic programs so they would fit into the memory of a 4K TRS-80. The thought of all this stupendously inefficient software burning up cycles doing the same thing over and over seems kind of gross to me. But I think my intuitions here are wrong. I'm like someone who grew up poor, and can't bear to spend money even for something important, like going to the doctor."
"The desire for speed is so deeply engrained in us, with our puny computers, that it will take a conscious effort to overcome it. In language design, we should be consciously seeking out situations where we can trade efficiency for even the smallest increase in convenience."
I've been struggling with this tradeoff myself. Consider, for instance, this scenario:
count-up x 1000000 [ let y: x + 1 print [y] ]
What I have explored is the question of making LET a dynamic construct. That dynamism would mean it wouldn't be scanned for in advance (the way historical FUNCT scanned for SET-WORD!s). It would actually bring a new variable and binding into existence at the moment of execution...basically just syntax sugar for what you'd get if you had written:
count-up x 1000000 [ use [y] [ y: x + 1 print [y] ] ]
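To make the per-iteration cost visible, here's a sketch (under historical Rebol binding semantics, which Ren-C's binding model may refine) showing that each trip through the loop gets a distinct y, because USE makes a fresh one-field context every time:

```rebol
; Assumed sketch: collect the word Y from each iteration's context.
; Each USE creates a new object, so each collected Y carries its own binding.
words: copy []
count-up x 3 [
    use [y] [
        y: x * 10
        append words 'y  ; appends Y bound to this iteration's context
    ]
]
; Reducing the collected words looks each one up in its own context,
; giving three independent values rather than three views of one cell.
probe reduce words
```

That independence is exactly what costs an allocation per iteration; a hoisted declaration outside the loop would have only one such cell for the whole run.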
It's uncomfortable to think that you'd be making a new object each time through the loop...an object with one field (y). That's a million objects being made that the GC will have to grapple with.
I'm working on making these objects as small as possible...and the GC will sweep them up. But people who have seen LET or LET-like constructs in other languages would not generally assume they'd cause the GC so much pain. It feels catastrophically worse than creating a single object by hoisting the declaration out of the loop:
let y count-up x 1000000 [ y: x + 1 print [y] ]
And if you were using func [... <local> y], that's even more efficient: it doesn't create a separate object identity at all, but piggy-backs on the frame (where the arguments are already stored).
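For comparison, here's a hedged sketch of what the <local> approach might look like if the loop lived inside a function (the name count-million and the limit parameter are just for illustration):

```rebol
; Assumed sketch: Y is a slot in COUNT-MILLION's frame, which already
; exists to hold the argument, so no extra object is made per iteration.
count-million: func [limit <local> y] [
    count-up x limit [
        y: x + 1
        print [y]
    ]
]

count-million 1000000
```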
Of course, we're dealing with an incredibly dumb "mark and sweep the world" GC right now, which only starts cleaning up when it hits a wall. If we were more clever, I'm sure there could be ways for the GC to localize most LETs and do zoning cleverness, with light pick-ups of short-lived objects.
I May Be Worrying Too Much
The size of the variable is smaller than the size of the frame created for the addition (e.g. by the +). If someone is at the level of optimizing away LETs, they'd make a much bigger difference by eliminating the block passed to PRINT:
count-up x 1000000 [ let y: x + 1 print y ]
(Note: PRINT requires blocks for anything that isn't a string or a newline character, for a pretty good reason...I'm just trying to make a point about the relative costs of things.)
The creation of a FRAME! (e.g. for evaluating a BLOCK!) has been optimized to reduce the GC load to just about the minimum it can be, but that minimum is the same as what we're looking at for a LET. So when the cost of a LET is more or less on the same level as the cost of a GROUP!, how much should we really be asking people to worry about it?
When you think about the forces in play here (the ability to rearrange code to optimize if you need to, and Paul Graham's remarks about trading efficiency for convenience), I think making LET dynamic is probably the winning bet.
Luddites who don't like it can use <local> and be no worse off.