The ability to pick up a Rebol executable and move it to another system has been a hallmark of the project.
Setting up the build process in the modern world to enable this is not trivial: even if the OS itself tries to remain backwards-compatible enough to run older executables, modern installations of that OS (or its dev tools) stop maintaining the switches that would let you build for older platforms.
For example: you might think you're building on the "same" platform, but in effect you are cross-compiling...a lot like trying to build a Windows executable on Linux or a Mac. Cross-compilation is rarely easy...and the powers-that-be certainly don't think cross-compiling from Ubuntu 10.20 back to Ubuntu 3.04 (or whatever) is interesting enough to make easy.
## A Real Example From Yesterday To Help Illustrate
Suppose you have Ubuntu 18.10 ("cosmic") installed. This distribution was released in October 2018.
The C standard libraries--and their associated files in `/usr/include`--are defined in such a way that the library function `fcntl()` no longer compiles to link to a standard library function called `fcntl`. Instead, the header files redefine it to compile to something called `fcntl64`. That function was introduced in GLIBC 2.28...which was announced in August 2018.

This means if you call `fcntl()` anywhere in your code and compile it on this October 2018 cosmic distribution, your resulting binary "cannot possibly" run on a distribution released before August 2018. There are no compiler switches, linker switches, or `#define`s to turn back that clock.
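To make the mechanism concrete, here is a condensed paraphrase of the relevant part of glibc 2.28's `<fcntl.h>` (the real header is more involved, but this is the shape of it):

```c
/* Condensed paraphrase of glibc 2.28's <fcntl.h>: when large-file support
 * is in effect (e.g. _FILE_OFFSET_BITS=64 on a 32-bit target), the name
 * "fcntl" in your source quietly becomes a reference to the fcntl64
 * symbol, which only exists in glibc 2.28 and later.
 */
#ifndef __USE_FILE_OFFSET64
extern int fcntl (int __fd, int __cmd, ...);
#else
# ifdef __REDIRECT
/* GCC path: you still write fcntl(), but the object references fcntl64 */
extern int __REDIRECT (fcntl, (int __fd, int __cmd, ...), fcntl64);
# else
/* Fallback path: the plain (and "devious") #define */
#  define fcntl fcntl64
# endif
#endif
```

You can see the consequence in a built executable: `objdump -T` on the binary shows a dynamic reference to `fcntl64` tagged with `GLIBC_2.28`, and the loader on any older system will refuse to run it.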
## What Can Be Done About This?
When a toolchain only compiles for itself-or-newer...and each distribution enforces use of a toolchain that is paired with its release...you are stuck keeping copies of the old OS and toolchain around if you want to keep this classic transferability property intact on such a platform.
...and keeping old ones around is what we're actually doing. It's easier now with virtual machines--on Travis we can pick an archaic Linux, Windows, or old OS X to build with. But it does create a level of overhead that most people won't want. And in Travis's case, they expire these old images when they feel like it.
The implication may (or may not) be clear: while Rebol core developers may try to keep old VMs around to build executables with good transferability across older and newer systems...the Rebol that an average person builds themselves on a recent Mac or Linux (e.g. Ubuntu) installation PROBABLY can't be made to have those properties to run on older platforms.
I say "probably" because in the fcntl() case, I managed to hack around it. So your Rebol built on cosmic or whatever can still run on older systems. This time.
Another option: you could use your own header files. Don't `#include <fcntl.h>` where the devious `#define fcntl fcntl64` lives; maintain your own headers that declare just the libc functions you want. (This is essentially what Red has to do, as it mechanically can't `#include <fcntl.h>` in Red/System.) That's not necessarily easier than snapshotting old toolchains in their entirety, and neither approach protects you when those old APIs are deemed no longer relevant on newer platforms.
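As a tiny illustration of that route, a self-maintained header might look like this (hypothetical file; the constant values shown are from the Linux x86 ABI and are NOT portable across architectures):

```c
/* minimal-fcntl.h: hypothetical self-maintained header that declares
 * only the libc entry points actually used -- no <fcntl.h>, so no
 * fcntl->fcntl64 redirect.
 */
#ifndef MINIMAL_FCNTL_H
#define MINIMAL_FCNTL_H

/* Values from the Linux x86 ABI; stable over time on that architecture,
 * but different on some others (e.g. alpha, sparc).
 */
#define F_GETFL     3
#define F_SETFL     4
#define O_NONBLOCK  04000  /* octal, matching the kernel headers */

/* A plain declaration links against the default (old) fcntl symbol */
extern int fcntl(int fd, int cmd, ...);

#endif
```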
## What About Static Linking?
One answer that might come to mind is that if you were willing to ask a build of Rebol to be a little bigger, then you could bundle up the functions that you use into the executable. A cosmic distribution of Linux could pack up its `fcntl64()` code and use that embedded version instead of the one in the system, letting it run on older Linuxes that only have `fcntl()`.
To address the question of why this may be problematic, here's a writeup from Ulrich Drepper (longtime glibc maintainer):

*Static Linking Considered Harmful*
"The often used argument about statically linked apps being more portable (i.e., can be copied to other systems and simply used since there are no dependencies) is not true since every non-trivial program needs dynamic linking at least for one of the reasons mentioned above. And dynamic linking kills the portability of statically linked apps."
His main points are:
- Static linking means you're freezing your code in time so it doesn't get security updates from the platform it is running on
- Memory is used less efficiently since your app isn't sharing the same code pages all the other programs on the system are
- In the GLIBC world, several features of non-trivial programs simply won't work with static linking; they are mechanically dependent on dynamic linking. (Other libc implementations like musl are designed so that they CAN be statically linked, and it may well be that we should consider using them instead... Rust does. A minimal sketch of that route follows this list.)
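Here is that sketch, for flavor (assuming the `musl-gcc` wrapper that musl packages ship; any fully static toolchain works the same way):

```c
/* static-check.c: trivial program demonstrating a fully static build.
 *
 * Build (assuming musl's gcc wrapper is installed):
 *     musl-gcc -static -Os static-check.c -o static-check
 *
 * `ldd ./static-check` should report "not a dynamic executable",
 * meaning there is no glibc symbol versioning to satisfy at load time:
 * the binary runs on older and newer Linuxes of the same architecture.
 */
#include <stdio.h>
#include <fcntl.h>

int main(void)
{
    int flags = fcntl(0, F_GETFL);  /* statically resolved; no fcntl64 issue */
    printf("stdin flags: %d\n", flags);
    return 0;
}
```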
## Where Does This Leave Rebol?
Picking the one case I mention as a sort of focal point: the absence of any way to say "I don't want this new `fcntl`" reveals that the GNU libc developers do not consider this a matter of much concern. I'm skeptical they'd be very interested in hearing "fringe" thoughts on the matter; they have their own agenda.
So one likely has to go outside of the Linux community to find the sort of people who this doesn't sit well with. And that wouldn't be Apple. (cough Catalina cough)
One point of view would be Carl's idea that Rebol needed a "phase two" of an operating system to complete the vision. But, no, there are not resources for that.
Without a culture shift from OS developers to think that the reverse direction is important, it's simply not going to be practical (or even "possible") to make sure building with a newer OS will get you a binary that will run on an older OS. The best we can do here is make sure that when the OS developers who care about this show up, we can be up and building on their platform right away. We don't have to be them, we just have to be ready for them!
Hence I propose thinking of the C source code as Rebol's formal means of exchange--with binaries being secondary. The most important thing is that the Rebol source code of today still builds on old systems (allowing for a potential "prep" step that must be done on a newer system for bootstrap).
If you find you really need a recent build for an older machine, let's do everything in our power to make sure that if you have a C compiler of that era on that machine, you can still build it. And if there comes a time where being a distributor of binaries is relevant again, we can use various old virtual machines to make binaries likely to run on newer systems too. Though for now I don't want to get too tied down being an .EXE distributor, and would rather stay focused on the web build!