Back To Personal Computing


#1

Carl wrote this essay in 1997; I thought it would be good to cache it, and perhaps add some insights. (It explicitly encourages reprints as long as the copyright is included.)

http://www.eskimo.com/~goody/links/back.to.prsnl.cmptng.html

Back to Personal Computing

A Message from Carl Sassenrath
20-Jan-1997

Are You Satisfied?

We live in the age of tremendous personal computing power. Our desktop systems run hundreds of times faster than the large, expensive mainframe computers of years past. Yet, what has been the end result of this unbelievable power? Are you now satisfied with the operation of your system? Does it operate and respond as you expect?

Over the past decade the benefits of increased hardware performance have been offset by an excessive growth in the size and complexity of the system software. Or perhaps it is the opposite – the driving force behind improving hardware performance was to overcome an ever-growing ineptitude in software technology. After all, how usable would Windows95 be on an 8 MHz computer?

The Complexity Problem

The developers of modern software don’t understand the consequences of their bloated systems on their users. Operating personal computers now requires us to devote as much time to set-up menus, installation programs, configuration “wizards”, and help databases as we do running productive applications. Companies like Microsoft mistakenly think that we either have plenty of time to burn or perhaps actually enjoy endlessly fooling around with their system.

This mindless attitude seems to manifest itself in every aspect of modern software, from the development systems needed to create it, to the application libraries (APIs) required to interface it, to the operating systems necessary to run it. This plague has swept through all aspects of computer software – as is evident when you download a 10MB C++ shareware program, install an 80MB OS update, or receive a 10 CD-ROM developer’s kit.

Many developers defend their software by arguing: “What is the harm with a 10MB program? Don’t you know that memory is cheap?” What they are really saying is: “So what if it takes some time to download. Who cares that it consumes disk space and half the RAM. Perhaps configuring it is a little too complicated. All right, it does have many useless features. But, after all, it has less than a dozen obvious bugs, and it will run at least an hour before crashing.”

These developers fail to recognize the core problem: software complexity. In recent years it has become universally acceptable for software technology to be absurdly complex. Systems have grown both out of control and out of proportion to their benefits, becoming wasteful, brittle, clumsy and slow. Like our federal government, these complex software systems are now perpetuated by thriving bureaucracies of non-thought, propelled by their own markets of desperate, inexperienced consumers who see no alternatives.

Back to the Future

I have reached my limit when it comes to “modern” software practices. Over the past few years I’ve been dreaming not of the future, but the past. Perhaps you remember those days… when a word processor was distributed on a single floppy and what seemed like a huge OS took two. Remember being wonderfully productive on a 7MHz system with a 10MB hard drive? If something went wrong, you felt that there was a good chance you could fix it yourself.

To me this is all about Personal Computing, not Personal Enslaving. It is about being the masters of our own computers, not the reverse. A decade ago this was true, but we are not the masters any more. Is it possible to reclaim that position? Or, has it been lost to history like the Tucker Automobile? Everyone tells me that the world of personal computing is now totally dominated by a single system – one which I believe lacks not only a consistent, efficient, reliable architecture, but an intelligent vision of the future.

Perhaps we are at a pivotal point in personal computing, and this is where we must take our stand. It is my sincere hope that there are enough scattered outposts of rebels who believe as I do and refuse to bow to the “empire” (or have done so under duress and seek an opportunity to flee.) With a critical mass we can build our own future and return to what Personal Computing was meant to be.

My Part

For years after creating the Amiga’s multitasking OS architecture I assumed operating systems would continue to improve. I figured that with five million people using the Amiga and valuing its design, I had made my contribution. I set aside my new OS visions, naively thinking that others would carry the torch onward toward the best possible future. I know now that I made a mistake, and I have come to regret it.

I am now prepared to develop the system that I have been contemplating for the last decade. I’m not talking here about making a clone of any existing system (including the Amiga). What I want is a personal computer that I would like to use: a system that is genuinely easy-to-operate, consistent, flexible, powerful, small, and fast.

My plan involves two phases. The first phase is the completion of a new scripting and control language. I have worked on the design of this language part-time for many years. Within the last few months my efforts have been full-time, and the language is nearly ready for its prototype (alpha) release. Versions will be available for each of the major platforms over the next month.

Why a language? Because I believe that the core of computing is not based on operating system or processor technologies but on language capability. Language is both a tool of thought and a means of communication. Just as our minds are shaped by human language, so are operating systems shaped by programming languages. We implement what we can express. If it cannot be expressed, it will not be implemented.

Once the language is complete and in distribution, the second phase is to develop a small and flexible operating system which is integrated in a unique way with the language. Attribute settings, control scripts, configuration, installation, interprocess communications, and distributed processing will be facilitated through the language. Applications can still be written in C and various other languages, but some aspects of their system interface will be done through the OS language. This system is slated for prototype release later in the year and will be targeted at a few different hardware platforms.

Your Part

The language and system described above are huge projects and will require my best efforts for some time to come. This is my sole mission, and I have no other jobs or contracts to help pay the way. Yet, I have absolutely no intention of selling out to a big corporation or being driven by Wall Street greed. To do so would be to risk losing control (again) to those who lack the insight and understanding to make the best decisions in the years ahead.

Instead, my approach is to determine if there are enough of you out there who feel as I do – who want a choice, who want a system that makes you the master, and who would be willing to help support it through financial contributions.

I’ve been considering this for many months, but I’ve never done a user-funded project like this before, and I don’t know what to expect. Right now I am hopeful, but also a little nervous. It’s a big risk. If you like what I am proposing, please take it to heart and consider what I have said, because I cannot do it without you.

It’s time to do something different. It’s time to do something for ourselves. I hope you will join with me, rebel against software complexity, and return us again to being the masters of our own Personal Computing.

Yours as always,

Carl Sassenrath

You can email comments to carl@rebol.com
Mailing address is: PO Box 268, Calpella, CA 95418

Keep an eye on this web site, www.rebol.com, for more information. From this point on you can expect to see changes and announcements weekly as things develop.

Copyright © Carl Sassenrath 1997
Permission is granted to copy, distribute, and repost so long as the copyright is preserved.
Translators: there are numerous English idioms in this document, if you need help with a clarification, please contact me.


#2

Red has taken up Carl’s abandoned mission with new funding and extended it: its binaries are self-contained, and it aims to compile code with its own compiler stack where possible, rather than the interpreted Rebol we have become used to since 1997. However, Carl’s execution may well have had flaws, and what guarantee do we have that Red isn’t going to make the same mistakes?

And what evidence do we have that Carl’s dream was actually capable of producing a language that could do everything that developers need? Wouldn’t we be better off aiming for 90% of what a developer needs with the other 10% being developed in C, and aiming to provide an accessible path to app development for the majority of non-professional programmers?


#3

We don’t just lack a guarantee. We have explicit proof that making the same mistakes is the raison d’être of the project. :stuck_out_tongue: In Red:

>> if-not: func [condition branch] [if not condition branch]
>> is-two: func [x] [if-not x = 2 [return "not two"] return "it's two"]
>> is-two 10
== "it's two"

The RETURN inside the branch block returns from IF-NOT itself rather than from IS-TWO, so IS-TWO keeps running and hits its final RETURN. That is the same non-definitional RETURN behavior Rebol2 had.

“It’s all just a little bit of history repeating…”

I liked what this person from hackernews had to say:

The problem with a lot of Red’s features is that core language/runtime features are added to the language last.

  • GC is added after a lot of language is already there.
  • Actors are planned to be added after I/O, and after GC, and after all the GUI capabilities (Trello board lists “figure out how to rewrite event loop with actors”)
  • A lot of FFI is already there, so GC and actors are coming after FFI.

These are not features you add on top of a language and hope they magically work. Actors directly affect how I/O and GC work. GC should be very aware of how FFI works. etc. etc. etc.

A lot of these issues are dismissed by the core team as “it’s not difficult to write a GC, we have it in a branch somewhere” (referring to a naïve mark-and-sweep stop-the-world GC).

That person is not me, but I couldn’t really have said it much better.

But as I’ve said, I think it’s best for everyone if they are free to throw things against the wall and find what sticks. Red/System can be useful even if Red is incoherent. And the right answers have a way of bubbling to the top over time.

I feel like Carl’s claim was more about the 90%. I don’t think he was suggesting that large DNA-sequencing database programs could forgo whatever specific FPGA programming they need, or that code built by large teams for financial trading wouldn’t need the underlying math of pure functional programming.

It was more that complexity had invaded problems which should be much simpler. The 80MB shareware program, or OS update, is typically not a revolution.

“I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone.” --Bjarne Stroustrup


#4

But Rebol demonstrated that there was still a benefit if you avoided those areas where the interpreter exhibited ambiguous behaviour. It’s not a great foundation to build a language upon, though many Rebol users found it better than nothing at all.


#5

So there’s a general rhetoric in here…about what the “rebellion” really means. It’s the kind of thing echoed in “Fight Software Complexity Pollution”:

http://www.rebol.com/cgi-bin/blog.r?view=0497

I won’t dwell on all the agreements I have. But I thought there were some interesting remarks, worth critiquing in a historical context:

Once the language is complete and in distribution, the second phase is to develop a small and flexible operating system which is integrated in a unique way with the language.
…
Applications can still be written in C and various other languages, but some aspects of their system interface will be done through the OS language.

I believe this goal was something that limited Rebol’s success. It gave rise to the table of functions that represented the “host” abstraction. These were functions named OS_Some_Thing() that spoke in terms of C types, e.g. the POSIX OS_Get_Time:
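(Quoting the shape from memory rather than the actual R3-Alpha host kit, so treat the details as approximate. REBOL_DAT is a plain C struct of integer fields, and OS_Get_Time just fills it in from POSIX calls:)

/* Approximate sketch of the host-kit style, not the verbatim source. */
#include <sys/time.h>   /* gettimeofday */
#include <time.h>       /* time_t, gmtime, struct tm */

/* REBOL_DAT existed only to ferry a date/time across the host
   boundary as plain C integers -- it is not a Rebol value. */
typedef struct rebol_dat {
    int year;
    int month;
    int day;
    int time;   /* seconds since midnight */
    int nano;   /* nanoseconds */
    int zone;   /* timezone offset in minutes */
} REBOL_DAT;

void OS_Get_Time(REBOL_DAT *dat)
{
    struct timeval tv;
    gettimeofday(&tv, 0);            /* ask POSIX for the current time */

    time_t stime = tv.tv_sec;
    struct tm *utc = gmtime(&stime);

    /* Flatten it into REBOL_DAT fields, so the core can re-assemble
       them into an actual DATE! value on the other side. */
    dat->year = utc->tm_year + 1900;
    dat->month = utc->tm_mon + 1;
    dat->day = utc->tm_mday;
    dat->time = utc->tm_hour * 3600 + utc->tm_min * 60 + utc->tm_sec;
    dat->nano = (int)(tv.tv_usec * 1000);
    dat->zone = 0;                   /* zone handling omitted in this sketch */
}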

This was the abstract OS. It wasn’t Windows or POSIX, it was a set of C functions that represented the things Rebol might want to do… e.g. OS_Get_Time() would be the service routine that the Rebol command NOW would call.

But its hands were tied. Don’t be fooled by REBOL_DAT into thinking that this abstract OS was allowed to inspect and manipulate the internals of Rebol data structures. Quite the opposite: REBOL_DAT was a structure the Rebol language itself had no need for, defined purely for the purpose of communicating with this abstract OS. It was all jumping through hoops, which would be much easier if the implementation could assemble Rebol values directly.

Ren-C’s goal was to tear out this abstract OS and implement extensions differently. The work is starting to pay off… but it certainly is a lot of work.

For years after creating the Amiga’s multitasking OS architecture I assumed operating systems would continue to improve.

I was among the kids who thought the Amiga was great. (Though I didn’t actually have one until someone gave me one when they were cleaning out their closet, and it was a piece of history by then.) I experienced it secondhand, when things like MOD file players were ported to PC. But the influence of the Amiga and demoscene was still quite present.

But I think the difference between multi-tasking and multi-threading is very profound… and there were numerous indications that Carl did not have experience with the practical matters of the difference.

People have asked me about the TASK! concept and how that would go, and I hated to be the person to break it to them that it was simply too unsophisticated to actually work. The only way it could have been made to work would be something I think most people would find unsatisfactory: exposing semaphores and mutexes, which seems to cut across the intended use case of Rebol at a high level. Basically, Rebol programmers would be C programmers at that point.
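For a sense of what that would mean, this is the sort of discipline shared-state threading imposes at the C level (an ordinary pthreads sketch, not anything from a Rebol codebase): every touch of shared data has to be bracketed by explicit locking, and forgetting one lock silently corrupts state. TASK! had no higher-level story to offer than this.

#include <pthread.h>
#include <stdio.h>

/* Shared state that two tasks both want to update. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; ++i) {
        pthread_mutex_lock(&counter_lock);    /* every access must be guarded */
        ++counter;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);  /* 200000 only because of the mutex */
    return 0;
}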

But I dissected many other mistakes.

For myself, my time on StackOverflow has been very humbling: you try things and get a real-time education from someone who knows a lot more than you do about whatever you were asking. It happens often. I like to write about these experiences… things I thought I knew, and then I look at them another way and learn something. Carl’s Rebol has given me opportunities on both sides of this. I have thought some newfangled technology would make things better, but tried it, performance-analyzed it, considered the “real cost” of the abstraction, and realized my conventional wisdom was wrong. But I’ve also found a lot of problems, and been the correcting factor.

I understand that after 15 years he had to move on. On my end, it’s been a bit over two years now since the beginning of Ren-C. I still have little puzzles I want to solve… I think great strides have been made. But I have to shift into other work now, so prioritization of what time I spend on Rebol-oriented tasks is going to be much more important.


#6

Except I don’t think he has really moved on. The word is that he’s working on Rebol4, and presumably, he’s realised that he made too many mistakes with Rebol3.

He’s also acquired several more years of coding in a team environment, so again I’m guessing he’s gained a lot from working at Roku that he can hopefully bring back to the revolution.