Back To Personal Computing


#1

Carl wrote this essay in 1997. I thought it would be good to cache it, and perhaps add some insights. (It explicitly encourages reprints if the copyright is included.)

http://www.eskimo.com/~goody/links/back.to.prsnl.cmptng.html

Back to Personal Computing

A Message from Carl Sassenrath
20-Jan-1997

Are You Satisfied?

We live in the age of tremendous personal computing power. Our desktop systems run hundreds of times faster than the large, expensive mainframe computers of years past. Yet, what has been the end result of this unbelievable power? Are you now satisfied with the operation of your system? Does it operate and respond as you expect?

Over the past decade the benefits of increased hardware performance have been offset by an excessive growth in the size and complexity of the system software. Or perhaps it is the opposite -- the driving force behind improving hardware performance was to overcome an ever-growing ineptitude in software technology. After all, how usable would Windows95 be on an 8 MHz computer?

The Complexity Problem

The developers of modern software don't understand the consequences of their bloated systems on their users. Operating personal computers now requires us to devote as much time to set-up menus, installation programs, configuration "wizards", and help databases as we do running productive applications. Companies like Microsoft mistakenly think that we either have plenty of time to burn or perhaps actually enjoy endlessly fooling around with their system.

This mindless attitude seems to manifest itself in every aspect of modern software, from the development systems needed to create it, to the application libraries (APIs) required to interface it, to the operating systems necessary to run it. This plague has swept through all aspects of computer software -- as is evident when you download a 10MB C++ shareware program, install an 80MB OS update, or receive a 10 CD-ROM developer's kit.

Many developers defend their software by arguing: "What is the harm with a 10MB program? Don't you know that memory is cheap?" What they are really saying is: "So what if it takes some time to download. Who cares that it consumes disk space and half the RAM. Perhaps configuring it is a little too complicated. All right, it does have many useless features. But, after all, it has less than a dozen obvious bugs, and it will run at least an hour before crashing."

These developers fail to recognize the core problem: software complexity. In recent years it has become universally acceptable for software technology to be absurdly complex. Systems have grown both out of control and out of proportion to their benefits, becoming wasteful, brittle, clumsy and slow. Like our federal government, these complex software systems are now perpetuated by thriving bureaucracies of non-thought, propelled by their own markets of desperate, inexperienced consumers who see no alternatives.

Back to the Future

I have reached my limit when it comes to "modern" software practices. Over the past few years I've been dreaming not of the future, but the past. Perhaps you remember those days... when a word processor was distributed on a single floppy and what seemed like a huge OS took two. Remember being wonderfully productive on a 7MHz system with a 10MB hard drive? If something went wrong, you felt that there was a good chance you could fix it yourself.

To me this is all about Personal Computing, not Personal Enslaving. It is about being the masters of our own computers, not the reverse. A decade ago this was true, but we are not the masters any more. Is it possible to reclaim that position? Or, has it been lost to history like the Tucker Automobile? Everyone tells me that the world of personal computing is now totally dominated by a single system -- one which I believe lacks not only a consistent, efficient, reliable architecture, but an intelligent vision of the future.

Perhaps we are at a pivotal point in personal computing, and this is where we must take our stand. It is my sincere hope that there are enough scattered outposts of rebels who believe as I do and refuse to bow to the "empire" (or have done so under duress and seek an opportunity to flee). With a critical mass we can build our own future and return to what Personal Computing was meant to be.

My Part

For years after creating the Amiga's multitasking OS architecture I assumed operating systems would continue to improve. I figured that with five million people using the Amiga and valuing its design, I had made my contribution. I set aside my new OS visions, naively thinking that others would carry the torch onward toward the best possible future. I know now that I made a mistake, and I have come to regret it.

I am now prepared to develop the system that I have been contemplating for the last decade. I'm not talking here about making a clone of any existing system (including the Amiga). What I want is a personal computer that I would like to use: a system that is genuinely easy-to-operate, consistent, flexible, powerful, small, and fast.

My plan involves two phases. The first phase is the completion of a new scripting and control language. I have worked on the design of this language part-time for many years. Within the last few months my efforts have been full-time, and the language is nearly ready for its prototype (alpha) release. Versions will be available for each of the major platforms over the next month.

Why a language? Because I believe that the core of computing is not based on operating system or processor technologies but on language capability. Language is both a tool of thought and a means of communication. Just as our minds are shaped by human language, so are operating systems shaped by programming languages. We implement what we can express. If it cannot be expressed, it will not be implemented.

Once the language is complete and in distribution, the second phase is to develop a small and flexible operating system which is integrated in a unique way with the language. Attribute settings, control scripts, configuration, installation, interprocess communications, and distributed processing will be facilitated through the language. Applications can still be written in C and various other languages, but some aspects of their system interface will be done through the OS language. This system is slated for prototype release later in the year and will be targeted at a few different hardware platforms.

Your Part

The language and system described above are huge projects and will require my best efforts for some time to come. This is my sole mission, and I have no other jobs or contracts to help pay the way. Yet, I have absolutely no intention of selling out to a big corporation or being driven by Wall Street greed. To do so would be to risk losing control (again) to those who lack the insight and understanding to make the best decisions in the years ahead.

Instead, my approach is to determine if there are enough of you out there who feel as I do -- who want a choice, who want a system that makes you the master, and who would be willing to help support it through financial contributions.

I've been considering this for many months, but I've never done a user-funded project like this before, and I don't know what to expect. Right now I am hopeful, but also a little nervous. It's a big risk. If you like what I am proposing, please take it to heart and consider what I have said, because I cannot do it without you.

It's time to do something different. It's time to do something for ourselves. I hope you will join with me, rebel against software complexity, and return us again to being the masters of our own Personal Computing.

Yours as always,

Carl Sassenrath

You can email comments to carl@rebol.com
Mailing address is: PO Box 268, Calpella, CA 95418

Keep an eye on this web site, www.rebol.com, for more information. From this point on you can expect to see changes and announcements weekly as things develop.

Copyright © Carl Sassenrath 1997
Permission is granted to copy, distribute, and repost so long as the copyright is preserved.
Translators: there are numerous English idioms in this document; if you need help with a clarification, please contact me.


#2

Red has taken up Carl's abandoned mission with new funding and extended it: its binaries are self-contained, and it aims to compile using its own stack where possible, versus the interpreted Rebol we have become used to since 1997. However, Carl's execution may well have had flaws, and what guarantee do we have that Red isn't going to make the same mistakes?

And what evidence do we have that Carl's dream was actually capable of producing a language that could do everything that developers need? Wouldn't we be better off aiming for 90% of what a developer needs with the other 10% being developed in C, and aiming to provide an accessible path to app development for the majority of non-professional programmers?


#3

We don't just lack a guarantee. We have explicit proof that making the same mistakes is the raison d'être of the project. :stuck_out_tongue: In Red:

>> if-not: func [condition branch] [if not condition branch]
>> is-two: func [x] [if-not x = 2 [return "not two"] return "it's two"]
>> is-two 10
== "it's two"

"It's all just a little bit of history repeating..."

I liked what this person from hackernews had to say:

The problem with a lot of Red's features is that core language/runtime features are added to the language last.

  • GC is added after a lot of the language is already there.
  • Actors are planned to be added after I/O, after GC, and after all the GUI capabilities (the Trello board lists "figure out how to rewrite event loop with actors").
  • A lot of FFI is already there, so GC and actors are coming after FFI.

These are not features you add on top of a language and hope they magically work. Actors directly affect how I/O and GC work. GC should be very aware of how FFI works. Etc., etc.

A lot of these issues are dismissed by the core team as "it's not difficult to write a GC, we have it in a branch somewhere" (referring to a naïve mark-and-sweep stop-the-world GC).

That person is not me, but I couldn't really have said it much better.

But as I've said, I think it's best for everyone if they are free to throw things against the wall and find what sticks. Red/System can be useful even if Red is incoherent. And the right answers have a way of bubbling to the top over time.

I feel like Carl's claim was more about the 90%. I don't think he was suggesting that large DNA sequencing database programs could forgo whatever specific FPGA programming or what-not, or that code built by large teams for financial trading wouldn't need the underlying math of pure functional programming.

It was more that complexity had invaded problems which should be much simpler. The 80MB shareware program, or OS update, is typically not a revolution.

"I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone." --Bjarne Stroustrup


#4

But Rebol demonstrated that there was still a benefit if you avoided the areas where the interpreter exhibited ambiguous behaviour. It's not a great foundation to build a language on, though many Rebol users found it better than nothing at all.


#5

So there's a general rhetoric in here...about what the "rebellion" really means. It's the kind of thing echoed in "Fight Software Complexity Pollution":

http://www.rebol.com/cgi-bin/blog.r?view=0497

I won't dwell on all the agreements I have. But I thought there were some interesting remarks, worth critiquing in a historical context:

Once the language is complete and in distribution, the second phase is to develop a small and flexible operating system which is integrated in a unique way with the language.
...
Applications can still be written in C and various other languages, but some aspects of their system interface will be done through the OS language.

I believe this goal was something that limited Rebol's success. It gave rise to the table of functions that represented the "host" abstraction. These were functions named OS_Some_Thing() that spoke in terms of C types, e.g. the POSIX implementation of OS_Get_Time.

This was the abstract OS. It wasn't Windows or POSIX, it was a set of C functions that represented the things Rebol might want to do... e.g. OS_Get_Time() would be the service routine that the Rebol command NOW would call.

But its hands were tied. Don't be fooled by REBOL_DAT into thinking that this abstract OS was allowed to inspect and manipulate the internals of Rebol data structures. Quite the opposite: REBOL_DAT was a structure that the Rebol language itself had no need for, defined purely for the purpose of communicating with this abstract OS. It was just jumping through hoops, which would be much easier if the implementation could assemble Rebol values directly.

Ren-C's goal was to tear out this abstract OS and implement extensions differently. The work is starting to pay off... but it certainly is a lot of work.

For years after creating the Amiga's multitasking OS architecture I assumed operating systems would continue to improve.

I was among the kids who thought the Amiga was great. (Though I didn't actually have one until someone gave me one when they were cleaning out their closet, and it was a piece of history by then.) I experienced it secondhand, when things like MOD file players were ported to PC. But the influence of the Amiga and demoscene was still quite present.

But I think the difference between multi-tasking and multi-threading is very profound...and there were numerous indications that Carl did not have experience with the practical matters of the difference.

People have asked me about the TASK! concept and how that would go, and I hated to be the person to break it to them that it was simply too unsophisticated to actually work. The only way it could have been made to work would be something I think most people would find unsatisfactory--exposing semaphores and mutexes--which seems to cut across the intended use case of Rebol at a high level. Basically, Rebol programmers would be C programmers at that point.

But I dissected many other mistakes.

For myself, my time on StackOverflow has been very humbling--to try things and get a real-time education from someone who knows a lot more than you do about whatever you were asking. It happens often. I like to write about these experiences... things I thought I knew, and then I look at it another way and learn something. Carl's Rebol has given me opportunities on both sides of this--I have thought some newfangled technology would make things better, but tried it and performance analyzed it... considered the "real cost" of the abstraction, and realized my conventional wisdom was wrong. But I've also found a lot of problems, and been the correcting factor.

I understand that after 15 years he had to move on. On my end, it's been a bit over two years now since the beginning of Ren-C. I still have little puzzles I want to solve... I think great strides have been made. But I have to shift into other work now, so prioritization of what time I spend on Rebol-oriented tasks is going to be much more important.


#6

Except I don't think he has really moved on. The word is that he's working on Rebol4, and presumably, he's realised that he made too many mistakes with Rebol3.

He's also acquired several more years of coding in a team environment, so again I'm guessing he's gained a lot from working at Roku that hopefully he can bring back to the revolution.


#7

What are those mistakes you talk about, gchiu?

I do not know the status of R3 Alpha compared to Rebol2. It looks like development stopped along the way and there are missing ports, etc. Carl seems discouraged, or is reevaluating the situation.

I totally agree with his observation of the explosion of complexity and the consumption of resources.

There is criticism of the order in which the GC was implemented, but isn't there also a problem in the order of creation of the OS and command language?

An OS needs a compiled language. The command language comes on top of the OS, unless the three are one.

I do not see how we can make a rebolutionary OS by first writing a multi-platform interpreted language.


#8

Carl never publicly mentioned those mistakes, but he has stated that Ren-C looked to be the natural progression of R3. So, you could say the mistakes are defined by the fixes that have occurred since.


#9

OK, and for docs? Are the R3 docs on rebol.org up to date enough?


#10

They are for R3, but here we have Ren-C, which is R3 advanced by a few years of solid continuous development.

The only things missing from Ren-C vs R2 are network proxies, a native Oracle client, and FastCGI for Apache.

Notably, Ren-C can now run in the browser, has an inbuilt C compiler (TCC), and supports TLS 1.3. There are also numerous semantic changes.


#11

I also see that FTP, FINGER and WHOIS ports are missing or broken.


#12

If you can read C, I would advise you to look over the R3-Alpha PORT! code, Reb_Device, REBREQ, etc. until you understand it. Once you understand that it should all be tossed into the nearest dumpster and set on fire, you might understand why I've given rather little attention to it...besides trying to keep what was in R3-Alpha working. It is what @earl called "The Life Support". But it's all going to be pulled.

There's no gold in them thar hills. The web demo represents the better angle--which could apply equally to embedding in anything else. It's a goal to airlift a radically improved evaluation engine into a new host via a new and better API.

Ultimately I hope this means the new rebirth will be able to do all desktop Rebol2 could, and more. But the conversation can't be a feature table without looking at the language first: why even bother to use the thing and worry about whether it interfaces to anything or not?

http://paulgraham.com/hundred.html

Anything can be used to FTP, FINGER, or WHOIS. Bash is quite serviceable, and exceeds Rebol in the domain of...making templated code whose only ambition is to templatize a bash script.

So we have to study the merits and aspects of Rebol as a language before we even care what it can do. That's my angle, and I'd like to stay on that topic.


#13

I agree with this point of view; otherwise I would be more interested in Red. It's good to know why it's abandoned. Is it the code of its extensions that is rotten, or the port system?


#14

Ooh, you're officially my favorite person of the day right now. :clap:

Reading between the lines of the port and device code, you see an abstract desire to draw up some architectural lines... to say "the language core shouldn't know about these various things". It was a desire for simplicity that said "just make a function that takes two parameters, define it for each platform, and call it". In a way, Rebol is like the Emperor's New Clothes, or the Toaster story.

(Sidenote: both Carl and I have EE degrees.)

But ports and devices hit no particular sweet spot of their own. One problem I see in general with the code (and the patterns of many developers who work in this style) is the absence of asserts and invariants. There's a "cowboy coder" pattern where people introduce a table of flags, fiddle bits, and obsess over whether they paid for a copy or passed by pointer--all the while never adding any asserts that show what the flags mean or check their assumptions. And they don't start by trying to solve the most pathological example they can think of and then draw the lines around what their code doesn't do... people must understand "Freedom To And Freedom From".

For specific critiques of R3 PORT!s from a user, I doubt Shixin will mind me quoting this from an email from last week:

Spending more time working on projects in Rebol actually gives more chance to find out what's missing in Rebol. Right now, I'd say that the half-duplex port is giving me a lot of pain to work with. There are a few things that I think made it hard to work with:

1. Half-duplex: you need to know exactly what you need to do (read or write), because it requires an "action-event-action-event" chain. You can't get any action wrong, otherwise the data might be messed up (read and write share the same internal buffer in the port), or the event chain will be broken (if you miss an action) and you won't receive any event. But a lot of the time you can't be sure whether you need to read or write at some point. Say, most of the time the server responds to a client's requests, but it can also send out unsolicited messages to report some events: then the client doesn't know if it needs to send the next message or read some potential events from the server that might never come. Also, the "action-event" chain prevents you from calling multiple actions in a row, as in "write something, write something else...", because it will then have "WROTE, WROTE" and can easily mess up your FSM (finite state machine).

2. Async event handling is painful. The awake function in the port is not called immediately as the event happens; it waits until the WAIT function is called, which can cause out-of-sync problems between the event and its port. It's not uncommon to see somebody do something like this:

if error? try [
    ; do something with the port
][
    ; close it and reopen it
    close p
    open p
]

It will generate two events, CLOSE and OPEN, but they share the same port, which means the CLOSE event will be delivered with an already re-opened port.

Basically, the main benefit of Reb_Device, REBREQ, REBEVT, and PORT! is that they allowed Rebol to be built and used on many platforms without linking to anything beyond the lowest levels. By being barely thought out at all, the design wasn't relied upon very much. That makes it easy to let go, and easy to think in terms of migrating the evaluator into a new host.

I think WebAssembly marks a very good time to be doing that migration, and being ready for it. It catapults Rebol from looking like a dinosaur to seeming quite ahead of the curve. But if it can do that, then it should be easy to fit in anything. And if people have time to evolve the R3-Alpha minimalism into something that has an actual design, it can be done on a case-by-case, extension-by-extension basis.

(Note on me sounding like a harsh critic: a lack of defensive programming is definitely one of my pet peeves. But it's just one of many bad habits people can have--and if any of us are put under a microscope, we fail. Your recent bug reports show me being overwhelmed to the point of just having a failing test and getting used to "1 failed test" meaning 0, and in that sense we are all flawed. There's always an excuse, and we tend to accept our own excuses more easily than those of others. Lest anyone accuse me of not being self-aware...I think the main thing is that if you're going to screw up admit it and be willing to let anyone who's going to do better step up vs. get in their way.)


#15

Well, I did write IMAP and FTP for R3, but didn't think it was worthwhile to promote them to Ren-C, since who uses these things? Really it should be SFTP or FTPS.


#16

I must say I never used FINGER or WHOIS. But I sometimes have a need for FTP. Not all partners use SFTP. Some do not even know it exists.


#17

Are you thinking of design in general, or of extensibility?

I think the main thing is that if you’re going to screw up admit it and be willing to let anyone who’s going to do better step up vs. get in their way.

I too have lazy colleagues who do not want to write robust code. But the number of concerns that must be met to produce good code is so great that I somewhat understand this attitude. For example, it is a pain having to worry about web application security.

Recently I was dealing with a co-worker who would not even admit that he could have made an unintentional mistake because of fatigue. We must have confidence in others to overcome our shortcomings, and we must not be afraid of them; we will not be able to avoid them.


#18

Well, FTP should be easy enough to port from R3-Alpha, and I guess FTPS should be okay as well. Someone just has to read up on the authentication, etc.