I'd hoped in pursuing the stackless model that it would shed light on how debugging would have to work (or at least, on how it wouldn't work).
I've been wanting a scriptable debugger, where you could write something like:
```
>> test: func [] [print "<break>", breakpoint, print "Hello!", print "World!"]

>> test
<break>
** Breakpoint hit

[console/test]>> step2: func [] [loop 2 [print "<step>", step/over]]

[console/test]>> step2
<step>
Hello!
<step>
World!
```
What stackless brings to the table here is the capability for STEP to put the loop on hold (while still remembering how far along it is in the loop count). The debugger enters a state of suspended animation and instructs the stack it was running to resume...running long enough to complete one step.
If you didn't have a stackless model, you'd have to use a continuation-passing style. STEP would be parameterized with a function to call back when the step completed.
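To make the contrast concrete, here is a minimal Python sketch of what that continuation-passing style looks like (all the names, like `step_over` and the callback parameters, are hypothetical illustrations, not actual Ren-C APIs):

```python
# Without stackless, STEP can't just block and later resume, so it has to
# take a continuation to call when the step finishes.  Every action that
# follows a step ends up nested inside callbacks.

events = []

def step_over(frame, on_complete):
    # Run the debuggee for one step (simulated here), then hand control
    # back to the debug script via the continuation.
    events.append(f"stepped {frame}")
    on_complete(frame)

def debug_script():
    def after_first(frame):
        events.append("first step done")
        step_over(frame, after_second)

    def after_second(frame):
        events.append("second step done")

    step_over("console/test", after_first)

debug_script()
print(events)
```

With a stackless model, the same debug script could be written as straight-line code, because STEP can suspend the debugger's own continuation instead of requiring an explicit callback.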
Intuitively, it seems that what you do while using a debugger is something that should be scriptable in this fashion. There's nothing that profound about how you push the step-over and step out buttons in a typical C debugger...look at the stack...and make decisions. Why can't that get automated?
There are some mechanical questions. Like "what if a breakpoint happens while you are stepping?" Your script has to have an answer to this--but then again, so would you.
How would you recognize the completion of the STEP you did vs. some other event? You need some kind of handle to ensure continuity, so you know it was your step that completed.
Maybe that's done by thinking of it as a handle you WAIT on, and then you find out whether that step completed or some other event got triggered:
```
req: step-request frame

wait [
    req [--handling if that step finishes--]
    debug-events [--any other debug event?--]
    1000 [--timeout value?--]
]
```
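A Python sketch of the dispatch idea (the handle representation and function names are invented for illustration): each step request gets a unique handle, so when an event arrives you can tell "my step finished" apart from unrelated debug events or a timeout.

```python
import itertools
import queue

_handle_counter = itertools.count(1)

def step_request(frame):
    # Return a handle identifying this particular step request.
    return ("step", next(_handle_counter), frame)

def wait_on(events, req, timeout_ms):
    """Dispatch the next event: the awaited step finishing, some other
    debug event, or a timeout if nothing arrives in time."""
    try:
        event = events.get(timeout=timeout_ms / 1000)
    except queue.Empty:
        return ("timeout", None)
    if event == req:
        return ("step-finished", event)
    return ("other-debug-event", event)

events = queue.Queue()
req = step_request("console/test")
events.put(("breakpoint", 99, "somewhere/else"))  # unrelated event first
events.put(req)                                   # then our step completes

r1 = wait_on(events, req, 1000)
r2 = wait_on(events, req, 1000)
print(r1)
print(r2)
```

The point is that the script never has to guess which event it is seeing: the handle carries the continuity.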
Stackless Assists, Then Stackless Complicates
Stackless does offer the mentioned leverage to be able to put the debugger in a suspended state while picking up another stack's code. But once stackless code exists, you have to worry about debugging stackless code too.
As with any API, you find the debugger starts to need a model of all the various internal entities that you would need to talk about. So it has to have a model of "threads of execution" (green threads). It has to distinguish stacks which belong to the debugger (which should not be stepped into) from those that belong to the client (which should... or maybe only some?).
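A toy sketch of that bookkeeping problem (the data structures are purely illustrative): once there are many green-thread stacks, the debugger has to tag which ones are its own so it never steps into them.

```python
from dataclasses import dataclass

@dataclass
class GreenThread:
    name: str
    owner: str  # "debugger" or "client"

threads = [
    GreenThread("debug-console", "debugger"),
    GreenThread("user-main", "client"),
    GreenThread("user-worker", "client"),
]

def steppable(threads):
    # Only client-owned stacks are candidates for stepping; whether *all*
    # client stacks should be steppable is exactly the open question.
    return [t.name for t in threads if t.owner == "client"]

print(steppable(threads))
```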
Things get really complicated, really quickly.
Taking A STEP/BACK
Mixing the UI for a debugger into the same process as what is being debugged raises questions.
There are pros and cons to it. One of the big supposed "pros" of being in the same process is having access to the memory for all the objects. So you can poke at it and manipulate it directly.
However, there are good reasons to design a debugger to go through some level of indirection to do these kinds of things. One very good reason is that it means you can make a remote debugger.
It seems to me that a minimal bar when looking for future-forward inspiration on this front is systems that are actually working today. And you can see that in the Chrome DevTools Protocol.
If you're going to be a client of the DevTools Protocol, then to do debug evaluations you pass the code to the debugged session... and then you get back either a primitive value (which you can use directly), or a "remote object ID". If it's a remote object ID, you can use that to do more poking at the debugged session, and extract more primitive values from it.
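A small sketch of what a client does with such a result: in CDP, `Runtime.evaluate` returns a `RemoteObject` that either carries a primitive `value` directly or an `objectId` you use in follow-up calls (e.g. `Runtime.getProperties`). The payloads below are canned examples shaped like real responses.

```python
def classify(remote_object):
    # A RemoteObject with an "objectId" has to be poked at further over
    # the protocol; otherwise the primitive value is usable directly.
    if "objectId" in remote_object:
        return ("remote", remote_object["objectId"])
    return ("primitive", remote_object.get("value"))

primitive = {"type": "number", "value": 3, "description": "3"}
obj = {"type": "object", "className": "Object",
       "objectId": '{"injectedScriptId":1,"id":2}'}

print(classify(primitive))
print(classify(obj))
```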
Connecting to a remote debugger is a two-step process. The first is to connect to chrome via a debug port and get a list of the running tabs. Then you pick a tab, and you get a websocket URL to connect to in order to send the actual API requests. Keeping that websocket alive is what keeps the remote object IDs alive that you asked for across API calls, and it also makes it possible to do things like subscribe to events in the debugger.
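The first of those two steps can be sketched like so. With Chrome started as `chrome --remote-debugging-port=9222`, a GET of `http://localhost:9222/json` returns a JSON list of tab descriptors, each with a `webSocketDebuggerUrl`; the sample payload below stands in for the live HTTP response.

```python
import json

def pick_websocket_url(tab_list_json, title):
    # Step one: parse the tab list and pick the tab to debug; its
    # webSocketDebuggerUrl is where step two (the websocket) connects.
    tabs = json.loads(tab_list_json)
    for tab in tabs:
        if tab.get("title") == title:
            return tab["webSocketDebuggerUrl"]
    raise LookupError(f"no tab titled {title!r}")

sample = json.dumps([
    {"title": "Example", "type": "page",
     "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/ABC123"},
])

url = pick_websocket_url(sample, "Example")
print(url)
```

A real client would then open a websocket to that URL and keep it alive; closing it is what invalidates the remote object IDs acquired over it.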
This Seems The Way Forward
This certainly seems like a staggering and epic undertaking. But it's better to lay the foundations, and have modest initial features, than to try going down a road that is a dead end in the long run.
The server and websocket abilities could all initially just come from C code, e.g. the libwebsockets library.
libRebol already provides a good way of tracking API handles, and those could be used as remote object IDs.
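A sketch of that idea (this is a hypothetical model, not the actual libRebol handle mechanism): the debug server keeps a table mapping generated IDs to live handles, and clients refer to values only by ID across the wire.

```python
import itertools

class HandleTable:
    def __init__(self):
        self._ids = itertools.count(1)
        self._handles = {}

    def register(self, value):
        # Hand out an opaque ID for a live value; the table entry plays
        # the role an API handle would in keeping the value alive.
        object_id = f"remote:{next(self._ids)}"
        self._handles[object_id] = value
        return object_id

    def resolve(self, object_id):
        return self._handles[object_id]

    def release(self, object_id):
        # Dropping the entry is what would let the value be GC'd,
        # analogous to releasing an API handle.
        del self._handles[object_id]

table = HandleTable()
oid = table.register(["some", "block", "value"])
print(oid, table.resolve(oid))
```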
Over the long run, stackless is still crucial here... because we don't necessarily want the entire debug server (that talks over sockets to the client) to be written in C. But if the server is running as usermode code inside the process being debugged, it's going to need to run without interfering with the suspended, mid-execution stacks of the user code it's debugging.
If done correctly, this could be bridged with talking to websockets in a browser...so a WASM interpreter in a browser could connect through Chrome DevTools and with a little fiddling make the calls. This would be a case where the interpreter would not need libwebsockets built in, because it would be leveraging what's already in Chrome.