Should EVALUATE/NEXT bomb on an error?

Currently if you try something bad in EVALUATE/NEXT, it throws an error:

>> evaluate/next [unspaced null]
** Script Error: unspaced requires line argument to not be null

As EVALUATE/NEXT is a relatively low-level service, it seems more likely that one would want to handle the error on the same basis as other possible return values:

>> [position product]: evaluate/next [unspaced null foo bar]
== [foo bar]

>> product
== <<unspaced-null>>  ; some error you can handle

In this case, the bomb isn't particularly informative, and it seems reasonable to say 'user beware: assume errors will happen'. It's also rather difficult to work around.

This sort of puts it in the same class as TRAP, just with different semantics:

trap [ok ok something bad] => [**something-bad null]
trap [ok ok] => [null ok]
evaluate [something bad ok ok] => [[ok ok] **something-bad]
evaluate [ok ok] => [[ok] ok]

I guess the wrinkle here is: how do you determine where something bad ends and ok ok resumes? That may or may not be obvious.


Quite right.

Rebol can't measure the span of a single step of evaluation without having the side-effect of running it. That's just the nature of the beast.

I'd once tried making a "neutral" mode of the evaluator which would only gather arguments but not have any side effects. This would be able to count through the function arguments, and the arguments to functions that were those arguments, and so on:

 >> eval-neutral [print "Hi" print "Bye"]
 == [print "Bye"]   ; no actual printing done, but arity of PRINT exploited

 >> eval-neutral [print "Bye"]
 == []

But this falls down the moment you run code which changes the definitions:

 >> redefine-print: func [] [print: does [print "PRINT is arity-0 now"]]

 >> eval-neutral [redefine-print print "Bye"]
 == [print "Bye"]  ; didn't actually *run* REDEFINE-PRINT

 >> eval-neutral [print "Bye"]
 == []  ; should have only stepped past PRINT, leaving "Bye"

Some aspect of this foundational problem applies any time you try to resume things. Hence the only workable granularity of resumption is the end of a BLOCK! or GROUP!.

(It's this "we can't know the limits of boundaries of expressions" that tripped up the idea of making the MATH dialect for precedence reordering able to mix expressions without putting all executable expressions between the operators in GROUP!s.)

I've written some ideas on making a more formal contract between callers and things that error. Perhaps you would be able to weigh in there.

We can't do anything about hard failures (which can occur anywhere in the middle of an incomplete expression, at any stack level, even from just looking up a word that's a typo). No hope of graceful handling there...

...BUT we can theoretically do something about the new and novel definitionally raised error antiforms, which emerge from the overall step and have not yet been promoted to a failure. Because the antiform ERROR! is a legitimate evaluation product, and the flow of control has not yet been interrupted.

(And luckily, all meaningfully interceptible errors are definitional. Read the above link to understand why.)

Though It Turns Out To Be Tricky. :thinking:

EVALUATE/NEXT gives back a multi-result pack, with the position as the first pack item, and the synthesized value as the second.
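So on success, a call looks something like this (illustrative transcript):

>> [pos value]: evaluate/next [1 + 2 10 + 20]

>> pos
== [10 + 20]

>> value
== 3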

But we don't want raised errors in PACK. In fact, if a function returns a raised error... that's the only thing it can return. Because instead of the antiform BLOCK! (the pack of values) you're returning an antiform ERROR!.

So EVALUATE/NEXT can't give you back both a raised error and a position.

Or...could it?

The expression completion position could be a field in the error itself.

Using some overlong descriptive names to illustrate:

[pos value]: evaluate/next [1 / 0 10 + 20] except e -> [
    if e.id = 'raised-error-in-evaluate-next [
        assert [e.error.id = 'divide-by-zero]  ; actual error is wrapped in e
        pos: e.resume-position  ; e.g. [10 + 20]
    ] else [
        fail e  ; some other error
    ]
]

There are more mundane approaches, such as adding an /EXCEPT refinement so that EVALUATE/NEXT/EXCEPT produces a ~[pos value error]~ pack instead of just a ~[pos value]~ pack. But then you have to remember to check that the error is not NULL on all paths, which sounds less foolproof.
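To see why: under that hypothetical /EXCEPT, every call site would look something like this, and nothing forces you to ever look at the third slot:

[pos value error]: evaluate/next/except [1 / 0 10 + 20]
if error [fail error]  ; easy to forget on some code path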

Another trick could be to have an /ENTRAP refinement. The concept behind ENTRAP is to take everything up one meta level...

>> entrap [10 + 20]
== '30

So 10 + 20 gave you a quoted 30. And if you had a plain ERROR! you would get a quoted error. If you had a null antiform you'd get a quasi-null.

>> entrap [pick [a b] 3]
== ~null~

This means all values will be metaforms... either quoted or quasi.

But then, if an ERROR! antiform is encountered... ENTRAP returns it in a plain form:

>> entrap [1 / 0]
== make error! [
    type: 'Math
    id: 'zero-divide
    message: "attempt to divide by zero"
    near: '[1 / 0 **]
    where: '[/ entrap console]
    file: ~null~
    line: 1
]

And it's the only plain form you can get. So if what comes back answers true to ERROR?, you know it actually represents a raised one. Otherwise your real result is the UNMETA of what you got (drop a quote level from quoted things, turn quasiforms into antiforms).

(It's a weird multiplexing trick, but it's serviceable...and kind of a testament to the versatility of the isotopic model.)

So there's hope on this! I'm actually working on something that needs this right now. Because without something like this, you cannot write TRAP in usermode...
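A rough sketch of what such a usermode TRAP could look like, leaning on ENTRAP's "plain ERROR! can only mean a raised error" multiplexing, and following the [error value] ordering from the TRAP examples above (the PACK and LET usage here is my assumption):

trap: func [code [block!]] [
    let result': entrap code
    if error? result' [  ; plain ERROR! = a raised error occurred
        return pack [result' null]
    ]
    return pack [null unmeta result']  ; no error, hand back the real value
]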

The idea of having to add meta-oriented refinements to every function that wants to do this turns out to be grating, and undermines the generality of the isotopic protocol.

So I decided to back down on the "no raised errors in packs, ever" policy, instead saying that PACK just doesn't allow it by default...and you have to use a different PACK* function to get them.

The trick is that if a pack decays to its first element, it first checks to see if any of the non-first-elements are raised errors...and promotes them to abrupt failure. This way you don't accidentally gloss over them.
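If I have that right, the protection plays out something like this (illustrative transcript):

>> pos: evaluate/next [1 / 0 10 + 20]  ; single target, pack decays to POS
** Math Error: attempt to divide by zero  ; error in VALUE slot promoted

>> [pos ^result']: evaluate/next [1 / 0 10 + 20]  ; meta slot defuses it

>> pos
== [10 + 20]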

So it's actually pretty trivial to accomplish the original desire now--another home run for isotopes! :baseball:

>> name: null

>> block: [1 + 2 1 / 0 10 + 20 unspaced ["hello" name]]

>> collect [
       while [[block ^result']: eval/next block] [
           if raised? unmeta result' [
               keep quasi (unquasi result').id
           ] else [
               keep unmeta result'
           ]
       ]
   ]
== [3 ~zero-divide~ 30 ~need-non-null~]

If you had told me when I woke up this morning that this issue would be solved by end of day, I would not have believed you.

That's some clean expressive power, right there. So many good ideas dovetailing together it almost hurts.

A Note On Why You Can't Intercept UNSPACED NULL

unspaced ["hello" null] gives a definitional error due to the choice of UNSPACED to return a definitional error in that case. But unspaced null causes a parameter type checking error, and is a hard failure. Type check errors are not definitional, which is by design--and we would not want to do otherwise.

It would be like making typos interceptible. Imagine if typos raised definitional errors. You'd say try takke block and the TRY would suppress the "no such variable as TAKKE" error and turn it to NULL. Then BLOCK would be evaluated in isolation.

You only want definitional errors to come from inside the functions themselves once they've started running and have all their arguments.

Theoretically, UNSPACED could make NULL one of its accepted parameter types. Then from the inside of its implementation, it could raise a definitional error related to it. I'll leave it as an exercise for the reader to think about why not to do that. :slight_smile:
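Just to show the shape of that (UNSPACED* is a made-up name, and the exact RAISE and <opt> forms are my assumptions):

unspaced*: func [line [<opt> block! text!]] [
    if null? line [
        return raise make error! "UNSPACED* got NULL"  ; definitional, interceptible
    ]
    return unspaced line  ; otherwise defer to the real UNSPACED
]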
