The pdf/create function takes a block! and evaluates its contents in a context containing helper functions. The result should be a completely modeled PDF document ready for serialization:
doc: pdf/create [
    info [
        title "A Document"
        author "@rgchris"
    ]

    add-page 640x480 [
        ; supported colors:
        ; RGB 0.0.0 - 255.255.255
        ; Gray 0 - 100
        ; CMYK 0.0.0.0 - 100.100.100.100
        ;
        set-fill 0.100.100.0
        set-pen 0.0.0.0

        ; several helper functions are available in page context for
        ; altering the graphics state
        ;
        set-line-width 2

        ; other helper functions apply graphics to the page
        ;
        draw none non-zero [
            rectangle 20x20 600x440
        ]

        draw line none [
            rectangle 530x370 50x50
        ]

        set-dash-array [4 5] 2
        set-line-cap 'round

        draw line none [
            move-to 0x0
            curve-to
                as-pair 0 page/height
                as-pair page/width 0
                as-pair page/width page/height
        ]
    ]

    add-page 200x200 [
        set-pen 204.0.0
        set-fill 10
        set-line-width 2

        ; PUSH creates an isolated graphics state in which changes
        ; do not affect subsequent graphics operations
        ;
        push [
            set-pen 80

            draw line none [
                repeat offset 21 [
                    move-to as-pair 0 offset - 1 * 10
                    line-to as-pair 200 offset - 1 * -10 + page/height
                ]
            ]
        ]

        ; this retains the red pen from before the PUSH
        ;
        draw line even-odd [
            rectangle 20x85 30x30
        ]
    ]

    ; fonts/text not yet implemented, this is a no-op
    ;
    add-font /Helvetica [
        spec
    ]
]
Some functions can be used outside the pdf/create context. For instance, the model can be serialized using the pdf/render function:
probe pdf/render doc
Relevance
The context-sensitive functions give both an appearance of dialecting and the transparency and rigor of regular functions. I think function specialization in Ren-C would make this an efficient and scalable approach.
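To make the specialization angle concrete, here is a minimal sketch. SPECIALIZE is real Ren-C, but SET-ATTRIBUTE and the attribute-based design are hypothetical stand-ins for however the page context is actually implemented:

set-attribute: func [name [word!] value] [
    print ["set" name "to" mold value]  ; placeholder behavior
]

; each page-context helper becomes a cheap specialization
set-fill: specialize :set-attribute [name: 'fill]
set-pen: specialize :set-attribute [name: 'pen]

set-fill 0.100.100.0  ; acts as set-attribute 'fill 0.100.100.0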
Likely relevant is that I'm pursuing "fully pluggable" evaluators, where you can reuse as much or as little of the base evaluator behavior in a dialect as you like.
This is much like how UPARSE is wired together with a map of COMBINATORs. There are generic ones (like the combinators for WORD!, GROUP!, or SET-WORD!), then more specific ones, like those for SOME and INTO. You can override them or replace them entirely, and inherit behaviors for your own derived combinators (with some limitations: an overridden combinator cannot currently delegate variadically to a combinator with a different parameterization than its own... not that it's impossible, but you can't do it at this moment).
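As a rough sketch of that customization (the names DEFAULT-COMBINATORS and the /COMBINATORS refinement follow how this has been discussed on the forum; treat the specifics as assumptions, not a stable API):

; start from the stock combinator map
my-combinators: copy default-combinators

; MY-SOME is a placeholder for a combinator you've derived
put my-combinators 'some :my-some

uparse/combinators data rules my-combinators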
By analogy, this would let you customize the base experience of running "DO" code as a map of EVALUATORs.
So if you wish to have something that's "a lot like plain DO, except PATH!s act differently", then you can run a block with a new mapping for PATH! => an evaluator that you write. (That's something that Redbol needs, because the current compatibility method is bad and is holding Ren-C back.)
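Purely hypothetically, that might look something like this. None of these names exist; the sketch just transplants the combinator-map idea onto evaluation:

; hypothetical: copy a stock evaluator map, remap PATH!
redbol-evaluators: copy default-evaluators

put redbol-evaluators path! func [path [path!]] [
    print ["Redbol-style evaluation of:" mold path]  ; placeholder
]

eval/evaluators [some/path 1 2] redbol-evaluators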
In the past, people have tried to accomplish evaluator reuse by looping with DO/NEXT, maybe intervening a bit on the steps they feel like tweaking. I've grumbled that I don't think that's particularly great, because it forces DO to be an amnesiac that can't build up state (like new LET variables).
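For reference, the pattern in question looks roughly like this (Rebol2-style DO/NEXT, which returns the step's value along with the remaining position):

code: [x: 1 + 2 print x]

while [not tail? code] [
    set [value code] do/next code
    ; ...tweak individual steps here; but each DO/NEXT starts
    ; from scratch, so no evaluator state carries across steps
]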
There still needs to be binding, and I've said quite a bit about that, with a hard-core belief that we do need scopes and string interpolation. But I think some of these dialect behavior issues may be best dealt with via parameterized evaluation, versus mechanisms more intended for letting users provide variables to those dialects.
It's sometimes useful to react based on first impressions (even if one revises them later), so I think it's worth just saying that the dashes seem suboptimal.
This does create a bit of a pickle when you're mixing DO code with a dialect, where something like SET has a meaning already. It's like you can't have it both ways.
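For instance, in plain code SET is already spoken for:

>> set 'fill 0.100.100.0  ; ordinary assignment, not a dialect instruction
== 0.100.100.0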
One thing I did in UPARSE was to distinguish non-dialected function calls with a terminal slash, making use of BLANK!-ended paths:
>> if uparse [1020] [i: negate/ integer!] [print ["Success, negated is" i]]
Success, negated is -1020
This way, you can still have parameters interpreted via UPARSE (including not making the function call if the rule-based parameters can't all be fulfilled by matches).
So that adds another thought point alongside the standard approach of throwing in groups.
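For comparison, the conventional group-based version of that rule would be something along these lines (relying on UPARSE rules being value-bearing, so x: integer! captures the match):

>> if uparse [1020] [x: integer! (i: negate x)] [print ["Success, negated is" i]]
Success, negated is -1020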
I'd definitely say that when it comes to dialects, pushing on words-separated-by-spaces should be a strong instinct (and here's a place where I've been influenced by your insistence...)
add page 640x480 [
    set fill 0.100.100.0
    set pen 0.0.0.0
    set line-width 2

    draw none non-zero [ ; non-obvious to the layperson what this means
        rectangle 20x20 600x440
    ]

    draw line none [ ; ...or this parameterization
        rectangle 530x370 50x50
    ]
    ...
(I can look up what those things mean; I'm just trying to ask whether there's some expression there that I'd look at and instinctively go "oh, I know what that is".)
I do see the merit to this. My leaning in these experiments is toward idiomatically consistent code as opposed to full-on natural language: it's cheap, scalable, and interoperable, and it largely tracks the PDF command set. If anything, this is the low-level interface (albeit with some convenience methods) on which dialects and other shortcuts can be built.
A case could also be made for dropping the set- and add- prefixes, which are only there as a hint anyway.
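For illustration, the second page above would then read something like (hypothetically):

page 200x200 [
    pen 204.0.0
    fill 10
    line-width 2

    draw line even-odd [
        rectangle 20x85 30x30
    ]
]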