Planning Ahead for BigNum INTEGER!

I've long been an advocate for the idea that INTEGER! should be an arbitrary precision implementation. It doesn't make sense to me why a high-level language which is targeting future applications would constrain itself to 32-bit, or (in R3-Alpha's case) 64-bit INTEGER! values. I've been trying--for instance--to "futureproof" conversions of INTEGER! to BINARY!, in anticipation of the change.

Now, due to Red's desire to be involved in cryptocurrency, they are adding BIGNUM! as a distinct datatype from INTEGER!:

...which is not the way I would do it. INTEGER! already has to detect math overflows in order to give an error when your math is out of range. I would prefer INTEGER! to start out using CPU register-size integers, and instead of erroring on overflows, promote the representation into a bignum...on demand.
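As a minimal sketch of that promote-on-overflow idea in C (all names here are hypothetical, not Ren-C's actual cell internals; `__builtin_add_overflow` is a GCC/Clang intrinsic):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical value cell: holds a register-size integer until an
 * operation overflows, at which point it would be promoted to a heap
 * bignum.  Illustrative only -- not Ren-C's actual representation. */
typedef struct {
    bool is_bignum;   /* false: 'small' is valid; true: promoted */
    int64_t small;
} IntCell;

/* Add two platform integers; on overflow, promote instead of erroring.
 * (A real implementation would allocate and fill a bignum here.) */
IntCell int_add(int64_t a, int64_t b) {
    IntCell v;
    int64_t sum;
    if (__builtin_add_overflow(a, b, &sum)) {  /* GCC/Clang intrinsic */
        v.is_bignum = true;  /* would point at a freshly made bignum */
        v.small = 0;
    } else {
        v.is_bignum = false;
        v.small = sum;
    }
    return v;
}
```

So the common case pays only one branch over plain register arithmetic, and the bignum machinery is touched only on demand.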

There's already bignum code in Ren-C

As it so happens, the HTTPS implementation forced the inclusion of some bignum arithmetic for cryptography. So Saphir, Atronix/R3, and Ren-C all have some bignum code in them.

There is nothing particularly special about this code, which comes from axTLS. If you read it (and the Red code) you can see that basic bignum implementations are more or less like doing arithmetic by hand in columns, or spinning wheels on an odometer. It isn't particularly hard code to write, if you constrain yourself to add/subtract/divide/multiply/modulus, and don't insist on it being super fast.
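To make the "columns and odometer" picture concrete, here is a toy sketch of schoolbook addition on decimal digit arrays (real bignum code uses machine-word limbs, not base-10 digits, but the structure is the same):

```c
#include <stddef.h>

/* Schoolbook addition on base-10 digit arrays, least significant digit
 * first -- the "arithmetic by hand in columns" described above.
 * Returns the number of digits written to 'out' (which must have room
 * for max(na, nb) + 1 digits). */
size_t bignum_add(const unsigned char *a, size_t na,
                  const unsigned char *b, size_t nb,
                  unsigned char *out) {
    size_t i, n = na > nb ? na : nb;
    unsigned carry = 0;
    for (i = 0; i < n; i++) {
        unsigned sum = carry
            + (i < na ? a[i] : 0)
            + (i < nb ? b[i] : 0);
        out[i] = sum % 10;   /* the digit stays in its column... */
        carry = sum / 10;    /* ...the carry spins the next wheel */
    }
    if (carry)
        out[n++] = carry;
    return n;
}
```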

For more operations and peer-reviewed, optimized, industrial-strength implementations, there are things like the GNU Multiple Precision Arithmetic Library (GMP). Of course it's going to be bigger: "High-level signed integer arithmetic functions (mpz). There are about 150 arithmetic and logic functions in this category."

The Identity Problem

I suggested that promoting INTEGER! to a bignum internal representation on overflow would not hurt performance for most applications. But there's an issue of identity in bignum operations.

Today this is the behavior for integers:

 foo: func [x [integer!]] [add x 1]
 smallnum: 0
 foo smallnum
 print smallnum ;-- prints 0, unaffected

So each integer cell has its own identity (as an ANY-IMMEDIATE!, "immediate value").

ADD or + do not go in and fiddle the bits in a way that is seen by "other instances of that integer", the way that APPEND reaches into the "data node" behind a series and affects all instances.

But with BigNums, making a new identity on each addition can be costly:

 bignum: 123456789012345678901234567890
 loop 1000000 [
     bignum: bignum + 1
 ]
How many bignum structures should that be "malloc()" ing? Naively done, that would be a million or more allocations, because each addition in Rebol is expected to produce a new-and-disconnected integer.

Getting around this by making ADD mutate BigNums--when it doesn't mutate other things--seems inconsistent. It's not a new problem; there were some issues with ADD of VECTOR!. In Red too: see "Adding a number to a vector changes the vector itself"

Proposal: Mutating ADD, non-mutating +

The thought I had would be that INTEGER!s would not be immediate by their nature, but that literals would be LOCK'd by default.

 >> x: 10
 >> add x 20
 ** Error, locked integer

 >> x: make integer! 10
 >> add x 20
 >> print x

Then, + would basically be using ADD with a COPY, and locking the result:

 +: enfix func [a b] [
     lock add copy a b
 ]

For non-bignums, this could be optimized internally so that if the input was a register-size integer and the result also fit in a register, no identity node would ever need to be created.

So someone can still do an inefficient bignum loop with +, if they really wanted to. But a savvy crypto-person would use ADD so they are modifying the existing bignum in place.

What this does is bolster Rebol's "mutability by default" story about operations like APPEND, bringing it further to ADD. It addresses the issue with VECTOR!, by saying that + will create a new vector while add would mutate the existing one. And it avoids introducing a new BIGNUM! datatype.

Like other changes, this can be skinned with Rebol2/R3-Alpha compatibility to make add 1 2 work in old code that needs it to.

(Note: as a side-benefit, this could remove the need for 64-bit math emulation code on 32-bit platforms. Currently, in an I-believe-to-be-misguided-standardization-effort, Rebol has tried to give those using the interpreter a baseline confidence that INTEGER! has 64-bit representation...even if that's not the native integer type. All that code could go, replaced by bignum code that's needed anyway, if you have any cryptography.)


This is another one of those "things that have to get figured out before Beta/One".

We don't actually need the implementation to be bignum yet...

...moreover I don't think there's time for it. There's just too much to do. (It's looking like a fully insane promise to ship this year as it is...adding and vetting BigNums is one of those things that could take months on its own!)

The only thing we need for Beta/One is the operators to be consistent with how they'd work w.r.t. the numeric mutability a bignum-based system would require.

Remember, you don't want to write a loop like:

 bignum: 123456789012345678901234567890
 loop 1000000 [
     bignum: bignum + 1
 ]
You'd rather have one bignum that's getting incremented, not a million dynamically allocated bignums that have to be copied on each addition. Conceptually think of it like doing math operators on a big vector of numbers and not wanting to make a copy each time. Except it's a big vector of digits representing a single number.
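The in-place alternative can be sketched as incrementing a single digit buffer with no per-addition allocation (again a toy base-10 representation, least significant digit first, purely illustrative):

```c
#include <stddef.h>

/* Increment a base-10 digit array (least significant digit first) in
 * place: no allocation, no copying -- the one number just rolls its
 * digits like odometer wheels.  Returns the new digit count, or 0 if
 * the buffer capacity 'cap' is exhausted. */
size_t bignum_incr(unsigned char *digits, size_t n, size_t cap) {
    size_t i;
    for (i = 0; i < n; i++) {
        if (digits[i] < 9) {
            digits[i]++;        /* no carry: done */
            return n;
        }
        digits[i] = 0;          /* 9 rolls over to 0, carry continues */
    }
    if (n < cap) {
        digits[n] = 1;          /* carry spilled into a new top digit */
        return n + 1;
    }
    return 0;                   /* out of room */
}
```

Run a million times over one buffer, this touches at most a few digits per call and never allocates, which is the behavior a mutating ADD would give the savvy user.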

Today ADD is not mutating:

 foo: func [x [integer!]] [add x 1]
 smallnum: 0
 foo smallnum
 print smallnum ;-- prints 0, unaffected

But I'm proposing ADD be the prefix mutating operator on arbitrary-precision integers (and vectors, and anything else you might ADD to). So this could run and print 1, without raising an error.

However, I think the default for this code should be to raise an error on the ADD. It seems to me that this very strongly parallels the inadvertent mutation of source blocks. Except here you'd be mutating the 0 that's referenced by the memory cell between smallnum: and foo (!)

Note that I said "that's referenced by the memory cell" and not "that lives in the memory cell". This is a very good argument for having source integers be LOCK'd...not CONST. Because if there's any chance people wanted to be thinking about someone having a "reference" to that integer down the line and wanting to influence it through that reference, they had to be handed some kind of indirection from the get-go. But if it was permanently locked and fit in a cell, then it will always fit in a cell--no indirection needed.

But once you've gone ahead and made your COPY of that INTEGER! from a cell, you can do all the CONST and MUTABLE on it you like. You paid for the node, might as well use it.

Eight months after I posted about it, PLUS still seems sound

Having PLUS as the prefix form of +, as a non-modifying variant of ADD, probably handles things. You don't see a tremendous number of ADD and SUBTRACT calls in source anyway, so changing them to PLUS and MINUS doesn't seem like that big a deal.

A naive version of PLUS would look like this:

 plus: chain [adapt 'add [value1: copy value1] | :lock]
 +: enfix tighten :plus

That's naive because it's paying the cost we want to avoid for small numbers...namely creating an indirection node for an ephemeral mutable value, that's only going to get locked later. To get the performance we're used to on basic numeric operations on CPU-sized numbers, PLUS has to be smarter than this:

 plus: chain [
     adapt 'add [
         if integer? :value1 [return plus-integer-optimized value1 value2]
         value1: copy value1
     ] | :lock
 ]
So something more like that, but in native code. It should only promote cell-sized integers to memory indirections via COPY if it absolutely has to.

(Again: today's memory indirections would not be BigNums, with all the issues those bring; they would just be 64-bit integers. The point is to get the interface right, so bignums would have the right foundation to build on.)

This needs review and doing if it's going to be done.

What would be very helpful to me would be if someone could learn a bit more about Red's BIGNUM! and give me a summary of it, and raise any relevant challenges to my proposal here. I'd like to know about any hard down-the-road issues that are going to come up on integrating bignums into Rebol before I start. We need a table of names for everything and an understanding of which operators on numbers today would modify and which would not.

...afterthought: could this make us build fully standard C89?

One of the "won't build on older C compilers" aspects of R3-Alpha is its use of long long, in order to try and standardize to a 64-bit INTEGER! type. If that standard were removed (by having arbitrary precision on all platforms) then long long could be optional, using whatever the largest legal int type available was.

We might envision a world where if all the // comments were merely auto-translated into /* ... */, the codebase would have only one issue keeping it from being fully C89 pedantic... the use of declarations after statements within a code block. Technically speaking, a simple auto-generator could probably take care of that too... just introduce a new scope each time it happens (!)

 void foo() {
     int x = 1;
     x = x + 1;
     int y = 2;
     y = y + 2;
 }

That gives:

test.c:4:10: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
      int y = 2;

But if it were auto-transform'd:

 void foo() {
     int x = 1;
     x = x + 1;
     { int y = 2;
     y = y + 2; }
 }

That would be fine! If you have the interest, @Mark-hi, you might look at @Brett's C lexicals and see if you could twist that to make a PARSE-based preprocessor that can transform the existing sources to build as C89 w.r.t. declaration after statement and comments. Then when bignum gets rid of the long long dependency, it'll be possible.

It's been a year since I've weighed in on the ideas surrounding BigNum arithmetic. But we still need it, and the idea that written-in-C Rebol extensions can always lean on INTEGER! via libRebol to get their fast BigNum math is very appealing. Look at how you have to write m = c^d mod n in the Crypt module's rsa.c:

 bigint *RSA_private(const RSA_CTX *c, bigint *bi_msg)
 {
 #ifdef CONFIG_BIGINT_CRT
     return bi_crt(c->bi_ctx, bi_msg, c->dP, c->dQ, c->p, c->q, c->qInv);
 #else
     BI_CTX *ctx = c->bi_ctx;
     ctx->mod_offset = BIGINT_M_OFFSET;
     return bi_mod_power(ctx, bi_msg, c->d);
 #endif
 }

If your bigints were just REBVAL* to INTEGER!, you could say:

 rebValue(c, "pow", d, "mod", n)

I picked a small sample as the tip of that iceberg, and if you look around in that file you can see just what a PITA the bi_xxx routines are when it comes to allocation, freeing, etc.

Arbitrary precision is the way of the world now, and the BigNums our TLS uses should be the same BigNums the language uses, or we are failing.

The main musing I've been having is that we may not want to tie ourselves to any one BigNum implementation. Some are lighter, some are heavier. We could make BIGNUM! an extension-provided type distinct from INTEGER! (as Red does) and force you into a conscious decision of which you are using... thus allowing you to compile with or without it. But I don't like this line of thinking.

An alternative would be to have an overflow/underflow hook for the native integer math. The default hook would simply fail, so you could build a thin Rebol that only worked with platform ints. But if an extension registered a hook, then the overflow would pass control to the extension to re-engineer the cell to point to a dynamic structure of its choice. You could build with one-and-only-one such hook installed (which could be enforced by the uniqueness requirements of linking, and the "NoBigInt Extension" could be the extension that just supplies the dummy hook that fails).

(Demoting back to a platform-sized int would be at the discretion of the extension when handing back a result from any math operation.)
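A sketch of what that hook wiring might look like in C (every name here is hypothetical; the point is just the single function pointer, with a failing default standing in for the "NoBigInt Extension"):

```c
#include <stdint.h>

typedef enum { CELL_SMALL, CELL_BIG, CELL_ERROR } CellKind;

/* Hypothetical cell: a platform int, or (after a hook fires) a pointer
 * to whatever arbitrary-precision structure the extension chose. */
typedef struct {
    CellKind kind;
    int64_t small;
    void *big;     /* owned by the extension when kind == CELL_BIG */
} Cell;

typedef void (*OverflowHook)(Cell *out, int64_t a, int64_t b);

/* Default hook: a thin build with no bignum extension just errors. */
static void no_bigint_hook(Cell *out, int64_t a, int64_t b) {
    (void)a; (void)b;
    out->kind = CELL_ERROR;    /* signal "math overflow" */
}

static OverflowHook overflow_hook = no_bigint_hook;

/* An extension would call this once at load time; link-level uniqueness
 * could enforce the one-and-only-one rule mentioned above. */
void register_overflow_hook(OverflowHook hook) { overflow_hook = hook; }

void cell_add(Cell *out, int64_t a, int64_t b) {
    int64_t sum;
    if (__builtin_add_overflow(a, b, &sum)) {
        overflow_hook(out, a, b);  /* extension re-engineers the cell */
    } else {
        out->kind = CELL_SMALL;
        out->small = sum;
    }
}
```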

Anyway: just wanted to mention this interesting point about switching it around so that any C powered crypto escapes out to use Rebol INTEGER! to back its arithmetic operations, via calls to libRebol.


I recently had a look at Erlang, where I looked at tail recursion for factorial calculation.
Basically it had to do with calculating a value that is far too big for a normal int or even a float, giving about 1750 digits.
The code is as expected (in fac_func.erl):

 another_factorial(0) -> 1;
 another_factorial(N) -> N * another_factorial(N-1).

 1> c(fac_func).
 2> fac_func:another_factorial(200).
 3> fac_func:another_factorial(2000).

Is this what you have in mind?

Just for your information.
There is a big difference between needing the result of c^d mod n and obtaining said result by modding c^d by n. Nobody does the latter, for good reason: most of the work in calculating c^d is useless wasted effort. Instead, the calculation is done incrementally, multiplying by c and reducing it to a remainder every time the product exceeds n. So it is always a single operation (function) that takes three arguments, and never an expression with the two operators "power" and "modulo".
Still need the bignum math though, since in typical usage c, d, and n are all many, many digits.
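The reduce-as-you-go calculation described above is classic square-and-multiply modular exponentiation. A sketch on 64-bit integers (a bignum version has the identical shape; operands here must stay below 2^32 so the products don't overflow, and real RSA code layers further optimizations on top):

```c
#include <stdint.h>

/* Computes (base^exp) mod m, reducing mod m at every step so that no
 * intermediate value ever exceeds m^2 -- never materializing base^exp.
 * This is why it's one three-argument operation, not "power" followed
 * by "modulo".  Operands must be < 2^32 so products fit in 64 bits. */
uint64_t mod_pow(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;  /* multiply, reduce at once */
        base = (base * base) % m;          /* square, reduce at once */
        exp >>= 1;
    }
    return result;
}
```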


Another Brian doing a pretty good job of 'xplainin'.