In an issue on Red's GitHub, Gregg gives this example:
red>> n: 0 while [n <= 0.3][print mold/all n n: n + 0.1]
0
0.1
0.2
; Huh?
red>> n: 0 while [n <= 3][print mold/all n n: n + 1]
0
1
2
3
; OK. I'm not crazy
red>> n: 0 while [n <= 0.4][print mold/all n n: n + 0.1]
0
0.1
0.2
0.30000000000000004
0.4
; What... Oh... How do I work around that?
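The same surprise appears in any language whose default decimals are IEEE 754 doubles. A minimal Python sketch (illustrative, not Red) reproduces the drift and shows one common workaround, accumulating in the standard library's exact decimal type instead:

```python
from decimal import Decimal

# IEEE 754 double: 0.1 has no exact binary representation, so
# repeated addition drifts and the loop stops one step early.
n, steps = 0.0, []
while n <= 0.3:
    steps.append(n)
    n += 0.1
print(steps)   # [0.0, 0.1, 0.2] -- the next n is 0.30000000000000004 > 0.3

# Workaround: accumulate in exact base-10 arithmetic.
n, steps = Decimal("0"), []
while n <= Decimal("0.3"):
    steps.append(n)
    n += Decimal("0.1")
print(steps)   # [Decimal('0'), Decimal('0.1'), Decimal('0.2'), Decimal('0.3')]
```

Same loop, same bounds; only the numeric representation differs.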
Sure, you can point everyone to "What Every Computer Scientist Should Know About Floating-Point Arithmetic" and say it's not your problem. I mean, geez, it's a mere 50 or 60 pages long.
But as Gregg says:
"Yes, we can explain it. Yes, not everyone will hit this. But [...] we don't just look like we're passing problems on and throwing our hands in the air, saying it's not our problem; it's part of our mission, and being empathetic to pain points is something people can hold up as examples of why to use Red, rather than why not to use Red."
This makes me wonder about the choice to use floating point as a default. Yes, it's in the hardware. Yes, you should be able to use floats somehow. But they may not be a fit for the language's goals, considering examples like the one above. So I thought I'd ask the AIs about it.
Note: The MONEY! datatype in R3-Alpha is not an IEEE floating point. It's something Ladislav wrote:
Deci significands are 87-bit long, unsigned, unnormalized, stored in
little-endian order. (Maximal deci significand is 1e26 - 1, i.e. 26
nines)
Sign is one bit, 1 means non-positive, 0 means non-negative.
Exponent is 8-bit, unbiased.
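As a quick sanity check on that layout (assuming nothing beyond the description above), a 26-nine significand does indeed need exactly 87 unsigned bits:

```python
# Maximal deci significand per the description: 26 nines = 1e26 - 1.
max_significand = 10**26 - 1
print(max_significand.bit_length())   # 87 -- fits the 87-bit field exactly
assert max_significand < 2**87
```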
ChatGPT
Languages that avoid using floating-point representation for decimal numbers typically do so to provide better precision for certain kinds of calculations, especially in financial and business applications where exact decimal representation is critical.
Prominent Examples of Languages Avoiding Floating-Point as Default:

COBOL:

What Went Right: COBOL uses fixed-point decimal arithmetic by default, which is crucial for financial calculations where exact precision is necessary. This choice made COBOL a staple in business, finance, and government systems, where avoiding floating-point errors is essential.

What Went Wrong: COBOL's strict adherence to fixed-point arithmetic limited its applicability outside of these specific domains. It also meant that performance in non-financial contexts was slower compared to languages using floating-point arithmetic.
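The fixed-point idea is easy to emulate in any language by scaling to the smallest unit. A minimal Python sketch (hypothetical helper names, non-negative amounts only, not COBOL syntax):

```python
# Fixed-point money as integer cents: integer addition is exact,
# unlike summing 0.1-style binary floats.
def to_cents(s: str) -> int:
    # Sketch: handles non-negative "D.CC"-style strings only.
    dollars, _, cents = s.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2] or "0")

def fmt(cents: int) -> str:
    return f"{cents // 100}.{cents % 100:02d}"

total = sum(to_cents(p) for p in ["0.10", "0.10", "0.10"])
print(fmt(total))         # 0.30
print(0.1 + 0.1 + 0.1)    # 0.30000000000000004
```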

Rexx:

What Went Right: Rexx, designed for ease of use in scripting and text processing, also opts for fixed-point arithmetic by default. This choice made it easy to write scripts without worrying about floating-point imprecision.

What Went Wrong: Rexx's simplicity and fixed-point arithmetic made it less suitable for scientific and engineering applications where floating-point is necessary. As a result, its usage was limited to simpler scripting tasks.

Ada:

What Went Right: Ada offers both fixed-point and floating-point types, giving developers the choice based on the application's needs. For safety-critical and financial applications, fixed-point types are used to ensure precision.

What Went Wrong: The flexibility came at the cost of complexity, as developers had to understand and choose the correct type for each use case. This complexity made Ada less popular than more straightforward languages like C or Python.
Consequences and Lessons Learned:

Precision vs. Performance: Languages that prioritize precision over performance by avoiding floating-point as the default are often limited to domains where that precision is paramount. They may lack the versatility and speed of languages that embrace floating-point arithmetic.

Adoption and Popularity: Fixed-point arithmetic can make a language less appealing for general-purpose programming, particularly in fields like scientific computing, graphics, and games where floating-point performance is critical.

Niche Success: Languages that avoid floating-point arithmetic find success in niche areas where their precision is a strong advantage, but they struggle to gain traction outside those niches.
Modern Developments:
Some modern languages, like Python, attempt to bridge the gap by offering multiple numeric types (e.g., float and decimal) and making it easy to choose the right one for the job. However, this still requires developers to be aware of the underlying issues and make conscious decisions about numeric types.
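That choice, and the awareness it demands, is easy to demonstrate in Python itself:

```python
from decimal import Decimal

# float: binary, hardware-fast, inexact for most decimal fractions.
print(0.1 + 0.2 == 0.3)                                   # False

# Decimal: base-10, exact for decimal literals, slower.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# The catch: constructing a Decimal *from a float* inherits the
# float's error, so the developer still has to know which is which.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```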
In summary, while avoiding floating-point arithmetic can prevent certain types of errors, it also limits the language's applicability and performance in domains where floating-point is the appropriate tool. The success or failure of these languages often hinges on how well they serve their intended niche and how easily developers can adapt to or work around their limitations.
Claude
Languages with Alternative Decimal Representations

Python

While Python uses floating-point numbers by default, it also provides a Decimal type in its standard library.

The Decimal type offers precise decimal representation and arithmetic.

Fate: Widely successful language, with the Decimal type being particularly useful in financial applications.
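A typical financial pattern with Decimal is exact arithmetic followed by explicit rounding to the cent under a stated rounding mode (values here are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

# Compute exactly, then round once, deliberately, to two places.
price = Decimal("21.587")
rounded = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(rounded)   # 21.59
```

Making the rounding step explicit is the point: nothing rounds until the program says so.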

Julia

Julia uses floating-point by default but provides arbitrary-precision arithmetic via the BigFloat type.

It also offers a Rational type for exact representation of ratios.

Fate: Growing popularity in scientific computing and numerical analysis.
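Julia's Rational has a direct analog in Python's fractions module, and the same exactness argument applies: summing 1/10 three times yields exactly 3/10, with no drift.

```python
from fractions import Fraction

# Exact rational arithmetic: no representation error to accumulate.
n = Fraction(0)
for _ in range(3):
    n += Fraction(1, 10)
print(n, n == Fraction(3, 10))   # 3/10 True
```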

Scala

Scala, running on the JVM, inherits Java's BigDecimal class for arbitrary-precision decimal arithmetic.

Fate: Successful in both industry and academia, particularly for big data processing.

Haskell

Haskell provides a Rational type for exact representation of ratios.

It also has libraries for arbitrary-precision decimals.

Fate: Influential in programming language research and functional programming communities.

What Went Right

Precision in Financial Applications: Languages offering exact decimal representations are preferred in financial and accounting applications where precision is crucial.

Predictability: Alternative representations often provide more predictable behavior, avoiding surprises like the one demonstrated in the Red example above.

Ease of Understanding: For beginners, exact representations can be easier to understand and reason about.

Flexibility: Many languages that offer alternative representations do so alongside floating-point, giving developers the choice based on their specific needs.
What Went Wrong (or Challenges Faced)

Performance: Exact representations and arbitrary-precision arithmetic are generally slower than hardware-supported floating-point operations.

Memory Usage: Exact representations often require more memory, which can be a concern for large-scale applications or systems with limited resources.

Compatibility: Deviating from the IEEE 754 standard (used for floating-point) can create interoperability issues with other systems and libraries.

Learning Curve: Developers accustomed to floating-point may need to adjust their thinking and practices when working with alternative representations.
Conclusion
While floating-point remains the default in many languages due to its performance and hardware support, there's a growing recognition of its limitations. Languages that offer alternative representations, either as the default or as easily accessible options, provide developers with powerful tools to handle decimal arithmetic more precisely when needed. The success of these languages often comes from striking a balance between performance and precision, allowing developers to choose the right tool for their specific use case.