Should Floating Point Really Be The Default?

In an issue on Red's GitHub, Gregg gives this example:

red>> n: 0  while [n <= 0.3][print mold/all n  n: n + 0.1]
0
0.1
0.2
; Huh?

red>> n: 0  while [n <= 3][print mold/all n  n: n + 1]
0
1
2
3
; OK. I'm not crazy

red>> n: 0  while [n <= 0.4][print mold/all n  n: n + 0.1]
0
0.1
0.2
0.30000000000000004
0.4
; What... Oh... How do I work around that?
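For the record, the usual workaround is to drive the loop with an integer and convert only at the point of use, so rounding error never gets a chance to accumulate. A sketch in Red (written by hand here, not pasted from a console):

red>> i: 0  while [i <= 4][print mold/all (i / 10.0)  i: i + 1]
; each printed value is the nearest float! to i/10, not a running sum of rounding errors

That should print 0.0 through 0.4 with no drift, but it's exactly the kind of detour a newcomer shouldn't have to discover on their own.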

Sure, you can point everyone to "What Every Computer Scientist Should Know About Floating Point Arithmetic" and say it's not your problem. I mean geez, it's a mere 50 or 60 pages long.

But as Gregg says:

"Yes, we can explain it. Yes, not everyone will hit this. But [...] we don't just look like we're passing problems on and throwing our hands in the air, saying it's not our problem; it's part of our mission, and being empathetic to pain points is something people can hold up as examples of why to use Red, rather than why not to use Red."

This makes me wonder about the choice to use floating point as the default. Yes, it's in the hardware. Yes, you should be able to use it somehow. But given examples like the one above, it may not be a fit for the language's goals. So I thought I'd ask the AIs about it.


Note: The MONEY! datatype in R3-Alpha is not an IEEE floating-point type. It's something Ladislav wrote:

Deci significands are 87-bit long, unsigned, unnormalized, stored in little endian order. (Maximal deci significand is 1e26 - 1, i.e. 26 nines)

Sign is one-bit, 1 means nonpositive, 0 means nonnegative.

Exponent is 8-bit, unbiased.
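
If I'm reading that right, a money! value is just significand * 10^exponent with the sign applied: scaled decimal rather than scaled binary, so a tenth is represented exactly. Which is why, in an R3-Alpha console, something like this holds (sketched from memory, not a pasted session):

>> $0.1 + $0.1 + $0.1 = $0.3
== true

unlike the float! sum above, which lands on 0.30000000000000004.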

ChatGPT


Languages that avoid using floating-point representation for decimal numbers typically do so to provide better precision for certain kinds of calculations, especially in financial and business applications where exact decimal representation is critical.

Prominent Examples of Languages Avoiding Floating-Point as Default:

  1. COBOL:

    • What Went Right: COBOL uses fixed-point decimal arithmetic by default, which is crucial for financial calculations where exact precision is necessary. This choice made COBOL a staple in business, finance, and government systems, where avoiding floating-point errors is essential.

    • What Went Wrong: COBOL’s strict adherence to fixed-point arithmetic limited its applicability outside of these specific domains. It also meant that performance in non-financial contexts was slower compared to languages using floating-point arithmetic.

  2. Rexx:

    • What Went Right: Rexx, designed for ease of use in scripting and text processing, also opts for fixed-point arithmetic by default. This choice made it easy to write scripts without worrying about floating-point imprecision.

    • What Went Wrong: Rexx’s simplicity and fixed-point arithmetic made it less suitable for scientific and engineering applications where floating-point is necessary. As a result, its usage was limited to simpler scripting tasks.

  3. Ada:

    • What Went Right: Ada offers both fixed-point and floating-point types, giving developers the choice based on the application's needs. For safety-critical and financial applications, fixed-point types are used to ensure precision.

    • What Went Wrong: The flexibility came at the cost of complexity, as developers had to understand and choose the correct type for each use case. This complexity made Ada less popular than more straightforward languages like C or Python.

Consequences and Lessons Learned:

  • Precision vs. Performance: Languages that prioritize precision over performance by avoiding floating-point as the default are often limited to domains where that precision is paramount. They may lack the versatility and speed of languages that embrace floating-point arithmetic.

  • Adoption and Popularity: Fixed-point arithmetic can make a language less appealing for general-purpose programming, particularly in fields like scientific computing, graphics, and games where floating-point performance is critical.

  • Niche Success: Languages that avoid floating-point arithmetic find success in niche areas where their precision is a strong advantage, but they struggle to gain traction outside those niches.

Modern Developments:

Some modern languages, like Python, attempt to bridge the gap by offering multiple numeric types (e.g., float and decimal) and making it easy to choose the right one for the job. However, this still requires developers to be aware of the underlying issues and make conscious decisions about numeric types.

In summary, while avoiding floating-point arithmetic can prevent certain types of errors, it also limits the language's applicability and performance in domains where floating-point is the appropriate tool. The success or failure of these languages often hinges on how well they serve their intended niche and how easily developers can adapt to or work around their limitations.

Claude


Languages with Alternative Decimal Representations

  1. Python

    • While Python uses floating-point numbers by default, it also provides a Decimal type in its standard library.

    • The Decimal type offers precise decimal representation and arithmetic.

    • Fate: Widely successful language, with the Decimal type being particularly useful in financial applications.

  2. Julia

    • Julia uses floating-point by default but provides arbitrary-precision arithmetic via the BigFloat type.

    • It also offers a Rational type for exact representation of ratios.

    • Fate: Growing popularity in scientific computing and numerical analysis.

  3. Scala

    • Scala, running on the JVM, inherits Java's BigDecimal class for arbitrary-precision decimal arithmetic.

    • Fate: Successful in both industry and academia, particularly for big data processing.

  4. Haskell

    • Haskell provides a Rational type for exact representation of ratios.

    • It also has libraries for arbitrary-precision decimals.

    • Fate: Influential in programming language research and functional programming communities.

  5. Erlang

    • Erlang uses arbitrary-precision integers by default and has a rational module for exact rational arithmetic.

    • Fate: Successful in telecommunications and distributed systems.

What Went Right

  1. Precision in Financial Applications: Languages offering exact decimal representations are preferred in financial and accounting applications where precision is crucial.

  2. Predictability: Alternative representations often provide more predictable behavior, avoiding surprises like the one demonstrated in the Rebol example.

  3. Ease of Understanding: For beginners, exact representations can be easier to understand and reason about.

  4. Flexibility: Many languages that offer alternative representations do so alongside floating-point, giving developers the choice based on their specific needs.

What Went Wrong (or Challenges Faced)

  1. Performance: Exact representations and arbitrary-precision arithmetic are generally slower than hardware-supported floating-point operations.

  2. Memory Usage: Exact representations often require more memory, which can be a concern for large-scale applications or systems with limited resources.

  3. Compatibility: Deviating from the IEEE 754 standard (used for floating-point) can create interoperability issues with other systems and libraries.

  4. Learning Curve: Developers accustomed to floating-point may need to adjust their thinking and practices when working with alternative representations.

Conclusion

While floating-point remains the default in many languages due to its performance and hardware support, there's a growing recognition of its limitations. Languages that offer alternative representations, either as the default or as easily accessible options, provide developers with powerful tools to handle decimal arithmetic more precisely when needed. The success of these languages often comes from striking a balance between performance and precision, allowing developers to choose the right tool for their specific use case.