The problem of precision loss in Go floats

There is always a precision problem when certain decimals are used in calculations, and I don't know how Go deals with it.
What I don't understand is why, when the first two variables, a and b, are of the same type, the subtraction still loses precision.

package main

import "fmt"

func main() {

    a := 143.66
    b := 14.55
    c := a - b
    // c = c * 100
    fmt.Println(c)
    fmt.Printf("s = %T\n", a)
    fmt.Printf("s = %T\n", b)
    fmt.Printf("s = %T\n", c)

    d := 1129.6

    fmt.Println((d * 100))

}

The printed output is as follows:

129.10999999999999
s = float64
s = float64
s = float64
112959.99999999999

You will find this problem in almost all programming languages; it is the typical precision loss of binary floating-point numbers. With a fixed number of bits, many decimal fractions cannot be represented exactly in binary, and that mismatch causes the results shown above.
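A minimal sketch illustrating the point: printing the question's literals with more digits than the default shows that neither 143.66 nor 14.55 is stored exactly as a float64, so the subtraction operates on already-rounded operands (the extra-digit formatting is only for illustration).

package main

import "fmt"

func main() {
    a := 143.66
    b := 14.55

    // float64 stores the nearest representable binary value,
    // not the exact decimal written in the source code, so the
    // digits beyond the usual display precision are nonzero.
    fmt.Printf("a     = %.20f\n", a)
    fmt.Printf("b     = %.20f\n", b)
    fmt.Printf("a - b = %.20f\n", a-b)
}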

The solution is to use an exact arithmetic type. Java, for example, provides the BigDecimal class for this purpose; the corresponding functionality in Go lives in the math/big package: https://golang.org/pkg/math/big/
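A minimal sketch of that suggestion, using math/big's Rat type (an exact rational number) to redo the question's two calculations; the values are parsed from strings so they are never rounded to binary floats.

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // big.Rat stores values as exact fractions, so decimal
    // strings like "143.66" are represented without rounding.
    a, _ := new(big.Rat).SetString("143.66")
    b, _ := new(big.Rat).SetString("14.55")

    c := new(big.Rat).Sub(a, b)
    fmt.Println(c.FloatString(2)) // 129.11

    d, _ := new(big.Rat).SetString("1129.6")
    e := new(big.Rat).Mul(d, big.NewRat(100, 1))
    fmt.Println(e.FloatString(2)) // 112960.00
}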

This has nothing to do with the language; all languages behave this way. Any float or double type has this problem because the stored values themselves carry rounding error. If you need exact results, Java has BigDecimal for that.
