This one is pretty simple: I am changing a variable by ±0.1. But when I go to test it, it sometimes doesn’t change by the exact amount I expect; instead it gives me strange floating-point errors off of the expected number. I do have video proof of this, but this site won’t allow me to upload it here. I highly doubt it’s an issue on my part, since the logic is too simple for it to be human error.
Could you show us… at least something? What does the console say?
This was pulled from an actual project, which is why you see the button logic, but it shouldn’t have any bearing on the code it’s supposed to execute.
And this is what I mean by “floating point” error
Ok, so update: I have a workaround for this bug by adding some rounding logic just after the variable modifier logic runs. That said, I shouldn’t have to do that in the first place, so it’s still a bug that needs to be looked into.
Unfortunately, it’s not a bug in the normal sense. It’s a limitation: a floating-point rounding error. There’s a trade-off between accuracy and speed. I honestly can’t believe it still exists now that computers are so much faster and have so much more memory and storage. But it does.
From Gemini.
Why it Happens
Computers think in binary (base-2), but we think in decimal (base-10). Some fractions that look “clean” to us, like 0.1 or 1/10, are impossible to represent perfectly in binary. They become repeating decimals, similar to how 1/3 becomes 0.3333… in our decimal system.
When you add 0.1 repeatedly, the computer has to round that infinite binary string. Eventually, those tiny rounding errors stack up, and instead of getting exactly 0.3, you get something like 0.30000000000000004.
I do remember Silver-Streak talking about that once.
Rounding it should do the trick.