Alan Cox asked what is wrong with "%.*g" and FLT_DECIMAL_DIG (from float.h).
Okay, I'll explain.
DBL_DECIMAL_DIG is more appropriate, since we use type double (64 bits),
not type float (32 bits), to represent numeric values in OpenSCAD.
First, consider JavaScript:
$ node
> 94.8
94.8
> 0.12
0.12
> 0.1 + 0.02
0.12000000000000001
Now try printing 94.8 using C's printf() with "%.*g" and DBL_DECIMAL_DIG.
On MacOS, I get 94.799999999999997.
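A minimal sketch of that experiment (DBL_DECIMAL_DIG is 17 and comes from
C11/C++17 headers; on older toolchains use the literal 17):

#include <cstdio>
#include <cfloat>

int main() {
    // 17 significant digits round-trip any double, but they expose
    // the binary approximation of the decimal literal.
    printf("%.*g\n", DBL_DECIMAL_DIG, 94.8);       // 94.799999999999997
    printf("%.*g\n", DBL_DECIMAL_DIG, 0.1 + 0.02); // 0.12000000000000001
    return 0;
}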
If you use the "How To Print Floating Point Numbers Accurately" algorithm,
which is not available from the C library, then each unique float value has
a unique printed representation and nothing is truncated; the shortest
representation that reconstructs the original float value is always
produced.
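That exact algorithm isn't in the C library, but a common brute-force
stand-in gives the same shortest-round-trip behaviour: try increasing
precision until the printed text parses back to the identical double. A
sketch:

#include <cstdio>
#include <cstdlib>

// Brute-force shortest round-trip printing: not the paper's algorithm,
// but it yields the shortest "%g" form that survives a round trip.
static void shortest(double x, char *buf, size_t len) {
    for (int prec = 1; prec <= 17; ++prec) {
        snprintf(buf, len, "%.*g", prec, x);
        if (strtod(buf, nullptr) == x)  // parses back to the same value?
            return;                     // shortest faithful form found
    }
}

int main() {
    char buf[32];
    shortest(94.8, buf, sizeof buf);
    printf("%s\n", buf);  // prints 94.8, not 94.799999999999997
    return 0;
}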
On 13 November 2015 at 08:07, Alan Cox alan@lxorguk.ukuu.org.uk wrote:
A better approach would be to specify output accuracy, like
sprintf(buf, "%.*f", n, f);
where n is something like
What is wrong with "%.*g" and FLT_DECIMAL_DIG (from float.h)? None of the
other mucking around should be needed.
Alan
kintel wrote:
The primary reason why OpenSCAD sometimes cannot read back its STL output
is that CGAL doesn’t support reading triangles without an area. This tends
to happen when converting from CGAL’s internal representation to double,
where some insanely small triangles get their coordinates shifted a tiny
bit due to floating point accuracy limitations.
One solution, as mentioned earlier, could be to perform a topology-aware
surface optimization to ensure all triangles have a valid area, but since
CGAL is the most picky component and most other tools can handle zero-area
triangles, this hasn’t been a priority.
Hey Marius, dumb question time!
If we were to use Gmpfr (fixed precision floating-point number) up-front
(see http://doc.cgal.org/latest/Number_types/classCGAL_1_1Gmpfr.html)
instead of arbitrary-precision Gmpq (see
http://doc.cgal.org/latest/Number_types/classCGAL_1_1Gmpq.html) as the
kernel number type, would a lot of these issues just resolve themselves?
My train of thought being: the loss in precision happens up-front in the
operations, so the resulting pre-export mesh is already at or near the
expected export precision. No degenerate faces, because they've already
been computed out.
One likely change point:
https://github.com/openscad/openscad/blob/master/src/cgal.h#L43
Andrew.
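To make the idea concrete, a purely hypothetical sketch of the kind of
change this suggests (this is not OpenSCAD's actual src/cgal.h; the
typedef name and kernel choice are assumptions for illustration):

#include <CGAL/Gmpq.h>       // exact rationals, unbounded precision
#include <CGAL/Gmpfr.h>      // fixed-precision floating point
#include <CGAL/Cartesian.h>

// Today: exact rational coordinates; rounding only happens at export.
typedef CGAL::Gmpq NT;
// Andrew's idea: round up-front instead, so the mesh already sits at
// export precision before any export truncation occurs.
// typedef CGAL::Gmpfr NT;

typedef CGAL::Cartesian<NT> Kernel;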
Using high precision floating point internally for the mesh will not fix
the problem if we throw away most of the precision when we output the STL
file. When we output ASCII STL, we truncate each value to 6 decimal digits
of precision, and that is the most significant reason that thin triangles
are converted to zero-width triangles: most of the time, this happens in
the export code.
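A tiny illustration of that collapse (the values are chosen for the
example):

#include <cstdio>

int main() {
    double a = 10.0000001, b = 10.0000004;  // distinct as doubles
    printf("%.6g\n%.6g\n", a, b);           // both lines print "10" --
                                            // a thin triangle becomes
                                            // zero-width on export
    return 0;
}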
If we fix the ASCII STL export code to produce super high precision output
(the equivalent of 128 bit floating point precision, or better), then it
won't help in those rare cases where truncation to 64 bit floating point
precision is enough to create a zero width triangle. That's because the
programs that import the STL that we produce are most likely using 64 bit
floats for their mesh, and will throw away any extra precision beyond that
point. Any program that imports AMF is required by the standard to use at
least 64 bit precision for the mesh, and it's rare for programs to support
better than 64 bit float precision because the software and hardware
support is lacking.
Here's how I think we should fix the problem:
On Nov 13, 2015, at 14:00, clothbot andrew@plumb.org wrote:
If we were to use Gmpfr (fixed precision floating-point number) up-front
(see http://doc.cgal.org/latest/Number_types/classCGAL_1_1Gmpfr.html)
instead of arbitrary-precision Gmpq (see
http://doc.cgal.org/latest/Number_types/classCGAL_1_1Gmpq.html) as the
kernel number type, would a lot of these issues just resolve themselves?
I don’t think CGAL’s Nef polyhedra would support that number type:
http://doc.cgal.org/latest/Nef_3/classCGAL_1_1Nef__polyhedron__3.html
-Marius
On Nov 13, 2015, at 14:51, doug moen doug@moens.org wrote:
That's because the programs that import the STL that we produce are most likely using 64 bit floats for their mesh, and will throw away any extra precision beyond that point.
Also keep in mind that binary STL dictates 32-bit floats. That probably means there are importers out there that read ASCII STL into 32-bit floats as well.
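To illustrate what the 32-bit constraint costs (a sketch, reusing 94.8
from earlier in the thread):

#include <cstdio>

int main() {
    double d = 94.8;
    float  f = (float)d;   // what a 32-bit importer keeps
    printf("%.17g\n", d);  // 94.799999999999997
    printf("%.9g\n",  f);  // 94.8000031 (9 digits round-trip a float)
    return 0;
}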
(Lack of minkowski sum is a common problem, so we may have to code that for the new engine.)
Not sure if this helps, but: We currently implement minkowski sums as a union of convex hulls, so the missing piece is convex decomposition.
-Marius
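A minimal self-contained sketch of the convex building block Marius
mentions (the identifiers are illustrative, not OpenSCAD's): for two
convex polyhedra, the Minkowski sum is the convex hull of all pairwise
vertex sums, so the general case reduces to convex decomposition, this
step per pair of parts, and a union of the resulting hulls.

#include <vector>

// A vertex type just for this sketch.
struct Vec3 { double x, y, z; };

// Pairwise vertex sums of two convex polyhedra; the convex hull of the
// returned point set is their Minkowski sum. The non-convex algorithm
// runs this on every pair of convex parts and unions the hulls --
// convex decomposition is the missing piece.
std::vector<Vec3> pairwise_sums(const std::vector<Vec3>& a,
                                const std::vector<Vec3>& b) {
    std::vector<Vec3> out;
    out.reserve(a.size() * b.size());
    for (const Vec3& p : a)
        for (const Vec3& q : b)
            out.push_back({p.x + q.x, p.y + q.y, p.z + q.z});
    return out;
}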
Most use cases of minkowski would be more simply and accurately implemented with a 3D version of the offset function. Minkowski with a sphere is a terrible way to make rounded-edge objects.
Philipp Tiefenbacher wrote:
Hi,
In all the cases given, the number is not representable in base 2 (because
base 2 lacks the prime factor 5 of base 10, in which you gave your
examples), so the trailing ...5 becomes a ...4999-something, which then
gets rounded down, correctly.
This explains the seemingly "random" behaviour.
Greetings,
Philipp
I see. Converting to binary form:
0.1 = 0.00011001100110011001100...
0.2 = 0.00110011001100110011001...
Neither is exact.
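One way to see those approximations directly is printf's "%a", which
prints the exact bits of a double as a hexadecimal fraction:

#include <cstdio>

int main() {
    printf("%a\n", 0.1);           // 0x1.999999999999ap-4
    printf("%a\n", 0.2);           // 0x1.999999999999ap-3 (same mantissa)
    printf("%.17g\n", 0.1 + 0.2);  // 0.30000000000000004
    return 0;
}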
On Nov 13, 2015, at 15:50, Greg Frost Gregorybartonfrost@gmail.com wrote:
Most use cases of minkowski would be more simply and accurately implemented with a 3d version of the offset function. Minkowski with a sphere is a terrible way to make rounded edged objects.
3D offset is on the wishlist. It’s not a trivial operator to implement efficiently though. Pointers are welcome :)
-Marius
doug.moen wrote:
If we fix the ASCII STL export code to produce super high precision output
(the equivalent of 128 bit floating point precision, or better), then it
won't help in those rare cases where truncation to 64 bit floating point
precision is enough to create a zero width triangle. That's because the
programs that import the STL that we produce are most likely using 64 bit
floats for their mesh, and will throw away any extra precision beyond that
point. Any program that imports AMF is required by the standard to use at
least 64 bit precision for the mesh, and it's rare for programs to support
better than 64 bit float precision because the software and hardware
support is lacking.
I did a little test: let i = 1 + k, then repeatedly make k 10-fold
smaller, to see how small k has to get before it is ignored. When it is,
we will see i == 1.
I tested k = 0.02 and k = 0.06, so the tests go like:
1.02 == 1 ? ==> false
1.06 == 1 ? ==> false
1.002 == 1 ? ==> false
1.006 == 1 ? ==> false
and so on. This goes on until there are 16 0's:
1.0000000000000002 == 1 ? ==> false (15 0's)
1.0000000000000006 == 1 ? ==> false
1.00000000000000002 == 1 ? ==> true (16 0's)
1.00000000000000006 == 1 ? ==> true
What it means to me is:
So, questions:
Is this a fair test of the internal precision?
What does this tell us about OpenSCAD's internal precision, in terms of
"??? bit floating-point precision"?
Your test is correct: roughly 16 decimal digits of precision is pretty
close to reality. OpenSCAD uses IEEE 754 floating point arithmetic at 64
bit precision, which means 52 stored mantissa bits. Due to an implicit
leading bit, this really gives 53 binary digits of precision. log10(2^53)
is about 15.95 decimal digits of precision, which matches the 16 digits
your test found; 17 digits are what it takes to print a double so that it
reads back exactly.
OpenSCAD represents numbers internally as binary, not decimal, so thinking
about the numbers as decimals only takes you so far. Arithmetic results are
rounded, but they are rounded in binary, not rounded in decimal.
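A quick check of those numbers against what the C library itself reports
(DBL_DECIMAL_DIG needs C11/C++17):

#include <cstdio>
#include <cfloat>
#include <cmath>

int main() {
    printf("mantissa bits (incl. implicit): %d\n", DBL_MANT_DIG);    // 53
    printf("digits always preserved:        %d\n", DBL_DIG);         // 15
    printf("digits needed to round-trip:    %d\n", DBL_DECIMAL_DIG); // 17
    printf("log10(2^53) = %.2f\n", DBL_MANT_DIG * log10(2.0));       // 15.95
    return 0;
}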