The current Go 1.5 documentation is not clear about %g formatting. In particular, it states:
For floating-point values, width sets the minimum width of the field and precision sets the number of places after the decimal, if appropriate, except that for %g/%G it sets the total number of digits. For example, given 123.45 the format %6.2f prints 123.45 while %.4g prints 123.5. The default precision for %e and %f is 6; for %g it is the smallest number of digits necessary to identify the value uniquely.
This seems to imply that the precision for %g/%G is the total number of digits that will be output, while it is actually the number of significant digits, so leading zeroes do not contribute to the count. For example, formatting 0.00123 with %.3g prints it unchanged as 0.00123, while from the docs it seems that only 0.001 should be printed.
I propose changing the documentation, replacing the following:
except that for %g/%G it sets the total number of digits. For example, given 123.45 the format %6.2f prints 123.45 while %.4g prints 123.5.
with the following (bold marks the changes):
except that for %g/%G it sets the total number of significant digits. For example, given 123.45 the format %6.2f prints 123.45 while %.4g prints 123.5**, and 0.00123 is printed as 0.001 by %.3f, but printed as 0.00123 by %.3g**.
Or with a better statement and/or example, if the ones I provided are not accurate enough.