
digitalmars.D.learn - Format g bug ?

Temtaime <temtaime gmail.com> writes:
import std.stdio;

void main()
{
	writefln(`%.2g`, 3.11);
	writefln(`%.2f`, 3.11);

	writefln(`%.1g`, 3.11);
	writefln(`%.1f`, 3.11);
}


Output:

3.1
3.11
3
3.1

But I expected:

3.1
3.11
3.1
3.11
Aug 09 2017
Temtaime <temtaime gmail.com> writes:
Sorry, messed up numbers

Expected:

3.11
3.11
3.1
3.1

Seems %g outputs one digit fewer
Aug 09 2017
Steven Schveighoffer <schveiguy yahoo.com> writes:
On 8/9/17 4:10 PM, Temtaime wrote:
> Sorry, messed up numbers
> 
> Expected:
> 
> 3.11
> 3.11
> 3.1
> 3.1
> 
> Seems %g outputs one digit fewer
I was bugged by this too. It's not a bug.

For the %f specifier, the precision is the number of digits *after* the 
decimal point. For the %g specifier, the precision is the number of 
significant digits, i.e. digits *before and after* the decimal point. So:

writefln(`%.2g`, 31.11) -> 31

The most annoying thing is that %g is the default specifier for floating 
point.

-Steve
Aug 09 2017
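
For anyone comparing the two specifiers side by side, here is a minimal 
sketch (assuming DMD/Phobos std.format, which follows C's printf rules for 
%g) showing that the precision counts total significant digits for %g but 
only fractional digits for %f:

import std.stdio;

void main()
{
	// %f: precision counts digits after the decimal point
	writefln(`%.2f`, 3.11);   // 3.11
	writefln(`%.2f`, 31.11);  // 31.11

	// %g: precision counts total significant digits
	writefln(`%.2g`, 3.11);   // 3.1
	writefln(`%.2g`, 31.11);  // 31  (both digits land before the decimal)

	// when the value needs more integer digits than the precision allows,
	// %g switches to scientific notation
	writefln(`%.1g`, 31.11);  // 3e+01
}

If you want a fixed number of digits after the decimal point regardless of 
the magnitude of the value, %.2f rather than %.2g is the specifier to use.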