
#1 2009-11-01 12:57:23

PingPing
Member
From: /dev/null
Registered: 2009-07-24
Posts: 39

'long double' in GCC - what precision in decimal digits?

I'm new to programming and I've just started an evening course in C.
I've been looking at Data Types and Qualifiers and have a question about 'long double' in GCC.
I see that 'double' is 8 bytes (64 bits) in size, giving 53 bits to the significand (mantissa) and a precision of log10(2^53) ≈ 15.95, i.e. roughly 16 decimal digits.
But how many bits are in the significand of the 'long double' data type in GCC?
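
For reference, here's the quick check I did for double -- just a sketch that assumes the usual <float.h> macros and log10 from <math.h> (link with -lm):

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* DBL_MANT_DIG is the number of base-FLT_RADIX digits (bits on
           ordinary machines) in the significand of a double. */
        printf("double significand bits: %d\n", DBL_MANT_DIG);
        printf("decimal digits:          %.2f\n", DBL_MANT_DIG * log10(2.0));
        return 0;
    }

With a 53-bit significand that prints 53 and 15.95.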

Offline

#2 2009-11-01 14:13:46

Trent
Member
From: Baltimore, MD (US)
Registered: 2009-04-16
Posts: 990

Re: 'long double' in GCC - what precision in decimal digits?

In most cases, code to the standard (C99), not to the compiler (GCC).  I suggest you investigate section 5.2.4.2.2 of the C99 standard, which describes the floating point types in detail.

http://www.open-std.org/jtc1/sc22/wg14/ … .html#9899

It's a little slow wading through the standardese, but the gist is that the long double type is not required to be any larger than a regular double.  It obviously is larger in the GCC you're using (I suspect the sizes would be different on other architectures), but it's not required to be that way (even assuming the compiler fully implements C99).

If you're new to C programming, you may not yet appreciate that most of the time you shouldn't have to care about the sizes of types.  They're handy to know for debugging purposes, but you should not code to them.  Use int where you want integers, without caring whether int is 16, 32, or 64 bits wide, or even an oddball size like 20 or 36.  Use long only when an int really isn't long enough, which is fairly rare even with 16-bit ints (32768-element arrays, anyone?).  Similarly, use double almost indiscriminately for floating-point numbers, unless it actually causes problems.  Even 32-bit floating-point numbers (a typical float) are precise enough for many, many applications.

That said, to find out how many bits are in the mantissa of a long double, simply print the value of the LDBL_MANT_DIG macro, defined in <float.h>.  Mine prints 64.
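
For example, something like this prints all three (just a sketch; the numbers vary by platform):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* Significand sizes, in base-FLT_RADIX digits (bits on typical hardware). */
        printf("FLT_MANT_DIG  = %d\n", FLT_MANT_DIG);
        printf("DBL_MANT_DIG  = %d\n", DBL_MANT_DIG);
        printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
        return 0;
    }

On a typical x86 system, where GCC's long double is the 80-bit extended type, those come out as 24, 53 and 64.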

Offline

#3 2009-11-01 15:05:08

PingPing
Member
From: /dev/null
Registered: 2009-07-24
Posts: 39

Re: 'long double' in GCC - what precision in decimal digits?

Thanks Trent.
Just a quick further question: am I correct in my understanding that a 'float' gives numerical accuracy to 7 decimal digits while a calculation involving doubles gives accuracy to 16 decimal digits?

Offline

#4 2009-11-01 17:51:57

Trent
Member
From: Baltimore, MD (US)
Registered: 2009-04-16
Posts: 990

Re: 'long double' in GCC - what precision in decimal digits?

That may be true for your system.  The C99 standard guarantees at least six unambiguous decimal digits for floats, and at least ten for doubles.  Since a binary significand never corresponds to an exact whole number of decimal digits, real implementations end up with "approximately" 7 decimal digits of precision for float, with some ambiguity in the 7th.  If you need to know the specifics for your compiler, the macros to look at are FLT_DIG and DBL_DIG, again defined in <float.h>.  I get 6 and 15 for float and double, respectively.
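
If you want to see it rather than just take the macros' word for it, something along these lines works (a rough sketch; the exact digits depend on your platform):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* Decimal digits guaranteed to survive a round trip through each type. */
        printf("FLT_DIG = %d, DBL_DIG = %d, LDBL_DIG = %d\n",
               FLT_DIG, DBL_DIG, LDBL_DIG);

        /* 0.1 has no exact binary representation, so the stored values
           show where each type's precision runs out. */
        float  f = 0.1f;
        double d = 0.1;
        printf("float  0.1 is stored as %.20f\n", (double)f);
        printf("double 0.1 is stored as %.20f\n", d);
        return 0;
    }

On an IEEE 754 system the float comes back as 0.10000000149..., good to about 7 digits, and the double as 0.10000000000000000555..., good to about 16.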

Offline
