Discussion:
Enabling compiler warning flags
David A. Wheeler
2012-12-18 05:28:14 UTC
Did you realize that several GNU projects now enable virtually
every gcc warning that is available (even including those that
are new in the upcoming gcc-4.8, for folks that use bleeding edge gcc)
via gnulib's manywarnings.m4 configure-time tests?
Of course, there is a list of warnings that we do disable,
due to their typical lack of utility and the invasiveness
of changes required to suppress them.
Is there any way that the autoconf (or automake) folks could make compiler warnings much, much easier to enable? Preferably enabled by default when you start packaging something? For example, could gnulib warnings and manywarnings be distributed and enabled as *part* of autoconf? If not, could autoconf at least strongly advertise the existence of these, and include specific instructions on how to quickly install them? The autoconf section on "gnulib" never even *MENTIONS* the "warnings" and "manywarnings" stuff! And while automake has warnings, they are for the automake configuration file... not for compilation.

Compiler warning flags cost nearly nothing to turn on when you're *starting* a project, but they're harder to enable later (a thousand warnings about the same thing later is harder than fixing it the first time). And while some warnings are nonsense, their use can make the resulting software much, much better. If we got people to turn on warning flags all over the place, during development, a lot of bugs would simply disappear.
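
As a minimal illustration (a hypothetical fragment, not tied to any particular project), this is the kind of bug that warning flags catch at compile time rather than in the field:

    #include <stdio.h>

    /* With a plain "gcc -c example.c" this typically compiles silently;
     * with "gcc -Wall -Wextra -c example.c" the compiler flags both the
     * format-string mismatch and the suspicious assignment. */
    void report(long count)
    {
        int done = 0;

        printf("processed %d items\n", count);  /* -Wformat: %d vs. long */
        if (done = 1)                           /* -Wparentheses: assignment used as condition */
            printf("done\n");
    }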

--- David A. Wheeler
Jeffrey Walton
2012-12-18 06:10:14 UTC
On Tue, Dec 18, 2012 at 12:28 AM, David A. Wheeler
Post by David A. Wheeler
Did you realize that several GNU projects now enable virtually
every gcc warning that is available (even including those that
are new in the upcoming gcc-4.8, for folks that use bleeding edge gcc)
via gnulib's manywarnings.m4 configure-time tests?
Of course, there is a list of warnings that we do disable,
due to their typical lack of utility and the invasiveness
of changes required to suppress them.
Is there any way that the autoconf (or automake) folks could make compiler warnings much, much easier to enable? Preferably enabled by default when you start packaging something? For example, could gnulib warnings and manywarnings be distributed and enabled as *part* of autoconf? If not, could autoconf at least strongly advertise the existence of these, and include specific instructions on how to quickly install them? The autoconf section on "gnulib" never even *MENTIONS* the "warnings" and "manywarnings" stuff! And while automake has warnings, they are for the automake configuration file... not for compilation.
Compiler warning flags cost nearly nothing to turn on when you're *starting* a project, but they're harder to enable later (a thousand warnings about the same thing later is harder than fixing it the first time). And while some warnings are nonsense, their use can make the resulting software much, much better. If we got people to turn on warning flags all over the place, during development, a lot of bugs would simply disappear.
If you are going to try the waters with warnings, you should also
consider the flags to integrate with platform security.

Platform security integration includes fortified sources and stack
protectors. Here are the flags of interest:
* -fstack-protector-all
* -z,noexecstack
* -z,noexecheap (or other measure, such as W^X)
* -z,relro
* -z,now
* -fPIE and -pie for executables

FORTIFY_SOURCE=2 (FORTIFY_SOURCE=1 on Android 4.1+), where available.
I know Drepper objects to the safer string/memory functions, but his
way (the way of 1970's strcpy and strcat) simply does not work. I
grant that the safer functions are not completely safe, but I
refuse to throw the baby out with the bath water.
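
For concreteness, a minimal sketch of what fortification buys (hypothetical fragment; on glibc it takes effect with optimization, e.g. gcc -O2 -D_FORTIFY_SOURCE=2):

    #include <string.h>

    void copy_name(const char *untrusted)
    {
        char buf[16];

        /* With -O2 -D_FORTIFY_SOURCE=2, glibc routes this call through a
         * checked variant (__strcpy_chk); an overflowing copy aborts the
         * process at run time instead of silently smashing the stack. */
        strcpy(buf, untrusted);
    }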

These measures would have stopped a number of recent high profile
0-days and security vulnerabilities, including those against MySQL
(http://seclists.org/bugtraq/2012/Dec/12 and
http://seclists.org/bugtraq/2012/Dec/11) and Pidgin
(http://seclists.org/fulldisclosure/2012/Jul/183).

For those who think it's over the top, let them shoot themselves
in the foot by backing off security integration. Consider: Drepper is
an expert, and even his loader and runtime library make appearances on
Bugtraq and Full Disclosure. Mere mortals (like me and many other
developers) need the integration to help build a secure system.

A hardened or secure toolchain should be a part of every developer's
warchest. It starts with the tools like Autoconf.

Jeff
Russ Allbery
2012-12-18 06:16:42 UTC
Post by Jeffrey Walton
FORTIFY_SOURCE=2 (FORTIFY_SOURCE=1 on Android 4.1+), where available.
I know Drepper objects to the safer string/memory functions, but his
way (the way of 1970's strcpy and strcat) simply does not work. I
grant that the safer functions are not completely safe, but I
refuse to throw the baby out with the bath water.
Having tried both styles, what works even better than replacing strcpy and
strcat with strlcpy and strlcat, or the new *_s functions, is to replace
them with asprintf. You have to do a little bit of work to be guaranteed
to have asprintf (or a lot of work if you want to support platforms with a
broken snprintf as well), but gnulib will do it for you, and that coding
style is so much nicer than trying to deal with static buffers and
worrying about truncation, particularly if you design the software with
that in mind from the start. Yes, it's probably slower, but I'll trade
speed for clarity and safety nearly all of the time.
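
A minimal sketch of that style (hypothetical helper; asprintf is a GNU/BSD extension, which is why gnulib has to guarantee it):

    #define _GNU_SOURCE        /* for asprintf() on glibc */
    #include <stdio.h>

    /* Build "dir/file" without a fixed-size buffer: either the whole
     * string is allocated, or the call fails and the caller is told so. */
    char *join_path(const char *dir, const char *file)
    {
        char *result;

        if (asprintf(&result, "%s/%s", dir, file) < 0)
            return NULL;       /* allocation failed */
        return result;         /* caller frees */
    }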

(Or you could also dodge the memory management problems by using a C
framework that supports garbage collection, like APR, but that's farther
afield of this list.)
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Walton
2012-12-18 06:23:33 UTC
Post by Russ Allbery
Post by Jeffrey Walton
FORTIFY_SOURCE=2 (FORTIFY_SOURCE=1 on Android 4.1+), where available.
I know Drepper objects to the safer string/memory functions, but his
way (the way of 1970's strcpy and strcat) simply does not work. I
grant that the safer functions are not completely safe, but I
refuse to throw the baby out with the bath water.
Having tried both styles, what works even better than replacing strcpy and
strcat with strlcpy and strlcat, or the new *_s functions, is to replace
them with asprintf. You have to do a little bit of work to be guaranteed
to have asprintf (or a lot of work if you want to support platforms with a
broken snprintf as well), but gnulib will do it for you, and that coding
style is so much nicer than trying to deal with static buffers and
worrying about truncation, particularly if you design the software with
that in mind from the start. Yes, it's probably slower, but I'll trade
speed for clarity and safety nearly all of the time.
Yeah, I think you are right about asprintf (though I have never used it).

I can't count how many times I've seen silent truncation due to
sprintf. Most recently, I pointed it out on some SE Android patches
(Android port of SELinux) that passed by the NSA-sponsored mailing
list. They went unfixed. Amazing.

Jeff
Russ Allbery
2012-12-18 06:29:58 UTC
Post by Jeffrey Walton
Yeah, I think you are right about asprintf (though I have never used it).
I can't count how many times I've seen silent truncation due to sprintf.
Most recently, I pointed it out on some SE Android patches (Android port
of SELinux) that passed by the NSA-sponsored mailing list. They went
unfixed. Amazing.
Silent truncation is the primary reason why strlcpy and strlcat aren't in
glibc. Both functions are designed to silently truncate when the target
buffer isn't large enough, and few callers deal with that. This
ironically can actually create other types of security vulnerabilities
(although it's probably less likely to do so than a stack overflow).
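
For illustration, the check that callers tend to omit (sketched with snprintf, since strlcpy is not in glibc; a strlcpy return value would be tested the same way against the buffer size):

    #include <stdio.h>

    /* Returns 0 on success, -1 if the result would not fit.  Callers that
     * skip this test silently pass a truncated name along. */
    int build_lockfile_name(char *out, size_t outsize, const char *base)
    {
        int n = snprintf(out, outsize, "%s.lock", base);

        if (n < 0 || (size_t)n >= outsize)
            return -1;         /* truncated (or output error): refuse to continue */
        return 0;
    }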

asprintf guarantees that you don't have silent truncation; either you run
out of memory and the operation fails, or you get the whole string. The
cost, of course, is that you now have to do explicit memory management,
which is often what people were trying to avoid by using static buffers.
But it *is* C; if you're not going to embrace explicit memory management,
you may have picked the wrong programming language.... :)

strlcpy and strlcat have some benefit in situations where you're trying to
add some robustness (truncation instead of overflow) to code with
existing, broken APIs that you can't change, which I suspect was some of
the original motivation. But if you can design the APIs from the start,
I'd always use strdup and asprintf (or something more sophisticated like
obstacks or APR pools) instead.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Mike Frysinger
2012-12-18 06:44:48 UTC
Post by Jeffrey Walton
If you are going to try the waters with warnings, you should also
consider the flags to integrate with platform security.
Platform security integration includes fortified sources and stack
* -fstack-protector-all
* -z,noexecstack
* -z,noexecheap (or other measure, such as W^X)
* -z,relro
* -z,now
* -fPIE and -pie for executables
if you do choose to add these to your configure script, you should provide a
flag to control the behavior (default enabling is OK). some of these are not
cheap, especially for some architectures.
-mike
Jeffrey Walton
2012-12-18 07:55:23 UTC
Post by Mike Frysinger
Post by Jeffrey Walton
If you are going to try the waters with warnings, you should also
consider the flags to integrate with platform security.
Platform security integration includes fortified sources and stack
* -fstack-protector-all
* -z,noexecstack
* -z,noexecheap (or other measure, such as W^X)
* -z,relro
* -z,now
* -fPIE and -pie for executables
if you do choose to add these to your configure script, you should provide a
flag to control the behavior (default enabling is OK). some of these are not
cheap, especially for some architectures.
Good point. A noexec stack or noexec heap can be costly if using PaX.

What abstractions does Autoconf have to identify platforms and
security measures so a maintainer can supply one configure that works
for all platforms and architectures? For example, noexec stacks should
be enabled by default on x86 and x64. To split hairs even further,
noexec heaps should also be in effect on Gentoo systems running on
x86 and x64.

Leaving these security-related decisions to developers has a history
of failures due to gaps in awareness and knowledge (confer: audit the
software at ftp.gnu.org). In this case, Autoconf can close the gap and
be part of the solution.

Jeff
Mike Frysinger
2012-12-18 18:44:01 UTC
Post by Jeffrey Walton
Post by Mike Frysinger
Post by Jeffrey Walton
If you are going to try the waters with warnings, you should also
consider the flags to integrate with platform security.
Platform security integration includes fortified sources and stack
* -fstack-protector-all
* -z,noexecstack
* -z,noexecheap (or other measure, such as W^X)
* -z,relro
* -z,now
* -fPIE and -pie for executables
if you do choose to add these to your configure script, you should
provide a flag to control the behavior (default enabling is OK). some
of these are not cheap, especially for some architectures.
Good point. A noexec stack or noexec heap can be costly if using PaX.
those weren't the ones i was thinking of actually :). the mainline kernel
itself handles the GNU_STACK segment, although it relies on hardware support
for it. if the hardware doesn't support it, then that's where PaX's software
implementation might come into play.
Post by Jeffrey Walton
What abstractions does Autoconf have to identify platforms and
security measures so a maintainer can supply one configure that works
for all platforms and architectures?
if you use AC_CANONICAL_HOST, you get access to $host_os (e.g. "linux") and
$host_cpu (e.g. "x86_64"). but that's about it.
Post by Jeffrey Walton
For example, noexec stacks should
be enabled by default on x86 and x64. To split hairs even further,
noexec heaps should also be in effect on Gentoo systems running on
x86 and x64.
noexec is already enabled by default for all Linux/gcc/glibc targets. there
should be no need for people to specify it themselves. the only time it
really comes up anymore is if someone is writing pure assembly and didn't put
the prerequisite section in there.
-mike
Bob Friesenhahn
2012-12-18 18:48:10 UTC
Post by Jeffrey Walton
If you are going to try the waters with warnings, you should also
consider the flags to integrate with platform security.
Platform security integration includes fortified sources and stack
* -fstack-protector-all
* -z,noexecstack
* -z,noexecheap (or other measure, such as W^X)
* -z,relro
* -z,now
* -fPIE and -pie for executables
FORTIFY_SOURCE=2 (FORTIFY_SOURCE=1 on Android 4.1+), where available.
I understand your concern and the reasoning, but these sorts of options
are highly platform/target/distribution specific. It is easy to
create packages which fail to build on many systems. Later, the baked-in
settings of somewhat dated distribution tarballs may not meet
current standards.

Surely it is better to leave this to OS distribution maintainers who
establish common rules for OS packages and ensure that options are
applied in a uniform and consistent manner.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Jeffrey Walton
2012-12-19 05:55:54 UTC
On Tue, Dec 18, 2012 at 1:48 PM, Bob Friesenhahn
Post by Jeffrey Walton
If you are going to try the waters with warnings, you should also
consider the flags to integrate with platform security.
Platform security integration includes fortified sources and stack
* -fstack-protector-all
* -z,noexecstack
* -z,noexecheap (or other measure, such as W^X)
* -z,relro
* -z,now
* -fPIE and -pie for executables
FORTIFY_SOURCE=2 (FORTIFY_SOURCE=1 on Android 4.1+), where available.
I understand your concern and the reasoning, but these sorts of options are
highly platform/target/distribution specific. It is easy to create packages
which fail to build on many systems. Later, the baked-in settings of
somewhat dated distribution tarballs may not meet current standards.
Surely it is better to leave this to OS distribution maintainers who
establish common rules for OS packages and ensure that options are applied
in a uniform and consistent manner.
I think your arguments make a lot of sense and I would like to agree with you.

Unfortunately, the folks at Red Hat provided a "proof by counter
example" with the recent MySQL 0-days
(http://www.h-online.com/security/news/item/MariaDB-fixes-zero-day-vulnerability-in-MySQL-1761451.html).
I would have expected Red Hat security folks to be on top of it,
especially with a high-risk application such as a database that
accepts input from the network (some hand waving since PHP is likely
in front of it).

In the recent MySQL 0-days, the developers (MariaDB) and the platform
(Red Hat) both failed. It's been repeated time and time again
throughout our brief history.

Jeff
Paul Eggert
2012-12-19 15:47:40 UTC
Post by Jeffrey Walton
Unfortunately, the folks at Red Hat provided a "proof by counter
example" with the recent MySQL 0-days
No matter what the security regime is, it will always
break down. Always. The question is not whether security
could be improved. Security could always be improved.
The question is whether it's worth the effort.

Abstractly, I think Autoconf machinery to support security
checking is a good idea, but the devil is in the details.
One good way to help determine whether the proposed change
to Autoconf is worth the effort is to see whether someone
is willing to volunteer the work to make the proposed change happen,
and to donate their change to the FSF. Are you willing
and able to do that? If not, can you find someone who is?
Jeffrey Walton
2012-12-20 00:24:20 UTC
Hi Paul,
Post by Paul Eggert
Post by Jeffrey Walton
Unfortunately, the folks at Red Hat provided a "proof by counter
example" with the recent MySQL 0-days
No matter what the security regime is, it will always
break down. Always. The question is not whether security
could be improved. Security could always be improved.
The question is whether it's worth the effort.
Agreed.
Post by Paul Eggert
Abstractly, I think Autoconf machinery to support security
checking is a good idea, but the devil is in the details.
Agreed.
Post by Paul Eggert
One good way to help determine whether the proposed change
to Autoconf is worth the effort is to see whether someone
is willing to volunteer the work to make the proposed change happen,
and to donate their change to the FSF. Are you willing
and able to do that? If not, can you find someone who is?
Well, I work in the "secure software" field (whatever that's worth
given the collective failures of the security folks). I am willing to
try and help. I've been lurking on the list trying to learn (I don't
even use Autoconf - I still write my makefiles by hand).

I'm not sure how much help the FSF will be. Forgive my ignorance, but
are FSF and GNU equivalent? A couple of years ago when Savannah got
hacked (January, 2011), I sent an email asking for guidance for
projects on security related matters (broadly, secure coding guides,
data security and best practices, selection of cryptographic
algorithms, and the like). The email was sent to ***@gnu.org (the
listed point of contact), and it opened with: "There's two points
below that GNU could address. The first is storing plain text
passwords. Second is the lack of security topics in 'GNU Coding
Standards'." I did not even get a reply.

For completeness, I don't think this is an Autoconf problem. But I was
hoping Autoconf (or other friends, such as Automake) could be part of
the solution. I am at wit's end trying to figure out how to put a sizable
dent in the problem. I've been putting fires out with garden hoses,
and it's not working.

Jeff
Paul Eggert
2012-12-20 03:14:08 UTC
Post by Jeffrey Walton
I'm not sure how much help the FSF will be.
The GNU project can be of some help, sure.
Post by Jeffrey Walton
Forgive my ignorance, but
are FSF and GNU equivalent?
Not exactly. The FSF sponsors the GNU project, but it also
does other things. The GNU project is its biggest project,
though. See <http://www.fsf.org/about/>.
Sorry, I'm not on that mailing list and didn't see it.
Please see <http://www.gnu.org/software/security/>
for how to report security issues that need to be escalated.
Post by Jeffrey Walton
I was hoping Autoconf (or other friends, such as Automake) could be part of
the solution.
Yes, I think this is a reasonable idea for improving the
robustness of GNU and GNU-using software.
Bob Friesenhahn
2012-12-20 03:26:42 UTC
Post by Jeffrey Walton
Post by Bob Friesenhahn
Surely it is better to leave this to OS distribution maintainers who
establish common rules for OS packages and ensure that options are applied
in a uniform and consistent manner.
I think your arguments make a lot of sense and I would like to agree with you.
Unfortunately, the folks at Red Hat provided a "proof by counter
example" with the recent MySQL 0-days
(http://www.h-online.com/security/news/item/MariaDB-fixes-zero-day-vulnerability-in-MySQL-1761451.html).
I would have expected Red Hat security folks to be on top of it,
especially with a high-risk application such as a database that
accepts input from the network (some hand waving since PHP is likely
in front of it).
I don't know anything about this vulnerability but your conclusion
does not quite make sense. Software is evaluated for vulnerability at
the source code level without consideration for the fortifications
which were suggested.

I am suggesting that OS distributions know how to best fortify their
systems and that fortification methods may vary with each OS release.
This does not mean that application bugs should not be corrected.

Most of the -z,blahblah options could be eliminated if the OS and
toolchain were to arrange to do useful security things by default.
They could do useful security things by default and flags could
disable safeguards for rare code which needs to intentionally do the
things guarded against.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Russ Allbery
2012-12-20 04:15:14 UTC
Post by Bob Friesenhahn
Most of the -z,blahblah options could be eliminated if the OS and
toolchain were to arrange to do useful security things by default. They
could do useful security things by default and flags could disable
safeguards for rare code which needs to intentionally do the things
guarded against.
Ubuntu patches gcc to enable a bunch of these options. Debian discussed
doing the same and decided not to, since Debian really dislikes diverging
from upstream on things that have that much public-facing visibility, and
instead built it into our packaging system.

I think having the toolchain do some of this automatically has been a hard
sell for understandable backwards-compatibility concerns, but that would
certainly be something that could be explored across multiple GNU
projects. Although one of the problems with making toolchain changes is
that the needs of embedded systems, which are heavy toolchain users, are
often quite different.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Mike Frysinger
2012-12-18 06:46:47 UTC
Post by David A. Wheeler
Did you realize that several GNU projects now enable virtually
every gcc warning that is available (even including those that
are new in the upcoming gcc-4.8, for folks that use bleeding edge gcc)
via gnulib's manywarnings.m4 configure-time tests?
Of course, there is a list of warnings that we do disable,
due to their typical lack of utility and the invasiveness
of changes required to suppress them.
Is there any way that the autoconf (or automake) folks could make compiler
warnings much, much easier to enable? Preferably enabled by default when
you start packaging something? For example, could gnulib warnings and
manywarnings be distributed and enabled as *part* of autoconf? If not,
could autoconf at least strongly advertise the existence of these, and
include specific instructions on how to quickly install them? The
autoconf section on "gnulib" never even *MENTIONS* the "warnings" and
"manywarnings" stuff! And while automake has warnings, they are for the
automake configuration file... not for compilation.
Compiler warning flags cost nearly nothing to turn on when you're
*starting* a project, but they're harder to enable later (a thousand
warnings about the same thing later is harder than fixing it the first
time). And while some warnings are nonsense, their use can make the
resulting software much, much better. If we got people to turn on warning
flags all over the place, during development, a lot of bugs would simply
disappear.
you might want to look at the autoconf-archive project:
http://www.gnu.org/software/autoconf-archive/

they provide AX_CFLAGS_WARN_ALL for starters, and then for more refined -W
flags, you can easily use AX_CHECK_COMPILE_FLAG.
-mike
Bob Friesenhahn
2012-12-18 18:42:56 UTC
Post by David A. Wheeler
Compiler warning flags cost nearly nothing to turn on when you're
*starting* a project, but they're harder to enable later (a thousand
warnings about the same thing later is harder than fixing it the
first time). And while some warnings are nonsense, their use can
make the resulting software much, much better. If we got people to
turn on warning flags all over the place, during development, a lot
of bugs would simply disappear.
What might actually happen is that a bunch of casts get added to the
code in order to quench the warnings. These casts cause new bugs when
the code is updated or they hide later attempts to find conversion
issues.

It is pretty common that the person trying to eliminate a warning does
not understand the code well enough to foresee the consequences of
their action, or is only interested in a quick fix.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Jeffrey Walton
2012-12-20 19:46:07 UTC
On Tue, Dec 18, 2012 at 12:28 AM, David A. Wheeler
Post by David A. Wheeler
Did you realize that several GNU projects now enable virtually
every gcc warning that is available (even including those that
are new in the upcoming gcc-4.8, for folks that use bleeding edge gcc)
via gnulib's manywarnings.m4 configure-time tests?
Of course, there is a list of warnings that we do disable,
due to their typical lack of utility and the invasiveness
of changes required to suppress them.
Is there any way that the autoconf (or automake) folks could make compiler warnings much, much easier to enable? Preferably enabled by default when you start packaging something? For example, could gnulib warnings and manywarnings be distributed and enabled as *part* of autoconf? If not, could autoconf at least strongly advertise the existence of these, and include specific instructions on how to quickly install them? The autoconf section on "gnulib" never even *MENTIONS* the "warnings" and "manywarnings" stuff! And while automake has warnings, they are for the automake configuration file... not for compilation.
Compiler warning flags cost nearly nothing to turn on when you're *starting* a project, but they're harder to enable later (a thousand warnings about the same thing later is harder than fixing it the first time). And while some warnings are nonsense, their use can make the resulting software much, much better. If we got people to turn on warning flags all over the place, during development, a lot of bugs would simply disappear.
To further muddy the water, there are also preprocessor macros that
affect security!

Debug configurations can/should have _DEBUG and DEBUG preprocessor
macros, while Release configurations should/must have _NDEBUG and
NDEBUG preprocessor macros. Posix only observes NDEBUG
(http://pubs.opengroup.org/onlinepubs/009604499/basedefs/assert.h.html).
The additional Debug and Release preprocessor macros help ensure the
'proper' or 'more complete' uptake of third party libraries (such as
SQLite and SQLCipher).

Other libraries also add additional macro dependencies. For example
Objective C Release configurations also need NS_BLOCK_ASSERTIONS=1
defined.

If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS
server comes to mind (confer: there are CVEs assigned for the errant
behavior, and it's happened more than once!
http://www.google.com/#q=isc+dns+assert+dos).
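
To make the mechanics concrete, a minimal sketch (hypothetical fragment) of the failure mode being described:

    #include <assert.h>
    #include <stddef.h>

    /* Built without NDEBUG, a short packet trips the assert() and the whole
     * process dies via abort() -- the self-inflicted DoS described above.
     * Built with -DNDEBUG, the assert() disappears, and the explicit check
     * is the only thing standing between bad input and the parser. */
    int parse_header(const unsigned char *pkt, size_t len)
    {
        assert(len >= 8);      /* development-time diagnostic */
        if (len < 8)
            return -1;         /* release path: fail the function, don't abort */
        /* ... parse the 8-byte header ... */
        (void)pkt;
        return 0;
    }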

So there you have it: all the elements of a secure toolchain. It
includes the preprocessor (macros), the compiler (warnings), and
linker (platform security integration). Many people don't realize all
the details that go into getting a project set up correctly, long
before the first line of code is ever written. And it applies to
Makefiles, Eclipse, Net Beans, Xcode, Visual Studio, et al. Its not
just limited to one tool or one platform.

Jeff
Bob Friesenhahn
2012-12-20 20:24:05 UTC
Post by Jeffrey Walton
If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS
The "falling victim to runtime assertions" is the same as falling
victim to a bug. It is not necessarily true that removing the
assertion is better than suffering from the unhandled bug. Once again
this is a program/situation-specific issue.

You keep repeating standard recipes which are not proper/best for all
software.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Jeffrey Walton
2012-12-20 21:32:30 UTC
Hi Bob,

On Thu, Dec 20, 2012 at 3:24 PM, Bob Friesenhahn
Post by Jeffrey Walton
If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS
The "falling victim to runtime assertions" is the same as falling victim to
a bug. It is not necessarily true that removing the assertion is better
than suffering from the unhandled bug. Once again this is a
program/situation-specific issue.
Well, I can't think of a situation where an abort or crash is
preferred over gracefully handling a failure that could be handled
with an exit. In this case, the program is already in a code path -
why not just fail the function rather than abort? But then again, I
don't think like many others (as you can probably tell). So I could be
missing something.

In the case of a bug with known security implications (a detectable
stack smash, for example), include a "last line" defense that ensures
the program does not proceed (such as an OS initiated termination).
There is a subtle difference: in one case the program is calling
abort(); while in the other the OS is calling abort().
You keep repeating standard recipes which are not proper/best for all
software.
I understand it's not "one size fits all," but I'm not proposing
anything revolutionary either.

All I ask is that a program properly handle its use cases (including
negative cases). The program should exhibit well-defined behavior (it's
an attribute or emergent property of being correct). Part of
exhibiting well-defined behavior is having an understanding of your
tools due to things like Debug/Release and NDEBUG.

Folks *have* to be responsible for their programs. They can't keep
passing the buck and hope someone else will take care of it. The
operating system or Distribution Maintainers should not have to do
these things for developers.

Jeff
Bob Friesenhahn
2012-12-20 23:13:16 UTC
Post by Jeffrey Walton
The "falling victim to runtime assertions" is the same as falling victim to
a bug. It is not necessarily true that removing the assertion is better
than suffering from the unhandled bug. Once again this is a
program/situation-specific issue.
Well, I can't think of a situation where an abort or crash is
preferred over gracefully handling a failure that could be handled
with an exit. In this case, the program is already in a code path -
why not just fail the function rather than abort? But then again, I
don't think like many others (as you can probably tell). So I could be
missing something.
Assertions are intended for detecting unexpected conditions.
External inputs to the program do not count as 'unexpected condition'
and so one should never write an assertion for external inputs. When
an unexpected condition occurs, the best thing to do is to dump core
so that it is possible to figure out how the impossible happened.

I agree with Russ Allbery that the primary reason to disable
assertions is to avoid the performance penalty. In properly-written
code (such as your own) these assertions should not be firing anyway.

In my own performance-tuned software which uses many assert
statements, I find the performance benefit from removing assertions to
be virtually unmeasurable.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Russ Allbery
2012-12-20 20:49:01 UTC
Post by Jeffrey Walton
If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS server
comes to mind (confer: there are CVEs assigned for the errant behavior,
and it's happened more than once!
http://www.google.com/#q=isc+dns+assert+dos).
It's very rare for it to be sane to continue after an assert(). That
would normally mean a serious coding error on the part of the person who
wrote the assert(). The whole point of assert() is to establish
invariants which, if violated, would result in undefined behavior.
Continuing after an assert() could well lead to an even worse security
problem, such as a remote system compromise.

The purpose of the -DNDEBUG compile-time option is not to achieve
additional security by preventing a DoS, but rather to gain additional
*performance* by removing all the checks done via assert(). If your goal
is to favor security over performance, you never want to use -DNDEBUG.
--
Russ Allbery (***@stanford.edu) <http://www.eyrie.org/~eagle/>
Jeffrey Walton
2012-12-20 21:32:34 UTC
Hi Russ,
Post by Russ Allbery
Post by Jeffrey Walton
If a project does not observe proper preprocessor macros for a
configuration, a project could fall victim to runtime assertions and
actually DoS itself after the assert calls abort(). The ISC's DNS server
comes to mind (confer: there are CVEs assigned for the errant behavior,
and it's happened more than once!
http://www.google.com/#q=isc+dns+assert+dos).
It's very rare for it to be sane to continue after an assert(). That
would normally mean a serious coding error on the part of the person who
wrote the assert(). The whole point of assert() is to establish
invariants which, if violated, would result in undefined behavior.
Continuing after an assert() could well lead to an even worse security
problem, such as a remote system compromise.
So, I somewhat disagree with you here. I think the differences are
philosophical because I could never find guidance from standards bodies
(such as Posix or IEEE) on rationales or goals behind NDEBUG and the
intention of the abort() behind an assert().

First, an observation: if all the use cases are accounted for (positive
and negative), code *lacking* NDEBUG will never fire an assert. The
default case of 'fail' is enough to ensure this. You would be
surprised (or maybe not) how many functions don't have the default
'fail' case. Any code that lacks NDEBUG because it depends upon
assert()'s abort() is defective by design. That includes the ISC's
DNS server and their assertion/abort scheme (critical infrastructure,
no less).

Under no circumstance is a program allowed to abort(). It processes as
expected or it fails gracefully. If it fails gracefully, it can exit()
if it likes. But it does not crash, and it does not abort().

Here's the philosophical difference (that will surely draw criticism):
asserts are a debug/diagnostic tool to aid in development. They have
no place in release code. I'll take it a step further: Posix asserts
are useless during development under a debugger because they eventually
lead to SIGTERM. A much better approach in practice is to SIGTRAP.

Code under my purview must (1) validate all parameters and (2) check
all return values. Not only must there be logic to fail the function
if anything goes wrong, *everything* must be asserted to signal the
point of first failure. In this respect, asserts create self-debugging
code.

I found developers did not like assert in debug configurations. They
did not like asserts because of SIGTERM, which meant the developers
did not fully assert. That caused the code to be non-compliant. The
root cause was they did not like eating the dogfood of their own bugs.
So I had to rewrite the asserts to use SIGTRAP, which made them very
happy (they could make a mental note and continue debugging). Code
improved dramatically after that - we were always aware of the first
point of failure, without the need for breakpoints and detailed
inspection unless they were needed.
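
A minimal sketch of the trap-style assert being described (hypothetical macro; the names are illustrative, and SIGTRAP semantics vary by platform and debugger):

    #include <signal.h>
    #include <stdio.h>

    /* Instead of assert()'s abort() (SIGABRT), raise SIGTRAP so a developer
     * running under a debugger stops at the first point of failure, notes it,
     * and simply continues.  Outside a debugger, SIGTRAP's default action
     * still terminates the process. */
    #ifndef NDEBUG
    #  define TRAP_ASSERT(cond)                                          \
          do {                                                           \
              if (!(cond)) {                                             \
                  fprintf(stderr, "assert failed: %s (%s:%d)\n",         \
                          #cond, __FILE__, __LINE__);                    \
                  raise(SIGTRAP);                                        \
              }                                                          \
          } while (0)
    #else
    #  define TRAP_ASSERT(cond) ((void)0)
    #endif

    /* Usage, following the "validate everything, then fail gracefully" pattern: */
    int set_name(char *dst, size_t dstsize, const char *src)
    {
        TRAP_ASSERT(dst != NULL && src != NULL && dstsize != 0);  /* flag the first failure */
        if (dst == NULL || src == NULL || dstsize == 0)
            return -1;                                            /* then fail the function */
        /* ... */
        return 0;
    }
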
Post by Russ Allbery
The purpose of the -DNDEBUG compile-time option is not to achieve
additional security by preventing a DoS, but rather to gain additional
*performance* by removing all the checks done via assert(). If your goal
is to favor security over performance, you never want to use -DNDEBUG.
Probably another philosophical difference: (1) code must be correct.
(2) code should be secure. (3) code can be efficient. NDEBUG just
removes the debugging/diagnostic aides, so it does help with (3). (1)
is achieved because there is a separate if/then/else that handles the
proper failure of a function in a release configuration.

I know many will disagree, but I will put my money where my mouth is:
I have code in the field (secure containers and secure channels) that
has never taken a bug report, or has taken less than a handful (fewer
than three). They were developed with the discipline described above, and they
include a complete suite of negative, multi-threaded self-tests that
ensure graceful failures. I don't care too much about the positive
test cases since I can hire a kid from a third world country for $10
or $15 US a day to copy/paste code that works under the 'good' cases.

Can anyone else claim to have a non-trivial code base that does not
suffer defects (with a reasonable but broad definition of defect)?

Anyway, sorry about the philosophicals. I know it does not lend much
to the thread.

Jeff
Paul Eggert
2012-12-20 23:49:27 UTC
Post by Jeffrey Walton
Posix asserts
are useless during development under a debugger because they eventually
lead to SIGTERM. A much better approach in practice is to SIGTRAP.
I didn't follow all that message, but this part doesn't appear
to be correct. In POSIX, when assert() fails it leads to SIGABRT.

More generally, I'd rather focus this mailing list's energy into
improving Autoconf rather than worrying too much about
philosophical considerations.
Jeffrey Walton
2013-02-26 01:09:02 UTC
On Tue, Dec 18, 2012 at 12:28 AM, David A. Wheeler
Post by David A. Wheeler
Did you realize that several GNU projects now enable virtually
every gcc warning that is available (even including those that
are new in the upcoming gcc-4.8, for folks that use bleeding edge gcc)
via gnulib's manywarnings.m4 configure-time tests?
Of course, there is a list of warnings that we do disable,
due to their typical lack of utility and the invasiveness
of changes required to suppress them.
Is there any way that the autoconf (or automake) folks could make compiler warnings much, much easier to enable? Preferably enabled by default when you start packaging something? For example, could gnulib warnings and manywarnings be distributed and enabled as *part* of autoconf? If not, could autoconf at least strongly advertise the existence of these, and include specific instructions on how to quickly install them? The autoconf section on "gnulib" never even *MENTIONS* the "warnings" and "manywarnings" stuff! And while automake has warnings, they are for the automake configuration file... not for compilation.
Compiler warning flags cost nearly nothing to turn on when you're *starting* a project, but they're harder to enable later (a thousand warnings about the same thing later is harder than fixing it the first time). And while some warnings are nonsense, their use can make the resulting software much, much better. If we got people to turn on warning flags all over the place, during development, a lot of bugs would simply disappear.
GCC 4.8 added a couple of interesting flags
(http://gcc.gnu.org/gcc-4.8/changes.html): -fsanitize=address and
-fsanitize=thread. Some reading about them is available at
http://llvm.org/devmtg/2012-11/Serebryany_TSan-MSan.pdf.
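
For reference, the sort of defect -fsanitize=address reports (hypothetical fragment; built with something like gcc -g -fsanitize=address):

    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(8 * sizeof *a);

        if (a == NULL)
            return 1;
        a[8] = 42;             /* one element past the end: reported as a heap-buffer-overflow */
        free(a);
        return 0;
    }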

It might be helpful to projects if the auto tools enabled one or both
by default. The overhead on Address Sanitizer looks small compared to
the payoff.

Also, the new -Og requires a debug level (-g2, -g3, etc).

Jeff
Eric Blake
2013-02-26 13:01:49 UTC
Post by Jeffrey Walton
GCC 4.8 added a couple of interesting flags
(http://gcc.gnu.org/gcc-4.8/changes.html): -fsanitize=address and
-fsanitize=thread. Some reading about them is available at
http://llvm.org/devmtg/2012-11/Serebryany_TSan-MSan.pdf.
It might be helpful to projects if the auto tools enabled one or both
by default. The overhead on Address Sanitizer looks small compared to
the payoff.
That paper said address sanitizer added 2x slowdown (20x under
valgrind), and would need hardware support to cut the slowdown to only
20%. It also said thread sanitizer added 20x to 300x slowdown. That
sounds like neither one should be enabled by default, but both are best
used during development. But thanks for pointing them out - they sound
interesting.
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
Jeffrey Walton
2013-02-26 13:53:17 UTC
Post by Eric Blake
Post by Jeffrey Walton
GCC 4.8 added a couple of interesting flags
(http://gcc.gnu.org/gcc-4.8/changes.html): -fsanitize=address and
-fsanitize=thread. Some reading about them is available at
http://llvm.org/devmtg/2012-11/Serebryany_TSan-MSan.pdf.
It might be helpful to projects if the auto tools enabled one or both
by default. The overhead on Address Sanitizer looks small compared to
the payoff.
That paper said address sanitizer added 2x slowdown (20x under
valgrind), and would need hardware support to cut the slowdown to only
20%. It also said thread sanitizer added 20x to 300x slowdown. That
sounds like neither one should be enabled by default, but are best used
during development. But thanks for pointing them out - they sound
interesting.
Yes, my bad. Address Sanitizer should be enabled for debug
configurations by default (along with other "program diagnostics" to
borrow from Posix).

Release configurations should leave the choice to the user.

Jeff
Bob Friesenhahn
2013-02-26 15:22:00 UTC
Post by Jeffrey Walton
Yes, my bad. Address Sanitizer should be enabled for debug
configurations by default (along with other "program diagnostics" to
borrow from Posix).
What is a "debug cofiguration"? The Autoconf default is to enable
debugging symbols with GCC (-g) so the default supports debugging.

I am curious if this ThreadSanitizer extension will work with the
normal build of GCC GOMP (for OpenMP) on GNU/Linux. Up to now, it has
been necessary for interested parties to build their own GCC in order
to build a libgomp which uses pthreads rather than Linux clone for
threading. This is because valgrind only supports threads created via
pthreads and pthread locking semantics.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Jeffrey Walton
2013-02-26 16:21:47 UTC
On Tue, Feb 26, 2013 at 10:22 AM, Bob Friesenhahn
Post by Jeffrey Walton
Yes, my bad. Address Sanitizer should be enabled for debug
configurations by default (along with other "program diagnostics" to
borrow from Posix).
What is a "debug configuration"?
Full debug instrumentation, including program diagnostics.
The Autoconf default is to enable debugging
symbols with GCC (-g) so the default supports debugging.
-g just gets you symbols. It's helpful, but not as good as a debug
build for debugging purposes during development.

Opposed to debug is the "release configuration." The release
configuration is used in production. The auto tools would define
NDEBUG (per Posix), and remove debugging aids and other diagnostics.

You also have a "test configuration." This configuration looks a lot
like a release build. The key difference is protected and private
stuff (for example, a C++ function or a Java method) are made public
for testing. In this build, you would run your positive and negative
suites to provide a heuristic validation.
I am curious if this ThreadSanitizer extension will work with the normal
build of GCC GOMP (for OpenMP) on GNU/Linux.
I was wondering about that myself. I don't build GCC from sources (it
was too frustrating), so I'll have to wait for it to show up in Fedora
for testing.

Jeff
Miles Bader
2013-02-27 01:53:37 UTC
Post by Jeffrey Walton
What is a "debug configuration"?
Full debug instrumentation, including program diagnostics.
Autoconf has no real notion of "Debug" and "Release" states.

The default compiler options, with gcc, result in a program that's
releasable _and_ reasonably (if not perfectly) debuggable. This has
both advantages (it does not suffer from the sort of "works only in
debug state" problems that often crop up when using specialized debug
builds, and it allows debugging installed system programs) and obvious
disadvantages (when compiler transformations make the program harder
to debug). Note that one of gcc's long term goals has been to allow
this sort of thing (and some recent changes, like better variable
tracking and improved dwarf support, should help it).

An individual application can of course offer more nuanced
configuration settings, if the defaults don't work well for it.

-miles
--
From the clear deep blue of a long spring day, into eternity