Discussion: obscure failures when RAM is low
Daniel Pocock
2015-05-30 18:15:45 UTC
I recently had a report from a user who was building one of my
projects, reSIProcate, on a machine with 512MB of RAM, and the builds
were failing without any clear error.

He looked more closely and found it was exactly like the situation
described here:


http://www.ahwkong.com/post/2013/12/24/failed-at-unified-cpp-2/


Can this be handled any better by autotools or is it a g++ problem
exclusively?

Can the configure script check for sufficient RAM and disk space before
the build?

Regards,

Daniel
Paul Eggert
2015-05-30 18:31:59 UTC
Post by Daniel Pocock
Can this be handled any better by autotools or is it a g++ problem
exclusively?
No, it's a problem that in principle affects every program. If you don't have
enough memory, your programs won't run.
Post by Daniel Pocock
Can the configure script check for sufficient RAM and disk space before
the build?
I'm afraid the short answer is "no". The only way, in general, to find out
whether you have enough memory to run a program is to run the program. I
suppose we could add a feature whereby a developer of a package could guess the
amount of resources needed to build it, but in practice such a feature
would be so poorly maintained that I expect it'd be more trouble than it's
worth. Just get a big-enough machine -- this is software development, after all.
Daniel Pocock
2015-05-30 19:12:30 UTC
Post by Paul Eggert
Post by Daniel Pocock
Can this be handled any better by autotools or is it a g++ problem
exclusively?
No, it's a problem that in principle affects every program. If you
don't have enough memory, your programs won't run.
Well, some programs give a clear error when they need more RAM, and the
user can then buy more RAM or change their virtual machine settings.

In this case, two things went wrong:

a) the g++ output was really obscure, with no suggestion that it was a RAM
problem

b) the output of every second g++ invocation is redirected to /dev/null
by libtool, so the error doesn't always appear at all.
Zack Weinberg
2015-05-31 01:23:37 UTC
"Internal compiler error: Killed (program cc1plus)" almost always
indicates that the compiler-proper ran the computer out of memory and
triggered the OOM killer. Perhaps you should file a bug on GCC asking
for the driver to mention this possibility when it detects that the
compiler-proper has received a SIGKILL.
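
In the meantime, a quick way to confirm that the OOM killer was involved
is to check the kernel log right after a failed build; on most Linux
systems something along these lines will show the relevant entries (the
exact message wording varies by kernel version):

  dmesg | grep -iE 'out of memory|oom-killer|killed process'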

Perhaps you should also file a bug on libtool asking for it to
suppress compiler output with -w instead of redirecting it to
/dev/null (at least, when it knows it's driving GCC). In the
alternative, perhaps you should develop some Automake extensions that
would *finally* allow us to take libtool out back and shoot it.

zw
Daniel Pocock
2015-06-01 09:17:35 UTC
Post by Zack Weinberg
"Internal compiler error: Killed (program cc1plus)" almost always
indicates that the compiler-proper ran the computer out of memory and
triggered the OOM killer. Perhaps you should file a bug on GCC asking
for the driver to mention this possibility when it detects that the
compiler-proper has received a SIGKILL.
Perhaps you should also file a bug on libtool asking for it to
suppress compiler output with -w instead of redirecting it to
/dev/null (at least, when it knows it's driving GCC). In the
alternative, perhaps you should develop some Automake extensions that
would *finally* allow us to take libtool out back and shoot it.
I agree the GCC error is not an autotools fault, and the libtool thing is
not something I can roll my sleeves up and fix right now.

That is why I was asking about a way for configure to do a basic sanity
test on available memory. Maybe my question wasn't clear enough. I
don't expect configure to magically know how much memory is needed, just
a simple test where the developer can suggest some fixed value (e.g.
1GB) and the configure script stops if there is less.

I also fully understand this is not a bulletproof solution; other
processes could still take memory after the build starts running, and the
build would still fail.
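
Just to illustrate the kind of check I have in mind (a rough, untested
sketch; it is Linux-only, and the 1GB threshold is simply an example of a
fixed value the developer would choose), something like this could sit in
configure.ac:

  dnl Rough sketch only: read MemTotal from /proc/meminfo (Linux) and
  dnl stop if it is below a developer-chosen threshold (here ~1GB).
  required_ram_kb=1048576
  AC_MSG_CHECKING([whether this machine has at least 1GB of RAM])
  mem_total_kb=`grep '^MemTotal:' /proc/meminfo 2>/dev/null | awk '{print $2}'`
  if test -z "$mem_total_kb"; then
    AC_MSG_RESULT([unknown])
  elif test "$mem_total_kb" -lt "$required_ram_kb"; then
    AC_MSG_RESULT([no])
    AC_MSG_ERROR([about 1GB of RAM is recommended to build this package])
  else
    AC_MSG_RESULT([yes])
  fi

On systems without /proc/meminfo the check simply reports "unknown" and
carries on, so it degrades to a no-op rather than blocking the build.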

Regards,

Daniel
Bob Friesenhahn
2015-06-01 13:54:00 UTC
Post by Daniel Pocock
I agree the GCC error is not an autotools fault and the libtool thing is
not something I can roll my sleeves up and fix right now
That is why I was asking about a way for configure to do a basic sanity
test on available memory. Maybe my question wasn't clear enough. I
don't expect configure to magically know how much memory is needed, just
a simple test where the developer can suggest some fixed value (e.g.
1GB) and the configure script stops if there is less.
Running out of memory while running the compiler is not a common
problem. I have not observed it for at least 17 years.
Post by Daniel Pocock
I also fully understand this is not a bulletproof solution, other
processes could still take memory after the build starts running and it
fails.
The notion of "memory" is a very complex topic. For example, a
sufficiently large swap partition might allow the compiler to succeed
(while taking more time). This is not something that autoconf can
reasonably test.

The amount of "over commit" on many GNU/Linux systems is often huge
yet they continue to work fine.

cat /proc/meminfo

cat /proc/vmstat
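
For instance, the figures most relevant to this discussion can be pulled
out directly on a typical Linux system (CommitLimit and Committed_AS are
the overcommit accounting fields):

  grep -E 'MemTotal|SwapTotal|CommitLimit|Committed_AS' /proc/meminfo

  # 0 = heuristic overcommit (the usual default), 1 = always, 2 = never
  cat /proc/sys/vm/overcommit_memory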

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/