Discussion:
Help with static linking
Kip Warner
2013-05-31 05:30:29 UTC
Hey lists,

Sorry for posting on both autoconf and automake lists. I wasn't sure
which one would be more appropriate for this problem.

I know this has come up before, judging by the archives, but I cannot
figure out the best way to have my executable statically link against
certain dependencies. This is needed because it executes off of optical
media, I cannot always guarantee that the user's runtime environment
will have the needed dependencies, and shipping them shared would be a
maintenance nightmare.

The dynamic dependencies, according to objdump, are the following...

Dynamic Section:
NEEDED libgio-2.0.so.0
NEEDED libgobject-2.0.so.0
NEEDED libglib-2.0.so.0
NEEDED libzzip-0.so.13
NEEDED libpng12.so.0
NEEDED libstdc++.so.6
NEEDED libm.so.6
NEEDED libgcc_s.so.1
NEEDED libpthread.so.0
NEEDED libc.so.6

It's safe to assume libc, pthreads, the C++ runtime, etc., are
available, but the rest I'd like to statically link against. Actually,
I'd prefer to statically link against everything that I can. The ones I
know for certain I should be able to statically link against are at
least libzzip and libpng.

I know there are a number of different approaches to doing this, but
from the pieces scattered in various places, it was difficult to
determine the most reliable and recommended one. For instance, I've
tried 'myproduct_LDADD = $(LIBINTL) -static', but objdump still reports
all of the above dynamic dependencies, so maybe it's not doing what I
thought it was supposed to do.

This is my configure.ac:
<http://rod.gs/Jwo>

This is my Makefile.am:
<http://rod.gs/Lwo>

Any help appreciated.

Respectfully,
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Robert Boehne
2013-05-31 13:31:03 UTC
Statically linking libc is a recipe for disaster, so either read and understand why, or just take my word for it.

I don't quite understand why you think you need the rest linked statically, BUT the easiest way to do that would be to add LT_INIT to configure.ac to use Libtool, and add --static-libtool-libs to the target's LDFLAGS.

That will cause all of the Libtool libraries to be linked statically when possible.
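Something like the following, off the top of my head. This assumes the
program itself is linked through the libtool script, since the flag is
interpreted by libtool's link mode, and I believe the documented
spelling is single-dash (myproduct is just a placeholder for your real
target):

configure.ac:
LT_INIT

Makefile.am:
bin_PROGRAMS = myproduct
myproduct_LDFLAGS = -static-libtool-libs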

If you are only targeting Linux desktop systems, png, gobject, gio, and glib should already be there, and in most cases already in memory, so you will benefit from zero additional memory use for the code pages. This also goes for all the dependencies of these libraries. I'm not familiar with zzip, so if it isn't a Libtool library you will have to make sure it is linked like this:
-Wl,-static -lzzip -Wl,call_shared

I don't have a computer in front of me, so YMMV, you should man ld to make sure those flags are correct.

HTH,

Robert Boehne
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Diego Elio Pettenò
2013-05-31 13:57:16 UTC
Post by Robert Boehne
-Wl,-static -lzzip -Wl,call_shared
I don't have a computer in front of me, so YMMV, you should man ld to make
sure those flags are correct.
What you're thinking of is -Wl,-Bstatic and -Wl,-Bdynamic — for the GNU
linker at least, but this is not portable.
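If you do go down that road with GNU ld, it would look something like
this sketch in Makefile.am (myproduct and the library list are
placeholders; order matters, and you must switch back to -Bdynamic
afterwards so the remaining libraries stay shared):

myproduct_LDADD = -Wl,-Bstatic -lzzip -lpng -Wl,-Bdynamic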

Seriously, it sounds to me like something else is wrong; you should never
need to statically link stuff that way unless you're doing release
management of binary applications, in which case you have another set of
problems entirely.


Diego Elio Pettenò — Flameeyes
***@flameeyes.eu — http://blog.flameeyes.eu/
Kip Warner
2013-06-01 23:29:27 UTC
What you're thinking of is -Wl,-Bstatic and -Wl,-Bdynamic — for the GNU
linker at least, but this is not portable.
Seriously, it sounds to me like something else is wrong; you should never
need to statically link stuff that way unless you're doing release
management of binary applications, in which case you have another set of
problems entirely.
Hey Diego. Sorry, I'm confused. Are you referring to my need to
statically link against certain libraries, or Robert's suggested
approach?
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Kip Warner
2013-06-01 23:27:46 UTC
Post by Robert Boehne
Statically linking libc is a recipe for disaster, so either read and understand why, or just take my word for it.
I'm in agreement, and the standard libraries are something I'm fine with
not statically linking against, although it's not unusual for some games
to ship their own copies in a private prefix within their installation.
Usually when I see this, though, they're not statically linked, but
separate shared objects. Regardless, I understand what you are saying.
Post by Robert Boehne
I don't quite understand why you think you need the rest linked
statically,
Libraries like the following may not be present on the end user's system
already:

NEEDED libgio-2.0.so.0
NEEDED libgobject-2.0.so.0
NEEDED libzzip-0.so.13
NEEDED libpng12.so.0

These ones, especially the latter two, I'd prefer to statically link
with and there shouldn't be any harm in doing that.
Post by Robert Boehne
BUT the easiest way to do that would be to add LT_INIT to
configure.ac to use Libtool, and add --static-libtool-libs to the
target's LDFLAGS.
Ok, so I tried the following in my configure.ac,

...
LT_INIT
LDFLAGS="$LDFLAGS --static-libtool-libs"
...

The linker flag ended up making another check fail...

AC_CHECK_LIB([ocrad], [OCRAD_version], [],
    [AC_MSG_ERROR([GNU OCRAD library is required, but unavailable...])])

So what I ended up doing was moving the LDFLAGS= line to near the end of
the configure.ac just before AC_OUTPUT. It configures fine, but when I
build, I get the following during CXXLD build step...

g++-4.7.real: error: unrecognized command line option
'--static-libtool-libs'

Same result if I move the LDFLAGS= initialization into Makefile.am as
such,

myproduct_LDFLAGS = --static-libtool-libs
Post by Robert Boehne
That will cause all of the Libtool libraries to be linked statically when possible.
Just for my sake, when you say Libtool libraries, are you talking about
libraries that Libtool uses, or libraries that I mentioned above that I
am trying to statically link against?
Post by Robert Boehne
-Wl,-static -lzzip -Wl,call_shared
This is the configure.ac for zziplib. I don't really have much
meaningful experience with libtool, but based on my reading of this, I
think it is using it:

<http://sourceforge.net/p/zziplib/svn/HEAD/tree/trunk/zzip-0/configure.ac>
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Mike Frysinger
2013-06-02 03:14:21 UTC
Post by Kip Warner
Post by Robert Boehne
I don't quite understand why you think you need the rest linked
statically,
Libraries like the following may not be present on the end user's system
be aware that whatever version of glibc & gcc you use to build, the end user
cannot have a version older than that or it'll fail to start
Post by Kip Warner
NEEDED libgio-2.0.so.0
NEEDED libgobject-2.0.so.0
NEEDED libzzip-0.so.13
NEEDED libpng12.so.0
you could use -rpath,$ORIGIN and then ship those libs in the same dir (and if
people want to use the system copy, they can just rm the bundled ones). this
is frequently what games do that ship their own libs (or they have a hacky
shell script to set LD_LIBRARY_PATH).
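a rough sketch of that for Makefile.am (untested; the doubled-up quoting
is so neither make nor the shell eats $ORIGIN, and myproduct / lib/ are
just placeholders):

myproduct_LDFLAGS = -Wl,-rpath,'$$ORIGIN/lib'

then you drop the bundled .so files into a lib/ dir next to the binary
on the disc.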

i don't want to encourage what it is you're trying to do at all, buuuut the
lddtree.py tool might be helpful. on the downside, it'll bundle *all* shared
libs your app uses (including glibc ones). on the upside, it should make your
package dependent only upon the kernel version (whatever your glibc is
compiled to support minimally), the ABIs that your code is compiled for
([colloquially] 32bit/64bit/etc...), and it would automate the whole process
(so you don't have to manually copy files around yourself).

http://sources.gentoo.org/gentoo-projects/pax-utils/lddtree.py
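basic usage is just pointing it at the binary to dump the whole
dependency tree, e.g.:

./lddtree.py /path/to/your-binary

iirc it also grew an option to copy everything into a target dir for
bundling, but check its --help since i'm going from memory.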
-mike
Kip Warner
2013-06-02 05:10:36 UTC
Post by Mike Frysinger
be aware that whatever version of glibc & gcc you use to build, the end user
cannot have a version older than that or it'll fail to start
Do you mean in the case of dynamic linking? If so, that's awful. But
strange because I've seen many upstream projects release precompiled
binaries without that ever really being made into an issue. Or do you
mean in the case of static linking?
Post by Mike Frysinger
Post by Kip Warner
NEEDED libgio-2.0.so.0
NEEDED libgobject-2.0.so.0
NEEDED libzzip-0.so.13
NEEDED libpng12.so.0
you could use -rpath,$ORIGIN and then ship those libs in the same dir (and if
people want to use the system copy, they can just rm the bundled ones). this
is frequently what games do that ship their own libs (or they have a hacky
shell script to set LD_LIBRARY_PATH).
i don't want to encourage what it is you're trying to do at all, buuuut the
lddtree.py tool might be helpful.
Hey, I'm open to alternatives. If you have a better suggestion on how
to get an application with the requisite runtimes to run out of the box
off optical media via XDG autostart, without requiring a novice user to
do anything with their package manager, I'm all ears.
Post by Mike Frysinger
on the downside, it'll bundle *all* shared
libs your app uses (including glibc ones). on the upside, it should make your
package dependent only upon the kernel version (whatever your glibc is
compiled to support minimally), the ABIs that your code is compiled for
([colloquially] 32bit/64bit/etc...), and it would automate the whole process
(so you don't have to manually copy files around yourself).
http://sources.gentoo.org/gentoo-projects/pax-utils/lddtree.py
This looks pretty complicated to get up and running, and I wonder if
just statically linking against libpng and libzzip wouldn't be better?
I'm also missing the elftools python dependency the script needs, and
I'm having trouble tracking down what I need to do to get that working,
assuming the lddtree.py approach is the best way to handle this
scenario.
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Mike Frysinger
2013-06-02 07:06:55 UTC
Post by Kip Warner
Post by Mike Frysinger
be aware that whatever version of glibc & gcc you use to build, the end
user cannot have a version older than that or it'll fail to start
Do you mean in the case of dynamic linking? If so, that's awful. But
strange because I've seen many upstream projects release precompiled
binaries without that ever really being made into an issue. Or do you
mean in the case of static linking?
i mean dynamic linking. it's always been this way.

people who do binary releases often times find an old distro that works and
then upgrade packages as need be. then they keep that image around forever.

either that or they just do a build for the last two RHEL or Ubuntu releases
and say "compatible with these two distros and we don't care about the rest".
Post by Kip Warner
Hey, I'm open to alternatives. If you have a better suggestion on how
to get an application with the requisite runtimes to run out of the box
off optical media via XDG autostart, without requiring a novice user to
do anything with their package manager, I'm all ears.
do what everyone else does. release a .deb for Ubuntu that depends on the
right packages. or open source it :P. done.
-mike
Paul Smith
2013-06-02 14:45:06 UTC
I'm removing automake from this thread as I'm getting two copies of
every mail. Hope no one minds.
Post by Mike Frysinger
Post by Kip Warner
Post by Mike Frysinger
be aware that whatever version of glibc & gcc you use to build, the end
user cannot have a version older than that or it'll fail to start
Do you mean in the case of dynamic linking? If so, that's awful. But
strange because I've seen many upstream projects release precompiled
binaries without that ever really being made into an issue. Or do you
mean in the case of static linking?
i mean dynamic linking. it's always been this way.
This is because GCC has some of its internal functionality implemented
in libraries, which are linked dynamically by default. This is
especially true if your program is written in C++, because the STL is
provided with the compiler and is very compiler-specific. However, even
some low-level C functionality is implemented as a shared library.

If the runtime system has an older version of the compiler with older
versions of these libraries, you can run into trouble (again, C++ is the
biggest culprit: the C helper libraries are pretty stable, release to
release, in general).

This is easily solved, though. You just have to add -static-libgcc and,
if you use C++, -static-libstdc++ to the link line; then these helper
libraries are linked statically and it doesn't matter what version of
GCC is installed on the system.
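For example (the program and object names here are just placeholders):

g++ -o myapp main.o util.o -static-libgcc -static-libstdc++

With automake you would typically append the two flags to LDFLAGS or to
the program's _LDFLAGS rather than invoking the compiler by hand.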

These libraries have a special exception to the GPL, which allows them
to be statically linked in straightforward situations without the result
being GPL-licensed. See the license for details.
Post by Mike Frysinger
people who do binary releases often times find an old distro that works and
then upgrade packages as need be. then they keep that image around forever.
This is a different issue than the compiler version. The above solution
lets you use a newer _compiler_. This problem relates to the
_distribution_ (that is, the version of libc, etc.).

As Mike says, GNU/Linux distros generally guarantee that if you build
against version X of a system it will run without problems on any
version >=X (note here I'm talking mostly about basic system libraries:
the higher up into userspace you go the less reliable such statements
become). However, there is no guarantee about running on version <X,
and in fact that very often does not work. This is not really
surprising: you can't guarantee perfect compatibility, forward and
backward, for all time!

However, using the "old image" method is, IMO, not a good solution for
any larger-scale development. It's slow, difficult to manage, and
generally painful.

My recommendation for this situation is to instead create a "sysroot",
which is basically a directory structure containing the dev files for a
given distribution: header files and libraries (.so and .a). You don't
need any bin, man, etc. files. Pick a pretty old distribution (the
oldest that you want to support). The current-minus-one Debian or Red
Hat distros are good choices, generally, because they usually have such
old versions of everything that it's unusual to find another mainstream
distro with older versions.

Alternatively, if you prefer to distribute different packages for
different systems, you can create multiple sysroots: one for each
system. They're not that big and are easy to manage.

Then use GCC's --sysroot flag to point the compiler at the sysroot
directory structure rather than your system's headers and libraries.
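For example, something along these lines (the paths are made up, and
you'll also want pkg-config to resolve inside the sysroot rather than
against your build machine):

SYSROOT=$HOME/sysroots/debian-oldstable-amd64
export PKG_CONFIG_SYSROOT_DIR="$SYSROOT"
export PKG_CONFIG_LIBDIR="$SYSROOT/usr/lib/pkgconfig:$SYSROOT/usr/share/pkgconfig"
./configure CC="gcc --sysroot=$SYSROOT" CXX="g++ --sysroot=$SYSROOT"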

Now your build environment is portable and completely divorced from the
version of the underlying build system and the result will run on any
distribution which has the same or newer versions of libraries as your
sysroot.

It's a bit of effort to set up but it's a LOT nicer than dealing with
virtual machines with older releases loaded, or whatever.


Regarding autoconf: using the above setup you have two choices I can
see. First you can just go with it as-is, and hope the result works
well (it probably will). Second you can try to build your packages as
if you were cross-compiling, which is a little safer since autoconf
won't try to use your build system to infer things about your host
system, if it detects that they're different. However, not all packages
have perfect support for cross-compilation so it may be more work.
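A sketch of the second option, with illustrative triplets:

./configure --build=x86_64-pc-linux-gnu --host=i686-pc-linux-gnu \
    CC="gcc -m32 --sysroot=$SYSROOT" CXX="g++ -m32 --sysroot=$SYSROOT"

Keep in mind that any AC_RUN_IFELSE-style checks then need a
cross-compilation fallback, which is part of why not every package
configures cleanly this way.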
Kip Warner
2013-06-03 20:37:14 UTC
Post by Paul Smith
I'm removing automake from this thread as I'm getting two copies of
every mail. Hope no one minds.
No problem. I'll try to remember to do the same.
Post by Paul Smith
This is because GCC has some of its internal functionality implemented
in libraries, which are linked dynamically by default. This is
especially true if your program is written in C++, because the STL is
provided with the compiler and is very compiler-specific. However, even
some low-level C functionality is implemented as a shared library.
Understood. Good explanation.
Post by Paul Smith
If the runtime system has an older version of the compiler with older
versions of these libraries, you can run into trouble (again, C++ is the
biggest culprit: the C helper libraries are pretty stable, release to
release, in general).
This is easily solved, though. You just have to add -static-libgcc and,
if you use C++, -static-libstdc++ to the link line; then these helper
libraries are linked statically and it doesn't matter what version of
GCC is installed on the system.
That's a great solution. So what I've done in my configure.ac is the
following...

LDFLAGS="$LDFLAGS -static-libgcc -static-libstdc++"

What I should probably do is have that conditionally done based on some
flag to ./configure, but I'm not sure what would be the most appropriate
thing to call it. I know --enable-static is a convention with specific
semantics with libtool.
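Maybe something like this, with a made-up switch name so it doesn't
collide with libtool's --enable-static?

AC_ARG_ENABLE([static-runtimes],
    [AS_HELP_STRING([--enable-static-runtimes],
        [statically link the gcc/libstdc++ runtimes @<:@default: no@:>@])],
    [static_runtimes=${enableval}],
    [static_runtimes=no])

if test "x${static_runtimes}" = xyes; then
    LDFLAGS="$LDFLAGS -static-libgcc -static-libstdc++"
fi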

Another thing: I see that libstdc++.so.6 and libgcc_s.so.1 are removed
from its dependencies according to objdump, but I did note the addition
of a new one, ld-linux-x86-64.so.2. Do you think that will be a problem?
Post by Paul Smith
These libraries have a special exception to the GPL, which allows them
to be statically linked in straightforward situations without the result
being GPL-licensed. See the license for details.
Aye, I think I've got everyone a bit sketched out about my needs. Rest
assured, it's all GPL'd <http://rod.gs/A2m>.
Post by Paul Smith
However, using the "old image" method is, IMO, not a good solution for
any larger-scale development. It's slow, difficult to manage, and
generally painful.
Agreed.
Post by Paul Smith
My recommendation for this situation is to instead create a "sysroot",
which is basically a directory structure containing the dev files for a
given distribution: header files and libraries (.so and .a). You don't
need any bin, man, etc. files. Pick a pretty old distribution (the
oldest that you want to support). The current-minus-one Debian or Red
Hat distros are good choices, generally, because they usually have such
old versions of everything that it's unusual to find another mainstream
distro with older versions.
Alternatively, if you prefer to distribute different packages for
different systems, you can create multiple sysroots: one for each
system. They're not that big and are easy to manage.
Then use GCC's --sysroot flag to point the compiler at the sysroot
directory structure rather than your system's headers and libraries.
Now your build environment is portable and completely divorced from the
version of the underlying build system and the result will run on any
distribution which has the same or newer versions of libraries as your
sysroot.
It's a bit of effort to set up but it's a LOT nicer than dealing with
virtual machines with older releases loaded, or whatever.
I wonder if I could do this for GNU/Linux i386 and amd64 binaries, and a
w32 binary via a MinGW sysroot, if such a thing is possible.
Post by Paul Smith
Regarding autoconf: using the above setup you have two choices I can
see. First you can just go with it as-is, and hope the result works
well (it probably will). Second you can try to build your packages as
if you were cross-compiling, which is a little safer since autoconf
won't try to use your build system to infer things about your host
system, if it detects that they're different. However, not all packages
have perfect support for cross-compilation so it may be more work.
Thanks a lot Paul. I'll get back to you guys if I have issues.
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Paul Smith
2013-06-03 21:06:52 UTC
Post by Kip Warner
Another thing: I see that libstdc++.so.6 and libgcc_s.so.1 are removed
from its dependencies according to objdump, but I did note the addition
of a new one, ld-linux-x86-64.so.2. Do you think that will be a problem?
I can't think of any reason why adding those flags would cause this to
happen. Indeed, I can't think of any reason why ld-linux.so would NOT
be linked with your application before these flags were added. I would
have thought that any executable that had any dynamic libraries linked
at all would need this (it's the runtime linker).

Are you sure that library wasn't in the ldd output before? If so, were
you linking your executable statically (no shared libs) before?

Anyway, seeing that library doesn't bother me.
Post by Kip Warner
I wonder if I could do this for GNU/Linux i386 and amd64 binaries, and a
w32 binary via a MinGW sysroot, if such a thing is possible.
It is certainly possible to create a cross-compilation environment to
build Windows/mingw output on a Linux system; indeed it's becoming a
very popular way of handling Windows targets. Some google searching
will lead you to many examples and write-ups; I don't have anything
handy that I've tried and can attest to unfortunately.

However, it WON'T be as simple as you describe above. Just having a
different sysroot won't be enough: you'll have to actually create a
separate cross-compiler toolchain as well. That's because all the
Linux-based distributions use the same basic executable layout, calling
structure, etc. (ELF). So, the only real difference from a compiler
standpoint between Red Hat, Debian, etc. are the versions of the various
shared libraries that are installed.

Windows, on the other hand uses an entirely different layout for its
executables and libraries. It's far more than just some library version
differences: you need a whole different compiler.
Kip Warner
2013-06-04 05:30:33 UTC
Post by Paul Smith
I can't think of any reason why adding those flags would cause this to
happen. Indeed, I can't think of any reason why ld-linux.so would NOT
be linked with your application before these flags were added. I would
have thought that any executable that had any dynamic libraries linked
at all would need this (it's the runtime linker).
Who knows. I can't seem to explain it either. =P
Post by Paul Smith
Are you sure that library wasn't in the ldd output before? If so, were
you linking your executable statically (no shared libs) before?
This is what I see when I run ./configure --enable-dbus-interface &&
make && objdump -p ./viking-extractor

...
Dynamic Section:
NEEDED libgio-2.0.so.0
NEEDED libgobject-2.0.so.0
NEEDED libglib-2.0.so.0
NEEDED libpng12.so.0
NEEDED libzzip-0.so.13
NEEDED libstdc++.so.6
NEEDED libm.so.6
NEEDED libgcc_s.so.1
NEEDED libpthread.so.0
NEEDED libc.so.6
...

And with --enable-static appended to ./configure...

...
Dynamic Section:
NEEDED libgio-2.0.so.0
NEEDED libgobject-2.0.so.0
NEEDED libglib-2.0.so.0
NEEDED libm.so.6
NEEDED libpthread.so.0
NEEDED libc.so.6
NEEDED ld-linux-x86-64.so.2
...

I should add that I defined some logic for --enable-static in
configure.ac as such...

# Static compilation option...
AC_ARG_ENABLE([static],
    [AS_HELP_STRING([--enable-static],
        [enable static compilation @<:@default: no@:>@])],
    [static=${enableval}],
    [static=no])

...

# zziplib...

# Check for C header and library...
if test "$static" = yes; then
PKG_CHECK_MODULES_STATIC([libzzip], [zziplib], [have_zzip=yes], [have_zzip=no])
else
PKG_CHECK_MODULES([libzzip], [zziplib], [have_zzip=yes], [have_zzip=no])
fi
if test "x${have_zzip}" = xno; then
AC_MSG_ERROR([zziplib runtime library is required, but was not detected...])
fi

# Store the needed compiler flags for automake since this doesn't happen
# automatically like with AC_CHECK_LIB. We will take care of linker
# flags later...
CXXFLAGS="$CXXFLAGS $libzzip_CFLAGS"

# Portable network graphics...

# Check for C++ interface header...
AC_CHECK_HEADERS([png++/png.hpp], [have_png_cxx=yes], [have_png_cxx=no])
if test "x${have_png_cxx}" = xno; then
AC_MSG_ERROR([libpng++ headers are required, but were not detected...])
fi

# Check for C header and static library...
if test "$static" = yes; then
PKG_CHECK_MODULES_STATIC([libpng], [libpng], [have_png=yes], [have_png=no])
else
PKG_CHECK_MODULES([libpng], [libpng], [have_png=yes], [have_png=no])
fi
if test "x${have_png}" = xno; then
AC_MSG_ERROR([libpng runtime library is required, but was not detected...])
fi

# Store the needed compiler flags for automake since this doesn't happen
# automatically like with AC_CHECK_LIB. We will take care of linker
# flags later...
CXXFLAGS="$CXXFLAGS $libpng_CFLAGS"

...

# Set additional linker flags...

# If static compilation is enabled, update linker flags...
if test "$static" = yes; then

    # Statically link against libpng and libzzip...
    LIBS="$LIBS -Wl,-Bstatic $libpng_LIBS $libzzip_LIBS -Wl,-Bdynamic"

    # Statically link against GCC's runtimes and the standard C++ library...
    LDFLAGS="$LDFLAGS -static-libgcc -static-libstdc++"

# Otherwise insert vanilla linker flags...
else
    LIBS="$LIBS $libpng_LIBS $libzzip_LIBS"
fi

...


So that's the solution I've come up with so far and it seems to work,
but I'm open to better solutions.
Post by Paul Smith
Anyway, seeing that library doesn't bother me.
Post by Kip Warner
I wonder if I could do this for GNU/Linux i386 and amd64 binaries, and a
w32 binary via a MinGW sysroot, if such a thing is possible.
It is certainly possible to create a cross-compilation environment to
build Windows/mingw output on a Linux system; indeed it's becoming a
very popular way of handling Windows targets. Some google searching
will lead you to many examples and write-ups; I don't have anything
handy that I've tried and can attest to unfortunately.
However, it WON'T be as simple as you describe above. Just having a
different sysroot won't be enough: you'll have to actually create a
separate cross-compiler toolchain as well. That's because all the
Linux-based distributions use the same basic executable layout, calling
structure, etc. (ELF). So, the only real difference from a compiler
standpoint between Red Hat, Debian, etc. are the versions of the various
shared libraries that are installed.
Alright. Noted.
Post by Paul Smith
Windows, on the other hand uses an entirely different layout for its
executables and libraries. It's far more than just some library version
differences: you need a whole different compiler.
Thankfully it looks as though there are precompiled MinGW debs available
for my distribution (Ubuntu Raring / amd64) and hopefully I'll be able
to get them up and running.
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Kip Warner
2013-06-02 18:28:17 UTC
Post by Mike Frysinger
people who do binary releases often times find an old distro that works and
then upgrade packages as need be. then they keep that image around forever.
either that or they just do a build for the last two RHEL or Ubuntu releases
and say "compatible with these two distros and we don't care about the rest".
Tedious, but makes sense.
Post by Mike Frysinger
do what everyone else does. release a .deb for Ubuntu that depends on the
right packages. or open source it :P. done.
It's already GPL'd <http://rod.gs/A2m>. As for debs, I'd love to, but
for the reasons I mentioned, it's not possible. =)
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Dan Kegel
2013-05-31 14:37:27 UTC
The ones I know for certain I should be able to statically link against
are at least libzzip and libpng.
You have
PKG_CHECK_MODULES([libzzip], [zziplib], [have_zzip=yes], [have_zzip=no])
Have you seen
https://bugs.freedesktop.org/show_bug.cgi?id=19541
? Maybe try PKG_CHECK_MODULES_STATIC
or PKG_CONFIG="pkg-config --static"

Never heard of libzzip; I can see why you don't want to expect it to
be on the user's system already.

Is libpng a problem because its soname isn't the same everywhere yet?
http://www.linuxbase.org/navigator/browse/lib_single.php?cmd=list-by-name&Section=ABI&Lname=libpng12

- Dan
Kip Warner
2013-06-01 23:42:43 UTC
Post by Dan Kegel
You have
PKG_CHECK_MODULES([libzzip], [zziplib], [have_zzip=yes], [have_zzip=no])
Have you seen
https://bugs.freedesktop.org/show_bug.cgi?id=19541
? Maybe try PKG_CHECK_MODULES_STATIC
or PKG_CONFIG="pkg-config --static"
Hey Dan. I've actually already tried both approaches, but no luck. In
the case of at least zziplib, take a look at the following:

$ pkg-config --libs zziplib
-Wl,-Bsymbolic-functions -Wl,-z,relro -lzzip -lz

$ pkg-config --static --libs zziplib
-Wl,-Bsymbolic-functions -Wl,-z,relro -lzzip -lz

They both dump the same linker flags. Maybe I'm doing something wrong
here, or PKG_CHECK_MODULES_STATIC can't be any more useful than the
pkg-config metadata a package ships with. This is the contents of
zziplib.pc:

# generated by configure / remove this line to disable regeneration
prefix=/usr
exec_prefix=${prefix}
bindir=${exec_prefix}/bin
libdir=${exec_prefix}/lib
datarootdir=${prefix}/share
datadir=${prefix}/share
sysconfdir=/etc
includedir=${prefix}/include
package=zziplib
suffix=

Name: zziplib
Description: ZZipLib - libZ-based ZIP-access Library
Version: 0.13.56
Requires: zzip-zlib-config
Libs: -L${libdir} -Wl,-Bsymbolic-functions -Wl,-z,relro -lzzip
Cflags: -I${includedir} -D_FORTIFY_SOURCE=2

I don't see any .private tags. But based on what I can see, the package
seems to ship both a shared object runtime and a static library:

$ dpkg -L libzzip-dev
...
/usr/lib/libzzip.a
...
/usr/lib/libzzip.so
...
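As far as I can tell, pkg-config --static only adds something when the
.pc file splits its private dependencies out, e.g. something like this
(hypothetical):

Libs: -L${libdir} -lzzip
Libs.private: -lz

Since zziplib's .pc keeps everything public (Libs plus the
zzip-zlib-config Requires), --static has nothing extra to add.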
Post by Dan Kegel
Never heard of libzzip, I can see why you don't want to expect it to
be on the user's system already.
Yup.
Post by Dan Kegel
Is libpng a problem because its soname isn't the same everywhere yet?
http://www.linuxbase.org/navigator/browse/lib_single.php?cmd=list-by-name&Section=ABI&Lname=libpng12
I actually don't know about that issue. However, based on the LSB link,
the library sounds pretty ubiquitous, so I'll take your word for it. But
it would still be an issue if someone wants to compile my application
for w32/64, where the library will definitely not ship by default.
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Dan Kegel
2013-06-02 00:17:20 UTC
Post by Kip Warner
$ pkg-config --libs zziplib
-Wl,-Bsymbolic-functions -Wl,-z,relro -lzzip -lz
$ pkg-config --static --libs zziplib
-Wl,-Bsymbolic-functions -Wl,-z,relro -lzzip -lz
Aw, foo. I was under the misapprehension that --static would cause
pkg-config to reference the .a files. I've clearly been spending
too much time in cmake-land.

I don't suppose you've tried passing absolute paths to the .a files?
- Dan
Kip Warner
2013-06-02 00:19:34 UTC
Post by Dan Kegel
Aw, foo. I was under the misapprehension that --static would cause
pkg-config to reference the .a files. I've clearly been spending
too much time in cmake-land.
No worries ;)
Post by Dan Kegel
I don't suppose you've tried passing absolute paths to the .a files?
I haven't tried that. I'm sure that would probably work, but there's
probably a more elegant approach.
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com
Kip Warner
2013-06-02 00:31:59 UTC
Post by Dan Kegel
You have
PKG_CHECK_MODULES([libzzip], [zziplib], [have_zzip=yes], [have_zzip=no])
Have you seen
https://bugs.freedesktop.org/show_bug.cgi?id=19541
? Maybe try PKG_CHECK_MODULES_STATIC
By the way, outside of that upstream bug report, I can't find any more
information on PKG_CHECK_MODULES_STATIC. Apparently, per comment 7, it
was committed around May 19, but I don't see it in my distro's
/usr/share/aclocal/pkg.m4. Maybe my copy is just too old.
--
Kip Warner -- Software Engineer
OpenPGP encrypted/signed mail preferred
http://www.thevertigo.com