Build nginx using autoconf - nginx

Recently I have been reading the nginx source code, but I'm confused about how to build it with autoconf. I have tried my best to write a Makefile.am, but unfortunately I failed to write a correct one, so I can't generate a configure script. Does anybody know how to write a Makefile.am for it?

I know how to write a Makefile.am, but you don't need to.
As you know, the nginx source package is a GNU autotools
package.
You don't do the autotooling. The people who write nginx do that. When you
download the source package, the configure.ac, the Makefile.am(s) and other
autotools files are already there along with all the source code.
To build the package, all you have to do is run the configure script to
generate correct makefiles for your system, then run make. (This is why the
build system is called autotools.)
Source packages are distributed from http://nginx.org/download/. Assuming
you want nginx 1.10.2 (the stable release at this time), you simply do this
in a suitable working directory:
$ wget http://nginx.org/download/nginx-1.10.2.tar.gz
$ tar zxf nginx-1.10.2.tar.gz
$ cd nginx-1.10.2
$ ./configure
$ make
Then it's built in ./nginx-1.10.2. If you then want to install nginx in your system, continue:
$ sudo make install
Building any autotooled source package is essentially the same as this.
For full details and variations, do read Installing NGINX Open Source.
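By default this installs nginx under /usr/local/nginx. If you want it somewhere else, configure accepts a --prefix option; a minimal sketch (the path is just an example):
$ ./configure --prefix=$HOME/opt/nginx
$ make
$ make install
With a prefix you can write to, the install step no longer needs sudo.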

I wrote an email to nginx's author; he said the configure script was written by hand.

Related

How do I find the NetCDF root directory?

After installing CDO in a Manjaro distro, I got the following error:
cdo sinfo air_temperature.nc
cdo sinfo: Open failed on >air_temperature.nc<
Unsupported file type (library support not compiled in)
To create a CDO application with NetCDF support use: ./configure --with-netcdf=<NetCDF root directory> ...
NetCDF is installed and works with other applications (RNetCDF, QGIS, etc.). However, I can't find out which NetCDF root directory I should indicate in the configure instruction.
Could somebody help me?
Thanks.
As the question is written, you need to know the location of your NetCDF libs. This is a duplicate of this linked question. If you have NetCDF installed, then you should be able to use nf-config to find out where your libs are; try
nf-config --flibs
On most Debian-based flavours of Linux it can be installed with
sudo apt-get install libnetcdff-dev
but in your case, using an Arch-based system, you instead need
pacman -S netcdf-fortran-openmpi
An easier alternative is to bypass the manual install altogether by using conda; see this page for details.
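To tie this back to the original configure error: the NetCDF root directory is normally just the installation prefix, which nf-config can print. A hedged sketch, assuming your nf-config build supports --prefix (the path shown is example output and will vary by system):
$ nf-config --prefix
/usr
$ ./configure --with-netcdf=/usr ...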

Compiling Jaeger gRPC proto files with Python

I'm currently playing around with Jaeger Query and trying to access its content through the API, which uses gRPC. I'm not familiar with gRPC, but my understanding is that I need to use the Python gRPC compiler (grpcio_tools.protoc) on the relevant proto file to get useful Python definitions. What I'm trying to do is find out ways to access Jaeger Query by API, without the frontend UI.
Currently, I'm very stuck on compiling the proto files. Every time I try, I get dependency issues (Import "fileNameHere" was not found or has errors.). The Jaeger query.proto file contains import references to files outside the repo. Whilst I can find these and manually collect them, they also have dependencies. I get the impression that following through and collecting each of these one by one is not how this was intended to be done.
Am I doing something wrong here? The direct documentation from Jaeger is limited on this. Below is my basic terminal session, before including any manually found files (which themselves have dependencies I would have to track down).
$ python -m grpc_tools.protoc --grpc_python_out=. --python_out=. --proto_path=. query.proto
model.proto: File not found.
gogoproto/gogo.proto: File not found.
google/api/annotations.proto: File not found.
protoc-gen-swagger/options/annotations.proto: File not found.
query.proto:20:1: Import "model.proto" was not found or had errors.
query.proto:21:1: Import "gogoproto/gogo.proto" was not found or had errors.
query.proto:22:1: Import "google/api/annotations.proto" was not found or had errors.
query.proto:25:1: Import "protoc-gen-swagger/options/annotations.proto" was not found or had errors.
query.proto:61:12: "jaeger.api_v2.Span" is not defined.
query.proto:137:12: "jaeger.api_v2.DependencyLink" is not defined.
Thanks for any help.
A colleague of mine provided the answer... It was hidden in the Makefile, which hadn't worked for me since I don't use Golang (and it turned out to be more complex than just installing Golang and running it, but I digress...).
The following .sh script will do the trick. It assumes the query.proto file lives under model/proto/api_v2/ relative to the script's location (as it appears in the main Jaeger repo).
#!/usr/bin/env sh
set +x
# recreate the output directory used by --python_out below
rm -rf ./python_out 2> /dev/null
mkdir ./python_out
PROTO_INCLUDES="
-I model/proto \
-I idl/proto \
-I vendor/github.com/grpc-ecosystem/grpc-gateway \
-I vendor/github.com/gogo/googleapis \
-I vendor/github.com/gogo/protobuf/protobuf \
-I vendor/github.com/gogo/protobuf"
python -m grpc_tools.protoc ${PROTO_INCLUDES} --grpc_python_out=./python_out --python_out=./python_out model/proto/api_v2/query.proto
This will definitely generate the needed Python file, but it will still be missing dependencies.
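Note that the -I include paths in that script assume you are running it from a checkout of the main Jaeger repository with its submodules (and, in older releases, vendored Go dependencies) present; a hedged sketch of obtaining such a tree, since the layout differs between Jaeger versions:
git clone --recurse-submodules https://github.com/jaegertracing/jaeger
cd jaeger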
I did the following to get the Jaeger gRPC Python APIs:
git clone --recurse-submodules https://github.com/jaegertracing/jaeger-idl
cd jaeger-idl/
make proto
Use the files inside proto-gen-python/.
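To import the generated modules from your own code, the output directory has to be on Python's import path; a minimal sketch, assuming the Makefile left everything under proto-gen-python/ as above:
export PYTHONPATH="$PWD/proto-gen-python:$PYTHONPATH"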
Note:
While importing the generated code, if you face the error:
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
Do:
pip3 install --upgrade pip
pip3 install --upgrade protobuf

How do you create a fake install of a debian package for use in testing?

I have a package that previously targeted only RPM-based distros, for which I am now building .deb packages for Debian-based distros.
The aim is to simulate a test installation from user space that is isolated from the system you are building on. The build machine may be multi-user, and you do not want to require root access just to build the software. Many of our tests already simulate the installation directory structure. This is the next step up: simulating an actual installation using the packages that were built.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath $WSDIR/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir=$WSDIR/dpkg --root=$WSDIR/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" $WSDIR/install
But I would prefer to be closer to how a real package is installed.
Also I don't think this will run preinstall and postinstall scripts.
Similar questions have suggested using debootstrap to create a chroot environment, but this creates a completely new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not indicate clearly what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood) and copying the real one from /var/lib/dpkg, but I still get the same error message, so perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question as "How do you create a dpkg admin directory" is interesting question but the answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which I think is the problem you have been having here.
Also, using sudo together with --force-not-root does not make much sense? :) And it is definitely less confined than just using fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir fake
mkdir fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless --log=`pwd`/fake/dpkg.log --root=`pwd`/fake --instdir `pwd`/fake --admindir=`pwd`/fake/dpkg --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path.
(hence PATH=/sbin:/usr/sbin:$PATH)
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant. If used, --root must come before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre or post installation scripts (preinst,postinst) then --force-script-chrootless is required as these scripts are normally run via chroot() which gives operation not permitted when attempted under fakeroot.
For a quick test of trivial dependencies, you can install directly on the system using dpkg -i, then use dpkg -P and apt-get autoremove to purge the package and clean up the dependencies.
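A minimal sketch of that round trip (the package name and filename are placeholders; unlike the fakeroot approach above, this touches the real system and needs root):
sudo dpkg -i mypackage_1.0_amd64.deb
sudo dpkg -P mypackage
sudo apt-get autoremove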
Another more secure but slower solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html

Build m4, autoconf, automake, libtool on unix

I'm trying to set up a PHP and Apache environment on an HP-UX server. For installs I'm using the usual commands: ./configure, make, make install. When I tried to install PCRE, I got an error like the following.
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/ubuntu/softwares/m4-1.4.17/build-aux/missing aclocal-1.14 -I m4
/home/ubuntu/softwares/m4-1.4.17/build-aux/missing: line 81: aclocal-1.14: command not found
WARNING: 'aclocal-1.14' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
<http://www.gnu.org/software/automake>
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
<http://www.gnu.org/software/autoconf>
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
Makefile:1496: recipe for target 'aclocal.m4' failed
make: *** [aclocal.m4] Error 127
So I downloaded the latest versions of the m4, autoconf, and automake sources and tried to install them using the usual make commands.
First I tried to install automake; it threw an error asking me to install autoconf.
Then I tried to install autoconf; it asked me to install m4.
Then I tried to install m4; it threw the same error listed above.
So it became a loop of the same set of errors, never letting me install anything.
Can anyone help me sort out this issue? Please consider that this is an HP-UX server, so don't recommend the famous Ubuntu apt-get install command or Red Hat specific commands.
First read William Pursell's comment on your post (above). If you still need to install the autotools ...
Check what autotools, if any, you may already have installed by typing: m4 --version, autoconf --version, and automake --version.
You should use HP-UX's package manager. It's called Software Distributor (SD). http://en.wikipedia.org/wiki/Software_Distributor
HP-UX's FAQ 5.9 explains how to handle dependencies using depothelper. http://hpux.connect.org.uk/hppd/answers/5-9.html
Here is where you find the correct autotool packages (autoconf, automake, libtool) for HP-UX. Install these HP-UX packages using HP-UX's native package manager instead of compiling from source. http://hpux.connect.org.uk/hppd/packages.html
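Once you have downloaded a depot file from that archive, the install step with Software Distributor looks roughly like this; a hedged sketch run as root, where the depot filename is only illustrative (swinstall expects an absolute path after -s):
# swinstall -s /tmp/autoconf-2.69-hppa-11.31.depot autoconf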
I was facing the same problem with m4. In my case, the problem was that I was transferring all the source files to a server via scp.
When I tried to configure, make, and make install over ssh, this kept happening. I believe something did not transfer the way it was supposed to.
The problem was solved by manually transferring the files with a USB drive.
It's not a perfect solution (it implies physical access to the server), but it works.

libcrypto.so.10: cannot open shared object file: No such file or directory

I am trying to install the ODBC driver for Debian according to these instructions: https://blog.afoolishmanifesto.com/posts/install-and-configure-the-ms-odbc-driver-on-debian/
However trying to run:
sqlcmd -S localhost
I get the error
libcrypto.so.10: cannot open shared object file: No such file or directory
What could be the cause?
So far I have tried
1.
$ cd /usr/lib
$ sudo ln -s libssl.so.0.9.8 libssl.so.10
$ sudo ln -s libcrypto.so.0.9.8 libcrypto.so.10
2.
Adding /usr/local/lib64 to the /etc/ld.so.conf.d/doubango.conf file
3.
sudo apt-get update
sudo apt-get install libssl1.0.0 libssl-dev
cd /lib/x86_64-linux-gnu
sudo ln -s libssl.so.1.0.0 libssl.so.10
sudo ln -s libcrypto.so.1.0.0 libcrypto.so.10
4. sudo apt-get install libssl0.9.8:i386
None of these have helped.
As I'm quite familiar with Debian and programming, here is some advice:
If you have questions about setting up your system, ask on SuperUser and/or (if your question is specific to a Un*x flavour) on Unix & Linux.
When fiddling around with symlinks to shared libraries, you should have a thorough understanding of what you are doing. These files are named for a reason - and the reason is to protect you (the user of the system) from weird crashes caused by an application using a wrong/incompatible library.
A tutorial that tells you to do so should give a proper warning and explanation of what you are about to do.
So, why are these instructions in the tutorial you are following?
The application you are trying to run has been linked against libcrypto.so.
On the developer machine (the one used to produce the application binary), libcrypto.so was a symlink to libcrypto.so.10, but this is missing on Debian: maybe because the library has been removed (and replaced by a new and incompatible version), or because Debian uses a different naming scheme compared to the system that was used to compile the application.
If it is the former, then you cannot solve the issue by using symlinks.
You have to get the right library (or the application linked against the correct libraries).
If it is the latter, you may get away with symlinking the expected library name with the correct library files found on your system. (This is assuming that the only difference between the two systems is indeed the so-naming scheme).
So, how to do it?
First of all, you should find out which libraries your application was really linked against, and which of these libraries are missing:
$ ldd /path/to/my/app | grep -i "not found"
libfoo.so.10 => not found
Then find out whether you have a (hopefully compatible) library on your system. A good place to start is /usr/lib/, but some time ago Debian started moving libraries to /usr/lib/<host-triplet>, with <host-triplet> describing a target architecture. If your application was indeed built for the architecture you are running on (e.g. linux-amd64), you can get the string by running something like:
$ gcc -print-multiarch
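(On Debian amd64, for example, this prints x86_64-linux-gnu, which is the directory component used in the next step.)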
Imagine you discover that you have /usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0.
If you have good reason to believe that this can act as a replacement for libfoo.so.10, you can make the found library available to your application by means of a symlink, e.g.
# cd /usr/local/lib/
# ln -s /usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0 libfoo.so.10
Finally, you might need to refresh the dynamic linker's cache so it starts using the new library, by running ldconfig as root/superuser.
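For example, after creating the symlink (continuing the hypothetical libfoo naming used in this answer):
$ sudo ldconfig
$ ldd /path/to/my/app | grep libfoo
If the replacement is compatible, ldd should now resolve libfoo.so.10 instead of reporting "not found".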
