I am trying to install a GitHub repository. On running the following command:
/home/user/.local/bin/python3 -m pip install -e /home/user/repository_folder_name
I get the following error:
ERROR: Command errored out with exit status 1:
command: /home/user/.local/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/user/PyMatching/setup.py'"'"'; __file__='"'"'/home/user/PyMatching/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
cwd: /home/user/PyMatching/
Complete output (54 lines):
running develop
running egg_info
writing src/PyMatching.egg-info/PKG-INFO
writing dependency_links to src/PyMatching.egg-info/dependency_links.txt
writing requirements to src/PyMatching.egg-info/requires.txt
writing top-level names to src/PyMatching.egg-info/top_level.txt
reading manifest file 'src/PyMatching.egg-info/SOURCES.txt'
writing manifest file 'src/PyMatching.egg-info/SOURCES.txt'
running build_ext
-- pybind11 v2.4.dev4
CMake Warning (dev) at lib/lemon/CMakeLists.txt:6 (PROJECT):
Policy CMP0048 is not set: project() command manages VERSION variables.
Run "cmake --help-policy CMP0048" for policy details. Use the cmake_policy
command to set the policy and suppress this warning.
The following variable(s) would be set to empty:
LEMON_VERSION
This warning is for project developers. Use -Wno-dev to suppress it.
-- Could NOT find ILOG (missing: ILOG_CPLEX_LIBRARY ILOG_CPLEX_INCLUDE_DIR)
-- Could NOT find COIN (missing: COIN_CBC_LIBRARY COIN_CBC_SOLVER_LIBRARY COIN_CGL_LIBRARY COIN_CLP_LIBRARY COIN_OSI_LIBRARY COIN_OSI_CBC_LIBRARY COIN_OSI_CLP_LIBRARY)
-- Could NOT find SOPLEX (missing: SOPLEX_LIBRARY SOPLEX_INCLUDE_DIR)
-- Configuring done
-- Generating done
-- Build files have been written to: /home/user/PyMatching
Error: could not load cache
There is a similar question here, but I have already tried commenting out #CMAKE_POLICY(SET CMP0048 OLD)
and made sure that I am using the latest version of CMake.
I have no idea how to find these development libraries or find alternatives. Please suggest how to resolve this. Also, this is an issue only on the Unix server where I am trying to run this; on my local Windows computer, the installation went smoothly.
I'm the developer of PyMatching and I've now added it to the Python package index (see here) so you can install the latest version from PyPI with pip:
pip install pymatching
Since pip will fetch the wheels, which have been built for various platforms, this should hopefully fix your issue, as you won't need to build it yourself.
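For example, installing from PyPI and checking that the module imports cleanly could look like this (using the same python3 path as in your command; upgrading pip first makes it more likely a pre-built wheel is picked up rather than a source build):
/home/user/.local/bin/python3 -m pip install --upgrade pip
/home/user/.local/bin/python3 -m pip install pymatching
/home/user/.local/bin/python3 -c "import pymatching; print(pymatching.__file__)"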
If you still need to build your own local copy of the PyMatching source code for some reason, just cloning the latest version of the repo might also solve your problem, as I've switched to a more recent version of the Lemon C++ library since you posted.
If this doesn't fix the problem, you could also create an issue on GitHub and/or let me know which version of your operating system you're using.
I am desperate and hope someone might be able to shed some light on this problem:
I am trying to install netcdf-fortran on Fedora 35 using Intel compilers. To do so, I first installed oneAPI from Intel in /opt/intel/oneapi. Then, I installed:
https://gmplib.org/download/gmp/gmp-6.2.0.tar.lz
https://www.mpfr.org/mpfr-current/mpfr-4.1.0.tar.gz
https://ftp.gnu.org/gnu/mpc/mpc-1.2.1.tar.gz
git clone https://gnu.googlesource.com/gcc
git checkout releases/gcc-10
source /opt/intel/oneapi/setvars.sh intel64
export PATH=/opt/intel/oneapi/compiler/2021.4.0/linux/bin:$PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/2021.4.0/linu/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=/opt/intel/oneapi/compiler/2021.4.0/lib/pkgconfig:$PKG_CONFIG_PATH
export C_INCLUDE_PATH=/opt/intel/oneapi/compiler/2021.4.0/linux/include:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=/opt/intel/oneapi/compiler/2021.4.0/linux/include:$CPLUS_INCLUDE_PATH
Then I set up the utilities directory where I am installing all these packages and exported it in the environment accordingly.
Then, I kept installing:
szip-2.1.1.tar
libjpeg-turbo-2.1.2
gzip-1.11
bzip2-1.0.8
libuuid-1.0.3
brotli
gperf-3.1
gettext-0.21
hdf-4.2.15
hdf5-1.13.0
netcdf-c-4.8.1
and up to that point everything compiles and works fine. Yet, when I tried to install
https://downloads.unidata.ucar.edu/netcdf-fortran/4.5.3/netcdf-fortran-4.5.3.tar.gz
Then it keeps failing again and again with the error:
make[3]: *** [Makefile:728: test-suite.log] Error 1
make[2]: *** [Makefile:836: check-TESTS] Error 2
make[2]: Leaving directory '/SOME/LOCAL/ADDRESS/netcdf-fortran-4.5.0/nf03_test'
make[1]: *** [Makefile:917: check-am] Error 2
make[1]: Leaving directory '/SOME/LOCAL/ADDRESS/netcdf-fortran-4.5.0/nf03_test'
I don't know what the problem is, but it is not a problem with the version: even if I install an older version, the error stays the same.
I tried to follow the instructions I found on how to install these libraries.
Can someone please give me some advice on how to do this?
My configure is as follows:
CC=icc FC=ifort F77=ifort CPP="icc -E" ./configure --prefix=$PRFX --with-sysroot=$PRFX --with-pic
I have defined:
PRFX=/SOME/LOCAL/ADDRESS/
Thank you in advance,
When you say "... and up to that point everything compiles and works fine", did you mean that you compiled all those packages with oneAPI, or did you just install them from the repository?
It is much more predictable to install netcdf-fortran when all the other dependencies are compiled with the same compiler. Otherwise you should look into cross-compilation for all required dependencies.
Did you compile hdf5 and netcdf-c (which are among the basic dependencies of netcdf-fortran) with oneAPI? If not, I recommend doing that first.
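As a rough sketch (reusing the $PRFX prefix from your configure line), building those two dependencies with the Intel compilers could look something like:
# hdf5, from its source directory (sketch; flags kept minimal)
CC=icc ./configure --prefix=$PRFX && make && make install
# netcdf-c, pointed at the hdf5 just installed under $PRFX
CC=icc CPPFLAGS="-I$PRFX/include" LDFLAGS="-L$PRFX/lib" ./configure --prefix=$PRFX && make && make install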
I realized that using a compiler other than GCC when compiling netcdf-fortran generates an error regarding the shared library. You should try compiling it with static libraries by adding "--disable-shared --enable-static" during the configuration stage (for detailed instructions, consult the "Building with static libraries" section of this link). Make sure you explicitly link the 'include' and 'lib' directories when you are using the netcdf-fortran library.
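For netcdf-fortran itself, a static build along those lines might look like this (again assuming netcdf-c and hdf5 are already installed under $PRFX):
CC=icc FC=ifort F77=ifort CPPFLAGS="-I$PRFX/include" LDFLAGS="-L$PRFX/lib" \
  ./configure --prefix=$PRFX --disable-shared --enable-static
make && make check && make install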
Background: I compile bitcoind on one system but run it on another. When I compiled bitcoind 0.19.1 some time back using the following method, I was able to run bitcoind and bitcoin-cli on the target system without issue. I think.
./autogen.sh
./configure --disable-wallet --disable-tests --disable-bench --disable-gui --enable-util-tx=no --prefix=$HOME/bitcoind/x64 --exec-prefix=$HOME/bitcoind/x64
make && make install
Today I compiled v0.20.0 using the same method. If I run ./bitcoind -version on the system I compiled the binary it runs fine, but if I take the binary to my target system I get the following error:
./bitcoind: error while loading shared libraries: libboost_filesystem.so.1.67.0: cannot open shared object file: No such file or directory
The binary seemed to be portable last time, and the pre-compiled binary I downloaded from the Bitcoin Core team runs fine.
Note that on the target system libboost-filesystem-dev and libboost-filesystem1.67-dev are not installed; this is likely the source of my error. That said, the pre-compiled binary from the Core team runs, so why doesn't mine?
Can someone help me understand if I did something wrong or if I need to add ./configure flags to make the binary more portable? Specifically, what did I likely do differently from the core developers that made my binary fail where theirs worked?
EDIT 1: Running ./configure --enable-static or ./configure LDFLAGS=-static does not result in a portable binary either.
Also note that installing the libboost-filesystem library with apt does fix the error.
Thanks to Andrew Chow for his helpful answer to this on the Bitcoin Stack Exchange. I needed to build the depends as per the depends documentation. Since I'm building for the same platform I'll be running on, I can run make in the depends directory with no arguments except -j2, which uses two cores. Change the number to however many cores you want to commit to the compile.
cd depends
make -j2
cd ..
./autogen.sh
./configure --prefix=$PWD/depends/x86_64-pc-linux-gnu
make -j2 && make install
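Before copying the binary over, you can also sanity-check that Boost is no longer a runtime dependency by listing the dynamic libraries it links against:
ldd src/bitcoind | grep -i boost   # should print nothing once Boost comes statically from depends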
So I'm using the new Bash on Ubuntu on Windows shell, and installed the clisp package to mess with Common Lisp. I get this error when I try clisp test.clisp:
/usr/lib/clisp-2.49/base/lisp.run: error while loading shared libraries: libavcall.so.0: cannot enable executable stack as shared object requires: Invalid argument
This is an entirely fresh install too. I looked in /usr/lib and found the libavcall.so.0 file, but I'm not sure what to do with it. How do I fix this issue?
This issue no longer exists with libffcall 2.0 or newer. It was fixed through this commit.
If you are still using libffcall 1.x: The FAQ (cited by user #cybevnm) explains most of it:
libavcall.so is flagged as requiring executable stack (property GNU_STACK has the value RWE), although it does not need an executable stack. This occurs because it was compiled from assembly-language source code.
You can remove this flag through a command such as sudo execstack -c /usr/lib/libavcall.so.0.
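You can also check whether the flag is set, before and after, with either of these:
execstack -q /usr/lib/libavcall.so.0                   # 'X' means an executable stack is requested, '-' means it is not
readelf -lW /usr/lib/libavcall.so.0 | grep GNU_STACK   # flags column shows RWE (executable) or RW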
Problem Definition
I'm attempting to adapt these rosjava installation instructions so that I can include rosjava on a target image built by the BitBake build system. I'm using the jethro branch of Poky.
Implementation Attempt: Build From .deb with package_deb.bbclass
According to the installation instructions, all that really needs to be done to install rosjava is the following:
sudo apt-get install ros-indigo-rosjava
Which works perfectly fine on my build machine. I figured that if I could just point to a .deb and use the Poky metadata class package_deb, it would do all the heavy lifting for me, so I produced the following simple recipe, adapted from this posting on the Yocto Project mailing list:
inherit package_deb
SRC_URI = "http://packages.ros.org/ros/ubuntu/pool/main/r/ros-indigo-rosjava/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb"
SRC_URI[md5sum] = "2020ccc8b4a67dd918a9a2c426eece0b"
SRC_URI[sha256sum] = "ab9493fabe1285b0d21aab031348d0d733d116b0b2470bae90025709b303b649"
The relevant part of the errors I get during the above recipe's do_unpack is:
| no entry data.tar.gz in archive
|
| gzip: stdin: unexpected end of file
| tar: This does not look like a tar archive
| tar: Exiting with failure status due to previous errors
| DEBUG: Python function base_do_unpack finished
| DEBUG: Python function do_unpack finished
The following command produces the output below:
$ ar t python-rosdistro_0.4.5-1_all.deb
debian-binary
control.tar.gz
data.tar.xz
You can see here that there's a data.tar.xz, not data.tar.gz. What can I do to remedy this error and install from this particular .deb?
I've included package_deb in my PACKAGE_CLASSES variable and package-management in my IMAGE_FEATURES. I've tried other methods of installation which have all failed; I thought this method in particular would be very useful to know how to implement.
Update - 3/22
I'm attempting to circumvent the problems with the method above by doing my installation through a ROOTFS_POSTPROCESS_COMMAND, which I've adapted from forum posts like this:
install_rosjava() {
${STAGING_BINDIR_NATIVE}/dpkg \
--root=${IMAGE_ROOTFS}/ \
--admindir=${IMAGE_ROOTFS}/var/lib/dpkg/ \
-L /var/cache/apt/archives/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb
}
ROOTFS_POSTPROCESS_COMMAND += " install_rosjava() ; "
However, this fails due to dpkg not being a command found within the ${STAGING_BINDIR_NATIVE} path. The Yocto Project Reference Manual states that:
STAGING_BINDIR_NATIVE Specifies the path to the /usr/bin subdirectory of the sysroot directory for the build host.
Taking a look inside this directory yields a lot of commands but not dpkg (The recipe depends on the dpkg package, and this command can be found in my target rootfs after the build is finished; I've also tried pointing to ${IMAGE_ROOTFS}/usr/bin/dpkg which yields the same results). From what I understand of the BitBake process, this command may be in another sysroot, but I must admit that this is where my understanding breaks down.
Can I adjust this method so that it works, or will I need to start from scratch on an installation from source?
Perhaps there's a different method entirely which I could consider?
If you really want to install their deb directly, then your rootfs postprocess is one solution. It doesn't work because depending on dpkg will build you a dpkg for the target, but you want a dpkg that will run on the host. Add a dependency on dpkg-native to your image.
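Roughly, in the image recipe it could look like this (sketch only; note I've used dpkg -i to actually install the package, rather than the -L from the attempt above, and the deb path is the one from your question):
DEPENDS += "dpkg-native"
install_rosjava() {
    ${STAGING_BINDIR_NATIVE}/dpkg \
        --root=${IMAGE_ROOTFS}/ \
        --admindir=${IMAGE_ROOTFS}/var/lib/dpkg/ \
        -i /var/cache/apt/archives/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb
}
# function name only (no parentheses) in the postprocess list
ROOTFS_POSTPROCESS_COMMAND += "install_rosjava; "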
Though personally I'd either inherit bin_package and extract the deb they provide, then re-package it as a standard package in OE, or ideally write a proper recipe to build rosjava and submit it to meta-ros (https://github.com/bmwcarit/meta-ros).
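A bin_package recipe would look very roughly like this (sketch only: LICENSE is a placeholder, and the fetcher still has to be able to unpack the deb, as discussed below):
SUMMARY = "ros-indigo-rosjava repackaged from the upstream Ubuntu deb (sketch)"
LICENSE = "CLOSED"
SRC_URI = "http://packages.ros.org/ros/ubuntu/pool/main/r/ros-indigo-rosjava/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb;subdir=${BP}"
SRC_URI[md5sum] = "2020ccc8b4a67dd918a9a2c426eece0b"
SRC_URI[sha256sum] = "ab9493fabe1285b0d21aab031348d0d733d116b0b2470bae90025709b303b649"
# bin_package skips configure/compile and installs the unpacked contents as-is
inherit bin_package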
package_deb is where the packaging machinery for deb packages is stored, it's not something you'd inherit in a recipe but should be listed in PACKAGE_CLASSES.
When you put a .deb in a SRC_URI the fetcher will try to unpack it so you can access the contents: the assumption is that you're going to repack the contents as a native Yocto recipe.
If that's what you want to do then first you'll need to fix the unpack logic (in bitbake/lib/bb/fetch2/__init__.py) to handle .debs with xz-compressed data. This is a bug in bitbake and a bug report and/or patch would be appreciated.
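For reference, unpacking a deb with an xz-compressed payload by hand looks roughly like this, which is the kind of case the unpack logic needs to handle:
ar x ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb data.tar.xz
mkdir -p contents && tar -xJf data.tar.xz -C contents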
The alternative would be to use their deb directly but I don't recommend that as it's likely the dependencies don't match. The best long-term solution would be to build it from source directly instead of attempting to use a package for another distro.
I'm trying to set up a PHP and Apache environment on an HP-UX server. While installing I'm using the usual commands "./configure, make, make install". When I try to install PCRE I get the following error.
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /home/ubuntu/softwares/m4-1.4.17/build-aux/missing aclocal-1.14 -I m4
/home/ubuntu/softwares/m4-1.4.17/build-aux/missing: line 81: aclocal-1.14: command not found
WARNING: 'aclocal-1.14' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
<http://www.gnu.org/software/automake>
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
<http://www.gnu.org/software/autoconf>
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
Makefile:1496: recipe for target 'aclocal.m4' failed
make: *** [aclocal.m4] Error 127
So I downloaded the latest versions of the "m4, autoconf and automake" sources and tried to install them using the usual make commands.
First I tried to install "automake"; it threw an error asking me to install "autoconf".
Then I tried to install autoconf; again it asked me to install "m4".
Then I tried to install "m4"; now it threw the same error listed above.
So it became a loop of the same set of errors, not letting me install anything.
Can anyone help me sort out these issues? Please consider that this is an HP-UX server, so don't recommend the famous Ubuntu "apt-get install" command or Red Hat specific commands.
First read William Pursell's comment to your post (above). If you still need to install the autotools ...
Check to see what, if any, autotools you may already have installed by typing: m4 --version and autoconf --version and automake --version.
You should use HP-UX's package manager. It's called Software Distributor (SD). http://en.wikipedia.org/wiki/Software_Distributor
HP-UX's FAQ 5.9 explains how to handle dependencies using depothelper. http://hpux.connect.org.uk/hppd/answers/5-9.html
Here is where you find the correct autotool packages (autoconf, automake, libtool) for HP-UX. Install these HP-UX packages using HP-UX's native package manager instead of compiling from source. http://hpux.connect.org.uk/hppd/packages.html
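For example, once depothelper is installed, pulling in a package together with its dependencies is roughly (package names here are just examples):
depothelper autoconf automake libtool
# or, for a depot file downloaded by hand from the site above:
swinstall -s /tmp/automake.depot \*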
I was facing the same problem with m4. In my case, the problem was that I was transferring all the source files to a server via scp.
When I tried to configure, make and make install through ssh, this kept happening. I believe something did not transfer the way it was supposed to.
The problem was solved by manually transferring the files through a USB drive.
It's not a perfect solution (it implies physical access to the server), but it works.
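If physical access isn't an option, one way to test whether the transfer really is the culprit is to compare checksums on both ends before building, e.g. with the POSIX cksum utility (the filename here is just an example):
cksum m4-1.4.17.tar.gz   # run on both the source machine and the server; the outputs should match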