We want to use TensorRT to accelerate our inference, but it is likely that TensorFlow Serving only supports .pb files.
I just installed OpenBSD 6.9 to study how it works.
I wanted to get the most minimal config possible, because I want to use it as a server.
During installation I chose the option to not install the X server, but I still have the /usr/X11R6 and /etc/X11 directories with X config and commands like startx. The only difference is that now startx doesn't work. I tried installing on VirtualBox and on bare metal, and both were the same.
What do I have to do in order to completely remove X from OpenBSD? And why is it still being installed on my machine even though I explicitly answer "no" when prompted during installation?
My system:
OpenBSD 6.9
Intel Pentium G5400
Nvidia 1050 Ti
The OpenBSD installation is split across different file sets (see "File Sets" in the OpenBSD FAQ).
The X11 installation is split into four file sets:
xbase71.tgz : Base libraries and utilities for X11 (requires xshare71.tgz)
xfont71.tgz : Fonts used by X11
xserv71.tgz : X11's X servers
xshare71.tgz : X11's man pages, locale settings and includes
During installation, you chose not to install xserv71.tgz (the X servers), but you still installed xbase71.tgz (which provides the startx command and the X directories).
If you want to completely remove X from OpenBSD, deselect every X file set during installation. That said, you should consider keeping xbase71.tgz, because some programs need it to run correctly even though they are not X programs.
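For illustration, the installer's set-selection prompt accepts glob patterns, so deselecting all X sets while keeping xbase looks roughly like this (a sketch of the installer dialog, not a verbatim transcript):
Set name(s)? (or 'abort' or 'done') [done] -x*
Set name(s)? (or 'abort' or 'done') [done] +xbase*
Set name(s)? (or 'abort' or 'done') [done] done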
I'm not an OpenBSD developer, so I cannot give a definitive answer. But some packages that you can add with OpenBSD's package command (pkg_add) need some X libraries or binaries.
For example, when you want to add vim for the first time, you can choose from eight flavors:
$ pkg_info -d vim-8.2.3456-no_x11
Information for inst:vim-8.2.3456-no_x11
[REMOVED]
Flavors:
gtk2 - build using the Gtk+2 toolkit
gtk3 - build using the Gtk+3 toolkit (default)
no_x11 - build without X11 support
lua - build with Lua support
perl - build with Perl support
python - build with Python support
python3 - build with Python3 support
ruby - build with Ruby support
It depends on which packages you need; the same applies when you want to install something from the ports collection.
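If you want such a package without its X11 dependencies, you can ask for the no_x11 flavor explicitly; for example:
$ pkg_add vim--no_x11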
You can try the quick and dirty way and simply remove the directories you mentioned. But it could be possible that some programs from the base system no longer work, because of missing dependencies.
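In shell terms, the quick and dirty way would be something like this, run as root and at your own risk, since nothing tracks those files for you:
# rm -rf /usr/X11R6 /etc/X11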
I want to create a cross-distribution RPM spec file for an application that bundles a certain library. The application uses this bundled library by default, but also provides a build-time option for using the version of the library installed in the system instead. I want the RPM to link the application to the system library if it is available (since this will make the package smaller, and because the system library gets security patches more frequently than the version bundled with the application) and if not, to fall back to the (default) bundled version. Is there any way of writing a conditional in the spec file that accomplishes this?
For example, I want to do something like
%if available(libfoo-devel >= 1.0)
BuildRequires: libfoo-devel >= 1.0
%endif
%prep
cat << EOF > build_config_file
%if available(libfoo-devel >= 1.0)
ac_add_options --with-system-foo
%endif
EOF
Right now I'm using conditionals that check for macros defined by particular versions of OSes that I already know package the correct library, but this is rather brittle and convoluted. (That is, I need to manually check each target distribution to see if it packages the correct version of the library, and then write a condition in the spec file for that distribution; moreover, I need to repeat this process periodically in case the correct version of the library becomes available for a distribution that didn't package it previously.) It would be simpler if I could just test for the availability of the dependency directly.
You must decide how the RPM is built before you create it: compilation happens when the package is built, not upon installation.
As I see it, you might consider creating two RPMs:
one compiled with libfoo-devel
one without
I would then suggest a spec file along these lines:
Name: package
...
%package libfoo
Summary: %{name} built with libfoo
BuildRequires: libfoo-devel >= 1.0
Requires: libfoo >= 1.0

%build
build --without-libfoo --target "${RPM_BUILD_ROOT}/usr/bin/without-libfoo"
build --with-libfoo --target "${RPM_BUILD_ROOT}/usr/bin/with-libfoo"

%files
/usr/bin/without-libfoo

%files libfoo
/usr/bin/with-libfoo
Notes:
yes, this means compiling twice: once with and once without libfoo
building this spec file will create two packages: "package.rpm" and "package-libfoo.rpm"
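If you would rather keep a single spec and pick the variant on the rpmbuild command line, the standard %bcond macros give you a supported conditional. A minimal sketch (not part of the original answer; libfoo-devel and build_config_file are taken from the question):
# Default to the bundled library; opt in to the system one with
# "rpmbuild --with system_foo ..." on distributions that package libfoo.
%bcond_with system_foo

%if %{with system_foo}
BuildRequires: libfoo-devel >= 1.0
%endif

%build
%if %{with system_foo}
echo "ac_add_options --with-system-foo" >> build_config_file
%endif
This still leaves the decision to whoever runs the build, but it removes the per-distribution version checks from the spec itself.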
I'm trying to generate working Python modules from the containerd API .proto files found here: https://github.com/containerd/containerd/tree/master/api.
Unfortunately, containerd's own .proto files contain references such as (in api/events/container.proto):
import weak "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
Now, this import is actually located in protobuf/plugin/fieldpath.proto, as opposed to (vendor/)github.com/containerd/.... A simple -I ... does not work in this context, as the import uses a "github"-absolute path, whereas the corresponding sources aren't located inside the vendor branch.
Simply copying the sources into vendor/github.com/... will cause runtime errors when trying to use the generated Python modules: there are then two separate instances of the same protocol elements trying to register with gRPC under the same protocol element name, but from two different Python modules. The gRPC Python runtime therefore throws an error and terminates.
How can I correctly get this resolved when using python3 -m grpc_tools.protoc ...?
After some trial and error I've finally come up with a working solution, which might be helpful for others facing gRPC-based APIs that are more complex than many gRPC examples; a command sketch follows the list.
Copy the API .proto files into a directory structure that reflects the final desired Python package and module structure. For containerd, this means having everything within a containerd/ directory structure, avoiding github.com/ folders and aliasing (import aliasing will break things).
Fix all import statement paths that would either cause module aliasing or won't fit the final desired package structure. A good way to do this is with sed while copying over the .proto files. In my case, replace the "github.com/containerd/containerd/..." import paths with just "containerd/...".
In case of vendored .proto files that belong to the gRPC infrastructure and for which PyPI packages exist, such as grpcio and protobuf, put them side by side with your API .proto files, but do not vendor them into the API directory hierarchy. Make sure to put them in a directory structure that mimics the package structure of the already available PyPI packages.
Use protoc via the python3 interpreter to generate the Python modules only for your API .proto files; make sure that the supplemental .proto files from grpc and protobuf are includable, but do not create modules for them. protoc already handles this correctly, unless you vendor the supplemental .proto files into your API .proto files ... so don't do that.
Make sure that your grpcio and protobuf PyPI packages are recent and in sync with each other. In particular, avoid totally outdated Debian distro packages; install from PyPI instead, even if this is a painfully slow process on arm64 because there are no binary wheels for grpcio and grpcio-tools. Symptoms of not doing so include runtime errors about missing grpc or protobuf object fields.
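To make the steps concrete, here is a rough sketch of the copy/sed/protoc pipeline. The ./containerd checkout, the out/ staging tree, and the gen/ output directory are illustrative names, not from the original answer (GNU sed assumed):
# Stage the API .proto files under a containerd/ package tree,
# rewriting the github.com-absolute imports on the way.
mkdir -p out/containerd gen
cp -r containerd/api out/containerd/api
cp -r containerd/protobuf out/containerd/protobuf
find out/containerd -name '*.proto' -exec sed -i \
    's|github.com/containerd/containerd/|containerd/|g' {} +

# Generate Python modules only for the staged API files; grpcio-tools
# ships the google/protobuf includes, so those resolve without being staged.
python3 -m grpc_tools.protoc -I out \
    --python_out=gen --grpc_python_out=gen \
    $(find out/containerd -name '*.proto')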
How do you make the applications generated by PyInstaller smaller?
I type this in the Windows terminal:
PS C:\Users\Lenovo-pc> PyInstaller -w 123.py
But the package is huge, even though mainly the following modules are imported:
import time,threading,itertools
from tkinter import Label,Tk,Button,Text,END,PhotoImage,Entry,RAISED
from random import shuffle,choice
from datetime import datetime
from openpyxl import load_workbook
How do I make the resulting executable smaller?
Actually, you can't do much about that, because PyInstaller bundles each dependency alongside your application output. On the other hand, it sometimes brings unnecessary modules into your output which should be excluded.
For a small executable output you can do two things:
Always use a virtualenv to build your app. This keeps out unnecessary packages that are installed in your main Python library, so they will be ignored in the current build.
According to this, using compression decreases your output executable significantly.
UPX is a free utility available for most operating systems. UPX compresses executable files and libraries, making them smaller, sometimes much smaller. UPX is available for most operating systems and can compress a large number of executable file formats. See the UPX home page for downloads, and for the list of supported executable formats.
So bring up a virtualenv, install your external dependencies, then install UPX from here and pass the UPX directory with --upx-dir; a minimal command sequence is sketched below.
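For illustration, assuming UPX was unpacked to C:\upx (an illustrative path), the whole sequence could look roughly like this:
PS C:\Users\Lenovo-pc> python -m venv venv
PS C:\Users\Lenovo-pc> .\venv\Scripts\Activate.ps1
PS C:\Users\Lenovo-pc> pip install openpyxl pyinstaller
PS C:\Users\Lenovo-pc> pyinstaller -w --onefile --upx-dir C:\upx 123.py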
I am trying to compile Network.HTTP (http://hackage.haskell.org/package/network) on win32/cygwin. However, it fails with the following message:
Setup.hs: Missing dependency on a foreign library:
* Missing (or bad) header file: HsNet.h
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
If the header file does exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
Unfortunately, it does not give more clues. HsNet.h includes sys/uio.h, which actually should not be included and should be handled correctly by the configure step.
Don't use Cygwin; instead, follow Johan Tibell's way:
Installing MSYS
Install the latest Haskell Platform. Use the default settings.
Download version 1.0.11 of MSYS. You'll need the following files:
MSYS-1.0.11.exe
msysDTK-1.0.1.exe
msysCORE-1.0.11-bin.tar.gz
The files are all hosted on haskell.org as they're quite hard to find in the official MinGW/MSYS repo.
Run MSYS-1.0.11.exe followed by msysDTK-1.0.1.exe. The former asks you if you want to run a normalization step. You can skip that.
Unpack msysCORE-1.0.11-bin.tar.gz into C:\msys\1.0. Note that you can't do that using an MSYS shell, because you can't overwrite the files in use, so make a copy of C:\msys\1.0, unpack it there, and then rename the copy back to C:\msys\1.0.
Add C:\Program Files\Haskell Platform\VERSION\mingw\bin to your PATH. This is necessary if you ever want to build packages that use a configure script, like network, as configure scripts need access to a C compiler.
These steps are what Tibell uses to compile the network package for Windows, and I have used them myself successfully several times on most of the Haskell Platform releases.
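With MSYS installed and the PATH set up, building the package is then the usual cabal routine (assuming the cabal-install shipped with the Platform):
cabal update
cabal install network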
It is possible to build network on win32/cygwin. The above steps (by Jonke), though useful, may not be necessary.
While doing the configuration step, specify
runghc Setup.hs configure --configure-option="--build=mingw32"
so that the library is configured for mingw32; otherwise you will get link errors or "undefined references" when you try to link against or use the network library.
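If you drive the build through cabal-install instead of running Setup.hs by hand, the same flag can be forwarded (assuming a cabal-install that passes --configure-option through, which current versions do):
cabal install network --configure-option="--build=mingw32"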
This, combined with @Yogesh Sajanikar's answer, made it work for me (on win64/cygwin):
Make sure the gcc on your PATH is NOT the MinGW/Cygwin one, but the
C:\ghc\ghc-6.12.1\mingw\bin\gcc.exe
(Run
export PATH="/cygdrive/.../ghc-7.8.2/mingw/bin:$PATH"
before running cabal install network in the Cygwin shell.)