python3 grpc compiler: how to handle absolute and relative imports in .protos? - grpc-python

I'm trying to generate working Python modules from the containerd API .proto files found here: https://github.com/containerd/containerd/tree/master/api.
Unfortunately, containerd's own .proto files contain references such as (in api/events/container.proto):
import weak "github.com/containerd/containerd/protobuf/plugin/fieldpath.proto";
Now, this import is actually located in protobuf/plugin/fieldpath.proto, as opposed to (vendor/)github.com/containerd/.... A simple -I ... does not work in this context, as the import uses a "github"-absolute path, whereas the corresponding sources aren't located inside the vendor branch.
Simply copying the sources into vendor/github.com/... will cause runtime errors when trying to use the generated Python modules: two separate Python modules then try to register the same protocol elements, under the same protocol element names, with gRPC. The gRPC Python runtime therefore throws an error and terminates.
How can I correctly get this resolved when using python3 -m grpc_tools.protoc ...?

After some trial and error I've finally come up with a working solution, which might be helpful for others facing gRPC-based APIs that are more complex than many gRPC examples (a consolidated sketch follows these steps).
copy over the API .proto files into a directory structure that reflects the final desired Python package and module structure. For containerd, this means having everything within a containerd/ directory structure, avoiding github.com/ folders and aliasing (import aliasing will break things).
fix all import statement paths that would either cause module aliasing or won't fit the final desired package structure. A good way to do this is with sed while copying over the proto files. In my case, replace "github.com/containerd/containerd/..." import paths with just "containerd/...".
in case of vendored .proto files that belong to gRPC infrastructure and are already covered by PyPI packages, such as grpcio and protobuf, put them side by side with your API .proto files, but do not vendor them into the API directory hierarchy. Make sure to place them in a directory structure that mimics the package structure of the already available PyPI packages.
use protoc via the python3 interpreter to generate the Python modules only for your API .proto files; make sure that the supplemental .proto files from grpc and protobuf are includable, but do not create modules for them. protoc already does this correctly unless you vendor the supplemental .proto files into your API .proto files ... so don't do that.
make sure that your grpcio and protobuf PyPI packages are recent and roughly in sync; in particular, avoid totally outdated Debian distro packages and install from PyPI instead ... even if this is a painfully slow process on arm64 because there are no binary wheels for grpcio and grpcio-tools. Symptoms of not doing so include runtime errors about missing grpc or protobuf object fields.
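A minimal sketch of these steps, assuming the containerd sources are checked out in ./containerd-src and the rewritten .proto files and generated modules go to ./proto and ./gen (all paths are illustrative):
mkdir -p proto gen
# copy the API .proto files, rewriting the github.com-absolute import paths on the fly
find containerd-src/api containerd-src/protobuf -name '*.proto' | while read -r src; do
  dst="proto/containerd/${src#containerd-src/}"
  mkdir -p "$(dirname "$dst")"
  sed 's|github.com/containerd/containerd/|containerd/|g' "$src" > "$dst"
done
# generate Python modules only for the API .proto files; the supplemental
# gRPC/protobuf .proto files ship with grpcio-tools and are included automatically
python3 -m grpc_tools.protoc -I proto --python_out=gen --grpc_python_out=gen $(find proto -name '*.proto')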

Related

libsqlite3 issue: I am not able to use cl-sql

I am trying to use cl-sql for database access to sqlite3.
But I am getting the error
Couldn't load foreign libraries "libsqlite3", "sqlite3". (searched CLSQL-SYS:*FOREIGN-LIBRARY-SEARCH-PATHS*: (#P"/usr/lib/clsql/" #P"/usr/lib/"))
The same happens with sqlite.
I have installed sqlite3 using apt-get and there is a file libsqlite.so.0 in the /usr/lib directory.
I also tried to build sqlite3 from source, but I couldn't get the .so file. What am I doing wrong?
Your problem is that cl-sql has a third-party dependency. If you inspect the implementation of cl-sql (probably under "~/quicklisp/dists/quicklisp/software/clsql-202011220-git/db-sqlite3/sqlite3-loader.lisp") you will see that the function database-type-load-foreign is trying to load a library named either "libsqlite3" or "sqlite3".
Depending on your operating system this is either looking for a .dll or .so with exactly one of those names.
Given that the version of libsqlite.so has a different name on your particular distribution of Linux, you have a number of different options to make this library work.
Install a version of sqlite3 with the correct binary
Create a soft link to your binary that redirects via ln -s /usr/lib/libsqlite.so.0 /usr/lib/libsqlite3.so (assuming libsqlite.so.0 is the file that clsql is looking for)
Add new paths to CLSQL-SYS:*FOREIGN-LIBRARY-SEARCH-PATHS* to point to the correct binary if it is installed elsewhere (via clsql:push-library-path); see the sketch after this list.
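For example, a quick sketch of the second option from the shell (the exact file names depend on your distribution, so check what is installed first):
# see which sqlite3 shared object is actually installed
ls -l /usr/lib | grep -i sqlite
# create the library name clsql is looking for
sudo ln -s /usr/lib/libsqlite.so.0 /usr/lib/libsqlite3.so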

How to compile gRPC Proto files for Node?

The gRPC documentation describes how to use the protoc command line program to compile a *.proto file to every supported language except Node.
For Node, it only describes (at the time of this writing) how to dynamically load the .proto file and generate the JS code at runtime (behind the scenes).
Is it possible to use the protoc program to compile proto files to JS directly, similar to other languages?
I've found how to do static code generation for Node on this GitHub page.
Here's a copy of the example they provide:
npm install -g grpc-tools
grpc_tools_node_protoc --js_out=import_style=commonjs,binary:../node/static_codegen/ --grpc_out=../node/static_codegen --plugin=protoc-gen-grpc=`which grpc_tools_node_protoc_plugin` helloworld.proto
grpc_tools_node_protoc --js_out=import_style=commonjs,binary:../node/static_codegen/route_guide/ --grpc_out=../node/static_codegen/route_guide/ --plugin=protoc-gen-grpc=`which grpc_tools_node_protoc_plugin` route_guide.proto

How do I use PyInstaller to make packaged applications smaller

How do you make the generated, packaged applications smaller?
I type this in the Windows terminal:
PS C:\Users\Lenovo-pc> PyInstaller -w 123.py
But the package is huge; these are the main modules imported:
import time,threading,itertools
from tkinter import Label,Tk,Button,Text,END,PhotoImage,Entry,RAISED
from random import shuffle,choice
from datetime import datetime
from openpyxl import load_workbook
How do I make the resulting executable smaller?
Actually, you can't do much about that, because PyInstaller bundles every dependency alongside your application output. On the other hand, it sometimes brings unnecessary modules into your output, and those should be excluded.
For a small executable output you can do two things:
Always use a virtualenv to build your app. This keeps out unnecessary packages that you have installed in your main Python library, so they will be ignored in the current build.
According to this, using compression decreases the size of your output executable significantly.
UPX is a free utility available for most operating systems. UPX compresses executable files and libraries, making them smaller, sometimes much smaller. UPX is available for most operating systems and can compress a large number of executable file formats. See the UPX home page for downloads, and for the list of supported executable formats.
So bring up a virtualenv, install your external dependencies, then install UPX from here and pass the UPX directory via --upx-dir.
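A sketch of that workflow on Windows, assuming UPX has been unpacked to C:\upx and 123.py is the script from the question:
python -m venv venv
venv\Scripts\activate
pip install pyinstaller openpyxl
pyinstaller -w --onefile --upx-dir=C:\upx 123.py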

Is there any special functionality in R package "exec" or "tools" directories?

I'm trying to develop an R package that will include some previously compiled executable programs and their supporting libraries. (I know this is bad form, but it is for internal use).
My question: Do the special exec and tools directories have any special functionality within R?
The documentation seems to be sparse. Here is what I've figured out so far:
From here
files contained in exec are marked as executable on install
subdirectories in exec are ignored
exec is rarely used (my survey of CRAN says tools is just as rarely used)
tools is around for configuration purposes?
Do these directories offer anything that I couldn't get from creating an inst/programs directory?
[R-exts] has this to say:
Subdirectory exec could contain additional executable scripts the package needs, typically scripts for interpreters such as the shell, Perl, or Tcl. This mechanism is currently used only by a very few packages. NB: only files (and not directories) under exec are installed (and those with names starting with a dot are ignored), and they are all marked as executable (mode 755, moderated by ‘umask’) on POSIX platforms. Note too that this is not suitable for executable programs since some platforms (including Windows) support multiple architectures using the same installed package directory.
It's quite possible the last note won't apply to you if it's only for internal use.
Nevertheless, I'd suggest avoiding abusing any existing convention that might not apply precisely to your situation, and instead use inst/tools or inst/bin.
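For illustration, a hypothetical layout along those lines (the package and tool names are made up):
mypackage/
  DESCRIPTION
  NAMESPACE
  R/
  inst/
    bin/
      mytool    # installed to <library>/mypackage/bin/mytool
The installed binary can then be located at runtime with system.file(), e.g. from the shell:
Rscript -e 'system.file("bin", "mytool", package = "mypackage")'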
As far as I can tell, here is the functionality offered by the exec and tools directories.
exec
From R-exts by way of hadley:
Subdirectory exec could contain additional executable scripts the package needs, typically scripts for interpreters such as the shell, Perl, or Tcl. This mechanism is currently used only by a very few packages. NB: only files (and not directories) under exec are installed (and those with names starting with a dot are ignored), and they are all marked as executable (mode 755, moderated by ‘umask’) on POSIX platforms. Note too that this is not suitable for executable programs since some platforms (including Windows) support multiple architectures using the same installed package directory.
exec features I have figured out
On POSIX platforms (*nix, OS X), the files within exec will be marked as executable.
No subdirectories of exec are included in the package, only files in exec root
(note: it could contain binary executables, but there is no architecture/platform handling)
tools
From R-exts:
Subdirectory tools is the preferred place for auxiliary files needed during configuration, and also for sources needed to re-create scripts (e.g. M4 files for autoconf).
tools features I have figured out
tools is to hold files used at package compile time
All files contained are copied recursively into the source *.tar.gz package (including subdirs)
tools is not included in the final, compiled form of the package. All contents are dropped.

Compiling haskell module Network on win32/cygwin

I am trying to compile Network.HTTP (http://hackage.haskell.org/package/network) on win32/cygwin. However, it fails with the following message:
Setup.hs: Missing dependency on a foreign library:
* Missing (or bad) header file: HsNet.h
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
If the header file does exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
Unfortunately it does not give more clues. HsNet.h includes sys/uio.h, which actually should not be included and should be configured correctly.
Don't use cygwin; instead, follow Johan Tibell's way:
Installing MSYS
Install the latest Haskell Platform. Use the default settings.
Download version 1.0.11 of MSYS. You'll need the following files:
MSYS-1.0.11.exe
msysDTK-1.0.1.exe
msysCORE-1.0.11-bin.tar.gz
The files are all hosted on haskell.org as they're quite hard to find in the official MinGW/MSYS repo.
Run MSYS-1.0.11.exe followed by msysDTK-1.0.1.exe. The former asks you if you want to run a normalization step. You can skip that.
Unpack msysCORE-1.0.11-bin.tar.gz into C:\msys\1.0. Note that you can't do that using an MSYS shell, because you can't overwrite the files in use, so make a copy of C:\msys\1.0, unpack it there, and then rename the copy back to C:\msys\1.0.
Add C:\Program Files\Haskell Platform\VERSION\mingw\bin to your PATH. This is necessary if you ever want to build packages that use a configure script, like network, because configure scripts need access to a C compiler.
These steps are what Tibell uses to compile the network package for Windows, and I have used them successfully myself several times on most of the Haskell Platform releases.
It is possible to build network on win32/cygwin, and the above steps (by Jonke), though useful, may not be necessary.
While doing the configuration step, specify
runghc Setup.hs configure --configure-option="--build=mingw32"
This configures the library for mingw32; otherwise you will get link errors or "undefined references" when you try to link or use the network library.
This, combined with @Yogesh Sajanikar's answer, made it work for me (on win64/cygwin):
Make sure the gcc on your path is NOT the Mingw/Cygwin one, but the
C:\ghc\ghc-6.12.1\mingw\bin\gcc.exe
(Run
export PATH="/cygdrive/.../ghc-7.8.2/mingw/bin:$PATH"
before running cabal install network in the Cygwin shell)
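Putting both answers together, the full flow in a Cygwin shell looks roughly like this (the GHC path is an example; adjust it to your installation):
export PATH="/cygdrive/c/ghc/ghc-7.8.2/mingw/bin:$PATH"
cabal install network --configure-option="--build=mingw32"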
