Difference between `protoc` and `python -m grpc_tools.protoc`

To compile proto files for Python, I could
protoc -I=. --python_out=$DST_DIR some.proto
based on https://developers.google.com/protocol-buffers/docs/pythontutorial
or
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. some.proto
based on https://grpc.io/docs/languages/python/basics/#generating-client-and-server-code
I wonder what the difference is between protoc and python -m grpc_tools.protoc, and which one is recommended for generating the Python *_pb2.py[i] files?
BTW, it looks like protoc doesn't support --grpc_python_out.

protoc contains just the logic for protocol buffers. That is, it will generate serialization/deserialization code for many languages. It does not, however, generate code for stubs and servers by default; that is left up to separate RPC systems through a mechanism called protoc plugins.
Protoc plugins offer a simple interface by which an executable takes a description of a Protocol Buffer on stdin and outputs the corresponding generated code on stdout. Internal to Google, this system is used to generate code for Stubby. Externally, it is used to generate code for gRPC (or any other RPC system that wants to use protocol buffers).
Plugins get to register a command line flag for themselves to indicate where protoc should output the generated code. So, in your example above, --python_out indicates where the generated serialization/deserialization code should go, while --grpc_python_out is the flag registered by the gRPC Python code generator, indicating where Python stub and server code should be placed on the filesystem.
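To make the plugin contract concrete, here is a minimal sketch of a plugin in Python. The plugin name protoc-gen-echo and its output are made up for illustration; the stdin/stdout framing is the real contract:

#!/usr/bin/env python
# Hypothetical protoc plugin: protoc writes a serialized CodeGeneratorRequest
# to our stdin and reads a serialized CodeGeneratorResponse from our stdout.
import sys
from google.protobuf.compiler import plugin_pb2

request = plugin_pb2.CodeGeneratorRequest.FromString(sys.stdin.buffer.read())
response = plugin_pb2.CodeGeneratorResponse()
for proto_file in request.proto_file:
    generated = response.file.add()
    generated.name = proto_file.name.replace('.proto', '.echo.txt')
    generated.content = 'message types: ' + ', '.join(
        m.name for m in proto_file.message_type) + '\n'
sys.stdout.buffer.write(response.SerializeToString())

Saved as an executable echo_plugin.py, it could be wired in with protoc --plugin=protoc-gen-echo=./echo_plugin.py --echo_out=. some.proto, which is the same mechanism the gRPC Python plugin uses to provide --grpc_python_out.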
grpc_tools is a C extension bundling both protoc and the gRPC Python protoc plugin together so that the user doesn't have to deal with downloading protoc, downloading the gRPC Python code generator, and getting the necessary configuration set up to make them work together properly. However, in theory, you should be able to put all these pieces together to make them work just like grpc_tools (though I haven't tried).
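As a sketch, the bundled pieces can also be driven from Python instead of the command line (assuming the same some.proto as above):

from grpc_tools import protoc

# Equivalent to: python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. some.proto
protoc.main([
    'grpc_tools.protoc',  # argv[0]-style placeholder, ignored
    '-I.',
    '--python_out=.',
    '--grpc_python_out=.',
    'some.proto',
])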

Related

How do I see the Reach compiler's intermediate outputs?

I run my programs with reach run and it creates build/index.main.mjs, which contains compiled bytecode, but I want to see (for example) the intermediate solidity file that it generates before compiling it all the way into bytecode.
You can use reach compile to perform only the compilation step, and to access compiler-specific options such as the --intermediate-files flag, which will dump a bunch of intermediate files including build/index.main.sol.
reach compile --help has more instructions. Technically, the compiler is a program called reachc, but reach compile runs a Docker container containing the compiler for you, so read any usage message that says reachc as meaning reach compile.
See also the online docs for reach compile.

How to implement gRPC Gateway in a node backend

I'm trying to implement gRPC-Gateway in a Node.js server, following the installation guide.
The steps are written for Go, but I tried to do the same in Node.js.
I found this response, and also some packages like this one, so it is possible to implement it in Node.
After reading the installation guide, the main problem seems to be getting the binary files:
protoc-gen-grpc-gateway
protoc-gen-openapiv2
protoc-gen-go
protoc-gen-go-grpc
I have downloaded the first two binaries from here, so the problem is presumably the last two.
I assume go refers to Golang, so I should look for something like protoc-gen-node-grpc. I've seen this npm package, but I want to implement as much as possible myself; I don't want to depend on third parties.
At this point I have the first two binaries in my path, but not the last two.
Once the gRPC service is defined, the next step is to generate the gRPC stubs. I have this line:
RUN protoc -I=. mygrpc.proto --js_out=import_style=commonjs,binary:./my/folder/ --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./my/folder/
And this generates the files fine. I don't know whether I have to use both --js_out and --grpc-web_out to create the client and service files.
Then the next step is to generate the reverse proxy using protoc-gen-grpc-gateway.
I run (as the guide says):
protoc -I=./my/path/ myproto.proto \
  --grpc-gateway_out ./my/path/ \
  --grpc-gateway_opt logtostderr=true \
  --grpc-gateway_opt paths=source_relative \
  --grpc-gateway_opt generate_unbound_methods=true
And this generates a .go file: myproto.pb.gw.go.
Inside, the file says:
// Code generated by protoc-gen-grpc-gateway. DO NOT EDIT.
// source: myproto.proto
/*
Package myproto is a reverse proxy.
It translates gRPC into RESTful JSON APIs.
*/
So I assume the steps were done correctly, but how can I run this in my Node.js server?
I have a Node project using an Express API, and I want to use grpc-gateway instead of the Express API endpoints... but I only have a .go file.
My proto version is:
libprotoc 3.14.0
Thanks in advance.
As the linked issue comment says, grpc-gateway only generates Go code. Feel free to use Go: all you need to do is generate the code from the proto and add the service. You can reference my sample code in helloworlde/grpc-gateway. Also, buf is better than protoc for generating the code. If you want to write everything yourself, you can reference ReflectionCall.java, written in Java, which can call the server without generated code.

Pentaho Kettle Download File from URL

I want to download a file from a URL (e.g. http://www.webadress.com/service/servicedata?ID=xxxxxx).
I found the HTTP step for job executables, but it forces me to define a target file name instead of just accepting the filename the web download offers (e.g. ServiceData20200101.PDF).
The other problem is that it creates a file even when the web call doesn't actually supply one.
Is the REST Client or HTTP Client step in transformations able to download a file from a URL call and accept the file as-is?
The HTTP steps in Pentaho are somewhat limited. In similar use cases in the past, I've handled this with an external shell script that takes arguments and then calls wget or curl to save the result. Pentaho then picks up the file in the temp dir and processes it from there.
The Shell job step allows you to specify a script file and pass fields from the stream as arguments.
Note that if you paste shell commands directly into the step on the second tab, they will execute in the embedded shell, with older versions of curl and wget, and without your environment config and certificates/keys.
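The external script doesn't have to be shell, either. As a minimal sketch (the script name download.py and the single-URL argument are assumptions), a Python script called from the Shell job step could keep the filename the server offers and avoid creating a file when the call returns nothing:

import sys
import urllib.request

url = sys.argv[1]
with urllib.request.urlopen(url) as response:
    # Prefer the server-suggested name from Content-Disposition
    # (e.g. ServiceData20200101.PDF); fall back to the last URL segment.
    filename = response.headers.get_filename() or url.rsplit('/', 1)[-1]
    data = response.read()

if data:
    with open(filename, 'wb') as f:  # only create a file when content came back
        f.write(data)
else:
    sys.exit(1)  # let the job branch on the failure instead of writing an empty file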

How to encrypt a lua script and have it be able to run with a LuaJIT executor

I want to make a protected Lua script [for a game] that can be run via an external program. This means I don't want anyone else to see the source code. The external program is a Lua wrapper:
Seraph is a ROBLOX Lua script execution exploit. It uses a wrapper to emulate a real ROBLOX scripting environment. It can run scripts in an elevated level 7 thread, allowing you to access API functions and change properties that are normally not accessible. It also contains re-implementations of missing functions such as loadstring and GetObjects, and bypasses various security checks, such as the URL trust check in the HttpGet/HttpPost functions.
They recently implemented LuaJIT and I thought this might help. If it can only be run by LuaJIT wrappers that would be awesome!
--PS I know basic lua but can't write anything cool.
--PPS It needs to be able to have a password pulled from an online database
Since I'm not familiar with ROBLOX, I'm just going to focus on this part of your question:
This means I don't want anyone else to see the source code.
For this, you will want to use Lua's bytecode dumping facilities. When executing a script, Lua first compiles it to bytecode, then executes said bytecode in the VM. LuaJIT does the same thing, except that it has a completely different VM & execution model. The important thing is that LuaJIT runs the same Lua source and offers the same bytecode-dumping facilities (string.dump), though bytecode dumped by one VM will not load in the other, so compile and run with the same VM.
So, the best 'protection' you can have for your code is to compile it on your end, then send and execute only the compiled, binary version of it on the external machine.
Here's how you can do it. First, you use this code on your machine to produce a compiled binary that contains your game's bytecode:
local file = io.open('myGame.bin', 'wb')         -- open the output in binary mode
file:write(string.dump(loadfile('myGame.lua')))  -- compile the script and dump its bytecode
file:close()
You now have a compiled version of your code in 'myGame.bin'. This is essentially as 'safe' as you're going to get.
Now, on your remote environment where you want to run the game, you transfer 'myGame.bin' to it, and run the compiled binary like so:
local file = io.open('myGame.bin', 'rb')  -- read the compiled blob in binary mode
local bytecode = file:read('*all')
file:close()
loadstring(bytecode)()                    -- load the bytecode chunk and run it
That will effectively run whatever was in 'myGame.lua' to begin with.
Forget about passwords / encryption. Luke Park's comment was on point: when you don't want someone to have your source, you give them compiled code :)

How do I scp a file to a Unix host so that a file polling service won't see it before the copy is complete?

I am trying to transfer a file to a remote Unix server using scp. On that server, there is a service which polls the target directory to detect incoming files for processing. I would like to ensure that the polling service does not pick up new files before the copy is complete. Is there a way of doing that?
My file transfer process is a simple scp command embedded in a larger Java program. Ideally, a solution which did not involve changing the Java would be best (for reasons involving change control processes).
You can scp the file to a different directory (e.g. /tmp) and move the file via ssh after the transfer is complete. The other directory needs to be on the same partition as the final destination directory, otherwise there will be a copy operation and you'll face a similar problem. Another service on the destination machine can do this move operation (a sending-side variant is sketched after these suggestions).
You can copy the file as hidden (prefix the filename with .), then rename it once the copy is complete.
If you can modify the polling service, you can check for active scp processes and ignore files matching their arguments.
You can check for open files with lsof +d $directory and ignore them in the polling service.
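If you'd rather not add a service on the destination machine, the same upload-under-a-temporary-name-then-rename idea can live on the sending side. A minimal sketch using the paramiko library (host, user, and paths are hypothetical, and key-based auth is assumed):

import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.connect('remotehost', username='user')

sftp = ssh.open_sftp()
# Upload under a hidden temporary name the poller won't match...
sftp.put('report.csv', '/target/dir/.report.csv.part')
# ...then rename into place; a rename within one filesystem is atomic on POSIX.
sftp.rename('/target/dir/.report.csv.part', '/target/dir/report.csv')
sftp.close()
ssh.close()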
I suggest copying the file using rsync instead of scp. rsync already copies new files to temporary filenames, and has many other useful features for file synchronization as well.
$ rsync -a source/path/ remotehost:/target/path/
Of course, you can also copy file-by-file if that's your preference.
If rsync's temporary filenames are sufficient to avoid being picked up by your polling service, then you could simply replace your scp command with a shell script that acts as a wrapper for rsync, eliminating the need to change your Java program.
You would need to know the precise format that your Java program uses to call the scp command, to make sure that the options you feed to rsync do what you expect.
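For illustration only, such a wrapper might look like this Python sketch; it assumes the Java program passes plain source and destination arguments with no scp flags that need translating:

#!/usr/bin/env python
# Hypothetical scp stand-in that forwards to rsync. rsync uploads to a
# temporary ".name.XXXXXX" file and renames it into place on completion.
import subprocess
import sys

# rsync accepts the same "user@host:path" syntax as scp, so plain positional
# arguments pass straight through; any real scp flags (e.g. -P for the port)
# would need explicit translation before this is safe to deploy.
args = [arg for arg in sys.argv[1:] if not arg.startswith('-')]
sys.exit(subprocess.call(['rsync', '-a'] + args))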
You would also need to figure out how your Java program calls scp. If it does so by full pathname (i.e. /usr/bin/scp), then this solution might put other things at risk on your system that depend on scp (like you, for example, expecting scp to behave as it usually does instead of as a wrapper). Changing a package-installed binary like /usr/bin/scp may also "break" your package registration, making it difficult to install future security updates because a binary has changed to a shell script. And of course, there might be security implications to any change you make.
All in all, I suspect you're better off changing your Java program to make it do precisely what you want, even if that is to launch a shell script to handle aspects of automation that you want to be able to change in the future without modifying your Java.
Good luck!
