Is it possible to write an Apache2 service that can pipe content to the client as it is being generated?
I would like to set up a simple HTTP service that triggers a build and immediately starts sending stdout (the gcc output) to the client while the compile is in progress. The goal is that a client can use e.g. curl to test a build:
curl http://myserver.com/testbuild -F "file=@mypkg.tar.gz"
And immediately get to see stdout from the build process on the server.
I think it would be possible somehow using a CGI script, but the trick is to get stdout immediately, bypassing the buffering.

If you do not really need HTTP as the transport protocol, why not use direct TCP streaming via netcat?
On the build server you run a script like:
#!/bin/bash
# Serve one build per connection. Note: -e is only available in
# netcat variants that support it (e.g. ncat or nc.traditional).
while true ; do
    nc -l -p 8080 -e /path/to/buildscript
done
and when any client connects via
nc <buildservername or ip> 8080
it gets the build stdout immediately.
My recommendation would be something different (using Jenkins as a CI server; I run this even on a Cubietruck), but for a quick and small solution it should be enough. If you need HTTP, you can even get it by adding the HTTP header in your build script.
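For instance, a minimal sketch of such a build script (make and stdbuf are assumptions here; any build command works): it prints a bare HTTP response header by hand and then streams the build output line-buffered, so the nc loop above effectively speaks HTTP:

#!/bin/bash
# Hypothetical buildscript for the nc loop above: emit a minimal
# HTTP/1.0 header, then let the body be delimited by connection close.
printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n'
# stdbuf forces line buffering so gcc output appears immediately
stdbuf -oL -eL make 2>&1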
I'm testing gRPC with .NET Core and looked for a GUI tool or anything that can help me test my endpoint the way I'd test a REST API.
I found a proxy tool, grpc-json-proxy, that can be used with Postman (I also found another GUI tool, grpcox).
Using either tool gives an error like the following when trying to connect to the endpoint:
unable to do request err=[Post http://localhost:5001/greet.Greeter/SayHello: dial tcp 127.0.0.1:5001: connect: connection refused]
Any idea what could be the issue?
Most importantly, are you confident the gRPC server is listening on localhost:50051? You may confirm this (on Linux) using:
GRPC="50051"
ss --tcp --listening --processes "sport = :${GRPC}"
NOTE: you may need to run ss with sudo to see the process name
Or more simply:
telnet localhost 50051
If you get Connected to... that's a good sign
Then, if you're using either of these tools through Docker, you'll need to ensure the container can access the host's 50051 port. To do this, run the container with --net=host. This will make the host's ports available to the container.
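For example (the image name here is a hypothetical placeholder):

docker run --rm --net=host grpc-json-proxy-image

On Linux this shares the host's network namespace with the container, so localhost:50051 inside the container is the host's port.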
I use grpcurl.
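For example, a couple of invocations (a sketch: -plaintext assumes the server is not using TLS, and the name field assumes the stock greeter proto from the .NET template):

# List services via server reflection, if the server has it enabled
grpcurl -plaintext localhost:5001 list
# Call the endpoint from the question
grpcurl -plaintext -d '{"name": "World"}' localhost:5001 greet.Greeter/SayHello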
I am looking to use asyncssh with Python 3.7 (asyncio).
Here is what I want to build:
A remote device would run a client that calls home to a centralized server. I want the server to be able to execute commands on the client using reverse SSH tunnels over that incoming connection. I cannot use forward SSH (regular SSH) because the client could be behind NAT and the server might not know the client's address. I prefer the client calling home and the server then managing the client.
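(For comparison, not asyncssh: the same call-home pattern with plain OpenSSH would look like the following, assuming the device runs an sshd of its own.)

# On the device: dial out and offer a reverse tunnel back to local sshd
ssh -N -R 2222:localhost:22 user@centralserver
# On the server: run a command on the device through that tunnel
ssh -p 2222 deviceuser@localhost ls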
The program for a POC should use Python 3 plus an async implementation of SSH. I see asyncssh as the only viable choice (please suggest if you have an alternative):
Client: connects to the server and accepts reverse SSH tunnels opened over the same outbound connection
Server: accepts the connection from the client and keeps the session open. The server then opens reverse SSH tunnels to the client. For example, the server program should open 3 reverse SSH tunnels on the incoming connection. Each of these tunnels would run one command: ['ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo']
The server program should print the received response for each of these commands (one should come back almost immediately, the others after 5 and 30 seconds).
I looked at the documentation, and I could not see examples of using multiple reverse ssh tunnels.
Does anyone have experience using this? Can you point me to examples?
The developer of asyncssh has provided an example:
As of now, this is in the develop branch. I have tested it and it does the job perfectly!
https://asyncssh.readthedocs.io/en/develop/#reverse-direction-example
[If you are checking this after a while, you might find it in master documentation.]
Docker noob here. I have set up a dev server with Docker containers and am able to run basic containers.
For example:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:3000 node bash
Works as expected. Since I have many small projects, I would like to bind/listen to an actual HTTP localhost path instead of a port. Something like this:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:80/node-test node bash
Is it possible? Thanks.
EDIT. Basically I want to type localhost/node-test instead of localhost:3000 in my browser window
It sounds like what you want is for your Docker container to respond to a URL like http://localhost/some/random/path by somehow specifying that path in Docker's -p (--publish) option.
The short answer to that is no, that is not possible. The reason is that a port is not related to a path in any way - an HTTP server listens on a port, and serves resources that are found at a path. Note that there are many different types of servers and all of them listen on some port, but many (most?) of them have no concept of a path at all. For example, consider an SMTP (mail transfer) server - it often listens on port 25, but what does a path mean to it? All it does is transfer mail from one server to another.
There are two ways to accomplish what you're trying to do:
write your application to respond to particular paths. For example, if you're using the Express framework in your node application, create a route for the path you want.
use a proxy server to accept requests on one path and relay them to a server that's listening on another port (see the sketch below).
Note that this has nothing to do with Docker - you'd be faced with the same two options if you were running your application on any server.
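As a minimal sketch of the proxy option, assuming nginx and the node-test container from above published on host port 3000 (file names and paths are illustrative):

# Requests to localhost/node-test/ get relayed to host port 3000.
cat > node-test.conf <<'EOF'
server {
    listen 80;
    location /node-test/ {
        proxy_pass http://localhost:3000/;
    }
}
EOF

# Run nginx with that config; --net=host (Linux only) lets the proxy
# reach the app on the host's localhost:3000.
docker run --name proxy -d --net=host \
    -v "$(pwd)/node-test.conf":/etc/nginx/conf.d/default.conf:ro nginx

After that, http://localhost/node-test/ serves what http://localhost:3000/ did.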
I am trying to authenticate the end user, and for that I need to query another server for validation. So I want to send an HTTP GET/POST request from the Kamailio server to another server using http_query or a similar method.
But when I use this function, http_query() [found at http://www.kamailio.org/docs/modules/4.0.x/modules/utils.html#idp25440], Kamailio fails to start. Though I am not sure of the real reason, it looks like the config file cannot find the function. Can you let me know which module or parameters need to be loaded in the config file so that the error can be resolved?
Or is there a better way to send a simple HTTP request and make a decision based on the reply in the kamailio.cfg file? And which modules need to be loaded to use those functions?
The utils module has to be loaded (loadmodule "utils.so" in kamailio.cfg).
You have to look inside the syslog file (depending on the OS, it can be /var/log/syslog or /var/log/messages) and look for error messages printed by Kamailio. They should reveal why it doesn't start.
If you don't manage to locate the syslog file for your OS, then redirect the debug messages printed by Kamailio to the terminal: either set log_stderror=yes in kamailio.cfg or start Kamailio with the -E command line parameter.
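A quick sketch of that, assuming a Debian-style install path:

# Run in the foreground (-DD) with log messages on stderr (-E),
# so module-loading errors show up directly in the terminal:
sudo kamailio -DD -E -f /etc/kamailio/kamailio.cfg

# Or search the system log for Kamailio's startup errors:
grep -i kamailio /var/log/syslog | tail -n 20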
If you don't see any error messages from Kamailio, then make sure the loadmodule line for the utils module isn't sitting inside a #!ifdef ... #!endif block that is not enabled.
I am running a process (process A) inside a Docker container that sends an HTTP request to a process (process B) running natively on the host. Process B receives the request and, after processing, sends a response, which process A never receives. I believe this could be because I am not exposing the port on which the reply is sent. My understanding is that this source port is randomly chosen, and I am not sure how I can expose it. Is there a way to overcome this issue?
My understanding is that this source port is randomly chosen, and I am not sure how I can expose it
That is only the case if your container runs with the -P option, which publishes every EXPOSE'd port on a random host port.
If you run with the -p host_port:container_port option (with container_port being a port EXPOSE'd in the Dockerfile of the image the container runs from), that will work.
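A minimal sketch (my-image and port 8080 are hypothetical stand-ins):

# Publish a specific container port on a fixed host port:
# host port 3000 -> container port 8080, EXPOSE'd in the Dockerfile
docker run -p 3000:8080 my-image

# -P instead publishes every EXPOSE'd port on a random high host port:
docker run -P my-image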