spawn-fcgi not working on Windows

I have built spawn-fcgi for Windows using Cygwin from the source code available here: https://github.com/lighttpd/spawn-fcgi
I'm trying to run MapServer using spawn-fcgi. The log file says spawn-fcgi: child spawned successfully: PID: 4920, but when I look for a process with that PID, there is none.
I also have a hello_world example to test spawn-fcgi, but it doesn't work either.
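One way to narrow this down (a sketch, not a verified fix) is to run spawn-fcgi with its -n flag, which keeps the FastCGI child in the foreground instead of forking, so a crashing child prints its error rather than silently disappearing. The hello_world.exe path here is a placeholder:

# -n: don't fork; stay in the foreground so startup errors are visible
# -a/-p: address and TCP port the FastCGI app will listen on
spawn-fcgi -n -a 127.0.0.1 -p 9000 -- ./hello_world.exe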

Related

Getting RStudio in a Docker environment in the command window as opposed to the browser?

This question is about using Docker to run RStudio scripts. The problem I'm having is that the person evaluating my results wants to type ./test.sh in the command window, have that run my R script, and have a CSV of my results written to the local directory.
Is it possible to get the rocker RStudio image to run in the command window, as opposed to having to log in through the browser to use RStudio? All the resources online seem to say something along the lines of "put -p 8002:8787 and -d in the docker command", but that makes it so you have to go to a local browser to actually run your R script.
I've found this code snippet works, but is there an alternative to using /bin/bash at the end so that the RStudio commands can just stay in the window?
$ docker run -e PASSWORD=MYPASSWORD -v "$(pwd):/data:ro" -v "$(pwd):/workdir" -it thatguy /bin/bash
Or, better yet, is there a way to put this docker run command in my Dockerfile so that the line runs automatically when I run docker build?
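For what it's worth, a docker run command can't go inside a Dockerfile, but the container's default startup command can. A minimal sketch, assuming the script is named test.R and the rocker/r-base image suffices (both are assumptions, not from the post):

# hypothetical Dockerfile: bake the script invocation into the image so that
# "docker run <image>" executes it in the terminal, with no browser involved
FROM rocker/r-base
WORKDIR /workdir
COPY test.R .
CMD ["Rscript", "test.R"]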

Docker containers do not work when run from the command line

I have been working on an ASP.NET application using Docker, and when I launch it through Visual Studio it works great! However, if I try to run anything from the command line (or PowerShell, or VS's CLI/PowerShell) it will run, but the container it generates refuses all connections.
I am on Windows 10 with Docker Desktop installed, trying to run an ubuntu:18.04 image (I've tried Alpine and ubuntu:16.04 as well).
Steps to reproduce:
- Create a default ASP.NET application in Visual Studio
- Add Docker support
- Run with 'Docker' selected
- Open a browser, navigate to localhost:[YourPort]
- Success! Works as intended.
Then, either using the same image or a downloaded one (I tried dockersamples/static-site to confirm it wasn't a problem with the specific project):
- Open CMD
- Run docker run -p [HostPort]:[ContainerPort] [SameImageVSUses:tag] on a different port
- See that docker ps shows both containers running next to each other
- Open a browser (Firefox), get an error:
The connection was reset
Update
I changed the ASP.NET app's Program class to use 0.0.0.0 instead of localhost. I believe this was necessary, but now I see:
Secure Connection Failed
PR_END_OF_FILE_ERROR
If I curl localhost:[MyPort], I get curl: (52) Empty reply from server.
/Update
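As an aside, the same binding can usually be achieved without touching the Program class by passing the URLs through the environment, as the run commands later in this thread do; a sketch with placeholder ports:

# bind Kestrel to all interfaces via ASPNETCORE_URLS instead of editing code
docker run -p [HostPort]:80 -e ASPNETCORE_URLS="http://+:80" [SameImageVSUses:tag]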
Well, maybe Visual Studio does more that I'm not aware of.
A little digging shows that yes, it throws in a ton of extra arguments! But using the copy/pasted command that Visual Studio runs gives me... the exact same error.
To clarify, the containers still run from the command line; I can ssh in or docker inspect them (in fact, the docker inspect output of the VS-started and CMD-started containers is identical apart from the network addresses they're bound to). I get no error messages at all while building and starting the container, so if some part of it is failing, it is doing so silently.
I'm relatively new to Docker, but I can't seem to find a fix for this, or even a reason behind it. What is Visual Studio doing that I'm not? I've tried everything I'm aware of; I even had to wipe my machine (unrelated), and the exact same thing happened when I got everything reinstalled. My gut tells me it's something on my machine, but then the VS-launched one should fail too, right?
I can't find anything that tells me to flip a magic switch if I'm running CLI stuff, and nothing I do to the Dockerfile or command arguments seems to work. I've never used VirtualBox or Docker Toolbox, so this shouldn't be a wonky configuration screwed up by an old program, because it works fine when launched from Visual Studio! Agh!
I hope that this is indeed a magic switch I haven't flipped, otherwise there is something very basic that I don't understand about what I'm working with.
If you are trying to run a recent VS template, you just need to follow these instructions:
Go to the API project directory:
cd ./src/YourApiDirectory
Build Command:
docker build -f ./Dockerfile --force-rm -t yourapiimage:dev ..
Run Command:
docker run -it --rm -e "ASPNETCORE_ENVIRONMENT=Development" -p 58817:80 --name yourapiname yourapiimage:dev
Please note that the "-it" flag in the last command runs your image in "interactive" mode. Also note that I am using only an HTTP connection via port 58817.
Thank you for the suggestions; it ended up being something rather frustrating. I think it was a combination of two problems:
(The following could be causing problems for others, but I was mistaken about it; it did not fix things for me.)
First and foremost, no amount of docker configuration tells your website to listen for anything inside the container. I believe the website wasn't listening for anything when I initially tried most fixes.
The real problem was that the launchSettings.json in the .csproj Properties folder apparently overrides arguments from the command line!
Remember how I said "...run it alongside the first..."? That means I was never running the website on the correct set of ports. Apparently, -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 is not enough to make the site listen on 443; you must also set the sslPort in launchSettings.json. Such is life, I suppose.
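For illustration, the relevant fragment lives in Properties/launchSettings.json and looks roughly like this (a sketch; the port values are placeholders):

{
  "iisSettings": {
    "iisExpress": {
      "applicationUrl": "http://localhost:58817",
      "sslPort": 443
    }
  }
}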
This is what finally worked
I ran docker-compose up in the solution directory. That's it. I didn't see a docker-compose.yml when I was looking in VS, but that's only because VS doesn't show solution-level items. I guess the thing VS was doing that I wasn't was running docker-compose instead of individual commands.
When you launch with the Docker profile in Visual Studio, which is driven by the docker-compose file, Visual Studio merges the different override files behind the scenes and performs extra tasks, one of which is attaching the remote debugger to the container.
To help, I've created a sample ASP.NET Core API via Visual Studio 2019, selecting .NET Core 3.0.
The following is the docker-compose command that VS2019 generated on my machine when I launched my API via VS2019.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" -f "C:\Users\myuser\source\repos\testwebcore\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose14364360289538262671 --no-ansi up -d --build --force-recreate --remove-orphans
I can get it to work directly in PowerShell by running the following command; here I am using the same settings as in the override file created by default by VS2019. You have to run this command from the parent folder, outside the project folder.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" up
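For reference, a stripped-down sketch of the kind of solution-level docker-compose.yml involved here (service name, paths, and ports are assumed from the commands in this answer, not copied from VS2019's actual output):

version: '3.4'
services:
  testwebcore:
    image: testwebcore
    build:
      context: .
      dockerfile: testwebcore/Dockerfile
    ports:
      - "8000:80"
      - "8001:443"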
If you want to build and run directly with the Dockerfile instead of docker-compose:
You can build with the following command; as before, run it from the folder above the project folder.
docker build -f testwebcore/Dockerfile -t testcore .
After building the image, you can run it with the command below, but first you need to create a certificate and pass a couple of environment variables to the run command. The details of this are mentioned in the following page, especially the section "Windows subsystem for Linux". I am running Linux containers on my Windows 10 laptop.
So you have to run the following command to generate the certificate:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p testpassword
So the complete run command, with the environment variables and the certificate generated above, is as follows:
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="testpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v c:\users\myuser\.aspnet\https:/https/ testcore:latest
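To sanity-check the running container from the host, something like this should get an HTTPS response (-k skips certificate validation since the dev certificate is self-signed; the endpoint path depends on your template and is only a guess):

curl -k https://localhost:8001/weatherforecast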

Running a JAR file in the background on a server after closing the PuTTY session

I have tried running a Spring Boot JAR file using PuTTY, but the problem is that after I closed the PuTTY session, the service stopped.
Then I tried starting the JAR file with the following command, and it works fine:
nohup java -jar /web/server.jar
You should avoid using nohup, as it will just disassociate your terminal from the process. Instead, use the following to run your process as a service.
sudo ln -s /path/to/your-spring-boot-app.jar /etc/init.d/your-spring-boot-app
This command creates a symbolic link to your JAR file, which you can then run as a service using sudo service your-spring-boot-app start. This writes the console log to /var/log/your-spring-boot-app.log. (Note that this generally requires the JAR to have been built as a fully executable Spring Boot JAR and to have the execute bit set.)
Moreover, you can configure spring-boot/application.properties to write console logs to a location you specify, using logging.path=path-to-your-log-directory or logging.file=path-to-your-log-file.txt. It may also be worth noting that logging.file takes priority over logging.path.
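A minimal application.properties sketch of those two settings (the paths are placeholders; set one or the other, since logging.file wins if both are present):

# write console output to a specific file (takes priority over logging.path)
logging.file=/var/log/your-spring-boot-app.log
# ...or point at a directory and let Spring Boot name the file spring.log
logging.path=/var/log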

Qt GUI application not starting automatically on startup in Ubuntu 14.04

I have two Qt applications: a non-GUI one called "App1" and a GUI one called "App2". I need to start "App1" on startup of an Ubuntu 14.04 machine.
"App1" runs a shell script called "myshfile.sh", and I start "App2" from that script with /opt/myprojectname/App2 &.
To do this, I made a .sh file called "myupstart.sh" containing /opt/myprojectname/App1 &, copied it to /etc/init.d/, and gave it +x permission so that "App1" starts on startup; see the sketch below.
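Reconstructed from the description above (paths as given in the question, not verified), the startup script would look roughly like this:

#!/bin/sh
# /etc/init.d/myupstart.sh -- start the non-GUI app in the background at boot
/opt/myprojectname/App1 &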
When I restart the machine, it runs "App1" (the Qt non-GUI app) automatically on startup, and "App1" runs "myshfile.sh" as expected. Up to this point everything works fine, but then the problem appears:
As mentioned above, "App1" runs "myshfile.sh", which starts "App2" via /opt/myprojectname/App2 &, but "App2" (the Qt GUI app) does not start.
When I simply run /opt/myprojectname/App1 from a terminal, everything works fine: it calls "myshfile.sh", and "myshfile.sh" also starts "App2".
So what I found is that when I run it manually from a terminal everything works, but via the /etc/init.d/myupstart.sh script only the Qt non-GUI application starts; the Qt GUI application does not start on startup.
Kindly suggest where I am going wrong.
Thanks.

How to get the Rserve daemon running as a worker dyno on Heroku

This question is an obscure problem - sorry for the length. I'm trying to deploy an app to Heroku. The app runs Rserve, a daemon for the R language used for running statistical reports. In principle this should be no more difficult than getting any daemon, such as memcached, to run on Heroku.
On Mac OS X I just start the daemon on the command line and forget about it; all works fine. I'm interfacing with Rserve from node.js, using https://github.com/albertosantini/node-rio (not a factor here, though).
But in deploying to Heroku, I'm not having much luck. I'm using a multipack of R and Node. Installation runs fine, all build steps exit okay, and R starts fine.
Now comes the job of starting the Rserve daemon on the worker dyno.
My Procfile looks like this:
web: node server.js
worker: R CMD Rserve --no-save
When I run it, I get the following error in the logs:
Rserv started in daemon mode.
heroku[worker.1]: State changed from starting to crashed
The Rserve() configuration docs are here: http://www.rforge.net/Rserve/doc.html. I am no expert at configuring it, but perhaps there is something in there that I should be doing for it to work in this environment?
An oddity is that you can run this without error from the Heroku run console, but (see below) it does not seem to actually be running when I try to access it from node.js:
heroku run R CMD Rserve
[Previously saved workspace restored]
Rserv started in daemon mode.
>
In node.js (heroku run node), I try testing it thus:
var rio = require('rio');
rio.evaluate("pi / 2 * 2");
which gives the error "Rserve call failed".
This leads me to think something is fundamentally wrong with what I am trying to do or how I am trying to do it.
Rserve runs as a daemon by default, so use a script to execute it so it runs "in process".
E.g.
# example R script for executing Rserve
require('Rserve')
# get the port from the environment (Heroku sets PORT as a string)
port <- as.integer(Sys.getenv('PORT'))
# run Rserve inside the current R process (blocking) rather than as a daemon
run.Rserve(debug = FALSE, port = port, args = NULL, config.file = "rserve.conf")
And then your Procfile will have an entry as follows:
rserve: R -f rserve.r --gui=none --no-save
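If you do point config.file at an rserve.conf as in the script above, a tiny hypothetical example of its contents (directive from the Rserve docs linked in the question; whether you need it depends on your networking setup):

# allow connections from hosts other than localhost
remote enable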
So I tried a dozen ways to get it started on a worker dyno, but they would all crash. I never got to the bottom of all the environment issues; I am not very expert at Unix. However... I did get it to work by spawning a child process that runs Rserve at the end of my server.js initialization script on the web dyno. It works.
var childProcess = require('child_process');
// spawn the Rserve daemon from the web process once the server is initialized
childProcess.exec('R CMD Rserve --no-save', function (error, stdout, stderr) {});
My plan is to implement it this way in the worker process and use Web Workers to communicate between the separate environments.
