I need to run dotnet run from the command line, and once the application is listening on localhost I need to run another command, such as curl.
I could do it like this:
dotnet run & sleep 12; curl http://localhost:59406
but if dotnet run takes more than 12 seconds to start, curl will run before the application is ready and fail.
It may be possible to solve this with Linux commands, but I am not experienced with them.
If it helps: I will use this when building an Azure DevOps pipeline, to run newman tests against localhost.
You can extend the sleep time if dotnet run needs longer to start your application, so that the application is already up when curl hits the localhost URL.
If you use this in an Azure DevOps pipeline, you can use two Bash tasks to run the above commands. The first Bash task runs dotnet run & to start your application in the background. The second Bash task runs curl against the localhost URL. See the example YAML pipeline below.
- bash: |
    #cd ConnectionWeb/ConnectionWeb
    #dotnet restore
    dotnet run &
- bash: |
    curl http://localhost:8881
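If a fixed sleep turns out to be fragile, a rough alternative sketch is to poll the URL until the application answers before running the real request (port 59406 here is just the one from the question):
dotnet run &
# wait until the application responds instead of sleeping for a fixed time
until curl --silent --fail http://localhost:59406 > /dev/null; do
  sleep 1
done
curl http://localhost:59406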
I use the geospatial rocker2 image to deploy Rstudio for development and a Shiny app for production. By using a single image, I have a consistent package library, credentials and database connections. I would like to use this same image to serve a plumber API.
Using the standard plumber.R example and the standard plumber Docker example I have tried to serve it as follows:
docker run -v `pwd`/app/plumber.R:/plumber.R --name plumber --restart=unless-stopped \
-p 8000:8000 my_rocker2_fork/geospatial Rscript /plumber.R
Success, kind of. The plumber.R file is clearly being sourced, but it is not being "plumbed":
Another issue is that the container continually restarts (this is the output of docker ps - please ignore the node.js container running):
One more oddity is that port 8000 isn't shown. Sometimes it is, sometimes it isn't. I think this is related to the restarting behaviour.
My code isn't plumbed, because I don't have the Entrypoint that is standard in the rstudio/plumber Dockerfile, and I don't think I want this Entrypoint, as it may cause issues with Rstudio Server and the Shiny app that are also in this image. Therefore, I think it is probably optimal to "plumb" by expanding the Rscript command at the end of my Docker run statement:
docker run -v `pwd`/app/plumber.R:/plumber.R -p 8000:8000 my_rocker2_fork/geospatial \
'Rscript pr("/plumber.R") %>% pr_run(port = 8000)' &
However, this fails because of all the special characters (like the pipe operator). How can I serve plumber code with an arbitrary Dockerfile without an Entrypoint?
The answer is simple! Call a script that sets the plumbing in motion, e.g.
docker run -v `pwd`/app/plumber.R:/plumber.R -v `pwd`/app/plumb_start.R:/plumb_start.R \
-p 8000:8000 my_rocker2_fork/geospatial Rscript /plumb_start.R
Where plumb_start.R contains:
pr("plumber.R") %>% pr_run(port=8000)
Make sure that you also expose port 8000 in the Dockerfile.
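As a quick smoke test (assuming the standard plumber.R example, which defines a GET /echo endpoint; adjust the path for your own API), you can hit the running container from the host:
# /echo comes from the standard plumber.R example and may differ in your API
curl "http://localhost:8000/echo?msg=hello"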
I am trying to keep a dotnet run command running after terminating the SSH session in which I started it.
I am trying to use nohup, although it doesn't seem to work as advertised:
nohup dotnet run --project=CBU &
It prints the following, but still blocks input to the CLI:
nohup: ignoring input and appending output to 'nohup.out'
Terminating the SSH session shows that the application stops executing.
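One thing worth trying (a sketch, not a verified fix for this exact setup) is to detach the process fully by redirecting all three standard streams and disowning the job, so the shell neither waits on its output nor delivers SIGHUP when the SSH session ends:
# redirect stdin/stdout/stderr and remove the job from the shell's job table
nohup dotnet run --project=CBU > app.log 2>&1 < /dev/null &
disown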
We have a .NET Core application we use to index some files. We leverage Red Hat Software Collections so items like dotnet can tie into our RHEL setup.
To run the script, we do the following:
source scl_source enable rh-dotnet30
/opt/rh/rh-dotnet30/root/usr/bin/dotnet /d/h/fileprocessor.dll 1
We want to run this in cron, but we cannot get it to work. We have tried the following:
- Adding the 'source' command to the bash profile, but this doesn't seem to be reliable for us, and it is not run for the cron job
- Running this directly in cron
- Running this as a shell script in cron
We are at a loss; it seems we can never get the two commands to work together. If we don't include the source command, even if it is in our profile, it will not run and gives us the error:
"It was not possible to find any installed .NET Core SDKs
Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from:
https://aka.ms/dotnet-download"
The following works for me. I am using rh-dotnet31 (.NET Core 3.1) though, since rh-dotnet30 (.NET Core 3.0) is out of support:
Install packages:
$ sudo yum install rh-dotnet31 -y
Start at a known directory
$ cd ~
Create a directory for .NET Core source code to use
$ mkdir hello
$ cd hello
Create a simple application for testing
$ scl enable rh-dotnet31 bash
$ dotnet new console
$ dotnet publish
$ exit # this exits the subshell started by the scl enable command above
Copy the build over to a separate directory where we can run it
$ cp -a bin/Debug/netcoreapp3.1/publish ../hello-bin
Create the script that cron will invoke
$ cd ~
And put this in a ./test.sh file:
#!/bin/bash
echo "test.sh running now...."
source scl_source enable rh-dotnet31
dotnet $HOME/hello-bin/hello.dll 1
You could probably even combine the last two lines (source... and dotnet ...) into scl enable rh-dotnet31 -- dotnet $HOME/hello-bin/hello.dll 1
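For example, the body of test.sh could then be reduced to something like this (an untested sketch using the same paths as above):
#!/bin/bash
echo "test.sh running now...."
# run dotnet inside the rh-dotnet31 collection in a single step
scl enable rh-dotnet31 -- dotnet $HOME/hello-bin/hello.dll 1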
Then make it executable:
$ chmod +x ./test.sh
Set up the crontab file
$ crontab -e
And then add the line below in this file. This one runs the script every minute.
* * * * * $HOME/test.sh >> $HOME/test.cron.log 2>&1
On my machine, cron is running, so I now see the output of the cron job in the log file after a few minutes:
$ tail -f test.cron.log
test.sh running now....
Hello World!
test.sh running now....
Hello World!
test.sh running now....
Hello World!
test.sh running now....
Hello World!
test.sh running now....
Hello World!
The issue we ran into was that only the runtime had been installed, not the SDK. Once the SDK was installed, which pulled in many other dependencies, it just worked.
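If you hit the same error, a quick way to check what is actually installed inside the collection (assuming rh-dotnet31 as above) is:
$ scl enable rh-dotnet31 -- dotnet --list-sdks
$ scl enable rh-dotnet31 -- dotnet --list-runtimes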
Can I use node-inspector with Meteor?
I tried the "--debug" option and succeeded in connecting to the debug port.
But I cannot access my code.
exec "$DEV_BUNDLE/bin/node" "--debug" "$METEOR" "$@"
You might have encountered the same problem I did:
On a Linux machine, the Meteor script spawns two processes:
Process 1: node "meteor files"
Process 2: node "your meteor files"
When you run exec "$DEV_BUNDLE/bin/node" "--debug" "$METEOR" "$@", it spawns process 1 in debug mode, but process 2 still runs in normal mode. This is why you cannot see your files.
I just run the regular meteor script and send kill -s USR1 to process 2; then you can see your files in node-inspector.
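As a rough sketch of that flow (the PID lookup is manual here, and <pid> is a placeholder for the second node process you find):
# start meteor normally, then list the node processes it spawned
meteor run &
ps -ef | grep [n]ode
# send SIGUSR1 to the child process that runs your app code (replace <pid>)
kill -s USR1 <pid>
# attach node-inspector and open the URL it prints
node-inspector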
I am trying to create a Jenkins job that restarts a program that runs all the time on one of our servers.
I specify the following as the command to run:
cd /usr/local/tool && ./tool stop && ./tool start
The script 'tool' contains a line like:
nohup java NameOfClass &
The output of that ends up in my build console instead of in nohup.out, so the job never terminates unless I terminate it manually, which terminates the program.
How can I cause nohup to behave the same way it does from a terminal?
If I understood the question correctly, Jenkins is killing all processes at the end of the build and you would like some process to be left running after the build has finished.
You should read https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
Essentially, Jenkins searches for processes with a secret value in the BUILD_ID environment variable. Just override it for the processes you want to be left alone.
In the new Pipeline jobs, setting BUILD_ID no longer prevents Jenkins from killing your processes once the job finishes. Instead, you need to set JENKINS_NODE_COOKIE:
sh 'JENKINS_NODE_COOKIE=dontKillMe nohup java NameOfClass &'
See the wiki on ProcessTreeKiller and this comment in the Jenkins Jira for more information.
Try adding the & in the Jenkins build step and redirecting the output using > nohup.out.
I had a similar problem with running a shell script from Jenkins as a background process. I fixed it by using the command below:
BUILD_ID=dontKillMe nohup ./start-fitnesse.sh &
In your case,
BUILD_ID=dontKillMe nohup java NameOfClass &
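Applied to the restart command from the question, the Jenkins build step could look something like this (a sketch; the overridden BUILD_ID is inherited by the java process that ./tool start launches):
cd /usr/local/tool
./tool stop
# override BUILD_ID so the ProcessTreeKiller leaves the background process alone
BUILD_ID=dontKillMe ./tool start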