Any issues running a daemon via XSP2? - asp.net

We want to run a daemon that exposes itself via ASMX, using Mono 2.0 (or later). Instead of dealing with the ASP.NET hosting APIs, we're thinking about just starting a daemon thread in the Application_Start event. XSP2 shouldn't restart the appdomain, so our daemon will be safe.
Are there any downsides to this (besides being a bit odd)? Any other approaches that allow us to have our code running in the same appdomain as the ASMX requests?
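For context, the approach we have in mind looks roughly like this (just a sketch; the class name and sleep interval are placeholders):

    // Global.asax.cs - sketch of starting a background daemon thread at application start
    using System;
    using System.Threading;

    public class Global : System.Web.HttpApplication
    {
        private static Thread _daemonThread;

        protected void Application_Start(object sender, EventArgs e)
        {
            _daemonThread = new Thread(DaemonLoop);
            _daemonThread.IsBackground = true;   // don't block process shutdown
            _daemonThread.Start();
        }

        private static void DaemonLoop()
        {
            while (true)
            {
                // ... the daemon's periodic work goes here ...
                Thread.Sleep(60000);
            }
        }
    }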

Why do you need XSP to run a daemon by calling an ASMX when you can just build a console application (with the same code, or accepting arguments)? That can be called from a terminal or from any shell script and added to cron. Simple, and no server is required.
If you want to do this (not the way I would do it), you can set up a basic server instance (using nginx, lighttpd or apache) listening on a certain internal port, add that server to a dummy host, and from cron or a shell script you can do
wget http://dummyhost/mydaemon.asmx
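For example, a crontab entry like this would hit the endpoint every five minutes (the interval and hostname are just placeholders):

    */5 * * * * wget -q -O /dev/null http://dummyhost/mydaemon.asmx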

Related

Azure DevOps Pipeline Task to connect to Unix Server and execute commands

I am seeking to set up a Release Pipeline in Azure DevOps Services that will deploy an application to a Unix server, where it then executes some Unix commands as part of the deployment.
I would appreciate some guidance on which pipeline Task(s) I can set up to achieve the following objectives:
Connect to the Unix server.
Execute the required Unix commands.
By the way, the Agents are currently installed on Windows hosts, but we are looking to extend that to Unix servers in due course, so a solution that fits both setups would be ideal, even though the former is the priority.
You can check out the SSH Deployment task.
Use this task to run shell commands or a script on a remote machine using SSH. This task enables you to connect to a remote machine using SSH and run commands or a script.
If you need to copy files to the remote Linux server, you can check out the Copy Files Over SSH task.
You will probably need to create an SSH service connection. See the steps here to create a service connection.
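A pipeline snippet using these tasks could look roughly like this (the service connection name, folders, and commands are placeholders, not from the original question):

    steps:
    - task: CopyFilesOverSSH@0
      displayName: Copy the build output to the Unix server
      inputs:
        sshEndpoint: 'my-unix-server'        # name of the SSH service connection (placeholder)
        sourceFolder: '$(Build.ArtifactStagingDirectory)'
        targetFolder: '/opt/myapp/releases'
    - task: SSH@0
      displayName: Run the deployment commands on the Unix server
      inputs:
        sshEndpoint: 'my-unix-server'
        runOptions: 'inline'
        inline: |
          cd /opt/myapp/releases
          ./deploy.sh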
In the end, due to concerns raised about installing private keys on the target server, which is part of the SSH Deployment setup, we opted for Deployment Groups, which have enabled us to set up a persistent connection to our Linux server.

Do I need gunicorn for internal async microservices?

As far as I read all over the Internet, the best practice for deploying Flask/Django applications is to put them behind a web server such as nginx and bundle them with a pre-fork server such as gunicorn or uWSGI.
This is good for many reasons, such as SSL termination and protection against HTTP attacks (nginx), forking workers for concurrency, and restarting the application after a memory leak or other exceptions (gunicorn).
I want to deploy an internal API microservice built on Sanic with pm2. As it's not customer-facing and will only be called from internal services, SSL termination and protection against HTTP attacks are irrelevant; the concurrency is guaranteed by Sanic's asyncio nature, and the restarting upon exception is handled by pm2.
Do I still need gunicorn and nginx? Can't I just run the application process as is and let it talk directly to its callers?
You absolutely do not need to have gunicorn in front of your stack. Sanic can run just fine without having a web server in front of it since it has its own internal server.
I would still advocate for using nginx to terminate TLS and to handle static files (even though Sanic could do both of these), since it is efficient at it.
Here's a link to another answer I've given on the same question: https://community.sanicframework.org/t/gunicorn-uwsgi-vs-build-in-http-server/47/2?u=ahopkins
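To illustrate, a bare-bones Sanic service run directly on its built-in server might look like this (the app name, route, and port are made up for the example):

    from sanic import Sanic
    from sanic.response import json

    app = Sanic("internal_api")

    @app.route("/health")
    async def health(request):
        # Trivial endpoint so internal callers can talk to the service directly
        return json({"status": "ok"})

    if __name__ == "__main__":
        # Sanic's built-in server; no gunicorn or nginx in front
        app.run(host="0.0.0.0", port=8000)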
You don't need it. Look at http://supervisord.org/ to start, restart, autorestart, etc. your services.
That said, I use gunicorn and supervisord in conjunction.

Add nginx config on the fly

I am building a multi-tenant application where requests to multiple domains have to be serviced by the same nginx server.
In order to achieve this, a script creates nginx configs for each domain after a registration process and adds them into a folder. The base nginx configuration has been setup to read configs from this folder.
If I manually restart nginx using sudo service nginx restart the application works fine. However, I am looking for this to happen without manual intervention, i.e. I want my script to refresh the nginx config, and I want to do it without entering a sudo password again.
Can someone help me achieve this?
I would strongly discourage using service nginx restart to reload configs, especially in a multi-tenant environment. You risk interrupting ongoing requests, sessions, etc. That may be fine, but each tenant has to make that determination and has to do so at appropriate times. Nginx supports the command service nginx reload to address this concern. Reload allows configs to be reloaded without any downtime.
You could trigger the command in at least 3 ways:
Periodic cron job (easiest to set up, least efficient)
Manually triggering the command
Triggering through filesystem monitoring
Option 2 would be good if, for example, you had some web interface that allows a tenant to modify a config, so it knows when to trigger the command directly or to send a message to some other service that triggers it. You can avoid the sudo password securely by granting the web application the ability to run a single command as root, e.g. run visudo and add the line www-data ALL=(ALL) NOPASSWD: /usr/sbin/service nginx reload, where www-data should be whatever user your application runs under. Then you can just execute the shell command according to whatever API is appropriate for the language you are using, as in the sketch below.
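For instance, in Python (purely illustrative; use the equivalent in your application's language):

    import subprocess

    # Relies on the NOPASSWD sudoers entry above, so no password prompt is needed
    subprocess.run(["sudo", "/usr/sbin/service", "nginx", "reload"], check=True)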
Option 3 would be the most robust. There are several options for monitoring the filesystem, but I would recommend incron. Here's a guide to install and configure incron. You could monitor changes to whichever directory you store configs in and use service nginx reload in place of the example command in the tutorial.
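An incrontab entry along these lines would do it (the watched directory is an assumption; use wherever your generated configs actually live):

    /etc/nginx/conf.d IN_CREATE,IN_MODIFY,IN_DELETE,IN_MOVED_TO /usr/sbin/service nginx reload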

Implementing simple software updater using rsync

I'm trying to find a way to update client software while reducing traffic and update server load.
Case:
The server is just an HTTP server that has the latest non-compressed/packed version of the software.
The client uses rsync to download changes.
Does the server have to run an rsync instance/host/service (not sure what to call it) in order to produce delta files?
I've seen some forum questions about downloading files with rsync, and it seemed like the server didn't need an rsync instance. If the server isn't running an rsync instance, is that download going to be done without delta files?
Do you know of other solutions that can reduce network and server load?
The server doesn't need any special software other than an SSH server.
I was incorrect about this for your use case. I believe what you are looking for is rsync in daemon mode on the server. This has rsync listen on a port to serve requests.
I misunderstood what you were trying to do at first. While in theory it might still be possible to do this with only ssh or telnet, I think daemon mode is a better solution.
See: SSH vs Rsync Daemon
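A rough sketch of what daemon mode could look like (the module name, paths, and hostname are invented for the example):

    # /etc/rsyncd.conf on the server; start the daemon with: rsync --daemon
    [software]
        path = /srv/software/latest
        read only = yes

    # On the client: only changed parts of files are transferred
    rsync -av rsync://updates.example.com/software/ /opt/myapp/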

How to enable a maintenance page on the frontend server while you are restarting the backend service?

I am trying to improve the user experience while a backend service is down due to maintenance (shut down manually).
We do have a frontend web proxy, which happens to be nginx, but it could also be something else like a NetScaler instance. An important note is that the frontend proxy is running on a different machine than the backend application.
Now, the backend service takes a lot of time to start, even more than 10 minutes in some cases.
Note, I am asking this question on StackOverflow, as opposed to ServerFault because providing a solution for this problem is more likely to require writing some bash code inside the daemon startup script.
What we want to achieve:
service mydaemon stop should enable the maintenance page on the frontend proxy
service mydaemon start should disable the maintenance page on the frontend proxy
In the past we used to create a maintenance.html page and had nginx configured to check for the existence of this page using try_files, before falling back to the backend.
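That old setup looked roughly like this (a sketch from memory; names, paths and the backend address are illustrative):

    location / {
        # Serve maintenance.html if it exists, otherwise fall through to the backend
        try_files /maintenance.html @backend;
    }

    location @backend {
        proxy_pass http://10.0.0.5:8080;   # the backend application host (placeholder)
    }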
Still, because we decided to move nginx to another machine, we cannot do this any more, and doing it over SSH raises security concerns.
We already considered writing this file to an NFS share which would be accessible by both machines, but even this solution does not scale for a service that has a lot of traffic: nginx would end up checking for the file on every request, slowing down the responses quite a lot.
We are looking for another solution for this problem, one that would ideally be more flexible.
As a note, we still want to be able to trigger this behaviour from inside the daemon script of the backend application. So if the backend application stops responding for other reasons, we do expect to see the same from the frontend.
