I have a shared host with ASP.NET MVC; my worker process times out after 5 minutes of inactivity, causing the site to take up to 30 seconds to restart. I can't edit these settings on shared hosting. I found some info online suggesting I can use a scheduled task that keeps hitting the site every few minutes so it never goes idle.
Executable: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Argument: -c "(new-object system.net.webclient).downloadstring('http://[domain.tld][path][file_name]')"
I'm not sure about the Executable and the Argument; what should I put there? Should I point it at the home page, or at a page with few views, like the privacy page?
What's a good-practice setup for keeping the site from going idle using a scheduled task?
The executable and argument together form the command the scheduler runs to request a web page and print the data returned. For example, if you run this from a command-line terminal (assuming you have PowerShell), you should see the JavaScript and HTML that make up google.com:
powershell -c "(new-object system.net.webclient).downloadstring('https://google.com')"
I am not sure whether or not this is an acceptable practice to keep websites from going idle on shared hosting spaces.
I have a C# program that uses a SQLite database to read and write data. This program requires UAC elevation, and I need it to be running at all times. When I run the program manually (I have to 'Run as admin'), my SQLite database works normally and can read and write data to the database file. However, my issue is when I try to have the program start automatically when the computer boots.
As I mentioned, I need this program to be running at all times, so I have put a couple of things in place that re-execute it if it crashes (which works great). However, I also need it to start when the computer restarts. Normally that isn't a problem, but the program requires UAC and I will rarely be around to click Yes on the UAC dialog, so I read around and it seems the only way to do this is to set up a task in Task Scheduler. So I set up a task to run the program at startup. Upon testing, the program does execute, but it does not function correctly. Upon further debugging, I've found that each time my code reaches a SQLiteDataReader.Read() line, it returns false even though I know there are records there, but this only happens when the program is run through Task Scheduler. No errors seem to be coming from SQLite. I suspect file permissions are the issue, but I don't know how to resolve it.
A couple of things to note about what I've tried already:
1) In Task Scheduler, I've set the task to run under the same user account I've been using to run the program manually, which is also a Domain Admin, Admin, and local Admin account.
2) The task is set to "run with highest privileges"
3) I've changed the security permissions to Full Control for just about every object I can think of (Admins, Domain Admins, Users, Everyone, etc.) on both the root folder of the program AND the SQLite database file.
4) I've even tried moving the entire application outside of the Program Files folder in case there was some sort of restricted access involved there as well.
I'm at my wits' end trying to figure this out. Any ideas on what to try next? Or other solutions to get this to run correctly at startup without user interaction?
I'm a bit late reporting back on this issue. Stupid on my part... The scheduled task simply needed the application's folder set as its "Start in" path; since my code references the database file with a relative path, it wasn't finding the file. I personally don't understand why this doesn't just default to the app's folder, but you live and learn, and bang your head on everything in between.
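For anyone who prefers to make the code itself immune to this, here is a minimal sketch of the alternative fix: resolve the database path against the executable's own folder instead of the working directory that Task Scheduler sets. This assumes the System.Data.SQLite package; the file name and query are placeholders.

using System;
using System.IO;
using System.Data.SQLite; // assumes the System.Data.SQLite package

class Program
{
    static void Main()
    {
        // Build the database path from the executable's folder rather than the
        // current working directory, which Task Scheduler controls via "Start in".
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        string dbPath = Path.Combine(baseDir, "mydata.db"); // placeholder file name

        using (var conn = new SQLiteConnection("Data Source=" + dbPath + ";Version=3;"))
        using (var cmd = new SQLiteCommand("SELECT name FROM my_table", conn)) // placeholder query
        {
            conn.Open();
            using (SQLiteDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) // finds the records regardless of working directory
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}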
When I develop with PHP, I update files on the production server by simply copying them over, and everything seems to work without interrupting the server.
But to update the code of a Go server application, I would need to kill the server, copy the source files over, run go install, and then start the server again. This would interrupt the service, and if I do it often it is going to look very bad to the users of my service.
How can I update files without downtime when using Go with Go's http server?
PHP is an interpreted language, which means you provide your code in source format and the PHP interpreter will read it and execute it (it may create a more compact binary form so that it doesn't have to analyze the source again when needed).
Go is a compiled language: it compiles into a native executable binary. Going further, the binary is statically linked, which means all the code and libraries your app refers to are compiled and linked in when the executable is created. This implies you can't just "drop in" new Go code into a running application.
You have to stop your running application and start the new version. You can, however, minimize the downtime: only stop the running application once the new version of the executable has already been built and is ready to run. You may choose to compile it on a remote machine and upload the binary to the server, or upload the source and compile it on the server; it doesn't matter.
With this you can reduce the downtime to a few seconds at most, which your users won't notice. Also, you shouldn't be updating every hour; you can't really produce a significant update in just an hour of coding. You could schedule updates daily (or even less frequently), and schedule them for hours when your traffic is low.
If even a few seconds of downtime is not acceptable to you, then you should look at platforms which handle this for you automatically, without any downtime. Check out Google App Engine - Go, for example.
The grace library will let you do graceful restarts without annoying your users: https://github.com/facebookgo/grace
That said, in my experience restarting a Go application is so quick that, unless you run a high-traffic website, it won't cause any trouble.
First of all, don't do it in that order. Copy and install first; then you can stop the old process and run the new one.
If you run multiple instances of your app, then you can do a rolling update, so that when you bounce one server, the other ones are still serving. A similar approach is to do blue-green deployments, which has the advantage that the code your active cluster is running is always homogeneous (whereas during a rolling deploy, you'll have a mixture until they've all rolled), and you can also do a blue-green deployment where you normally have only one instance of your app (whereas rolling requires more than one). It does however require you to have double the instances during the blue-green switch.
One thing you'll want to take into consideration is in-flight requests: you may want to make sure that in-flight requests continue to go to old-code servers until they're finished.
You can also look into Platform-as-a-Service solutions, that can automate a lot of this stuff for you, plus a whole lot more. That way you're not ssh'ing into production servers and copying files around manually. The 12 Factor App principles are always a good place to start when thinking about ops.
I need to run 4 background jobs for cleaning temp files and processing some files. I have chosen Quartz.NET for the job.
I have an ASP.NET website that accepts uploaded files, which will be processed by the Quartz jobs at night.
First I thought about making a console application for the Quartz jobs, keeping the website and the jobs totally decoupled.
But then I saw that I will need some config values (connection string and file paths) that live in the ASP.NET web.config. So a question came to mind:
Should I run the jobs through the ASP.NET instance, or in a console application?
Furthermore, when the Quartz jobs start running, I want the website to show a special page (like "We are processing the files...").
What I care about most is performance: I don't want the website to be affected by the Quartz jobs, nor the jobs' performance affected by the website.
So, what should I do? Have you done something like this and can you give me some advice?
Should I run the jobs through the ASP.NET instance, or in a console application?
If you want to have to manually trigger them each night, sure. But a console application run by the host system's task scheduler seems like a more automated solution. A web application is more of a request/response system; it's not really suited for periodic or long-running actions. Scheduling some sort of background operation on the host, such as a scheduled console application or a Windows service, would serve that purpose better.
Note that if it truly needs to be unattended and run even when nobody is logged in to the server console, a Windows service may be a better fit than a console application.
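If you do go the Windows service route, a minimal sketch of the shell might look like this. It assumes a reference to System.ServiceProcess; the JobRunner class is a hypothetical stand-in for whatever starts and stops your Quartz scheduler, since the exact wiring depends on your Quartz version.

using System.ServiceProcess;

// Hypothetical stand-in for the code that starts and stops the Quartz scheduler.
public class JobRunner
{
    public void Start() { /* create the scheduler, register the jobs, start it */ }
    public void Stop()  { /* shut the scheduler down, letting running jobs finish */ }
}

public class JobService : ServiceBase
{
    private readonly JobRunner _runner = new JobRunner();

    protected override void OnStart(string[] args)
    {
        _runner.Start(); // begins scheduling the nightly jobs; no logged-in user required
    }

    protected override void OnStop()
    {
        _runner.Stop(); // called when the service is stopped or the machine shuts down
    }

    public static void Main()
    {
        ServiceBase.Run(new JobService());
    }
}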
I've seen that I will need some config values (connection string and file paths) that live in the ASP.NET web.config
Console applications have App.config files, which serve the same purpose. You can use that.
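Here is a rough sketch of reading those values in the console application with the classic .NET Framework ConfigurationManager; the "MainDb" and "UploadFolder" key names are placeholders, so use whatever your web.config already defines.

using System;
using System.Configuration; // add a reference to System.Configuration

class Program
{
    static void Main()
    {
        // These mirror the values the website keeps in web.config;
        // the key names here are made up for the example.
        string connectionString =
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        string uploadFolder = ConfigurationManager.AppSettings["UploadFolder"];

        Console.WriteLine("DB: " + connectionString);
        Console.WriteLine("Uploads: " + uploadFolder);
    }
}

The matching connectionStrings and appSettings sections in App.config use the same syntax as in web.config, so the entries can usually be copied across as-is.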
I want that when the Quartz jobs start running, the website shows a special page
You definitely want to keep the two de-coupled. But you may be able to accomplish this easily enough. Maybe have some sort of status flag in the database which indicates if any particular record is "currently being processed". The website can simply look for any records with that flag when a page loads and display that message.
There are likely a few different ways to synchronize status here; it doesn't really matter which one you choose. What does matter is that the systems remain decoupled, and that any status which is persisted is handled somewhat carefully, so that an errant process can't leave an incorrect status behind. (For example, a background task sets a status of "processing" and then fails in some way; the website would forever indicate that it's processing.)
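As a rough sketch of that flag pattern (the StatusStore helper is hypothetical; in practice it would read and write a row or column in the shared database rather than a static property):

using System;

// Hypothetical helper that persists the "currently processing" flag.
// Shown as an in-memory stub purely to illustrate the pattern; the real
// implementation would update the shared database the website reads from.
public static class StatusStore
{
    public static bool Processing { get; private set; }
    public static void SetProcessing(bool value) { Processing = value; }
}

public class NightlyCleanupJob
{
    public void Run()
    {
        StatusStore.SetProcessing(true); // the website sees this and shows its "processing" page
        try
        {
            // ... clean temp files, process the uploaded files ...
        }
        finally
        {
            // Always clear the flag, even if processing throws, so the site
            // can't end up reporting "processing" forever.
            StatusStore.SetProcessing(false);
        }
    }
}

public static class Program
{
    public static void Main() { new NightlyCleanupJob().Run(); }
}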
I deploy applications to Unix boxes. We work with around 100 boxes, and let's say application A is deployed on 5 of them: Box1, Box2, Box3, Box4, Box5. Every time we deploy application A, we go to each of Box1-5 and check whether the deployed application has started properly by looking in the logs folder (e.g. Box1/A/B/C/logs) on each and every box, and we do this for every single application.
Is there a way to pull the logs locally from all of Boxes 1-5, and to search them by application name (e.g. application A)?
Thanks for your help in advance ...
Your question is not specific enough for me to tell you exactly what to do in your particular case, but something very like this will nonetheless aggregate the data on your local stdout, after which you can process it locally as you like:
for I in $(seq 1 5); do echo "box$I:"; ssh username@box$I 'cat /var/log/mylog'; echo; done
Many variations on the theme are possible, but if you can get this one to work, then you should soon be able to see how to adapt it to your own need.
Note that for ssh to do its work without requiring a manual login on each machine, some setup is needed on both the local and remote boxes: see man 1 ssh and review the AUTHENTICATION section, especially the paragraph that speaks of the authorized_keys file.
In my ASP.NET application, I have a script that I periodically call automatically to update the database. Due to a bug in this code, several invocations of it have entered an infinite loop.
I am on shared hosting, so I can't just restart IIS. I have tried "stopping" the website from the hosting's management site, but it had no effect.
Since they have been running for several hours now, I assume there is no timeout configured. So I would like to kill those "processes" (I assume they're actually just threads); is there a way to do that without contacting my hosting company?
Updating the web.config will stop your application, assuming you can access the files of the website.
Have you tried stopping the application pool for the website?
I was logging some information to a file from the threads I wanted to kill. The code can create the file if it doesn't exist, but not the folder it's in.
So I temporarily renamed the folder containing the log, and the threads seem to have stopped.