I have configured nginx as a reverse proxy to Kestrel. When I redeploy new DLLs using a bash script, the site does not get updated; I can only see the old site. Killing the dotnet process doesn't help either, and even restarting nginx makes no difference. I have to restart the (Linux) server to see the changes. Why is this happening?
I want Kestrel to detect changes when I update the files and restart itself automatically (or do something else to publish the new site). Is there any other way to do this?
Related
I have a .NET Core 3.1 app (an API) that I'm publishing online using the FTP publish method in Visual Studio.
It works perfectly, but when I try to publish again after making changes, I always get these errors:
Publishing folder /...
Unable to add 'AutoMapper.dll' to the Web site. The process cannot access the file because it is being used by another process (550).
Unable to add 'projectname.API.dll' to the Web site. The process cannot access the file because it is being used by another process (550).
Unable to add 'projectname.Domain.dll' to the Web site. The process cannot access the file because it is being used by another process (550).
...
This goes on for quite some time across all the DLLs in my project. The only way I can publish after working is to fully restart my PC, remove all bin and obj folders from the projects that have to be published, open VS, and publish before doing anything else.
I suspect this has to do with VS still using the DLLs somewhere/somehow while I'm trying to publish.
I have tried locating processes in task manager but haven't seen anything unusual/problematic that could cause this.
Any help regarding this would be greatly appreciated, because I like to update my API for every route I write so I can test it on the deployed version as well, but it's quite a hassle this way...
Thanks!
I don't think it's your PC. Error 550 is an FTP error. What I think is happening is that when you publish your website via FTP and then open it in your browser, you are actually starting your application on the server.
While your application is running, you cannot modify the DLLs that are in use.
So why does restarting your PC fix the problem? Hosting providers are typically configured to kill an application when nobody has requested the website for a while, to save memory and CPU.
So when you restart your PC, delete the bin and obj folders, and rebuild your project, that takes time; because no requests reach your website in the meantime, the application gets killed, and now you can update your website over FTP again.
To test this scenario, simply close your browsers, do not open your website for a while (say, half an hour), and then try to update it via FTP.
Or restart your PC and delete the bin and obj folders, but open your site in a browser before pushing your files over FTP. That should start your application and cause the 550 error again.
Environment: ASP.NET Core 2.1, Ubuntu.
In old-style ASP.NET, when I did a bin deploy (uploaded some DLL files, for example), the web app would detect that and reload itself - very useful.
With Core it doesn't do that. I need to stop and restart the dotnet MyApp.dll process.
How do I make it detect changes to binaries and reload?
There are file watchers on Ubuntu, such as systemd path units or inotify, that can issue restart commands whenever files change, but I would strongly advise against that. Uploads can pause or be slow, and if you are uploading 50 files, imagine a restart after every single one, every couple of seconds. The server has no way of knowing when you have finished uploading the last DLL. IIS has the same problem; it's only reliable in development because you refresh the page after the full DLL rebuild. In production you don't want random visitors to boot your site midway through an upload. Errors, file locks, all kinds of weird things can happen.
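For context, the kind of watcher being warned about here would look roughly like this. This is a sketch only, not a recommendation: inotifywait comes from the inotify-tools package, and the directory and kestrel-myapp service name are assumptions.

    #!/usr/bin/env bash
    # Naive auto-restart watcher -- illustrates the problem described above.
    # Requires inotify-tools; the path and service name are assumptions.
    APP_DIR=/var/www/myapp      # assumed deployment directory
    SERVICE=kestrel-myapp       # assumed systemd service name

    while inotifywait -r -e close_write -e create -e delete "$APP_DIR"; do
        # Fires on every changed file, so a 50-file upload triggers a restart
        # every few seconds while the app is only half-copied.
        systemctl restart "$SERVICE"
    done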
As pointed out by Chris Pratt, you want to script your deployment workflow. I don't know what environment you are developing on, but if you have Visual Studio and WinSCP, it's as easy as writing a couple of lines using WinSCP's Scripting and Task Automation.
Then your publish workflow can be, for example, as follows:
Hit Publish in Visual Studio
VS executes the WinSCP script after the publish finishes
Authenticate on the remote server
Upload the publish folder to a remote folder
Remove old files
Prune logs
Once everything is done, issue the systemctl restart kestrel-myapp command
Then your site is deployed, cleaned and restarted in the most reliable fashion with a single click.
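If you are not working from WinSCP, the same workflow can be scripted from any shell. Here is a rough sketch; the host, local and remote paths, and the kestrel-myapp service name are all assumptions to adapt to your setup:

    #!/usr/bin/env bash
    # Deploy sketch: upload the publish output, clean up, restart exactly once.
    # Host, paths and service name are assumptions.
    set -euo pipefail

    HOST=user@example.com
    PUBLISH_DIR=bin/Release/publish   # local publish output (assumed)
    REMOTE_DIR=/var/www/myapp         # remote app folder (assumed)

    # Upload new files and remove remote files that no longer exist locally.
    rsync -az --delete "$PUBLISH_DIR"/ "$HOST:$REMOTE_DIR"/

    # Prune old logs (ignore errors if there are none).
    ssh "$HOST" "find $REMOTE_DIR/logs -name '*.log' -mtime +14 -delete" || true

    # Restart the app once, after the upload has fully completed.
    ssh "$HOST" "sudo systemctl restart kestrel-myapp"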
There's nothing I'm aware of that will do this for you. IIS watches things like the bin directory, web.config, etc. and recycles the App Pool when it detects changes, but that's because it is built to. It's also a full-featured web server, and App Pool recycling on file changes is one of those features. Kestrel, which I assume you're using, is not: it's a very simple web server that does just what it needs to do strictly as a web server. That's why a more traditional web server like IIS, Apache, Nginx, etc. is normally used as a reverse proxy in front of Kestrel - to provide more advanced functionality.
All that said, though, this is really just a matter of your release strategy. Personally, I'd encourage you to go with something far more robust than copy-pasting DLLs, but if you want to go that route, you can also script it. Create a shell script to copy the bin directory and restart your app. Your release should be on rails as much as possible. Every time human intervention is called for, you have a potential point of failure, because humans are inherently fallible. A script, however, once tested and ensured to work, will pretty much work every time, because it always does the same things in the same order.
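A minimal sketch of such a script, run on the server, assuming the new build has already been uploaded to a staging folder and the app runs under a systemd service called kestrel-myapp (both names are assumptions):

    #!/usr/bin/env bash
    # Sketch: swap in a freshly uploaded build and restart the app once.
    # The staging/app paths and service name are assumptions.
    set -euo pipefail

    STAGING=/var/www/myapp-staging   # where the new files were uploaded (assumed)
    APP_DIR=/var/www/myapp           # live app directory (assumed)
    SERVICE=kestrel-myapp            # systemd unit running "dotnet MyApp.dll" (assumed)

    systemctl stop "$SERVICE"        # stop the app so the DLLs are no longer locked
    cp -a "$STAGING"/. "$APP_DIR"/   # copy the new binaries over the old ones
    systemctl start "$SERVICE"       # bring the site back up on the new build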
I have a couple of applications (PDG Stores) running on a Plesk server with Apache; both of them use a CGI include to run a PHP script that builds their menus. After moving to a new server (Plesk 12 Odin + Nginx + Apache), none of these includes work any longer.
Not sure where to start here: is it Nginx, Apache, or SELinux?
There are no errors in any logs at all; the includes fail silently.
The Apache include module is enabled.
Not sure how to troubleshoot Nginx or SELinux.
Has anyone else had this issue?
I have a CentOS 5.x server running Mono 2.8.1 and mod_mono 2.8 with apache2.
Every time I deploy a site from Visual Studio 2010 to my server via FTP and navigate to the site, I get a 404 Not Found error page.
Sites in other subdomains (virtual hosts) are not affected.
Performing a restart of httpd using /etc/init.d/httpd restart fixes the problem, and I can view my ASP.NET site again. Obviously, restarting the entire httpd process is less than ideal.
My guess is that this is similar to application domains in IIS. Is there a way to 'recycle' an app domain in mod_mono? Can I set this to happen on deployment?
You should be able to run /etc/init.d/httpd reload to force Apache to re-read its configuration without a full restart.
We have a process that attempts to download a hosted URL every minute, and if it returns 404, we kill -9 the mod_mono backend. That should be enough; you shouldn't need to touch Apache.
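A cron-friendly sketch of that watchdog; the URL is just an example, and the mod-mono-server process name may be mod-mono-server2 or mod-mono-server4 depending on your setup:

    #!/usr/bin/env bash
    # Watchdog sketch: if the site answers 404, kill the Mono backend so
    # mod_mono respawns it on the next request. Run from cron every minute.
    STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://example.com/)
    if [ "$STATUS" = "404" ]; then
        pkill -9 -f mod-mono-server
    fi

Scheduled with a crontab entry such as * * * * * /usr/local/bin/mono-watchdog.sh (the script path is an assumption).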
When I make changes to a file, Mono ASP.NET doesn't see them, and I have to do this:
sudo /etc/init.d/apache2 restart
I remember that when Mono executes ASP.NET pages it caches the compilation somewhere. Previously, when the updated page didn't come up, I would just delete that cached compiled code, but I've forgotten the exact path.
How can I make Mono ASP.NET detect the changes I make to the program without restarting the web server?
It sounds like the FileSystemWatcher cannot see changes coming from the Windows side.
After you update on the Windows side, go to the Linux side and run:
touch Web.Config
or touch any file in the directory. This should notify ASP.NET to load your new code.
This call restarts your app, recompiling it if needed, without restarting the web server. Can you test whether it works for you?
HttpRuntime.UnloadAppDomain();
The other way is to create a file named app_offline.htm in the site root, upload your changes, and then delete it.
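A sketch of that sequence from a shell, assuming scp/ssh access to the server; the host and remote path are examples only:

    #!/usr/bin/env bash
    # Sketch of the app_offline.htm flow: take the app offline, upload the new
    # build, then bring the app back. Host and paths are examples.
    HOST=user@example.com
    REMOTE=/srv/www/mysite                     # remote web root (assumed)

    echo "Site is being updated..." > app_offline.htm
    scp app_offline.htm "$HOST:$REMOTE/"       # ASP.NET unloads the app while this file exists
    scp -r publish/* "$HOST:$REMOTE/"          # upload your changes
    ssh "$HOST" "rm $REMOTE/app_offline.htm"   # app recompiles on the next request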