I don't know why this is so difficult, but every time I update a file in my Flask application I have to restart gunicorn so that the file updates on the server. I am mostly a front-end developer and don't work with servers enough to remember these things, and I have to spend hours searching Google for various phrases to find the right commands. This time I can't seem to find anything, and the file I created to save these things has conveniently disappeared.
My server:
Ubuntu 18.04
nginx
gunicorn
supervisor
I am updating a .py file. I placed the updated version on the server using FTP. I'm logged into the server over SSH, through a Git Bash shell. sudo systemctl gunicorn restart gives me the error Failed to restart gunicorn.service: Unit gunicorn.service not found.. Rereading and restarting supervisor does not do the trick, and neither does restarting nginx. Is there not a simple command to apply updates? I'm used to using servers on general hosting sites, where updating a file via FTP just works. I was really enjoying learning Flask up until this point, but now I regret it. I keep thinking there has to be some simple trick to make such a simple thing go smoothly, but I'm at the end of my rope trying to figure this out. Any suggestions?
I finally found it.
sudo supervisorctl stop app_name
sudo supervisorctl start app_name
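For what it's worth, supervisorctl can also do this in a single step, and reread/update are what pick up changes to the .conf files themselves:
sudo supervisorctl restart app_name
sudo supervisorctl reread
sudo supervisorctl update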
When I created a new .conf file inside /etc/supervisor/conf.d/ and tried to start the program, it showed some errors (a fatal error) and kept restarting by itself. Then I ran the command sudo service supervisor restart, but after that supervisor itself stopped and couldn't be restarted. While trying to fix my error, the nginx server got stuck as well.
After spending a lot of time on it I recovered everything, Alhamdulillah, and I'm writing up the solution in the answer section.
Don't assume this solution fits your problem exactly; your problem may have a different cause.
Sometimes Supervisor can show the horrible error below when you restart the service (with the command sudo service supervisor restart):
unix:///var/run/supervisor.sock refused connection
Try to diagnose the problem with the command supervisord. You can also run journalctl -xe.
Problems and Solutions:
This can happen when you write a new .conf file inside the /etc/supervisor/conf.d directory that contains statements that generate errors.
For example, say the .conf file runs a script, and that script runs Gunicorn to deploy a Python web app. In the script you bind to a unix socket, but the directory where that unix socket should be created doesn't allow the .sock file to be created there. This leads to a permission error.
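For context, a minimal program section for that kind of setup might look like the following (the script name and log paths are hypothetical; the program name and project path follow the examples in this post):
[program:app_name]
command=/home/shamim/python_project/run_gunicorn.sh
directory=/home/shamim/python_project
user=shamim
autostart=true
autorestart=true
stdout_logfile=/var/log/app_name.out.log
stderr_logfile=/var/log/app_name.err.log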
The demo gunicorn command is below:
SOCKFILE=/home/shamim/python_project/another_directroy/gunicorn.sock
gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--bind=unix:$SOCKFILE
If another_directroy doesn't grant permission to create a .sock file inside it, an error can occur. So either give it enough permission for the socket to be created there from outside, or bind to an IP and port instead of a unix socket (like 127.0.0.1:ANY_PORT), making sure first that the port is not used by another application. For example:
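As a rough illustration (the group and the port here are assumptions; the path comes from the example above):
# option 1: make the socket directory writable by the user/group gunicorn runs as
sudo chgrp www-data /home/shamim/python_project/another_directroy
sudo chmod 775 /home/shamim/python_project/another_directroy
# option 2: bind to a local TCP port instead of a unix socket
gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --bind=127.0.0.1:8000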
Sometimes the error occurs because a directory path used inside the .conf file doesn't actually exist at all.
Now run the command supervisord.
If the error persists after fixing the above issues and now shows an error like
another program is already listening on a port that one of our HTTP servers is configured to use
then run the below command to fix this issue:
sudo unlink /var/run/supervisor.sock
If the command above does not work, check for and unlink the socket file at /tmp/supervisor.sock instead.
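That is:
sudo unlink /tmp/supervisor.sock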
Keep in mind that the nginx server can also show errors and fail to restart (or start) if any of its .conf files references a socket that doesn't exist or doesn't have sufficient permissions.
Example: if you write the code below in any nginx config file:
upstream surveyapp_payment_stripe {
    server unix:/home/shamim/python_project/another_directroy/gunicorn.sock fail_timeout=0 weight=5 max_fails=3;
}
If that socket doesn't exist or doesn't have sufficient permissions, an error may occur.
Nginx can also show an error if a directory path used here doesn't exist at all. To get nginx running again quickly, just delete the offending .conf file or change its extension (to anything other than .conf) so nginx ignores it. For example:
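(The path and file name here are hypothetical.)
sudo mv /etc/nginx/conf.d/myapp.conf /etc/nginx/conf.d/myapp.conf.disabled
sudo nginx -t                     # verify the remaining configuration
sudo service nginx restart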
Hopefully this explanation will help someone in future.
Because I need to direct shiny-server logs to stdout so that "docker logs" (and monitoring utilities relying on it) can see them, I'm trying to do something like:
tail -f <logs_directory>/*
This works as needed as long as no new files are added to the directory; the problem is that shiny-server dynamically creates files in this directory, and those need to be picked up automatically.
I found that other users have solved this via the xtail package; the problem is I'm using CentOS and xtail is not available for it.
The question is: is there any "clean" way of doing this with the standard tail command, without needing xtail? Or maybe there's an equivalent to xtail that is available for CentOS?
You will probably find it easier to use the docker run -v option to mount a host directory into the container and collect logs there. Then you can use any tool you like that collects log files out of a directory (logstash is popular, but far from the only option).
This also avoids the problem of having to both run your program and a log collector inside your container; you can just run the service as the main container process, and not have to do gymnastics with tail and supervisord and whatever else to try to keep everything running.
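As a rough sketch of that approach (the image name and host path are hypothetical; it assumes shiny-server writes its logs under /var/log/shiny-server inside the container):
docker run -d \
  --name shiny \
  -v /srv/shiny-logs:/var/log/shiny-server \
  my-shiny-image
# on the host, any collector (or a plain tail) can now read /srv/shiny-logs/*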
I have several buckets mounted using the awesome riofs, and they work great; however, I'm at a loss trying to get them to mount after a reboot. I have tried adding the following to my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding a startup script with the riofs commands to my rc.local file, but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. To mount a remote bucket at startup time, consider adding the corresponding command line to your startup script (rc.local, as you mentioned).
If for some reason RioFS fails to start from the startup script, please feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config xml file, then you should be able to mount this via fstab or an init.d or rc.local script ..
See this thread
EDIT:
I tested this myself and here is what I found. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to get the filesystem up, all one needs to do is issue mount /mount/point/in-fstab .. and the fstab directive works and persists like a standard fstab-mounted filesystem.
So, it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted. That's the only logical reason I can find so far. This can be solved with an rc.local or init.d script that just issues a mount command (at worst).
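For example, with the fstab entry from the question in place, a single line in rc.local should be enough:
mount /mnt/bucket-name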
But riofs does work well, even if the documentation seems sparse. It is certainly more reliable and less buggy than s3fs ..
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!
I run my meteor app on EC2 like this: node main.js (in a tmux session)
Here are the steps I use to update my meteor app:
1) meteor bundle app.tgz
2) scp app.tgz EC2-server:/path
3) ssh EC2-server and attach to tmux
4) kill the current meteor node process with Ctrl-C
5) extract app.tgz
6) run "node main.js" of the extracted app.tgz
Is this the standard practice?
I realize forever can be used too, but even then, do I have to kill the old node process and start a new one every time I update my app? Can the upgrade be more seamless, without killing the Node process?
You can't do this without killing the node process, but I haven't found that really matters. What's actually more annoying is the browser refresh on the client, but there isn't much you can do about that.
First, let's assume the application is already running. We start our app via forever with a script like the one in my answer here. I'd show you my whole upgrade script but it contains all kinds of Edthena-specific stuff, so I'll outline the steps we take below:
Build a new bundle. We do this on the server itself, which avoids any missing fibers issues. The bundle file is written to /home/ubuntu/apps/edthena/edthena.tar.gz.
We cd into the /home/ubuntu/apps/edthena directory and rm -rf bundle. That will blow away the files used by the currently running process. Because the server is still running in memory, it will keep executing. However, this step is problematic if your app regularly does uncached disk operations after startup, like reading from the private directory. We don't, and all of the static assets are served by nginx, so I feel safe doing this. Alternatively, you can move the old bundle directory to something like bundle.old and it should work.
tar xzf edthena.tar.gz
cd bundle/programs/server && npm install
forever restart /home/ubuntu/apps/edthena/bundle/main.js
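Put together, the sequence looks roughly like this (a sketch following the steps above, not the actual Edthena script):
cd /home/ubuntu/apps/edthena          # the new bundle was already built here as edthena.tar.gz
rm -rf bundle                         # or: mv bundle bundle.old
tar xzf edthena.tar.gz
(cd bundle/programs/server && npm install)
forever restart /home/ubuntu/apps/edthena/bundle/main.js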
There really isn't any downtime with this approach - it just restarts the app in the same way it would if the server threw an exception. Forever also keeps the environment from your original script, so you don't need to specify your environment variables again.
Finally, you can have a look at the log files in your ~/.forever directory. The exact path can be found via forever list.
David's method is better than this one, because there's less downtime when using forever restart compared to forever stop; ...; forever start.
Here's the deploy script spelled out, using the latter technique. In ~/MyApp, I run this bash script:
echo "Meteor bundling..."
meteor bundle myapp.tgz
mkdir ~/myapp.prod 2> /dev/null
cd ~/myapp.prod
forever stop myapp.js
rm -rf bundle
echo "Unpacking bundle"
tar xzf ~/MyApp/myapp.tgz
mv bundle/main.js bundle/myapp.js
# `pwd` is needed because a relative ./myapp.log would actually end up in ~/.forever/myapp.log
PORT=3030 ROOT_URL=http://myapp.example.com MONGO_URL=mongodb://localhost:27017/myapp forever -a -l `pwd`/myapp.log start myapp.js
You're asking about best practices.
I'd recommend mup and cluster
They allow for horizontal scaling, and a bunch of other nice features, while using simple commands and configuration.
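For reference, once mup's config file is in place, a deploy is roughly just (exact flags depend on the mup version):
mup setup     # provision the server(s) described in the config
mup deploy    # bundle the current app and push it out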
I've been trying to set up nginx 0.8.53 and Passenger 3.0.0 on my dev environment - OS X Snow Leopard and REE. I manually compiled nginx with the Passenger module linked in.
When I tried running Passenger, it had a problem - ENV['PATH'] appeared to be null, so the split on it in the call to PlatformInfo.find_command raised an exception. That call happens when Passenger tries to determine the OS name, looking for the sw_vers command.
I tweaked the source and told it that it was macosx, and then it complained that it couldn't find the Rails 2.3.8 gem. This is probably related to the first problem.
I'm not sure how to troubleshoot this. When I su -i and sudo to nobody, both users let me start irb and see the expected value for ENV['PATH'], so I'm not sure why it isn't set when Passenger is running.
One possibility: Passenger launches as the user that owns the config/environment.rb file (or of the config.ru file, if you have one) - make sure that file's owner is something sensible.
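For example, to check the owner and, if needed, change it (the user name here is just an illustration):
ls -l config/environment.rb
sudo chown myuser config/environment.rb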
I don't know how you start Nginx, but you can write a launcher script for Nginx that starts Nginx with a specific environment, like this:
#!/bin/bash
# set up whatever environment Passenger needs before nginx starts
export PATH=whatever
# exec replaces this shell with nginx, so the environment carries over
exec /path/to/nginx