Gitlab: Problems running Unicorn, Resque with Passenger/Nginx

I have installed GitLab on a brand new Ubuntu (10.04) and it is working almost correctly. GitLab is reachable over HTTP, and I can push/pull data to the server via git. One thing is missing though: the activity feed is not updating. So I thought something was wrong with the git hooks. I followed the installation process from GitLab completely, except that I'd like to use Passenger with Nginx in order to deploy multiple apps.
I ran sudo -u gitlab -H bundle exec rake gitlab:env:info RAILS_ENV=production to see if everything is set up correctly, but it said Redis is not running. ps aux says redis-server is up. So it is not the git hooks. The GitLab documentation says to restart the gitlab service to solve that problem. When I do, I get an error which I think is the problem I need to solve:
$ sudo /etc/init.d/gitlab restart
Error, unicorn not running!
My question is, how can I get around this problem? How do I run Unicorn? I thought the gitlab service would start it. Am I not actually using Nginx? Before I start reinstalling the whole thing, this time without Passenger, I thought I'd ask the question here beforehand.

As mentioned by the OP pabera, nginx and mysql must be started for the other components of GitLab (redis, unicorn, and now sidekiq) to run properly.
The official /etc/init.d/gitlab is here.
I have my own version of gitlabd (here), because I manage sidekiq in my own script, and I don't need to run the script as root.
You can see the run order for all the services in this script:
ssh
Apache and/or NGiNX
mysql
redis
GitLab (which will start unicorn and sidekiq)
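If you need to bring things up by hand, a minimal sketch of that order on Ubuntu might look like this (the service names are assumptions and may differ on your system, e.g. redis vs. redis-server):
sudo service ssh start
sudo service nginx start
sudo service mysql start
sudo service redis-server start
sudo /etc/init.d/gitlab start   # starts unicorn and sidekiq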

Kind of a poke in the dark...
The GitLab installation.md README states:
"Start your GitLab instance:
sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart"
I ran the first AND the second and got this exact error. However, when I skipped the "or" and just continued on to the Nginx commands, it seemed to work.
Hope this helps!


Website available even with no NGINX processes running

I have pulled into my web server so it has the latest code from my repo. I try to restart nginx - this doesn't do anything.
So I try the command
sudo nginx -s stop, and get the response that it failed because there is no such file or directory "run/nginx.pid".
Trying to run the command ps aux | grep nginx gives me the response: unsupported option (BSD syntax) -- the pipe actually comes out as ps aux > grep nginx in the DigitalOcean console.
Basically, even though there are apparently no nginx processes running (although the command to check isn't working), my website is still running and using the old code. Is there a way for me to check more definitively on the running processes?
Thanks if you can help.
Try sudo netstat -plunt to check if there's any nginx process running. See if there's anything running on ports 80 or 443, and then look at the corresponding program name. You might have another server running, possibly Apache, since it ships by default with most distributions, which may be why nginx failed to start.
Another reason it won't start might be a faulty config. Go to /etc/nginx/ and double-check that it's correct. You can also run sudo nginx -t to ensure that the config syntax is valid.
Alternatively, just check your nginx access log to see if it's actually serving any requests. You can also check the error log to see why it might fail to start. These reside in /var/log/nginx by default; otherwise check your nginx.conf for a custom log path.
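Putting those checks together, a quick diagnostic pass might look like this (a sketch; the log paths assume the default /var/log/nginx):
sudo netstat -plunt | grep -E ':80|:443'   # what is actually listening on the web ports
sudo nginx -t                              # validate the config syntax
tail -n 50 /var/log/nginx/error.log        # recent errors, e.g. why it failed to start
tail -f /var/log/nginx/access.log          # watch whether requests are being served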

How to resolve "Error: No module named 'airflow.www'" while starting airflow webserver

Getting the below error while starting the Airflow webserver:
balajee@Balajees-MacBook-Air.local:~$ airflow webserver -p 8080
[2018-12-03 00:29:37,066] {__init__.py:51} INFO - Using executor SequentialExecutor
[2018-12-03 00:29:38,776] {models.py:271} INFO - Filling up the DagBag from /Users/balajee/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Error: No module named 'airflow.www'
Fixed for me
pip3 uninstall -y gunicorn
pip3 install gunicorn==19.4.0
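After the reinstall, it's worth a quick sanity check that your shell now resolves the pinned gunicorn (a hedged check, not part of the original answer):
which gunicorn
gunicorn --version   # should report 19.4.0 if the pin took effect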
I got this problem this morning, and I found a strange solution; maybe it will help you. I think you may just need to change the directory you run the command from.
I installed airflow's basic dependencies in my virtualenv directory venv with PyCharm's help, and I used PyCharm's built-in Terminal tab to access my venv directly. I used airflow initdb to init the sqlite database that stores all my logs and ops, then, following the official tutorial, I used airflow webserver to start the webserver. But today I used my Mac terminal instead, started the virtualenv, started airflow webserver, and ran into this problem:
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
Error: No module named 'airflow.www'
[2019-05-26 07:45:27,130] {cli.py:833} ERROR - No response from gunicorn master within 120 seconds
[2019-05-26 07:45:27,130] {cli.py:834} ERROR - Shutting down webserver
And I tried @Evgeniy Sobolev's solution of reinstalling gunicorn and nothing changed, but when I used my PyCharm Terminal, it still ran successfully. I guess the first directory where you init your db and run the webserver is critical. By default, when I used the PyCharm Terminal to init the db and start the webserver, that was the project root directory, like:
(venv) root@root:~/GitHub/FakeProject$ airflow webserver
But today I changed into a subdirectory to activate the virtualenv, and the working directory changed!
root@root:~/GitHub/FakeProject/SubDir$ source venv/bin/activate
(venv) root@root:~/GitHub/FakeProject/SubDir$ airflow webserver
** Error **
Run this way, it hits Error: No module named 'airflow.www', so I changed back out of the directory, and the webserver ran successfully, just like in the PyCharm Terminal:
(venv) root@root:~/GitHub/FakeProject/SubDir$ cd ..
(venv) root@root:~/GitHub/FakeProject$ airflow webserver
** It works **
I think maybe airflow stores some metadata (a path it sets up, maybe) the first time you init your airflow db, so you cannot change the directory you run the command from.
I hope it may help somebody in the future. Just check your directory!
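One way to test this hypothesis is to look at the absolute paths that get written into airflow.cfg when you first init the db (a sketch, assuming the default ~/airflow home):
grep -E 'dags_folder|base_log_folder' ~/airflow/airflow.cfg   # paths recorded at init time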
Looks like you have a problem with gunicorn.
Try executing these two commands:
sudo -H pip3 uninstall -y gunicorn
sudo -H pip3 install gunicorn
It should resolve your problem, since airflow shows you an unclear error message when the underlying issue is with gunicorn.
These are the steps I did when the problem happened:
create a separate virtualenv only for airflow (I use the anaconda distribution)
activate this env with conda activate
install airflow: pip install apache-airflow
at this moment the error No module named 'airflow.www' appeared for me
To fix it, follow these steps:
Look for where your gunicorn is: whereis gunicorn
gunicorn has to live only in your virtualenv directory: /home/yourname/anaconda3/envs/airflow_env/bin/gunicorn
If it's in two directories, keep only the one in your airflow environment and remove all the others.
Another way to verify whether gunicorn is in other directories is to print your PATH variable: echo $PATH. Look for gunicorn in /home/yourname/.local/bin and other anaconda directories on the PATH, and remove all references. Remove gunicorn from the conda base env as well: pip uninstall gunicorn.
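A sketch of those checks as shell commands (the anaconda paths above are illustrative, and so is this):
whereis gunicorn             # list every gunicorn on the system
echo "$PATH" | tr ':' '\n'   # inspect PATH entries one per line
pip uninstall gunicorn       # run inside the conda base env to remove its copy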
With these steps, I think your problem will be solved.
I used the anaconda distribution, but I think the same process can be done without it. I used airflow 1.10.0 and python 3.6.
If you have defined a custom home directory for airflow, other than the default one (~/airflow), during the installation:
You first need to export the custom path:
export AIRFLOW_HOME=/your/custom/path/airflow
Go to the airflow directory and then run the webserver:
airflow webserver -p 8080
Run the scheduler too:
airflow scheduler
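Note that export only lasts for the current shell session; to make the custom home persistent across sessions, a common approach (assuming bash) is:
echo 'export AIRFLOW_HOME=/your/custom/path/airflow' >> ~/.bashrc
source ~/.bashrc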
Please check if gunicorn is already installed on the server. For me it was installed in /usr/local/bin and was taking precedence over the gunicorn version installed with airflow. Uninstall the earlier one or fix your $PATH variable.
I solved this by starting the webserver from the airflow folder itself.
I was previously trying to start the server from the home directory, but the required modules could not be found, which may be the case here.
Late to the party, but this could help others who get here.
I got the same issue using the latest airflow version, 2.5.0.
Make sure the env variable AIRFLOW_HOME is pointing to the right location.
Thanks all for sharing.
I added sudo and it actually worked just fine.
I got the same error today and a sudo did the trick for me.

Something missing in configuration for publishing from VS to Docker on Ubuntu?

I want to publish my projects from Visual Studio to the Docker service on my own server. So some questions arise:
1) Install Docker on Ubuntu - plenty of manuals, for example: http://blog.tonysneed.com/2015/05/25/develop-and-deploy-asp-net-5-apps-to-docker-on-linux/
For me it ends (I think) at the point where he goes to "dockerize" something, but okay, at least I have Docker installed.
2) Somehow find a way to publish VS projects to Docker. Again, plenty of manuals: http://www.hanselman.com/blog/PublishingAnASPNET5AppToDockerOnLinuxWithVisualStudio.aspx
3) And the problem is that when I finally choose "Publish", specifying the connection and other settings, it fails the connection check. So, Docker out of the box isn't ready to receive deployments from VS? What do I need to fill the gap?
Edit for some details:
Docker was installed with these exact commands with no further configuration:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker
What I'm deploying is an ASP.NET 5 beta 7 app, specifying:
URL: tcp://19.85.23.13:2376
Image: microsoft/aspnet
And leaving the other parameters at their defaults. What I get is this error:
An error occured during publish. The command [docker -H
tcp://19.85.23.13:2376 build -t microsoft/aspnet -f
"C:\Users\adski\AppData\Local\Temp\PublishTemp\DockTest185\approot\src\DockTest1\Dockerfile"
"C:\Users\adski\AppData\Local\Temp\PublishTemp\DockTest185"] exited
with code [1]: Post
http://19.85.23.13:2376/v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=approot%2Fsrc%2FDockTest1%2FDockerfile&memory=0&memswap=0&rm=1&t=microsoft%2Faspnet&ulimits=null:
dial tcp 19.85.23.13:2376: ConnectEx tcp: No connection could be made
because the target machine actively refused it..
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
Please visit http://go.microsoft.com/fwlink/?LinkID=529706 for
troubleshooting guide.
Well, I'm not really a web-security expert. I've found yet another manual: http://sheerun.net/2014/05/17/remote-access-to-docker-with-tls/ but can't really tell if it is what I need. After all, nobody in those "Visual Studio Publish to Docker" guides mentioned I'd need a certificate or anything.
But obviously I need some credentials to access my server; otherwise, if it is on the web, anyone could dock something in it. And what are those cursed credentials? Any guides for dummies?
Edit 2: I found something that looks relevant: https://docs.docker.com/articles/https/
Er, is this really that complicated? But goddammit, none of those asp.net/docker tutorials mentioned that. Guides for dummies, pleeease?
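For what it's worth, that docs.docker.com page boils down to generating a CA plus server and client certificates, then starting the daemon with TLS flags, roughly like this (a sketch based on that guide, with illustrative cert paths; not from any of the VS tutorials):
docker -d --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
and connecting from the client with the matching client certs:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://19.85.23.13:2376 version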

Nginx not working properly after update

Okay, so I had nginx 1.4.6 running on Ubuntu 13.10 without any problems.
I tried to update nginx to 1.6.0 via this URL: http://leftshift.io/upgrading-nginx-to-the-latest-version-on-ubuntu-servers
Now nginx is not running and not willing to start (no reaction at all). nginx -v gives "nginx: command not found". So it looks like nginx can't be found.
I looked around here and on other sites, but wasn't able to find a solution. So, if anyone can... please do.
As this was a server without any active tools or software, I decided to remove and reinstall nginx.
I used this answer: How can I restore /etc/nginx?
QUOTE:
To recreate it, first uninstall using purge to remove even configuration files and records:
sudo apt-get purge nginx nginx-common nginx-full
then reinstall:
sudo apt-get install nginx
After these two commands, nginx was up and running again. I can now use my backup to upload the predefined .vhosts files to sites-enabled again.
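For the record, restoring the vhosts from a backup and reloading might look something like this (a sketch; the backup path and file names are illustrative):
sudo cp /backup/nginx/mysite.vhost /etc/nginx/sites-available/
sudo ln -s /etc/nginx/sites-available/mysite.vhost /etc/nginx/sites-enabled/
sudo nginx -t && sudo service nginx reload   # validate the config before reloading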

Dokku view logs? (hosted on digitalocean)

So I just started using dokku (with postgres). My app works on Heroku, so I'm pretty sure it's a deployment issue. The app seems to be running but is hitting issues at login. I did dokku logs my_app_name, however the logs seem to be old. On Heroku, whenever there is an issue there is a corresponding log entry, but here I cannot find one.
Any ideas are appreciated! Thanks!
To get a continuous log you can type:
dokku logs yourappname -t
It acts like the tail -f command on Linux and Mac systems.
Dokku logs - docs
I think you can try docker logs -f `cat /home/dokku/<app-name>/CONTAINER` to get access to the logs.
In case if you want to see the logs of the specific container:
docker ps
then find your container (the one with postgresql, for example) and run docker logs -f <CONTAINER_ID>
I hope it could help you to find out the problem.
Off topic: I found dokku-alt and am using it in my current DO image. If you are working with Ruby, it works out of the box compared with the original dokku project.
The easiest:
dokku logs -t <app_name>
Up to 300,000 lines:
dokku logs -t -n 300000 <app_name> > logs.txt
Complete log of the container (needs to be executed on the server):
docker container list # to get the container id
docker logs <container_id> > logs.txt
It may not be the answer you are looking for, but I was seeing the same issue. I just waited about 30 seconds and the logs were updated. I don't know why they aren't updated live, but they eventually came through.
