Redmine + Gitolite on Raspberry Pi - nginx

I have been at this for days now. I think I am simply too stupid. Please help.
I have Redmine + gitolite 2.3-1 set up with nginx 1.2.1-2.2+wheezy1 (please find more details under (2)). Both services are running on the same machine. The rake commands were run with user and group redmine; nginx, however, is running as the www-data user:
git@raspberrypi:/usr/share/redmine$ id redmine
uid=110(redmine) gid=113(redmine) groups=113(redmine),33(www-data),112(git)
My Raspberry Pi is behind a NAT; ports are forwarded correctly, and ssh access works for user git, who is also running gitolite.
The repo that I want to use for my project was set up with Redmine, which worked fine. Redmine now shows me ssh, http and https links for the repo to use for cloning, pushing, etc.
Clone Repository:
git clone ssh://git@<myURL>.org/<myProjectName>-gitolite.git
Now when I try cloning the repo I get the famous message:
$ git clone git@<myURL>.org:<myProjectName>-gitolite.git
Cloning into '<myProjectName>-gitolite'...
fatal: '<myProjectName>-gitolite.git' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
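One check that might help narrow this down (gitolite v2 has a built-in info command that lists the repositories the connecting key can access; <myURL> as above):
$ ssh git@<myURL>.org info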
I have cloned the repo locally; there it shows that Redmine has inserted the deployment keys just fine:
repo <myProjectName>-gitolite
RW+ = redmine__deploy_key__1385587228_1212 redmine__deploy_key_...
I am really lost here, please help. I can give more information if needed because, as you can see from the long text, I don't understand the setup too well myself. It is too complicated, I think.
Redmine also tells me that I could use http and https checkouts, but netstat reveals that there is no server listening on port 443.
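For reference, this is how I checked (standard netstat; -tlnp lists listening TCP sockets with their owning PIDs):
$ sudo netstat -tlnp | grep ':443'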
I am exhausted now, please help.
(2) I have the following system:
Linux raspberrypi 3.6.11-rpi-aufs ... armv6l GNU/Linux
Redmine: svn checkout URL: http://redmine.rubyforge.org/svn/branches/2.3-stable
plugins: plugins/redmine_git_hosting/
plugins/redmine_plugin_views_revisions/
I am using nginx to access redmine, currently via http.
git@raspberrypi:/usr/share/redmine$ ps aux | grep nginx
root 2796 0.0 0.1 11544 828 ? Ss Nov27 0:00 nginx: master process /usr/sbin/nginx
www-data 2798 0.0 0.3 12016 1448 ? S Nov27 0:03 nginx: worker process
www-data 2799 0.0 0.2 11704 1152 ? S Nov27 0:03 nginx: worker process
www-data 2800 0.0 0.3 12028 1768 ? S Nov27 0:00 nginx: worker process
www-data 2801 0.0 0.2 12028 1392 ? S Nov27 0:03 nginx: worker process

Related

Datadog integration with NGiNX

I am new to Datadog and NGiNX. I noticed, when I was creating a monitor for some integrations, that several of the integrations were labeled as misconfigured. My guess is someone clicked the install button but did not finish the remaining integration steps. I started to work with NGiNX and quickly hit a roadblock.
I verified it was built with the HTTP stub status module:
$ nginx -V 2>&1| grep -o http_stub_status_module
http_stub_status_module
The NGiNX install is under a different directory than usual,
and the configuration file is under
/<dir>/parts/nginx/conf
I created the status.conf file there.
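For reference, the status.conf is a minimal stub_status server along these lines (the port and location path here are illustrative; yours may follow the Datadog instructions):
server {
    listen 8080;
    location /nginx_status {
        stub_status on;
    }
}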
When I reload NGINX I get a failure. I don't understand what it means or how to proceed from here:
nginx: [error] open() "/<dir>/parts/nginx/logs/nginx.pid" failed (2: No such file or directory)
There is a logs directory with nothing in it.
ps -ef|grep nginx
user 35958 88952 0 May24 ? 00:00:43 nginx: worker process
user 35959 88952 0 May24 ? 00:00:48 nginx: worker process
root 88952 1 0 Feb21 ? 00:00:00 nginx: master process <dir>/parts/nginx/sbin/nginx -c <dir>/etc/nginx/balancer.conf -g pid <dir>/var/nginx-balancer.pid; lock_file /<dir>/var/nginx-balancer.lock; error_log <dir>/var/logs/nginx-balancer-error.log;
user 109169 63043 0 13:13 pts/0 00:00:00 grep --color=auto nginx
I think the issue is that our install doesn't follow the defaults the instructions assume, and I'm pretty sure I'm not doing this correctly.
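If that's right, reloading through the same binary and the same global directives shown in the ps output might sidestep the pid mismatch (untested sketch; paths copied from the ps line above):
$ sudo <dir>/parts/nginx/sbin/nginx -c <dir>/etc/nginx/balancer.conf -g "pid <dir>/var/nginx-balancer.pid;" -s reload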
If anyone has any insights that would be great!
Chris

Understanding Docker container resource usage

I have a server running Ubuntu 16.04 with Docker 17.03.0-ce, running an Nginx container. That server also has ConfigServer Security & Firewall installed. Shortly after starting the Nginx container, I start receiving emails about "Excessive resource usage" with the following details:
Time: Fri Mar 24 00:06:02 2017 -0400
Account: systemd-timesync
Resource: Process Time
Exceeded: 1820 > 1800 (seconds)
Executable: /usr/sbin/nginx
Command Line: nginx: worker process
PID: 2302 (Parent PID:2077)
Killed: No
I fully understand that I can add exe:/usr/sbin/nginx to csf.pignore to stop these email alerts, but I would like to understand a few things first.
Why is the "systemd-timesync" account being reported? That does not seem to have anything to do with Docker.
Why does the host machine seem to be reporting the excessive resource usage (the extended process time) when that is something running in the container?
Why don't other Docker containers, which are not running Nginx, trigger excessive resource usage emails?
I'm sure there are other questions but basically, why is this being reported the way it is being reported?
I can at least answer the first two questions:
Unlike real VMs, Docker containers are simply a collection of processes run under the host system's kernel. They just have a different view of certain system resources, including their own file hierarchy, their own PID namespace, and their own /etc/passwd file. As a result, they will still show up if you run ps aux on the host machine.
The nginx container's /etc/passwd includes a user 'nginx' with UID 104 that runs the nginx worker process. However, in the host's /etc/passwd, UID 104 might belong to a completely different user, such as systemd-timesync.
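A quick way to see the overlap (a sketch; the container name is a placeholder, and it assumes the image ships getent):
$ docker exec <container> getent passwd nginx   # UID of the nginx user inside the container
$ getent passwd 104                             # whoever owns that UID on the host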
As a result, if you run ps aux | grep nginx in the container, you might see
nginx 7 0.0 0.0 32152 2816 ? S 11:20 0:00 nginx: worker process
while on the host, you see
systemd-timesync 22004 0.0 0.0 32152 2816 ? S 13:20 0:00 nginx: worker process
even though both are the same process (also note the different PID namespaces; in containers, PIDs are counted from 1 again).
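You can cross-check this from the host with docker top, which lists a container's processes along with their host-side PIDs and the user names the host resolves (container name is a placeholder):
$ docker top <container>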
As a result, container processes will still be subject to ConfigServer's resource monitoring, but they might show up under seemingly unrelated, or even non-existent, user accounts.
As to why nginx triggers the emails and other containers don't, I can only assume that nginx is the only one of your containers that crosses ConfigServer's resource thresholds.

NGINX Amazon EC2 keeps loading but shows nothing

I'm kind of new to setting up a production machine, and I don't get why I'm not seeing the default index page for nginx on my EC2 machine. It's installed and running on my server, but when I try to access it, it keeps loading and shows nothing, staying on a blank page. I'm trying to access it through the public IP (35.160.22.104) and through the public DNS (ec2-35-160-22-104.us-west-2.compute.amazonaws.com). Both do the same. What am I doing wrong?
UPDATE:
I realized that when restarting the nginx service, it didn't show the "ok" message. So I took a look at error.log:
[ 2016-12-12 17:16:11.2439 709/7f3eebc93780 age/Cor/CoreMain.cpp:967 ]: Passenger core shutdown finished
2016/12/12 17:16:12 [info] 782#782: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:71
[ 2016-12-12 17:16:12.2742 791/7fb0c37a0780 age/Wat/WatchdogMain.cpp:1291 ]: Starting Passenger watchdog...
[ 2016-12-12 17:16:12.2820 794/7fe4d238b780 age/Cor/CoreMain.cpp:982 ]: Starting Passenger core...
[ 2016-12-12 17:16:12.2820 794/7fe4d238b780 age/Cor/CoreMain.cpp:235 ]: Passenger core running in multi-application mode.
[ 2016-12-12 17:16:12.2832 794/7fe4d238b780 age/Cor/CoreMain.cpp:732 ]: Passenger core online, PID 794
[ 2016-12-12 17:16:12.2911 799/7f06bb50a780 age/Ust/UstRouterMain.cpp:529 ]: Starting Passenger UstRouter...
[ 2016-12-12 17:16:12.2916 799/7f06bb50a780 age/Ust/UstRouterMain.cpp:342 ]: Passenger UstRouter online, PID 799
Anyway, it doesn't look like an error, just normal log output.
UPDATE 2:
Nginx is running:
root 810 1 0 17:16 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 815 810 0 17:16 ? 00:00:00 nginx: worker process
ubuntu 853 32300 0 17:44 pts/0 00:00:00 grep --color=auto nginx
And when I run curl localhost, it returns the HTML as expected!
UPDATE3:
When I run systemctl status nginx, I get the following error:
Dec 12 17:54:48 ip-172-31-40-156 systemd[1]: nginx.service: Failed to read PID from file /run/nginx.pid: Invalid argument
I'm trying to figure out what it means.
UPDATE4:
Ran the command nmap 35.160.22.104 -Pn PORT STATE SERVICE 22/tcp and got the output:
Starting Nmap 7.01 ( https://nmap.org ) at 2016-12-12 18:05 UTC
Failed to resolve "PORT".
Failed to resolve "STATE".
Failed to resolve "SERVICE".
Unable to split netmask from target expression: "22/tcp"
Nmap scan report for ec2-35-160-22-104.us-west-2.compute.amazonaws.com (35.160.22.104)
Host is up (0.0015s latency).
Not shown: 999 filtered ports
PORT STATE SERVICE
22/tcp open ssh
UPDATE5:
Output for netstat -tuanp | grep 80:
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
Your EC2 instance has a security group associated with it.
You should go to the AWS console: EC2 -> Instances -> click on your instance -> at the bottom, 'Description' -> Security Group. Click on the name and you will be redirected to EC2 -> Network and Security. Click on 'Edit inbound rules' and add a rule:
Type: HTTP
Save. And that should be fine!
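The same rule can also be added with the AWS CLI (a sketch; the security group ID is a placeholder):
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0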

502 Bad Gateway nginx (1.9.7) in Homestead [ Laravel 5 ]

I searched Google and various other search engines but still could not sort it out.
Here is my scenario:
Laravel 5 on Homestead
1) ps -eo pid,comm,euser,supgrp | grep nginx
[the following is the output]
2333 nginx root root
2335 nginx vagrant adm,cdrom,sudo,dip,www-data,plugdev,lpadmin,sambashare,vagrant
2) Based on some search results, I made the following changes in /etc/php/7.0/fpm/pool.d:
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
3) Output of sudo service php7.0-fpm restart:
Restarting PHP 7.0 FastCGI Process Manager php-fpm7.0 [ OK ]
4) Output of sudo service nginx restart:
nginx stop/waiting
nginx start/running, process 2650
5) Output of:
sudo /etc/init.d/nginx restart
Restarting nginx nginx [fail]
6) Output of tail -f /var/log/nginx/error.log:
2015/12/26 15:35:23 [notice] 2088#2088: signal process started
2015/12/26 15:45:23 [notice] 2266#2266: signal process started
2015/12/26 15:45:23 [alert] 2095#2095: *9 open socket #3 left in connection 5
2015/12/26 15:45:23 [alert] 2095#2095: aborting
2015/12/26 15:49:02 [alert] 2303#2303: *1 open socket #3 left in connection 3
2015/12/26 15:49:02 [alert] 2303#2303: aborting
2015/12/26 16:00:39 [notice] 2475#2475: signal process started
2015/12/26 16:02:25 [notice] 2525#2525: signal process started
2015/12/26 16:03:08 [notice] 2565#2565: signal process started
2015/12/26 16:14:45 [notice] 2645#2645: signal process started
I am just having a bad time with this 502 Bad Gateway:
nginx/1.9.7
and PHP:
PHP 7.0.1-1+deb.sury.org~trusty+2 (cli) ( NTS )
If anyone can please help me move on with this situation, that would be great. And, thank you in advance.
Finally solved this. I want to thank Miguel from the Laracasts discussion.
You need to change your configuration file under:
/etc/nginx/sites-enabled
Change the fastcgi_pass line to:
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
php7.0-fpm.sock is located under:
/var/run/php
The new VM uses PHP 7.*, while your configuration file might still have the PHP location for the 5.6 version.
Then restart Nginx and PHP
sudo service nginx restart
sudo service php7.*-fpm restart
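To confirm the change took effect, it can help to check which socket actually exists and what the site config now points at (a sketch; stock Ubuntu/Homestead paths assumed):
$ ls /var/run/php/
$ grep -rn fastcgi_pass /etc/nginx/sites-enabled/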
PHP 7.3 and the Xdebug version in Homestead 8.* are incompatible. Further info found here.
Try this in /etc/php/7.0/fpm/pool.d/www.conf:
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
Finally, restart php7.0-fpm:
service php7.0-fpm restart
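Then verify the socket was created with the expected owner and mode (the path may differ per PHP version):
$ ls -l /run/php/*.sock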
I got the same error, 502 Bad Gateway (nginx 1.x).
It's easy to solve.
Just type into your terminal.
if your VM is running:
vagrant reload --provision
else:
vagrant halt
and later:
vagrant up --provision
I had the same problem ... and solved it in an easy way:
If you use Composer, just replace the old:
laravel/homestead (v2.*)
with:
laravel/homestead (v3.0.1)
If you change the sites property after provisioning the Homestead box, you should re-run vagrant reload --provision to update the Nginx configuration on the virtual machine.
Here is my story: I installed a fresh copy of the latest Homestead and tried to run my Laravel 5.4 project, but after a day of debugging I ended up giving my project a custom PHP version. This is how it works:
1. vi Homestead.yaml
2. sites:
- map: homestead.test
to: /home/vagrant/code/my-project/public
php: "7.1"
PHP 7.1 works for Laravel 5.4 through 5.7.
3. vagrant up --provision

Phusion Passenger Standalone seems to be on but nothing appears in browser

I ssh to the dev box where I am supposed to set up Redmine. Or rather, downgrade Redmine. In January I was asked to upgrade Redmine from 1.2 to 2.2. But the plugins we wanted did not work with 2.2. So now I'm being asked to set up Redmine 1.3.3. We figure we can upgrade from 1.2 to 1.3.3.
In January I had trouble getting Passenger to work with Nginx. This was on a CentOS box. I tried several installs of Nginx. I'm left with different error logs:
This:
whereis nginx.conf
gives me:
nginx: /etc/nginx
but I don't think that is in use.
This:
find / -name error.log
gives me:
/opt/nginx/logs/error.log
/var/log/nginx/error.log
When I tried to start Passenger again I was told something was already running on port 80. But if I did "passenger stop" I was told that passenger was not running.
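(A note I found later, hedged: Passenger Standalone's stop command appears to default to port 3000, so stopping an instance that was started on port 80 seems to require naming the port:)
$ passenger stop --port 80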
So I did:
passenger start -p 81
If I run netstat I see something is listening on port 81:
netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:81 localhost:42967 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:51874 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62993 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62905 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:50886 ESTABLISHED
tcp 0 0 localhost:81 localhost:42966 TIME_WAIT
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62992 ESTABLISHED
tcp 0 0 localhost:42967 localhost:81 ESTABLISHED
but if I point my browser here:
http: // 10.0.1.253:81 /
(Stack Overflow does not want me to publish the IP address, so I have to malform it. There is no harm here, as it is an internal IP that no one outside my company could reach.)
In Chrome all I get is "Oops! Google Chrome could not connect to 10.0.1.253:81".
I started Phusion Passenger at the command line; its output is verbose, so I expect to see any error messages in the terminal. But I'm not seeing anything. It's as if my browser request is not being heard, even though netstat seems to indicate the app is listening on port 81.
A lot of other things could be wrong with this app (I still need to reverse migrate the database schema) but I'm not seeing any of the error messages that I expect to see. Actually, I'm not seeing any error messages, which is very odd.
UPDATE:
If I do this:
ps aux | grep nginx
I get:
root 20643 0.0 0.0 103244 832 pts/8 S+ 17:17 0:00 grep nginx
root 23968 0.0 0.0 29920 740 ? Ss Feb13 0:00 nginx: master process /var/lib/passenger-standalone/3.0.19-x86_64-ruby1.9.3-linux-gcc4.4.6-1002/nginx-1.2.6/sbin/nginx -c /tmp/passenger-standalone.23917/config -p /tmp/passenger-standalone.23917/
nobody 23969 0.0 0.0 30588 2276 ? S Feb13 0:34 nginx: worker process
I tried to cat the file /tmp/passenger-standalone.23917/config but it does not seem to exist.
I also killed every session of "screen" and every terminal window where Phusion Passenger might be running, but clearly, looking at ps aux, something is still running.
Could Nginx still be running even though Passenger was killed?
This:
ps aux | grep phusion
brings back nothing
and this:
ps aux | grep passenger
Only brings back the line with nginx.
If I do this:
service nginx stop
I get:
nginx: unrecognized service
and:
service nginx start
gives me:
nginx: unrecognized service
This is a CentOS machine, so if I had Nginx installed normally, this would work.
The answer is here - Issue Uploading Files from Rails app hosted on Elastic Beanstalk
You probably have /etc/cron.daily/tmpwatch removing the /tmp/passenger-standalone* files every day, and causing you all this grief.
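If that is what's happening, one option is to exclude those directories in the tmpwatch cron script (a sketch; flag support varies by tmpwatch version, so check man tmpwatch first):
/usr/sbin/tmpwatch -X '/tmp/passenger-standalone*' 10d /tmp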
