GitLab on an OpenVZ server with nginx - Authentication Required popup

I am experiencing an issue where accessing GitLab through my browser produces an authentication popup. It looks like an .htaccess-style Basic Auth prompt, but I have not configured any such authentication in my nginx configuration.
Authentication Required
The server at http://git.servername.com:80 requires a username and password.
The server says: Password Protected.
Username:
Password:
I have tried troubleshooting this issue for a couple of days now, but I am running into some dead ends, since I see no information in my nginx error logs, or my GitLab production.log.
I recently performed an installation of GitLab on Ubuntu 12.04 (following this guide: https://www.digitalocean.com/community/articles/how-to-set-up-gitlab-as-your-very-own-private-github-clone), and ran into some issues with not having enough memory on OpenVZ. I created some fake swap to get over this hurdle, and I have verified that the GitLab server is running. My verification and configuration information is as follows:
GitLab Verification
user@server:/home/git/gitlab$ sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
Checking Environment ...
Git configured for git user? ... yes
Has python2? ... yes
python2 is supported version? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version >= 1.7.0 ? ... OK (1.7.0)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
post-receive hook up-to-date? ... yes
post-receive hooks in repos are links: ... can't check, you have no projects
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Checking Sidekiq ... Finished
Checking GitLab ...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
Projects have satellites? ... can't check, you have no projects
Redis version >= 2.0.0? ... yes
Your git bin path is "/usr/bin/git"
Git version >= 1.7.10 ? ... yes (1.9.1)
Checking GitLab ... Finished
Environment Info
user@server:/home/git/gitlab$ sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production
System information
System: Ubuntu 12.04
Current User: git
Using RVM: no
Ruby Version: 2.0.0p247
Gem Version: 2.0.3
Bundler Version:1.6.0
Rake Version: 10.1.0
GitLab information
Version: 6.0.2
Revision: 10b0b8f
Directory: /home/git/gitlab
DB Adapter: mysql2
URL: http://git.servername.com
HTTP Clone URL: http://git.servername.com/some-project.git
SSH Clone URL: git@git.servername.com:some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 1.7.0
Repositories: /home/git/repositories/
Hooks: /home/git/gitlab-shell/hooks/
Git: /usr/bin/git
Nginx Configuration
In my default.conf file:
server {
    listen 80;
    server_name git.servername.com;

    location / {
        proxy_pass http://git.servername.com:80;
    }
}
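For comparison, the stock GitLab 6.x nginx vhost proxies to GitLab's Unicorn server (a unix socket in a default source install) rather than back to the public hostname and port that nginx itself serves, which is what the configuration above does. A minimal sketch, assuming the socket path used by the standard installation guide:

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen 80;
    server_name git.servername.com;
    root /home/git/gitlab/public;

    location / {
        # hand requests to the GitLab Unicorn/Rails application
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}

With the current proxy_pass, requests go to whatever git.servername.com:80 resolves to from the VPS itself, so the Basic Auth prompt is likely coming from that upstream rather than from GitLab.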
Hosts file on my personal machine (not my VPS)
<IP Address of VPS> git.servername.com

Related

How to configure JBoss EAP 6.2 as a service and set auto-start on CentOS 6.x and the AWS Linux 1 AMI

We were using an older JBoss 4.x with JDK 5 on CentOS 5.x; that version of JBoss is very old now and is no longer supported by Red Hat.
We are now upgrading to JBoss EAP 6.2 with jdk1.7.0_60, on CentOS 6.x for UAT and the AWS Linux 1 AMI for production. I have installed JBoss EAP 6.2 in the /var/lib/jboss-eap-6.2 folder. The necessary code modifications are already done, the application is working fine, and JBoss runs as a process started with the command below.
./standalone.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0
The command below shows that the process is listening on port 8080.
netstat -aptn | grep LISTEN
How can I configure JBoss EAP 6.2 as a service and set it to auto-start when the OS boots?
Follow the steps below to configure JBoss EAP 6.2 as a service with auto-start. The process is the same for CentOS 6.x and the AWS Linux 1 AMI.
Copy files into system directories
a. Copy the modified configuration file to the /etc/jboss-as directory.
mkdir /etc/jboss-as
cp /var/lib/jboss-eap-6.2/bin/init.d/jboss-as.conf /etc/jboss-as/
Uncomment the following line:
JBOSS_USER=root
and add the following line at the end of this file:
export JBOSS_USER
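After these two edits, the relevant lines of /etc/jboss-as/jboss-as.conf would look like this (a minimal sketch showing only the lines touched here):

JBOSS_USER=root
export JBOSS_USER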
b. Copy the start-up script to the /etc/init.d directory.
cp /var/lib/jboss-eap-6.2/bin/init.d/jboss-as-standalone.sh /etc/init.d/jboss-62
Make the following changes in the /etc/init.d/jboss-62 file:
i) Set the Java home:
JAVA_HOME=/usr/java/jdk1.7.0_60
export JAVA_HOME
ii) Set the JBoss home:
JBOSS_HOME=/var/lib/jboss-eap-6.2
export JBOSS_HOME
iii) Change the configuration XML file name (use whatever configuration file name you are using):
JBOSS_CONFIG=standalone-full.xml
iv) Add "-b 0.0.0.0 -bmanagement 0.0.0.0" to the following line, so that JBoss binds to every IP address on this system:
daemon --user $JBOSS_USER LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=$JBOSS_PIDFILE $JBOSS_SCRIPT -b 0.0.0.0 -bmanagement 0.0.0.0 -c $JBOSS_CONFIG 2>&1 > $JBOSS_CONSOLE_LOG &
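Taken together, the edited variables near the top of /etc/init.d/jboss-62 would look roughly like this (paths as used above; adjust them to your own environment):

JAVA_HOME=/usr/java/jdk1.7.0_60
export JAVA_HOME
JBOSS_HOME=/var/lib/jboss-eap-6.2
export JBOSS_HOME
JBOSS_CONFIG=standalone-full.xml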
Add the start-up script as a service.
Add the new jboss-as-standalone.sh service (i.e. jboss-62) to the list of automatically started services, using the chkconfig command.
chkconfig --add jboss-62
Start the service.
service jboss-62 start
Make the service start automatically when you restart your server.
chkconfig jboss-62 on
Restart the service
service jboss-62 restart
Now the JBoss EAP 6.2 configuration as an auto-starting service is complete.
Reboot the OS and check that the service is running. Run the command below to verify that the service is listening on port 8080:
netstat -aptn | grep LISTEN | grep 8080
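You can also confirm the runlevels that chkconfig recorded for the service (standard SysV tooling; the output shown is only indicative):

chkconfig --list jboss-62
# jboss-62   0:off  1:off  2:on  3:on  4:on  5:on  6:off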

OpenVAS: OSPD scanner can't be used as scanner in new task

After working out how to add an OSPD scanner, verify it, etc., I thought I could finally use it, but I get an error in the UI when adding it to a task.
In my case, I run OpenVAS 9 on Debian 9 and I'm trying to include a w3af scanner, but I get the same issue with every OSP scanner I add.
My pip freeze:
ospd==1.2.0
ospd-debsecan==1.2b1
ospd-nmap==1.0b1
ospd-w3af==1.0.0
Note that the example here uses w3af, but the issue is the same for the debsecan and nmap scanners.
My openvas-check-setup output:
Step 1: Checking OpenVAS Scanner ...
OK: OpenVAS Scanner is present in version 5.1.1.
OK: redis-server is present in version v=3.2.6.
OK: scanner (kb_location setting) is configured properly using the redis-server socket: /tmp/redis.sock
OK: redis-server is running and listening on socket: /tmp/redis.sock.
OK: redis-server configuration is OK and redis-server is running.
OK: NVT collection in /usr/local/var/lib/openvas/plugins contains 47727 NVTs.
WARNING: Signature checking of NVTs is not enabled in OpenVAS Scanner.
SUGGEST: Enable signature checking (see http://www.openvas.org/trusted-nvts.html).
OK: The NVT cache in /usr/local/var/cache/openvas contains 47727 files for 47727 NVTs.
Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /usr/local/var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 47727 NVTs.
OK: At least one user exists.
OK: OpenVAS SCAP database found in /usr/local/var/lib/openvas/scap-data/scap.db.
OK: OpenVAS CERT database found in /usr/local/var/lib/openvas/cert-data/cert.db.
OK: xsltproc found.
Step 3: Checking user configuration ...
WARNING: Your password policy is empty.
SUGGEST: Edit the /usr/local/etc/openvas/pwpolicy.conf file to set a password policy.
Step 4: Checking Greenbone Security Assistant (GSA) ...
OK: Greenbone Security Assistant is present in version 7.0.2.
OK: Your OpenVAS certificate infrastructure passed validation.
Step 5: Checking OpenVAS CLI ...
OK: OpenVAS CLI version 1.4.5.
Step 6: Checking Greenbone Security Desktop (GSD) ...
SKIP: Skipping check for Greenbone Security Desktop.
Step 7: Checking if OpenVAS services are up and running ...
OK: netstat found, extended checks of the OpenVAS services enabled.
OK: OpenVAS Scanner is running and listening on a Unix domain socket.
OK: OpenVAS Manager is running and listening on a Unix domain socket.
OK: Greenbone Security Assistant is listening on port 443, which is the default port.
Step 8: Checking nmap installation ...
WARNING: Your version of nmap is not fully supported: 7.40
SUGGEST: You should install nmap 5.51 if you plan to use the nmap NSE NVTs.
Step 10: Checking presence of optional tools ...
OK: pdflatex found.
WARNING: PDF generation failed, most likely due to missing LaTeX packages. The PDF report format will not work.
SUGGEST: Install required LaTeX packages.
OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is likely to work.
OK: rpm found, LSC credential package generation for RPM based targets is likely to work.
OK: alien found, LSC credential package generation for DEB based targets is likely to work.
OK: nsis found, LSC credential package generation for Microsoft Windows targets is likely to work.
To create the scanner in OpenVAS, I use:
openvasmd --create-scanner="w3af" --scanner-host=127.0.0.1 --scanner-port=1235 --scanner-type="OSP" \
--scanner-ca-pub=/usr/local/var/lib/openvas/CA/cacert.pem \
--scanner-key-pub=/usr/local/var/lib/openvas/CA/clientcert.pem \
--scanner-key-priv=/usr/local/var/lib/openvas/private/CA/clientkey.pem
To run the ospd-w3af scanner, I use:
~# ospd-w3af -b 127.0.0.1 -p 1235 -k \
/usr/local/var/lib/openvas/private/CA/clientkey.pem -c \
/usr/local/var/lib/openvas/CA/clientcert.pem --ca-file \
/usr/local/var/lib/openvas/CA/cacert.pem -L DEBUG
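(For reference, if the UUID to pass to --verify-scanner is not at hand, the configured scanners and their UUIDs can be listed with the manager CLI; this assumes the openvasmd options shipped with OpenVAS Manager 7.x:)

openvasmd --get-scanners
# prints one line per configured scanner, starting with its UUID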
When I verify the scanner with openvasmd --verify-scanner xxxxx, I get:
Scanner version: 2018.8.22.
Note: the scanner logs show the following for every verify I run; I don't know whether it's related, and I didn't find a way to fix it:
2018-10-15 14:27:47,413 ospd.ospd: DEBUG: New connection from 127.0.0.1:60078
2018-10-15 14:27:49,430 ospd.ospd: DEBUG: Error: ('The read operation timed out',)
2018-10-15 14:27:49,433 ospd.ospd: DEBUG: 127.0.0.1:60078: Connection closed
With the verification done, I want to create a task that uses this scanner, but I can't save it due to the error "Given scanner_type was invalid":
https://i.stack.imgur.com/fvIJd.png
There are zero connections to the chosen scanner at that moment, and I can't find anything in the logs (maybe I'm just not searching in the right place). I suspect the gsad UI is responsible for this, but I can't pin it down.
I don't know what to do; if someone more expert than me (not very hard) could help, that'd be great :)
Thanks in advance.
I solved this issue by creating a scan configuration for the OSPD scanner (I thought it didn't need one, since it imports them).
I then faced another issue with the ospd-w3af configuration: I couldn't create one because it requires ospd 1.0.0 installed. I had modified the dependencies a few days ago, and it doesn't work with ospd 1.2.0.
Now I'm facing an issue where scans don't start properly; they stop at 1%.
Getting OpenVAS 9 running on a fresh install of Ubuntu 18 was a pain. Once I got past all my errors by creating files and symlinking (ln -s) the redis-server sockets, my tasks crapped out at 1%. My fix was to run sudo apt install libopenvas-dev; after that, scans work and check-setup worked. check-setup reported no scanner, but openvassd was running and openvasmd --verify-scanner (uuid) showed the scanner.

Why is yum re-creating default nginx config file on yum update on RHEL7?

I installed nginx via yum package on RHEL7. I added my config as
/etc/nginx/conf.d/my.conf
and deleted the config file shipped with the package
/etc/nginx/conf.d/default.conf
Recently, the nginx package was updated via yum update. Now the default.conf file is present again. I would have expected yum not to touch default config files if they were changed or deleted.
Is this normal yum behavior? Here is some information about the RHEL version and the nginx package.
root@host: [~]# yum info nginx
Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos
This system is receiving updates from RHN Classic or Red Hat Satellite.
Installed Packages
Name : nginx
Arch : x86_64
Epoch : 1
Version : 1.14.1
Release : 1.el7_4.ngx
Size : 2.6 M
Repo : installed
From repo : nginx
Summary : High performance web server
URL : http://nginx.org/
License : 2-clause BSD-like license
Description : nginx [engine x] is an HTTP and reverse proxy server, as well as
: a mail proxy server.
I upgraded the package from 1.14.0 to version 1.14.1, as shown above.
root@host: [~]# nginx -v
nginx version: nginx/1.14.1
Red Hat version:
root@host: [~]# hostnamectl
Static hostname: host.example.com
Icon name: computer-vm
Chassis: vm
Machine ID: SOME-ID
Boot ID: ANOTHER-ID
Virtualization: vmware
Operating System: Red Hat Enterprise Linux
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server
Kernel: Linux 3.10.0-862.14.4.el7.x86_64
Architecture: x86-64
If I rename my.conf to default.conf, it doesn't get replaced on a yum update.
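This matches how RPM generally treats %config files: a config file you modified is preserved on upgrade (or the packaged version is written as .rpmnew when the file is flagged config(noreplace)), while a config file you deleted is typically reinstalled. You can check which files the nginx package marks as configuration and how they currently differ from the package:

rpm -qc nginx    # list files the package marks as configuration
rpm -V nginx     # verify installed files; missing or modified config files are reported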

Gitlab docker not working if external_url is set

I've been struggling for a while with a problem that I'm still unable to solve. Help would be much appreciated!
What I did:
1) Install GitLab CE using the Docker image (8.9.6-ce.0) on an Ubuntu 16.04.1 LTS virtual machine on my server, following http://docs.gitlab.com/omnibus/docker/README.html
2) Set up a user locally and push some projects from a machine on the same LAN >> all working OK
3) Add a new mapping to my firewall to map gitlab-machine-ip:80 > example.org:8138 so that I can access GitLab over HTTP
I'm now able to access the web interface at http://example.org:8138 and use it.
NOW THE PROBLEM: the URLs for cloning projects show up incorrectly, since they are missing the :8138 port (they get the example.org part from the --host setting passed to the docker container).
Cloning works OK if I manually add my custom port to the URLs.
I wanted to solve this, so I gave the external_url setting in gitlab.rb a try, setting it to:
external_url 'http://example.org:8138'
and restarted (I also tried calling gitlab-ctl reconfigure manually).
STATUS IS THAT I CANNOT ACCESS THE WEB INTERFACE ANYMORE AT http://example.org:8138, getting an ERR_CONNECTION_REFUSED in my browser.
If I just comment out the external_url setting, everything goes back to working (apart from the missing port in the URLs, obviously).
I've read a bunch of issue reports, but none of them helped in solving this:
https://gitlab.com/gitlab-org/omnibus-gitlab/issues/244 >> (I'm NOT using an external NGINX)
I also tried updating to 8.11 after reading about this: https://gitlab.com/gitlab-org/gitlab-ce/issues/20131 but it did not help.
Don't really know what's going on here.
Output of gitlab-rake gitlab:env:info and gitlab-rake gitlab:check follows
System information
System:
Current User: git
Using RVM: no
Ruby Version: 2.3.1p112
Gem Version: 2.6.6
Bundler Version:2.3.0
Rake Version: 10.5.0
Sidekiq Version:4.1.4
GitLab information
Version: 8.11.3
Revision: 6cd4edb
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://example.org:8138
HTTP Clone URL: http://example.org:8138/some-group/some-project.git
SSH Clone URL: git@example.org:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 3.4.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks/
Git: /opt/gitlab/embedded/bin/git
Checking GitLab Shell ...
GitLab Shell version >= 3.4.0 ? ... OK (3.4.0)
Repo base directory exists?
default... yes
Repo storage directories are symlinks?
default... no
Repo paths owned by git:git?
default... yes
Repo paths access is drwxrws---?
default... yes
hooks directories in repos are links: ...
telemed / banca ... ok
telemed / calcolatrice ... ok
telemed / chat ... ok
telemed / collections ... ok
telemed / interfacce ... ok
telemed / partite ... ok
telemed / polimorfismo ... ok
telemed / ristoranti ... ok
Running /opt/gitlab/embedded/service/gitlab-shell/bin/check
Check GitLab API access: OK
Access to /var/opt/gitlab/.ssh/authorized_keys: OK
Send ping to redis server: OK
gitlab-shell self-check successful
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking Reply by email ...
Reply by email is disabled in config/gitlab.yml
Checking Reply by email ... Finished
Checking LDAP ...
LDAP is disabled in config/gitlab.yml
Checking LDAP ... Finished
Checking GitLab ...
Git configured with autocrlf=input? ... yes
Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory setup correctly? ... yes
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
projects have namespace: ...
telemed / banca ... yes
telemed / calcolatrice ... yes
telemed / chat ... yes
telemed / collections ... yes
telemed / interfacce ... yes
telemed / partite ... yes
telemed / polimorfismo ... yes
telemed / ristoranti ... yes
Redis version >= 2.8.0? ... yes
Ruby version >= 2.1.0 ? ... yes (2.3.1)
Your git bin path is "/opt/gitlab/embedded/bin/git"
Git version >= 2.7.3 ? ... yes (2.7.4)
Active users: 4
Checking GitLab ... Finished
OK, I was able to figure out the problem on my own.
Apparently, when you change the external_url parameter in gitlab.rb, there is a side effect (not very clearly explained in the documentation, if you ask me!): the bundled nginx will now listen on the port you put in http://example.org:8138.
Since my firewall instead maps the external URL to port 80 on the GitLab machine, the website was no longer reachable. I would suggest the documentation clearly state that changing external_url (when a port number is included) makes nginx and the website serve HTTP on a port other than the standard 80!
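If the goal is to have clone URLs show the public :8138 port while the container's nginx keeps listening on port 80 (which the firewall maps), omnibus-gitlab lets you decouple the two. A sketch for gitlab.rb, assuming the nginx['listen_port'] setting available in these omnibus versions:

external_url 'http://example.org:8138'
nginx['listen_port'] = 80

Then run gitlab-ctl reconfigure inside the container for the change to take effect.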
Hope this helps some other people having a problem similar to mine :)

How to add nginx modules to puppet manifest?

I need to install Nginx with some modules on my virtual machine (Debian 7 x64). I use Vagrant and a recipe from PuPHPet. PuPHPet uses Hiera to configure how Vagrant and Puppet perform the installation. By default, puphpet/config.yaml has this nginx section:
nginx:
    install: '1'
    settings:
        default_vhost: 1
        proxy_buffer_size: 128k
        proxy_buffers: '4 256k'
    upstreams: { }
    vhosts:
        rpfrz3ldtf65m:
            proxy: ''
            server_name: awesome.dev
            server_aliases:
                - www.awesome.dev
            www_root: /var/www/awesome
            listen_port: '80'
            location: \.php$
            index_files:
                - index.html
                - index.htm
                - index.php
            envvars:
                - 'APP_ENV dev'
            engine: php
            client_max_body_size: 1m
            ssl_cert: ''
            ssl_key: ''
I need the Nginx image_filter module, so where should I put the corresponding information in this config? I could edit the Puppet manifest that PuPHPet uses to configure Nginx, but it's huge and too hard to understand.
Author of puphpet here.
From my understanding, Nginx needs to be compiled with your chosen modules; they cannot be enabled/disabled like Apache modules.
If the module you want is not installed in the Nginx package installed via puphpet, then that means it wasn't compiled in. You'll need to find another source that has that module compiled in, or compile Nginx yourself.
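If you do compile it yourself, a minimal sketch for Debian follows (the nginx version is a placeholder; on Debian 7 the GD development package may be named libgd2-xpm-dev rather than libgd-dev, and the image filter module requires it):

# build prerequisites
sudo apt-get install build-essential libpcre3-dev zlib1g-dev libgd-dev

# fetch the source, enable the image filter module, build and install
wget http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure --with-http_image_filter_module
make
sudo make install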
This is the Nginx Puppet module used in puphpet: https://github.com/jfryman/puppet-nginx/tree/v0.0.10/manifests/package
