GitLab Docker not working if external_url is set - nginx

I've been struggling for a while with a problem that I'm still unable to solve. Help would be much appreciated!
What I did:
1) Installed GitLab CE using the Docker image (8.9.6-ce.0) on an Ubuntu 16.04.1 LTS virtual machine on my server, following http://docs.gitlab.com/omnibus/docker/README.html
2) Set up a user locally and pushed some projects from a machine on the same LAN >> all working OK
3) Added a new mapping to my firewall to map gitlab-machine-ip:80 > example.org:8138 so that I can access GitLab over HTTP
I'm now able to access the web interface at http://example.org:8138 and use it.
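For reference, the container was started along the lines of the command from the linked docs, something like the sketch below (the volume paths and exact image tag are assumptions on my part; the --hostname value is what later ends up in the clone URLs):

```sh
# sketch of the docker run command from the omnibus Docker docs (paths/tag assumed)
sudo docker run --detach \
  --hostname example.org \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:8.9.6-ce.0
```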
NOW THE PROBLEM: the URLs for cloning projects show up incorrectly, since they are missing the :8138 port (they get the example.org part from the --hostname flag passed to the Docker container).
Cloning works OK if I manually add my custom port to the URLs.
I wanted to solve this, so I gave the external_url setting in gitlab.rb a try, setting it to:
external_url 'http://example.org:8138'
and restarted (I also tried running gitlab-ctl reconfigure manually).
THE STATUS IS THAT I CANNOT ACCESS THE WEB INTERFACE ANYMORE at http://example.org:8138, getting ERR_CONNECTION_REFUSED in my browser.
If I just comment out the external_url setting, everything is back to working (apart from the missing port in the URLs, obviously).
I've read a bunch of issue reports, but none of them helped in solving the problem:
https://gitlab.com/gitlab-org/omnibus-gitlab/issues/244 >> (I'm NOT using an external nginx)
I also tried updating to 8.11 after reading about https://gitlab.com/gitlab-org/gitlab-ce/issues/20131, but it did not help.
I don't really know what's going on here.
Output of gitlab-rake gitlab:env:info and gitlab-rake gitlab:check follows
System information
System:
Current User: git
Using RVM: no
Ruby Version: 2.3.1p112
Gem Version: 2.6.6
Bundler Version:2.3.0
Rake Version: 10.5.0
Sidekiq Version:4.1.4
GitLab information
Version: 8.11.3
Revision: 6cd4edb
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://example.org:8138
HTTP Clone URL: http://example.org:8138/some-group/some-project.git
SSH Clone URL: git@example.org:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 3.4.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks/
Git: /opt/gitlab/embedded/bin/git
Checking GitLab Shell ...
GitLab Shell version >= 3.4.0 ? ... OK (3.4.0)
Repo base directory exists?
default... yes
Repo storage directories are symlinks?
default... no
Repo paths owned by git:git?
default... yes
Repo paths access is drwxrws---?
default... yes
hooks directories in repos are links: ...
telemed / banca ... ok
telemed / calcolatrice ... ok
telemed / chat ... ok
telemed / collections ... ok
telemed / interfacce ... ok
telemed / partite ... ok
telemed / polimorfismo ... ok
telemed / ristoranti ... ok
Running /opt/gitlab/embedded/service/gitlab-shell/bin/check
Check GitLab API access: OK
Access to /var/opt/gitlab/.ssh/authorized_keys: OK
Send ping to redis server: OK
gitlab-shell self-check successful
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking Reply by email ...
Reply by email is disabled in config/gitlab.yml
Checking Reply by email ... Finished
Checking LDAP ...
LDAP is disabled in config/gitlab.yml
Checking LDAP ... Finished
Checking GitLab ...
Git configured with autocrlf=input? ... yes
Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory setup correctly? ... yes
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
projects have namespace: ...
telemed / banca ... yes
telemed / calcolatrice ... yes
telemed / chat ... yes
telemed / collections ... yes
telemed / interfacce ... yes
telemed / partite ... yes
telemed / polimorfismo ... yes
telemed / ristoranti ... yes
Redis version >= 2.8.0? ... yes
Ruby version >= 2.1.0 ? ... yes (2.3.1)
Your git bin path is "/opt/gitlab/embedded/bin/git"
Git version >= 2.7.3 ? ... yes (2.7.4)
Active users: 4
Checking GitLab ... Finished

OK, I was able to figure out the problem on my own.
Apparently, when you change the external_url parameter in gitlab.rb, there is a side effect (not very clearly explained in the documentation, if you ask me!): the bundled nginx will now listen on the port you put in the URL, 8138 in my case, instead of 80.
Since I had instead mapped port 80 to my external URL through my firewall, the GitLab website was no longer reachable. I would suggest clearly stating in the documentation that changing external_url (if a port number is included) causes nginx, and therefore the website, to serve HTTP on a port other than the standard 80!
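If you want to keep the port-forwarding setup (firewall forwards example.org:8138 to port 80 on the GitLab machine) while still getting the port into the clone URLs, the omnibus package lets you pin the bundled nginx back to port 80 with nginx['listen_port']. A minimal sketch of /etc/gitlab/gitlab.rb under that assumption:

```ruby
# /etc/gitlab/gitlab.rb -- sketch for the port-forwarding case described above
external_url 'http://example.org:8138'   # URL (with port) used in the web UI and clone URLs
nginx['listen_port'] = 80                # keep the bundled nginx listening on 80 inside the container
```

followed by sudo gitlab-ctl reconfigure.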
Hope this helps some other people having a problem similar to mine :)

Related

Why can't an Azure IoT Edge device run an edge module image pulled from Azure Container Registry? (IOTEDGE_WORKLOADURI is required)

Setup: Azure IoT Edge running on a Raspberry Pi 4 (Linux arm32v7).
IoT Edge version: iotedge 1.4.3
I signed in to the Azure Container Registry from the edge device, built and pushed a custom module to the registry, pulled that module image back from the registry, and tried to run the module using the docker run <image> command.
But it shows an error:
Unhandled exception. System.InvalidOperationException: Environment variable IOTEDGE_WORKLOADURI is required.
at Microsoft.Azure.Devices.Client.Edge.EdgeModuleClientFactory.CreateInternalClientFromEnvironmentAsync()
at SampleModuletest.ModuleBackgroundService.ExecuteAsync(CancellationToken cancellationToken) in /app/ModuleBackgroundService.cs:line 23
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at Program.<Main>$(String[] args) in /app/Program.cs:line 7
I found a post, but I'm guessing it's not the same problem; the output is different: link
I also have some doubts, and it would be very helpful if anyone could clear them up:
1. What are the possible methods to deploy Azure IoT Edge modules?
2. Is it possible to deploy a module on an edge device using a module image pulled from the container registry?
Thanks in advance. Any suggestions would be appreciated.
Currently running edge modules:
NAME STATUS DESCRIPTION Config
edgeAgent running Up an hour mcr.microsoft.com/azureiotedge-agent:1.4
edgeHub running Up an hour mcr.microsoft.com/azureiotedge-hub:1.4
Docker images: (screenshot)
iotedge check:
```
Configuration checks (aziot-identity-service)
---------------------------------------------
√ keyd configuration is well-formed - OK
√ certd configuration is well-formed - OK
√ tpmd configuration is well-formed - OK
√ identityd configuration is well-formed - OK
√ daemon configurations up-to-date with config.toml - OK
√ identityd config toml file specifies a valid hostname - OK
√ aziot-identity-service package is up-to-date - OK
√ host time is close to reference time - OK
√ preloaded certificates are valid - OK
√ keyd is running - OK
√ certd is running - OK
√ identityd is running - OK
√ read all preloaded certificates from the Certificates Service - OK
√ read all preloaded key pairs from the Keys Service - OK
√ check all EST server URLs utilize HTTPS - OK
√ ensure all preloaded certificates match preloaded private keys with the same ID - OK
Connectivity checks (aziot-identity-service)
--------------------------------------------
√ host can connect to and perform TLS handshake with iothub AMQP port - OK
√ host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with iothub MQTT port - OK
Configuration checks
--------------------
√ aziot-edged configuration is well-formed - OK
√ configuration up-to-date with config.toml - OK
√ container engine is installed and functional - OK
√ configuration has correct URIs for daemon mgmt endpoint - OK
√ aziot-edge package is up-to-date - OK
√ container time is close to host time - OK
√ DNS server - OK
‼ production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
‼ production readiness: Edge Hub's storage directory is persisted on the host filesystem - Warning
The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ Agent image is valid and can be pulled from upstream - OK
√ proxy settings are consistent in aziot-edged, aziot-identityd, moby daemon and config.toml - OK
Connectivity checks
-------------------
√ container on the default network can connect to upstream AMQP port - OK
√ container on the default network can connect to upstream HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to upstream AMQP port - OK
√ container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
32 check(s) succeeded.
3 check(s) raised warnings. Re-run with --verbose for more details.
2 check(s) were skipped due to errors from other checks. Re-run with --verbose for more details.
```
When you execute the docker run <image> command, it will attempt to spin up your module with no additional configuration. However, you're using the Azure IoT Edge SDK, which requires additional settings. One of those is the IOTEDGE_WORKLOADURI environment variable.
To answer your questions directly:
What are the possible methods to deploy Azure IoT Edge modules?
There's one way of doing this on an Azure IoT Edge device: creating a deployment manifest in your IoT Hub. That deployment manifest tells the Azure IoT Edge runtime on your device which containers to pull and how to set them up. You can learn how to do that here.
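For example, once you have a deployment manifest that references your image in the container registry, you can push it to a single device from the Azure CLI (a sketch, assuming the azure-iot extension is installed and a deployment.json manifest already exists; the hub and device names are placeholders):

```sh
# apply a deployment manifest to one edge device; the IoT Edge runtime on the
# device then pulls the module images and starts them with the right environment
az iot edge set-modules \
  --hub-name <your-iot-hub> \
  --device-id <your-edge-device-id> \
  --content ./deployment.json
```

The runtime injects IOTEDGE_WORKLOADURI and the other required variables into modules it starts this way, which is why a plain docker run of the same image fails.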
Is it possible to deploy a module from an edge device using a pulled module image from the container registry?
I'm going to assume you mean on an edge device, not from one. You can execute a docker pull command to get the container, but deploying it really needs to happen through the aforementioned deployment manifest.

How to install JupyterLab and Jupyter Notebook in RStudio Workbench?

Currently in my RStudio Workbench I can start an RStudio session. However, I would like to be able to use other editors like JupyterLab and Jupyter Notebook.
So first I ran the rstudio-server license-manager status command in the VM, and I get this:
RStudio License Manager 2021.09.0+351.pro6
-- Local license status --
Status: Activated
Product-Key: TAGE-TFK8-EIFJ-CLNS-AOSU-OI96-NBFY
Has-Key: Yes
Has-Trial: Yes
Enable-Launcher: 1
Users: 125
Sessions: 0
Expiration: 2022-12-31 00:00:00
Days-Left: 229
License-Engine: 4.4.3.0
License-Scope: System
-- Floating license status --
License server not in use.
So, as far as I can tell, the Launcher is enabled here.
Then I created a file at /etc/rstudio/jupyter.conf and filled it as follows:
# /etc/rstudio/jupyter.conf
jupyter-exe=/usr/bin/jupyter
labs-enabled=1
notebooks-enabled=1
session-cull-minutes=240
default-session-cluster=Kubernetes
default-session-container-image=rstudio:jupyter-session
Then, concerning the Jupyter version, I do not know whether in my case I can put:
version-notebook=auto
or whether I need to specify the versions explicitly, for example:
# /etc/rstudio/jupyter.conf
lab-version=3.0.6
notebook-version=6.2.0
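For what it's worth, the binary that jupyter-exe points to just needs JupyterLab and Notebook installed in the Python environment that provides it; since my sessions target the Kubernetes cluster with the rstudio:jupyter-session image, I assume the same packages also have to exist inside that image. A sketch of the install/check (the Python environment behind /usr/bin/jupyter is an assumption):

```sh
# install JupyterLab and Jupyter Notebook into the Python env providing /usr/bin/jupyter
sudo python3 -m pip install jupyterlab notebook
jupyter lab --version        # confirm JupyterLab is available
jupyter notebook --version   # confirm Notebook is available
```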
Concerning the Launcher configuration, my launcher-mounts file located in /etc/rstudio/launcher-mounts is filled as follows:
# Required home directory mount for RSP, Launcher, and Kubernetes
MountType: NFS
Host: 172.16.128.2
Path: /user_workspaces/{USER}
MountPath: /home/{USER}
ReadOnly: false
Cluster: Kubernetes
Can you tell me what is missing to be able to start a Jupyter session, please?
Thank you in advance.

OpenVAS: OSPD scanner can't be used as scanner in new task

After figuring out how to add an OSPD scanner, verify it, etc.,
I thought I could finally use it, but I get an error in the UI when adding it to a task.
In my case, I run OpenVAS 9 on Debian 9 and I'm trying to include a w3af scanner, but I get the same issue with every OSP scanner I add.
My pip freeze:
ospd==1.2.0
ospd-debsecan==1.2b1
ospd-nmap==1.0b1
ospd-w3af==1.0.0
Note that this example uses w3af, but the issue is the same for the debsecan and nmap scanners.
My openvas-check-setup:
Step 1: Checking OpenVAS Scanner ...
OK: OpenVAS Scanner is present in version 5.1.1.
OK: redis-server is present in version v=3.2.6.
OK: scanner (kb_location setting) is configured properly using the redis-server socket: /tmp/redis.sock
OK: redis-server is running and listening on socket: /tmp/redis.sock.
OK: redis-server configuration is OK and redis-server is running.
OK: NVT collection in /usr/local/var/lib/openvas/plugins contains 47727 NVTs.
WARNING: Signature checking of NVTs is not enabled in OpenVAS Scanner.
SUGGEST: Enable signature checking (see http://www.openvas.org/trusted-nvts.html).
OK: The NVT cache in /usr/local/var/cache/openvas contains 47727 files for 47727 NVTs.
Step 2: Checking OpenVAS Manager ...
OK: OpenVAS Manager is present in version 7.0.2.
OK: OpenVAS Manager database found in /usr/local/var/lib/openvas/mgr/tasks.db.
OK: Access rights for the OpenVAS Manager database are correct.
OK: sqlite3 found, extended checks of the OpenVAS Manager installation enabled.
OK: OpenVAS Manager database is at revision 184.
OK: OpenVAS Manager expects database at revision 184.
OK: Database schema is up to date.
OK: OpenVAS Manager database contains information about 47727 NVTs.
OK: At least one user exists.
OK: OpenVAS SCAP database found in /usr/local/var/lib/openvas/scap-data/scap.db.
OK: OpenVAS CERT database found in /usr/local/var/lib/openvas/cert-data/cert.db.
OK: xsltproc found.
Step 3: Checking user configuration ...
WARNING: Your password policy is empty.
SUGGEST: Edit the /usr/local/etc/openvas/pwpolicy.conf file to set a password policy.
Step 4: Checking Greenbone Security Assistant (GSA) ...
OK: Greenbone Security Assistant is present in version 7.0.2.
OK: Your OpenVAS certificate infrastructure passed validation.
Step 5: Checking OpenVAS CLI ...
OK: OpenVAS CLI version 1.4.5.
Step 6: Checking Greenbone Security Desktop (GSD) ...
SKIP: Skipping check for Greenbone Security Desktop.
Step 7: Checking if OpenVAS services are up and running ...
OK: netstat found, extended checks of the OpenVAS services enabled.
OK: OpenVAS Scanner is running and listening on a Unix domain socket.
OK: OpenVAS Manager is running and listening on a Unix domain socket.
OK: Greenbone Security Assistant is listening on port 443, which is the default port.
Step 8: Checking nmap installation ...
WARNING: Your version of nmap is not fully supported: 7.40
SUGGEST: You should install nmap 5.51 if you plan to use the nmap NSE NVTs.
Step 10: Checking presence of optional tools ...
OK: pdflatex found.
WARNING: PDF generation failed, most likely due to missing LaTeX packages. The PDF report format will not work.
SUGGEST: Install required LaTeX packages.
OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is likely to work.
OK: rpm found, LSC credential package generation for RPM based targets is likely to work.
OK: alien found, LSC credential package generation for DEB based targets is likely to work.
OK: nsis found, LSC credential package generation for Microsoft Windows targets is likely to work.
To create the scanner in openvas, I use:
openvasmd --create-scanner="w3af" --scanner-host=127.0.0.1 --scanner-port=1235 --scanner-type="OSP" \
--scanner-ca-pub=/usr/local/var/lib/openvas/CA/cacert.pem \
--scanner-key-pub=/usr/local/var/lib/openvas/CA/clientcert.pem \
--scanner-key-priv=/usr/local/var/lib/openvas/private/CA/clientkey.pem
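(The UUID used later with --verify-scanner comes from listing the registered scanners; a quick sketch, with flag names as I recall them for OpenVAS Manager 7:)

```sh
# list registered scanners with their UUIDs, then verify the OSP scanner by UUID
openvasmd --get-scanners
openvasmd --verify-scanner <w3af-scanner-uuid>
```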
To run ospd-w3af scanner, I use:
~# ospd-w3af -b 127.0.0.1 -p 1235 -k \
/usr/local/var/lib/openvas/private/CA/clientkey.pem -c \
/usr/local/var/lib/openvas/CA/clientcert.pem --ca-file \
/usr/local/var/lib/openvas/CA/cacert.pem -L DEBUG
When I verify the scanner with openvasmd --verify-scanner xxxxx, I get:
Scanner version: 2018.8.22.
Note: in the scanner logs I get the following for every verify; I don't know whether it's related, and I haven't found a way to fix it:
2018-10-15 14:27:47,413 ospd.ospd: DEBUG: New connection from 127.0.0.1:60078
2018-10-15 14:27:49,430 ospd.ospd: DEBUG: Error: ('The read operation timed out',)
2018-10-15 14:27:49,433 ospd.ospd: DEBUG: 127.0.0.1:60078: Connection closed
So, with the verification done, I want to create a task that uses this scanner, but I can't save it due to the error "Given scanner_type was invalid":
https://i.stack.imgur.com/fvIJd.png
At that moment there are zero connections to the chosen scanner, and I can't find anything in the logs (maybe I'm not searching the right places). I suspect the gsad UI is responsible for this, but I can't pin it down.
I don't know what to do, and if someone more expert than me (not very hard) could help, that'd be great :)
Thanks in advance.
I solved this issue by creating a scan configuration for the OSPD scanner (I thought it didn't need one, since it imports them).
I then faced another issue concerning the ospd-w3af configuration: I couldn't create one because it needs ospd 1.0.0 installed; I modified the dependencies a few days ago, and it doesn't work with ospd 1.2.0.
Now I'm facing an issue where scans don't start properly: they stop at 1%.
Getting OpenVAS 9 running on a fresh install of Ubuntu 18 was a pain. Once I got past all my errors by creating files and ln -s symlinks for the redis-server socket connections, my tasks crapped out at 1%. My fix was sudo apt install libopenvas-dev; after that, scans worked and check-setup worked. check-setup reported no scanner, but openvassd was running and openvasmd --verify-scanner (uuid) showed the scanner.
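A sketch of what those two workarounds look like in practice (the redis socket paths are assumptions; check redis.conf and the kb_location setting, which in the check output above expects /tmp/redis.sock):

```sh
# point the socket path openvassd expects at the one redis actually creates
sudo ln -s /var/run/redis/redis-server.sock /tmp/redis.sock

# the package install that got scans past 1% on Ubuntu 18
sudo apt install libopenvas-dev
```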

GitLab on an OpenVZ server with nginx - Authentication Required popup

I am experiencing an issue where trying to access GitLab through my browser presents a popup asking for authentication. It looks like an .htaccess authentication prompt, but I have not configured htaccess-style authentication in my nginx configuration.
Authentication Required
The server at http://git.servername.com:80 requires a username and password.
The server says: Password Protected.
Username:
Password:
I have tried troubleshooting this issue for a couple of days now, but I am running into some dead ends, since I see no information in my nginx error logs, or my GitLab production.log.
I recently performed an installation of GitLab on Ubuntu 12.04 (following this guide: https://www.digitalocean.com/community/articles/how-to-set-up-gitlab-as-your-very-own-private-github-clone), and ran into some issues with not having enough memory on OpenVZ. I created some fake swap to get over this hurdle, and I have verified that the GitLab server is running. My verification and configuration information is as follows:
GitLab Verification
user@server:/home/git/gitlab$ sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
Checking Environment ...
Git configured for git user? ... yes
Has python2? ... yes
python2 is supported version? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version >= 1.7.0 ? ... OK (1.7.0)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
post-receive hook up-to-date? ... yes
post-receive hooks in repos are links: ... can't check, you have no projects
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Checking Sidekiq ... Finished
Checking GitLab ...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
Projects have satellites? ... can't check, you have no projects
Redis version >= 2.0.0? ... yes
Your git bin path is "/usr/bin/git"
Git version >= 1.7.10 ? ... yes (1.9.1)
Checking GitLab ... Finished
Environment Info
user@server:/home/git/gitlab$ sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production
System information
System: Ubuntu 12.04
Current User: git
Using RVM: no
Ruby Version: 2.0.0p247
Gem Version: 2.0.3
Bundler Version:1.6.0
Rake Version: 10.1.0
GitLab information
Version: 6.0.2
Revision: 10b0b8f
Directory: /home/git/gitlab
DB Adapter: mysql2
URL: http://git.servername.com
HTTP Clone URL: http://git.servername.com/some-project.git
SSH Clone URL: git@git.servername.com:some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 1.7.0
Repositories: /home/git/repositories/
Hooks: /home/git/gitlab-shell/hooks/
Git: /usr/bin/git
Nginx Configuration
In my default.conf file:
server {
    listen 80;
    server_name git.servername.com;

    location / {
        proxy_pass http://git.servername.com:80;
    }
}
Hosts file on my personal machine (not my VPS)
<IP Address of VPS> git.servername.com
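For comparison, the sample vhost that ships with a GitLab 6.x source install (lib/support/nginx/gitlab, also used by the guide above) proxies to the Unicorn socket of the Rails app rather than to http://git.servername.com:80; roughly like the sketch below (paths assume the default /home/git/gitlab install from that guide):

```nginx
upstream gitlab {
  server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
  listen 80;
  server_name git.servername.com;
  root /home/git/gitlab/public;

  location / {
    # serve static files directly, hand everything else to the Rails app
    try_files $uri $uri/index.html $uri.html @gitlab;
  }

  location @gitlab {
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://gitlab;
  }
}
```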

Redmine under sub-directory on nginx

My vhost configuration: http://pastebin.com/ZyXUmQtx (only one domain on this installation)
I've been racking my brain and Google for a solution for the last two days and can't quite seem to come up with one that works.
My setup (from the above configuration):
IP.Board 3.4 installation in %root_domain%/forums/
IP.Content 2.3 installation in %root_domain%/forums/ (with external access via index.php at the top level)
Redmine 2.2.2 installed at /usr/share/redmine (this is working, because Thin is running and there are no errors in either log file)
Stale phpMyAdmin configuration at /usr/share/phpmyadmin/ that also kind of doesn't load HTML/CSS properly
Symlink from /srv/www/tiberian-genesis.net/public_html/redmine to /usr/share/redmine/public
I'm trying to get Redmine set up to run under %root_domain%/redmine/, but I keep getting a 404 page from my IP.Content installation.
Accessing it takes me to the URL /redmine/login?back_url=http://redmine_thin_servers/redmine/ (which, now that I notice it, suggests it doesn't like my upstream...)
In case someone requests the Thin configuration file:
---
pid: /var/run/thin/redmine.pid
group: tgmod
prefix: /redmine
timeout: 30
log: /var/log/thin/redmine.log
max_conns: 1024
require: []
max_persistent_conns: 512
environment: production
user: tgmod
servers: 1
daemonize: true
chdir: /usr/share/redmine
socket: /var/run/thin/redmine.sock
I'm out of ideas here.
Thanks in advance!
I just ended up setting it up on a sub-domain. I wanted to try proxying it under a sub-directory, but my main website kept interfering with the rules.
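For anyone who still wants the sub-directory route: when the upstream name (redmine_thin_servers) leaks into back_url like that, it is usually the Host header being forwarded as the upstream name. Something along these lines in the vhost is the usual shape of the fix (a sketch only; the upstream name is taken from the URL above, the socket path from the Thin config, and the server_name from the paths in the question):

```nginx
upstream redmine_thin_servers {
  # Thin usually numbers its sockets when daemonized (redmine.0.sock for servers: 1)
  server unix:/var/run/thin/redmine.0.sock;
}

server {
  listen 80;
  server_name tiberian-genesis.net;

  location /redmine {
    proxy_set_header Host $http_host;   # forward the real host, not the upstream name
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://redmine_thin_servers;   # Thin's prefix: /redmine keeps the sub-path
  }
}
```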
