I have the following:
a virtual docker repo docker-virtual
a remote docker repo dockerhub
a local docker repo docker-local
docker-local is the default deployment repo. Can I use a multi-domain certificate to configure the virtual repo in my reverse proxy?
Does the certificate need to support the local repo?
"Does the certificate need to support the local repo?"
Not really. As long as you are using the Default Deployment Repository feature of your virtual Docker repository in Artifactory, you only have to use one registry endpoint with the client for pushing and pulling images.
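For example, here is a rough sketch of the client-side workflow in that case (the hostname docker-virtual.art-prod.com and the image names are assumptions for illustration only):
# log in once against the single virtual registry endpoint
docker login docker-virtual.art-prod.com
# pulls are resolved through the virtual repo (remote + local members)
docker pull docker-virtual.art-prod.com/myteam/myimage:1.0
# tag a locally built image and push it; the push lands in docker-local
# because it is the Default Deployment Repository of the virtual repo
docker tag myimage:2.0 docker-virtual.art-prod.com/myteam/myimage:2.0
docker push docker-virtual.art-prod.com/myteam/myimage:2.0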
Wildcard certificates are good if you are going to work with more than just one registry endpoint. For example, consider this Nginx configuration snippet and the "server_name" directive specifically:
server {
listen 443 ssl;
listen 80;
server_name ~(?<repo>.+)\.art-prod\.com art-prod;
...
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
...
}
The regular expression here captures the sub-domain portion of the URL and makes it available later when rewriting the URL from "/v2/" to the full URI of the Artifactory API, which includes the actual repository name. In this case your configuration will be handling more than one hostname, so it is best to use a wildcard certificate for *.art-prod.com.
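As a hedged illustration of why the wildcard helps (repository keys and image name are assumptions), each sub-domain addresses a different Artifactory repository through the same server block:
# docker-local.art-prod.com/v2/... is rewritten to /artifactory/api/docker/docker-local/v2/...
docker pull docker-local.art-prod.com/myimage:1.0
# docker-virtual.art-prod.com/v2/... is rewritten to /artifactory/api/docker/docker-virtual/v2/...
docker pull docker-virtual.art-prod.com/myimage:1.0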
I'm using the Gitea version control system in a Docker environment. The Gitea image used is a rootless image.
The HTTP port mapping is "8084:3000" and the SSH port mapping is "2224:2222".
I generated the keys on my Linux host and added the generated public key to my Gitea account.
1. Test environment
Later I created the SSH config file with nano /home/campos/.ssh/config:
Host localhost
HostName localhost
User git
Port 2224
IdentityFile ~/.ssh/id_rsa
After finishing the settings, I created the myRepo repository and cloned it.
To perform the clone, I changed the URL from ssh://git@localhost:2224/campos/myRepo.git to git@localhost:/campos/myRepo.git.
To clone the repository I typed: git clone git@localhost:/campos/myRepo.git
This worked perfectly!
2. Production environment
However, after defining a reverse proxy and a domain name, it was no longer possible to clone the repository.
Before performing the clone, I changed the SSH configuration file:
Host gitea.domain.com
HostName gitea.domain.com
User git
Port 2224
IdentityFile ~/.ssh/id_rsa
Then I tried to clone the repository again:
git clone git@gitea.domain.com:/campos/myRepo.git
A connection refused message was shown:
Cloning into 'myRepo'...
ssh: connect to host gitea.domain.com port 2224: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I understand this message appears because, by default, the proxy doesn't handle SSH requests.
Searching a bit, some links say to use "stream" in Nginx.
But I still don't understand how to do this configuration. I need to keep accessing the proxy server itself on port 22 and redirect port 2224 of the proxy to port 2224 of the Docker host.
The gitea.conf configuration file I use is as follows:
server {
listen 443 ssl http2;
server_name gitea.domain.com;
# SSL
ssl_certificate /etc/nginx/ssl/mycert_bundle.crt;
ssl_certificate_key /etc/nginx/ssl/mycert.key;
# logging
access_log /var/log/nginx/gitea.access.log;
error_log /var/log/nginx/gitea.error.log warn;
# reverse proxy
location / {
proxy_pass http://192.168.10.2:8084;
include myconfig/proxy.conf;
}
}
# HTTP redirect
server {
listen 80;
server_name gitea.domain.com;
return 301 https://gitea.domain.com$request_uri;
}
3. Redirection in Nginx
I spent several hours trying to understand how to configure Nginx's "stream" feature. Below is what I did.
At the end of the nginx.conf file I added:
stream {
include /etc/nginx/conf.d/stream;
}
In the stream file in conf.d, I added the content below:
upstream ssh-gitea {
server 10.0.200.39:2224;
}
server {
listen 2224;
proxy_pass ssh-gitea;
}
I tested the Nginx configuration and restarted the service:
nginx -t && systemctl restart nginx.service
I checked whether ports 80, 443, 22 and 2224 were open on the proxy server:
ss -tulpn
This configuration made it possible to perform an SSH clone of a repository using the domain name.
4. Clone with ssh correctly
After all the settings I made, I understood that it is possible to use the original URL ssh://git@gitea.domain.com:2224/campos/myRepo.git in the clone.
When typing the command git clone ssh://git@gitea.domain.com:2224/campos/myRepo.git, it is not necessary to define the config file in SSH.
This link helped me:
https://discourse.gitea.io/t/password-is-required-to-clone-repository-using-ssh/5006/2
In the previous messages I explained my solution, so I'm marking this question as solved.
I know almost nothing about Nginx; please help me see whether this can be achieved.
A public network IP with only ports 80 and 8080 open, such as 182.148.???.135
A domain name with an SSL certificate, such as mini.????.com
This domain name resolves to this IP.
Using the above conditions, how can I enable HTTPS, so that visits to https://mini.????.com reach the target server 182.148.???.135?
Thank you very much for your help!
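Not an authoritative answer, but one possible sketch given those constraints: since 443 is not among your open ports, you could terminate TLS on 8080 and reach the site as https://mini.????.com:8080. The certificate paths and the internal application port below are assumptions:
server {
    listen 8080 ssl;
    server_name mini.????.com;
    # assumed certificate paths; use the files from your SSL certificate
    ssl_certificate     /etc/nginx/ssl/mini_bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/mini.key;
    location / {
        # assumed internal port of the application you want to expose
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $http_host;
    }
}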
Just came across an issue. It doesn't matter if it's a local setup or one with a domain name.
When you create a symbolic link from sites-available to sites-enabled you have to use the full path to each location.
e.g. you can't do:
cd /etc/nginx/sites-available/
ln -s monitor ../sites-enabled/
It has to be:
ln -s /etc/nginx/sites-available/monitor /etc/nginx/sites-enabled/
Inside /etc/nginx/sites-available you should have just edited the default file to change the root web folder you specified and left the server name part alone. Restart Nginx and it should work fine. You don't need to specify the IP of your droplet; that's the whole purpose of the default file.
You only need to copy the default file and change server names when you want to set up virtual hosts.
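For instance, a rough sketch of such a copied virtual host (the hostname and web root here are hypothetical, not taken from your setup):
server {
    listen 80;
    # hypothetical values: change to your own host name and web root
    server_name monitor.example.com;
    root /var/www/monitor;
    index index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
}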
I'm trying to set up a Docker registry with Artifactory 5.2.1. It's a virtual repo that includes a docker-remote and a docker-local repo (previously defined in Artifactory). I'd like to use the Port method of mapping, and I'm running HAProxy 1.5 as a reverse proxy.
HAProxy has an SSL cert with a long list of SANs.
artifactrepo.company.com points to the main Artifactory instance and works fine.
docker.company.com points to the same server, but HAProxy routes it to a Nexus-served registry.
www.docker.company.com is what we intend to route to this Artifactory registry.
Per the HAProxy docs, I've set the reqirep ^([^\ :]*)\ /v2(.*$) \1\ /artifactory/api/docker/docker/v2\2 to get me to the intended port and path.
I have the "Registry Port" set to the default 6555 yet there is no process listening on that port. Artifactory and HAProxy have been restarted.
netstat -tulpn | grep 6555
gives no results.
Shouldn't Artifactory be listening on the Registry Port?
I figured this out. Turns out it was imagination poisoning from running Nexus repos.
Unlike Nexus, Artifactory doesn't actually listen on any port but the default (8081). The mapped port configured for a Docker repo is simply used to seed the generated reverse-proxy configs they give you. Those configs have Apache listen on 6555 (their default for Docker) and then do the path rewrite and port mapping to 8081. I had intended to do this reverse-proxying by hostname and had not scrolled all the way through their example to see that they had Apache listening on 6555.
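For reference, a hedged HAProxy 1.5 sketch of that same idea (the certificate path and backend address are assumptions): the proxy, not Artifactory, listens on 6555, rewrites the path, and forwards to Artifactory on 8081:
frontend docker_registry
    bind *:6555 ssl crt /etc/haproxy/certs/company.pem
    mode http
    # rewrite /v2/... to the Artifactory Docker API path for the "docker" repo
    reqirep ^([^\ :]*)\ /v2(.*$) \1\ /artifactory/api/docker/docker/v2\2
    default_backend artifactory
backend artifactory
    mode http
    # Artifactory itself only listens on its default port
    server artifactory1 127.0.0.1:8081 check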
I am using an Nginx reverse proxy to serve the GitLab web app on port 80, i.e. the Nginx reverse proxy redirects queries from http://ip-address/gitlab to http://ip-address:8000/gitlab. I have updated external_url in my gitlab.rb file. Everything is working (i.e. I am able to access the GitLab web interface via http://ip-address/gitlab), except the generated git clone URLs. When I create new git projects, the repo URL is shown as http://ip-address:8000/gitlab/user/testproject.git, i.e. the port is still there. How can I remove the port?
The displayed repository URL is generated from the parameter external_url in your gitlab.rb file.
You should set it like this:
external_url 'http://ip-address/gitlab'
Then run sudo gitlab-ctl reconfigure to apply this change.
Add "proxy_set_header Host $http_host;" in your "location / { directive.
Then restart nginx.
It should resolve your issue
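Something along these lines, as a sketch (the proxy_pass target is assumed from the question; keep whatever you already have there):
location / {
    proxy_pass http://ip-address:8000;
    # forward the original Host header (without the backend port)
    # so GitLab builds the displayed clone URLs from it
    proxy_set_header Host $http_host;
}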
I have multiple upstream servers behind an Nginx load balancer:
upstream app {
# Make each client IP address stick to the same server
# See http://nginx.org/en/docs/http/load_balancing.html
ip_hash;
# Use IP addresses: see recommendation at https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
server 1.1.1.1:6666; # app-server-a
server 2.2.2.2:6666; # app-server-b
}
Right now I use the servers in an active/passive configuration by taking down each server (e.g. systemctl stop myapp) and letting Nginx detect that the server is down.
However, I'd like to be able to change the upstream server dynamically, without having to take either app server or Nginx OSS down. I'm aware of the proprietary upstream_conf module for NGINX Plus, but I am using Nginx OSS.
How can I dynamically reconfigure the upstream server on Nginx OSS?
You can use one of the following to achieve this:
openresty, an OSS Nginx bundle with Lua scripting ability
Nginx with Lua scripting (you can configure it yourself using Nginx OSS and LuaJIT)
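As a rough sketch of that approach (the shared dict name and the admin port/endpoint below are made up), the upstream peer can be picked per request from a shared dict that you update over HTTP, with no reload:
# in the http {} block
lua_shared_dict backends 1m;
upstream app {
    server 0.0.0.1;   # placeholder, never used directly
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- use whatever peer was last stored, or fall back to a default
        local peer = ngx.shared.backends:get("app") or "1.1.1.1:6666"
        local host, port = peer:match("^(.+):(%d+)$")
        local ok, err = balancer.set_current_peer(host, tonumber(port))
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }
}
# small admin endpoint to switch the active backend at runtime
# (restrict access to it in real use)
server {
    listen 9090;
    location /set_backend {
        content_by_lua_block {
            ngx.shared.backends:set("app", ngx.var.arg_peer)
            ngx.say("backend set to ", ngx.var.arg_peer)
        }
    }
}
For example, curl "http://localhost:9090/set_backend?peer=2.2.2.2:6666" would switch traffic to the second server without reloading Nginx.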
dynx can achieve exactly what you are looking for. It's still a work in progress, but the dynamic upstream functionality is there and it's configurable through a REST API.
I'm adding the details on how to deploy and configure dynx:
You need to have a Docker swarm up and running (for testing purposes a single-machine swarm is enough); follow the Docker documentation to do that.
Then you need to deploy the stack, for example with this command (you need to be at the dynx git root):
docker stack deploy -c docker-compose.yml dynx
To check if the application deployed correctly, you can use this command:
docker stack services dynx
To configure a location through the API you can, for instance, do:
curl -v "http://localhost:8888/configure?location=/httpbin&upstream=http://www.httpbin.org/anything&ttl=10"
To test if it works:
curl -v http://localhost:8666/httpbin
Do not hesitate to contact me or open an issue on GitHub if you are not able to get it to work.