Configure Artifactory Cloud as private Docker Registry - artifactory

I'm using the following documentation to configure a private Docker registry in Artifactory Cloud.
I've configured the Docker registry from the UI with the V2 API and forced authentication.
Let's skip the reverse proxy configuration for now. I'm trying to call Artifactory directly to verify that it works:
curl --user test:test --connect-timeout 5 http://myaccountname-docker-dockerv2-local.artifactoryonline.com/artifactory/api/docker/dockerv2-local/v2/_ping
Notes:
test - username and password
myaccountname - my account name in Artifactory Cloud
dockerv2-local - created docker registry repository (in Artifactory notation)
I got this response:
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
I tried to use:
myaccountname.artifactoryonline.com - the same result
and these:
myaccountname.artifactoryonline.com:8080
myaccountname-docker-dockerv2-local.artifactoryonline.com:8080
myaccountname.artifactoryonline.com:8081
myaccountname-docker-dockerv2-local.artifactoryonline.com:8081
For all of these I got a connection timeout.
I've spent a lot of time trying to figure out the root cause of this problem, and I need your help.
Can you tell me what I've done wrong and how to fix it? Or at least give me a direction to look in?

Related

Unable to access GCP service after gke upgrade

The cluster was automatically upgraded in mid-May:
1.14.10-gke.27 → 1.14.10-gke.36
https://cloud.google.com/kubernetes-engine/docs/release-notes?_ga=2.196679278.-315187236.1572486593#may_13_2020
After that, I started getting Memorystore (Redis) connection errors and cURL error 6.
cURL error:
cURL error 6: Could not resolve host: www.googleapis.com (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
This problem happens occasionally, not always.
It worked fine before the upgrade.
Role-based access control is not in use.
Workload Identity is not in use.
Please advise.
Closing this for now.
It is probably not a GKE version issue.
The likely cause is that the node was a preemptible node.
When I ran a reboot test on the preemptible node, I was able to reproduce the error:
gcloud compute instances simulate-maintenance-event <instance name> --zone <zone>
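If you want to confirm which nodes are preemptible before simulating maintenance, the standard GKE node label should expose it; a minimal check, assuming the default label name:
kubectl get nodes -l cloud.google.com/gke-preemptible=true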

Jfrog API for list docker tags

Getting a 302 from the JFrog REST API for listing Docker tags.
Documentation:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-ListDockerTags
Usage:
GET /api/docker/{repo-key}/v2/{image name}/tags/list?n=<n from the request>&last=<last tag value from previous response>
My query:
repo-key - docker-local
My image name looks like /eric/com.jfrog/test-app,
so the query is:
https://jfrog.test.com/api/docker/docker-local/v2/eric/com.jfrog/test-app/tags/list
Response:
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx/1.17.5</center>
</body>
</html>
The Artifactory API is exposed under the /artifactory/api/... path, at least for my Pro and JCR instances configured with nginx subdomains.
Try the following paths:
Using the general Artifactory URL:
curl -u user:pass https://jfrog.test.com/artifactory/api/docker/docker-local/v2/my-docker-image/tags/list?
If you're using subdomains and reverse proxies, e.g. the image is available at docker-local.test.com/my-docker-image:latest, then the following path should be correct as well:
curl -u user:pass https://docker-local.test.com/artifactory/api/docker/docker-local/v2/my-docker-image/tags/list?
In both cases the /artifactory/api/docker prefix is always present. docker-local is the name of the repo (local or virtual) and my-docker-image is the name of the image. For your path you should probably replace my-docker-image with eric/com.jfrog/test-app.
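A minimal sketch putting that together for your exact image path (hostname and repo name taken from your question, user:pass as placeholders):
curl -u user:pass "https://jfrog.test.com/artifactory/api/docker/docker-local/v2/eric/com.jfrog/test-app/tags/list"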
Instead of using -u user:pass, how can I use a token or API key so the password isn't exposed?
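One option, treating the exact header names as something to verify against your Artifactory version's docs, is to send an API key or access token header instead of basic auth, for example:
curl -H "X-JFrog-Art-Api: <api-key>" "https://jfrog.test.com/artifactory/api/docker/docker-local/v2/eric/com.jfrog/test-app/tags/list"
curl -H "Authorization: Bearer <access-token>" "https://jfrog.test.com/artifactory/api/docker/docker-local/v2/eric/com.jfrog/test-app/tags/list"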

How to control vhost_shared_traffic memory K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several PHP/Lumen microservices. We started seeing the app's php-fpm/nginx reporting a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the application logs a 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is that nginx returns the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something in front of the nginx/application layer is terminating the connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my suspicion is either the ELB or the ingress, since the application is the one with no socket left to return to. So I started digging through the ingress logs...
Potential core problem?
While looking through the ingress logs I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to finding the next one, but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link some info with suggestions on how much memory you should give the zone.
First, get the ingress controller version by checking the image version on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in the nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
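If you'd rather grep for it directly on a running controller pod, something like this should work (the pod name is a placeholder):
kubectl -n <namespace> exec <ingress-controller-pod> -- grep vhost_traffic_status_zone /etc/nginx/template/nginx.tmpl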
This gives us the config variable to look up in the code. Our next grep for VtsStatusZoneSize leads us to the lines in internal/ingress/controller/config/config.go
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
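As a sketch, adding the key to that configmap could look like the following; the configmap name and namespace are the ones from your deployment, and 32m is just an example value:
kubectl -n <namespace> patch configmap ingress-nginx-ingress-controller --type merge -p '{"data":{"vts-status-zone-size":"32m"}}'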
When it comes to what size you may want to set the zone, there are the docs here that suggest setting it to 2*usedSize:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

CFHTTP does not connect over SSL connection?

I have just installed an intermediate & primary SSL certificate on my VPS. Everything is working well, except when I make a cfhttp call:
<cfhttp url="https://advert.establishmindfulness.com/ad-zone-1/?categoryid=1" method="get" result="adzone" />
<cfdump var="#adzone#" />
From https://app.establishmindfulness.com to https://advert.establishmindfulness.com. These 2 subdomains are on the same server, and I am using a wildcard SSL certificate:
*.establishmindfulness.com
That covers all subdomains.
VPS environment
OS: Windows 2008R2 with IIS7
Application server: Lucee 4.5.2.018 final
Servlet Container: Apache Tomcat/8.0.28
Java: 1.8.0_66 (Oracle Corporation) 64bit
Do I need to install the intermediate.crt & primaryssl.crt into my keystore cacerts? Is this the problem?
I tried just installing the certificate.cer that I grabbed from Internet Explorer, but maybe this is the wrong approach?
I still get the error:
Error Detail
Unknown host: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
OK. For anyone who comes across this issue, so you don't have to spend several hours pulling your hair out, here is how I managed to get the connection to work.
This is taken from the following link:
https://groups.google.com/forum/#!topic/lucee/BPm8vYdgkPQ
Thank you Dominic Watson
I've just tried this and got it working:
Log in to Lucee server admin and navigate to "SSL Certificates"
Enter your host name "establishmindfulness.com" in the Host field (without the quotes)
Hit "list" button
Hit "install" button
That's it. The cfhttp call started working.
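If you'd prefer to do this by hand instead of through the Lucee admin, the equivalent should be importing the certificate into the JRE's cacerts keystore with keytool. A rough sketch, with the keystore path and alias as assumptions and changeit as the default keystore password (restart Lucee/Tomcat afterwards so the keystore is re-read):
keytool -import -trustcacerts -alias establishmindfulness -file certificate.cer -keystore "<lucee-jre-path>/lib/security/cacerts" -storepass changeit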

Getting "connection refused" when trying to access etcd from within a Docker container

I am trying to access etcd from within a running Docker container. When I run
curl http://172.17.42.1:4001/v2/keys
I get
curl: (7) Failed to connect to 172.17.42.1 port 4001: Connection refused
I have four other hosts where this works fine, but every container on this machine has this problem. I'm really at a loss as to what's going on, and I don't know how to debug it.
My etcd environment variables are
ETCD_ADVERTISE_CLIENT_URLS=http://10.242.10.2:2379
ETCD_DISCOVERY=https://discovery.etcd.io/<token_removed>
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.242.10.2:2380
ETCD_LISTEN_CLIENT_URLS=http://10.242.10.2:2379,http://127.0.0.1:2379,http://0.0.0.0:4001
ETCD_LISTEN_PEER_URLS=http://10.242.10.2:2380
I can also access etcd from the host with
curl http://localhost:4001/v2/keys
So there seems to be some error when routing from the container out to the host. But I can't figure out what it is. Can anyone point me in the right direction?
I found that I had to use the --advertise-client-urls and --listen-client-urls flags, like so:
./etcd --advertise-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001' --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
Then I was able to successfully do
curl -L http://hostname:2379/version
from any machine that could reach that server and it worked.
It turns out etcd was only listening on localhost:4001 on that machine, which is why I couldn't access it from within a container. This is despite me configuring one of the listen client urls to 0.0.0.0:4001.
It turns out that I had run sudo systemctl enable etcd2, which caused it to run before the cloud-config service ran. As such, etcd started with default configuration instead of the one that I had specified in my cloud config. Running sudo systemctl disable etcd2 fixed the issue.
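For anyone debugging the same thing, a quick way to confirm which addresses etcd is actually bound to on the host is something like this (swap ss for netstat depending on your distro):
sudo ss -lntp | grep -E '2379|4001'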
