We are trying to integrate Xray with our JFrog Artifactory. On Amazon Linux 2 we are installing it with Docker Compose; we ran config.sh and then the docker-compose commands below:
start rabbitmq: docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d
start postgresql: docker-compose -p xray-postgres -f docker-compose-postgres.yaml up -d
start: docker-compose -p xray up -d
The Xray router keeps restarting after about 20 seconds with the following error:
We have checked whether SELinux, firewalld, or iptables are blocking anything, but all of them are disabled.
Can someone help us resolve this issue?
Now the private IP is able to reach the Artifactory server; we have created Xray in the same VPC as Artifactory.
Now all the Xray containers are running on the Xray server, but we have a different issue.
In the Xray server container we are getting the logs below:
2021-08-12T13:41:17.601Z [jfxr ] [INFO ] [469946e5f04dd2c6] [updates_service:486 ] [main ] Initializing JFrog vendor
2021-08-12T13:41:17.700Z [jfxr ] [ERROR] [ ] [bin_mgr_cache:50 ] [main ] Failed to get binary managerid:failed on GetAllBinaryManagerIds query
--- at /go/src/jfrog.com/xray/internal/dbaccess/dao/binary_managers_dao.go:367 (binMgrDao.GetBinaryManagerId) ---
Caused by: not found
2021-08-12T13:41:17.701Z [jfxr ] [ERROR] [ ] [bin_mgr_cache:59 ] [main ] Failed to get binary manager'' version, err :failed to fetch binary manager
--- at /go/src/jfrog.com/xray/internal/dbaccess/dao/binary_managers_dao.go:290 (binMgrDao.GetBinMgrByID) ---
Caused by: not found
2021-08-12T13:41:17.701Z [jfxr ] [WARN ] [ ] [indexed_resources_cache:36 ] [main ] Failed to get binary managerfor cache:failed to fetch binary manager
--- at /go/src/jfrog.com/xray/internal/dbaccess/dao/binary_managers_dao.go:290 (binMgrDao.GetBinMgrByID) ---
Caused by: not found
Any idea on this?
@praseeb It appears you are giving JFrogURL as the node IP of Xray. It should be the URL of Artifactory that is reachable from the Xray machine. Please pick it up from Admin > Security > Settings as indicated.
I had a similar issue with some custom Docker Compose files.
It was a network issue: the containers (server, indexer, analysis, persist) did not start in the same network as the router. This happened because I used docker-compose [...] --no-start.
With the --no-start option, the network_mode: service:router setting was ignored and the containers landed on the default bridge network, so they could not communicate with the router on its local ports (8046, etc.). A sketch of the pattern is below.
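For illustration, a minimal compose sketch of that pattern; the service names and image tags are hypothetical, and the point is only the network_mode line, which docker-compose up honors but which was skipped in my --no-start flow:

services:
  router:
    image: docker.bintray.io/jfrog/xray-router:latest   # hypothetical tag
  server:
    image: docker.bintray.io/jfrog/xray-server:latest   # hypothetical tag
    # Share the router's network namespace so localhost:8046 reaches the router.
    # This is the setting that was silently ignored in my --no-start flow.
    network_mode: service:router

Bringing the stack up with a plain docker-compose up -d (no --no-start/start split) kept every container in the router's namespace.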
I'm in the process of upgrading and migrating Artifactory version 6.11 (zip install, housed on RH7) to the 7.35 version (housed on a new server and hostname, rpm install). I'm doing this on a cloned VM as a test, so the only thing that is different from our original system is the hostname. As the documentation recommends, I first upgraded 6.11 to 7.35 and everything seemed to go well. I followed the upgrade steps and the migration.sh script completed successfully.
The major issue I'm having is that when I go into Artifacts, the 'url to file' is bringing up a 502 Bad Gateway nginx error. It seems to me that a pointer is incorrect somewhere and I'm confused as to where it could be. The upgrade was successful, so I know the data is there, but Artifactory is not able to link to it properly.
Update/clarification, to improve my description: when I head into the Application bar / Artifactory / Artifacts and select a repo from the left-hand column, the 'url to file' fails to load. I'm assuming this is the tree view?
On the server that is currently working, a URL such as https://acme/artifactory/repo leads to a directory listing. However, on the new server, a URL such as https://new-acme-server/artifactory/repo leads to a 502 Bad Gateway, or an nginx error if I use http (no cert is installed on the test VM, but one is installed on the original server).
In v7.35, I went into the 'http settings' and switched the server provider to both nginx and apache (Tomcat was set as default), and while the site operated fine under both, the URL to the repo files still fails with an nginx error regardless of the server provider.
When I did a full system export of the original server, the documentation had me uncheck "Exclude data". I also exported the repos out as well and imported those in via a path. Everything seems to show up correctly just like on the original server, but I'm still unable to view a directory listing when I click on the url.
Could it be the location of the filestore being different? If so, how would I go about pointing it to the right location?
V7.35: /opt/jfrog/artifactory/var/data/artifactory/filestore
V6.11: /opt/artifactory/artifactory-pro-6.11.3/data/filestore
The base URL is the same as in the original installation: http(s)://domain/artifactory
Output from artifactory-service.log
2022-03-25T16:58:40.429Z [jfrt ] [INFO ] [3bb67ba1f30d560e] [ifactoryApplicationContext:564] [ttp-nio-8081-exec-10] - Artifactory application context set to READY by reload
2022-03-25T16:58:40.430Z [jfrt ] [INFO ] [3bb67ba1f30d560e] [c.CentralConfigServiceImpl:933] [ttp-nio-8081-exec-10] - Configuration reloaded.
2022-03-25T17:09:04.013Z [jfrt ] [INFO ] [708a8ae7c307ec92] [c.CentralConfigServiceImpl:914] [http-nio-8081-exec-5] - Reloading configuration... old revision 212, new revision 213
2022-03-25T17:09:04.121Z [jfrt ] [INFO ] [708a8ae7c307ec92] [c.CentralConfigServiceImpl:542] [http-nio-8081-exec-5] - New configuration with revision 213 saved.
2022-03-25T17:09:04.121Z [jfrt ] [INFO ] [708a8ae7c307ec92] [ifactoryApplicationContext:564] [http-nio-8081-exec-5] - Artifactory application context set to NOT READY by reload
2022-03-25T17:09:04.181Z [jfrt ] [INFO ] [708a8ae7c307ec92] [ifactoryApplicationContext:564] [http-nio-8081-exec-5] - Artifactory application context set to READY by reload
2022-03-25T17:09:04.181Z [jfrt ] [INFO ] [708a8ae7c307ec92] [c.CentralConfigServiceImpl:933] [http-nio-8081-exec-5] - Configuration reloaded.
2022-03-25T17:36:47.707Z [jfrt ] [INFO ] [d7bb51eedd93b03c] [aseBundleCleanupServiceImpl:84] [art-exec-20 ] - Starting to cleanup incomplete Release Bundles
2022-03-25T17:36:47.708Z [jfrt ] [INFO ] [d7bb51eedd93b03c] [b.ReleaseBundleServiceImpl:415] [art-exec-20 ] - Finished deleting orphan/unidentified items from _intransit repository
2022-03-25T17:36:47.709Z [jfrt ] [INFO ] [d7bb51eedd93b03c] [aseBundleCleanupServiceImpl:90] [art-exec-20 ] - Finished incomplete Release Bundles cleanup
Your filestore locations for both Artifactory 6 and Artifactory 7 are correct.
This indicates to me that the issue is with your reverse proxy.
To confirm, can you check the two things below?
Open your Artifactory on its IP and port:
http://localhost:8082/ (the default port is 8082 if you have not modified it). Now go to the tree view in the Application tab of Artifactory and try to download a specific file. If you are able to download, then the issue is likely not with the filestore or the upgrade; it is most probably the reverse proxy.
In that case, navigate to Artifactory > Administration > Artifactory > HTTP Settings > Generate new settings > place them in the reverse proxy and restart.
If, in the above test, you are still not able to download, then check the logs ($JFROG_HOME/artifactory/var/log/artifactory-service.log) for a message similar to the one below.
2022-03-24T20:15:35.072Z [jfrt ] [WARN ] [2a73d62655afd1ad] [.r.ArtifactoryResponseBase:136] [http-nio-8081-exec-9] - Sending HTTP error code 500: Could not process download request: Binary provider has no content for '165c79f8dff2f9e7d3ccadcbc295f7ef8e6e95f0'
If yes, it indicates that Artifactory is not able to find the binary. If none of the above helped, post here the snippet from the above log file captured while you are trying to download a file, along with the file name.
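If it helps, the same download test can be run from a shell on the server itself, bypassing the reverse proxy entirely; the repository and file path below are hypothetical placeholders:

# Fetch a known artifact straight from the router port; -u prompts for the password
curl -u admin -O "http://localhost:8082/artifactory/example-repo/path/to/file.txt"

If this direct download succeeds while the https://new-acme-server/... URL fails, that again points at the nginx configuration rather than the filestore.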
For your update/clarification questions, please allow me to clarify.
In Artifactory 6.x, Artifactory acted as both the server and the UI, therefore you could use the http://acme/artifactory URL. However, in Artifactory 7.x, Artifactory changed to work with multiple microservices, and the UI has moved to its own microservice (now named "Frontend"). You can try accessing the "Native Browser" by using this URL: http://acme/ui/native/REPOSITORY/.
To add to the above and to Ganapathi's reply, the URL for your Artifactory has changed from http://acme:8081/artifactory to http://acme:8082, since Artifactory now uses the "router" microservice (external port 8082, internal port 8046) to redirect all requests to the respective microservices. You can check the full list here.
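As a quick sanity check that the router is up and serving on the new port, you can hit its ping endpoint (the hostname here is an example):

# A healthy router answers this ping endpoint with OK
curl http://acme:8082/router/api/v1/system/ping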
I hope this clarifies more.
I'm a newbie with JFrog Artifactory and I'm having problems starting the Artifactory container.
It works, and I can see the web interface, but I'm concerned about the errors in the console.
Here is the docker-compose:
version: "3.9"
services:
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss
    container_name: artifactory
    environment:
      JF_SHARED_NODE_IP: "127.0.0.1"
      JF_SHARED_NODE_ID: "artifactory"
      JF_SHARED_NODE_NAME: "artifactory"
    ports:
      - 8082:8082
      - 8081:8081
    volumes:
      - ./jfrog/artifactory/var/:/var/opt/jfrog/artifactory
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
Before I launch it, I execute the following:
sudo mkdir -p ./jfrog/artifactory/var/etc/
touch ./jfrog/artifactory/var/etc/system.yaml
chown -R 1030:1030 ./jfrog/artifactory/var
chmod -R 777 ./jfrog/artifactory/var
Here is the startup log:
artifactory_startup.log (sorry for uploading the log to Google Drive; it's too large for the Stack Overflow post)
I'm concerned about these strings:
[jfac ] [WARN ] [976f1c9489fa2680] [c.z.h.u.DriverDataSource:70 ] [ocalhost-startStop-1] - Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
[jfac ] [WARN ] [976f1c9489fa2680] [o.j.c.ExecutionUtils:165 ] [pool-8-thread-2 ] - Retry 10 Elapsed 5.22 secs failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
[jfrt ] [ERROR] [ ] [o.j.c.w.FileWatcher:221 ] [Thread-6 ] - Unknown exception while watching for file changes: null
artifactory | java.lang.NullPointerException: null
artifactory | at org.jfrog.config.watch.FileWatcher.lambda$doWatch$2(FileWatcher.java:202)
artifactory | at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
artifactory | at org.jfrog.config.watch.FileWatcher.doWatch(FileWatcher.java:201)
artifactory | at java.base/java.lang.Thread.run(Thread.java:829)
[jfrou] [WARN ] [6424ea6f8b2dc101] [local_topology.go:256 ] [main ] - Readiness test failed with the following error: "required node services are missing or unhealthy"
Please help me find out what these errors mean. Or can I just use the service and everything is OK?
After some digging I found out that Tomcat and the JVM couldn't get enough CPU threads. The solution is to give more resources to the containers; I had completely forgotten to read about the system requirements.
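For reference, a sketch of how the compose file above could cap and grant more resources; the numbers are only examples, not the official requirements:

services:
  artifactory:
    # ...image, ports, and volumes as in the compose file above...
    deploy:
      resources:
        limits:
          cpus: "2"     # example value; Artifactory wants multiple cores
          memory: 4g    # example value; size to your installation

Depending on your Compose version, the deploy section of a version 3.x file may only take effect when run as docker-compose --compatibility up -d.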
For the past few days, I've been trying to install OpenStack with Packstack on CentOS 7 running through Oracle's VirtualBox on my Linux distro. I've downloaded the CentOS 7 DVD image, installed the server-with-GUI type (including some features that I now can't remember), and ran these commands as root:
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
At first, the internet worked fine. After disabling and stopping the NetworkManager service, I couldn't access the internet anymore, so I edited the config file /etc/sysconfig/network-scripts/ifcfg-enp0s3 so that it now looks like this:
TYPE=Ethernet
PROXY_METHOD=no
DNS=yes
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
NM_CONTROLLED=no
UUID=1ce94676-c997-4772-9253-a3ac48a7814f
DEVICE=enp0s3
ONBOOT=yes
HWADDR=08:00:27:A5:FA:0F
DNS1=208.67.222.222
DNS2=208.67.220.220
PEERDNS=no
DOMAIN=localdomain
I also edited the /etc/resolv.conf file (which was empty, by the way) and added the lines:
nameserver 208.67.222.222
nameserver 208.67.220.220
search localdomain
After this, internet seems to be working fine on the VM, so I began installing packstack with the following commands (as root as always):
yum install -y centos-release-openstack-train
dnf update -y   # dnf wasn't present, so I installed it first
dnf install -y openstack-packstack
There had been no issues to this point. Sorry for the long post, I'm trying to include everything I did.
Now, when I run (as root) packstack --allinone to install packstack I get the following output:
[root@localhost smaug]# packstack --allinone
Welcome to the Packstack setup utility
The installation log file is available at: /var/tmp/packstack/20210701-104910-If_Lz5/openstack-setup.log
Installing:
Clean Up [ DONE ]
Discovering ip protocol version [ DONE ]
Setting up ssh keys [ DONE ]
Preparing servers [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries [ DONE ]
Setting up CACERT [ DONE ]
Preparing AMQP entries [ DONE ]
Preparing MariaDB entries [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Preparing Keystone entries [ DONE ]
Preparing Glance entries [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Preparing Cinder entries [ DONE ]
Preparing Nova API entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Preparing Nova Compute entries [ DONE ]
Preparing Nova Scheduler entries [ DONE ]
Preparing Nova VNC Proxy entries [ DONE ]
Preparing OpenStack Network-related Nova entries [ DONE ]
Preparing Nova Common entries [ DONE ]
Preparing Neutron API entries [ DONE ]
Preparing Neutron L3 entries [ DONE ]
Preparing Neutron L2 Agent entries [ DONE ]
Preparing Neutron DHCP Agent entries [ DONE ]
Preparing Neutron Metering Agent entries [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Preparing OpenStack Client entries [ DONE ]
Preparing Horizon entries [ DONE ]
Preparing Swift builder entries [ DONE ]
Preparing Swift proxy entries [ DONE ]
Preparing Swift storage entries [ DONE ]
Preparing Gnocchi entries [ DONE ]
Preparing Redis entries [ DONE ]
Preparing Ceilometer entries [ DONE ]
Preparing Aodh entries [ DONE ]
Preparing Puppet manifests [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.112.221_controller.pp
192.168.112.221_controller.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.112.221_controller.pp
Notice: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: Error: (pymysql.err.OperationalError) (1045, u"Access denied for user 'nova'@'192.168.112.221' (using password: YES)") (Background on this error at: http://sqlalche.me/e/e3q8)
You will find full trace in log /var/tmp/packstack/20210701-104910-If_Lz5/manifests/192.168.112.221_controller.pp.log
Please check log file /var/tmp/packstack/20210701-104910-If_Lz5/openstack-setup.log for more information
Additional information:
* Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS or FWaaS services. Geneve will be used as the encapsulation method for tenant networks
* A new answerfile was created in: /root/packstack-answers-20210701-104911.txt
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.112.221. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.112.221/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
It seems to be a DBAPI OperationalError, as pointed to by http://sqlalche.me/e/e3q8, but that seemed off for some reason (I hadn't even completed the installation of OpenStack through Packstack, and to my knowledge the error message points to a kind of runtime error). Investigating the log file /var/tmp/packstack/20210701-104910-If_Lz5/manifests/192.168.112.221_controller.pp.log gave the following result:
Warning: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Placement::Deps/Anchor[placement::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Cron::Fernet_rotate/Cron[keystone-manage fernet_rotate]: Skipping because of failed dependencies
Error: Failed to apply catalog: Execution of '/usr/bin/openstack domain list --quiet --format csv' returned 1: Unable to establish connection to http://127.0.0.1:5000/v3/domains?: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /v3/domains (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff001fce090>: Failed to establish a new connection: [Errno 111] Connection refused',)) (tried 36, for a total of 170 seconds)
If I try to connect with telnet localhost 5000 I'm greeted by the following:
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
However, everything seems fine if I connect normally without specifying the port.
If the connection is refused, I thought, then port 5000 is not open/nothing is listening on it. So I tried to open it and move on with the installation.
Here lies the arcanum: no matter what I do, the connection is always refused.
I tried to open the port on the loopback and the main internet device (enp0s3), but nothing. I even tried, although it didn't make sense, to restart the firewalld service to add the port with firewall-cmd --zone=public --add-port=5000/tcp --permanent, and still the connection was refused with telnet.
iptables-save | grep 5000 yields the following:
-A INPUT -i lo -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -i enp0s3 -p tcp -m tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5000 -m comment --comment "001 keystone incoming keystone" -j ACCEPT
Looking at this, there seem to be rules that allow communication on port 5000, but the connection is still refused and the install of OpenStack keeps halting.
To add some more info: lsof -i :5000, ss -antup | grep 5000, and netstat -lntu | grep 5000 do not produce any output, so this means (if I understand correctly) that no process is listening on that port and/or that the port is still closed.
I'm confused about what to do. Can you help?
Thanks,
I'm trying to launch Artifactory within a container, and it doesn't seem to connect to my Postgres database.
I am launching it like so:
docker run --network mynet --name artifactory \
  -e JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver \
  -e JF_SHARED_DATABASE_URL="jdbc:postgresql://postgres:5432/artifactory" \
  -e JF_SHARED_DATABASE_TYPE=postgresql \
  -e JF_SHARED_DATABASE_HOST=postgres \
  -e JF_SHARED_DATABASE_PORT=5432 \
  -e JF_SHARED_DATABASE_USER=artifactory \
  -e JF_SHARED_DATABASE_PASSWORD=password \
  -p 9081:8081 -i -t --rm docker.bintray.io/jfrog/artifactory-pro:7.10.2
The output is as follows:
Preparing to run Artifactory in Docker
Running as uid=1030(artifactory) gid=1030(artifactory)
Dockerfile for this image can found inside the container.
To view the Dockerfile: 'cat /docker/artifactory-pro/Dockerfile.artifactory'.
Resolved JF_SHARED_DATABASE_TYPE (postgresql) from environment variable
Resolved JF_SHARED_DATABASE_URL (jdbc:postgresql://postgres:5432/artifactory) from environment variable
Waiting for DB postgresql to be ready on postgres/5432 for 30 second
This shouldn't take 30 seconds, so this is an immediate red flag.
Then after the 30s expires, I see many of these:
2020-10-16T13:19:48.662Z [jfmd ] [INFO ] [75be6b93f5c12126] [database_bearer.go:100 ] [main ] - Connecting to (db config: {postgresql jdbc:postgresql://postgres:5432/artifactory}) [database]
2020-10-16T13:19:48.671Z [jfmd ] [WARN ] [75be6b93f5c12126] [jobs.go:92 ] [main ] - [RETRY] Initiating database connection: pq: no PostgreSQL user name specified in startup packet. Retrying in 1s ... (1/120) [database]
The startup output confirms it is set:
JF_SHARED_NODE_NAME : da6d3e81a7c5
JF_SHARED_DATABASE_PORT : 5432
JF_SHARED_DATABASE_USER : artifactory
JF_SYSTEM_YAML : /opt/jfrog/artifactory/var/etc/system.yaml
JF_ARTIFACTORY_PID : /opt/jfrog/artifactory/app/run/artifactory.pid
JF_SHARED_DATABASE_TYPE : postgresql
JF_SHARED_DATABASE_PASSWORD : ******
JF_PRODUCT_DATA_INTERNAL : /var/opt/jfrog/artifactory
JF_PRODUCT_HOME : /opt/jfrog/artifactory
JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES : jfrt,jfac,jfmd,jffe,jfevt
JF_SHARED_DATABASE_HOST : postgres
JF_SHARED_DATABASE_URL : jdbc:postgresql://postgres:5432/artifactory
JF_SHARED_DATABASE_DRIVER : org.postgresql.Driver
JF_SHARED_NODE_IP : 172.19.0.3
JF_SHARED_NODE_ID : da6d3e81a7c5
JF_ARTIFACTORY_USER : artifactory
Not sure where else to take this. Thanks for any help!
You are almost there; you have a typo. It should be
JF_SHARED_DATABASE_USERNAME
and not
JF_SHARED_DATABASE_USER
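For clarity, here is the docker run command from the question with only that variable renamed (everything else unchanged):

docker run --network mynet --name artifactory \
  -e JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver \
  -e JF_SHARED_DATABASE_URL="jdbc:postgresql://postgres:5432/artifactory" \
  -e JF_SHARED_DATABASE_TYPE=postgresql \
  -e JF_SHARED_DATABASE_HOST=postgres \
  -e JF_SHARED_DATABASE_PORT=5432 \
  -e JF_SHARED_DATABASE_USERNAME=artifactory \
  -e JF_SHARED_DATABASE_PASSWORD=password \
  -p 9081:8081 -i -t --rm docker.bintray.io/jfrog/artifactory-pro:7.10.2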
When I download, unpack, and launch artifactory.sh, I see the following error:
2020-02-26T21:32:50.496Z [jfac ] [ERROR] [c1b4de79a3f11666] [.j.a.s.s.r.JoinServiceImpl:253] [http-nio-8040-exec-1] - Could not validate router Check-url: http://XXXXXX:8082/router/api/v1/system/ping
And then
2020-02-26T21:32:55.636Z [jfac ] [WARN ] [67b9d42698f5614c] [o.j.c.ExecutionUtils:141 ] [pool-6-thread-2 ] - Retry 20 Elapsed 9.04 secs failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
I realize I'm missing something obvious but couldn't figure it out yet. Any suggestions? Thanks.
Alexey, I suspect an IPv6 IP is being picked up by the start script, causing this.
Can you update your system.yaml (it will be in the var/etc/ folder) with the following and try?
shared:
  node:
    ip: <your ipv4 IP>
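After a restart, one way to confirm the change took effect is to check which IP the router bound to in its log; the path below assumes the default /opt/jfrog/artifactory home seen in the logs elsewhere in this thread:

# Should now report your IPv4 address instead of ::1
grep "JFrog Router IP" /opt/jfrog/artifactory/var/log/router-service.log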
This might be helpful to someone. I had tried many of the things mentioned on Google to solve the issue, but they did not work. Finally, it was resolved by setting up proper resources: I used 4 GB RAM and 2 cores and it worked.
Alexey, do you see any errors in:
tomcat localhost log - will be at var/log/tomcat
router-service.log - will be at var/log
access-service.log - will be at var/log
Potential issues may be:
your box is not allowing localhost calls (due to some security set-up), or
all dependent services are not starting for some other reason.
Also, please check which script you are using; there is a new artifactory.sh script packed in the app/bin folder, which should be used.
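A quick way to scan all three logs at once, assuming the default unpacked layout (adjust the base path to your installation):

# Base path is an assumption; point it at your JFROG_HOME/artifactory/var
JF_VAR=/opt/jfrog/artifactory/var
tail -n 50 "$JF_VAR"/log/router-service.log "$JF_VAR"/log/access-service.log
tail -n 50 $JF_VAR/log/tomcat/localhost*.log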
In my case, I had a similar error the moment I tried to access Artifactory. Logs of router-service:
[root@artifactory-master log]# tail -f router-service.log
2020-03-20T22:17:05.328Z [jfrou] [INFO ] [ ] [bootstrap.go:70 ] [main ] - Router (jfrou) service initialization started. Version: 1.1.0 Revision: c2646fcb28e2d4ca095b07aacebe509d934cef77 PID: 19062 Home: /opt/jfrog/artifactory
2020-03-20T22:17:05.329Z [jfrou] [INFO ] [ ] [bootstrap.go:73 ] [main ] - JFrog Router IP: ::1
2020-03-20T22:17:05.334Z [jfrou] [INFO ] [ ] [bootstrap.go:159 ] [main ] - System configuration encryption report:
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2020-03-20T22:17:05.336Z [jfrou] [INFO ] [ ] [bootstrap.go:78 ] [main ] - JFrog Router Service ID: jfrou#01e3wgemz9esckmd8v48etdy18
2020-03-20T22:17:05.336Z [jfrou] [INFO ] [ ] [bootstrap.go:79 ] [main ] - JFrog Router Node ID: artifactory-master
2020-03-20T22:17:07.354Z [jfrou] [INFO ] [ ] [config_holder.go:107 ] [main ] - configuration update detected
2020-03-20T22:17:10.738Z [jfrou] [FATAL] [ ] [bootstrap.go:100 ] [main ] - Cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 400; message: Could not validate router Check-url: http://::1:8082/router/api/v1/system/ping; detail: I/O error on GET request for "http:///:1:8082/router/api/v1/system/ping": URI does not specify a valid host name: http:///:1:8082/router/api/v1/system/ping; nested exception is org.apache.http.client.ClientProtocolException: URI does not specify a valid host name: http:///:1:8082/router/api/v1/system/ping
To give you some context, I am running Artifactory in a CentOS 8 VM and I'm accessing the Artifactory graphical interface from a Windows machine.
That means I am using a web browser (Chrome) to navigate to the Artifactory instance.
For that, on the VM side I updated the files "hosts" and "hostname" (in /etc/):
hosts:
127.0.0.1 localhost artifactory-master
::1 localhost artifactory-master
hostname:
artifactory-master
On the Windows machine, I updated the hosts file located in "C:\Windows\System32\drivers\etc" with the VM host IP and hostname:
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
192.100.100.10 artifactory-master
(to get the IP of the VM, run the command ifconfig)
Then, I started artifactory by running the command:
service artifactory start
And tried to access Artifactory in the browser, unsuccessfully:
http://artifactory-master:8082/ui
I stopped the service and, after some tryouts in pursuit of my goal, realized that I had to comment out the "::1" address in the /etc/hosts file:
hosts:
127.0.0.1 localhost artifactory-master
#::1 localhost artifactory-master
Finally, I started the service again and I was able to access Artifactory. Logs of router-service:
2020-03-24T23:02:17.219Z [jfrou] [INFO ] [ ] [bootstrap.go:70 ] [main ] - Router (jfrou) service initialization started. Version: 1.1.0 Revision: c2646fcb28e2d4ca095b07aacebe509d934cef77 PID: 14542 Home: /opt/jfrog/artifactory
2020-03-24T23:02:17.220Z [jfrou] [INFO ] [ ] [bootstrap.go:73 ] [main ] - JFrog Router IP: 127.0.0.1
2020-03-24T23:02:17.224Z [jfrou] [INFO ] [ ] [bootstrap.go:159 ] [main ] - System configuration encryption report:
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2020-03-24T23:02:17.227Z [jfrou] [INFO ] [ ] [bootstrap.go:78 ] [main ] - JFrog Router Service ID: jfrou#01e3wgemz9esckmd8v48etdy18
2020-03-24T23:02:17.227Z [jfrou] [INFO ] [ ] [bootstrap.go:79 ] [main ] - JFrog Router Node ID: artifactory-master
2020-03-24T23:02:19.572Z [jfrou] [INFO ] [ ] [config_holder.go:107 ] [main ] - configuration update detected
2020-03-24T23:02:25.663Z [jfrou] [INFO ] [ ] [join_executor.go:180 ] [main ] - Cluster join: Successfully joined the cluster
2020-03-24T23:02:25.813Z [jfrou] [INFO ] [ ] [registry_handler.go:89 ] [main ] - the following services were registered automatically based on persisted data: jfac#01e3wgdn6q0gvj0czswc8k0gp8, jffe#000, jfmd#01e3wges9tvwawj403y5mxfjp7, jfrt#01e3wgfass87mh1nbcv5rv1t98
2020-03-24T23:02:25.984Z [jfrou] [INFO ] [ ] [main.go:36 ] [main ] - Router (jfrou) service initialization completed in 8.808 seconds. Listening on port: 8082
2020-03-24T23:03:01.281Z [jfrou] [INFO ] [7e7df2f621a4e1aa] [local_topology.go:212 ] [main ] -
###############################################################
### All services started successfully in 44.081 seconds ###
###############################################################
PS: my Artifactory version is OSS 7.2.1.
We have put the IP in our node settings, and this doesn't work.
We can see that the router is still using localhost, and that it's using the ::1 IPv6 address, despite our system.yaml being indented correctly.
This was working fine (the system ran for more than 4 months), and then with the most recent update it started to fail.
Does anyone have anything better than "check the file" - something that actually addresses the issue, which is the following:
OSes generally ship with localhost mapped to both 127.0.0.1 and ::1.
The JFrog router is "dumb" in that it picks up the URL https://localhost:8046, but then resolves it to ::1 (the IPv6 catch-all).
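To see for yourself which address localhost resolves to first on a given box, a quick check is:

# Prints resolved addresses in order; ::1 listed first means IPv6 wins for localhost
getent ahosts localhost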
Many SO tickets show @prasanna and others doing "edits" to the file, but even with correct syntax, we can see JFrog's product is not doing what it says it's doing.
Example of system.yaml (you can see the indents are correct; in fact, this is from the system.yaml-full-example template that JFrog provides):
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
  ## Security Configuration
  security:
    ## Join key value for joining the cluster (takes precedence over 'joinKeyFile')
    #joinKey: ""
    ## Join key file location
    #joinKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/join.key>"
    ## Master key file location
    ## Generated by the product on first startup if not provided
    #masterKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/master.key>"
    ## Maximum time to wait for key files (master.key and join.key)
    #bootstrapKeysReadTimeoutSecs: 120
  ## Node Settings
  node:
    ## A unique id to identify this node.
    ## Default auto generated at startup.
    id: "art-00"
    ## Default auto resolved by startup script
    ip: 10.x.34.63 (x is there on purpose)
    ## Sets this node as primary in HA installation
You can see plainly in the logs at startup what is happening, as the OP showed.
SAMPLE LOG