I am sometimes facing an issue when I need to deploy my apps using Helm charts:
the repository update times out and we can't deploy.
In the logs I see:
2022-06-14T09:33:53.151Z [jfrt ] [INFO ] [185bdda5673ce024] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Helm Virtual Metadata stuck tasks: 4651
2022-06-14T09:33:53.151Z [jfrt ] [INFO ] [185bdda5673ce024] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Docker Catalog Index stuck tasks: 23
2022-06-14T09:33:53.152Z [jfrt ] [INFO ] [185bdda5673ce024] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Helm Metadata stuck tasks: 2
2022-06-14T09:33:53.152Z [jfrt ] [INFO ] [185bdda5673ce024] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Maven Metadata stuck tasks: 243
2022-06-14T09:33:53.152Z [jfrt ] [INFO ] [185bdda5673ce024] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Conan Metadata stuck tasks: 272
2022-06-14T09:33:53.152Z [jfrt ] [INFO ] [185bdda5673ce024] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Npm Metadata stuck tasks: 1
2022-06-14T09:33:53.313Z [jfrt ] [WARN ] [9da4cfb83d1da2c5] [.r.ArtifactoryResponseBase:174] [tp-nio-8081-exec-159] - Client Closed Request 499: java.io.IOException: Broken pipe
2022-06-14T09:33:58.153Z [jfrt ] [INFO ] [d095d20ce78b7e4b] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Helm Virtual Metadata stuck tasks: 4652
2022-06-14T09:33:58.153Z [jfrt ] [INFO ] [d095d20ce78b7e4b] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Docker Catalog Index stuck tasks: 23
2022-06-14T09:33:58.153Z [jfrt ] [INFO ] [d095d20ce78b7e4b] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Helm Metadata stuck tasks: 2
2022-06-14T09:33:58.153Z [jfrt ] [INFO ] [d095d20ce78b7e4b] [.AsyncWorkQueueServiceImpl:275] [WorkQueueJob ] - Executing Maven Metadata stuck tasks: 243
Version used: 7.38.10 rev 73810900
A restart of Artifactory fixes the issue, but this is really inconvenient.
Any idea how I can fix this?
Check whether any memory- or CPU-intensive processes are running on the host where Artifactory is hosted. I had the same problem with Artifactory 7.37.x, and I found that one of our Artifactory replication processes was using a lot of resources, which led to these stuck messages on the Helm indexes. That in turn caused helm repo add and helm repo update to fail most of the time, and thereby helm install.
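A minimal sketch of that check, assuming a Linux host with GNU ps; the log path below is the typical default for a 7.x install and may differ on yours:

```shell
# Show the biggest CPU and memory consumers on the Artifactory host.
ps aux --sort=-%cpu | head -n 6
ps aux --sort=-%mem | head -n 6

# Typical default log location -- adjust to your installation.
LOG=/opt/jfrog/artifactory/var/log/artifactory-service.log

# If the "stuck tasks" backlog number keeps climbing between samples,
# the work queue is not draining.
if [ -f "$LOG" ]; then
  grep 'stuck tasks' "$LOG" | tail -n 5
else
  echo "log not found at $LOG"
fi
```

If a replication or indexing job dominates the ps output, pause it and watch whether the backlog counter stops climbing before resorting to a restart.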
Related
I have Artifactory Pro (trial) installed on a VM in Azure. I have to check how it will work without internet access. I use only Maven repositories, so I created an NSG outbound rule to block HTTPS access. After a reboot, Artifactory won't load. From the log:
2023-01-04T11:27:40.641Z [jfrou] [WARN ] [0e5f4bf9d6dc7f57] [local_topology.go:274 ] [main ] [] - Readiness test failed with the following error: "required node services are missing or unhealthy"
2023-01-04T11:27:41.087Z [jfob ] [INFO ] [44efcd4b59cba041] [artifactory_client.go:159 ] [main ] [] - Fetching Artifactory version [artifactory_client]
2023-01-04T11:27:41.092Z [jfob ] [INFO ] [44efcd4b59cba041] [artifactory_client.go:176 ] [main ] [] - Got HTTP 503 from Artifactory [artifactory_client]
2023-01-04T11:27:41.092Z [jfob ] [INFO ] [44efcd4b59cba041] [artifactory_client.go:183 ] [main ] [] - Fetch base URL from Artifactory returned: 503 <!--
~ Artifactory is a binaries repository manager.
I'm a newbie with JFrog Artifactory, and I'm having problems starting the Artifactory container.
It works, and I can see the web interface, but I'm concerned about the errors in the console.
Here is the docker-compose:
version: "3.9"
services:
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss
    container_name: artifactory
    environment:
      JF_SHARED_NODE_IP: "127.0.0.1"
      JF_SHARED_NODE_ID: "artifactory"
      JF_SHARED_NODE_NAME: "artifactory"
    ports:
      - 8082:8082
      - 8081:8081
    volumes:
      - ./jfrog/artifactory/var/:/var/opt/jfrog/artifactory
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "10"
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
Before I launch it, I execute the following:
sudo mkdir -p ./jfrog/artifactory/var/etc/
sudo touch ./jfrog/artifactory/var/etc/system.yaml
sudo chown -R 1030:1030 ./jfrog/artifactory/var
sudo chmod -R 777 ./jfrog/artifactory/var
Here is the startup log:
artifactory_startup.log (sorry for uploading the log to Google Drive; it's too large for a Stack Overflow post)
I'm concerned about these lines:
[jfac ] [WARN ] [976f1c9489fa2680] [c.z.h.u.DriverDataSource:70 ] [ocalhost-startStop-1] - Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
[jfac ] [WARN ] [976f1c9489fa2680] [o.j.c.ExecutionUtils:165 ] [pool-8-thread-2 ] - Retry 10 Elapsed 5.22 secs failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
[jfrt ] [ERROR] [ ] [o.j.c.w.FileWatcher:221 ] [Thread-6 ] - Unknown exception while watching for file changes: null
artifactory | java.lang.NullPointerException: null
artifactory | at org.jfrog.config.watch.FileWatcher.lambda$doWatch$2(FileWatcher.java:202)
artifactory | at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
artifactory | at org.jfrog.config.watch.FileWatcher.doWatch(FileWatcher.java:201)
artifactory | at java.base/java.lang.Thread.run(Thread.java:829)
[jfrou] [WARN ] [6424ea6f8b2dc101] [local_topology.go:256 ] [main ] - Readiness test failed with the following error: "required node services are missing or unhealthy"
Please help me find out what these errors mean. Or can I just use the service, and everything is OK?
After some digging, I found out that Tomcat and the JVM couldn't get enough CPU threads. The solution is to give more resources to the containers. I had completely forgotten to read about the system requirements.
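For the container case, the fix can be sketched directly in the compose file. The numbers below are illustrative, not official sizing, so check JFrog's published system requirements; depending on your Compose version, the limits may instead belong under deploy.resources:

```yaml
services:
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss
    # Illustrative floor, not official sizing -- see JFrog's requirements.
    cpus: 4
    mem_limit: 8g
```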
I'm trying to upgrade my current Artifactory 6.23.1 to 7.10.1.
All the upgrade logs are OK, but when I restart the service I get the following errors:
[root ~]# cat /opt/jfrog/artifactory/var/log/console.log | grep ERROR
2020-11-14T14:12:06.384Z [jfrt ] [ERROR] [bca759f3b3ef4148] [o.a.l.v.LoggingVersion:83 ] [ocalhost-startStop-2] - Error occurred while converting logback config for conversion: File '/opt/jfrog/artifactory/var/etc/artifactory/logback.xml' does not exist.
2020-11-14T14:12:06.387Z [jfrt ] [ERROR] [bca759f3b3ef4148] [o.a.l.c.LoggingConverter:69 ] [ocalhost-startStop-2] - Failed to execute logging conversion.
2020-11-14T14:12:20.353Z [jfrt ] [ERROR] [ea891217e8aa6df ] [d.c.m.ConverterManagerImpl:249] [art-init ] - Conversion failed. You should analyze the error and retry launching Artifactory. Error is: The current Artifactory config schema namespace is 'http://artifactory.jfrog.org/xsd/3.1.2' The provided config does not seem to be compliant with it.
2020-11-14T14:12:20.355Z [jfrt ] [ERROR] [ea891217e8aa6df ] [ctoryContextConfigListener:126] [art-init ] - Application could not be initialized: The current Artifactory config schema namespace is 'http://artifactory.jfrog.org/xsd/3.1.2' The provided config does not seem to be compliant with it.
2020-11-14T14:12:22.366Z [jfrt ] [ERROR] [ ] [o.a.w.s.ArtifactoryFilter:213 ] [http-nio-8081-exec-5] - Artifactory failed to initialize: Context is null
2020-11-14T14:15:20.393Z [jffe ] [ERROR] [ ] [ ] [main ] - Error: Error starting application Failed pinging artifactory for 180Request failed with status code 404
2020-11-14T14:15:20.397Z [jffe ] [ERROR] [ ] [ ] [main ] - exit code : 0
And the website is not working (error 500).
What can I do to solve this?
Nicolas, it complains about 2 issues:
1. logback.xml is not in place under the $JFROG_HOME/artifactory/var/etc/artifactory location; use the find command to check the location of logback.xml.
2. The Artifactory config complains about incorrect xsd values. Can you tell me exactly from which version to which version Artifactory was upgraded, and the exact steps? Also, navigate to $JFROG_HOME/artifactory/var/etc/artifactory, open artifactory.config.latest.xml, and share the first 4 lines.
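Both checks can be sketched as shell commands, assuming the default /opt/jfrog install root (adjust JFROG_HOME if yours differs):

```shell
# Assumed default install root -- override JFROG_HOME if needed.
JFROG_HOME=${JFROG_HOME:-/opt/jfrog}
ETC="$JFROG_HOME/artifactory/var/etc/artifactory"

# 1. Locate logback.xml under the 7.x layout.
find "$ETC" -name logback.xml 2>/dev/null || echo "logback.xml not found under $ETC"

# 2. Show the schema header of the current config.
head -n 4 "$ETC/artifactory.config.latest.xml" 2>/dev/null || echo "config file not found"
```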
When I download, unpack, and launch artifactory.sh, I see the following error:
2020-02-26T21:32:50.496Z [jfac ] [ERROR] [c1b4de79a3f11666] [.j.a.s.s.r.JoinServiceImpl:253] [http-nio-8040-exec-1] - Could not validate router Check-url: http://XXXXXX:8082/router/api/v1/system/ping
And then
2020-02-26T21:32:55.636Z [jfac ] [WARN ] [67b9d42698f5614c] [o.j.c.ExecutionUtils:141 ] [pool-6-thread-2 ] - Retry 20 Elapsed 9.04 secs failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
I realize I'm missing something obvious but couldn't figure it out yet. Any suggestions? Thanks.
Alexey, I suspect an IPv6 IP is being picked up by the start script, causing this.
Can you update your system.yaml (it will be in the var/etc/ folder) with the following and try?
shared:
  node:
    ip: <your ipv4 IP>
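To find the IPv4 address to put there, something like this works on most Linux hosts (hostname -I is assumed to be available; it prints all assigned addresses, primary first):

```shell
# First address reported by the kernel; verify it is the interface you expect.
hostname -I | awk '{print $1}'
```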
This might be helpful to someone. I had tried many of the fixes suggested on Google, but none of them worked. Finally, it was resolved by provisioning proper resources: I used 4 GB RAM and 2 cores, and it worked.
Alexey, do you see any errors in:
tomcat localhost log - will be at var/log/tomcat
router_service.log - will be at var/log
access_service.log - will be at var/log
Potential issues may be:
your box is not allowing localhost calls (due to some security setup), or
all dependent services are not starting for some other reason.
Also, please check which script you are using; there is a new artifactory.sh script packed in the app/bin folder, which should be used.
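Concretely, the three log checks above can be sketched as follows (paths assume the default /opt/jfrog root; exact file names can vary slightly between versions):

```shell
VAR=/opt/jfrog/artifactory/var   # adjust to your install root

# Tail the tomcat, router and access service logs in turn.
for f in log/tomcat/localhost.log log/router-service.log log/access-service.log; do
  echo "== $VAR/$f =="
  tail -n 50 "$VAR/$f" 2>/dev/null || echo "(not found yet)"
done
```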
In my case, I had a similar error the moment I tried to access Artifactory. Logs of router-service:
[root@artifactory-master log]# tail -f router-service.log
2020-03-20T22:17:05.328Z [jfrou] [INFO ] [ ] [bootstrap.go:70 ] [main ] - Router (jfrou) service initialization started. Version: 1.1.0 Revision: c2646fcb28e2d4ca095b07aacebe509d934cef77 PID: 19062 Home: /opt/jfrog/artifactory
2020-03-20T22:17:05.329Z [jfrou] [INFO ] [ ] [bootstrap.go:73 ] [main ] - JFrog Router IP: ::1
2020-03-20T22:17:05.334Z [jfrou] [INFO ] [ ] [bootstrap.go:159 ] [main ] - System configuration encryption report:
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2020-03-20T22:17:05.336Z [jfrou] [INFO ] [ ] [bootstrap.go:78 ] [main ] - JFrog Router Service ID: jfrou#01e3wgemz9esckmd8v48etdy18
2020-03-20T22:17:05.336Z [jfrou] [INFO ] [ ] [bootstrap.go:79 ] [main ] - JFrog Router Node ID: artifactory-master
2020-03-20T22:17:07.354Z [jfrou] [INFO ] [ ] [config_holder.go:107 ] [main ] - configuration update detected
2020-03-20T22:17:10.738Z [jfrou] [FATAL] [ ] [bootstrap.go:100 ] [main ] - Cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 400; message: Could not validate router Check-url: http://::1:8082/router/api/v1/system/ping; detail: I/O error on GET request for "http:///:1:8082/router/api/v1/system/ping": URI does not specify a valid host name: http:///:1:8082/router/api/v1/system/ping; nested exception is org.apache.http.client.ClientProtocolException: URI does not specify a valid host name: http:///:1:8082/router/api/v1/system/ping
To give you some context, I am running Artifactory in a CentOS 8 VM, and I'm accessing the Artifactory graphical interface from a Windows machine.
That means I am using a web browser (Chrome) to navigate to the Artifactory instance.
For that, on the VM side I updated the "hosts" and "hostname" files (in /etc/):
hosts:
127.0.0.1 localhost artifactory-master
::1 localhost artifactory-master
hostname:
artifactory-master
On the Windows machine, I updated the hosts file located in "C:\Windows\System32\drivers\etc" with the VM host IP and hostname:
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
192.100.100.10 artifactory-master
(to get the IP of the VM, run the command ifconfig)
Then, I started artifactory by running the command:
service artifactory start
And I tried to access Artifactory in the browser, unsuccessfully:
http://artifactory-master:8082/ui
I stopped the service, and after some tryouts I realized that, to accomplish my goal, I had to comment out the "::1" address in the /etc/hosts file:
hosts:
127.0.0.1 localhost artifactory-master
#::1 localhost artifactory-master
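The same edit can be scripted. This sketch prepares the change on a scratch copy so it can be reviewed before being installed over /etc/hosts with root privileges:

```shell
# Comment out the IPv6 loopback mapping on a scratch copy first.
cp /etc/hosts /tmp/hosts.new
sed -i 's/^::1/#::1/' /tmp/hosts.new
diff /etc/hosts /tmp/hosts.new || true   # review the change
# then, as root:  cp /tmp/hosts.new /etc/hosts
```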
Finally, I started the service again, and I was able to access Artifactory. Logs of router-service:
2020-03-24T23:02:17.219Z [jfrou] [INFO ] [ ] [bootstrap.go:70 ] [main ] - Router (jfrou) service initialization started. Version: 1.1.0 Revision: c2646fcb28e2d4ca095b07aacebe509d934cef77 PID: 14542 Home: /opt/jfrog/artifactory
2020-03-24T23:02:17.220Z [jfrou] [INFO ] [ ] [bootstrap.go:73 ] [main ] - JFrog Router IP: 127.0.0.1
2020-03-24T23:02:17.224Z [jfrou] [INFO ] [ ] [bootstrap.go:159 ] [main ] - System configuration encryption report:
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2020-03-24T23:02:17.227Z [jfrou] [INFO ] [ ] [bootstrap.go:78 ] [main ] - JFrog Router Service ID: jfrou#01e3wgemz9esckmd8v48etdy18
2020-03-24T23:02:17.227Z [jfrou] [INFO ] [ ] [bootstrap.go:79 ] [main ] - JFrog Router Node ID: artifactory-master
2020-03-24T23:02:19.572Z [jfrou] [INFO ] [ ] [config_holder.go:107 ] [main ] - configuration update detected
2020-03-24T23:02:25.663Z [jfrou] [INFO ] [ ] [join_executor.go:180 ] [main ] - Cluster join: Successfully joined the cluster
2020-03-24T23:02:25.813Z [jfrou] [INFO ] [ ] [registry_handler.go:89 ] [main ] - the following services were registered automatically based on persisted data: jfac#01e3wgdn6q0gvj0czswc8k0gp8, jffe#000, jfmd#01e3wges9tvwawj403y5mxfjp7, jfrt#01e3wgfass87mh1nbcv5rv1t98
2020-03-24T23:02:25.984Z [jfrou] [INFO ] [ ] [main.go:36 ] [main ] - Router (jfrou) service initialization completed in 8.808 seconds. Listening on port: 8082
2020-03-24T23:03:01.281Z [jfrou] [INFO ] [7e7df2f621a4e1aa] [local_topology.go:212 ] [main ] -
###############################################################
### All services started successfully in 44.081 seconds ###
###############################################################
PS: my Artifactory version is OSS 7.2.1.
We have put the IP in our node settings, and this doesn't work.
We can see that the router is still using localhost and the ::1 IPv6 address, despite our system.yaml being indented correctly.
This was working fine (the system had been running for more than 4 months), and then with the most recent update it started to fail.
Does anyone have anything better than "check the file", something that actually addresses the issue? The issue is the following:
Operating systems generally ship with both 127.0.0.1 and ::1 mapped to localhost.
The JFrog router is "dumb" in that it picks up the URL https://localhost:8046, but then resolves it to ::1 (the IPv6 catch-all).
Many SO tickets show @prasanna and others making "edits" to the file, but even with correct syntax, we can see JFrog's product is not doing what it says it's doing.
Here is an example of system.yaml (you can see that the indents are correct; in fact, this comes from the system.yaml-full-example template that JFrog provides):
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
  ## Security Configuration
  security:
    ## Join key value for joining the cluster (takes precedence over 'joinKeyFile')
    #joinKey: ""
    ## Join key file location
    #joinKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/join.key>"
    ## Master key file location
    ## Generated by the product on first startup if not provided
    #masterKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/master.key>"
    ## Maximum time to wait for key files (master.key and join.key)
    #bootstrapKeysReadTimeoutSecs: 120
  ## Node Settings
  node:
    ## A unique id to identify this node.
    ## Default auto generated at startup.
    id: "art-00"
    ## Default auto resolved by startup script
    ip: 10.x.34.63 (x is there on purpose)
    ## Sets this node as primary in HA installation
You can plainly see in the logs at startup what is happening, as the OP showed.
SAMPLE LOG
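One workaround worth trying while the yaml value is being ignored: the shared.node settings can also be supplied as environment variables, as the docker-compose question earlier in this thread does. Note that a value set in system.yaml may still take precedence, so comment it out there first. The address below is illustrative:

```shell
# Illustrative address -- use your node's real IPv4 address.
export JF_SHARED_NODE_IP=10.0.0.5
export JF_SHARED_NODE_ID=art-00
echo "router should now bind to $JF_SHARED_NODE_IP"
# then restart the service, e.g.:
#   $JFROG_HOME/artifactory/app/bin/artifactory.sh restart
```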
Currently, I am using Corda V3.1, and there is one issue whose root cause I could not figure out. The error occurs when the application processes a transaction; it hangs at the last step in the logs below:
>> Verifying contractCode constraints.
>> Signing transaction with our private key.
>> Collecting signatures from counterparties.
>> Done
>> Obtaining notary signature and recording transaction.
>> Requesting signature by notary service
>> Requesting signature by Notary service (hangs here)
I didn't make any changes, but it stopped working. From the log, I could see:
[INFO ] 2018-06-10T07:06:35,287Z [main] BasicInfo.printBasicNodeInfo - Node for "Notary" started up and registered in 42.91 sec {}
[INFO ] 2018-06-10T07:06:40,305Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005], legalIdentitiesAndCerts=[O=CompanyA, L=London, C=GB], platformVersion=3, serial=1528610763747) {}
[INFO ] 2018-06-10T07:06:40,336Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Previous node was identical to incoming one - doing nothing {}
[INFO ] 2018-06-10T07:06:40,336Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005], legalIdentitiesAndCerts=[O=CompanyA, L=London, C=GB], platformVersion=3, serial=1528610763747) {}
[INFO ] 2018-06-10T07:06:40,336Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10008], legalIdentitiesAndCerts=[O=CompanyB, L=New York, C=US], platformVersion=3, serial=1528610765829) {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Previous node was identical to incoming one - doing nothing {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Done adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10008], legalIdentitiesAndCerts=[O=CompanyB, L=New York, C=US], platformVersion=3, serial=1528610765829) {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Adding node with info: NodeInfo(addresses=[[2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10002], legalIdentitiesAndCerts=[O=Notary, L=London, C=GB], platformVersion=3, serial=1528610765215) {}
[INFO ] 2018-06-10T07:06:40,352Z [RxIoScheduler-2] network.PersistentNetworkMapCache.addNode - Discarding older nodeInfo for O=Notary, L=London, C=GB {}
[INFO ] 2018-06-10T07:06:53,654Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:06:54,663Z [nioEventLoopGroup-2-2] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:15,687Z [nioEventLoopGroup-2-3] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:16,696Z [nioEventLoopGroup-2-4] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:37,720Z [nioEventLoopGroup-2-5] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:38,728Z [nioEventLoopGroup-2-6] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:07:59,747Z [nioEventLoopGroup-2-7] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:08:00,747Z [nioEventLoopGroup-2-8] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:08:21,768Z [nioEventLoopGroup-2-9] netty.AMQPClient.operationComplete - Failed to connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
[INFO ] 2018-06-10T07:08:22,779Z [nioEventLoopGroup-2-10] netty.AMQPClient.run - Retry connect to [2002:aafc:ce75:1007:34eb:f37b:e811:c350]:10005 {}
The last two steps repeat again and again. The only way I have found to resolve it is to clean and re-deploy the nodes, but that surely is not the correct approach. Can anyone help with this? Thanks a lot.
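For what it's worth, the retries above all target a 6to4 IPv6 address (2002:…) that the other nodes evidently cannot reach. One hedged thing to check before a full re-deploy is the p2pAddress each node advertises in its node.conf, and pin it to a hostname or IPv4 address that is actually reachable (the value below is illustrative):

```
# build/nodes/<NodeName>/node.conf -- illustrative fragment (HOCON)
p2pAddress = "localhost:10005"
```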
It's not clear from your description exactly how you were running your Corda nodes.
The issue is that the Corda nodes are having trouble communicating with each other, but it's not clear why. If they were running on localhost, then this is really strange.
If you're running them in the cloud, then I'd try regenerating your node configuration, or take another look at the network-map Corda node, as it has definitely gotten wonky.
It could also be that the CorDapp it's trying to run is making mistakes when executing on the nodes or the notary.
You may have an easier time getting this to work with some of the newer developer samples, in order to determine whether Corda updates have solved this problem.
The most basic sample, which basically always works, is the Yo! CorDapp (https://github.com/corda/samples-java/tree/master/Basic/yo-cordapp). Try running it to see whether you can isolate the problem to your flows or to Corda itself.