JFrog Artifactory OSS - Getting 401 unauthorized error when trying the Set Me Up or Edit User - artifactory

I am using JFrog Artifactory OSS 7.46.8 (Self hosted - docker version)
When using the "Set Me Up" option or editing any user from Admin view, basically any operation that calls the "/ui/api/v1/ui/userApiKey/" endpoint fails with 401.
I enabled debug logging, and below are the logs I was able to find related to this.
Error generated when user logs in:
2022-12-14T08:18:28.679Z [jfrt ] [DEBUG] [ ] [o.a.o.ArtifactoryTracer:78 ] [http-nio-8081-exec-5] - Found existing Tracing Span. A new child Server Tracing Span will be associated with it
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [.AuthenticationFilterUtils:146] [http-nio-8081-exec-5] - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:434 ] [http-nio-8081-exec-5] - Cached key has been found for request: '/artifactory/api/onboarding/initStatus' with method: 'GET'
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:232 ] [http-nio-8081-exec-5] - Non-UI authentication cache accessed
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:227 ] [http-nio-8081-exec-5] - UI authentication cache accessed
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-5] - User-Changed authentication cache accessed
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:440 ] [http-nio-8081-exec-5] - Header authentication AccessTokenAuthentication [Principal=admin, Credentials=[PROTECTED], Authenticated=true, Details=null, Granted Authorities=[]] found in cache.
2022-12-14T08:18:28.681Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.RepoFilter:113 ] [http-nio-8081-exec-5] - Entering request GET ("IP-REMOVED") /api/onboarding/initStatus.
2022-12-14T08:18:28.681Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.j.a.c.h.AccessHttpClient:128] [http-nio-8081-exec-5] - Executing : POST http://localhost:8046/access/api/v1/auth/authenticate
2022-12-14T08:18:28.692Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [rtifactoryInitStatusService:91] [http-nio-8081-exec-5] - Unable to authenticate user: 'admin'
org.jfrog.access.client.AccessClientHttpException: HTTP response status 401:Failed on executing request. Response: {
"errors" : [ {
"code" : "UNAUTHORIZED",
"message" : "Invalid credentials"
} ]
}
at org.jfrog.access.client.http.AccessHttpClient.createRestResponse(AccessHttpClient.java:184)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:131)
at org.jfrog.access.client.auth.AuthClientImpl.authenticate(AuthClientImpl.java:93)
at org.artifactory.ui.rest.service.onboarding.GetArtifactoryInitStatusService.execute(GetArtifactoryInitStatusService.java:86)
at org.artifactory.rest.common.service.ServiceExecutor.process(ServiceExecutor.java:39)
at org.artifactory.rest.common.resource.BaseResource.runService(BaseResource.java:141)
at org.artifactory.ui.rest.resource.onboarding.ArtifactoryOnboardingResource.getInitStatus(ArtifactoryOnboardingResource.java:62)
at jdk.internal.reflect.GeneratedMethodAccessor957.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
Error generated when the userApiKey endpoint is called:
2022-12-14T08:22:13.600Z [jfrt ] [DEBUG] [61b44ab36079c799] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-3] - User-Changed authentication cache accessed
2022-12-14T08:22:13.601Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.j.a.c.h.AccessHttpClient:128] [http-nio-8081-exec-2] - Executing : POST http://localhost:8046/access/api/v1/auth/authenticate
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [nRepositoryResponseWrapper:230] [http-nio-8081-exec-8] - Skip invoking on
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [o.a.w.s.RepoFilter:208 ] [http-nio-8081-exec-8] - Exiting request GET (127.0.0.1) /api/securityconfig
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-8] - User-Changed authentication cache accessed
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-8] - User-Changed authentication cache accessed
2022-12-14T08:22:13.604Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [PassAuthenticationProvider:126] [http-nio-8081-exec-2] - 401, Access didn't authenticate user: 'admin'
2022-12-14T08:22:13.605Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [priseAuthenticationProvider:87] [http-nio-8081-exec-2] - Non docker request. Github provider supports only docker requests.
2022-12-14T08:22:13.605Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.j.a.c.h.AccessHttpClient:128] [http-nio-8081-exec-2] - Executing : GET http://localhost:8046/access/api/v1/users/admin?expand=groups
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [nRepositoryResponseWrapper:230] [http-nio-8081-exec-2] - Skip invoking on
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.a.w.s.RepoFilter:208 ] [http-nio-8081-exec-2] - Exiting request GET ("IP-REMOVED") /api/userApiKey/admin
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-2] - User-Changed authentication cache accessed
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-2] - User-Changed authentication cache accessed
2022-12-14T08:22:13.610Z [jffe ] [ERROR] [03e9a4f8ab10ac74] [frontend-service.log] [main ] - Error: Request failed with status code 401
at createError (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/createError.js:16:15)
at settle (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/adapters/http.js:322:11)
at IncomingMessage.emit (node:events:539:35)
at endReadableNT (node:internal/streams/readable:1345:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
What could be the solution to this?

This can be a problem with your load balancer or reverse proxy. Check whether it allows the "OPTIONS" HTTP method; at least that was the cause in a similar case I had.
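A quick way to check this is to compare how the proxy and the Artifactory router respond to an OPTIONS request. This is only a sketch: the proxy hostname is a placeholder, and port 8082 assumes the default Artifactory router port on the host itself.

```shell
# Hypothetical proxy address -- replace with your reverse-proxy / load-balancer URL.
BASE=https://artifactory.example.com

# Send an OPTIONS request through the proxy and print only the status code.
# A 405/403 here, while the direct request below succeeds, points at the proxy.
curl -s -o /dev/null -w "OPTIONS via proxy: %{http_code}\n" \
  -X OPTIONS "$BASE/ui/api/v1/ui/userApiKey/"

# Bypass the proxy and hit the Artifactory router directly for comparison
# (run this on the Artifactory host; 8082 is the default router port).
curl -s -o /dev/null -w "OPTIONS direct:    %{http_code}\n" \
  -X OPTIONS "http://localhost:8082/ui/api/v1/ui/userApiKey/"
```

If the two status codes differ, the proxy configuration (method filtering or CORS handling) is the place to look.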

Related

artifactory after upgrade to 7.47.11 not started - Unexpected response status code: 500

My original version was not that old (around 7.41.x), but after updating to the latest version Artifactory no longer starts. I can't pinpoint the problem, as I don't understand the architecture well. Please help me figure it out. There are no specific errors in the logs.
Artifactory status web page
As far as I understand, the problem is that for some reason the service that implements the user interface on port 8070 does not start.
There is nothing interesting in the logs for this service.
The last errors in console.log:
2022-12-10T19:48:33.842Z [jfrou] [ERROR] [5fb2daba52b5d736] [healthcheck.go:66 ] [main ] [] - Checking health of service 'jffe_000-artifactory' using URL 'http://localhost:8070/readiness' returned an error: Get "http://localhost:8070/readiness": dial tcp 127.0.0.1:8070: connect: connection refused
2022-12-10T19:48:33.852Z [jfrou] [ERROR] [5fb2daba52b5d736] [healthcheck.go:71 ] [main ] [] - Checking health of service 'jfrt_01d96rcgsyfzkj1xnv8zqe1snm-artifactory' using URL 'http://localhost:8091/artifactory/api/v1/system/readiness' returned unexpected response status: 500
2022-12-10T19:48:33.853Z [jfrou] [WARN ] [5fb2daba52b5d736] [local_topology.go:274 ] [main ] [] - Readiness test failed with the following error: "required node services are missing or unhealthy"
2022-12-10T19:48:35.991Z [jfac ] [ERROR] [52b88c5fcb86903d] [o.j.c.ExecutionUtils:190 ] [jf-common-pool-2 ] - Router readiness check failed, cannot start Access
2022-12-10T19:48:35.994Z [jfac ] [ERROR] [52b88c5fcb86903d] [a.s.b.AccessServerRegistrar:79] [jf-common-pool-1 ] - Could not register access
2022-12-10T20:09:28.988Z [jfrou] [ERROR] [37b0a206df08bfae] [healthcheck.go:66 ] [main ] [] - Checking health of service 'jffe_000-artifactory' using URL 'http://localhost:8070/readiness' returned an error: Get "http://localhost:8070/readiness": dial tcp 127.0.0.1:8070: connect: connection refused
2022-12-10T20:09:28.995Z [jfrou] [ERROR] [37b0a206df08bfae] [healthcheck.go:71 ] [main ] [] - Checking health of service 'jfrt_01d96rcgsyfzkj1xnv8zqe1snm-artifactory' using URL 'http://localhost:8091/artifactory/api/v1/system/readiness' returned unexpected response status: 500
I checked the access rights, the availability of the database, and the correctness of the configuration files. I also tried running ./artifactory.sh and watching the launch progress.
[jfrou] [ERROR] [1dd615d3e5a6357a] [healthcheck.go:71 ] [main ] [] - Checking health of service 'jfrt_01d96rcgsyfzkj1xnv8zqe1snm-artifactory' using URL 'http://localhost:8091/artifactory/api/v1/system/readiness' returned unexpected response status: 500
curl http://localhost:8091/artifactory/api/v1/system/readiness
{
"errors" : [ {
"status" : 500,
"message" : "Bad credentials"
} ]
}
Update:
I spent more than 10 hours on the analysis and turned on debug logging. I found the problem below; maybe this is the root cause.
I haven't found a way to fix it yet.
2022-12-11T06:33:14.769Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [.AuthenticationFilterUtils:146] [http-nio-8081-exec-8] - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2022-12-11T06:33:14.769Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:519 ] [http-nio-8081-exec-8] - Using anonymous
2022-12-11T06:33:14.770Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:232 ] [http-nio-8081-exec-8] - Non-UI authentication cache accessed
2022-12-11T06:33:14.770Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:227 ] [http-nio-8081-exec-8] - UI authentication cache accessed
2022-12-11T06:33:14.770Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:523 ] [http-nio-8081-exec-8] - Creating the Anonymous token
2022-12-11T06:33:14.775Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [PassAuthenticationProvider:126] [http-nio-8081-exec-8] - 401, Access didn't authenticate user: 'anonymous'
2022-12-11T06:33:14.776Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [priseAuthenticationProvider:87] [http-nio-8081-exec-8] - Non docker request. Github provider supports only docker requests.
2022-12-11T06:33:14.781Z [jfrt ] [DEBUG] [ ] [o.a.w.s.ArtifactoryFilter:130 ] [http-nio-8081-exec-8] - org.artifactory.webapp.servlet.ArtifactoryFilter
org.springframework.security.authentication.BadCredentialsException: Bad credentials
at org.artifactory.security.db.DbAuthenticationProvider.additionalAuthenticationChecks(DbAuthenticationProvider.java:64)
at org.springframework.security.authentication.dao.AbstractUserDetailsAuthenticationProvider.authenticate(AbstractUserDetailsAuthenticationProvider.java:147)
at org.artifactory.security.db.DbAuthenticationProvider.authenticate(DbAuthenticationProvider.java:53)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:182)
at org.artifactory.security.RealmAwareAuthenticationManager.authenticate(RealmAwareAuthenticationManager.java:68)
at org.artifactory.webapp.servlet.AccessFilter.useAnonymousIfPossible(AccessFilter.java:533)
at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:301)
at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:218)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:83)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:67)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:38)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:289)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:924)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1743)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:833)

Stuck in the partial helm release on Terraform to Kubernetes

I'm trying to apply a Terraform resource (helm_release) to k8s, and the apply command failed halfway through.
I checked the pod issue; now I need to update some values in the local chart.
Now I'm in a dilemma: I can't apply the helm_release because the names are in use, and I can't destroy the helm_release since it was never fully created.
It seems to me the only option is to manually delete the k8s resources that were created by the helm_release chart?
Here is the terraform for helm_release:
cat nginx-arm64.tf
resource "helm_release" "nginx-ingress" {
  name  = "nginx-ingress"
  chart = "/data/terraform/k8s/nginx-ingress-controller-arm64.tgz"
}
BTW: I need to use the local chart as the official chart does not support the ARM64 architecture.
Thanks,
Edit #1:
Here is the list of Helm releases -> there is no nginx ingress:
/data/terraform/k8s$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager default 1 2021-12-08 20:57:38.979176622 +0000 UTC deployed cert-manager-v1.5.0 v1.5.0
/data/terraform/k8s$
Here is the describe pod output:
$ k describe pod/nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr
Name: nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr
Namespace: default
Priority: 0
Node: ocifreevmalways/10.0.0.189
Start Time: Wed, 08 Dec 2021 11:11:59 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=nginx-ingress-controller
helm.sh/chart=nginx-ingress-controller-9.0.9
pod-template-hash=99cddc76b
Annotations: <none>
Status: Running
IP: 10.244.0.22
IPs:
IP: 10.244.0.22
Controlled By: ReplicaSet/nginx-ingress-nginx-ingress-controller-99cddc76b
Containers:
controller:
Container ID: docker://0b75f5f68ef35dfb7dc5b90f9d1c249fad692855159f4e969324fc4e2ee61654
Image: docker.io/rancher/nginx-ingress-controller:nginx-1.1.0-rancher1
Image ID: docker-pullable://rancher/nginx-ingress-controller@sha256:177fb5dc79adcd16cb6c15d6c42cef31988b116cb148845893b6b954d7d593bc
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=default/nginx-ingress-nginx-ingress-controller-default-backend
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--configmap=default/nginx-ingress-nginx-ingress-controller
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Wed, 08 Dec 2021 22:02:15 +0000
Finished: Wed, 08 Dec 2021 22:02:15 +0000
Ready: False
Restart Count: 132
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzqqn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wzqqn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 8m38s (x132 over 10h) kubelet Container image "docker.io/rancher/nginx-ingress-controller:nginx-1.1.0-rancher1" already present on machine
Warning BackOff 3m39s (x3201 over 10h) kubelet Back-off restarting failed container
The terraform state list shows nothing:
/data/terraform/k8s$ t state list
/data/terraform/k8s$
Though the terraform.tfstate.backup shows the nginx ingress (I guess I ran the destroy command in between?):
/data/terraform/k8s$ cat terraform.tfstate.backup
{
"version": 4,
"terraform_version": "1.0.11",
"serial": 28,
"lineage": "30e74aa5-9631-f82f-61a2-7bdbd97c2276",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "helm_release",
"name": "nginx-ingress",
"provider": "provider[\"registry.terraform.io/hashicorp/helm\"]",
"instances": [
{
"status": "tainted",
"schema_version": 0,
"attributes": {
"atomic": false,
"chart": "/data/terraform/k8s/nginx-ingress-controller-arm64.tgz",
"cleanup_on_fail": false,
"create_namespace": false,
"dependency_update": false,
"description": null,
"devel": null,
"disable_crd_hooks": false,
"disable_openapi_validation": false,
"disable_webhooks": false,
"force_update": false,
"id": "nginx-ingress",
"keyring": null,
"lint": false,
"manifest": null,
"max_history": 0,
"metadata": [
{
"app_version": "1.1.0",
"chart": "nginx-ingress-controller",
"name": "nginx-ingress",
"namespace": "default",
"revision": 1,
"values": "{}",
"version": "9.0.9"
}
],
"name": "nginx-ingress",
"namespace": "default",
"postrender": [],
"recreate_pods": false,
"render_subchart_notes": true,
"replace": false,
"repository": null,
"repository_ca_file": null,
"repository_cert_file": null,
"repository_key_file": null,
"repository_password": null,
"repository_username": null,
"reset_values": false,
"reuse_values": false,
"set": [],
"set_sensitive": [],
"skip_crds": false,
"status": "failed",
"timeout": 300,
"values": null,
"verify": false,
"version": "9.0.9",
"wait": true,
"wait_for_jobs": false
},
"sensitive_attributes": [],
"private": "bnVsbA=="
}
]
}
]
}
When I try to apply again in the same directory, it shows the error again:
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
helm_release.nginx-ingress: Creating...
╷
│ Error: cannot re-use a name that is still in use
│
│ with helm_release.nginx-ingress,
│ on nginx-arm64.tf line 1, in resource "helm_release" "nginx-ingress":
│ 1: resource "helm_release" "nginx-ingress" {
Please share your thoughts. Thanks.
Edit2:
The DEBUG logs show some more clues:
2021-12-09T04:30:14.118Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceDiff: nginx-ingress] Release validated: timestamp=2021-12-09T04:30:14.118Z
2021-12-09T04:30:14.118Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceDiff: nginx-ingress] Done: timestamp=2021-12-09T04:30:14.118Z
2021-12-09T04:30:14.119Z [WARN] Provider "registry.terraform.io/hashicorp/helm" produced an invalid plan for helm_release.nginx-ingress, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .cleanup_on_fail: planned value cty.False for a non-computed attribute
- .create_namespace: planned value cty.False for a non-computed attribute
- .verify: planned value cty.False for a non-computed attribute
- .recreate_pods: planned value cty.False for a non-computed attribute
- .render_subchart_notes: planned value cty.True for a non-computed attribute
- .replace: planned value cty.False for a non-computed attribute
- .reset_values: planned value cty.False for a non-computed attribute
- .disable_crd_hooks: planned value cty.False for a non-computed attribute
- .lint: planned value cty.False for a non-computed attribute
- .namespace: planned value cty.StringVal("default") for a non-computed attribute
- .skip_crds: planned value cty.False for a non-computed attribute
- .disable_webhooks: planned value cty.False for a non-computed attribute
- .force_update: planned value cty.False for a non-computed attribute
- .timeout: planned value cty.NumberIntVal(300) for a non-computed attribute
- .reuse_values: planned value cty.False for a non-computed attribute
- .dependency_update: planned value cty.False for a non-computed attribute
- .disable_openapi_validation: planned value cty.False for a non-computed attribute
- .atomic: planned value cty.False for a non-computed attribute
- .wait: planned value cty.True for a non-computed attribute
- .max_history: planned value cty.NumberIntVal(0) for a non-computed attribute
- .wait_for_jobs: planned value cty.False for a non-computed attribute
helm_release.nginx-ingress: Creating...
2021-12-09T04:30:14.119Z [INFO] Starting apply for helm_release.nginx-ingress
2021-12-09T04:30:14.119Z [INFO] Starting apply for helm_release.nginx-ingress
2021-12-09T04:30:14.119Z [DEBUG] helm_release.nginx-ingress: applying the planned Create change
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] setting computed for "metadata" from ComputedKeys: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Started: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Getting helm configuration: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [INFO] GetHelmConfiguration start: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] Using kubeconfig: /home/ubuntu/.kube/config: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [INFO] Successfully initialized kubernetes config: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.121Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [INFO] GetHelmConfiguration success: timestamp=2021-12-09T04:30:14.121Z
2021-12-09T04:30:14.121Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Getting chart: timestamp=2021-12-09T04:30:14.121Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Preparing for installation: timestamp=2021-12-09T04:30:14.125Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 ---[ values.yaml ]-----------------------------------
{}: timestamp=2021-12-09T04:30:14.125Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Installing chart: timestamp=2021-12-09T04:30:14.125Z
╷
│ Error: cannot re-use a name that is still in use
│
│ with helm_release.nginx-ingress,
│ on nginx-arm64.tf line 1, in resource "helm_release" "nginx-ingress":
│ 1: resource "helm_release" "nginx-ingress" {
│
╵
2021-12-09T04:30:14.158Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-12-09T04:30:14.160Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/helm/2.4.1/linux_arm64/terraform-provider-helm_v2.4.1_x5 pid=558800
2021-12-09T04:30:14.160Z [DEBUG] provider: plugin exited
You don't have to manually delete all the resources using kubectl. Under the hood the Terraform Helm provider still uses Helm. So if you run helm list -A you will see all the Helm releases on your cluster, including the nginx-ingress release. Deleting the release is then done via helm uninstall nginx-ingress -n REPLACE_WITH_YOUR_NAMESPACE.
Before re-running terraform apply do check if the Helm release is still in your Terraform state via terraform state list (run this from the same directory as where you run terraform apply from). If you don't see helm_release.nginx-ingress in that list then it is not in your Terraform state and you can just rerun your terraform apply. Else you have to delete it via terraform state rm helm_release.nginx-ingress and then you can run terraform apply again.
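The recovery steps above can be sketched as a short shell session. The release name and resource address are taken from the question; the namespace is a placeholder you must adjust.

```shell
# 1. Check whether Helm itself still knows about the release.
helm list -A

# 2. If the nginx-ingress release shows up, remove it with Helm
#    (replace "default" with the namespace shown by helm list).
helm uninstall nginx-ingress -n default

# 3. Check whether Terraform still tracks the (tainted) release.
#    Run this from the same directory you run terraform apply from.
terraform state list

# 4. If helm_release.nginx-ingress appears in the list, drop it from
#    state so Terraform forgets the half-created release, then re-apply.
terraform state rm helm_release.nginx-ingress
terraform apply
```

Steps 2 and 4 are each conditional on what steps 1 and 3 report; skip them when the release or state entry is already gone.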
I just faced a similar issue, but in my case there was neither a Terraform state entry for the helm_release nor a Helm release.
So helm list -A (and helm list in the current namespace) showed nothing.
I found this issue that solved it: helm/helm#4174
With Helm 3, all releases metadata are saved as Secrets in the same
Namespace of the release. If you got "cannot re-use a name that is
still in use", this means you may need to check some orphan secrets
and delete them
and after deleting them it started working.
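That cleanup can be sketched as follows. Helm 3 stores release metadata as Secrets of type helm.sh/release.v1, named sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt;, in the release's namespace. The release name and namespace below are taken from the question's example; adjust both, and check what the first command lists before deleting anything.

```shell
# List Helm 3 release secrets across all namespaces and look for the
# orphaned nginx-ingress entries.
kubectl get secrets -A --field-selector type=helm.sh/release.v1 | grep nginx-ingress

# Delete the orphaned release secret(s) so the release name can be
# reused (example secret name for revision 1 in the default namespace).
kubectl delete secret sh.helm.release.v1.nginx-ingress.v1 -n default
```

Once the orphan secrets are gone, "cannot re-use a name that is still in use" should no longer be raised for that release name.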

JFrog Artifactory failed to initialize with error 500

I installed Artifactory on a CentOS VM using the open source RPM package, but it fails to initialize.
{
"errors" : [ {
"status" : 500,
"message" : "Artifactory failed to initialize: check Artifactory logs for errors."
} ]
}
I tried adding the following to the system.yaml file:
shared:
  node:
    ip: <your ipv4 IP>
Checking the console log shows the error below:
Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
2021-01-05T10:36:18.597Z [jfrt ] [ERROR] [f61fe454765979f3] [ctoryContextConfigListener:126] [art-init ] - Application could not be initialized: Connection refused (Connection refused)
java.lang.reflect.InvocationTargetException: null
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.artifactory.lifecycle.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:265)
at org.artifactory.lifecycle.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:122)
Caused by: org.springframework.beans.factory.BeanInitializationException: Failed to initialize bean 'org.artifactory.security.access.AccessService'.; nested exception is java.lang.reflect.UndeclaredThrowableException
at org.artifactory.spring.ArtifactoryApplicationContext.initReloadableBeans(ArtifactoryApplicationContext.java:302)
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:284)
at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:174)
... 6 common frames omitted
Caused by: java.lang.reflect.UndeclaredThrowableException: null
at com.sun.proxy.$Proxy184.init(Unknown Source)
at org.artifactory.spring.ArtifactoryApplicationContext.initReloadableBeans(ArtifactoryApplicationContext.java:300)
... 8 common frames omitted
Caused by: java.util.concurrent.ExecutionException: org.jfrog.common.ExecutionFailed: Cluster join: Service registry ping failed; Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.jfrog.access.client.AccessServerStartupValidator.waitForServer(AccessServerStartupValidator.java:39)
at org.jfrog.access.client.AccessClientBootstrap.waitForServer(AccessClientBootstrap.java:149)
at org.jfrog.access.client.AccessClientBootstrap.<init>(AccessClientBootstrap.java:104)
at org.jfrog.access.client.AccessClientBootstrap.<init>(AccessClientBootstrap.java:134)
at org.artifactory.security.access.AccessServiceImpl.bootstrapAccessClient(AccessServiceImpl.java:1290)
at org.artifactory.security.access.AccessServiceImpl.lambda$bootstrapAccessClient$23(AccessServiceImpl.java:1251)
at io.vavr.control.Try.mapTry(Try.java:634)
at io.vavr.control.Try.map(Try.java:585)
at org.artifactory.security.access.AccessServiceImpl.bootstrapAccessClient(AccessServiceImpl.java:1251)
at org.artifactory.security.access.AccessServiceImpl.initAccessService(AccessServiceImpl.java:421)
at org.artifactory.security.access.AccessServiceImpl.initAccessClientIfNeeded(AccessServiceImpl.java:410)
at org.artifactory.security.access.AccessServiceImpl.init(AccessServiceImpl.java:403)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:367)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118)
at org.artifactory.storage.fs.lock.aop.LockingAdvice.invoke(LockingAdvice.java:76)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
... 10 common frames omitted
Caused by: org.jfrog.common.ExecutionFailed: Cluster join: Service registry ping failed; Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.jfrog.common.ExecutionUtils.handleStopError(ExecutionUtils.java:156)
at org.jfrog.common.ExecutionUtils.handleFunctionExecution(ExecutionUtils.java:103)
at org.jfrog.common.ExecutionUtils.lambda$generateExecutionRunnable$0(ExecutionUtils.java:67)
at org.jfrog.common.ExecutionUtils$MDCRunnableDecorator.run(ExecutionUtils.java:172)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.jfrog.common.RetryException: Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.jfrog.access.client.AccessServerStartupValidator.convertToRetryException(AccessServerStartupValidator.java:56)
at io.vavr.API$Match$Case0.apply(API.java:5135)
at io.vavr.API$Match.option(API.java:5105)
at io.vavr.control.Try.mapFailure(Try.java:602)
at org.jfrog.access.client.AccessServerStartupValidator.pingAccess(AccessServerStartupValidator.java:46)
at org.jfrog.common.ExecutionUtils.handleFunctionExecution(ExecutionUtils.java:100)
... 7 common frames omitted
Caused by: org.jfrog.access.client.AccessClientException: Unable to connect to Access server: Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:143)
at org.jfrog.access.client.http.AccessHttpClient.ping(AccessHttpClient.java:114)
at org.jfrog.access.client.AccessClientImpl.ping(AccessClientImpl.java:252)
at io.vavr.control.Try.run(Try.java:118)
at org.jfrog.access.client.AccessServerStartupValidator.pingAccess(AccessServerStartupValidator.java:45)
... 8 common frames omitted
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at org.jfrog.client.http.CloseableHttpClientDecorator.doExecute(CloseableHttpClientDecorator.java:109)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:130)
... 12 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
... 24 common frames omitted
2021-01-05T10:36:19.367Z [jfrt ] [ERROR] [ ] [o.a.w.s.ArtifactoryFilter:213 ] [http-nio-8081-exec-8] - Artifactory failed to initialize: Context is null
2021-01-05T10:36:21.058Z [jfac ] [WARN ] [f455f35e70ecf56 ] [o.j.c.ExecutionUtils:142 ] [pool-23-thread-2 ] - Retry 180 Elapsed 1.52 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
Below is the snippet of router.log:
2021-01-05T10:34:55.424Z [jfrou] [FATAL] [112e7162aa72477c] [bootstrap.go:101 ] [main ] - Could not join access, err: Cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 400; message: Could not validate router Check-url: http://108.167.159.189:8082/router/api/v1/system/ping; detail: I/O error on GET request for "http://108.167.159.189:8082/router/api/v1/system/ping": Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused)
2021-01-05T11:05:29.439Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:72 ] [main ] - Router (jfrou) service initialization started. Version: 7.12.4-1 Revision: 5060ba45bc3229a899aee49cb87d680398ab017f PID: 20257 Home: /opt/jfrog/artifactory
2021-01-05T11:05:29.440Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:75 ] [main ] - JFrog Router IP: 108.167.159.189
2021-01-05T11:05:29.441Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:175 ] [main ] - System configuration encryption report:
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2021-01-05T11:05:29.442Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:80 ] [main ] - JFrog Router Service ID: jfrou#01ev5km60hpft7szeaf1n24e48
2021-01-05T11:05:29.442Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:81 ] [main ] - JFrog Router Node ID: osboxes.org
2021-01-05T11:05:29.476Z [jfrou] [INFO ] [d21e36b02d1a444 ] [http_client_holder.go:155 ] [main ] - System cert pool contents were loaded as trusted CAs for TLS communication
2021-01-05T11:05:29.476Z [jfrou] [INFO ] [d21e36b02d1a444 ] [http_client_holder.go:175 ] [main ] - Following certificates were successfully loaded as trusted CAs for TLS communication:
[/opt/jfrog/artifactory/var/data/router/keys/trusted/access-root-ca.crt]
2021-01-05T11:05:31.486Z [jfrou] [INFO ] [d21e36b02d1a444 ] [config_holder.go:107 ] [main ] - Configuration update detected
2021-01-05T11:05:31.780Z [jfrou] [INFO ] [d21e36b02d1a444 ] [join_executor.go:118 ] [main ] - Cluster join: Trying to rejoin the cluster
2021-01-05T11:05:32.629Z [jfrou] [FATAL] [d21e36b02d1a444 ] [bootstrap.go:101 ] [main ] - Could not join access, err: Cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 400; message: Could not validate router Check-url: http://108.167.159.189:8082/router/api/v1/system/ping; detail: I/O error on GET request for "http://108.167.159.189:8082/router/api/v1/system/ping": Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused)
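Both the Artifactory stack trace (refused connections to localhost:8046) and the router log (failed ping of port 8082) come down to basic TCP reachability inside the container, so a quick port probe narrows things down before digging into configuration. This is a generic sketch, not a JFrog tool; the ports are taken from the logs above, and "localhost" assumes you run it inside the single-node Docker container:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unresolvable hosts.
        return False

# The two ports named in the errors: 8046 (router/access) and 8082 (router HTTP entry point).
for port in (8046, 8082):
    state = "open" if port_open("localhost", port) else "refused/closed"
    print(f"localhost:{port} is {state}")
```

If 8046 is closed, the Access service (or the router fronting it) never came up, and the 401s on `/ui/api/v1/ui/userApiKey/` are a downstream symptom rather than the root problem.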

Artifactory : Application could not be initialized: Invalid DNS name: maven.domain.org:-1

I'm trying to upgrade Artifactory from 6.18 to 7.3.2 and I'm hitting the error Invalid DNS name: maven.domain.org:-1. That is not my real DNS name, but the actual one it stands in for already exists and was working fine under version 6.18.
2020-10-22T19:05:51.377Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [actorySchedulerFactoryBean:727] [art-init ] - Starting Quartz Scheduler now
2020-10-22T19:05:51.470Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [ifactoryApplicationContext:271] [art-init ] - Artifactory context starting up 58 Spring Beans...
2020-10-22T19:05:51.959Z [jfrt ] [ERROR] [cdd01d4e0b52e2fc] [OnboardingYamlBootstrapper:100] [art-init ] - can't import file artifactory.config.import.yml - Artifactory repositories have already been created
2020-10-22T19:05:51.977Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [o.a.s.a.AccessServiceImpl:408 ] [art-init ] - Initialized new service id: jfrt#01bpj5k40d37v91t153m5y15sa
2020-10-22T19:05:52.035Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [oryAccessClientConfigStore:590] [art-init ] - Using Access Server URL: https://maven.domain.org/access source: System Property
2020-10-22T19:05:52.826Z [jfac ] [INFO ] [5c8eee656ba614dc] [s.r.NodeRegistryServiceImpl:63] [http-nio-8081-exec-3] - Cluster join: Successfully joined jfrt#01bpj5k40d37v91t153m5y15sa with node id artifactory.server
2020-10-22T19:05:52.869Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [.a.c.AccessClientBootstrap:169] [art-init ] - Cluster join: Successfully joined the cluster
2020-10-22T19:05:52.871Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [o.j.a.c.g.AccessGrpcClient:74 ] [art-init ] - Connecting to grpc server on maven.domain.org:-1
2020-10-22T19:05:52.884Z [jfrt ] [INFO ] [ ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-17 ] - [Node ID: artifactory.server] detected local modify for config 'artifactory.security.access/access.admin.token'
2020-10-22T19:05:53.121Z [jfrt ] [ERROR] [cdd01d4e0b52e2fc] [ctoryContextConfigListener:115] [art-init ] - Application could not be initialized: Invalid DNS name: maven.domain.org:-1
java.lang.reflect.InvocationTargetException: null
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:249)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:111)
Caused by: org.springframework.beans.factory.BeanInitializationException: Failed to initialize bean 'org.artifactory.security.access.AccessService'.; nested exception is java.lang.IllegalArgumentException: Invalid DNS name: maven.domain.org:-1
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:281)
at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:159)
... 6 common frames omitted
Caused by: java.lang.IllegalArgumentException: Invalid DNS name: maven.domain.org:-1
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:217)
I've been searching for a week now why the configuration ends up with -1 instead of the port in the DNS name.
Thank you for your help.
Patrice
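For what it's worth, the `:-1` pattern is typical of URL parsing when no explicit port is present: the log above shows the Access Server URL `https://maven.domain.org/access` coming from a system property with no port, and standard URL parsers report "no port" as null or -1. This is a minimal illustration in plain Python of how such a target string can be produced, not Artifactory's actual code:

```python
from urllib.parse import urlsplit

def grpc_target(access_url: str) -> str:
    """Build a host:port target from a server URL.

    urlsplit().port is None when the URL carries no explicit port; code that
    substitutes -1 for "no port" (instead of a scheme default like 443)
    produces targets such as 'maven.domain.org:-1'.
    """
    parts = urlsplit(access_url)
    port = parts.port if parts.port is not None else -1
    return f"{parts.hostname}:{port}"

print(grpc_target("https://maven.domain.org/access"))      # -> maven.domain.org:-1
print(grpc_target("https://maven.domain.org:443/access"))  # -> maven.domain.org:443
```

Which suggests checking wherever that system property is set (it is reported as `source: System Property` in the log) and giving the Access Server URL an explicit port.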

corda-webserver connect to p2pAddress of corda node

We tried to start corda-webserver and connect it to an existing Corda node, but encountered the following error message.
corda-webserver.jar, corda.jar, and node.conf are all in the same folder. Could you please help us? Thank you.
[ERROR] 2017-12-13T09:32:48,957Z [main] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
[INFO ] 2017-12-13T09:32:48,959Z [main] internal.RPCClient.logElapsedTime - Startup took 10010 msec
[INFO ] 2017-12-13T09:32:49,960Z [main] internal.NodeWebServer.connectLocalRpcAsNodeUser - Connecting to node at 1.115.29.253:**10002** as node user
Corda version : 1.0
node.conf
myLegalName="O=XXX,L=XXXX,C=XX"
networkMapService {
    address="one-networkmap.corda.r3cev.com:10002"
    legalName="L=Dublin, C=IE, O=TestNet NetworkMap"
}
p2pAddress="1.115.29.253:10002"
rpcAddress="1.115.29.253:10003"
webAddress="1.115.29.253:10004"
keyStorePassword : "XXX"
trustStorePassword : "XXX"
extraAdvertisedServiceIds: [ "" ]
useHTTPS : false
devMode : false
rpcUsers=[
    {
        user=XXX
        password=XXX
        permissions=[
            ALL
        ]
    }
]
certificateSigningService="https://one-doorman.corda.r3cev.com"
Full log of corda-webserver:
[INFO ] 2017-12-15T10:29:08,523Z [main] webserver.CmdLineOptions.loadConfig - Config:
{
# hardcoded value
"baseDirectory" : "/home/ubuntu/Corda/r3_Testnet/PartyA",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 23
"certificateSigningService" : "https://one-doorman.corda.r3cev.com",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 13
"devMode" : false,
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 11
"extraAdvertisedServiceIds" : [
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 11
""
],
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 9
"keyStorePassword" : "cordacadevpass",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 1
"myLegalName" : "O=XXX,L=XXXX,C=XX",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 2
"networkMapService" : {
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 3
"address" : "one-networkmap.corda.r3cev.com:10002",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 4
"legalName" : "L=Dublin, C=IE, O=TestNet NetworkMap"
},
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 6
"p2pAddress" : "1.115.29.253:10002",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 7
"rpcAddress" : "1.115.29.253:10003",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 14
"rpcUsers" : [
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 15
{
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 17
"password" : "corda_is_awesome",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 18
"permissions" : [
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 19
"ALL"
],
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 16
"user" : "corda"
}
],
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 10
"trustStorePassword" : "trustpass",
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 12
"useHTTPS" : false,
# /home/ubuntu/Corda/r3_Testnet/PartyA/node.conf: 8
"webAddress" : "1.115.29.253:10004"
}
[INFO ] 2017-12-15T10:29:08,547Z [main] Main.main - Main class: /home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/corda-webserver-impl-1.0.0.jar
[INFO ] 2017-12-15T10:29:08,579Z [main] Main.main - CommandLine Args: -Xmx200m -XX:+UseG1GC -javaagent:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/quasar-core-0.7.9-jdk8.jar -Dvisualvm.display.name=Corda -Djava.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0 -Dcapsule.app=net.corda.webserver.WebServer_1.0.0 -DWebserver -Dcapsule.dir=/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0 -Dcapsule.jar=/home/ubuntu/Corda/r3_Testnet/PartyA/corda-webserver.jar
[INFO ] 2017-12-15T10:29:08,670Z [main] Main.main - Application Args:
[INFO ] 2017-12-15T10:29:08,670Z [main] Main.main - bootclasspath: /usr/lib/jvm/java-8-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-8-oracle/jre/classes
[INFO ] 2017-12-15T10:29:08,670Z [main] Main.main - classpath: /home/ubuntu/Corda/r3_Testnet/PartyA/corda-webserver.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/corda-rpc-1.0.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/corda-jackson-1.0.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/corda-node-api-1.0.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/corda-core-1.0.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-webapp-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-servlet-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-container-jetty-http-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-security-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-server-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/javax.servlet-api-3.1.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/commons-fileupload-1.3.2.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/log4j-slf4j-impl-2.7.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/log4j-core-2.7.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jopt-simple-5.0.2.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-container-servlet-core-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-server-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-media-json-jackson-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/kotlinx-html-jvm-0.6.3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/kotlin-stdlib-jre8-1.1.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-module-kotlin-2.8.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.Web
Server_1.0.0/kotlin-reflect-1.1.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jsr305-3.0.1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jcl-over-slf4j-1.7.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/slf4j-api-1.7.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/rxjava-1.2.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/commons-jexl3-3.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-datatype-jsr310-2.8.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-jaxrs-json-provider-2.8.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-jaxrs-base-2.8.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-module-jaxb-annotations-2.8.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-databind-2.8.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/eddsa-0.2.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/bcpkix-jdk15on-1.57.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/bcprov-jdk15on-1.57.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/hibernate-jpa-2.1-api-1.0.0.Final.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/quasar-core-0.7.9-jdk8.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-dataformat-yaml-2.8.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/artemis-core-client-2.1.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/artemis-commons-2.1.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/guava-21.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-xml-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-server-detector-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jo
lokia-service-jsr160-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-service-jmx-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-service-serializer-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-service-notif-pull-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-service-notif-sse-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-service-discovery-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-service-history-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jolokia-server-core-2.0.0-M3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/commons-io-2.2.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/log4j-api-2.7.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-client-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-media-jaxb-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-common-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-entity-filtering-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/javax.ws.rs-api-2.0.1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/javax.annotation-api-1.2.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/hk2-locator-2.5.0-b30.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/hk2-api-2.5.0-b30.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/javax.inject-2.5.0-b30.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/validation-api-1.1.0.Final.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-http-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-io-9.3.9.v20160517.jar:/home/ubuntu/.caps
ule/apps/net.corda.webserver.WebServer_1.0.0/jetty-util-9.3.9.v20160517.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jetty-continuation-9.2.14.v20151106.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-annotations-2.8.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/kotlin-stdlib-jre7-1.1.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/kotlin-stdlib-1.1.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jackson-core-2.8.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/config-1.3.1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/kryo-serializers-0.41.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/kryo-4.0.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/proton-j-0.21.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/snakeyaml-1.17.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/json-simple-1.1.1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jersey-guava-2.25.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/osgi-resource-locator-1.0.1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/hk2-utils-2.5.0-b30.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/aopalliance-repackaged-2.5.0-b30.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/javassist-3.20.0-GA.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jgroups-3.6.13.Final.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/netty-all-4.1.9.Final.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/geronimo-json_1.0_spec-1.0-alpha-1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/johnzon-core-0.9.5.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/jboss-logging-3.3.0.Final.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServ
er_1.0.0/commons-beanutils-1.9.2.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/reflectasm-1.11.3.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/minlog-1.3.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/objenesis-2.2.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/javax.inject-1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/commons-collections-3.2.1.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/asm-5.0.4.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/annotations-13.0.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/corda-webserver-impl-1.0.0.jar:/home/ubuntu/Corda/r3_Testnet/PartyA/plugins/corda-finance-1.0.0.jar:/home/ubuntu/Corda/r3_Testnet/PartyA/plugins/java-source-0.1.jar:/home/ubuntu/Corda/r3_Testnet/PartyA/plugins/tradeix-concord-demo_marcopolo-release-2017-12-06-14-05-12.jar:/home/ubuntu/.capsule/apps/net.corda.webserver.WebServer_1.0.0/quasar-core-0.7.9-jdk8.jar
[INFO ] 2017-12-15T10:29:08,671Z [main] Main.main - VM Java HotSpot(TM) 64-Bit Server VM Oracle Corporation 25.131-b11
[INFO ] 2017-12-15T10:29:08,671Z [main] Main.main - Machine: ip-10-240-2-174
[INFO ] 2017-12-15T10:29:08,672Z [main] Main.main - Working Directory: /home/ubuntu/Corda/r3_Testnet/PartyA
[INFO ] 2017-12-15T10:29:09,471Z [main] Main.main - Starting as webserver on 1.115.29.253:10004
[INFO ] 2017-12-15T10:29:09,527Z [main] BasicInfo.logAndMaybePrint - Starting as webserver: 1.115.29.253:10004
[INFO ] 2017-12-15T10:29:09,527Z [main] internal.NodeWebServer.connectLocalRpcAsNodeUser - Connecting to node at 1.115.29.253:10002 as node user
[ERROR] 2017-12-15T10:29:20,917Z [main] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
[INFO ] 2017-12-15T10:29:20,967Z [main] internal.RPCClient.logElapsedTime - Startup took 11190 msec
[INFO ] 2017-12-15T10:29:21,968Z [main] internal.NodeWebServer.connectLocalRpcAsNodeUser - Connecting to node at 1.115.29.253:10002 as node user
You need to connect to the node via the RPC port as opposed to the P2P port. Cheers!
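To make that concrete against the node.conf in the question (a sketch reusing the question's addresses, which are assumptions about this particular deployment):

```
p2pAddress="1.115.29.253:10002"   # node-to-node messaging (Artemis over TLS) - not for the webserver
rpcAddress="1.115.29.253:10003"   # the port corda-webserver should dial as an RPC client
webAddress="1.115.29.253:10004"   # where the webserver itself listens for HTTP
```

The log lines "Connecting to node at 1.115.29.253:10002" show the webserver dialing the P2P port, whose TLS setup is meant for node-to-node traffic, hence the SSL handshake timeout.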
