Artifactory not starting after upgrade to 7.47.11 - Unexpected response status code: 500

My original version was not old, around 7.41.x. After updating to the latest version, Artifactory no longer starts. I can't understand the problem, as I don't understand the architecture well. Please help me figure it out. There are no specific errors in the logs.
(screenshot: Artifactory status web page)
As far as I understand, the problem is that for some reason the service that implements the user interface on port 8070 does not start.
There is nothing interesting in the logs for this service.
The last errors in console.log:
2022-12-10T19:48:33.842Z [jfrou] [ERROR] [5fb2daba52b5d736] [healthcheck.go:66 ] [main ] [] - Checking health of service 'jffe_000-artifactory' using URL 'http://localhost:8070/readiness' returned an error: Get "http://localhost:8070/readiness": dial tcp 127.0.0.1:8070: connect: connection refused
2022-12-10T19:48:33.852Z [jfrou] [ERROR] [5fb2daba52b5d736] [healthcheck.go:71 ] [main ] [] - Checking health of service 'jfrt_01d96rcgsyfzkj1xnv8zqe1snm-artifactory' using URL 'http://localhost:8091/artifactory/api/v1/system/readiness' returned unexpected response status: 500
2022-12-10T19:48:33.853Z [jfrou] [WARN ] [5fb2daba52b5d736] [local_topology.go:274 ] [main ] [] - Readiness test failed with the following error: "required node services are missing or unhealthy"
2022-12-10T19:48:35.991Z [jfac ] [ERROR] [52b88c5fcb86903d] [o.j.c.ExecutionUtils:190 ] [jf-common-pool-2 ] - Router readiness check failed, cannot start Access
2022-12-10T19:48:35.994Z [jfac ] [ERROR] [52b88c5fcb86903d] [a.s.b.AccessServerRegistrar:79] [jf-common-pool-1 ] - Could not register access
2022-12-10T20:09:28.988Z [jfrou] [ERROR] [37b0a206df08bfae] [healthcheck.go:66 ] [main ] [] - Checking health of service 'jffe_000-artifactory' using URL 'http://localhost:8070/readiness' returned an error: Get "http://localhost:8070/readiness": dial tcp 127.0.0.1:8070: connect: connection refused
2022-12-10T20:09:28.995Z [jfrou] [ERROR] [37b0a206df08bfae] [healthcheck.go:71 ] [main ] [] - Checking health of service 'jfrt_01d96rcgsyfzkj1xnv8zqe1snm-artifactory' using URL 'http://localhost:8091/artifactory/api/v1/system/readiness' returned unexpected response status: 500
I checked the access rights, the availability of the database, and the correctness of the configuration files. I tried running ./artifactory.sh and watching the launch progress.
[jfrou] [ERROR] [1dd615d3e5a6357a] [healthcheck.go:71 ] [main ] [] - Checking health of service 'jfrt_01d96rcgsyfzkj1xnv8zqe1snm-artifactory' using URL 'http://localhost:8091/artifactory/api/v1/system/readiness' returned unexpected response status: 500
curl http://localhost:8091/artifactory/api/v1/system/readiness
{
  "errors" : [ {
    "status" : 500,
    "message" : "Bad credentials"
  } ]
}
Update:
I spent more than 10 hours on analysis and turned on debug logging. I found this problem; maybe it is the root cause.
I can't find how to fix it yet.
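For reference, a minimal sketch of how the relevant debug logging can be enabled, assuming a default 7.x directory layout (the logger name matches the classes in the trace below; logback changes are typically picked up without a restart):
# edit $JFROG_HOME/artifactory/var/etc/artifactory/logback.xml and add inside <configuration>:
#   <logger name="org.artifactory.webapp.servlet.AccessFilter" level="debug"/>
tail -f $JFROG_HOME/artifactory/var/log/console.log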
2022-12-11T06:33:14.769Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [.AuthenticationFilterUtils:146] [http-nio-8081-exec-8] - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2022-12-11T06:33:14.769Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:519 ] [http-nio-8081-exec-8] - Using anonymous
2022-12-11T06:33:14.770Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:232 ] [http-nio-8081-exec-8] - Non-UI authentication cache accessed
2022-12-11T06:33:14.770Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:227 ] [http-nio-8081-exec-8] - UI authentication cache accessed
2022-12-11T06:33:14.770Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [o.a.w.s.AccessFilter:523 ] [http-nio-8081-exec-8] - Creating the Anonymous token
2022-12-11T06:33:14.775Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [PassAuthenticationProvider:126] [http-nio-8081-exec-8] - 401, Access didn't authenticate user: 'anonymous'
2022-12-11T06:33:14.776Z [jfrt ] [DEBUG] [349c95a5eb2a97e9] [priseAuthenticationProvider:87] [http-nio-8081-exec-8] - Non docker request. Github provider supports only docker requests.
2022-12-11T06:33:14.781Z [jfrt ] [DEBUG] [ ] [o.a.w.s.ArtifactoryFilter:130 ] [http-nio-8081-exec-8] - org.artifactory.webapp.servlet.ArtifactoryFilter
org.springframework.security.authentication.BadCredentialsException: Bad credentials
at org.artifactory.security.db.DbAuthenticationProvider.additionalAuthenticationChecks(DbAuthenticationProvider.java:64)
at org.springframework.security.authentication.dao.AbstractUserDetailsAuthenticationProvider.authenticate(AbstractUserDetailsAuthenticationProvider.java:147)
at org.artifactory.security.db.DbAuthenticationProvider.authenticate(DbAuthenticationProvider.java:53)
at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:182)
at org.artifactory.security.RealmAwareAuthenticationManager.authenticate(RealmAwareAuthenticationManager.java:68)
at org.artifactory.webapp.servlet.AccessFilter.useAnonymousIfPossible(AccessFilter.java:533)
at org.artifactory.webapp.servlet.AccessFilter.doFilterInternal(AccessFilter.java:301)
at org.artifactory.webapp.servlet.AccessFilter.doFilter(AccessFilter.java:218)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.RequestFilter.doFilter(RequestFilter.java:88)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.ArtifactoryCsrfFilter.doFilter(ArtifactoryCsrfFilter.java:83)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:164)
at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80)
at org.artifactory.webapp.servlet.SessionFilter.doFilter(SessionFilter.java:67)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.ArtifactoryTracingFilter.doFilter(ArtifactoryTracingFilter.java:38)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.artifactory.webapp.servlet.ArtifactoryFilter.doFilter(ArtifactoryFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)
at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:289)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:924)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1743)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:833)

Related

JFrog Artifactory OSS - Getting 401 unauthorized error when trying the Set Me Up or Edit User

I am using JFrog Artifactory OSS 7.46.8 (Self hosted - docker version)
When using the "Set Me Up" option or editing any user from Admin view, basically any operation that calls the "/ui/api/v1/ui/userApiKey/" endpoint fails with 401.
I enabled debug logging, and below are the logs I was able to find related to this.
Error generated when a user logs in:
2022-12-14T08:18:28.679Z [jfrt ] [DEBUG] [ ] [o.a.o.ArtifactoryTracer:78 ] [http-nio-8081-exec-5] - Found existing Tracing Span. A new child Server Tracing Span will be associated with it
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [.AuthenticationFilterUtils:146] [http-nio-8081-exec-5] - Entering ArtifactorySsoAuthenticationFilter.getRemoteUserName
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:434 ] [http-nio-8081-exec-5] - Cached key has been found for request: '/artifactory/api/onboarding/initStatus' with method: 'GET'
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:232 ] [http-nio-8081-exec-5] - Non-UI authentication cache accessed
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:227 ] [http-nio-8081-exec-5] - UI authentication cache accessed
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-5] - User-Changed authentication cache accessed
2022-12-14T08:18:28.680Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.AccessFilter:440 ] [http-nio-8081-exec-5] - Header authentication AccessTokenAuthentication [Principal=admin, Credentials=[PROTECTED], Authenticated=true, Details=null, Granted Authorities=[]] found in cache.
2022-12-14T08:18:28.681Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.a.w.s.RepoFilter:113 ] [http-nio-8081-exec-5] - Entering request GET ("IP-REMOVED") /api/onboarding/initStatus.
2022-12-14T08:18:28.681Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [o.j.a.c.h.AccessHttpClient:128] [http-nio-8081-exec-5] - Executing : POST http://localhost:8046/access/api/v1/auth/authenticate
2022-12-14T08:18:28.692Z [jfrt ] [DEBUG] [2f4d3b87c15ab8be] [rtifactoryInitStatusService:91] [http-nio-8081-exec-5] - Unable to authenticate user: 'admin'
org.jfrog.access.client.AccessClientHttpException: HTTP response status 401:Failed on executing request. Response: {
"errors" : [ {
"code" : "UNAUTHORIZED",
"message" : "Invalid credentials"
} ]
}
at org.jfrog.access.client.http.AccessHttpClient.createRestResponse(AccessHttpClient.java:184)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:131)
at org.jfrog.access.client.auth.AuthClientImpl.authenticate(AuthClientImpl.java:93)
at org.artifactory.ui.rest.service.onboarding.GetArtifactoryInitStatusService.execute(GetArtifactoryInitStatusService.java:86)
at org.artifactory.rest.common.service.ServiceExecutor.process(ServiceExecutor.java:39)
at org.artifactory.rest.common.resource.BaseResource.runService(BaseResource.java:141)
at org.artifactory.ui.rest.resource.onboarding.ArtifactoryOnboardingResource.getInitStatus(ArtifactoryOnboardingResource.java:62)
at jdk.internal.reflect.GeneratedMethodAccessor957.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
Error generated when userApiKey is called:
2022-12-14T08:22:13.600Z [jfrt ] [DEBUG] [61b44ab36079c799] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-3] - User-Changed authentication cache accessed
2022-12-14T08:22:13.601Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.j.a.c.h.AccessHttpClient:128] [http-nio-8081-exec-2] - Executing : POST http://localhost:8046/access/api/v1/auth/authenticate
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [nRepositoryResponseWrapper:230] [http-nio-8081-exec-8] - Skip invoking on
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [o.a.w.s.RepoFilter:208 ] [http-nio-8081-exec-8] - Exiting request GET (127.0.0.1) /api/securityconfig
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-8] - User-Changed authentication cache accessed
2022-12-14T08:22:13.602Z [jfrt ] [DEBUG] [333a63a9b7c7563c] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-8] - User-Changed authentication cache accessed
2022-12-14T08:22:13.604Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [PassAuthenticationProvider:126] [http-nio-8081-exec-2] - 401, Access didn't authenticate user: 'admin'
2022-12-14T08:22:13.605Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [priseAuthenticationProvider:87] [http-nio-8081-exec-2] - Non docker request. Github provider supports only docker requests.
2022-12-14T08:22:13.605Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.j.a.c.h.AccessHttpClient:128] [http-nio-8081-exec-2] - Executing : GET http://localhost:8046/access/api/v1/users/admin?expand=groups
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [nRepositoryResponseWrapper:230] [http-nio-8081-exec-2] - Skip invoking on
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.a.w.s.RepoFilter:208 ] [http-nio-8081-exec-2] - Exiting request GET ("IP-REMOVED") /api/userApiKey/admin
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-2] - User-Changed authentication cache accessed
2022-12-14T08:22:13.608Z [jfrt ] [DEBUG] [3e9a4f8ab10ac74 ] [o.a.w.s.AccessFilter:237 ] [http-nio-8081-exec-2] - User-Changed authentication cache accessed
2022-12-14T08:22:13.610Z [jffe ] [ERROR] [03e9a4f8ab10ac74] [frontend-service.log] [main ] - Error: Request failed with status code 401
at createError (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/createError.js:16:15)
at settle (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/opt/jfrog/artifactory/app/frontend/bin/server/dist/node_modules/axios/lib/adapters/http.js:322:11)
at IncomingMessage.emit (node:events:539:35)
at endReadableNT (node:internal/streams/readable:1345:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
What would be the solution for this?
This can be a problem with your load balancer/reverse proxy. Check if it allows the "OPTIONS" method. At least I had a similar case.
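A hedged way to test that from outside (replace the hostname with your own; a 405 or 403 answered by the proxy itself, rather than passed through to Artifactory, would point at the method being blocked):
curl -i -X OPTIONS https://artifactory.example.com/ui/api/v1/ui/userApiKey/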

Send the trace data of a website using Jaeger and OpenTelemetry to OpenSearch

I'm working on the observability part of OpenSearch, so I'm trying to collect the trace data of a WordPress website and send it to OpenSearch.
I'm collecting the trace data using the WordPress plugin DecaLog, which sends the data to the Jaeger agent; from Jaeger I'm sending the data to the OpenTelemetry Collector, then to Data Prepper, and lastly to OpenSearch.
Jaeger agent service in docker-compose:
jaeger-agent:
  container_name: jaeger-agent
  image: jaegertracing/jaeger-agent:latest
  command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
  ports:
    - "5775:5775/udp"
    - "6831:6831/udp"
    - "6832:6832/udp"
    - "5778:5778/tcp"
  networks:
    - our-network
The "command" ligne got me this error : Err: connection error: desc = "transport: Error while dialing dial tcp: lookup otel-collector on 127.0.0.11:53: server misbehaving"","system":"grpc","grpc_log":true
So I changed otel-collector to the IP of the otel-collector container.
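Hard-coding the IP shouldn't be necessary: containers on the same compose network normally resolve each other by service name. A minimal check of that resolution (note the network name may carry your compose project prefix, e.g. myproject_our-network):
docker run --rm --network our-network busybox nslookup otel-collector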
The OTel Collector and Data Prepper are installed using docker-compose:
data-prepper:
  restart: unless-stopped
  container_name: data-prepper
  image: opensearchproject/data-prepper:latest
  volumes:
    - ./data-prepper/examples/trace_analytics_no_ssl.yml:/usr/share/data-prepper/pipelines.yaml
    - ./data-prepper/examples/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    - ./data-prepper/examples/demo/root-ca.pem:/usr/share/data-prepper/root-ca.pem
  ports:
    - "21890:21890"
  networks:
    - our-network
  depends_on:
    - "opensearch"
otel-collector:
  container_name: otel-collector
  image: otel/opentelemetry-collector:0.54.0
  command: [ "--config=/etc/otel-collector-config.yml" ]
  working_dir: "/project"
  volumes:
    - ${PWD}/:/project
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    - ./data-prepper/examples/demo/demo-data-prepper.crt:/etc/demo-data-prepper.crt
  ports:
    - "4317:4317"
  depends_on:
    - data-prepper
  networks:
    - our-network
The configuration of otel.yaml (to send data from OpenTelemetry to OpenSearch):
receivers:
  jaeger:
    protocols:
      grpc:
exporters:
  otlp/2:
    endpoint: data-prepper:21890
    tls:
      insecure: true
      insecure_skip_verify: true
  logging:
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlp/2]
The configuration for the Data Prepper pipeline:
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ "http://localhost:9200" ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["http://localhost:9200"]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_service_map: true
As of now I'm getting the following errors:
Jaeger agent:
Err: connection error: desc = \"transport: Error while dialing dial tcp otel-collector-container-IP:14250: i/o timeout\"","system":"grpc","grpc_log":true}
OpenTelemetry Collector:
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:86 Starting processors...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:98 Starting receivers...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:102 Exporter is starting... {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info static/strategy_store.go:203 No sampling strategies provided or URL is unavailable, using defaults {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info pipelines/pipelines.go:106 Exporter started. {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info service/collector.go:220 Starting otelcol... {"Version": "0.54.0", "NumCPU": 2}
2022-08-04T15:31:32.683Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-08-04T15:31:32.684Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "data-prepper:21890",
"ServerName": "data-prepper:21890",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp data-prepper-container-ip:21890: connect: connection refused" {"grpc_log": true}
Data Prepper:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.amazon.dataprepper.DataPrepper]: Constructor threw exception; nested exception is java.lang.RuntimeException: No valid pipeline is available for execution, exiting
Followed by this at the end :
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-04T15:23:22,803 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-08-04T15:23:22,806 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
OpenSearch needs a separate tool to support ingestion of OpenTelemetry data. It is called Data Prepper and is part of the OpenSearch project. There is a nice getting-started guide on how to set up trace analytics in OpenSearch.
Data Prepper works similarly to Fluentd or the OpenTelemetry Collector, but has proper support for OpenSearch as a data sink. It pre-processes trace data adequately for the OpenSearch Dashboards UI tracing plugin. Data Prepper also supports the OpenTelemetry metrics format.
Are you still having issues running Data Prepper? The configuration used in this example has been updated since the latest release and should now be up to date and working (https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl.yml)
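One more detail worth checking in the posted pipeline, given the name-resolution issues above: inside the data-prepper container, localhost:9200 is the data-prepper container itself, not OpenSearch, so the sink hosts would normally point at the compose service name. A hedged connectivity check from inside the network (assumes the service is named opensearch, as in the depends_on entry):
docker run --rm --network our-network curlimages/curl -s http://opensearch:9200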

Stuck in a partial helm release on Terraform to Kubernetes

I'm trying to apply a Terraform resource (helm_release) to k8s, and the apply command failed halfway through.
I checked the pod issue; now I need to update some values in the local chart.
Now I'm in a dilemma: I can't apply the helm_release as the names are in use, and I can't destroy the helm_release since it was not created.
It seems to me the only option is to manually delete the k8s resources that were created by the helm_release chart?
Here is the terraform for helm_release:
cat nginx-arm64.tf
resource "helm_release" "nginx-ingress" {
  name  = "nginx-ingress"
  chart = "/data/terraform/k8s/nginx-ingress-controller-arm64.tgz"
}
BTW: I need to use the local chart as the official chart does not support the ARM64 architecture.
Thanks,
Edit #1:
Here is the list of helm releases -> there is no nginx ingress
/data/terraform/k8s$ helm list -A
NAME          NAMESPACE  REVISION  UPDATED                                  STATUS    CHART                APP VERSION
cert-manager  default    1         2021-12-08 20:57:38.979176622 +0000 UTC  deployed  cert-manager-v1.5.0  v1.5.0
/data/terraform/k8s$
Here is the describe pod output:
$ k describe pod/nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr
Name: nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr
Namespace: default
Priority: 0
Node: ocifreevmalways/10.0.0.189
Start Time: Wed, 08 Dec 2021 11:11:59 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=nginx-ingress-controller
helm.sh/chart=nginx-ingress-controller-9.0.9
pod-template-hash=99cddc76b
Annotations: <none>
Status: Running
IP: 10.244.0.22
IPs:
IP: 10.244.0.22
Controlled By: ReplicaSet/nginx-ingress-nginx-ingress-controller-99cddc76b
Containers:
controller:
Container ID: docker://0b75f5f68ef35dfb7dc5b90f9d1c249fad692855159f4e969324fc4e2ee61654
Image: docker.io/rancher/nginx-ingress-controller:nginx-1.1.0-rancher1
Image ID: docker-pullable://rancher/nginx-ingress-controller#sha256:177fb5dc79adcd16cb6c15d6c42cef31988b116cb148845893b6b954d7d593bc
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=default/nginx-ingress-nginx-ingress-controller-default-backend
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--configmap=default/nginx-ingress-nginx-ingress-controller
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Wed, 08 Dec 2021 22:02:15 +0000
Finished: Wed, 08 Dec 2021 22:02:15 +0000
Ready: False
Restart Count: 132
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-nginx-ingress-controller-99cddc76b-62nsr (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzqqn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wzqqn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 8m38s (x132 over 10h) kubelet Container image "docker.io/rancher/nginx-ingress-controller:nginx-1.1.0-rancher1" already present on machine
Warning BackOff 3m39s (x3201 over 10h) kubelet Back-off restarting failed container
The terraform state list shows nothing:
/data/terraform/k8s$ t state list
/data/terraform/k8s$
Though the terraform.tfstate.backup shows the nginx ingress (I guess I ran the destroy command in between?):
/data/terraform/k8s$ cat terraform.tfstate.backup
{
"version": 4,
"terraform_version": "1.0.11",
"serial": 28,
"lineage": "30e74aa5-9631-f82f-61a2-7bdbd97c2276",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "helm_release",
"name": "nginx-ingress",
"provider": "provider[\"registry.terraform.io/hashicorp/helm\"]",
"instances": [
{
"status": "tainted",
"schema_version": 0,
"attributes": {
"atomic": false,
"chart": "/data/terraform/k8s/nginx-ingress-controller-arm64.tgz",
"cleanup_on_fail": false,
"create_namespace": false,
"dependency_update": false,
"description": null,
"devel": null,
"disable_crd_hooks": false,
"disable_openapi_validation": false,
"disable_webhooks": false,
"force_update": false,
"id": "nginx-ingress",
"keyring": null,
"lint": false,
"manifest": null,
"max_history": 0,
"metadata": [
{
"app_version": "1.1.0",
"chart": "nginx-ingress-controller",
"name": "nginx-ingress",
"namespace": "default",
"revision": 1,
"values": "{}",
"version": "9.0.9"
}
],
"name": "nginx-ingress",
"namespace": "default",
"postrender": [],
"recreate_pods": false,
"render_subchart_notes": true,
"replace": false,
"repository": null,
"repository_ca_file": null,
"repository_cert_file": null,
"repository_key_file": null,
"repository_password": null,
"repository_username": null,
"reset_values": false,
"reuse_values": false,
"set": [],
"set_sensitive": [],
"skip_crds": false,
"status": "failed",
"timeout": 300,
"values": null,
"verify": false,
"version": "9.0.9",
"wait": true,
"wait_for_jobs": false
},
"sensitive_attributes": [],
"private": "bnVsbA=="
}
]
}
]
}
When I try to apply in the same directory, it prompts the error again:
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
helm_release.nginx-ingress: Creating...
╷
│ Error: cannot re-use a name that is still in use
│
│ with helm_release.nginx-ingress,
│ on nginx-arm64.tf line 1, in resource "helm_release" "nginx-ingress":
│ 1: resource "helm_release" "nginx-ingress" {
Please share your thoughts. Thanks.
Edit #2:
The DEBUG logs show some more clues:
2021-12-09T04:30:14.118Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceDiff: nginx-ingress] Release validated: timestamp=2021-12-09T04:30:14.118Z
2021-12-09T04:30:14.118Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceDiff: nginx-ingress] Done: timestamp=2021-12-09T04:30:14.118Z
2021-12-09T04:30:14.119Z [WARN] Provider "registry.terraform.io/hashicorp/helm" produced an invalid plan for helm_release.nginx-ingress, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .cleanup_on_fail: planned value cty.False for a non-computed attribute
- .create_namespace: planned value cty.False for a non-computed attribute
- .verify: planned value cty.False for a non-computed attribute
- .recreate_pods: planned value cty.False for a non-computed attribute
- .render_subchart_notes: planned value cty.True for a non-computed attribute
- .replace: planned value cty.False for a non-computed attribute
- .reset_values: planned value cty.False for a non-computed attribute
- .disable_crd_hooks: planned value cty.False for a non-computed attribute
- .lint: planned value cty.False for a non-computed attribute
- .namespace: planned value cty.StringVal("default") for a non-computed attribute
- .skip_crds: planned value cty.False for a non-computed attribute
- .disable_webhooks: planned value cty.False for a non-computed attribute
- .force_update: planned value cty.False for a non-computed attribute
- .timeout: planned value cty.NumberIntVal(300) for a non-computed attribute
- .reuse_values: planned value cty.False for a non-computed attribute
- .dependency_update: planned value cty.False for a non-computed attribute
- .disable_openapi_validation: planned value cty.False for a non-computed attribute
- .atomic: planned value cty.False for a non-computed attribute
- .wait: planned value cty.True for a non-computed attribute
- .max_history: planned value cty.NumberIntVal(0) for a non-computed attribute
- .wait_for_jobs: planned value cty.False for a non-computed attribute
helm_release.nginx-ingress: Creating...
2021-12-09T04:30:14.119Z [INFO] Starting apply for helm_release.nginx-ingress
2021-12-09T04:30:14.119Z [INFO] Starting apply for helm_release.nginx-ingress
2021-12-09T04:30:14.119Z [DEBUG] helm_release.nginx-ingress: applying the planned Create change
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] setting computed for "metadata" from ComputedKeys: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Started: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Getting helm configuration: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [INFO] GetHelmConfiguration start: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] Using kubeconfig: /home/ubuntu/.kube/config: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.120Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [INFO] Successfully initialized kubernetes config: timestamp=2021-12-09T04:30:14.120Z
2021-12-09T04:30:14.121Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [INFO] GetHelmConfiguration success: timestamp=2021-12-09T04:30:14.121Z
2021-12-09T04:30:14.121Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Getting chart: timestamp=2021-12-09T04:30:14.121Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Preparing for installation: timestamp=2021-12-09T04:30:14.125Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 ---[ values.yaml ]-----------------------------------
{}: timestamp=2021-12-09T04:30:14.125Z
2021-12-09T04:30:14.125Z [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2021/12/09 04:30:14 [DEBUG] [resourceReleaseCreate: nginx-ingress] Installing chart: timestamp=2021-12-09T04:30:14.125Z
╷
│ Error: cannot re-use a name that is still in use
│
│ with helm_release.nginx-ingress,
│ on nginx-arm64.tf line 1, in resource "helm_release" "nginx-ingress":
│ 1: resource "helm_release" "nginx-ingress" {
│
╵
2021-12-09T04:30:14.158Z [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-12-09T04:30:14.160Z [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/helm/2.4.1/linux_arm64/terraform-provider-helm_v2.4.1_x5 pid=558800
2021-12-09T04:30:14.160Z [DEBUG] provider: plugin exited
You don't have to manually delete all the resources using kubectl. Under the hood the Terraform Helm provider still uses Helm. So if you run helm list -A you will see all the Helm releases on your cluster, including the nginx-ingress release. Deleting the release is then done via helm uninstall nginx-ingress -n REPLACE_WITH_YOUR_NAMESPACE.
Before re-running terraform apply do check if the Helm release is still in your Terraform state via terraform state list (run this from the same directory as where you run terraform apply from). If you don't see helm_release.nginx-ingress in that list then it is not in your Terraform state and you can just rerun your terraform apply. Else you have to delete it via terraform state rm helm_release.nginx-ingress and then you can run terraform apply again.
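Put together, the recovery sequence described above looks roughly like this (the default namespace matches the tfstate backup shown in the question):
helm list -A                                   # confirm the release exists and find its namespace
helm uninstall nginx-ingress -n default        # remove the half-installed release
terraform state list                           # check whether Terraform still tracks it
terraform state rm helm_release.nginx-ingress  # only if it appeared in the list above
terraform apply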
I just faced a similar issue, but in my case there was neither a Terraform state entry for the Helm release nor a Helm release itself,
so helm list -A (or helm list in the current namespace) showed nothing.
I found this issue, which solved it: helm/helm#4174
With Helm 3, all release metadata is saved as Secrets in the same
namespace as the release. If you get "cannot re-use a name that is
still in use", you may need to check for orphaned secrets
and delete them.
After that, it started working.
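Helm 3 keeps that metadata in Secrets named sh.helm.release.v1.<release>.v<revision>, labelled with the release name, so a hedged way to find and remove an orphaned one (adjust the release name and namespace to yours):
kubectl get secrets -A -l "owner=helm,name=nginx-ingress"
kubectl delete secret sh.helm.release.v1.nginx-ingress.v1 -n default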

JFrog Artifactory failed to initialize with error 500

Installed Artifactory on a CentOS VM using the open-source RPM package, but it fails to initialize.
{
"errors" : [ {
"status" : 500,
"message" : "Artifactory failed to initialize: check Artifactory logs for errors."
} ]
}
Tried adding the below in the system.yaml file:
shared:
  node:
    ip: <your ipv4 IP>
Checking the console logs shows the error below:
Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
2021-01-05T10:36:18.597Z [jfrt ] [ERROR] [f61fe454765979f3] [ctoryContextConfigListener:126] [art-init ] - Application could not be initialized: Connection refused (Connection refused)
java.lang.reflect.InvocationTargetException: null
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.artifactory.lifecycle.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:265)
at org.artifactory.lifecycle.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:122)
Caused by: org.springframework.beans.factory.BeanInitializationException: Failed to initialize bean 'org.artifactory.security.access.AccessService'.; nested exception is java.lang.reflect.UndeclaredThrowableException
at org.artifactory.spring.ArtifactoryApplicationContext.initReloadableBeans(ArtifactoryApplicationContext.java:302)
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:284)
at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:174)
... 6 common frames omitted
Caused by: java.lang.reflect.UndeclaredThrowableException: null
at com.sun.proxy.$Proxy184.init(Unknown Source)
at org.artifactory.spring.ArtifactoryApplicationContext.initReloadableBeans(ArtifactoryApplicationContext.java:300)
... 8 common frames omitted
Caused by: java.util.concurrent.ExecutionException: org.jfrog.common.ExecutionFailed: Cluster join: Service registry ping failed; Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.jfrog.access.client.AccessServerStartupValidator.waitForServer(AccessServerStartupValidator.java:39)
at org.jfrog.access.client.AccessClientBootstrap.waitForServer(AccessClientBootstrap.java:149)
at org.jfrog.access.client.AccessClientBootstrap.<init>(AccessClientBootstrap.java:104)
at org.jfrog.access.client.AccessClientBootstrap.<init>(AccessClientBootstrap.java:134)
at org.artifactory.security.access.AccessServiceImpl.bootstrapAccessClient(AccessServiceImpl.java:1290)
at org.artifactory.security.access.AccessServiceImpl.lambda$bootstrapAccessClient$23(AccessServiceImpl.java:1251)
at io.vavr.control.Try.mapTry(Try.java:634)
at io.vavr.control.Try.map(Try.java:585)
at org.artifactory.security.access.AccessServiceImpl.bootstrapAccessClient(AccessServiceImpl.java:1251)
at org.artifactory.security.access.AccessServiceImpl.initAccessService(AccessServiceImpl.java:421)
at org.artifactory.security.access.AccessServiceImpl.initAccessClientIfNeeded(AccessServiceImpl.java:410)
at org.artifactory.security.access.AccessServiceImpl.init(AccessServiceImpl.java:403)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:367)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118)
at org.artifactory.storage.fs.lock.aop.LockingAdvice.invoke(LockingAdvice.java:76)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
... 10 common frames omitted
Caused by: org.jfrog.common.ExecutionFailed: Cluster join: Service registry ping failed; Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.jfrog.common.ExecutionUtils.handleStopError(ExecutionUtils.java:156)
at org.jfrog.common.ExecutionUtils.handleFunctionExecution(ExecutionUtils.java:103)
at org.jfrog.common.ExecutionUtils.lambda$generateExecutionRunnable$0(ExecutionUtils.java:67)
at org.jfrog.common.ExecutionUtils$MDCRunnableDecorator.run(ExecutionUtils.java:172)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.jfrog.common.RetryException: Error while trying to connect to local router at address 'http://localhost:8046/access': Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.jfrog.access.client.AccessServerStartupValidator.convertToRetryException(AccessServerStartupValidator.java:56)
at io.vavr.API$Match$Case0.apply(API.java:5135)
at io.vavr.API$Match.option(API.java:5105)
at io.vavr.control.Try.mapFailure(Try.java:602)
at org.jfrog.access.client.AccessServerStartupValidator.pingAccess(AccessServerStartupValidator.java:46)
at org.jfrog.common.ExecutionUtils.handleFunctionExecution(ExecutionUtils.java:100)
... 7 common frames omitted
Caused by: org.jfrog.access.client.AccessClientException: Unable to connect to Access server: Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:143)
at org.jfrog.access.client.http.AccessHttpClient.ping(AccessHttpClient.java:114)
at org.jfrog.access.client.AccessClientImpl.ping(AccessClientImpl.java:252)
at io.vavr.control.Try.run(Try.java:118)
at org.jfrog.access.client.AccessServerStartupValidator.pingAccess(AccessServerStartupValidator.java:45)
... 8 common frames omitted
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8046 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:72)
at org.jfrog.client.http.CloseableHttpClientDecorator.doExecute(CloseableHttpClientDecorator.java:109)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.jfrog.access.client.http.AccessHttpClient.restCall(AccessHttpClient.java:130)
... 12 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
... 24 common frames omitted
2021-01-05T10:36:19.367Z [jfrt ] [ERROR] [ ] [o.a.w.s.ArtifactoryFilter:213 ] [http-nio-8081-exec-8] - Artifactory failed to initialize: Context is null
2021-01-05T10:36:21.058Z [jfac ] [WARN ] [f455f35e70ecf56 ] [o.j.c.ExecutionUtils:142 ] [pool-23-thread-2 ] - Retry 180 Elapsed 1.52 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
^C
Below is the snippet of router.log:
2021-01-05T10:34:55.424Z [jfrou] [FATAL] [112e7162aa72477c] [bootstrap.go:101 ] [main ] - Could not join access, err: Cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 400; message: Could not validate router Check-url: http://108.167.159.189:8082/router/api/v1/system/ping; detail: I/O error on GET request for "http://108.167.159.189:8082/router/api/v1/system/ping": Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused)
2021-01-05T11:05:29.439Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:72 ] [main ] - Router (jfrou) service initialization started. Version: 7.12.4-1 Revision: 5060ba45bc3229a899aee49cb87d680398ab017f PID: 20257 Home: /opt/jfrog/artifactory
2021-01-05T11:05:29.440Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:75 ] [main ] - JFrog Router IP: 108.167.159.189
2021-01-05T11:05:29.441Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:175 ] [main ] - System configuration encryption report:
shared.newrelic.licenseKey: does not exist in the config file
shared.security.joinKeyFile: file '/opt/jfrog/artifactory/var/etc/security/join.key' - already encrypted
2021-01-05T11:05:29.442Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:80 ] [main ] - JFrog Router Service ID: jfrou#01ev5km60hpft7szeaf1n24e48
2021-01-05T11:05:29.442Z [jfrou] [INFO ] [d21e36b02d1a444 ] [bootstrap.go:81 ] [main ] - JFrog Router Node ID: osboxes.org
2021-01-05T11:05:29.476Z [jfrou] [INFO ] [d21e36b02d1a444 ] [http_client_holder.go:155 ] [main ] - System cert pool contents were loaded as trusted CAs for TLS communication
2021-01-05T11:05:29.476Z [jfrou] [INFO ] [d21e36b02d1a444 ] [http_client_holder.go:175 ] [main ] - Following certificates were successfully loaded as trusted CAs for TLS communication:
[/opt/jfrog/artifactory/var/data/router/keys/trusted/access-root-ca.crt]
2021-01-05T11:05:31.486Z [jfrou] [INFO ] [d21e36b02d1a444 ] [config_holder.go:107 ] [main ] - Configuration update detected
2021-01-05T11:05:31.780Z [jfrou] [INFO ] [d21e36b02d1a444 ] [join_executor.go:118 ] [main ] - Cluster join: Trying to rejoin the cluster
2021-01-05T11:05:32.629Z [jfrou] [FATAL] [d21e36b02d1a444 ] [bootstrap.go:101 ] [main ] - Could not join access, err: Cluster join: Failed joining the cluster; Error: Error response from service registry, status code: 400; message: Could not validate router Check-url: http://108.167.159.189:8082/router/api/v1/system/ping; detail: I/O error on GET request for "http://108.167.159.189:8082/router/api/v1/system/ping": Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to 108.167.159.189:8082 [/108.167.159.189] failed: Connection refused (Connection refused)
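The FATAL lines show the router validating itself through a check-url built from shared.node.ip, so the failing self-check can be reproduced by hand, as a minimal sketch (use the address you set in system.yaml):
curl -v http://<your ipv4 IP>:8082/router/api/v1/system/ping
# "connection refused" here means nothing is listening on port 8082 at that address,
# e.g. the configured IP is not actually bound on this host or a firewall is blocking it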

Artifactory: Application could not be initialized: Invalid DNS name: maven.domain.org:-1

I'm trying to upgrade Artifactory from 6.18 to 7.3.2 and I'm facing an error like Invalid DNS name: maven.domain.org:-1. This is not the real DNS name I have, but the DNS name reported already exists and was working for version 6.18.
2020-10-22T19:05:51.377Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [actorySchedulerFactoryBean:727] [art-init ] - Starting Quartz Scheduler now
2020-10-22T19:05:51.470Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [ifactoryApplicationContext:271] [art-init ] - Artifactory context starting up 58 Spring Beans...
2020-10-22T19:05:51.959Z [jfrt ] [ERROR] [cdd01d4e0b52e2fc] [OnboardingYamlBootstrapper:100] [art-init ] - can't import file artifactory.config.import.yml - Artifactory repositories have already been created
2020-10-22T19:05:51.977Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [o.a.s.a.AccessServiceImpl:408 ] [art-init ] - Initialized new service id: jfrt#01bpj5k40d37v91t153m5y15sa
2020-10-22T19:05:52.035Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [oryAccessClientConfigStore:590] [art-init ] - Using Access Server URL: https://maven.domain.org/access source: System Property
2020-10-22T19:05:52.826Z [jfac ] [INFO ] [5c8eee656ba614dc] [s.r.NodeRegistryServiceImpl:63] [http-nio-8081-exec-3] - Cluster join: Successfully joined jfrt#01bpj5k40d37v91t153m5y15sa with node id artifactory.server
2020-10-22T19:05:52.869Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [.a.c.AccessClientBootstrap:169] [art-init ] - Cluster join: Successfully joined the cluster
2020-10-22T19:05:52.871Z [jfrt ] [INFO ] [cdd01d4e0b52e2fc] [o.j.a.c.g.AccessGrpcClient:74 ] [art-init ] - Connecting to grpc server on maven.domain.org:-1
2020-10-22T19:05:52.884Z [jfrt ] [INFO ] [ ] [o.j.c.w.ConfigWrapperImpl:504 ] [Thread-17 ] - [Node ID: artifactory.server] detected local modify for config 'artifactory.security.access/access.admin.token'
2020-10-22T19:05:53.121Z [jfrt ] [ERROR] [cdd01d4e0b52e2fc] [ctoryContextConfigListener:115] [art-init ] - Application could not be initialized: Invalid DNS name: maven.domain.org:-1
java.lang.reflect.InvocationTargetException: null
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener.configure(ArtifactoryContextConfigListener.java:249)
at org.artifactory.webapp.servlet.ArtifactoryContextConfigListener$1.run(ArtifactoryContextConfigListener.java:111)
Caused by: org.springframework.beans.factory.BeanInitializationException: Failed to initialize bean 'org.artifactory.security.access.AccessService'.; nested exception is java.lang.IllegalArgumentException: Invalid DNS name: maven.domain.org:-1
at org.artifactory.spring.ArtifactoryApplicationContext.refresh(ArtifactoryApplicationContext.java:281)
at org.artifactory.spring.ArtifactoryApplicationContext.<init>(ArtifactoryApplicationContext.java:159)
... 6 common frames omitted
Caused by: java.lang.IllegalArgumentException: Invalid DNS name: maven.domain.org:-1
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:217)
I've spent a week searching for why the configuration ends up with -1 instead of the port in the DNS name.
Thank you for your help.
Patrice
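A place to start, as a hedged sketch: the log says the Access Server URL comes from a System Property, so grepping the 7.x configuration tree for the hostname should show where that URL (and its missing port) is being set:
grep -rn "maven.domain.org" $JFROG_HOME/artifactory/var/etc/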
