Unable to allow anonymous users to access the server in Nexus

I recently migrated Nexus from one server to another. When I try to pull from the Nexus repo using a GitLab pipeline, I get the error below:
2408cc74d12b: Retrying in 9 seconds
2408cc74d12b: Retrying in 8 seconds
2408cc74d12b: Retrying in 7 seconds
2408cc74d12b: Retrying in 6 seconds
2408cc74d12b: Retrying in 5 seconds
2408cc74d12b: Retrying in 4 seconds
2408cc74d12b: Retrying in 3 seconds
2408cc74d12b: Retrying in 2 seconds
2408cc74d12b: Retrying in 1 second
2408cc74d12b: Downloading
received unexpected HTTP status: 500 Server Error
ERROR: Job failed: exit status 1
Nexus repo:
I couldn't find an option for adding the Docker realm under anonymous access.
[screenshot of the Nexus anonymous access settings]
Could this be the reason for the failure? Can anyone help me with this?
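For what it's worth, in Nexus 3 the Docker Bearer Token Realm is enabled under Security > Realms rather than on the anonymous-access page, and anonymous docker pulls need that realm active. One way to check whether the Docker endpoint accepts anonymous pulls, independent of the GitLab pipeline, is to hit the registry's /v2/ endpoint. A minimal Python sketch, assuming the hypothetical host nexus.example.com and Docker connector port 8082:

import requests

# Hypothetical Nexus Docker connector endpoint; substitute your own host/port.
REGISTRY = "http://nexus.example.com:8082"

resp = requests.get(f"{REGISTRY}/v2/", timeout=10)
print(resp.status_code)
# 200 -> anonymous access to the Docker endpoint works
# 401 -> the Docker Bearer Token Realm is likely not active, or anonymous pull is off
# 500 -> a server-side problem; check the Nexus logs (matches the pipeline error above)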

Related

dask: Local abort before MPI_INIT completed completed successfully

I run the command as follows:
mpirun --hostfile /home/user/share/hostlist.txt -np 4 /home/user/share/mpi-dask/venv/bin/dask-mpi --scheduler-file ~/dask-scheduler.json
I got the following result:
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[rpi40000:14497] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
2022-06-23 06:40:12,321 - distributed.nanny - INFO - Worker process 14497 exited with status 1
2022-06-23 06:40:12,324 - distributed.nanny - WARNING - Restarting worker
^C[rpi40000:14416] PMIX ERROR: UNREACHABLE in file ../../../src/server/pmix_server.c at line 2795
[rpi40000:14416] 8 more processes have sent help message help-orte-runtime.txt / orte_init:startup:internal-failure
[rpi40000:14416] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[rpi40000:14416] 5 more processes have sent help message help-orte-runtime / orte_init:startup:internal-failure
[rpi40000:14416] 5 more processes have sent help message help-mpi-runtime.txt / mpi_init:startup:internal-failure
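Not part of the original post, but since dask-mpi initializes MPI through mpi4py, a bare mpi4py hello-world run with the same hostfile and the same interpreter can isolate whether MPI itself is failing rather than Dask. A minimal sketch (the script name mpi_check.py is made up):

# mpi_check.py -- minimal mpi4py sanity check
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()} on {MPI.Get_processor_name()}")

Run it with the same mpirun and venv as in the question:

mpirun --hostfile /home/user/share/hostlist.txt -np 4 /home/user/share/mpi-dask/venv/bin/python mpi_check.py

If this also aborts before MPI_Init, the mpi4py in the venv was most likely built against a different MPI implementation or version than the mpirun being used; rebuilding mpi4py against the system MPI (pip install --no-binary mpi4py mpi4py) is the usual fix.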

Airflow Papermill operator: task externally skipped after 60 minutes

I am using Airflow in a Docker container. I run a DAG with multiple Jupyter notebooks, and I get the following error every time after 60 minutes:
[2021-08-22 09:15:15,650] {local_task_job.py:198} WARNING - State of this instance has been externally set to skipped. Terminating instance.
[2021-08-22 09:15:15,654] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 277
[2021-08-22 09:15:15,655] {taskinstance.py:1284} ERROR - Received SIGTERM. Terminating subprocesses.
[2021-08-22 09:15:18,284] {taskinstance.py:1501} ERROR - Task failed with exception
I tried to tweak the config file but could not find the right option to remove the one-hour timeout.
Any help would be appreciated.
The default is no timeout. When your DAG defines dagrun_timeout=timedelta(minutes=60) and execution time exceeds 60 minutes, the active task is stopped and the message "State of this instance has been externally set to skipped" is logged.
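For reference, a minimal sketch of where this parameter lives (the DAG id and schedule here are made up):

from datetime import datetime, timedelta
from airflow import DAG

# dagrun_timeout is set on the DAG object itself; remove it (or raise it)
# to lift the 60-minute limit described above.
dag = DAG(
    dag_id="notebooks",                    # hypothetical
    start_date=datetime(2021, 8, 1),
    schedule_interval="@daily",            # hypothetical
    dagrun_timeout=timedelta(minutes=60),  # the cause of the skip after 1 hour
)

Note this is a per-DAG argument, not an airflow.cfg option, which is likely why tweaking the config file had no effect.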

SolrCloud with Zookeeper - cancel_stream_error & TimeoutException: Idle timeout expired: 120000/120000 ms

I have a SolrCloud setup in Kubernetes with 2 Solr instances, 3 ZooKeeper instances, and 1 shard. It is configured with 8 GB of persistent storage for each Solr and ZooKeeper instance. The memory allocated for Solr is 16 GB with a 10 GB heap size. There are at most 2.5 million records indexed. A scheduler client calls Solr with the URL /update/json?wt=json&commit=true to do the add/update/delete operations. Occasionally there is a huge update/delete of 1 million records, which calls the API (/update/json?wt=json&commit=true) with 500 documents at a time, but from multiple threads. Everything worked fine for a week, but then we suddenly saw errors in solr.log that put Solr in an error state, and I had to restart one of the Solr nodes. The errors are:
Node 1:
2021-04-09 08:20:56.657 ERROR (updateExecutor-5-thread-169-processing-x:datacore_shard1_replica_n1 r:core_node3 null n:solr-1.solrcluster:8983_solr c:datacore s:shard1) [c:datacore s:shard1 r:core_node3 x:datacore_shard1_replica_n1] o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling SolrCmdDistributor$Req: cmd=add{,id=S-170262-P-108028200-F-800001737-E-180905508}; node=ForwardNode: http://solr-0.solrcluster:8983/solr/datacore_shard1_replica_n2/ to http://solr-0.solrcluster:8983/solr/datacore_shard1_replica_n2/ => java.io.IOException: java.io.IOException: cancel_stream_error
at org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
java.io.IOException: java.io.IOException: cancel_stream_error
at org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193) ~[?:?]
Node2:
2021-04-09 08:22:56.661 INFO (qtp1632497828-35124) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [datacore_shard1_replica_n2] webapp=/solr path=/update params={update.distrib=TOLEADER&distrib.from=http://solr-1.solrcluster:8983/solr/datacore_shard1_replica_n1/&wt=javabin&version=2}{} 0 119999
2021-04-09 08:22:56.661 ERROR (qtp1632497828-35124) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.h.RequestHandlerBase java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 120000/120000 ms
at org.eclipse.jetty.server.HttpInput$ErrorState.noContent(HttpInput.java:1085)
at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:318)
On both nodes we can also see the following error:
2021-04-09 08:21:00.812 INFO (qtp1632497828-35036) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [datacore_shard1_replica_n2] webapp=/solr path=/update params={update.distrib=TOLEADER&distrib.from=http://solr-1.solrcluster:8983/solr/datacore_shard1_replica_n1/&wt=javabin&version=2}{} 0 120770
2021-04-09 08:21:00.812 ERROR (qtp1632497828-35036) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.h.RequestHandlerBase java.io.IOException: Task queue processing has stalled for 90013 ms with 0 remaining elements to process.
at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.blockUntilFinished(ConcurrentUpdateHttp2SolrClient.java:501)
The stall time is set to 90000 ms.
Why are we getting these errors?
Why is it stalling for so long? Our average document size is 1 KB.
How can we resolve this problem?
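Not from the original post, but one pattern in the question worth flagging: every request carries commit=true, so a 1-million-record update issues thousands of concurrent hard commits, which is expensive and can back up the update executor. A minimal Python sketch of batching without per-request commits (host and collection names taken from the logs; the requests library is assumed):

import json
import requests

SOLR = "http://solr-0.solrcluster:8983/solr/datacore"  # host/collection from the logs above

def index_in_batches(docs, batch_size=500):
    # Send batches WITHOUT commit=true on each request.
    for i in range(0, len(docs), batch_size):
        batch = docs[i:i + batch_size]
        resp = requests.post(
            SOLR + "/update/json",
            data=json.dumps(batch),
            headers={"Content-Type": "application/json"},
            timeout=120,  # stay under the 120000 ms idle timeout seen in the error
        )
        resp.raise_for_status()
    # One explicit commit at the end instead of thousands of intermediate ones.
    requests.get(SOLR + "/update", params={"commit": "true"}).raise_for_status()

Relying on autoCommit/autoSoftCommit in solrconfig.xml instead of client-side commits is the usual recommendation for bulk loads.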

KeyTemplateRetriever Failed retrieving throttling data WSO2

I was setting up the prepackaged WSO2 Identity Server with WSO2 API Manager and was also configuring the domain URLs.
Now, while starting WSO2 API Manager, the error below is printed in the logs:
[2017-05-12 05:37:35,237] INFO - CarbonEventManagementService Starting polling event receivers
[2017-05-12 05:37:55,848] WARN - KeyTemplateRetriever Failed retrieving throttling data from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
[2017-05-12 05:37:55,850] WARN - BlockingConditionRetriever Failed retrieving Blocking Conditions from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
[2017-05-12 05:38:01,861] WARN - FileSystemPreferences Could not lock System prefs. Unix error code 32693.
[2017-05-12 05:38:01,861] WARN - FileSystemPreferences Couldn't flush system prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
[2017-05-12 05:38:10,877] WARN - KeyTemplateRetriever Failed retrieving throttling data from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
[2017-05-12 05:38:10,878] WARN - BlockingConditionRetriever Failed retrieving Blocking Conditions from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
[2017-05-12 05:38:25,940] WARN - BlockingConditionRetriever Failed retrieving Blocking Conditions from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
[2017-05-12 05:38:25,940] WARN - KeyTemplateRetriever Failed retrieving throttling data from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
Can anyone tell me what I could have done wrong? Maybe I made some configuration mistake. Where should I check to find the problem?
Could it be an SSL issue? I have not yet set up SSL.
Yes, this looks like an SSL handshake failure, especially since you have used hostnames. The default certificates that come with WSO2 servers are created for localhost.
You can try creating self-signed certificates for the APIM and IS hostnames. Then export the public cert of APIM into the trust-store.jks of IS, and vice versa. This should resolve the SSL handshake failure.
What happens is that when APIM boots up, it makes an HTTPS call to a web app in the Key Manager (throttle data at KM_URL/throttle/data/v1/keyTemplates). APIM derives the URL of the KM from the URL configured in api-manager.xml.
You are seeing the error
WARN - KeyTemplateRetriever Failed retrieving throttling data from remote endpoint: Received fatal alert: handshake_failure. Retrying after 15 seconds...
because APIM cannot complete this call to retrieve throttle data from the KM.
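Not from the original answer, but the handshake can be verified independently of APIM with a small Python sketch (the host km.example.com and port 9443 are placeholders for your Key Manager endpoint):

import socket
import ssl

HOST, PORT = "km.example.com", 9443  # placeholder Key Manager host/port

ctx = ssl.create_default_context()
# If you use self-signed certs, point the context at the exported public cert:
# ctx.load_verify_locations(cafile="km-public-cert.pem")
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake OK:", tls.version())
except ssl.SSLError as e:
    print("Handshake failed:", e)  # mirrors the handshake_failure alert in the logs

If the handshake fails here too, the certificate exchange between APIM and IS (exporting each server's public cert into the other's trust-store.jks, as described above) is the place to start.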

Unable to download large files from Sonatype Nexus

Nexus version 3.1.0-04
During a build, I receive the following error downloading an artifact from Nexus.
Download http://10.148.254.17:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.4.1/assertj-core-2.4.1.jar
:collection:extractIncludeTestProto FAILED
FAILURE: Build failed with an exception.
What went wrong:
Could not resolve all dependencies for configuration ':collection:testCompile'.
Could not download assertj-core.jar (org.assertj:assertj-core:2.4.1)
Could not get resource 'http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.4.1/assertj-core-2.4.1.jar'.
Premature end of Content-Length delimited message body (expected: 900718; received: 6862)
This appears to be a problem with large files stored in Nexus.
If I try to download the file via wget or curl, it also fails.
c:>wget http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
--13:57:06-- http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
=> `assertj-core-2.5.0.jar'
Resolving proxy.xxxx.com... done.
Connecting to proxy.xxxx.com[xxx.xxx.xxx.xxx]:xxx... connected.
Proxy request sent, awaiting response... 200 OK
Length: 934,446 [application/java-archive]
0% [ ] 6,856 1.44K/s ETA 10:27
13:57:21 (1.44 KB/s) - Connection closed at byte 6856. Retrying.
c:>curl -O http://xxx.xxx.xxx.xxx:8081/nexus/content/repositories/central/org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 912k 0 6862 0 0 613 0 0:25:24 0:00:11 0:25:13 613
curl: (18) transfer closed with 927584 bytes remaining to read
Any ideas why?
In my case, a Docker layer was being blocked. I solved the problem by increasing the timeout under System > HTTP > Connection/Socket timeout.
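As a hedged illustration of the failure mode (not from the original answer): the "Premature end of Content-Length delimited message body" error means the server advertised a length and then closed the connection early. A small Python sketch that makes this visible (URL pattern copied from the question with a placeholder host; assumes the requests library):

import requests

URL = ("http://nexus.example.com:8081/nexus/content/repositories/central/"
       "org/assertj/assertj-core/2.5.0/assertj-core-2.5.0.jar")  # placeholder host

resp = requests.get(URL, stream=True, timeout=60)
expected = int(resp.headers.get("Content-Length", 0))
received = 0
try:
    for chunk in resp.iter_content(chunk_size=8192):
        received += len(chunk)
except requests.exceptions.RequestException:
    pass  # connection dropped mid-body, as in the wget/curl output above

print(f"expected {expected} bytes, received {received}")

If the received count is consistently stuck at the same small number (6856/6862 above), look for a proxy or timeout between the client and Nexus cutting the stream, which is what the timeout change above addresses.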
