I am getting the error below while importing a large database through Lando.
The process "mysql --defaults-file=/tmp/drush_w1wair --database=drupal8 --host=database --port=3306 -A"
exceeded the timeout of 14400 seconds.
Please help me configure the maximum execution time limit in .lando.yml.
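Update: from what I can tell, the "exceeded the timeout" message comes from the Symfony Process wrapper that Drush puts around mysql (hence the --defaults-file=/tmp/drush_* in the command), not from MySQL or Lando itself. One workaround I am considering is a custom tooling command that pipes the dump straight into mysql, bypassing Drush entirely. A sketch, assuming the drupal8 recipe's default database credentials; the command name db-import-raw is made up:

tooling:
  db-import-raw:            # hypothetical command name
    service: database
    description: Pipe a SQL dump straight into mysql, bypassing drush's process timeout
    cmd: mysql --user=drupal8 --password=drupal8 drupal8   # recipe-default credentials; adjust to yours

The idea would then be to run lando db-import-raw < dump.sql.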
We just installed Rancher Desktop 1.4.1 (nerdctl v0.20.0) on Windows 10, and we seem to have a problem pulling images and logging into a registry:
nerdctl pull alpine
docker.io/library/alpine:latest: resolving |--------------------------------------|
elapsed: 9.9 s total: 0.0 B (0.0 B/s)
INFO[0010] trying next host error="failed to do request: Head \"https://registry-1.docker.io/v2/library/alpine/manifests/latest\": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:47744->192.168.167.172:53: i/o timeout" host=registry-1.docker.io
FATA[0010] failed to resolve reference "docker.io/library/alpine:latest": failed to do request: Head "https://registry-1.docker.io/v2/library/alpine/manifests/latest": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:47744->192.168.167.172:53: i/o timeout
Trying to log in results in similar errors:
nerdctl --debug-full login registry-1.docker.io
/usr/local/bin/docker-credential-rancher-desktop: source: line 5: can't open '/etc/rancher/desktop/credfwd': No such file or directory
Enter Username: myusername
Enter Password:
DEBU[0030] Ignoring hosts dir "/etc/containerd/certs.d" error="stat /etc/containerd/certs.d: no such file or directory"
DEBU[0030] Ignoring hosts dir "/etc/docker/certs.d" error="stat /etc/docker/certs.d: no such file or directory"
DEBU[0030] len(regHosts)=1
ERRO[0040] failed to call tryLoginWithRegHost error="failed to call rh.Client.Do: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:36590->192.168.167.172:53: i/o timeout" i=0
FATA[0040] failed to call rh.Client.Do: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.167.172:53: read udp 192.168.167.172:36590->192.168.167.172:53: i/o timeout
It looks like nerdctl is having problems resolving hostnames; it always times out after 10 seconds.
Is there a way to explicitly configure hostname resolution in Rancher or nerdctl?
Any help would be appreciated.
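Update: for what it's worth, Rancher Desktop on Windows runs containerd inside a WSL2 distribution named rancher-desktop, so nerdctl's name resolution follows that distro's /etc/resolv.conf (presumably where the 192.168.167.172 resolver in the logs comes from). As a diagnostic, from PowerShell:

wsl -d rancher-desktop cat /etc/resolv.conf

# Test only: point the distro at a public resolver (the file is auto-generated,
# so this is not a permanent fix, and it may need root):
wsl -d rancher-desktop -u root sh -c 'echo "nameserver 1.1.1.1" > /etc/resolv.conf'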
I am getting the following error in my local Wikidata installation: java.lang.IllegalArgumentException: Buffering capacity 2097152 exceeded. It happens when I run runUpdater.sh.
How can I set the buffer size in the config? I don't see an option for it.
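Update: 2097152 bytes is exactly 2 MiB, the default cap of Jetty's BufferingResponseListener, so it looks like the updater's HTTP client is overflowing that response buffer. I could not find a config key for it either; the sketch below only shows how the cap would be raised at the Jetty API level if the client code were patched. The endpoint URL and the 16 MiB value are my assumptions, not from the updater's actual source:

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.Result;
import org.eclipse.jetty.client.util.BufferingResponseListener;

public class BufferCapDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.start();
        // BufferingResponseListener takes its maximum buffer size as a
        // constructor argument; the 2 MiB default is what produces
        // "Buffering capacity 2097152 exceeded".
        client.newRequest("http://localhost:9999/bigdata/namespace/wdq/sparql") // assumed local WDQS endpoint
              .send(new BufferingResponseListener(16 * 1024 * 1024) { // 16 MiB, arbitrary
                  @Override
                  public void onComplete(Result result) {
                      System.out.println("buffered " + getContent().length + " bytes");
                  }
              });
    }
}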
I am using Airflow in a Docker container. I run a DAG with multiple Jupyter notebooks, and I get the following error every time after 60 minutes:
[2021-08-22 09:15:15,650] {local_task_job.py:198} WARNING - State of this instance has been externally set to skipped. Terminating instance.
[2021-08-22 09:15:15,654] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 277
[2021-08-22 09:15:15,655] {taskinstance.py:1284} ERROR - Received SIGTERM. Terminating subprocesses.
[2021-08-22 09:15:18,284] {taskinstance.py:1501} ERROR - Task failed with exception
I tried to tweak the config file but could not find the right option to remove the one-hour timeout.
Any help would be appreciated.
The default is no timeout. When your DAG defines dagrun_timeout=timedelta(minutes=60) and the run exceeds 60 minutes, the active task is stopped and the message "State of this instance has been externally set to skipped" is logged.
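A minimal sketch of the fix on the DAG side; the DAG id and schedule are placeholders, not from the post:

from datetime import timedelta
from airflow import DAG
from airflow.utils.dates import days_ago

dag = DAG(
    dag_id="notebooks_pipeline",   # hypothetical name
    start_date=days_ago(1),
    schedule_interval=None,
    dagrun_timeout=None,           # the default: no run-level timeout
    # dagrun_timeout=timedelta(hours=4),  # or raise the limit instead of removing it
)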
I have a SolrCloud setup in Kubernetes with 2 Solr instances, 3 ZooKeeper instances, and 1 shard. It is configured with 8 GB of persistent storage for each Solr and ZooKeeper instance. The memory allocated to Solr is 16 GB, with a 10 GB heap. There are at most 2.5 million records indexed. A scheduler client calls Solr with the URL /update/json?wt=json&commit=true to do add/update/delete operations. Occasionally there is a huge update/delete involving 1 million records, which calls that API (/update/json?wt=json&commit=true) with 500 documents at a time, from multiple threads. Everything worked fine for a week, but then we suddenly saw errors in solr.log that put Solr into an error state, and I had to restart one of the Solr nodes. The errors are:
Node 1:
2021-04-09 08:20:56.657 ERROR (updateExecutor-5-thread-169-processing-x:datacore_shard1_replica_n1 r:core_node3 null n:solr-1.solrcluster:8983_solr c:datacore s:shard1) [c:datacore s:shard1 r:core_node3 x:datacore_shard1_replica_n1] o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling SolrCmdDistributor$Req: cmd=add{,id=S-170262-P-108028200-F-800001737-E-180905508}; node=ForwardNode: http://solr-0.solrcluster:8983/solr/datacore_shard1_replica_n2/ to http://solr-0.solrcluster:8983/solr/datacore_shard1_replica_n2/ => java.io.IOException: java.io.IOException: cancel_stream_error
at org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
java.io.IOException: java.io.IOException: cancel_stream_error
at org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193) ~[?:?]
Node 2:
2021-04-09 08:22:56.661 INFO (qtp1632497828-35124) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [datacore_shard1_replica_n2] webapp=/solr path=/update params={update.distrib=TOLEADER&distrib.from=http://solr-1.solrcluster:8983/solr/datacore_shard1_replica_n1/&wt=javabin&version=2}{} 0 119999
2021-04-09 08:22:56.661 ERROR (qtp1632497828-35124) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.h.RequestHandlerBase java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 120000/120000 ms
at org.eclipse.jetty.server.HttpInput$ErrorState.noContent(HttpInput.java:1085)
at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:318)
And on both nodes we can see the following error as well:
2021-04-09 08:21:00.812 INFO (qtp1632497828-35036) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [datacore_shard1_replica_n2] webapp=/solr path=/update params={update.distrib=TOLEADER&distrib.from=http://solr-1.solrcluster:8983/solr/datacore_shard1_replica_n1/&wt=javabin&version=2}{} 0 120770
2021-04-09 08:21:00.812 ERROR (qtp1632497828-35036) [c:datacore s:shard1 r:core_node4 x:datacore_shard1_replica_n2] o.a.s.h.RequestHandlerBase java.io.IOException: Task queue processing has stalled for 90013 ms with 0 remaining elements to process.
at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.blockUntilFinished(ConcurrentUpdateHttp2SolrClient.java:501)
The stall time is set to 90000 ms.
Why are we getting these errors?
Why is it stalling for so long? Our average document size is 1 KB.
How can we resolve this problem?
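Update: not a root-cause fix, but both limits in these logs map to system properties that can be set via SOLR_OPTS in solr.in.sh: the 120000 ms idle timeout is Jetty's solr.jetty.http.idleTimeout, and the stall window used by ConcurrentUpdateHttp2SolrClient is solr.cloud.client.stallTime (evidently already raised from its default to 90000 here). A hedged sketch, with example values only:

SOLR_OPTS="$SOLR_OPTS -Dsolr.jetty.http.idleTimeout=300000"   # raise Jetty's 120 s idle timeout
SOLR_OPTS="$SOLR_OPTS -Dsolr.cloud.client.stallTime=120000"   # widen the 90 s stall window

Given that every batch is sent with commit=true from multiple threads, it may also be worth checking whether overlapping commits are what is actually stalling the update queue.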
I'm running a fresh install of the Ocata release of OpenStack. When I try to launch an instance, it fails with a "no hosts available" error. I see the following in /var/log/nova-scheduler.log:
2017-07-24 20:29:39.464 14400 WARNING nova.scheduler.host_manager [req-6f4ff67c-a819-4378-b9f4-fb66c720513b 9879cf1dc8954d5a9606fb555beed1fb f2d743585bd84323959131e1aabd885b - - -] No compute service record found for host ip-10-0-0-180
2017-07-24 20:29:39.464 14400 WARNING nova.scheduler.host_manager [req-6f4ff67c-a819-4378-b9f4-fb66c720513b 9879cf1dc8954d5a9606fb555beed1fb f2d743585bd84323959131e1aabd885b - - -] No compute service record found for host ip-10-0-0-78
2017-07-24 20:29:39.465 14400 INFO nova.filters [req-6f4ff67c-a819-4378-b9f4-fb66c720513b 9879cf1dc8954d5a9606fb555beed1fb f2d743585bd84323959131e1aabd885b - - -] Filter RetryFilter returned 0 hosts
2017-07-24 20:29:39.465 14400 INFO nova.filters [req-6f4ff67c-a819-4378-b9f4-fb66c720513b 9879cf1dc8954d5a9606fb555beed1fb f2d743585bd84323959131e1aabd885b - - -] Filtering removed all hosts for the request with instance ID 'ba626933-5a3c-4661-ace4-ce8b59cc5505'. Filter results: ['RetryFilter: (start: 0, end: 0)']
I do have two compute hosts, and their FQDNs are ip-10-0-0-180.eu-central-1.compute.internal and ip-10-0-0-78.eu-central-1.compute.internal. Where is nova finding the hostname "ip-10-0-0-180"? I've looked in the nova and nova_api databases (I used mysqldump to dump them), and no records with just ip-10-0-0-180 (or ip-10-0-0-78) exist anywhere (anymore). Why is nova-scheduler trying to find a "service record" for this hostname?
In any case, I've been unable to launch even one instance as all requests fail with "Error: No valid host was found. There are not enough hosts available."
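Update: one thing I am checking: nova-compute registers itself under the host option from nova.conf, which defaults to the node's hostname at the time the service starts, so the short names may come from there rather than from any database row. Roughly:

openstack compute service list    # shows the Host column nova actually registered

# /etc/nova/nova.conf on each compute node, then restart nova-compute
# (the value is just my own FQDN, shown as an example; note that changing it
# registers a new service record, so the old one may need cleaning up):
[DEFAULT]
host = ip-10-0-0-180.eu-central-1.compute.internal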