Failed to bind handlers: DNS Error: Not Implemented; retrying in 2 s - yagna

When trying to start Yagna I receive this error. What can I do? I can probably provide DEBUG logs if needed.
[2021-05-06T08:45:08Z INFO yagna] Starting yagna service! Version: 0.6.4 (4fc72117 2021-04-15 build #135).
Log is written to /home/user/.local/share/yagna/yagna_rCURRENT.log
[2021-05-06T08:45:08Z INFO yagna] Data directory: /home/user/.local/share/yagna
[2021-05-06T08:45:08Z INFO ya_sb_router::unix] Router listening on: "/tmp/yagna.sock"
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/yagna.db
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/market.db
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/activity.db
[2021-05-06T08:45:08Z INFO ya_persistence::executor] using database at: /home/user/.local/share/yagna/payment.db
[2021-05-06T08:45:08Z INFO ya_identity::service::identity] using default identity: 0xf5ecffecf053508fe97255e046a04ce21c8ee525
[2021-05-06T08:45:08Z INFO yagna] Identity GSB service successfully activated
[2021-05-06T08:45:08Z INFO ya_metrics::pusher] Metrics pusher started
[2021-05-06T08:45:08Z INFO yagna] Metrics GSB service successfully activated
[2021-05-06T08:45:08Z INFO ya_service_bus::remote_router] trying to connect to: /tmp/yagna.sock
[2021-05-06T08:45:08Z INFO ya_service_bus::connection] started connection to gsb
[2021-05-06T08:45:08Z INFO ya_metrics::pusher] Starting metrics pusher
[2021-05-06T08:45:10Z INFO yagna] Version GSB service successfully activated
[2021-05-06T08:45:10Z INFO ya_net::service] using default identity as network id: 0xf5ecffecf053508fe97255e046a04ce21c8ee525
[2021-05-06T08:45:10Z WARN ya_net::handler] Failed to bind handlers: DNS Error: Not Implemented; retrying in 2 s
[2021-05-06T08:45:12Z WARN ya_net::handler] Failed to bind handlers: DNS Error: Not Implemented; retrying in 4 s
[2021-05-06T08:45:16Z WARN ya_net::handler] Failed to bind handlers: DNS Error: Not Implemented; retrying in 8 s
EDIT: nslookup
Server: 10.139.1.1
Address: 10.139.1.1#53
** server can't find _net._tcp.dev.golem.network: NOTIMP

I'm not sure what the reason is here, but it seems DNS is unable to resolve the _net._tcp.dev.golem.network SRV record, yielding 'Not Implemented'. That is very odd, since Yagna uses Google's DNS servers by default.
When you face this again, please check the output of the following command:
nslookup -q=SRV _net._tcp.dev.golem.network 8.8.8.8
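If dig is available, the same SRV lookup can be cross-checked against Google's resolver and the local one (8.8.8.8 is just an example public resolver; the record name is the one Yagna resolves at startup):
dig @8.8.8.8 _net._tcp.dev.golem.network SRV +short
dig _net._tcp.dev.golem.network SRV +short
If the first query returns records while the second fails with NOTIMP, the local resolver (10.139.1.1 above) or a proxy in front of it is the likely culprit.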

The user has trouble reaching Google's DNS with nslookup, so it appears to be an issue on their end. They are also using a proxy for their connection, so the failure most likely happens somewhere in there. Closing thread.

Related

Got NumberFormatException For input string: "0:0:0:0:0:0:0:1" for WSO2

I have an API that has been published in the WSO2 API Gateway.
When I test the API, I get this error message in the console:
Exception in thread "pool-65-thread-1"
java.lang.NumberFormatException: For input string: "0:0:0:0:0:0:0:1"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.ipToLong_aroundBody512(APIUtil.java:7851)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.ipToLong(APIUtil.java:7847)
at org.wso2.carbon.apimgt.gateway.throttling.publisher.DataProcessAndPublishingAgent.run_aroundBody4(DataProcessAndPublishingAgent.java:155)
at org.wso2.carbon.apimgt.gateway.throttling.publisher.DataProcessAndPublishingAgent.run(DataProcessAndPublishingAgent.java:141)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
According to a blog post, IPv6 needs to be disabled.
I disabled IPv6 through the registry and added the corresponding option to JAVA_OPTS in the wso2server.bat file.
Then I restarted, but the listeners still show an IPv6 address:
[2020-01-15 14:40:21,171] INFO - PassThroughListeningIOReactorManager Pass-through HTTP Listener started on 0:0:0:0:0:0:0:0:8280
[2020-01-15 14:40:21,172] INFO - PassThroughHttpMultiSSLListener Starting Pass-through HTTPS Listener...
[2020-01-15 14:40:21,192] INFO - PassThroughListeningIOReactorManager Pass-through HTTPS Listener started on 0:0:0:0:0:0:0:0:8243
[2020-01-15 14:40:21,449] INFO - TaskServiceImpl Task service starting in STANDALONE mode...
[2020-01-15 14:40:21,509] INFO - RegistryEventingServiceComponent Successfully Initialized Eventing on Registry
[2020-01-15 14:40:21,652] INFO - JMXServerManager JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi
When I run it again, I get the same error.
Can anyone help?
In the latest API Manager versions, we have added IPv6 support for throttling use cases. If you take the latest pack or a WUM-updated pack of your current version, you should not see this issue.
As a workaround, you could also set an IPv4 address in the X-Forwarded-For header when invoking the API.
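For example, with curl (the gateway host, API context, and token below are placeholders, not taken from this thread):
curl -k -H "Authorization: Bearer <access-token>" -H "X-Forwarded-For: 192.168.1.10" https://<gateway-host>:8243/<api-context>/<version>/<resource>
The idea is that the throttling data publisher then picks up the IPv4 address from the header instead of the IPv6 remote address.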

Task fails due to not being able to read log file

Composer is failing a task because it is unable to read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I tried viewing the file in the Google Cloud console and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, it seems to have text overwriting other text.
I can't show the entire file but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
The #-#{} pieces seem to be "on top of" the typical log lines.
I faced the same problem. In my case the problem was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration (airflow.cfg) and look for the connection name:
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
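One way to sanity-check this (bucket name, key path, and log path are placeholders) is to activate the connection's service-account key and try reading a log object directly:
gcloud auth activate-service-account --key-file=/path/to/connection-key.json
gsutil ls gs://<composer-bucket>/logs/
gsutil cat gs://<composer-bucket>/logs/<dag_id>/<task_id>/<execution_date>/1.log
If these fail with an access error, the service account behind google_cloud_default is missing read permission on the bucket.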
I'm having a similar problem with viewing logs in GCP Cloud Composer. It doesn't appear to prevent the failing DAG task from running, though. It looks like a permissions issue between GKE and the storage bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket: in the same directory as your /dags folder you should also see a logs/ folder.
Your Helm chart should set up a global env:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
value: "google-cloud-platform://"
Then you should deploy a Dockerfile with the root account only (not the airflow account); additionally, set your Helm uid and gid as:
uid: 50000 #airflow user
gid: 50000 #airflow group
Then upgrade the Helm chart with the new config, for example:
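A sketch of the upgrade, assuming a release named airflow and a local values.yaml (both are placeholders for your setup):
helm upgrade airflow <your-chart-or-repo/chart> -f values.yaml
Once the pods restart, the webserver should pick up AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT and be able to fetch the remote logs.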
*** Unable to read remote log from gs://bucket
1) Found the solution after assigning the required roles to the service account (see the example after this list).
2) Add the SA key (JSON or txt) and configure it on the connection referenced by remote_log_conn_id = google_cloud_default.
3) Restart the Airflow scheduler and webserver.
4) Restart the DAGs in Airflow.
You can then find the logs in the GCS bucket where they are configured.
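For step 1, granting read access on the log bucket typically looks something like this (the project ID and service-account email are placeholders to adapt):
gcloud projects add-iam-policy-binding <project-id> --member="serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com" --role="roles/storage.objectViewer"
roles/storage.objectViewer is enough for reading the logs; writing logs from the workers additionally needs a role that allows object creation.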

Visual Studio code-server not working with Compute Engine

I host a website with Nginx on Debian in Google Cloud.
When I install code-server and visit my_ip:8443, Chrome replies with ERR_CONNECTION_TIMED_OUT.
When I execute the command code-server:
INFO code-server v1.1156-vsc1.33.1
INFO Additional documentation: http://github.com/cdr/code-server
INFO Initializing {"data-dir":"/home/naji/.local/share/code-server","extensions-dir":"/home/naji/.local/share/code-server/extensions","working-dir":"/","log-dir":"/home/naji/.cache/code-server/logs/20190723145420188"}
INFO Starting webserver... {"host":"0.0.0.0","port":8443}
WARN No certificate specified. This could be insecure.
WARN Documentation on securing your setup: https://github.com/cdr/code-server/blob/master/doc/security/ssl.md
INFO
INFO Password: ******************
INFO
INFO Started (click the link below to open):
INFO https://localhost:8443/
INFO
INFO Starting shared process [1/5]...
WARN stderr {"data":"(node:17025) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.\n"}
INFO Connected to shared process
What is the solution?

ICp 2.1.0.1: Installation failed with error TASK [master: Waiting for MariaDB service to start]

I am installing ICp 2.1.0.1 and received an error at the task [master: Waiting for MariaDB service to start] with msg: The MariaDB component failed to start.
After this message the installation completed with a failed status.
We are installing ICp with 3 masters, 3 proxies, and 2 workers. We have 1 IP for the master VIP and 1 for the proxy VIP.
I tried installing multiple times and every installation failed with the same error.
In prior cases of this error, the correct DB admin password was not used, so check the DB user and password to resolve the issue.
Would you validate whether each master host was able to access port 3306 on the other hosts?
If you run with .. install -vv | tee -a install-log.txt, do you get additional details as well?
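For the port check, something like this from each master host should do (the target IP is a placeholder):
nc -zv <other-master-ip> 3306
A successful connection confirms that MariaDB's port is reachable between the masters.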
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found the following error in kubelet.log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
We found this troubleshooting guide at the first link below, and the solution in ICP issue 4651 (second link).
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651
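For reference, if the fix is indeed to disable swap (as the kubelet error suggests), it typically amounts to the following on each affected node; the sed line assumes swap is declared in /etc/fstab:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo systemctl restart kubelet
Alternatively, the error message mentions starting kubelet with --fail-swap-on=false instead.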

Permission denied while using 'Kaa-Node restart'

I am working on an application; previously it worked and the data was persisted into MongoDB. But recently we had a change of router, so we regenerated the SDK and so on, yet we still have the connection error.
Error:
2017/01/26 9:24:27 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/01/26 9:24:27 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/01/26 9:24:27 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
So we went ahead with troubleshooting, and one of the staff I emailed passed me a link for troubleshooting:
https://kaaproject.github.io/kaa/docs/v0.10.0/Administration-guide/Troubleshooting/
I followed it, but I got stuck at an error when running 'kaa-node restart' to restart the node service.
Here are the commands for troubleshooting:
Connect to your Kaa Sandbox via ssh:
$ ssh kaa@<YOUR-SANDBOX-IP>
password: kaa
Stop the Kaa service:
$ sudo service kaa-node stop
Clear the Kaa logs:
$ sudo rm -rf /var/log/kaa/*
Start the Kaa service:
$ sudo service kaa-node start
I typed 'sudo service kaa-node start' and it gave me:
kaa@kaa-sandbox.kaaproject.org:~$ sudo service kaa-node start
* Starting Kaa Node daemon (kaa-node):
/bin/bash: /var/log/kaa/kaa-node-server.init.log: Permission denied
Try verifying the Kaa host on the Management page. Also, the Sandbox Web UI (the Management page) is able to restart all the necessary Kaa services on the Sandbox after the Kaa host change.
Please note that the Kaa host should match the PC host IP address accessible from the network your applications are running in.
Please try and let me know if this works for you.
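If the 'Permission denied' on /var/log/kaa persists after updating the Kaa host, it may also be worth checking the ownership of that directory, since the init script needs to write kaa-node-server.init.log there. The chown below assumes the kaa-node service runs as the kaa user on the Sandbox, so verify the actual service user first:
$ ls -ld /var/log/kaa
$ sudo chown -R kaa:kaa /var/log/kaa
$ sudo service kaa-node restart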

Resources