I've edited my kibana.yml config file to allow remote access, using the DHCP IP address my router assigned over a bridged network adapter.
Kibana doesn't seem to accept connections on the assigned IP and port, though.
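For reference, the relevant kibana.yml settings are just these two (a minimal sketch; 10.0.0.137 is the address the server reports in the logs below):

server.host: "10.0.0.137"  # bind address; "0.0.0.0" would listen on all interfaces
server.port: 5601          # default Kibana port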
[root@localhost bin]# ./kibana --allow-root
^C^C log [14:36:15.000] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:15.025] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:15.084] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."
^C[root@localhost bin]# ./kibana --allow-root &
[1] 2499
[root@localhost bin]# log [14:36:23.872] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:23.878] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:23.960] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."
log [14:36:24.133] [info][plugins-system] Setting up [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:24.394] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
log [14:36:24.395] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
log [14:36:24.433] [warning][config][encryptedSavedObjects][plugins] Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml
log [14:36:24.439] [warning][ingestManager][plugins] Fleet APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.561] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [14:36:24.563] [warning][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011
OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'.
log [14:36:24.575] [warning][actions][actions][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.596] [warning][alerting][alerts][plugins][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.785] [info][monitoring][monitoring][plugins] config sourced from: production cluster
log [14:36:25.067] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
log [14:36:25.409] [info][savedobjects-service] Starting saved objects migrations
log [14:36:25.976] [info][plugins-system] Starting [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:25.978] [info][plugins][taskManager][taskManager] TaskManager is identified by the Kibana UUID: dbda794a-41a8-4223-b66f-b4fed95353db
log [14:36:26.302] [info][crossClusterReplication][plugins] Your basic license does not support crossClusterReplication. Please upgrade your license.
log [14:36:26.339] [info][plugins][watcher] Your basic license does not support watcher. Please upgrade your license.
log [14:36:26.340] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
[2021-01-16T09:36:26,422][INFO ][o.e.c.m.MetadataIndexTemplateService] [localhost.localdomain] adding template [.management-beats] for index patterns [.management-beats]
log [14:36:27.290] [info][listening] Server running at http://10.0.0.137:5601
log [14:36:28.153] [info][server][Kibana][http] http server running at http://10.0.0.137:5601
log [14:36:28.157] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.181] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.182] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.183] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [64])
log [14:36:28.184] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [4])
log [14:36:28.973] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
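The last lines show Kibana listening on http://10.0.0.137:5601, so if remote clients still can't connect, the host firewall is worth checking. A sketch for CentOS (the log above mentions CentOS 8.3.2011), assuming firewalld is in use:

firewall-cmd --permanent --add-port=5601/tcp   # open Kibana's port
firewall-cmd --reload                          # apply the permanent rule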
Cloud Composer is failing a task because it can't read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I try viewing the file in the Google Cloud console, and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, some text appears to be overlaid on other text.
I can't show the entire file, but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
Here the #-#{} pieces seem to be printed "on top of" the normal log lines.
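The 0xc2 byte in the error above is a typical UTF-8 lead byte, so the file is likely valid UTF-8 being decoded as ASCII. A quick local check after downloading (standard gsutil/file/iconv tools; the path is the one from the error):

gsutil cp gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log .
file 1.log                                       # reports the detected encoding
iconv -f utf-8 -t utf-8 1.log > /dev/null && echo "valid UTF-8"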
I faced the same problem. In my case, the cause was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name:
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
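If the permissions are missing, read access on the bucket is usually enough; a sketch where PROJECT_ID and SA_EMAIL are placeholders for your own values:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/storage.objectViewer"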
I'm having a similar problem with viewing logs in GCP Cloud Composer. It doesn't appear to prevent the failing DAG task from running, though. It looks like a permissions error between GKE and the storage bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket, in the same directory as your /dags folder, where you should also see a logs/ folder.
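For example, with gsutil (bucket name and the IDs in the path are placeholders):

gsutil ls gs://your-composer-bucket/logs/
gsutil cat gs://your-composer-bucket/logs/<dag_id>/<task_id>/<execution_date>/1.log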
Your Helm chart should set up a global env var:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
value: "google-cloud-platform://"
Then deploy a Dockerfile that uses the root account only (not the airflow account). Additionally, set your Helm uid and gid as:
uid: 50000   # airflow user
gid: 50000   # airflow group
Then upgrade the Helm chart with the new config.
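Put together, a minimal values override and the upgrade could look like this (the release and chart names are assumptions; adjust for your setup):

# values.yaml (sketch)
uid: 50000   # airflow user
gid: 50000   # airflow group
env:
  - name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
    value: "google-cloud-platform://"

helm upgrade airflow apache-airflow/airflow -f values.yaml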
*** Unable to read remote log from gs://bucket
1) Found the solution after assigning the proper roles to the service account.
2) The SA key (JSON or txt) needs to be added and configured to the connection in
remote_log_conn_id = google_cloud_default
3) Restart the scheduler and webserver of Airflow.
4) Restart the DAGs on Airflow.
You can find the logs in the GCS bucket where remote logging is configured.
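For reference, the remote-logging settings this answer relies on (under [core] in Airflow 1.x; they moved to [logging] in 2.x; the bucket path is a placeholder):

[core]
remote_logging = True
remote_base_log_folder = gs://your-bucket/logs
remote_log_conn_id = google_cloud_default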
I uploaded a DAG file to the web page, and when I click 'Graph View' -> ${my_dag} -> 'View Log', it shows:
*** Log file isn't local.
*** Fetching here: http://:8793/log/demo_dag/hello_task/2018-11-14T15:06:00
*** Failed to fetch log file from worker.
*** Reading remote logs...
*** Unsupported remote log location.
I have checked airflow.cfg and found this config:
worker_log_server_port = 8793
base_log_folder = /root/airflow/logs
My questions are:
How do I set the IP address for the log service (only the port is set up)?
I have set up a directory for the log service, so why does it still go to /log/..?
Any help is appreciated.
This can happen when the task status was manually changed (likely through the "Mark Success" option) and the task never received a hostname value on its record.
The webserver is then attempting to reach out to a server with no name (note the empty host in http://:8793/... above) to get logs for a task that never ran.
PS: Be careful running processes as the root user.
I've been getting this error and fixed it by correcting the socket volume path:
WARNING - OSError while attempting to symlink the latest log directory
On Windows, the volume needs a double slash, like this:
volumes:
- //var/run/docker.sock:/var/run/docker.sock
See: "Bind to docker socket on Windows" and "Setting up Airflow to run with Docker Swarm's orchestration".