HTSQL Select widget not rendering results - htsql

When I attempt to render a select widget using HTRAF, I get an "Uncaught TypeError: Cannot read property 'length' of undefined" on line 776 of jquery.htraf.js. The data-htsql attribute contains a proper query, and I can see the results when I run it manually. The problem seems to be that the data and meta properties are not filled in from the Ajax response; lines 43 and 44 return undefined up the call stack of jquery.htraf.js. Is there some configuration I need to apply to the HTSQL service so that it renders the JSON in a form compatible with the HTRAF library?
I have started the htsql service with the following command and yaml file:
htsql-ctl serve -C casemart.yaml &
htsql:
  db:
    engine: mysql
    database: casemart
    username: root
    password: *********
    host: localhost
    port: 3306
tweak.autolimit:
  limit: 1000
tweak.shell.default:
tweak.override:
  included-tables: [casemart.*]
  foreign-keys:
    - sfcase(account_id) -> sfaccount(id)
    - sfcase(owner_id) -> sfuser(id)
    - sfcase(createdby_id) -> sfuser(id)
    - sfcase(closed_date_dim_id) -> date_dim(date_id)
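As a first check, it can help to hit the service directly and confirm whether the JSON payload actually contains the meta and data properties that HTRAF is looking for. The query, port and formatter below are assumptions based on the default htsql-ctl serve settings, so adjust them for your setup:
curl 'http://localhost:8080/sfcase{id}/:json'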

Related

hydra - structured config group/package override via yaml

I'm trying, without much success, to figure out how to achieve an override of a group/package with and via a yaml file. I'll explain my problem using the example (files and folder structure) from the Hydra documentation: https://hydra.cc/docs/tutorials/structured_config/schema/.
config.yaml as:
defaults:
  - base_config   # --> reference to dataclass
  - db: base_mysql   # --> reference to dataclass
  - _self_
debug: true
gives the expected output when running myapp.py:
db:
  driver: mysql
  host: localhost
  port: 3306
  user: ???
  password: ???
Using the yaml file instead of the base_mysql dataclass is also fine, so with config.yaml as:
defaults:
  - base_config
  - db: mysql   # --> reads db/mysql.yaml
  - _self_
debug: true
prints again as expected
db:
  driver: mysql
  host: localhost
  port: 3306
  user: omry
  password: secret
Overriding individual fields works fine as well, e.g. with config.yaml like:
defaults:
  - base_config
  - db: mysql
  - _self_
debug: true
db:
  password: UpdatedPassword
What I'm not able to figure out is how to override the full db group with/via another yaml file, i.e. defining the structure via a dataclass and then overriding/setting the values like:
defaults:
  - base_config
  - db: base_mysql   # --> reference to dataclass to define the structure
  - _self_
debug: true
db: mysql   # --> mysql.yaml
throws the following error:
In 'config': Validation error while composing config:
Merge error: str is not a subclass of MySQLConfig. value: mysql
full_key:
object_type=Config
Searching the internet/Stack Overflow already showed me that moving _self_ to the first position gets rid of the error, but then the composition order is "wrong".
Keeping the order as it is and using mysql.yaml for an override works well when done via the command line (python myapp.py db=mysql, with the line "db: mysql" removed from the file), but for my use case it is much more convenient to handle it all via the yaml file(s).
I assume that the same functionality is available via the CLI and via files/code, and that I just haven't managed to figure out how it works.
(hydra version 1.1 in a conda environment with python 3.9)
Thank you very much in advance for any help that you can provide.
If I understand correctly, you want to use the defaults list in your primary yaml file to merge together the base_mysql config with the mysql config. This will do the trick:
defaults:
  - base_config
  - db: [base_mysql, mysql]
  - _self_
debug: true
Passing a list of config names, [base_mysql, mysql], causes those configs (base_mysql and mysql) to be merged together. This is documented in Hydra's defaults list documentation; see the "CONFIG_NAMES" alternative for specifying an option in the defaults list.
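With that defaults list, the composed config should print roughly the following (a sketch assuming the tutorial's dataclasses and the db/mysql.yaml values shown above):
db:
  driver: mysql
  host: localhost
  port: 3306
  user: omry
  password: secret
debug: true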
Note that passing the CLI override db=mysql (as in python myapp.py db=mysql) results in modification of the defaults list; the resulting defaults list will be the same as if you had used the following in your yaml file:
defaults:
  - base_config
  - db: mysql
  - _self_
debug: true
You can pass a list [base_mysql, mysql] of config names at the CLI like this:
python my_app.py 'db=[base_mysql, mysql]'

How to write airflow logs to Elasticsearch?

I am using Airflow 1.10.5. I can't seem to find complete documentation or a sample of how to set up remote logging with Elasticsearch. I saw the Airflow documentation about logging, but it wasn't helpful. I am trying to write the Airflow (not task) logs to ES.
As far as I understand the docs, the ES log handler can only read from ES. You would have to set up your logging to print to a file, then use something like filebeat to post the file contents to ES; Airflow can then read them back...
https://airflow.readthedocs.io/en/stable/howto/write-logs.html#writing-logs-to-elasticsearch
Writing Logs to Elasticsearch
Airflow can be configured to read task logs from Elasticsearch and optionally write logs to stdout in standard or json format. These logs can later be collected and forwarded to the Elasticsearch cluster using tools like fluentd, logstash or others.
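For the writing side, the relevant switches live in airflow.cfg. The snippet below is only a rough sketch for Airflow 1.10.x; the exact key names have shifted between minor releases, so verify them against the default config shipped with your version:
[core]
# task logs keep being written as files under this folder
base_log_folder = /path/to/logs

[elasticsearch]
# also emit task logs to stdout as JSON so a shipper (filebeat, fluentd, ...) can pick them up
write_stdout = True
json_format = True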
I was able to achieve this using the filebeat shipper.
Input config section in filebeat.yml
<snip>
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path/to/logs/*.log
</snip>
Output config section in filebeat.yml
<snip>
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "changeme"
</snip>
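With both sections in place, you can run filebeat once in the foreground to verify that the log files are being picked up and shipped:
filebeat -e -c filebeat.yml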
Good doc to read especially about airflow --> ES.

Task fails due to not being able to read log file

Composer is failing a task because it cannot read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I tried viewing the file in the Google Cloud console and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, it seems to have text overwriting other text.
I can't show the entire file, but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
Where the #-#{} pieces seem to be "on top of" the typical log.
I faced the same problem. In my case the problem was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name:
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
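For reference, the full set of remote-logging settings usually looks roughly like this (a sketch for the Airflow 1.10.x versions used by Composer; the bucket path is a placeholder):
[core]
remote_logging = True
remote_base_log_folder = gs://bucket/logs
remote_log_conn_id = google_cloud_default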
I'm having a similar problem with viewing logs in GCP Cloud Composer. It doesn't appear to be preventing the failing DAG task from running, though. It looks like a permissions error between GKE and the storage bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket, in the same directory as your /dags folder, where you should also see a logs/ folder.
Your Helm chart should set up a global env:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
  value: "google-cloud-platform://"
Then, you should deploy a Dockerfile with the root account only (not the airflow account); additionally, set up your Helm uid and gid as:
uid: 50000   # airflow user
gid: 50000   # airflow group
Then upgrade the Helm chart with the new config.
*** Unable to read remote log from gs://bucket
1) I found the solution after assigning the required roles to the service account.
2) The SA key (json or txt) has to be added and configured on the connection referenced by remote_log_conn_id = google_cloud_default.
3) Restart the Airflow scheduler and webserver.
4) Restart the DAGs in Airflow.
You can then find the logs in the GCS bucket where they are configured.

How to handle expected prompt in ansible ios_config module

I am trying to implement a couple of simple commands on Cisco IOS devices using Ansible (the ios_config module).
In particular, I want to remove a user profile, but that requires answering a prompt, and I am getting a timeout error...
I have noticed that there are prompt/answer parameters in the ios_command module, but it seems they are not supported in the ios_config module.
Has anyone run into a similar problem?
Ansible Task:
- name: remove user on remote devices
  ios_config:
    lines:
      - no username testuser
    provider: "{{ provider }}"
Output from Cisco device:
Cisco_Router(config)#no username testuser
This operation will remove all username related configurations with same name.Do you want to continue? [confirm]
Playbook output:
TASK [remove user on remote devices] *************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: timeout trying to send command: end
fatal: [Cisco_Router]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_3_OlXK/ansible_module_ios_config.py\", line 583, in <module>\n main()\n File \"/tmp/ansible_3_OlXK/ansible_module_ios_config.py\", line 512, in main\n load_config(module, commands)\n File \"/tmp/ansible_3_OlXK/ansible_modlib.zip/ansible/module_utils/network/ios/ios.py\", line 168, in load_config\n File \"/tmp/ansible_3_OlXK/ansible_modlib.zip/ansible/module_utils/connection.py\", line 149, in __rpc__\nansible.module_utils.connection.ConnectionError: timeout trying to send command: end\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Starting with Ansible 2.4 there is an ios_user module that can be used to create, edit and remove users.
Removing a specific user with state: absent
- name: set user view/role
  ios_user:
    name: testuser
    state: absent
    provider: "{{ provider }}"
The full documentation and further examples can be found at: https://docs.ansible.com/ansible/latest/modules/ios_user_module.html
_command modules and prompts
The various _command modules, including ios_command, support passing prompts.
For example:
- name: run commands that require answering a prompt
  ios_command:
    commands:
      - command: 'clear counters GigabitEthernet0/1'
        prompt: 'Clear "show interface" counters on this interface \[confirm\]'
        answer: 'y'
      - command: 'clear counters GigabitEthernet0/2'
        prompt: '[confirm]'
        answer: "\r"
See https://docs.ansible.com/ansible/latest/modules/ios_command_module.html for further info.
It seems the prompt waits for a confirmation, so you need to confirm the command with a second line; you likely have to do something like this:
- name: remove user on remote devices
  ios_config:
    lines:
      - no username testuser
      - yes
    provider: "{{ provider }}"
I have tried this as well.
It seems that the ios_config module is looking for a hostname(config)# prompt after executing each line. That's why the second line is not processed at all, and I got the same notification: timeout.

Spring XD - Could not find module with name 'ftphdfs' and type 'source'

I am running a spring-xd-1.3.1.RELEASE runtime container. When I tried to get module info for a source that transfers files from FTP to HDFS, I got the exception in the shell which is given below.
xd:>module info --name source:ftphdfs
Command failed org.springframework.xd.rest.client.impl.SpringXDException: Could
not find module with name 'ftphdfs' and type 'source'
Also, when I tried to use an http endpoint as the source, I get an exception like this in the shell, which is given below.
xd:>module info --name source:http
Information about source module 'http':
Injects data from http endpoint.
Option Name            Description / Default / Type
---------------------  --------------------------------------------------------------------------------
https                  true for https://
                       (default: false, type: boolean)

maxContentLength       the maximum allowed content length
                       (default: 1048576, type: int)

messageConverterClass  the name of a custom MessageConverter class, to convert HttpRequest to Message;
                       must have a constructor with a 'MessageBuilderFactory' parameter
                       (default: org.springframework.integration.x.http.NettyInboundMessageConverter,
                       type: java.lang.String)

port                   the port to listen to
                       (default: 9000, type: int)

sslPropertiesLocation  location (resource) of properties containing the location of the pkcs12 keyStore
                       and pass phrase
                       (default: classpath:httpSSL.properties, type: java.lang.String)

outputType             how this module should emit messages it produces
                       (default: <none>, type: org.springframework.util.MimeType)
The tech stack I'm currently using is given below.
1) Hadoop 2.7.2
2) Spring-XD-1.3.1.RELEASE
3) Redis 2.6 (Windows Version) - I use this as a transport
4) Zoo-Keeper 3.8
Any help would be appreciated.
It's a job, not a stream source...
xd:>module info job:ftphdfs
Information about job module 'ftphdfs':
...
I don't see an exception for source:http above - just a description of the source.
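So instead of composing it into a stream, you would create and launch it as a job in the XD shell, roughly like this (the job name is arbitrary and the definition is left with its defaults for brevity; in practice you would supply the FTP/HDFS options that module info job:ftphdfs lists):
xd:>job create --name ftpToHdfsJob --definition "ftphdfs" --deploy
xd:>job launch ftpToHdfsJob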
