I've searched everywhere for a solution but I'm stuck.
My setup is the following:
The ESP32 uses a BLE GATT notification characteristic to push temperature data through the ThingsBoard gateway into ThingsBoard. Once the BLE connection is established, the first telemetry package is shown in the freshly created device's 'Latest telemetry' area. If I turn on gateway debugging I can see further notifications reaching ThingsBoard, like this:
{"LOGS":"2020-07-20 02:04:19,640 - DEBUG - [ble_connector.py] - ble_connector - 321 - Notification received from device {'device_config': {'name': 'Esp32 v2.2', 'MACAddress': '24:62:AB:F3:43:72', 'telemetry': [{'key': 'temperature', 'method': 'notify', 'characteristicUUID': '0972EF8C-7613-4075-AD52-756F33D4DA91', 'byteFrom': 0, 'byteTo': -1}], 'attributes': [{'key': 'name', 'characteristicUUID': '00002A00-0000-1000-8000-00805F9B34FB', 'method': 'read', 'byteFrom': 0, 'byteTo': -1}], 'attributeUpdates': [{'attributeOnThingsBoard': 'sharedName', 'characteristicUUID': '00002A00-0000-1000-8000-00805F9B34FB'}], 'serverSideRpc': [{'methodRPC': 'rpcMethod1', 'withResponse': True, 'characteristicUUID': '00002A00-0000-1000-8000-00805F9B34FB', 'methodProcessing': 'read'}]}, 'interest_uuid': {'00002A00-0000-1000-8000-00805F9B34FB': [{'section_config': {'key': 'name', 'characteristicUUID': '00002A00-0000-1000-8000-00805F9B34FB', 'method': 'read', 'byteFrom': 0, 'byteTo': -1}, 'type': 'attributes', 'converter': <thingsboard_gateway.connectors.ble.bytes_ble_uplink_converter.BytesBLEUplinkConverter object at 0xb4427eb0>}], '0972EF8C-7613-4075-AD52-756F33D4DA91': [{'section_config': {'key': 'temperature', 'method': 'notify', 'characteristicUUID': '0972EF8C-7613-4075-AD52-756F33D4DA91', 'byteFrom': 0, 'byteTo': -1}, 'type': 'telemetry', 'converter': <thingsboard_gateway.connectors.ble.bytes_ble_uplink_converter.BytesBLEUplinkConverter object at 0xb4427eb0>}]}, 'scanned_device': <bluepy.btle.ScanEntry object at 0xb443a290>, 'is_new_device': False, 'peripheral': <bluepy.btle.Peripheral object at 0xb58f0070>, 'services': {'00001801-0000-1000-8000-00805F9B34FB': {'00002A05-0000-1000-8000-00805F9B34FB': {'characteristic': <bluepy.btle.Characteristic object at 0xb443a210>, 'handle': 2}}, '00001800-0000-1000-8000-00805F9B34FB': {'00002A00-0000-1000-8000-00805F9B34FB': {'characteristic': <bluepy.btle.Characteristic object at 0xb443a270>, 'handle': 21}, '00002A01-0000-1000-8000-00805F9B34FB': {'characteristic': <bluepy.btle.Characteristic object at 0xb443a1d0>, 'handle': 23}, '00002AA6-0000-1000-8000-00805F9B34FB': {'characteristic': <bluepy.btle.Characteristic object at 0xb443a2b0>, 'handle': 25}}, 'AB0828B1-198E-4351-B779-901FA0E0371E': {'0972EF8C-7613-4075-AD52-756F33D4DA91': {'characteristic': <bluepy.btle.Characteristic object at 0xb443a6b0>, 'handle': 41}, '4AC8A682-9736-4E5D-932B-E9B31405049C': {'characteristic': <bluepy.btle.Characteristic object at 0xb443a5f0>, 'handle': 44}}}} handle: 42, data: b'25.00'"}
The data I would like to see updated is the string '25.00'.
I know I could update ThingsBoard directly, but it is the use of BLE that I'm interested in, because I like that the sensors are network agnostic.
My question is: why does the updated temperature not show up, even though it reaches ThingsBoard, and what can I change to make it happen?
Any kind of help is much appreciated. I've been wrestling with this the entire weekend.
Adding more clarifications:
The ESP32 code that generates the BLE notifications: https://pastebin.com/NqMfxsK6
The gateway's BLE connector configuration:
{
  "name": "BLE Connector",
  "rescanIntervalSeconds": 100,
  "checkIntervalSeconds": 10,
  "scanTimeSeconds": 5,
  "passiveScanMode": true,
  "devices": [
    {
      "name": "Temperature and humidity sensor",
      "MACAddress": "24:62:AB:F3:43:72",
      "telemetry": [
        {
          "key": "temperature",
          "method": "notify",
          "characteristicUUID": "0972EF8C-7613-4075-AD52-756F33D4DA91",
          "byteFrom": 0,
          "byteTo": -1
        }
      ],
      "attributes": [
        {
          "key": "name",
          "characteristicUUID": "00002A00-0000-1000-8000-00805F9B34FB",
          "method": "read",
          "byteFrom": 0,
          "byteTo": -1
        }
      ],
      "attributeUpdates": [
        {
          "attributeOnThingsBoard": "sharedName",
          "characteristicUUID": "00002A00-0000-1000-8000-00805F9B34FB"
        }
      ]
    }
  ]
}
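For what it's worth, this is my understanding of what should come out of the converter for a notification. It is only an illustrative sketch based on the byteFrom/byteTo values in the config above, not the gateway's actual converter code:

payload = b"25.00"          # the data seen in the notification log above
byte_from, byte_to = 0, -1  # values from the telemetry section of the config

# Assumption on my part: byteTo = -1 means "up to the end of the payload".
sliced = payload[byte_from:] if byte_to == -1 else payload[byte_from:byte_to]
telemetry = {"temperature": sliced.decode("ascii")}
print(telemetry)  # {'temperature': '25.00'}

Which is exactly what I see for the first notification only.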
Related
I am creating an emr_default connection through a DAG (I don't create it using the UI). AWS credentials are already defined in the UI. The code is below.
import json
from airflow.models import Connection

c = Connection(
    conn_id='emr_default1',
    conn_type='Elastic MapReduce',
    extra=json.dumps({
        "Name": "Data Spark",
        "LogUri": "s3://aws-logs-55-us-west-1/elasticmap/",
        "ReleaseLabel": "emr-6.5.0",
        "Instances": {
            "Ec2KeyName": "tester",
            "Ec2SubnetId": "subnet-0d62",
            "InstanceGroups": [
                {"Name": "Master nodes", "Market": "ON_DEMAND", "InstanceRole": "MASTER", "InstanceType": "m3.xlarge", "InstanceCount": 1},
                {"Name": "Core nodes", "Market": "ON_DEMAND", "InstanceRole": "CORE", "InstanceType": "m3.xlarge", "InstanceCount": 0}],
            "TerminationProtected": False,
            "KeepJobFlowAliveWhenNoSteps": False},
        "Applications": [{"Name": "Spark"}],
        "Configurations": [{
            "Classification": "core-site",
            "Properties": {"io.compression.codec.lzo.class": "", "io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec"},
            "Configurations": []}],
        "VisibleToAllUsers": True,
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
        "Tags": [{"Key": "app", "Value": "analytics"}, {"Key": "environment", "Value": "development"}]}),
)
print(f"AIRFLOW_CONN_{c.conn_id.upper()}='{c.get_uri()}'")
and I am using this connection in the EMR creation:
cluster_creator = EmrCreateJobFlowOperator(
    task_id='create_job_flow',
    emr_conn_id='emr_default1',
    job_flow_overrides=JOB_FLOW_OVERRIDES,
)
But it is not creating this connection. Can someone please tell me how to create emr_default and use this connection?
Error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Thanks,
Xi
If we look at an example DAG in Airflow we see (Graph View):
What determines the positions of tasks also_run_this and this_will_skip? Notice these two tasks don't have any connecting lines leading into them, which means they could be placed on the same layer (the first vertical set of tasks) as runme_0, runme_1 and runme_2 (using my obviously incorrect assumptions about the DAG).
Is it their runtime that places them in the same layer as run_after_loop?
I am looking at the detailed task data for this DAG and I don't see anything that distinguishes also_run_this and this_will_skip from runme_0 in terms of position.
Here is runme_0:
{
"class_ref": {
"class_name": "BashOperator",
"module_path": "airflow.operators.bash"
},
"depends_on_past": false,
"downstream_task_ids": ["run_after_loop"],
"end_date": null,
"execution_timeout": null,
"extra_links": [],
"owner": "airflow",
"pool": "default_pool",
"pool_slots": 1,
"priority_weight": 1,
"queue": "default",
"retries": 0,
"retry_delay": {
"__type": "TimeDelta",
"days": 0,
"microseconds": 0,
"seconds": 300
},
"retry_exponential_backoff": false,
"start_date": "2021-06-17T00:00:00+00:00",
"task_id": "runme_0",
"template_fields": ["bash_command", "env"],
"trigger_rule": "all_success",
"ui_color": "#f0ede4",
"ui_fgcolor": "#000",
"wait_for_downstream": false,
"weight_rule": "downstream"
}
And here is also_run_this:
{
"class_ref": {
"class_name": "BashOperator",
"module_path": "airflow.operators.bash"
},
"depends_on_past": false,
"downstream_task_ids": ["run_this_last"],
"end_date": null,
"execution_timeout": null,
"extra_links": [],
"owner": "airflow",
"pool": "default_pool",
"pool_slots": 1,
"priority_weight": 1,
"queue": "default",
"retries": 0,
"retry_delay": {
"__type": "TimeDelta",
"days": 0,
"microseconds": 0,
"seconds": 300
},
"retry_exponential_backoff": false,
"start_date": "2021-06-17T00:00:00+00:00",
"task_id": "also_run_this",
"template_fields": ["bash_command", "env"],
"trigger_rule": "all_success",
"ui_color": "#f0ede4",
"ui_fgcolor": "#000",
"wait_for_downstream": false,
"weight_rule": "downstream"
}
It would make sense if the layer were based on parallelism (all tasks in the same vertical layer run in parallel), but this would require some thresholding of the run times, and I don't see any such data available in the DAG or task information.
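For reference, here is my reconstruction of the wiring implied by the downstream_task_ids above (a sketch only; the edges marked "assumed" are not in the dumps I posted, they're my guess at the rest of the example DAG):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG("example_bash_operator_sketch", start_date=datetime(2021, 6, 17), schedule_interval=None) as dag:
    # Tasks with no upstream dependencies: the three runme_* tasks plus
    # also_run_this and this_will_skip.
    runme = [BashOperator(task_id=f"runme_{i}", bash_command="echo run") for i in range(3)]
    run_after_loop = BashOperator(task_id="run_after_loop", bash_command="echo run")
    also_run_this = BashOperator(task_id="also_run_this", bash_command="echo run")
    this_will_skip = BashOperator(task_id="this_will_skip", bash_command="echo run")
    run_this_last = BashOperator(task_id="run_this_last", bash_command="echo run")

    runme >> run_after_loop          # from runme_*'s downstream_task_ids
    run_after_loop >> run_this_last  # assumed
    also_run_this >> run_this_last   # from also_run_this's downstream_task_ids
    this_will_skip >> run_this_last  # assumed

Nothing in this wiring forces also_run_this or this_will_skip into the second column; as far as I can tell they end up next to run_after_loop purely because of how the Graph View lays out nodes.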
In fact, looking at the Tree View, it appears to show runme_0, runme_1, runme_2, also_run_this and this_will_skip all running at the same time:
As per #bruno-uy's comment, it appears the Graph View has a UI "problem". Definitely not very intuitive.
After checking that runme_0, runme_1, runme_2, also_run_this and this_will_skip all run at the same time, we can say it is a UI "problem" that they're not shown in the same "layer". Airflow doesn't have a "layer" concept, so it doesn't guarantee that tasks starting at the same time are aligned vertically.
It could be a good improvement for Airflow, or they could simply add another diagram type, like the Sankey diagram you mentioned.
I am trying to fetch a report of line items; it works fine from the UI but halts with an error via the API. The following is the reportQuery:
{'reportQuery': {
'dimensions': [
'DATE',
'LINE_ITEM_NAME',
'LINE_ITEM_TYPE',
'CREATIVE_SIZE_DELIVERED'
],
'adUnitView': 'TOP_LEVEL',
'columns': [
'TOTAL_LINE_ITEM_LEVEL_IMPRESSIONS',
'TOTAL_LINE_ITEM_LEVEL_CLICKS',
'TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE'
],
'dimensionAttributes': [
'LINE_ITEM_FREQUENCY_CAP',
'LINE_ITEM_START_DATE_TIME',
'LINE_ITEM_END_DATE_TIME',
'LINE_ITEM_COST_TYPE',
'LINE_ITEM_COST_PER_UNIT',
'LINE_ITEM_SPONSORSHIP_GOAL_PERCENTAGE',
'LINE_ITEM_LIFETIME_IMPRESSIONS'
],
'customFieldIds': [],
'contentMetadataKeyHierarchyCustomTargetingKeyIds': [],
'startDate': {
'year': 2018,
'month': 1,
'day': 1
},
'endDate': {
'year': 2018,
'month': 1,
'day': 2
},
'dateRangeType': 'CUSTOM_DATE',
'statement': None,
'includeZeroSalesRows': False,
'adxReportCurrency': None,
'timeZoneType': 'PUBLISHER'
}}
The above query throws the following error when tried with the API:
Error summary: {'faultMessage': "[ReportError.COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS # columns; trigger:'TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE']", 'requestId': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'responseTime': '98', 'serviceName': 'ReportService', 'methodName': 'runReportJob'}
[ReportError.COLUMNS_NOT_SUPPORTED_FOR_REQUESTED_DIMENSIONS # columns; trigger:'TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE']
400 Syntax error: Expected ")" or "," but got identifier "TOTAL_LINE_ITEM_LEVEL_ALL_REVENUE" at [1:354]
Did I miss anything? Any ideas on this issue?
Thanks!
This issue was solved by adding the dimension "Native ad format name".
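For anyone hitting the same error, the adjusted dimensions list would presumably look like the sketch below. Note that 'NATIVE_AD_FORMAT_NAME' is my assumption for the enum behind "Native ad format name"; check the Dimension reference for the API version you use.

report_query = {
    'reportQuery': {
        'dimensions': [
            'DATE',
            'LINE_ITEM_NAME',
            'LINE_ITEM_TYPE',
            'CREATIVE_SIZE_DELIVERED',
            'NATIVE_AD_FORMAT_NAME',  # assumed enum name for "Native ad format name"
        ],
        # ... the columns, dimensionAttributes, dates, etc. stay the same as above ...
    }
}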
I hope you'll help me to find out what's wrong with my configuration.
I have two ZigBee nodes, one connected via USB to my Mac and one connected via the TX/RX pins to a Raspberry Pi 3.
I wrote two scripts, one that sends XBee API frame packets (from the Mac) and one that reads packets (on the Pi). Both scripts are based on the python-xbee library.
The scripts are the following. On the Mac:
import serial
from xbee import XBee, ZigBee
serial_port = serial.Serial('/dev/tty.usbserial-A5025UGJ', 9600)
xbee = ZigBee(serial_port, escaped=True)
# coordinator = 00 13 A2 00 40 8B B1 5A
while True:
try:
        # Send a TX (transmit request) frame, addressed to the coordinator
xbee.send('tx',frame_id='A', dest_addr_long='\x00\x13\xA2\x00\x40\x8B\xB1\x5A', data='test')
parameter = xbee.wait_read_frame()
print 'parameter='
print parameter
except KeyboardInterrupt:
break
serial_port.close()
On the Pi:
import serial
from xbee import XBee, ZigBee
serial_port = serial.Serial('/dev/serial0', 9600)
xbee = ZigBee(serial_port, escaped=True)
while True:
try:
        # Block until the next frame arrives
parameter = xbee.wait_read_frame()
print 'parameter='
print parameter
except KeyboardInterrupt:
break
serial_port.close()
The output of the first script is the following (the sender):
parameter=
{'retries': '\x00', 'frame_id': 'A', 'deliver_status': '\x00',
'dest_addr': '\x00\x00', 'discover_status': '\x00', 'id': 'tx_status'}
The output of the second script is the following (the receiver):
parameter= {'source_addr_long': '\x00\x13\xa2\x00#\x8b\xb1L',
'rf_data': 'test', 'source_addr': '\xa3\x19', 'id': 'rx',
'options': '\x01'}
Now, if I start Node-RED 0.17.3 and use the "serial in" node, connected to a debug output node, I cannot see anything incoming when the input is split on the newline char "\n". The port is the same as in the script (/dev/serial0).
[
{
"id": "e6aa5379.9fd8c",
"type": "debug",
"z": "35e84ae.5ae88b6",
"name": "",
"active": true,
"console": "false",
"complete": "true",
"x": 432.5,
"y": 213,
"wires": []
},
{
"id": "63563843.bba178",
"type": "serial in",
"z": "35e84ae.5ae88b6",
"name": "",
"serial": "fbf0b4fa.9b2918",
"x": 209.5,
"y": 201,
"wires": [
[
"e6aa5379.9fd8c"
]
]
},
{
"id": "fbf0b4fa.9b2918",
"type": "serial-port",
"z": "",
"serialport": "/dev/serial0",
"serialbaud": "9600",
"databits": "8",
"parity": "none",
"stopbits": "1",
"newline": "\\n",
"bin": "false",
"out": "char",
"addchar": false
}
]
If I change the configuration of the "serial in" node to split "after a timeout of 5000 ms" and deliver "binary buffers", this is the result in the debug view:
[126,0,125,49,144,0,125,51,162,0,64,139,177,76,163,25,1,112,114,111,118,97,13]
Does anyone know the correct way to split the input into XBee API frames?
I don't know anything about Node-RED, but you need to parse the stream of bytes to extract the frames. It would require more work to escape the data going in and out, but I think you can use API mode 2 (ATAP = 2), where the start-of-frame byte (0x7E) is escaped when it appears inside a frame, so you could potentially split on that byte.
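As a rough illustration of that idea in plain Python (not Node-RED; it assumes every 0x7E on the wire starts a new frame, which only holds in API mode 2 because in-frame 0x7E bytes are escaped as 0x7D 0x5E):

FRAME_DELIM = 0x7E  # XBee API start-of-frame byte

def split_frames(stream: bytes):
    """Split an escaped (ATAP=2) byte stream into raw frames, delimiter included."""
    frames, current = [], bytearray()
    for b in stream:
        if b == FRAME_DELIM and current:
            frames.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        frames.append(bytes(current))
    return frames

# The binary buffer from the debug output above happens to be one complete rx frame:
stream = bytes([126, 0, 125, 49, 144, 0, 125, 51, 162, 0, 64, 139, 177, 76,
                163, 25, 1, 112, 114, 111, 118, 97, 13])
print(split_frames(stream))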
I have been working on a Tableau + Vertica solution.
I have installed the relevant Vertica ODBC driver from the Vertica-provided packages.
While going through the tdeserver.txt log file, I stumbled upon an error log entry like the one below:
{
"ts": "2015-12-16T21:42:41.568",
"pid": 51081,
"tid": "23d247",
"sev": "warn",
"req": "-",
"sess": "-",
"site": "{759FD0DA-A1AB-4092-AAD3-36DA0923D151}",
"user": "-",
"k": "database-error",
"v": {
"retcode-desc": "SQL_ERROR",
"retcode": -1,
"protocol": "7fc6730d6000",
"line": 2418,
"file": "/Volumes/build/builds/tableau-9-2/tableau-9-2.15.1201.0018/modules/connectors/tabmixins/main/db/ODBCProtocolImpl.cpp",
"error-records": [{
"error-record": 1,
"error-desc": "[Vertica][ODBC] (11430) Catalog name not supported.",
"sql-state": "HYC00",
"sql-state-desc": "SQLSTATE_API_OPT_FEATURE_NOT_IMPL_ODBC3x",
"native-error": 11430
}]
}
}
This piece of log is repeated several times. The rest of the setup runs smoothly, as expected.
Below are the attributes from ~/Library/ODBC/odbc.ini:
[ODBC]
Trace = 1
TraceAutoStop = 0
TraceFile = ~/log
TraceLibrary =
ThreePartNaming=1
What am I missing here?