Error in Datadrift Monitor in Azure ML - Analyze existing data - azure-machine-learning-studio

I have created a monitor using the Data Drift detector in Azure ML.
When I monitor it through the studio interface (Data → Data monitor),
it shows the error below:
"Failed to load metrics.
Service is temporarily unavailable. Please try again later."
I am unable to see the metrics visualization for analyzing data drift.
Even in a notebook, the run stays in the Queued state for a long time and does not display metrics.
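While waiting on the service, one generic mitigation is to poll the monitor run's status with a backoff instead of a single long wait. A minimal sketch, where `fetch_status` is a stand-in for whatever SDK call reports the run's state (a hypothetical hook, not a specific Azure ML API):

```python
import time

def wait_for(fetch_status, done_states=("Completed", "Failed"),
             max_attempts=6, initial_delay=1.0):
    """Poll fetch_status() with exponential backoff until a terminal state."""
    delay = initial_delay
    for _ in range(max_attempts):
        state = fetch_status()
        if state in done_states:
            return state
        time.sleep(delay)
        delay *= 2  # back off between polls
    raise TimeoutError("monitor run did not reach a terminal state")

# Hypothetical usage: wait_for(lambda: run.get_status())
```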
Data Drift Monitor

Related

Intermittently getting connect error: Function not implemented (38) when connecting with gatttool

I'm working on a project where I need to get data from a BLE environmental sensor onto a Raspberry Pi and send it to a server at regular intervals. The more often I can send, the better. I found a script online that works with the particular type of sensor that I'm working with, but it only reads the data once and doesn't update unless some device connects and disconnects from the sensor.
So, for example, if I ran the script twice in a row it would contain the same data, but if I ran the script once, then connected and disconnected from the sensor with my phone, then ran the script again, it would have new, updated data. Now, I'm trying to make this fully automated and don't want to have to keep connecting and disconnecting with my phone every time to get new data, so I've found that running gatttool and connecting has the same effect as if I were to connect and disconnect with my phone. So I've come up with a somewhat clunky automation that all runs through crontab:
1. Run a script that connects and immediately disconnects from the sensor using gatttool.
2. Run the data-collection script and send the data to the server.
3. Repeat as soon as possible.
Step 3 is where the issue lies. I can't run this series as often as I want. The ideal interval is to collect and send data every 30 seconds, but for some reason I intermittently get an error from gatttool:
connect error: Function not implemented (38)
I get this error on every iteration of the cron schedule until I set the interval so that the scripts only run every 2 minutes, and even then I'm intermittently getting the error. I need the data to be consistent and definitely not as sparse as 2 minutes apart. 1 minute would be the absolute max interval I can afford to have the data sent.
How can I get rid of this error?
My script to connect and disconnect from the device:
import subprocess
import time

import pexpect

print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))

# Scan briefly so the adapter picks up the sensor's advertisements.
scan = pexpect.spawn("sudo hcitool lescan")
time.sleep(5)
print(scan.terminate())

# Connect and immediately disconnect to reset the sensor's advertising.
child = pexpect.spawn("sudo gatttool -i hci0 -b E2:D8:9D:FF:72:A2 -I -t random")
child.sendline("connect")
child.expect("Connection successful", timeout=7)
print("connected!")
child.sendline("disconnect")
child.sendline("quit")
child.wait()

# hciconfig is a shell command, not a gatttool command, so run it directly
# instead of sending it to the (already exited) gatttool session.
subprocess.run(["sudo", "hciconfig", "hci0", "down"], check=True)
subprocess.run(["sudo", "hciconfig", "hci0", "up"], check=True)
print("done!")
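Since the connect error is intermittent, one way to make the connect step above more robust is to wrap it in a generic retry loop rather than failing the whole cron iteration. A sketch with arbitrary retry count and delay (`do_connect_cycle` is a hypothetical wrapper around the gatttool session above):

```python
import time

def retry(action, attempts=3, delay=2.0, exceptions=(Exception,)):
    """Call action(); retry up to `attempts` times, sleeping between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Hypothetical usage around the connect step, catching pexpect timeouts:
# retry(do_connect_cycle, attempts=3, exceptions=(pexpect.TIMEOUT, pexpect.EOF))
```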
The script that you linked to at the start of your question does not seem to be connecting to the sensor. My understanding of their script is that it is scanning for the advertising data from the sensor, which contains the measurement information. This is a common thing to do, and there are many different types of beacons you can get.
I speculate that you are seeing more frequent measurements when you connect and disconnect because that is resetting the advertising, as the sensor will not be advertising while you are connected.
On the front page of the repo you linked to, there is some information about how to change the measurement interval.
You said you wanted this to be every 30 seconds, so that would be a value of 1E that you would need to write to that characteristic.
They suggest an app to do this with. I have used that app, and there is nothing special about the particular app they point you to. If you want alternatives, I find the nRF Connect app very good for these kinds of activities. If you have the Chrome or Chromium browser installed on your PC or Raspberry Pi, then you can do it from there by entering the URL:
chrome://bluetooth-internals/#devices
Press Start Scan -> Inspect the sensor device -> click on the 0C4C3010-7700-46F4-AA96-D5E974E32A54 service -> click on the 0C4C3011-7700-46F4-AA96-D5E974E32A54 characteristic -> enter the value (1E) -> press the Write button.
This should allow you to use their original script with the frequency of measurement you want.
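For what it's worth, the 1E value is just the interval in seconds (30) written as a hexadecimal byte; a quick sanity check in Python:

```python
# 30 seconds expressed as the hex byte value written to the characteristic.
interval_seconds = 30
hex_value = format(interval_seconds, "02X")
print(hex_value)  # prints "1E"
```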

how to work with kafka-kusto-sink and debug

I am ingesting data into a Kafka cluster that should send the data on to an ADX (Kusto) DB using kafka-sink-azure-kusto.
I am successfully ingesting data into the Kafka cluster, but it is not transferring the data to the Kusto DB. How do I debug this? Are there any logs I can verify?
I have checked the broker log; there are no errors there.
ref: https://github.com/Azure/kafka-sink-azure-kusto/blob/master/README.md
Could you please provide more information about how you are running Kafka and how you set up the connector?
Debugging steps would be:
1. The broker logs should mention that the connector was picked up properly; did you see that line in the logs?
2. The connector logs should show more info about what is actually going on under the hood; maybe you will see some errors there: /var/log/connect-distributed.log
3. Try to ingest data via another method, like one of the SDKs.
4. Try running the setup according to the steps detailed under deploy.
Update: more info about connector setup in general can be found in this SO question: Kafka connect cluster setup or launching connect workers.
Also, Confluent has some helpful docs: https://docs.confluent.io/current/connect/userguide.html
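The Kafka Connect REST interface (default port 8083) also reports connector and task health, which is often faster than digging through logs. A sketch that parses the `/connectors/<name>/status` response; `kusto-sink` is a placeholder for whatever name you registered the connector under:

```python
import json
from urllib.request import urlopen

def connector_state(status_json):
    """Extract connector and per-task states from a /connectors/<name>/status response."""
    status = json.loads(status_json)
    tasks = [(t["id"], t["state"]) for t in status.get("tasks", [])]
    return status["connector"]["state"], tasks

# Hypothetical usage against a local Connect worker:
# with urlopen("http://localhost:8083/connectors/kusto-sink/status") as resp:
#     print(connector_state(resp.read()))
```

A task in the FAILED state will also carry a "trace" field with the underlying exception, which usually pinpoints the ingestion problem.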

warning google cloud compute instance over utilized

I recently installed a Bitnami WordPress Network stack on Google Cloud Compute Engine.
I keep getting a warning saying that it is over-utilised; however, when I view the CPU and disk usage statistics, I cannot see how this is possible. Both statistics are usually very low, only spiking when I am administering websites (i.e. importing large files, backups, etc.).
For example, as I post this message right now, usage for the
Is this just a marketing ploy to get me to upgrade my instance?
What happens when we over-utilise anyway? (What are the symptoms? My WordPress network appears to me to be functioning flawlessly.)
Please see images of my disk and CPU usage over the last 7 days:
[CPU utilisation statistics 7 days][1]
[disk operations 7 days][2]
[Network Packets statistics 7 days][3]
[1]: https://i.stack.imgur.com/iZa0L.png
[2]: https://i.stack.imgur.com/lUOno.png
[3]: https://i.stack.imgur.com/SnbHq.jpg
You need to install the Monitoring Agent in order to get accurate recommendations.
If the monitoring agent is installed and running on a VM instance, the CPU and memory metrics collected by the agent are automatically used to compute sizing recommendations. The agent metrics provided by the monitoring agent give better insights into resource utilization of the instance than the default Compute Engine metrics. This allows the recommendation engine to estimate resource requirements better and make more precise recommendations.
Read: https://cloud.google.com/compute/docs/instances/apply-sizing-recommendations-for-instances?hl=en_GB&_ga=2.217293398.-1509163014.1517671366#using_the_monitoring_agent_for_more_precise_recommendations
How to install the Monitoring Agent to get accurate sizing recommendations:
https://cloud.google.com/monitoring/agent/install-agent

stream creation with analytic module in spring-xd get failed

I am working with the Spring XD framework. I am following the Spring XD guide's Analytics tab and want to deploy my prediction model on live streaming data.
I built an iris classification model using naive Bayes in R and stored the PMML file on my desktop on Ubuntu 14.04 LTS. This is my stream definition:
stream create test --definition "mqtt --url='tcp://localhost:1883' --topics='irisPayload' | analytic-pmml --location='/home/andy/Desktop/iris-flower-naive-bayes.pmml.xml' --inputFieldMapping='sepalLength:Sepal.Length,sepalWidth:Sepal.Width,petalLength:Petal.Length,petalWidth:Petal.Width' --outputFieldMapping='Predicted_Species:predictedSpecies' | file" --deploy
Error: the analytic-pmml processor module is not found.
I think my stream definition is wrong, as I was unable to find a stream definition for such a case in the Spring XD guide.
I am running Spring XD in single-node mode on my local machine. Instead of HTTP, I want to send my data using the MQTT pub/sub protocol. My MQTT broker is up and running. Any help is appreciated; thanks a lot.
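For reference, the field names on the left-hand side of the stream's inputFieldMapping (sepalLength, sepalWidth, petalLength, petalWidth) are what the incoming MQTT message must carry. A hypothetical JSON payload published to the irisPayload topic could look like this:

```python
import json

# Hypothetical sample: keys match the stream's inputFieldMapping source fields.
payload = json.dumps({
    "sepalLength": 5.1,
    "sepalWidth": 3.5,
    "petalLength": 1.4,
    "petalWidth": 0.2,
})
print(payload)
```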
IIRC, the analytic-pmml module is not distributed with XD out of the box because of licensing issues.
You can find it on GitHub here and build/install it yourself.

onDiscoverCharacteristicResults not called because of android.os.DeadObjectException

Using the sample_apk_icsActivity app supplied in the Motorola ICS R2 add-on, I am able to successfully connect to my BLE peripheral running the HRM profile. I then create a new BluetoothGattService object, at which time discovery of characteristics appears to begin.
Using a packet sniffer, everything appears "normal". At the end of the characteristic discovery process, when I'd expect a callback through my IBluetoothGattProfile.Stub, I see a log message from the underlying BluetoothService reporting a DeadObjectException. From the prior log messages, it would appear the service did find some characteristics and was preparing to invoke my callback.
Again, I've been using the Motorola sample app "as is".
Thanks.
You may have a concurrency issue. Where are you getting the Gatt object from? If you pass it to another thread, it may have been destroyed in its original one.
