Stream creation with analytic module in Spring XD fails

I am working with the Spring XD framework, following the Analytics section of the Spring XD guide, and I want to deploy my prediction model for live streaming data.
I built an iris classification model using naive Bayes in R and stored the PMML file on my desktop under Ubuntu 14.04 LTS. This is my stream definition:
stream create test --definition "mqtt --url='tcp://localhost:1883' --topics='irisPayload' | analytic-pmml --location='/home/andy/Desktop/iris-flower-naive-bayes.pmml.xml' --inputFieldMapping='sepalLength:Sepal.Length,sepalWidth:Sepal.Width,petalLength:Petal.Length,petalWidth:Petal.Width' --outputFieldMapping='Predicted_Species:predictedSpecies' | file" --deploy
Error: the analytic-pmml processor module is not found.
I think my stream definition is wrong, as I was unable to find a stream definition for such a case in the Spring XD guide.
I am running Spring XD in single-node mode on my local machine. Instead of HTTP, I want to send my data using the MQTT pub/sub protocol; my MQTT broker is up and running. Any sort of help is appreciated, thanks a lot.

IIRC, the analytic-pmml module is not distributed with XD out of the box because of licensing issues.
You can find it on GitHub here and build/install it yourself.
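Once the module is installed, you can sanity-check the MQTT side of the stream on its own. A minimal sketch using the paho-mqtt Python client (the package and the sample field values are assumptions; the broker host/port and topic come from the stream definition above):

import json
import paho.mqtt.publish as publish

# Made-up sample iris measurement; the keys match the stream's
# --inputFieldMapping source names.
sample = {
    "sepalLength": 5.1,
    "sepalWidth": 3.5,
    "petalLength": 1.4,
    "petalWidth": 0.2,
}

# Publish one JSON payload to the topic the mqtt source listens on.
publish.single("irisPayload", json.dumps(sample),
               hostname="localhost", port=1883)

If the file sink receives a prediction for this payload, the stream wiring is fine and any remaining problem is in the module installation.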

Related

how to work with kafka-kusto-sink and debug

I am ingesting data into a Kafka cluster that should send the data on to an ADX (Kusto) DB using kafka-sink-azure-kusto.
I am successfully ingesting data into the Kafka cluster, but it is not transferring the data to the Kusto DB. How can I debug this? Are there any logs I can check?
I have tried checking the broker log; there are no errors there.
Ref: https://github.com/Azure/kafka-sink-azure-kusto/blob/master/README.md
Could you please provide more information about how you are running Kafka and how you set up the connector?
Debugging steps would be:
The broker logs should mention that the connector was picked up properly; did you see that line in the logs?
The connector logs should show more about what is actually going on under the hood; you may see errors there. Look at /var/log/connect-distributed.log.
Try to ingest data via another method, like one of the SDKs.
Try running the setup according to the steps detailed under "deploy" in the README.
Update: more info about connector setup in general can be found in this SO question: Kafka connect cluster setup or launching connect workers.
Also, Confluent has some helpful docs: https://docs.confluent.io/current/connect/userguide.html
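If the connector runs in distributed mode, the Kafka Connect REST API (port 8083 by default) reports connector and task state, which is often the fastest way to see a failure trace. A minimal sketch; the connector name "kusto-sink" is an assumption, replace it with whatever name you registered:

import json
import urllib.request

# Query Connect's status endpoint for one connector. The host/port and
# connector name below are assumptions.
url = "http://localhost:8083/connectors/kusto-sink/status"
with urllib.request.urlopen(url) as resp:
    status = json.load(resp)

# Overall connector state, e.g. RUNNING or FAILED.
print(status["connector"]["state"])

# Per-task state; on failure the "trace" field holds the stack trace.
for task in status["tasks"]:
    print(task["id"], task["state"], task.get("trace", ""))

A FAILED task with a trace usually points directly at the misconfiguration (credentials, table mapping, etc.).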

R script on premise data gateway

I have created a report that reads data from an OData source, SQL Server, and R.
The R script reads its data from an OData source.
Refresh works fine on my computer.
I want to share my work with a colleague, so I published the report and tried to use our On-Premises Data Gateway, but I keep getting an error that the data gateway is not configured correctly. If I use the personal gateway on my computer, everything works fine.
Any idea why On Premise Gateway is not working?
I'm happy to stand corrected, but it's my understanding that R Scripts are not a supported data source for an Enterprise On Premise Data Gateway.
I imagine Microsoft are worried about taking on the intense compute demands generated by R within their cloud. The Personal Gateway keeps your machine doing all the R processing.

use mavlink without qgroundcontrol

I'm trying to connect my PX4Flow sensor to a Raspberry Pi. It seems that nearly everybody uses QGroundControl to access and control it, but as I'd like to integrate the sensor into a bigger program, I'd like to control it with some simple self-written Python code, if possible.
My aim is to:
access the camera (to measure the speed - later)
get gyroscope values
I don't need the ultrasonic sensor.
I found out that I can use MAVLink for the communication between the PX4Flow sensor and the Raspberry Pi. I cloned the Git repository and followed the steps on https://github.com/mavlink/mavlink up to the library-generation step (python -m mavgenerate), which produced a new Python file. I don't know whether this is correct, and I don't know what to do with that Python file; no other (header) files are copied or generated. How do I go on? How do I use the library? How do I even test the connection?
If I understand you correctly, you want to build a module that communicates with the PX4Flow.
I have some experience building a ground control station for ArduPilot, and I think the procedure is roughly the same:
Generate the proper MAVLink library, which you have already done with mavgenerate, and read some guidance on the MAVLink communication procedure.
Read the source code of the PX4Flow communication module, https://github.com/PX4/Flow/blob/master/src/modules/flow/communication.c, which shows what kinds of messages are sent to the client side (e.g. your communication module).
Start writing the module code that communicates with the PX4Flow. You may need to start with the HEARTBEAT message to establish the connection between your module and the PX4Flow. Note that you can always receive HEARTBEAT messages from the PX4Flow, so you can start by decoding those; see the sketch after this list.
Implement your other functionality.
You can read the source code of QGroundControl during steps 3 and 4; make sure to find the right module in its repo.
My communication module is built in JavaScript, https://github.com/kvenux/nodegcs, if it helps.
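As a concrete starting point, a minimal sketch using the pymavlink library (pip install pymavlink); the serial device path and baud rate are assumptions, adjust them to your wiring:

from pymavlink import mavutil

# Open the serial link to the PX4Flow. Device path and baud rate are
# assumptions; a USB-attached sensor often shows up as /dev/ttyACM0.
conn = mavutil.mavlink_connection("/dev/ttyACM0", baud=115200)

# Block until the first HEARTBEAT arrives, proving the link is alive.
conn.wait_heartbeat()
print("Heartbeat from system %d, component %d"
      % (conn.target_system, conn.target_component))

# Read optical-flow messages as they arrive and print a few fields.
while True:
    msg = conn.recv_match(type="OPTICAL_FLOW", blocking=True)
    print(msg.flow_x, msg.flow_y, msg.quality)

Swapping the type filter (or dropping it) lets you inspect whatever other messages the sensor sends, which is also a quick way to test the connection.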

Monitoring Integration points

Our company is working on integrating Guidewire (a claims processing system) into the existing claims system. We will be executing performance tests on the integrated system shortly. I wanted to know if there is some way to monitor the integration points specific to Guidewire.
The systems are connected through web services. We have access to LoadRunner and SiteScope, and are comfortable using other open source tools as well.
I realize monitoring WSDL files is an option; could you suggest additional methods to monitor the integration points?
Look at the architecture of Guidewire: you have OS monitoring points and you have application monitoring points. The OS side is straightforward, using SiteScope, SNMP (with SiteScope or LoadRunner), Hyperic, native OS tools, or a tool like Splunk.
You likely have a database involved: that monitoring case is well known and understood.
Monitoring the services? Ask the application experts inside your organization what they look at to determine whether the application is healthy and running well. You might end up implementing a set of terminal users (RTE) with datapoints, log monitoring through SiteScope, or custom monitors scheduled to run on the host, piping their output through sed into a standard form that can be imported into Analysis at the end of the test; a sketch of such a monitor follows below.
Think architecturally. Decompose each host in the stack into OS and services. Map your known monitors to the hosts and layers. Where you run into gaps, grab the application experts and have them write down the monitors they use (they will have more faith in your results and analysis as a result).
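For illustration, a minimal sketch of such a host-side custom monitor (Linux-specific; the output file name and sampling interval are assumptions) that appends timestamped CSV rows suitable for post-test import:

import time

OUT = "host_load.csv"   # assumed output path
INTERVAL_S = 15         # assumed sampling interval

with open(OUT, "a") as f:
    while True:
        # /proc/loadavg's first field is the 1-minute load average.
        with open("/proc/loadavg") as p:
            load1 = p.read().split()[0]
        # One row per sample: epoch seconds, metric value.
        f.write("%d,%s\n" % (int(time.time()), load1))
        f.flush()
        time.sleep(INTERVAL_S)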

How to test latency of Flex messages

I have a system where clients connect via HTTP streaming channels and use the Producer and Consumer classes to dispatch and receive messages. I need to test the latency of messages in a way that adequately simulates real-world usage while the server is under load. I have three ideas for how this may be accomplished. Has anyone tried and succeeded or failed with these methods?
Use an out-of-the-box test system like JMeter. I haven't found any that support streaming yet.
Use Selenium and FlexMonkey on BrowserMob to simulate actual users.
Use a client API (possibly from BlazeDS) that supports streaming and Flex messaging to write a custom testing framework. I haven't found a client API that supports streaming yet; any language would be OK.
There is a tool for testing the performance of BlazeDS/LCDS, created by Adobe. Take a look here: there is a PDF file called "Adobe LiveCycle Data Services 3 ES2 Performance Brief" in the PDF portfolio, with a couple of attachments.
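If you go the custom-framework route (option 3), the usual technique is to embed a send timestamp in each message and compute latency on receipt. A transport-agnostic sketch; the helper functions are hypothetical stand-ins for your Producer/Consumer calls, and measuring one-way latency across machines requires synchronized clocks:

import json
import time

def make_message(body):
    # Attach the send time so the receiver can compute latency.
    return json.dumps({"sent_at": time.time(), "body": body})

def on_message(raw):
    # Receiver-side callback: latency = receive time - embedded send time.
    msg = json.loads(raw)
    latency_ms = (time.time() - msg["sent_at"]) * 1000.0
    print("latency: %.1f ms" % latency_ms)

# Loopback usage example; in a real test, send over the streaming channel.
on_message(make_message("ping"))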
