Get notified about changes via SSE on Node-RED (server-sent-events)

I'm trying to create a digital twin of some sensors by putting together the Eclipse Ditto and Node-RED frameworks. I'd like to be notified when the "thing" is updated via SSE, so I followed the instructions at https://www.eclipse.org/ditto/httpapi-sse.html, but when I invoke the endpoint like this:
curl -X GET -H 'Accept: text/event-stream' -H 'Authorization: Basic ZGl0dG86ZGl0dG8=' -i 'http://localhost:8080/api/2/things?ids=smart:factory_lwb'
the request gets stuck forever. I tried with and without the ID, but the result is the same. Could someone help me, please?
Thanks a lot in advance.

I'm not sure what you mean by "the request gets stuck forever", but if it is what I think you mean, this is exactly the behaviour that is expected.
When you open the connection for an event stream the connection is expected to stay open.
You will then receive events on this connection.
You might want to add the -N flag to curl so that it prints the data the stream receives immediately, instead of buffering it.
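For example, the request from the question with -N added:
curl -N -X GET -H 'Accept: text/event-stream' -H 'Authorization: Basic ZGl0dG86ZGl0dG8=' -i 'http://localhost:8080/api/2/things?ids=smart:factory_lwb'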
For example, open an SSE stream and then create a new thing. You will then see the created thing in your stream.
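To try it, create a thing from a second terminal while the stream is open, e.g. via the Ditto HTTP API (a sketch; the empty JSON object creates a minimal thing with the ID from the question):
curl -X PUT -H 'Content-Type: application/json' -H 'Authorization: Basic ZGl0dG86ZGl0dG8=' -d '{}' 'http://localhost:8080/api/2/things/smart:factory_lwb'
The open SSE connection should then emit an event for smart:factory_lwb.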
I hope I could help you, and thank you for supporting Eclipse Ditto by asking questions on Stack Overflow! :)

Related

How to do request-reply on a JetStream stream?

I made a Go example for the JetStream Walkthrough from the NATS docs: https://github.com/hnakamur/nats-stream-example/tree/2c834d7d967f024348fbaa478eae18e9749431ba.
As the next step in my experiments, I tried to make a Go example for the Request-Reply Walkthrough from the NATS docs, but on a JetStream stream instead of a non-JetStream stream.
My attempt is https://github.com/hnakamur/nats-stream-example/commit/149243b8bd30974a592061cd9d0c3a9b7f3f30fc.
However, as soon as I ran my request subcommand, I got replies like msg.Data={"stream":"my_stream2", "seq":112}, even though I had not run my reply subcommand.
Here are steps to reproduce:
$ nats-server -js
$ ./nats-stream-example stream-add --stream my_stream2 --subject foo2
$ ./nats-stream-example consumer-add --consumer pull_consumer2 --stream my_stream2
$ ./nats-stream-example request --subject foo2 --count 100
How can I do request-reply properly on a JetStream stream?
Answering my own question: those {"stream":...,"seq":...} replies come from the JetStream server itself, which sends its publish acknowledgement to the request's reply inbox, so the built-in inbox cannot carry the actual response. As a workaround, I made another subject for the reply and set that subject in the message header.
The receiver then gets the subject from the message header and publishes the reply to it.
https://github.com/hnakamur/nats-stream-example/commit/1406b14bf06cc390268ee7fa082ab55c49b99050
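The commit above is Go, but the pattern is small; here is a minimal sketch of the same workaround using the Python client nats-py (the "Reply-To" header name is an arbitrary choice for this sketch, while the subject and consumer names match the reproduction steps above):

import asyncio
import nats

async def main():
    nc = await nats.connect("nats://127.0.0.1:4222")
    js = nc.jetstream()

    # Requester: listen for the reply on a plain (non-JetStream) inbox subject.
    inbox = nc.new_inbox()
    sub = await nc.subscribe(inbox)

    # Publish into the stream, carrying the reply subject in a header instead
    # of the built-in reply field, which JetStream occupies with its ack.
    await js.publish("foo2", b"request payload", headers={"Reply-To": inbox})

    # Responder (normally a separate process): pull the message, read the
    # header, and publish the reply to the subject it names.
    psub = await js.pull_subscribe("foo2", durable="pull_consumer2")
    for msg in await psub.fetch(1, timeout=5):
        await nc.publish(msg.headers["Reply-To"], b"response")
        await msg.ack()

    # The requester now receives the actual reply, not the JetStream ack.
    reply = await sub.next_msg(timeout=5)
    print(reply.data)  # b'response'

    await nc.drain()

asyncio.run(main())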

Reading Incoming HTTP Response Bodies to CLI

There's a simple game that my friends and I play both in person and online. I developed a CLI that records our in-person games (I just type in each move), but now I want to use it to record our online games as well. All I need to do is pipe the HTTP response bodies being sent to my browser (Firefox) into my CLI. Unfortunately, I can't figure out how to do this.
Ideally, I'm looking for a Ubuntu package that I can run from the command line that will capture and return all HTTP response bodies from a specific endpoint. I've looked into tcpdump and some simple proxy servers, but I'm not sure they do what I want them to do.
Thanks for your help! Let me know if I need to provide any further information!
I used mitmproxy, as ZachChilders recommended in the comments. I found it somewhat difficult to set up, so I'll include the directions I followed to get it up and running:
1) Install mitmproxy.
2) Configure Firefox to proxy through it.
3) Create an addon to parse the body (a sketch follows below).
4) Stream data via Python to the CLI (TODO).
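For step 3, a minimal addon might look like the sketch below (capture_bodies.py and the TARGET substring are placeholders to adapt to the real endpoint):

# capture_bodies.py -- run with: mitmdump -q -s capture_bodies.py
from mitmproxy import http

# Placeholder: substring identifying the endpoint whose responses we want.
TARGET = "example.com/game/moves"

def response(flow: http.HTTPFlow) -> None:
    # Called once for every completed response passing through the proxy.
    if TARGET in flow.request.pretty_url:
        # Print the decoded body to stdout so it can be piped into the CLI,
        # e.g.: mitmdump -q -s capture_bodies.py | ./my-game-cli
        print(flow.response.get_text(), flush=True)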

Quectel BG96 MQTT publish error

I'm trying to publish my data to a ThingsBoard server. I use these AT commands:
AT+QIACT=1
OK
AT+QMTOPEN=1,"demo.thingsboard.io",1883
OK
AT+QMTCONN=1,"demo.thingsboard.io","MY_ACCESS_TOKEN",""
OK
AT+QMTPUB=1,0,0,0,"v1/devices/me/telemetry"
>{"temperature":35.00,"humidity":80.00} // MY_POST_DATA This line hanging my module
Every AT command responds with OK, but when I finally enter MY_POST_DATA the module gives no response and hangs on that command, and when I check ThingsBoard, the telemetry is never posted.
Please, can anyone help me fix this problem and publish to the MQTT server?
Step 1: Get hold of the official AT command documentation for the modem (Quectel BG96, I assume?). It should document how the AT+QMTPUB command behaves and what it expects; everything else is just guessing. The manufacturer should provide this documentation, and if not, you should demand to get it.
...
Step 873, when you have exhausted absolutely all possible ways of getting hold of the official AT command documentation for the modem: you can try my guess that the command behaves similarly to other commands that read arbitrary-length user data, most notably AT+CMGS (which sends SMS messages), and expects a Ctrl-Z (ASCII value 26) as an end-of-data indicator.
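If that guess is right, the exchange would look something like this, where <Ctrl-Z> stands for the single byte 0x1A typed after the payload:
AT+QMTPUB=1,0,0,0,"v1/devices/me/telemetry"
> {"temperature":35.00,"humidity":80.00}<Ctrl-Z>
OK
+QMTPUB: 1,0,0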
+QMTPUB: 1,0,0 simply means that the BG96 has successfully published and that your broker (ThingsBoard) has acknowledged publication of the message.
If you can't see data on the broker, check whether the topic you are publishing to is correct.
You may be publishing to another topic (or to a different path) than the one the server expects.
Ask ThingsBoard for help regarding the proper topic.

MarkLogic 8: Schedule XQuery extraction

I'm currently extracting data from MarkLogic 8.0.6 with XQuery queries, launched via the REST API.
Query in my file extract_data.xqy:
xdmp:save("toto.csv",let $nl := "
"
return
document {
for $data in collection("http://book/polar")
return ($data)
})
API call :
$ curl --anyauth --user ${MARKLOGIC_USERNAME}:${MARKLOGIC_PASSWORD} -X POST -i -d @extract_data.xqy \
  -H "Content-type: application/x-www-form-urlencoded" \
  -H "Accept: multipart/mixed; boundary=BOUNDARY" \
  $node:$port/v1/eval?database=$db_name
It works fine, but I'd like to schedule this extraction directly in MarkLogic and run it in the background, to avoid timeouts when the request takes too long to execute.
Is there a feature like that?
Regards,
Romain.
You can use the task scheduler to set up recurring script execution.
The timeout can be adjusted in the script with xdmp:set-request-time-limit.
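For example, the extraction script from the question could raise its own limit before saving (a sketch; the 3600-second value is arbitrary):
(: allow this export up to an hour before the request is timed out :)
xdmp:set-request-time-limit(3600),
xdmp:save("toto.csv",
  document {
    for $data in collection("http://book/polar")
    return $data
  })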
I would suggest you take a look at MLCP as well.
As suggested by Mads, a tool like CORB can help pull CSV data out of MarkLogic.
A schedule as suggested by Michael can trigger a periodic export, and save the output to disk, or push it elsewhere via HTTP. I'd look into running incremental exports in that case though, and I'd also suggest batching things up. In a large cluster, I'd even suggest chunking the export into batches per forest or per host on which content forests are attached. Scheduled tasks allow targeting specific hosts on which they should run.
You can also run an ad hoc export, particularly if you batch up the work using a tool like taskbot. If you combine it with its OPTIONS-SYNC-UPDATE mode, you can merge multiple batches back into one result file before emitting it, and get better performance than running single-threaded. Merging results doesn't scale endlessly, but if you have a relatively small dataset (maybe only a few million small records), that might be sufficient.
HTH!

Get File Creation Date Over HTTP

Given a file on a webserver (e.g., http://foo.com/bar.zip, only accessible through HTTP), is there any way to get its date attributes (e.g., date created, date modified) without downloading the entire archive first?
Right now, I download the archive and read the attributes programmatically. The trouble is that the archive is dozens of MiB, so it seems like a waste of resources to download the entire thing and end up reading off just a couple of bytes of information.
I realize that bandwidth is practically free, but I don't like to be wasteful in any case.
Try to read the Last-Modified header.
Be sure to use an HTTP HEAD request instead of an HTTP GET request, so that you read only the HTTP headers. If you do an HTTP GET, you will download the whole file anyway, even if you only intend to inspect the headers.
Just for the sake of simplicity, here's a compilation of the existing (perfect) answers from @ihorko and @JanThomä, using curl. Other options are available too, of course, but here's a fully functional answer.
Use curl with the -I option:
-I, --head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the file size and last modification time only.
Also, the -s option is nice here:
-s, --silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it.
Hence, something like this would do the trick:
curl -sI http://foo.com/bar.zip | grep 'Last-Modified' | cut -d' ' -f 2-
