No notification data in Write/Notification set-up - rxandroidble

After doing my best to understand all the magic in RxJava and the excellent RxAndroidBle library, I'm stuck! The last thing I need to fix is to set up a write/notification link in which I write a value - a single value - and after that subscribe to a specific characteristic.
I've used the following code - which I believe is considered best practice (?) - but it doesn't actually result in any data on soundTV.
connectionSubscription = device.establishConnection(false)
        .flatMap( // when the connection is available...
                rxBleConnection -> rxBleConnection.setupNotification(RX_CHAR_UUID), // ... set up the notification...
                (rxBleConnection, apScanDataNotificationObservable) -> Observable.combineLatest( // ... when the notification is set up...
                        rxBleConnection.writeCharacteristic(TX_CHAR_UUID, new byte[]{SOUND}).toObservable(), // ... write the characteristic...
                        apScanDataNotificationObservable.take(1), // ... and observe the first notification on AP_SCAN_DATA...
                        (writtenBytes, responseBytes) -> responseBytes // ... when both appear, return just the response bytes...
                )
        )
        .flatMap(observable -> observable) // ... flatMap the result as it is Observable<byte[]>...
        .take(1) // ... and finish after the first response is received to clean up the notification
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                responseBytes -> soundTV.setText(new String(responseBytes)),
                throwable -> soundTV.setText(throwable.toString())
        );
There is no data written by the subscription to the given TextView, and I can't find anything that goes wrong.
If I just set up the notifications, without combining them with the write, everything works as it should.
Any suggestions on how to make it work?
The code example above gives only one response. What I was looking for was a write followed by an ongoing subscription. I didn't realize that the take(1) call actually made a difference; I thought it was just clean-up in the complex call structure.
Sorry! This works as intended for me:
connectionSubscription = device.establishConnection(false)
        .flatMap( // when the connection is available...
                rxBleConnection -> rxBleConnection.setupNotification(RX_CHAR_UUID), // ... set up the notification...
                (rxBleConnection, apScanDataNotificationObservable) -> Observable.combineLatest( // ... when the notification is set up...
                        rxBleConnection.writeCharacteristic(TX_CHAR_UUID, new byte[]{SOUND}).toObservable(), // ... write the characteristic...
                        apScanDataNotificationObservable, // ... and observe the notifications on AP_SCAN_DATA...
                        (writtenBytes, responseBytes) -> responseBytes // ... when both appear, return just the response bytes...
                )
        )
        .flatMap(observable -> observable) // ... flatMap the result as it is Observable<byte[]>...
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                responseBytes -> soundTV.setText(new String(responseBytes)),
                throwable -> soundTV.setText(throwable.toString())
        );

Related

Problem with handling async inside of an actix-rust actor

Currently I'm checking out Actix, a Rust-based actor framework. I'm also using Actix Web to build a REST API. Now, I'm familiar with actor-based architecture from working with Akka; however, I'm not able to start a simple async task inside my handler.
It's simplified, but I have the following code:
#[post("/upload")]
pub async fn upload_images(
    app_config: web::Data<AppConfig>,
    mut payload: Multipart,
) -> Result<HttpResponse> {
    // ... transforms the multipart form into images ...
    for img in img_vec {
        app_config.image_processor_addr.do_send(ResizeImage {
            img_id: img._id,
            img_format: img.format,
            image_buffer: img.image.bytes,
        });
    }
    Ok(HttpResponse::Ok()
        .content_type(ContentType::plaintext())
        .body(format!("Inserted {} images.", vec_len)))
}
As you can see, I receive a multipart upload which consists of images, which I then send to an image processing actor to perform a resize on the images.
And this is the simplified code for handling the ResizeImage message in the ImageProcessor actor:
impl Handler<ResizeImage> for ImageProcessor {
    type Result = ();

    fn handle(&mut self, msg: ResizeImage, _: &mut Self::Context) -> Self::Result {
        let thumbnail_col = self.thumbnail_col.clone();
        let img_col = self.img_col.clone();
        let img_format: ImageFormat = msg.img_format.clone().into();
        log::info!("Parsing image {} with actor {}.", msg.img_id, self.id);
        let actor_task_fut = Box::pin(async move {
            // ... parses the image here ...
        });
        match Arbiter::current().spawn(actor_task_fut) {
            true => log::info!("Sent task to arbiter."),
            false => log::error!("Failed to send task to arbiter!"),
        }
    }
}
The idea is that I would resolve the web handler, and the resize task would be done asynchronously on the actor thread. This works on the first call, but when I call the same endpoint before all the images from the previous call have been parsed, it doesn't resolve immediately; it waits until the actor has resized the previous batch.
I was under the impression that the messages would be sent to the actor's mailbox and that the handler code would not need to wait for anything, since I'm using do_send, which the documentation states does not await the answer. With Akka I can easily do something similar, and it works. Am I missing something here? Is the way I'm handling async inside the actor thread wrong?

In Firebase and Kotlin, in case of no network connection is there an easy way to handle endless network request looping?

In the case of network connectivity loss, the following code just loops endlessly and keeps making API calls. Is there a way to cancel with a timeout (for example, 5000 ms) using Firebase API? Or would I have to make my own Coroutine to handle this?
fun updateUserFieldInDB(
    collectionPath: String,
    strArr: ArrayList<String>,
    onSuccess: (() -> Unit),
    onFail: (() -> Unit)
) {
    val fbUser = Firebase.auth.currentUser
    if (fbUser == null) {
        Log.i(TAG, "user is null....")
        return
    }
    val db = Firebase.firestore
    when (strArr.size) {
        2 -> {
            db.collection(collectionPath).document(fbUser.uid).update(strArr[0], strArr[1])
                .addOnSuccessListener {
                    onSuccess()
                }
                .addOnFailureListener {
                    onFail()
                }
        }
    }
}
The onSuccess and onFail completion handlers for Firestore only fire once the write operation has been committed or rejected on the server. You should only use them if you're interested in detecting that situation, in which case the looping is to be expected.
If you only care whether the write operation was recorded by the Firestore client (in its local cache), the best way to detect that is when the update(strArr[0], strArr[1]) call completes.
So pretty much: when the next line of code executes, the write has been recorded locally; when the completion listeners fire, the write has been handled on the server.

Spawn reading data from multipart in actix-web

I tried the example of actix-multipart with actix-web v3.3.2 and actix-multipart v0.3.0.
For a minimal example,
use actix_multipart::Multipart;
use actix_web::{post, web, App, HttpResponse, HttpServer};
use futures::{StreamExt, TryStreamExt};

#[post("/")]
async fn save_file(mut payload: Multipart) -> HttpResponse {
    while let Ok(Some(mut field)) = payload.try_next().await {
        let content_type = field.content_disposition().unwrap();
        let filename = content_type.get_filename().unwrap();
        println!("filename = {}", filename);
        while let Some(chunk) = field.next().await {
            let data = chunk.unwrap();
            println!("Read a chunk.");
        }
        println!("Done");
    }
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(save_file))
        .bind("0.0.0.0:8080")?
        .run()
        .await
}
This works well, but I want to process the form data asynchronously. So I tried this instead:
use actix_multipart::Multipart;
use actix_web::{post, web, App, HttpResponse, HttpServer};
use futures::{StreamExt, TryStreamExt};

#[post("/")]
async fn save_file(mut payload: Multipart) -> HttpResponse {
    actix_web::rt::spawn(async move {
        while let Ok(Some(mut field)) = payload.try_next().await {
            let content_type = field.content_disposition().unwrap();
            let filename = content_type.get_filename().unwrap();
            println!("filename = {}", filename);
            while let Some(chunk) = field.next().await {
                let data = chunk.unwrap();
                println!("Read a chunk.");
            }
            println!("Done");
        }
    });
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(save_file))
        .bind("0.0.0.0:8080")?
        .run()
        .await
}
(I added actix_web::rt::spawn to save_file.)
But this did not work -- the message "Done" was never printed. The number of "Read a chunk." messages displayed in the second case was also lower than in the first case, so I guess that field.next().await cannot, for some reason, finish reading all the data.
I don't know much about asynchronous programming, so I am not sure why field.next() did not work inside actix_web::rt::spawn.
My questions are: why is this, and how can I make it work with actix_web::rt::spawn?
When you make this call:
actix_web::rt::spawn(async move {
    // do things...
});
spawn returns a JoinHandle which is used to poll the task. When you drop that handle (by not binding it to anything), the task is "detached", i.e., it runs in the background.
The actix documentation is not particularly helpful here, but actix uses the tokio runtime under the hood. A key issue is that in tokio, spawned tasks are not guaranteed to complete. The executor needs to know, somehow, that it should perform work on that future. In your second example, the spawned task is never .awaited, nor does it communicate with any other task via channels.
Most likely, the spawned task is never polled and does not make any progress. In order to ensure that it completes, you can either .await the JoinHandle (which will drive the task to completion) or .await some other Future that depends on work in the spawned task (usually by using a channel).
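Both options can be sketched with plain std threads rather than actix/tokio tasks (a hypothetical analogy, not actix API: `thread::spawn` stands in for `rt::spawn`, `JoinHandle::join` for `.await`ing the handle, and an mpsc channel for an async channel between tasks):

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: guarantee that a spawned unit of work completes before we
// move on, either by joining its handle or by waiting on a channel
// the worker writes to.
fn spawn_and_wait() -> i32 {
    let (tx, rx) = mpsc::channel();

    let handle = thread::spawn(move || {
        // ... the work that would otherwise run detached ...
        tx.send(42).expect("receiver dropped");
    });

    // Option 1: join (".await") the handle, so the work is known to finish.
    handle.join().expect("worker panicked");

    // Option 2: block on a channel the worker feeds.
    rx.recv().expect("sender dropped")
}

fn main() {
    println!("{}", spawn_and_wait()); // prints 42
}
```

The same shape applies in async code: hold on to the handle returned by spawn and `.await` it (or `.await` a channel receiver) instead of dropping it and hoping the detached task finishes.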
As for your more general goal, the work is already being performed asynchronously! Most likely, actix is doing roughly what you tried to do in your second example: upon receiving a request, it spawns a task to handle the request and polls it repeatedly (as well as the other active requests) until it completes, then sends a response.

Is there a way to be notified when a client has unsubscribe from server sent events?

As I understand it, when a request to an event emitter on the server arrives, that request is never closed, and you only need to res.write() every time you would like to send a message. However, is there a way to be notified when the client that performed this request has left? Is there a property on the request object?
suppose I have the following route
app.get('/event', function (req, res) {
    // set response headers
    // how do I check if the req object is still active, to send a message and perform other actions?
});
The basic sequence of events should be similar in other frameworks, but this example is Grails 3.3.
First set up endpoints to subscribe, and to close the connection.
def index() {
    // handler for GET /api/subscribe
    rx.stream { Observer observer ->
        // This is the Grails event bus. Background tasks,
        // services and other controllers can post these
        // events, CLIENT_HANGUP and SEND_MSG, which are
        // just string constants.
        eventBus.subscribe(CLIENT_HANGUP) { String msg ->
            // Code to handle when the Grails event bus
            // posts CLIENT_HANGUP.
            // Do any side effects here, like updating your counter,
            // then close the SSE connection.
            observer.onCompleted()
            return
        }
        eventBus.subscribe(SEND_MSG) { String msg ->
            // Send a Server-Sent Event
            observer.onNext(rx.respond(msg))
        }
    }
}

def disconnecting() {
    // handler for GET /api/disconnect
    // Post the CLIENT_HANGUP event to the Grails event bus
    notify(CLIENT_HANGUP, 'disconnect')
}
Now in the client, you need to arrange to GET /api/disconnect whenever your use-case requires it. Assuming you want to notice when someone navigates away from your page, you could register a function on window.onbeforeunload. This example is using Vue.js and Axios.
window.onbeforeunload = function (e) {
    e.preventDefault()
    Vue.$http({
        method: 'get',
        url: 'http://localhost:8080/api/disconnect'
    })
    .then((response) => { console.log(response) })
    .catch(({error}) => { console.log(error) })
}
In the case of Servlet stacks like Grails, I found that I needed to do this even if I had no housekeeping of my own to do when the browser went away. Without it, page reloads were causing IOExceptions on the back end.

How do you know when a resume login attempt is being made or was completed?

On the client:
Can you tell on page load whether a resume login attempt will be made?
Is there a hook for when the attempt returns? Can I listen for the right DDP message?
EDIT: Looks like Meteor.userId() is defined on page load when a resume login attempt will be made, which takes care of #1.
Here are a couple solutions:
Watch DDP on client
Unfortunately by the time the stream handler is called with the result of the login method, Meteor.connection._methodInvokers has been cleared – hence the search function. It would be nice if there was a different / more efficient way to know resumeMethodId. A few possibilities:
Is it guaranteed to have id "1"?
A hook that is called when Meteor decides to call login
If Meteor.connection._methodInvokers were reactive, I could do an autorun that stops after the id is found.
resumeAttemptComplete = (success) ->
  console.log 'resumeAttemptComplete', success

resumeMethodId = null

searchForResumeMethodId = ->
  for id, invoker of Meteor.connection._methodInvokers
    sentMessage = invoker._message
    if sentMessage.method is 'login' and sentMessage.params[0].resume?
      resumeMethodId = id

if Meteor.isClient
  Meteor.connection._stream.on 'message', (messageString) ->
    unless resumeMethodId
      searchForResumeMethodId()
    message = JSON.parse messageString
    if message.id is resumeMethodId and message.msg is 'result'
      resumeAttemptComplete !message.error
_methodInvokers definition: https://github.com/meteor/meteor/blob/de74f2707ef34d1b9361784ecb4aa57803d34ae8/packages/ddp-client/livedata_connection.js#L79-L83
Server onLogin sends event to client
// server:
// map of connection ids -> publish function contexts
let onResumePublishers = {}

Meteor.publish('onResume', function () {
  onResumePublishers[this.connection.id] = this
  this.ready()
  this.onStop(() => {
    delete onResumePublishers[this.connection.id]
  })
})

let handleLoginEvent = function ({connection, type}, loggedIn) {
  if (type === 'resume') {
    let publisher = onResumePublishers[connection.id]
    if (publisher)
      publisher.added('onResume', connection.id, {loggedIn})
  }
}

Accounts.onLogin(function (loginAttempt) {
  handleLoginEvent(loginAttempt, true)
})

Accounts.onLoginFailure(function (loginAttempt) {
  handleLoginEvent(loginAttempt, false)
})

// client:
let resumeExpires = new Date(localStorage.getItem('Meteor.loginTokenExpires'))
let resumeAttemptBeingMade = resumeExpires && resumeExpires > new Date()

let OnResume = new Mongo.Collection('onResume')
let onResumeSubscription = Meteor.subscribe('onResume')

OnResume.find(Meteor.connection.id).observeChanges({
  added(id, {loggedIn}) {
    onResumeSubscription.stop()
    onResumeAttemptCompleted(loggedIn)
  }
})

let onResumeAttemptCompleted = function (success) {
  // ...
}
Check Meteor.loggingIn() if you want to know whether the user is currently trying to log in or not. See the docs.