It's taking too long to write a single command to a characteristic. I am using the code below for a single command, with a loop around it:
getConnObservable()
        .first()
        .flatMap(rxBleConnection -> rxBleConnection.writeCharacteristic(characteristics, command))
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                bytes -> onWriteSuccess(),
                this::onWriteFailure
        );
It's taking almost 600 ms to write to the device, and I need to write around 100 commands one by one.
Can anyone explain the best way to do such a batch operation?
The best way to get the highest possible performance over BLE is to use the same RxBleConnection to carry out all of the writes, which mitigates the RxJava overhead, i.e.:
getConnObservable()
        .first()
        .flatMapCompletable(rxBleConnection -> Completable.merge(
                rxBleConnection.writeCharacteristic(characteristics, command0).toCompletable(),
                rxBleConnection.writeCharacteristic(characteristics, command1).toCompletable(),
                (...)
                rxBleConnection.writeCharacteristic(characteristics, command99).toCompletable()
        ))
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
                this::onWriteSuccess,
                this::onWriteFailure
        );
Additionally, one could try to negotiate the shortest possible Connection Interval (CI) by subscribing to rxBleConnection.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH, delay, timeUnit).
Further speedup can be achieved by calling bluetoothGattCharacteristic.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE) if the peripheral/characteristic supports this write type.*
*Be aware that the internal buffer for writes without response is limited and behaves a bit differently depending on the API level. It should not matter for ~100 writes though.
In regard to this conversation:
RxAndroidBle is a Bluetooth Low Energy library, and comparing it with Blue2Serial (which uses standard Bluetooth) in terms of performance is not the best thing to do. The two have different use cases, much like accessing the Internet over Wi-Fi versus an Ethernet cable.
Best Regards
Related
I'm currently setting up Azure RTOS (ThreadX on STM32) with Ethernet, SPI and ADCs activated.
This STM32 has to pass through configuration information from time to time, coming from my PC over the Ethernet port.
It has to pass this information via SPI to two other STM32s, which makes the first STM32 the system controller / system interface. This will be a low-priority task, since activation of the passed configuration will be started by sync lines running from the system controller to the two other STMs.
While doing so, the system controller has to read in ADC values constantly and pass them via Ethernet / TCP to my computer.
I've used the ThreadX TCP server example, as given by STM, as a starting point.
From there I've managed to set up three servers on three ports, communicating successfully with a Python script on my PC (as a first test).
Now come the two great questions:
1)
Since my input signal may contain frequencies up to 2.5 MHz, I want to digitize this signal with the full 5 MSPS (Nyquist), which ADC3 is capable of.
The smallest internally available data type at full resolution is uint16_t, which makes the worst-case data rate R = 16 bit * 5 MSPS = 80 Mbit/s. There is surely room for optimization, e.g. 8-bit resolution, which halves the data rate but might not be enough, or 16 bits plus an FFT afterwards, which would also be sufficient since I'm mostly interested in energy per frequency band; initially, though, I wanted to do that on my computer, for best flexibility.
Even if the Ethernet interface is capable of 100 Mbit/s, I suspect the TCP layer of NetX Duo is not.
(There is also USB OTG available on this board, but since networked devices are in my opinion more versatile, I prefer using Ethernet; nevertheless, USB might be a backup solution.)
From my measurements, a data stream transmitted to the uC via TCP from within Python and mirrored back to my PC within a thread allows for relatively consistent 20 Mbit/s.
... How do I push this speed to a better level?
(I think 20 Mbit/s is the back-and-forth data rate, so one-way may be faster.)
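For what it's worth, a one-way streaming test separates the send path from the echo turnaround. Below is a rough sketch (not a definitive implementation) of a NetX Duo sender loop; adc_socket and app_pool are illustrative names for an already connected NX_TCP_SOCKET and its NX_PACKET_POOL, and setup/error handling are reduced to the minimum:

/* Sketch: stream a fixed buffer as fast as NetX accepts it, to measure
   one-way TCP throughput. Socket and pool setup are assumed done elsewhere. */
#include "nx_api.h"

extern NX_TCP_SOCKET adc_socket;   /* assumed: connected TCP socket */
extern NX_PACKET_POOL app_pool;    /* assumed: the application packet pool */

static UCHAR test_buf[1460];       /* one MSS-sized chunk of dummy payload */

VOID sender_thread_entry(ULONG arg)
{
    NX_PACKET *packet;
    for (;;)
    {
        /* Block until a packet is available in the pool. */
        if (nx_packet_allocate(&app_pool, &packet, NX_TCP_PACKET, NX_WAIT_FOREVER) != NX_SUCCESS)
            break;
        /* Copy the payload into the packet chain. */
        nx_packet_data_append(packet, test_buf, sizeof(test_buf), &app_pool, NX_WAIT_FOREVER);
        /* On success NetX takes ownership of the packet... */
        if (nx_tcp_socket_send(&adc_socket, packet, NX_WAIT_FOREVER) != NX_SUCCESS)
        {
            nx_packet_release(packet);   /* ...but we must release it on failure. */
            break;
        }
    }
}

As a general rule, sustained TCP throughput is limited by how many packets the pool can keep in flight and by the TCP window size, so those two knobs are usually the first things to enlarge before blaming the stack itself.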
Now for the second question:
2)
The ADC within the STM32 is capable of storing data to memory via DMA.
There are two callbacks available: one at the half-full and one at the full buffer state.
My problem is mostly about the way of reading out the DMA and/or triggering the conversion in the first place.
How do you do this the "right" way on an RTOS (such that you don't break the RT in RTOS)?
I see some options here, what are the pros/cons you can think of?
a) Let the ADC run freely, calling the callbacks at the respective fill levels, and trigger a TCP transmission whenever one of the callbacks fires (see the sketch after this list)
-> may lead to glitches due to insufficient speed of the TCP layer in my opinion.
b) Let the ADC conversion be triggered by a thread, which is preempted and will later TCP-transmit the data, as soon as the memory-buffer is full
-> may lead to inconsistency in the converted values, since you get burst-style conversions, with gaps in between, while the buffer is read
c) Let a thread trigger each conversion individually
-> A no-go I think, since threads are not scheduled often enough to reach a decent sample frequency
d) Let a free-running ADC trigger callbacks, let a thread do the FFT, transmit within another thread the data via TCP
-> May work, but is less flexible, since the data gets crunched within the uC.
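For context, here is a minimal sketch of how option a) is usually wired on ThreadX with the ST HAL: the DMA callbacks only post a pointer to the finished half-buffer into a message queue, and a worker thread does the heavy lifting. adc_queue, adc_tx_thread_entry and the buffer size are illustrative assumptions; the two callback names are the standard HAL ones.

/* Sketch: free-running ADC in circular DMA mode; the ISR context only
   enqueues, the thread dequeues and transmits. The queue is created
   elsewhere with tx_queue_create() and a message size of one pointer. */
#include "tx_api.h"
/* plus the HAL header for your MCU family, e.g. stm32h7xx_hal.h */

#define ADC_BUF_LEN 8192u                    /* total samples; DMA is circular */
static uint16_t adc_buf[ADC_BUF_LEN];        /* filled via HAL_ADC_Start_DMA() */
static TX_QUEUE adc_queue;

/* Called from the DMA ISR when the first half of adc_buf is full. */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    void *half = &adc_buf[0];
    tx_queue_send(&adc_queue, &half, TX_NO_WAIT);    /* never block in an ISR */
}

/* Called when the second half is full (the DMA then wraps to the start). */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    void *half = &adc_buf[ADC_BUF_LEN / 2u];
    tx_queue_send(&adc_queue, &half, TX_NO_WAIT);
}

void adc_tx_thread_entry(ULONG arg)
{
    void *half;
    for (;;)
    {
        tx_queue_receive(&adc_queue, &half, TX_WAIT_FOREVER);
        /* TCP-transmit (ADC_BUF_LEN / 2) * sizeof(uint16_t) bytes from half. */
    }
}

Note the arithmetic: at 5 MSPS a 4096-sample half-buffer fills every ~0.8 ms, so the consumer thread must sustain the full 80 Mbit/s or samples get overwritten, which is exactly the glitch risk named for option a).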
--> Are there other ways you can think of / what do you think about the ways I named here?
--> What do you think about question 1)?
Have a nice day!
I have a loop {} around a futures::select!. I'd like to
await new connections on a Unix socket.
await a new message on one of the Unix streams created in 1. (preferably with some kind of UUID, sent by the connecting client in 1. directly after connecting, attached to it)
I still haven't understood the concept of a Stream correctly, especially futures-rs's "FusedFuture". I tried to create the streams I need using tokio's async-stream, but that didn't work, as it does not implement futures-rs's FusedFuture (which I require, since other streams I check in the same select! benefit from/require FusedFuture's is_terminated() method).
I think something like tokio_stream::StreamMap is interesting (the key being the UUID, and the value a Stream that yields a tuple of the message received on the socket and a reference to the UnixStream, in order to write back).
I'm really sorry that I cannot provide any examples or something similar to make the issue clearer. I've tried a lot but was unable to come any closer to my goal. I hope the way I wrote this is somewhat understandable 😟
Whether to use a std::os::unix::net::UnixListener or tokio::net::UnixListener is up in the air. I've tried both but was unable to achieve anything with either of them.
I'm trying to convert an application from using standard point-to-point MPI calls (e.g., MPI_Isend, MPI_Irecv) to using MPI-3's one-sided calls. My goal is to improve performance on my hardware, which is a system that has Infiniband hardware support and an MPI implementation that's optimized for RDMA calls. I've been told that the hardware performs particularly well with passive synchronization mode, as opposed to active synchronization (i.e., Post-Start-Complete-Wait).
However, even after reading through the MPI standard documentation and examples, I'm confused on how to actually use the calls. For context, my program has a setup phase where I will know the communication pattern and even the buffers of the send data and ultimate buffer of the receiver. So, it's straightforward to set up a window and use it.
Specifically, with passive synchronization, I'm confused about when the "receiver" knows the data in the window has been written by the sender. What I want to do is have the sender produce the message data, then call MPI_Win_lock on the window, do an MPI_Put, and wait for completion with an MPI_Win_unlock. But what is an efficient / recommended way for the "receiver" (window target) of the data to know when the message data has been written? Similarly, given that the communication pattern is iterated and the same receive buffer (the target's buffer) is used multiple times, how do I know that the receiver is done consuming the buffer so it can be reused?
I can envision a few approaches:
I can use an MPI_Barrier after the MPI_Win_unlock and before the receiver accesses the data. (This seems like it would work, but I'm skeptical that it would yield better performance than active synchronization.)
I can possibly use MPI_Win_lock and MPI_Win_unlock on the receiver (target), locking the window while the target is actually using the data so that an access epoch can't start on the origin (but is that the way it works? I've read that lock and unlock don't create critical sections in the traditional sense).
Some sort of home-grown approach where the receiver polls for some sort of nonce to be written, knowing the data is available when that happens (sketched below).
Docs for MPI_Win_lock: https://www.open-mpi.org/doc/v3.0/man3/MPI_Win_lock.3.php
In general, how does a programmer synchronize with MPI_Win_lock and MPI_Win_unlock in a way that's any more efficient than the active synchronization approach? It does feel like I need to just use post-start-complete-wait, but I'm hoping you can help me find a way to try passive synchronization as well.
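In case a concrete shape helps the discussion, here is a minimal sketch of the third, flag-polling approach under stated assumptions: the window is assumed to cover the payload plus one int flag at displacement flag_disp, and win, flag_disp and n are illustrative names rather than anything from your code.

/* Sketch: passive-target put with a notification flag (MPI-3). */
#include <mpi.h>

/* Origin side: write the payload, make it visible, then set the flag. */
void send_one(const double *buf, int n, int target, MPI_Aint flag_disp, MPI_Win win)
{
    int one = 1;
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Put(buf, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, win);
    MPI_Win_flush(target, win);               /* payload now complete at target */
    /* Set the flag atomically, strictly after the payload is complete. */
    MPI_Accumulate(&one, 1, MPI_INT, target, flag_disp, 1, MPI_INT, MPI_REPLACE, win);
    MPI_Win_unlock(target, win);
}

/* Target side: poll the flag with an atomic read until it flips. */
void wait_for_data(int my_rank, MPI_Aint flag_disp, MPI_Win win)
{
    int done = 0;
    while (!done) {
        MPI_Win_lock(MPI_LOCK_SHARED, my_rank, 0, win);
        /* MPI_NO_OP turns fetch-and-op into an atomic read of the flag. */
        MPI_Fetch_and_op(NULL, &done, MPI_INT, my_rank, flag_disp, MPI_NO_OP, win);
        MPI_Win_unlock(my_rank, win);
    }
    /* The payload may be consumed now; reset the flag before the next iteration. */
}

Buffer reuse in the opposite direction can be signalled with a second flag written the same way, with the roles of origin and target swapped. The atomic read via MPI_Fetch_and_op/MPI_NO_OP keeps the polling within the RMA memory model at the cost of lock/unlock traffic; with MPI_Win_allocate and the unified memory model, implementations often permit cheaper local polling combined with MPI_Win_sync, but that is implementation-dependent.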
I am developing a java-asterisk application that calls subscribers to deliver messages. At some moments during the call I need to monitor whether the subscriber is talking or silent, and I need to monitor that for a fairly long time (1-3 seconds) without interrupting the flow of the outgoing message.
The way I am doing it now is as below:
streamFile(*file A*);
exec("WaitForSilence","300,1,1");
waitStatus=getVariable("WAITSTATUS");
streamFile(*file B*);
This works fine, but it is only a 300 ms detection with a 1 s timeout, so from the subscriber's point of view the silence between file A and file B is almost unnoticeable. But if I want to listen for longer (say 3 seconds, for example), the subscriber's experience will be ruined.
What I would need is a function similar to "WaitForSilence" but that:
runs in parallel to the script;
delivers its outcome in a channel variable with a name that I define (as there might be several calls to the function, and I need to get all the results)
I've been looking for more than a week now and couldn't find a way to do that. Any ideas?
The code you provided will first do the wait and only after that do the playback.
There is no simple way to do that in one application.
Possible ways:
1) Create a C/C++ application for it (Asterisk guru skill required).
2) Create another channel, mix it in with ChanSpy, and do the silence detection in that channel. Complexity: Asterisk expert.
Neither is short (more than 2-3 screens of code), so they can't be fully described on this site.
You can also try using the Background application, but I am afraid that will not work either.
I am learning about F# agents (MailboxProcessor).
I am dealing with a rather unconventional problem.
I have one agent (dataSource) which is a source of streaming data. The data has to be processed by an array of agents (dataProcessor). We can consider dataProcessor as some sort of tracking device.
Data may flow in faster than the dataProcessor is able to process its input.
Some delay is OK. However, I have to ensure that the agent stays on top of its work and does not get buried under obsolete observations.
I am exploring ways to deal with this problem.
The first idea is to implement a stack (LIFO) in dataSource: dataSource would send over the latest observation available whenever a dataProcessor becomes available to receive and process data. This solution may work, but it could get complicated, as dataProcessor would need to be blocked and re-activated and to communicate its status to dataSource, leading to a two-way communication problem. The problem may boil down to a blocking queue in the producer-consumer pattern, but I am not sure.
The second idea is to have dataProcessor take care of message sorting. In this architecture, dataSource would simply post updates to dataProcessor's queue, and dataProcessor would use Scan to fetch the latest data available in its queue. This may be the way to go. However, I am not sure whether, in the current design of MailboxProcessor, it is possible to clear a queue of messages, deleting the older, obsolete ones. Furthermore, here it is written that:
Unfortunately, the TryScan function in the current version of F# is
broken in two ways. Firstly, the whole point is to specify a timeout
but the implementation does not actually honor it. Specifically,
irrelevant messages reset the timer. Secondly, as with the other Scan
function, the message queue is examined under a lock that prevents any
other threads from posting for the duration of the scan, which can be
an arbitrarily long time. Consequently, the TryScan function itself
tends to lock-up concurrent systems and can even introduce deadlocks
because the caller's code is evaluated inside the lock (e.g. posting
from the function argument to Scan or TryScan can deadlock the agent
when the code under the lock blocks waiting to acquire the lock it is
already under).
Having the latest observation bounced back may be a problem.
The author of this post, Jon Harrop, suggests that
I managed to architect around it and the resulting architecture was actually better. In essence, I eagerly Receive all messages and filter using my own local queue.
This idea is surely worth exploring but, before starting to play around with code, I would welcome some inputs on how I could structure my solution.
Thank you.
Sounds like you might need a destructive-scan version of the mailbox processor. I implemented this with TPL Dataflow in a blog series that you might be interested in.
My blog is currently down for maintenance but I can point you to the posts in markdown format.
Part1
Part2
Part3
You can also check out the code on github
I also wrote about the issues with scan in my lurking horror post
Hope that helps...
tl;dr I would try this: take the Mailbox implementation from FSharp.Actor or Zach Bray's blog post, replace the ConcurrentQueue with a ConcurrentStack (plus add some bounded-capacity logic), and use this changed agent as a dispatcher to pass messages from dataSource to an army of dataProcessors implemented as ordinary MBPs or Actors.
tl;dr2 If workers are a scarce and slow resource, and we need to process the message that is the latest at the moment a worker becomes ready, then it all boils down to an agent with a stack instead of a queue (with some bounded-capacity logic) plus a BlockingQueue of workers. The dispatcher dequeues a ready worker, pops a message from the stack, and sends the message to that worker. After the job is done, the worker enqueues itself back into the queue when it becomes ready (e.g. before let! msg = inbox.Receive()). The dispatcher's consumer thread then blocks until any worker is ready, while the producer thread keeps the bounded stack updated. (A bounded stack could be done with an array + offset + size inside a lock; the one below is overly complex.)
Details
MailboxProcessor is designed to have only one consumer. This is even commented on in the source code of MBP here (search for the word 'DRAGONS' :) ).
If you post your data to an MBP, then only one thread can take it from the internal queue or stack.
In your particular use case I would use a ConcurrentStack directly, or better, wrapped in a BlockingCollection:
It will allow many concurrent consumers
It is very fast and thread safe
BlockingCollection has a BoundedCapacity property that allows you to limit the size of the collection. It throws on Add, but you can catch the exception or use TryAdd. If A is the main stack and B a standby: TryAdd to A; on false, Add to B and swap the two with Interlocked.Exchange; then process the needed messages in A, clear it, and make it the new standby (or use three stacks if processing A could take longer than B takes to fill up again). This way you neither block nor lose any messages, but can discard unneeded ones in a controlled way.
BlockingCollection has methods like AddToAny/TakeFromAny, which work on arrays of BlockingCollections. This could help, e.g.:
dataSource produces messages to a BlockingCollection with ConcurrentStack implementation (BCCS)
another thread consumes messages from the BCCS and sends them to an array of processing BCCSs. You said that there is a lot of data, so you may sacrifice one thread to block and dispatch your messages indefinitely
each processing agent has its own BCCS, or is implemented as an Agent/Actor/MBP to which the dispatcher posts messages. In your case you need to send a message to only one processorAgent, so you may store the processing agents in a circular buffer to always dispatch a message to the least recently used processor.
Something like this:
(data stream produces 'T)
|
[dispatcher's BCCS]
|
(a dispatcher thread consumes 'T and pushes to processors, manages capacity of BCCS and LRU queue)
| |
[processor1's BCCS/Actor/MBP] ... [processorN's BCCS/Actor/MBP]
| |
(process) (process)
Instead of ConcurrentStack, you may want to read about the heap data structure. If you need your latest messages by some property of the messages, e.g. a timestamp, rather than by the order in which they arrive on the stack (e.g. if there could be delays in transit, so that arrival order <> creation order), you can get the latest message by using a heap.
If you still need Agent semantics/APIs, you could read several sources in addition to Dave's links and somehow adapt an implementation to multiple concurrent consumers:
An interesting article by Zach Bray on an efficient Actors implementation. There you would need to replace the line execute true (under the comment // Might want to schedule this call on another thread.) with a line like async { execute true } |> Async.Start, because otherwise the producing thread would become the consuming thread, which is not good for a single fast producer. However, for a dispatcher like the one described above, this is exactly what is needed.
The FSharp.Actor (aka Fakka) development branch and the F# MBP source code here (first link above) could be very useful for implementation details. The FSharp.Actor library has been in a freeze for several months, but there is some activity in the dev branch.
One should also not miss the discussion about Fakka on Google Groups in this context.
I have a somewhat similar use case, and for the last two days I have researched everything I could find on F# Agents/Actors. This answer is a kind of TODO for myself to try these ideas, half of which were born while writing it.
The simplest solution is to greedily eat all messages in the inbox when one arrives and discard all but the most recent. This is easily done using TryReceive:
let rec readLatestLoop oldMsg =
    async { let! newMsg = inbox.TryReceive 0
            match newMsg with
            | None -> return oldMsg
            | Some newMsg -> return! readLatestLoop newMsg }

let readLatest() =
    async { let! msg = inbox.Receive()
            return! readLatestLoop msg }
When faced with the same problem, I architected a more sophisticated and efficient solution I called cancellable streaming and described in an F# Journal article here. The idea is to start processing messages and then cancel that processing if they are superseded. This significantly improves concurrency if significant processing is being done.