According to RxAndroidBle, dispose() should be called in onPause() of the Activity lifecycle, and then the BLE connection will be closed? And
I can only connect to the BLE device in the new Activity; if I don't call dispose(), it comes up with a BleAlreadyConnectedException (“Already connected to device with MAC address ***”) when I connect to the BLE device in the new Activity.
So, how can I maintain connection state between Activities?
To keep a reference to anything for longer than the lifecycle of an Activity, one has to move the reference outside the scope of that Activity.
On the Android platform there are several ways to achieve this separation of lifecycles. The by-the-book approach would be a Service, which can be started by one Activity and stopped by a different one. Activities can communicate with the Service, for instance by using binding; just make sure the Service is started, as it may be killed if left without any bound clients (Activities).
The interface of the Service may vary on a case-by-case basis; you will have to design what suits your needs best.
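For illustration, a minimal sketch of such a Service, assuming RxAndroidBle 2.x with RxJava 2; the class name ConnectionService and its connect() method are hypothetical, and error handling, foreground handling and permissions are omitted:

```java
import android.app.Service;
import android.content.Intent;
import android.os.Binder;
import android.os.IBinder;

import com.polidea.rxandroidble2.RxBleClient;
import com.polidea.rxandroidble2.RxBleConnection;
import com.polidea.rxandroidble2.RxBleDevice;

import io.reactivex.Observable;
import io.reactivex.disposables.CompositeDisposable;

public class ConnectionService extends Service {

    public class LocalBinder extends Binder {
        public ConnectionService getService() {
            return ConnectionService.this;
        }
    }

    private final IBinder binder = new LocalBinder();
    private final CompositeDisposable disposables = new CompositeDisposable();
    private Observable<RxBleConnection> connectionObservable;

    @Override
    public IBinder onBind(Intent intent) {
        return binder;
    }

    // Called by whichever Activity initiates the connection; later Activities
    // re-use the same shared Observable instead of reconnecting.
    public Observable<RxBleConnection> connect(String macAddress) {
        if (connectionObservable == null) {
            RxBleDevice device = RxBleClient.create(this).getBleDevice(macAddress);
            connectionObservable = device.establishConnection(false)
                    .replay(1)
                    .refCount();
            // The Service keeps its own subscription, so the connection stays up
            // even when every Activity disposes its subscription in onPause().
            disposables.add(connectionObservable.subscribe(conn -> { }, throwable -> { }));
        }
        return connectionObservable;
    }

    @Override
    public void onDestroy() {
        disposables.clear(); // closes the BLE connection when the Service goes away
        super.onDestroy();
    }
}
```

Each Activity would then call startService() plus bindService(), obtain the shared observable via connect(macAddress), and dispose only its own subscription in onPause(); the connection itself is torn down when the Service is stopped.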
An alternative (discouraged) approach could be the singleton pattern.
Related
There is a Windows service that I need to communicate with (in a duplex way) from ASP.NET. Is it safe to turn the Windows service into a WCF service and organize two-way communication?
I'm concerned about a scenario where the service is trying to communicate but the ASP.NET process is being reloaded and the message gets lost. Though it's unlikely during development, I guess it's quite likely in production with many clients.
I'm leaning towards a solution that involves some kind of persistence:
Both the Windows service and ASP.NET write data to SQL Server and get notified via SqlDependency
They exchange messages via RabbitMq
Here are a couple of ideas regarding the general case where two independent systems (processes, servers, etc.) need to communicate reliably:
Transaction model, where the transmitting party initiates communication and waits for acknowledgment from the recipient before marking the message as delivered. In case of transmission failure/timeout, it's the sender's responsibility to persist the message and retry later. For instance, Webhook architectures rely on this model.
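As a rough illustration of the sender side of this model, here is a Java 11 sketch; the MessageStore interface and the webhook URL are hypothetical placeholders, not part of any particular framework:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

interface MessageStore {                 // hypothetical persistence used for retries
    void saveForRetry(String payload);
}

class WebhookSender {
    private final HttpClient http = HttpClient.newHttpClient();
    private final MessageStore store;

    WebhookSender(MessageStore store) {
        this.store = store;
    }

    // The message counts as delivered only when the recipient acknowledges it
    // with a 2xx response; otherwise it is persisted and retried later.
    void send(String payload) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://receiver.example.com/webhook")) // hypothetical endpoint
                .timeout(Duration.ofSeconds(5))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        try {
            HttpResponse<Void> response =
                    http.send(request, HttpResponse.BodyHandlers.discarding());
            if (response.statusCode() / 100 != 2) {
                store.saveForRetry(payload);
            }
        } catch (Exception e) {          // timeout or transport failure
            store.saveForRetry(payload);
        }
    }
}
```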
Publish/Subscribe model, used by a lot of distributed systems, where both parties rely on a third-party message broker (message queue/service bus mechanism) such as RabbitMQ. In this architecture, the sender is only responsible for making sure that the message has been successfully queued; the responsibility of making sure that the message is delivered to the recipient lies with the message broker. In this case, you need to make sure that your message broker satisfies your reliability needs, for example: is it in-memory only, or does it also persist to disk and is it able to recover not just from a process recycle but also from a power/system recycle?
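For the broker route, a minimal publishing sketch with the RabbitMQ Java client, using a durable queue and persistent delivery so that queued messages survive a broker restart; the queue name and host are assumptions:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

import java.nio.charset.StandardCharsets;

public class DurablePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // assumption: broker on localhost

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true: the queue definition survives a broker restart
            channel.queueDeclare("service-to-web", true, false, false, null);
            // PERSISTENT_TEXT_PLAIN: the message itself is written to disk
            channel.basicPublish("", "service-to-web",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```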
And like you said, you can build your own messaging infrastructure too: sender writes to a local or cloud database or a cloud queue/service bus, and the receiver polls and consumes the messages.
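And a bare-bones sketch of that do-it-yourself variant over plain JDBC; the outbox table, its columns, and how the Connection is obtained are all hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbQueuePoller {

    private final Connection connection;

    public DbQueuePoller(Connection connection) {
        this.connection = connection;
    }

    // Called periodically (e.g. from a timer): read pending messages, process
    // them, and mark them handled so they are not delivered twice.
    public void pollOnce() throws Exception {
        try (PreparedStatement select = connection.prepareStatement(
                "SELECT id, payload FROM outbox WHERE processed = 0");
             ResultSet rows = select.executeQuery()) {
            while (rows.next()) {
                long id = rows.getLong("id");
                handle(rows.getString("payload"));
                try (PreparedStatement update = connection.prepareStatement(
                        "UPDATE outbox SET processed = 1 WHERE id = ?")) {
                    update.setLong(1, id);
                    update.executeUpdate();
                }
            }
        }
    }

    private void handle(String payload) {
        // consume the message
    }
}
```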
So, a few guidelines:
If you ever need to scale out (have multiple servers) and they need to somehow collaborate on these messages, then make your initial investment in a database or cloud-queue solution (such as Azure SQL or Azure Queues).
Otherwise, if your services only need to communicate within one server, then you can use a database approach or use a queue service that satisfies your persistence/reliability requirements. RabbitMQ seems like a robust solution for this scenario.
I wanted to check the viability of a design approach that uses message-oriented middleware (MOM) technology like JMS, ActiveMQ, or RabbitMQ for handling asynchronous processing within a single web application, i.e. the publisher and the subscriber to the MOM server will be contained in the same web application.
The rationale behind this design is to offload some of the heavy-duty processing functionality as a background asynchronous operation. The publisher in this case is a server-side real-time web service method which needs to respond instantaneously (< 1 sec) to the calling web service client; the publisher emits the message on a MOM topic. The subscriber is contained in the same web application as the publisher and uses the message to asynchronously process the more complex, slightly more time-consuming (5-7 seconds) functionality.
With this design we can avoid having to spawn new threads within the application server container to handle the heavy-duty, complex processing functionality.
Is using a MOM server overkill in this case, where the message publisher and message subscriber are contained in the same web server address space? From what I have read, MOM tech is used mainly for inter-application communication, and I wanted to check if it is fine to use MOM for intra-application communication.
Let me know your thoughts.
Thanks,
Perhaps you will not think it is a good example, but in the JEE world using JMS for intra-application communication is quite common. Spawning new threads is considered bad practice, while message-driven beans make consuming messages relatively easy and you get transaction support. A compliant application server like GlassFish has JMS on board, so producing and consuming messages does not involve socket communication, as it would with a standalone ActiveMQ. But there might be reasons to have a standalone JMS server, e.g. if there is a cluster of consumers and you want the active instances to take over work from the failed ones... but then the standalone JMS server becomes the single point of failure, and now you want a cluster of them, and so on.
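For illustration, the consuming side as a message-driven bean (Java EE 7 / GlassFish style); the JNDI name jms/heavyWorkQueue is an assumption:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/heavyWorkQueue"),
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class HeavyWorkConsumer implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            // run the 5-7 second processing here, inside the container-managed
            // transaction, without any hand-managed threads
        } catch (JMSException e) {
            throw new RuntimeException(e); // rollback triggers redelivery for persistent messages
        }
    }
}
```

The web service method simply publishes the payload to jms/heavyWorkQueue (e.g. via an injected JMSContext) and returns immediately.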
One significant feature of JMS is (optional) message persistence. You may be concerned that the long-running task will fail for some reason and the client's request will be lost. But persistent messages are much more expensive, as they cause disk I/O.
From what you've described, I can tell that of the usual features of MOM (asynchronous processing, guaranteed delivery, order of messages) you only need asynchronous processing. So if guarantees are not important, I would use some kind of thread pool.
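A minimal sketch of that alternative; in a Java EE container you would typically inject a ManagedExecutorService rather than create the pool yourself, and the pool size here is an arbitrary example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BackgroundProcessor {

    // Bounded pool: the request thread hands the work off and returns immediately.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void submitHeavyWork(String payload) {
        pool.submit(() -> process(payload));
    }

    private void process(String payload) {
        // the 5-7 second processing; if the JVM dies before this runs, the work
        // is lost - that is the guarantee you give up by skipping persistence
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```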
The SignalR documentation says that scaleout/backplane works well for a server-broadcast type of load/implementation. However, I wonder whether, in the case of pure server broadcast, it will cause duplicate messages to be sent to the clients. Consider the following scenario:
I have two instances of my hub sitting on two web servers behind a load balancer on my web farm.
The hub on each server implements a timer for database polling to fetch some updates and broadcast to clients in groups, grouped on a topic id.
The clients for a group/topic might be divided between the two servers.
Both the hub instances will fetch the same or overlapping updates from the database.
Now as each hub sends the updates to clients via the backplane, will it not result in duplicate updates sent to the clients?
Please suggest.
The problem is not with SignalR, but with your database polling living inside your hubs. A backplane deals correctly with broadcast replication, but if you add another responsibility to your hubs then it's a different story. That's the part that is duplicating messages, not SignalR, because now you have N pollers doing broadcast across all server instances.
You could, for example, move that logic out of the hubs into something else, and let just one single instance of your server application use this new piece to generate the messages by polling, maybe using a piece of configuration to decide which instance. This way you would send messages only from there, and SignalR's backplane would take care of the replication. It's just a very basic suggestion and it could be done differently, but the key point is that your poller should not be replicated, and that's not directly related to SignalR.
It's also true that polling might not be the best way to deal with your scenario, but IMO that would be answering a different question.
I am working on a SignalR application and I intend to make it scalable using Azure Message Bus and Azure autoscale. However, based on my expected user base, I anticipate that 90% of the time, my application will only have one instance running.
I would like to only have the backplane active if there is more than one instance, since the backplane architecture increases the travel time of a message and the message bus will cost me money. I definitely recognize that the travel time and costs are very small, but there's no reason to incur them if they aren't needed.
Q: Is it possible to make the service bus backplane for SignalR dynamic so that it can be enabled and disabled based on need?
Possible? Probably, but this is uncharted territory and there's no telling what would break if you actually implemented a bus that dynamically scaled out on demand. Sounds like a cool experiment though...
We have a system sending HL7 messages to BizTalk using the MLLP HL7 accelerator. We then have several destination systems, which all need their own format of the HL7 message, so we use an orchestration for each destination, each involving a different transform. Each orchestration uses a filter to subscribe to the receive port. This subscription model works fairly well, but what if we need to stop or undeploy one of the orchestrations? The drawback of the subscription model, compared with a push model, is that there is no queuing built in, so if the orchestration is removed, the messages picked up by the receive port do not queue for that system. Or is this a concern? How do you handle upgrades to the orchestrations, etc.? Is there a better design pattern?
A slightly better design, in my opinion, is to place the transformations (maps) directly on the send ports. You can then have your filters on the different send ports to route to the destination systems.
This design would make updates a bit easier, as there isn't an orchestration that first needs to be removed to deploy a new version of the map. All you then have to do is stop the port (that leaves the subscription active, and messages will be queued for the subscribing port). After the port is stopped you can just modify the resource (assembly) with the new version of the map and then start the port to begin transforming and sending the queued messages.
It's usually a good idea to only use an orchestration when you need to control a more complex workflow, not just to apply a map as in your case.