Is there any sense in sending messages in parallel (splitting the clients into two groups)?
new Thread(() =>
{
    context.Clients.Clients(listUsers1).LevelI("1 ", d);
}).Start();
new Thread(() =>
{
    context.Clients.Clients(listUsers2).LevelI("1 ", d);
}).Start();
Or should I just send the messages to all clients at once?
There should be no need to do that unless you are sending a different message to listUsers1 vs. listUsers2.
SignalR can send a fairly high number of messages per second. If the size of the messages exceeds the recommended cap, you should look at how to reduce the message size.
Generally, neither of those should be an issue you have to worry about as it relates to performance.
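For illustration (the hub method LevelI, the connection-id lists and the payload d come from the question; d1, d2 and the combined list are my own assumptions), one call per distinct message is enough, and splitting only makes sense when the payloads differ:

// Requires System.Linq for Concat/ToList.
// One payload for everybody: a single call, SignalR fans it out to the connections.
context.Clients.Clients(listUsers1.Concat(listUsers2).ToList()).LevelI("1 ", d);

// Split the lists only when the two groups should receive different payloads (d1, d2 assumed).
context.Clients.Clients(listUsers1).LevelI("1 ", d1);
context.Clients.Clients(listUsers2).LevelI("1 ", d2);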
When trying to send 3 requests simultaneously, only one gets a response, because the response topic is empty for the others.
Below is the concurrent listener container:
@Bean
public ConcurrentMessageListenerContainer<String, String> replyListenerContainer() {
    ContainerProperties containerProperties = new ContainerProperties(replyTopic);
    containerProperties.setGroupId(returnGroup);
    ConcurrentMessageListenerContainer<String, String> kafkaMessageListenerContainer =
            new ConcurrentMessageListenerContainer<>(consumerFactory(), containerProperties);
    kafkaMessageListenerContainer.setConcurrency(3);
    return kafkaMessageListenerContainer;
}
I am expecting that all 3 requests should complete sequentially.
As Artem said, each partition can have only one consumer (per consumer group).
If the topic has only one partition, there can be only one consumer.
It makes no sense at all to have a replying container receive multiple copies of the reply.
If you have 3 consumers on the topic and want to wait for 3 replies, the next release (2.3) will have an AggregatingReplyingKafkaTemplate. https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#aggregatingreplyingkafkatemplate
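One way to read that advice, as a sketch only (bean and field names taken from the question's configuration, not a definitive fix): with a single-partition reply topic, concurrency above 1 only creates idle consumers, so the replying container can simply stay at one consumer.

@Bean
public ConcurrentMessageListenerContainer<String, String> replyListenerContainer() {
    ContainerProperties containerProperties = new ContainerProperties(replyTopic);
    containerProperties.setGroupId(returnGroup);
    ConcurrentMessageListenerContainer<String, String> container =
            new ConcurrentMessageListenerContainer<>(consumerFactory(), containerProperties);
    // One consumer is enough to receive every reply; effective concurrency is bounded
    // by the partition count anyway, so higher values just sit idle here.
    container.setConcurrency(1);
    return container;
}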
I'm using Flutter to work on a Bluetooth Low Energy app, via the flutterBlue library, in which we are potentially connecting to multiple peripherals at the same time.
I am able to connect to multiple peripherals if I connect to them individually and send commands to all of them simultaneously.
For state management, my BluetoothHelper is the Model for my ScopedModel.
class BluetoothHelper extends Model {
  bool isProcessing = false;
  int val = 0;

  FlutterBlue flutterBlue = FlutterBlue.instance; // bluetooth library instance
  StreamSubscription scanSubscription;
  Map<DeviceIdentifier, ScanResult> scanResults = new Map();

  /// State
  StreamSubscription stateSubscription;
  BluetoothState state = BluetoothState.unknown;

  /// Device
  BluetoothDevice device; // the peripheral currently being connected
  List<BluetoothDevice> devicesList = new List(); // todo
  bool get isConnected => devicesList.isNotEmpty;
  StreamSubscription deviceConnection;
  StreamSubscription deviceStateSubscription;
  List<BluetoothService> services = new List();
  Map<Guid, StreamSubscription> valueChangedSubscriptions = {};
  BluetoothDeviceState deviceState = BluetoothDeviceState.disconnected;

  Future startScan(String uuid) async {
    isProcessing = true;
    if (val == 0) {
      Future.delayed(Duration(milliseconds: 25), () => scanAndConnect(uuid));
      val++;
    } else {
      Future.delayed(Duration(seconds: 4), () => scanAndConnect(uuid));
    }
  }

  scanAndConnect(String uuid) {
    scanSubscription =
        flutterBlue.scan(timeout: const Duration(seconds: 120), withServices: [
      // new Guid('FB755D40-8DE5-481E-A369-21C0B3F39664')
    ]).listen((scanResult) {
      if (scanResult.device.id.toString() == uuid) {
        scanResults[scanResult.device.id] = scanResult;
        print("found! Attempting to connect " + scanResult.device.id.toString());
        device = scanResult.device;
        connect(device);
      }
    }, onDone: stopScan);
  }

  Future connect(BluetoothDevice d) {
    deviceConnection = flutterBlue.connect(d).listen(null);
    deviceStateSubscription = d.onStateChanged().listen((s) {
      if (s == BluetoothDeviceState.connected) {
        stopScan();
        d.discoverServices().then((s) {
          print("connected to ${device.id.toString()}");
          services = s;
          services.forEach((service) {
            var characteristics = service.characteristics;
            for (BluetoothCharacteristic c in characteristics) {
              if (c.uuid.toString() == '') { // we look for the uuid we want to write to
                String handshakeValue; // value is initialized here in code
                List<int> bytes = utf8.encode(handshakeValue);
                d.writeCharacteristic(c, bytes,
                    type: CharacteristicWriteType.withResponse);
                devicesList.add(d);
              }
            }
          });
        });
      }
    });
  }
}
I am trying to loop through all the peripherals' Unique Identifiers (UIDs) and have the app connect to them one after the other programmatically.
This wasn't working out great. It would always end up connecting to the very last peripheral. It seems the flutterBlue instance can only scan for one UID at a time, and if it receives another request, it immediately drops the previous one and moves on to the new one.
I saw the same behaviour with the single-peripheral connection logic: if I tapped one peripheral and then a second one immediately, it would connect to the second one. (I'm not currently blocking the UI or anything while the connection process takes place.)
I need to wait till the first peripheral is connected before moving onto the next one.
The code above is the only way I've gotten my peripherals to connect, but it has huge problems. It can currently only connect to 2 devices, and it uses delays instead of callbacks, giving the scan and connect enough time to finish before moving on to the second peripheral.
My first instinct was to convert the startScan and connect methods into async methods, but that isn't working out as well as I'd hoped.
{ await connect(device); } gives "The built-in identifier 'await' can't be used as a type". I could just be setting up the async methods incorrectly.
I have looked around for alternatives and I've come upon Completers and Isolates. I'm not sure how relevant that might be.
UI SIDE:
I have the following method set as the onTap of a button wrapped within a ScopedModelDescendant. It reliably loads the peripheralUIDs list with a few UIDs and should then connect to them one after the other.
connectAllPeripherals(BluetoothHelper model, List<String> peripheralUIDs) {
  // list of strings containing the uuids for the peripherals I want to connect to
  for (var uuid in peripheralUIDs) {
    model.startScan(uuid);
  }
}
I don't know if this is still an issue for you.
Assuming it hasn't since been fixed: I think the problem is that you're trying to maintain the connections within Flutter yourself (rather than just connecting to multiple devices and letting Flutter_Blue/the hardware manage the connections).
I've got it happily connecting to multiple devices once the instance is set up to maintain a list of per-device attributes.
i.e. I made a ble-device class which contained each of the following:
StreamSubscription deviceConnection;
StreamSubscription deviceStateSubscription;
List<BluetoothService> services = new List();
Map<Guid, StreamSubscription> valueChangedSubscriptions = {};
BluetoothDeviceState deviceState = BluetoothDeviceState.disconnected;
Maintaining a LinkedHashMap with a new object initialised from the class above for each device connected works nicely.
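For illustration, a minimal sketch of what that per-device class plus the map might look like (the BleDevice class name, the connectedDevices map and its key type are my own choices, not part of flutter_blue):

import 'dart:async';
import 'dart:collection';
import 'package:flutter_blue/flutter_blue.dart';

// Hypothetical per-device container; one instance per connected peripheral.
class BleDevice {
  BleDevice(this.device);

  final BluetoothDevice device;
  StreamSubscription deviceConnection;
  StreamSubscription deviceStateSubscription;
  List<BluetoothService> services = [];
  Map<Guid, StreamSubscription> valueChangedSubscriptions = {};
  BluetoothDeviceState deviceState = BluetoothDeviceState.disconnected;
}

// Keyed by the peripheral's identifier; insertion order is preserved.
final LinkedHashMap<DeviceIdentifier, BleDevice> connectedDevices =
    new LinkedHashMap<DeviceIdentifier, BleDevice>();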
Other than that: Flutter_Blue will only allow one concurrent request call at a time (like reading a characteristic), but you can queue them easily enough with await. With the above, I'm able to poll multiple devices within a few milliseconds of each other.
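As a rough sketch of what I mean by stacking the calls with await (my own illustration; it assumes the BleDevice class and connectedDevices map sketched above, and the same flutter_blue version as the question, where reads go through the device object):

// Read one characteristic from every connected device, one request in flight at a time.
Future<void> pollAll(BluetoothCharacteristic Function(BleDevice) pickCharacteristic) async {
  for (final entry in connectedDevices.values) {
    // awaiting each read keeps Flutter_Blue at a single outstanding request
    final value = await entry.device.readCharacteristic(pickCharacteristic(entry));
    print('${entry.device.id}: $value');
  }
}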
Don't know if that helps - but with any luck, someone also coming across my problem will hit this and save some time.
I have an infinite stream of messages represented as a Play Enumerator, to which I apply an Iteratee. Each message is then processed by an Akka actor (the number of actors is limited to 10).
Now I would like the code in the Iteratee to asynchronously wait for a free actor if all 10 actors are busy, and not keep sending them messages, which leads to the exception Ask timed out on ....
How can I achieve such functionality? Is there a better way to process an infinite stream with 10 actors without await?
Example of code I was talking about could look like this:
val workers = context.actorOf(Props[MyWorker].withRouter(RoundRobinRouter(10)))
val it = Iteratee.foreach[Msg] { msg =>
workers ? msg
}
msgEnumerator.apply(it)
Using Iteratee.foldM with the actor ask pattern you have here seems like the right approach, assuming you don't want your actors to build up large mailboxes (if you don't care about large mailboxes, just use tell and Iteratee.foreach instead of ask). This will require some specialized routing logic: since the API for making a custom Akka router doesn't support asynchrony, you will need a custom actor to handle the logic of distributing just one piece of work to each actor in your actor pool at a time.
I imagine it working something like:
class WorkDistributor extends Actor {
  final val NUM_WORKERS = 10
  val workers = context.actorOf(Props[MyWorker].withRouter(RoundRobinRouter(NUM_WORKERS)))

  var numActiveWorkers = 0
  var queuedWork: Option[Work] = None

  def receive = {
    case IterateeWork(work) if numActiveWorkers < NUM_WORKERS =>
      workers ! work; numActiveWorkers += 1; sender ! SendMeMoreWork
    case IterateeWork(work) =>
      queuedWork = Some(work)
    case ActorFinishedWork if queuedWork.isDefined =>
      queuedWork.foreach(workers ! _); queuedWork = None
    case ActorFinishedWork =>
      numActiveWorkers -= 1; sender ! SendMeMoreWork
  }
}
Where the IterateeWork message is sent by the iteratee and the ActorFinishedWork message is sent by the actors in the actor pool.
Looking at this thing I wrote, it should be rewritten to use become to change the behavior when the actor pool is full (rather than the if guards on each case), but I leave that as an exercise for the reader.
Then your Iteratee will look like
Iteratee.foldM[Work, SendMeMoreWork.type](SendMeMoreWork) {
  case (_, work) => workDistributor ? IterateeWork(work)
}
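If you do want the become variant mentioned above, a rough, untested sketch of my own (same message types, Work payload and router as in the code above) could look like this:

class WorkDistributor extends Actor {
  final val NUM_WORKERS = 10
  val workers = context.actorOf(Props[MyWorker].withRouter(RoundRobinRouter(NUM_WORKERS)))
  var numActiveWorkers = 0

  // Normal mode: hand work straight to the pool and ask the iteratee for more.
  def accepting: Receive = {
    case IterateeWork(work) =>
      workers ! work
      numActiveWorkers += 1
      if (numActiveWorkers == NUM_WORKERS) context.become(saturated(None))
      sender ! SendMeMoreWork
    case ActorFinishedWork =>
      numActiveWorkers -= 1
  }

  // Saturated mode: park at most one piece of work and delay the reply to the
  // iteratee until a worker frees up, so it stops feeding us.
  def saturated(parked: Option[(Work, ActorRef)]): Receive = {
    case IterateeWork(work) =>
      context.become(saturated(Some((work, sender))))
    case ActorFinishedWork if parked.isDefined =>
      val (work, origin) = parked.get
      workers ! work
      origin ! SendMeMoreWork
      context.become(saturated(None))
    case ActorFinishedWork =>
      numActiveWorkers -= 1
      context.become(accepting)
  }

  def receive = accepting
}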
Sorry for the beginner's question; I have read a lot of posts here and on the web and there is something fundamental I can't understand.
As I understand it, the usage of async actions in Web API is mainly for scalability reasons: any incoming request is diverted to a worker instead of to a thread, and by that more requests can be served.
My project consists of several huge actions that read/insert/update the DB via EF6 many times. An action looks like this:
public async Task<HttpResponseMessage> Get(int id)
{
    PolicyModel response = await policyRepository.GetPolicyAsync(id);
    return Request.CreateResponse<PolicyModel>(HttpStatusCode.OK, response);
}
and GetPolicyAsync(int id) looks like this:
public async Task<PolicyModel> GetPolicyAsync(int id)
{
    PolicyModel response = new PolicyModel();
    User currentUser = await Repositories.User.GetUserDetailsAsync(id);
    if (currentUser.IsNew)
    {
        IEnumerable<Delivery> deliveries = await Repositories.Delivery.GetAvailableDeliveries();
        if (deliveries == null || deliveries.Count() == 0)
        {
            throw new Exception("no deliveries available");
        }
        response.Deliveries = deliveries;
        IEnumerable<Line> lines = await Repositories.Delivery.GetActiveLinesAsync();
        lines.AsParallel().ForAll(line => {
            await Repositories.Delivery.AssignLineAsync(line, currentUser);
        });
        ...
    }
    return response;
}
I didn't write out the entire code; it goes on quite a bit and is also broken into several methods, but that is the spirit of it.
Now the question I have is: is it good practice to use so many awaits in one method? I've seen that it makes debugging more difficult and makes it harder to maintain the thread context. For the sake of worker assignment, shouldn't I just use Task.Factory.StartNew(), or maybe call a simple Task.Delay(), so that the request will immediately be diverted to a worker?
I know that this is not good practice ("async all the way"), so maybe just one async call at the end/beginning of the GetPolicyAsync(int id) method?
EDIT:
As I understood the mechanics of async methods in .NET, for every async method the compiler looks for a free thread and lets it deal with the method; the thread looks for a free worker, assigns the method to it, and then reports back to the compiler that it is free. So if we have 10 threads and for every thread there are 10 workers, the program can deal with 100 concurrent async methods.
So back to web development: IIS assigns x threads to each app pool, 10 for instance. That means an async Web API method can handle 100 requests, but if there is another async method inside, the number of requests that can be dealt with drops to 50, and so on. Am I right?
And as I understood it, I must call an async method in order to make the Web API method truly async, and since it is bad practice to use Task.Factory.StartNew(), I must at least use Task.Delay().
What I really want to gain is the scalability of async Web API methods and the context awareness of synchronous methods.
In all the examples I've seen so far, they only show very simple code, but in real life methods are far more complex.
Thanks
There's nothing wrong with having many awaits in a single method. If the method becomes too complicated for you, you can split it up into several methods:
public async Task<PolicyModel> GetPolicyAsync(int id)
{
    PolicyModel response = new PolicyModel();
    User currentUser = await Repositories.User.GetUserDetailsAsync(id);
    if (currentUser.IsNew)
    {
        await HandleDeliveriesAsync(response, currentUser, await Repositories.Delivery.GetAvailableDeliveries());
    }
    ...
    return response;
}

public async Task HandleDeliveriesAsync(PolicyModel response, User currentUser, IEnumerable<Delivery> deliveries)
{
    if (deliveries == null || deliveries.Count() == 0)
    {
        throw new Exception("no deliveries available");
    }
    response.Deliveries = deliveries;
    IEnumerable<Line> lines = await Repositories.Delivery.GetActiveLinesAsync();
    lines.AsParallel().ForAll(line => {
        await Repositories.Delivery.AssignLineAsync(line, currentUser);
    });
}
Don't use Task.Factory.StartNew or Task.Delay, as that just offloads the same work to a different thread. It doesn't add any value in most cases and is actually harmful in ASP.NET.
After much searching, I came across this article:
https://msdn.microsoft.com/en-us/magazine/dn802603.aspx
It explains what @i3arnon said in the comment: there are no workers at all; the threads are doing all the work.
In a nutshell, when the thread that handles a web request reaches an operation that is designed to be done asynchronously down in the driver stack, it creates a request and passes it to the driver. The driver marks it as pending and reports "done" to the thread, which goes back to the thread pool to get another assignment. The actual job is not done by threads but by the driver, which borrows CPU time from all threads and leaves them free to attend to their business. When the operation is done, the driver signals it and an available thread comes along to continue.
So from that I learn that I should look into each and every async method and check that it actually performs an operation that goes through the drivers asynchronously.
A point to think about: the scalability of your website depends on the number of operations that are truly done asynchronously; if you don't have any of them, then your website won't scale.
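To make that concrete, here is a minimal sketch of my own (db is an assumed EF6 DbContext with a Policies set; this is not code from the article): the first method is truly asynchronous, while the second only moves the same blocking query onto another thread-pool thread.

public async Task<PolicyModel> TrulyAsync(int id)
{
    // EF6 hands the query to the provider and frees the request thread
    // until the driver signals completion.
    return await db.Policies.FindAsync(id);
}

public Task<PolicyModel> FakeAsync(int id)
{
    // Still occupies a thread-pool thread for the whole duration of the query,
    // so it does not help scalability.
    return Task.Run(() => db.Policies.Find(id));
}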
Moderators, I'm a real newbie; if I got it all wrong, please correct me.
Thanks!
I'm implementing a function which returns a Stream. I'm not sure how to implement the error handling; what is best practice?
For functions which return a Future, it's best practice never to throw a synchronous error. Is this also true for functions which return a Stream?
Here's an example of what I'm thinking:
Stream<int> count() {
  var controller = new StreamController<int>();
  int i = 0;
  try {
    doSomethingThatMightThrow();
    new Timer.repeating(new Duration(seconds: 1), (t) => controller.add(i++));
  } on Exception catch (e) {
    controller.addError(e);
    controller.close();
  }
  return controller.stream;
}
In general it is true for Streams as well. The main idea is that users should only need to handle errors in one way. Your example moves all errors to the stream.
There are circumstances where immediate errors are better (for instance when the error is due to a programming error and should never be handled anyway, or if you want to guarantee that a Stream never produces errors), but sending the error through the stream is almost always a good thing.
Small nit: a Stream should usually (there are exceptions) not produce any data until somebody has started listening. In your example you are starting a Timer even though you don't even know if there will ever be a listener. I'm guessing the example is reduced and not representative of your real code, but it is something to look out for. The solution would be to use the StreamController's callbacks for pause and subscription changes.
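As a sketch of that suggestion (my own code, assuming the same StreamController API generation as the question, i.e. onSubscriptionStateChange and hasSubscribers; in later SDKs these became onListen/onCancel), the timer would only be started once a listener shows up:

Stream<int> count() {
  StreamController<int> controller;
  Timer timer;
  int i = 0;

  void subscriptionChanged() {
    if (controller.hasSubscribers) {
      // start producing only when somebody is actually listening
      timer = new Timer.repeating(new Duration(seconds: 1), (t) => controller.add(i++));
    } else {
      timer.cancel();
      controller.close();
    }
  }

  controller = new StreamController<int>(
      onSubscriptionStateChange: subscriptionChanged);
  return controller.stream;
}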
I've updated the example to take on board Florian's comments.
In my real use case, I don't ever want to buffer the results, so I'm throwing an UnsupportedError if the stream is paused.
I've made it a terminating stream, rather than an infinite one.
If the user of this function adds a listener asynchronously after a few seconds, then they will lose the first couple of results. They shouldn't do this. I guess that's something to document clearly. Though perhaps, I could also throw an error if the subscribe state changes after the first data has been received, but before a close has been received.
Stream<int> count(int max) {
  var controller = new StreamController<int>(
      onPauseStateChange: () =>
          throw new UnsupportedError('count() Stream pausing not supported.'));
  int i = 0;
  try {
    doSomethingThatMightThrow();
    new Timer.repeating(new Duration(seconds: 1), (t) {
      if (!controller.hasSubscribers)
        return;
      controller.add(i++);
      if (i >= max)
        controller.close();
    });
  } on Exception catch (e) {
    controller.addError(e);
    controller.close();
  }
  return controller.stream;
}