I'm building a simple talker/listener app that receives OSC data over UDP. I'm using the OSCKit pod, which itself uses the CocoaAsyncSocket library for the internal UDP communication.
When I'm listening on a particular port to receive data from another OSC-capable application, I log the received commands to an NSTextView. The problem is that sometimes I receive thousands of messages in a very short period of time (EDIT: I just added a counter to see how many messages I'm receiving; I got over 14,000 in just a few seconds, and that is with only a single moving object in my software). There is no way to predict when this is going to happen, so I cannot lock the textStorage object of the NSTextView to keep it from sending all its notifications to update the UI. The data is processed through a delegate callback function.
So how would you get around that limitation?
/// Handle incoming OSC messages
func handle(_ message: OSCMessage!) {
    print("OSC Message: \(message)")
    let targetPath = message.address
    let args = message.arguments
    let msgAsString = "Path: \"\(targetPath)\"\nArguments: \n\(args)\n\n"
    print(msgAsString)
    oscLogView.string?.append(msgAsString)
    oscLogView.scrollToEndOfDocument(self)
}
As you can see here (this is the callback function), I'm updating the NSTextView directly from the callback (both appending data and scrolling to the end) every time a message is received. This is where Instruments tells me the slowdown happens, and the append is the slowest part. I didn't go further than that in the analysis, but it is almost certainly because it triggers a visual update, which takes far longer than parsing 32 bits of data, and as soon as it finishes it receives another update right away from the server's buffer.
Could I send that call to a background thread? I don't feel like filling up a background thread with visual updates is such a great idea. Maybe I should grow my own string buffer and flush it to the NSTextView every now and then with a timer?
I want to give this a console feel, but a console that freezes is not a console.
Here is a link to the project on GitHub. The pods are all there and configured with CocoaPods, so just open the workspace. You might not have anything to generate that much OSC traffic, but if you really feel like digging in, you can get IanniX, an open-source sequencer/trajectory automator that can generate OSC, and a lot of it. I've just downloaded it, and I'll build a quick project that should send enough data to freeze the app; I'll add it to the repo if anybody wants to give it a shot.
I append the incoming data to a buffer variable and use a timer that flushes that buffer to the text view every 0.2 seconds. The update cycle of the text view is far too slow to handle the amount of incoming data, so offloading the network callback to a timer lets the server keep processing data instead of being stalled every 32 bits.
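A minimal sketch of that approach (the buffer, lock, and timer names are mine, not from the project):

var logBuffer = ""
let bufferLock = NSLock() // the OSC callback and the timer run on different threads
var flushTimer: Timer?

/// Called from the OSC delegate callback: cheap, no UI work
func bufferMessage(_ msgAsString: String) {
    bufferLock.lock()
    logBuffer.append(msgAsString)
    bufferLock.unlock()
}

/// Called once, e.g. from awakeFromNib
func startFlushTimer() {
    flushTimer = Timer.scheduledTimer(timeInterval: 0.2, target: self,
                                      selector: #selector(flushLog),
                                      userInfo: nil, repeats: true)
}

@objc func flushLog() {
    bufferLock.lock()
    let pending = logBuffer
    logBuffer = ""
    bufferLock.unlock()
    guard !pending.isEmpty else { return }
    oscLogView.string?.append(pending) // one UI update per tick instead of thousands
    oscLogView.scrollToEndOfDocument(self)
}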
If anybody comes up with a more elegant method, I'm open-minded.
Even with debug enabled for RemoteConfig, I still get the following error:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To immediately get around this issue during testing, use an alternative device; the throttling applies per device, so e.g. move from your simulator to a physical device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
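For example, a fetch that leans on the built-in cache might look like this (a sketch against the Firebase 3.x Swift API; the one-hour expiry is only an illustration):

FIRRemoteConfig.remoteConfig().fetch(withExpirationDuration: 3600) { status, error in
    if status == .success {
        FIRRemoteConfig.remoteConfig().activateFetched() // make the fetched values active
    } else {
        print("Fetch error: \(error?.localizedDescription ?? "unknown")")
    }
}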
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
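You can also do that conversion in code; a minimal sketch, using the value from the error above:

let endSeconds = 1483110267.054194 // error_throttled_end_time_seconds from userInfo
let throttleEnds = Date(timeIntervalSince1970: endSeconds)
print("Throttling ends at \(throttleEnds)")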
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
The quick and easy hack to get your app running again is to delete the application and reinstall it. Firebase identifies your device as a new device after reinstalling.
Hope it helps and saves you some time.
I'm trying to get basic Bluetooth LE connections working in my app, but I'm running into problems. I have a couple of sensors that I can connect to just fine with the iOS version of this app, but I simply cannot connect to them with the Android version. The only thing I seem to be able to connect to is a beacon I've set to configuration mode.
After checking my ADB logs again, I noticed that whenever the app fails to connect to a device, there are always four calls each to bt_osi_alarm and bta_gattc_conn_cback. I'm wondering how to interpret this.
An unsuccessful connection attempt:
D/BluetoothGatt(17824): connect() - device: 00:17:E9:C0:86:14, auto: false
E/bt_osi_alarm(14357): reschedule_root_alarm alarm expiration too close for posix timers, switching to guns
...
E/bt_osi_alarm(14357): reschedule_root_alarm alarm expiration too close for posix timers, switching to guns
E/bt_osi_alarm(14357): reschedule_root_alarm alarm expiration too close for posix timers, switching to guns
E/bt_osi_alarm(14357): reschedule_root_alarm alarm expiration too close for posix timers, switching to guns
...
W/bt_btif (14357): bta_gattc_conn_cback() - cif=3 connected=0 conn_id=3 reason=0x0002
W/bt_btif (14357): bta_gattc_conn_cback() - cif=4 connected=0 conn_id=4 reason=0x0002
W/bt_btif (14357): bta_gattc_conn_cback() - cif=5 connected=0 conn_id=5 reason=0x0002
W/bt_btif (14357): bta_gattc_conn_cback() - cif=6 connected=0 conn_id=6 reason=0x0002
D/BtGatt.GattService(14357): onConnected() - clientIf=6, connId=0, address=00:17:E9:C0:86:14
D/BluetoothGatt(17824): onClientConnectionState() - status=133 clientIf=6 device=00:17:E9:C0:86:14
I/mono-stdout(17824): BLEAdapter.OnConnectionStateChange()
I/mono-stdout(17824): Gatt disconnected!
A successful connection attempt to one of those beacons does not produce these sets of four bt_osi_alarm and bta_gattc_conn_cback errors. If someone knows the Android source well, hopefully they can tell me what's going on. It's a long shot, I know, but I'm kind of out of options. Thanks.
Some additional information might help the community debug this issue for you. Please add the following to your original question:
Make/Model/Revision of all BLE devices that you've tested and whether or not each one works
Specific android device you have tested with
The error your app gets and at what step it fails
Are you able to get a BluetoothDevice?
Does it fail when calling connectGatt()?
Is your app written in Java/Android Studio, or is this some kind of universal app that runs on both iOS and Android?
In my experience, "alarm expiration too close for posix timers, switching to guns" isn't as serious as its error level suggests. On my Nexus 7/Marshmallow device, I see this message constantly while connected to a Bluetooth device (serial profile, not BLE), so I don't think this is a bug.
I suggest checking up on the BLE device manufacturer to see if there are any firmware upgrades for them too.
More details on "alarm expiration too close for posix timers, switching to guns":
What's happening is that some part of the Android Bluetooth code is setting a very short timer, so short that it can expire (e.g. because of a context switch) before it has even finished being armed.
The error, "alarm expiration too close for posix timers, switching to guns", was added to the Android source in commit 081e4b67b44a5fd397c2d79a5566e11ae52d4aca.
The commit message gives us more background on what is happening.
Ensure alarms are called back when they expire in the past
Turns out the posix timers we're dealing with aren't well behaved.
If the timer is TIMER_ABSTIME the following is supposed to be true:
"If the specified time has already passed, the function shall succeed
and the expiration notification shall be made."
But alas, this is not the case. If the expiration time happens to be
in the past (e.g. very short timer which gets hit with a context
switch before the timer can be set) the timer decides to disarm
itself. This means no more callbacks happen, and no more alarms are
processed ever.
sadness.
But thankfully, we can use timer_gettime to check the state of the
timer after timer_settime. If timer_gettime tells us the timer is
disarmed right after we armed it, we want to perform the alarm
callback ourselves.
Put all timer callbacks on the same thread (as would be the case in
the underlying timer implementation already) and use a semaphore to
signal when an alarm expires, both in the normal posix timer callback
and in the case where the timer didn't set.
The code also includes the following comments:
If next expiration was in the past (e.g. short timer that got context
switched) then the timer might have disarmed itself. Detect this case
and work around it by manually signalling the |alarm_expired|
semaphore.
It is possible that the timer was actually super short (a few
milliseconds) and the timer expired normally before we called
|timer_gettime|. Worst case, |alarm_expired| is signaled twice for
that alarm. Nothing bad should happen in that case though since the
callback dispatch function checks to make sure the timer at the head
of the list actually expired.
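In other words, the workaround amounts to something like this (a simplified C sketch of the technique the commit describes, not the actual AOSP code):

#include <semaphore.h>
#include <time.h>

static sem_t alarm_expired; /* signalled by the posix callback, and manually below */

static void arm_alarm(timer_t timer, const struct itimerspec *when)
{
    struct itimerspec remaining;

    timer_settime(timer, TIMER_ABSTIME, when, NULL);
    timer_gettime(timer, &remaining);

    /* If the expiry was already in the past, the timer silently disarmed
       itself and its callback will never fire; signal the semaphore ourselves. */
    if (remaining.it_value.tv_sec == 0 && remaining.it_value.tv_nsec == 0)
        sem_post(&alarm_expired);
}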
I have written a very simple Flink streaming job which takes data from Kafka using a FlinkKafkaConsumer082.
protected DataStream<String> getKafkaStream(StreamExecutionEnvironment env, String topic) {
    Properties result = new Properties();
    result.put("bootstrap.servers", getBrokerUrl());
    result.put("zookeeper.connect", getZookeeperUrl());
    result.put("group.id", getGroup());
    return env.addSource(
            new FlinkKafkaConsumer082<>(topic, new SimpleStringSchema(), result));
}
This works very well, and whenever I put something into the topic on Kafka it is received by my Flink job and processed. Now I wanted to see what happens if my Flink job isn't online for some reason. So I shut down the Flink job and kept sending messages to Kafka. Then I started the Flink job again and expected it to process the messages that were sent in the meantime.
However, I got this message:
No prior offsets found for some partitions in topic collector.Customer. Fetched the following start offsets [FetchPartition {partition=0, offset=25}]
So it basically ignored all messages that arrived since the last shutdown of the Flink job and just started to read at the end of the queue. From the documentation of FlinkKafkaConsumer082 I gathered that it automatically takes care of synchronizing the processed offsets with the Kafka broker. However, that doesn't seem to be the case.
I am using a single-node Kafka installation (the one that comes with the Kafka distribution) with a single-node ZooKeeper installation (also the one that is bundled with the Kafka distribution).
I suspect it is some kind of misconfiguration or something along those lines, but I really don't know where to start looking. Has anyone else had this issue and perhaps solved it?
I found the reason. You need to explicitly enable checkpointing in the StreamExecutionEnvironment to make the Kafka connector write the processed offsets to ZooKeeper. If you don't enable it, the Kafka connector will not write the last read offset, and it will therefore not be able to resume from there when the collecting job is restarted. So be sure to write:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(); // <-- this is the important part
Anatoly's suggestion for changing the initial offset is probably still a good idea, in case checkpointing fails for some reason.
https://kafka.apache.org/08/configuration.html
Set auto.offset.reset to smallest (by default it's largest):
auto.offset.reset:
What to do when there is no initial offset in Zookeeper or if an
offset is out of range:
smallest : automatically reset the offset to the smallest offset
largest : automatically reset the offset to the largest offset
anything else: throw exception to the consumer.
If this is set to largest, the consumer may lose some messages when the number of partitions, for the topics it subscribes to, changes on the broker. To
prevent data loss during partition addition, set auto.offset.reset to
smallest
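Applied to the consumer from the question, that is one extra line on the Properties object (a sketch; "smallest" is the 0.8-era value, newer consumers spell it "earliest"):

Properties result = new Properties();
result.put("bootstrap.servers", getBrokerUrl());
result.put("zookeeper.connect", getZookeeperUrl());
result.put("group.id", getGroup());
result.put("auto.offset.reset", "smallest"); // read from the oldest offset when none is stored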
Also, make sure getGroup() returns the same value after a restart.
I am developing a Live DirectShow filter.
I have an H.264 stream source from which I can get streams via an SDK API.
In my filter I have a queue, which I enqueue (push) the incoming stream into from a thread.
Then I consume (dequeue, pop) these streams inside the filter's FillBuffer.
So I made a thread-safe queue, but this causes some problems.
If, in FillBuffer, I check whether there are any incoming packets and only process them if there are, the logic looks like this:
...
bool hasElement = SynchronizedQueue.pop(element);
if (!hasElement)
{
    return S_OK; // no data yet: return immediately, and FillBuffer is called again at once
}
...
...but this consumes too much CPU, since FillBuffer effectively spins, polling the queue.
However, using the Boost library to implement a blocking pop with a condition variable:
...
SynchronizedQueue.waitAndPop(element); // waits until we have an element
This has lower CPU usage, but sometimes, when there is no data in the queue, it blocks the FillBuffer function and the filter may not be able to stop.
So, are there any alternative designs for a live source filter that takes input streams from a remote machine and passes them to the decoder?
Or how can I make my design better, so that it uses less CPU and can still be stopped?
The source filter owns a pushing thread, so you need to wait there using a synchronization object (event, mutex) to yield control until a new frame is available for pushing off the output pin.
Whenever you receive a frame from the SDK and put it onto the queue, you indicate its availability using the synchronization object, e.g. you set an event. The worker thread will see the event and start processing the frame.
The worker thread needs to be able to respond to at least two events: a new frame, and filter/graph stop. So you will need WaitForMultipleObjects to wait on multiple events and wake up on the first one that occurs.
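A minimal sketch of that wait inside FillBuffer (Win32; the m_hFrameAvailable and m_hStopEvent members are illustrative, created elsewhere with CreateEvent, with the stop event set from the filter's stop path):

// m_hFrameAvailable: auto-reset event, set by the SDK thread after each push
// m_hStopEvent: set when the filter/graph is asked to stop
HANDLE handles[2] = { m_hStopEvent, m_hFrameAvailable };
DWORD waitResult = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
if (waitResult == WAIT_OBJECT_0)
    return S_FALSE;             // stop requested: unblock FillBuffer cleanly
SynchronizedQueue.pop(element); // frame event signalled, so pop cannot block
// ... copy element into the output media sample ...
return S_OK;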
I want to send a string from a website to a local machine.
My local machine is connected to a network through a router.
Thank you.
email Id: manish.m.meshram#gmail.com
Well, that largely depends on what the receiving computer needs to do with that string.
If you only need to notify the user, I would suggest the easiest way is to go with the net send command.
Since you are working in ASP.NET, you can use the Process and ProcessStartInfo classes to launch a command like:
net send YourPC "String to send"
If you need to do something more sophisticated with the string message, you could for example print it in some sort of log file and then read it from the destination machine.
If you can give more information on your needs, we'll be probably able to help you better.
Luca
I suggest you poll the webapp for messages.
For instance, let the webapp have a URL that simply returns the timestamp of the most recent message, at http://thesite.com/messages/MostRecentTimestamp.aspx.
The page should return the timestamp only, in a format you can parse, for instance:
2009-08-29 14:00:00
Then, at another URL, http://thesite.com/messages/FromLastHour.aspx, display the list of messages from the last N hours (or another suitable time period). This page could return one message per line, with the message timestamp at the start of the line.
For instance:
2009-08-29 13:58:20 A message
2009-08-29 13:59:30 Here's a message
2009-08-29 14:00:00 Another message
On your local machine, create a program that, as often as needed, reads and parses http://thesite.com/messages/MostRecentTimestamp.aspx. If the program detects that the timestamp has changed, it reads http://thesite.com/messages/FromLastHour.aspx and processes the new messages.
Adjust the timing according to your needs.
Or even better, have a URL like http://thesite.com/messages/MoreRecentThan.aspx?timestamp=2009-08-29 13:50:00 that returns only the messages newer than the timestamp passed in. The program on your local machine should then pass the timestamp of the most recent message it has handled.
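On the local machine, the poller can be tiny. Here is a sketch in Python (the URL and the one-message-per-line format are the hypothetical ones from above):

import time
from urllib.parse import quote
from urllib.request import urlopen

def handle(message):
    print("New message:", message)  # app-specific processing goes here

last_seen = "2009-08-29 13:50:00"  # timestamp of the last message handled

while True:
    url = "http://thesite.com/messages/MoreRecentThan.aspx?timestamp=" + quote(last_seen)
    with urlopen(url) as response:
        for line in response.read().decode("utf-8").splitlines():
            if len(line) > 20:  # "YYYY-MM-DD HH:MM:SS message text"
                last_seen, message = line[:19], line[20:]
                handle(message)
    time.sleep(10)  # poll interval; adjust to your needs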
Of course, your web site has to keep track of outgoing messages in some sort of queue. You could use a database table for this. The web app can delete old messages from this table periodically.
If you want to get fancy, you could implement this as a SOAP web service. Or you could let the URLs return the data formatted as JSON.