I am currently on Xcode 5.0, working on an iOS 7 sample application that uses the CocoaAsyncSocket library. In this application, the "sender controller" sends UDP messages to 255.255.255.255 on port 4000 for a "receiver controller" to handle and print out. The "sender controller" has a for loop that broadcasts a message 200 times. Using Wireshark (filtering on udp.port == 4000), 0 of the 200 packets were lost, which is great! In this environment (the iOS Simulator), everything works great, and the "receiver controller" prints out all the messages.
But when I move the application to an actual iPad (iPad MD328LL/A 16GB, Wi-Fi, 3rd generation, iOS 7), some of the packets are lost. Of the 200, only about 60% to 65% are picked up by Wireshark and make it to the "receiver controller". I am not quite sure whether it's the library (which I doubt, since it works perfectly with the simulator) or iOS 7/the iPad that is causing the packet loss.
Code:
// Sender Controller
@interface ViewController ()
{
    GCDAsyncUdpSocket *udpSocket;
}
@end

- (void)viewDidLoad
{
    [super viewDidLoad];
    if (udpSocket == nil)
    {
        [self setupSocket];
    }
    // ...
}

// Sets up the socket
- (void)setupSocket
{
    // Initialize
    udpSocket = [[GCDAsyncUdpSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
    // Enable broadcast
    [udpSocket enableBroadcast:YES error:nil];
    NSError *error = nil;
    // Bind to port
    if (![udpSocket bindToPort:0 error:&error])
    {
        [self logError:FORMAT(@"Error binding: %@", error)];
        return;
    }
    if (![udpSocket beginReceiving:&error])
    {
        [self logError:FORMAT(@"Error receiving: %@", error)];
        return;
    }
    [self logInfo:@"Ready"];
}

// ...

// Click event
- (IBAction)send:(id)sender
{
    // Format message (msg, host, and port are assumed to be defined elsewhere)
    NSData *data = [msg dataUsingEncoding:NSUTF8StringEncoding];
    // Broadcast message 200 times
    for (int i = 0; i < 200; i++) {
        [udpSocket sendData:data toHost:host port:port withTimeout:-1 tag:0];
    }
}
I know that in this scenario these messages are being sent at a fast rate, and it would be highly unlikely for a user to send 200 broadcasts like this. I also understand that UDP is cheap and the occasional packet may be malformed or lost, but a loss rate around 40% seems rather high to me.
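One mitigation worth trying (a sketch of my own, not from the original post) is to pace the sends instead of queueing all 200 datagrams at once, for example by sending the next packet from GCDAsyncUdpSocket's didSendDataWithTag: delegate callback after a small delay. Here sendCount is an assumed integer ivar, and data, host, and port are the same values as in the code above:

// Hypothetical pacing sketch: send the next datagram only after the previous
// one has been handed off, with ~10 ms between sends to avoid flooding the radio.
- (void)sendNextPacket
{
    if (sendCount >= 200) return;
    sendCount++;
    [udpSocket sendData:data toHost:host port:port withTimeout:-1 tag:sendCount];
}

- (void)udpSocket:(GCDAsyncUdpSocket *)sock didSendDataWithTag:(long)tag
{
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.01 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self sendNextPacket];
    });
}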
If anybody has any suggestions, experience with this, or other useful information, it would be greatly appreciated.
Thanks in advance!
Related
I've created a TCP server application using BSD sockets on a NUCLEO-H743ZI2 development board with STM32CubeMX 5.6.0 and LwIP 2.0.3 in Keil MDK-ARM.
I noticed that:
If a client connects and first sends 11 bytes or more, the server receives the data correctly and read() returns, displaying the data.
However, if the client's first send is smaller than 11 bytes, read() blocks until the client disconnects, even if subsequent sends are larger than 11 bytes. After the disconnection, all the queued data is displayed.
In other words, if the first data sent from a client to my server is smaller than 11 bytes, the event_callback for a recv event is not triggered until disconnection.
My aim is to make the server able to receive as little as one byte.
I've pasted my server task/thread below. I'd appreciate your response, and feel free to request other related files/libraries (lwip.h, lwipopts.h, ...).
Kind Regards
void StartTask01(void const * argument)
{
  /* USER CODE BEGIN StartTask01 */
  MX_LWIP_Init();

  /* start a listening TCP server */
  int iServerSocket;
  struct sockaddr_in address;

  if ((iServerSocket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
  {
    printf("Socket could not be created\n");
  }
  else
  {
    address.sin_family = AF_INET;
    address.sin_port = htons(80);
    address.sin_addr.s_addr = INADDR_ANY;
    if (bind(iServerSocket, (struct sockaddr *)&address, sizeof(address)) < 0)
    {
      printf("socket could not be bound\n");
    }
    else
    {
      listen(iServerSocket, MEMP_NUM_NETCONN);
    }
  }

  /* server started listening */
  struct sockaddr_in remoteHost;
  socklen_t addrLen = sizeof(remoteHost); /* accept() takes a pointer to the length, not the length cast to a pointer */
  int newconn;
  char caReadBuffer[1500];
  memset(caReadBuffer, 0, 1500);

  for (;;)
  {
    /* block until an incoming connection is accepted */
    newconn = accept(iServerSocket, (struct sockaddr *)&remoteHost, &addrLen);
    if (newconn != -1) /* accepted successfully */
    {
      /* block until data arrives */
      read(newconn, caReadBuffer, sizeof(caReadBuffer));
      printf("data read: %s\n", caReadBuffer);
      memset(caReadBuffer, 0, 1500);
    }
  }
  /* USER CODE END StartTask01 */
}
The problem causing this issue is that you only call read once on each connection. If that single call to read doesn't happen to return all the data (which is entirely unpredictable), you will never call read on that connection again.
When you call read on a blocking TCP connection, it only blocks if no data is available. Otherwise, it gives you whatever data is available, up to the maximum number of bytes you ask for. It will not wait for more data if only some is available; it's up to you to call read again if you didn't receive all the data you expected.
Also, on each iteration of the for loop you overwrite newconn with a new connection without closing the old one, so you have a socket leak.
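A minimal sketch of the receive loop with both fixes applied (read until the peer closes, then close the socket); the buffer handling here is illustrative:

/* Service one connection completely before accepting the next. */
int n;
while ((n = read(newconn, caReadBuffer, sizeof(caReadBuffer) - 1)) > 0)
{
    caReadBuffer[n] = '\0';  /* NUL-terminate what was actually received */
    printf("data read: %s\n", caReadBuffer);
}
/* n == 0 means the peer closed; n < 0 means an error. Either way, release the socket. */
close(newconn);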
SOLVED:
The problem was that my server was listening on port 80. I changed it to port 7, and thankfully the bug is resolved; now read() works as expected.
This makes me think that LwIP has a problem listening on the web port (80) as opposed to others. There seems to be some kind of special treatment of certain well-known ports, even for unimplemented protocols.
How does one force Windows to disconnect from a BLE device being used in a UWP app? I receive notifications from some characteristics, but at some point I want to stop receiving them and make sure I disconnect from the BLE device to save the device's battery.
Assuming your application is running as a GATT client and you are working with the following instances in your code:

GattCharacteristic myGattchar; // The GATT characteristic you are reading or writing on your BLE peripheral
GattDeviceService myGattServ;  // The BLE peripheral's GATT service to which you are connecting from your application
BluetoothLEDevice myBleDev;    // The BLE peripheral device you are connecting to from your application
When you are already connected to your BLE peripheral, if you call the Dispose() methods like this:

myBleDev.Dispose(); and/or myGattServ.Dispose(); and/or myGattchar.Service.Dispose();

you will certainly free resources in your app, but you will not cleanly close the BLE connection: the application loses access to the connection's control resources, yet the connection remains established at the lower levels of the stack (on my peripheral device, the Bluetooth connection-active LED remains ON after calling any of the Dispose() methods).
Forcing disconnection is done by first disabling notifications and indications on the characteristic concerned (myGattchar in the example above), by writing a 0 (zero) to that characteristic's Client Characteristic Configuration descriptor through a call to WriteClientCharacteristicConfigurationDescriptorAsync with the parameter GattClientCharacteristicConfigurationDescriptorValue.None:
GattCommunicationStatus status =
    await myGattchar.WriteClientCharacteristicConfigurationDescriptorAsync(
        GattClientCharacteristicConfigurationDescriptorValue.None);
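As a follow-up sketch (my addition, not part of the original answer): once the descriptor write succeeds, the app-side objects can then be released:

// Only free the app-side resources once notifications are actually off.
if (status == GattCommunicationStatus.Success)
{
    myGattServ.Dispose();
    myBleDev.Dispose();
}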
Just dispose all objects related to the device. That will disconnect the device, unless there are other apps connected to it.
In my UWP app, even though I used the Dispose() methods, I still received notifications. What helped me was setting my device and characteristics to null. Example:
device.Dispose();
device = null;
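// (the answer above also mentions nulling the characteristics; the name here is illustrative)
characteristic = null;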
I'm not at all certain how "correct" this programming is, but it has been working fine for me so far.
The UWP Bluetooth BLE sample code from Microsoft (which disposes the BLE device) didn't work for me. I had to add code to also dispose the service in order to disconnect the device.
private async Task<bool> ClearBluetoothLEDeviceAsync()
{
    if (subscribedForNotifications)
    {
        // Need to clear the CCCD from the remote device so we stop receiving notifications
        var result = await registeredCharacteristic.WriteClientCharacteristicConfigurationDescriptorAsync(GattClientCharacteristicConfigurationDescriptorValue.None);
        if (result != GattCommunicationStatus.Success)
        {
            return false;
        }
        else
        {
            selectedCharacteristic.ValueChanged -= Characteristic_ValueChanged;
            subscribedForNotifications = false;
        }
    }

    selectedService?.Dispose();   // added code
    selectedService = null;       // added code
    bluetoothLeDevice?.Dispose();
    bluetoothLeDevice = null;
    return true;
}
Remember that you must unsubscribe with -= from every event you subscribed to with +=, or the disposed objects will never actually be garbage collected. It's a little more code, I know, but that's the way it is.
And not just with Bluetooth, I will remind you: with everything. You can't keep hard-referenced event handlers and expect garbage collection to work as expected.
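For illustration (the names here are hypothetical), the subscription and unsubscription should be paired like this:

// Subscribe when you start listening for notifications:
notifyCharacteristic.ValueChanged += Characteristic_ValueChanged;

// ... and unsubscribe before disposing; otherwise the handler keeps the
// subscriber reachable and it cannot be collected:
notifyCharacteristic.ValueChanged -= Characteristic_ValueChanged;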
Doing all the disposing and nulling of references suggested here didn't achieve the disconnection (as shown in Windows Settings) I was looking for.
But dealing with IOCTLs through DeviceIoControl did the job.
I found that after calling GattDeviceService.GetCharacteristicsAsync(), BluetoothLEDevice.Dispose() does not work. So I dispose each service I don't need.
// (This snippet presumably runs inside a loop over the device's services, hence the break below.)
GattCharacteristicsResult characteristicsResult = await service.GetCharacteristicsAsync();
if (characteristicsResult.Status == GattCommunicationStatus.Success)
{
    foreach (GattCharacteristic characteristic in characteristicsResult.Characteristics)
    {
        if (characteristic.Uuid.Equals(writeGuid))
        {
            write = characteristic;
        }
        if (characteristic.Uuid.Equals(notifyGuid))
        {
            notify = characteristic;
        }
    }
    if (write == null && notify == null)
    {
        service.Dispose();
        Log($"Dispose service: {service.Uuid}");
    }
    else
    {
        break;
    }
}
Finally, when I want to disconnect the Bluetooth connection:
write.Service.Dispose();
device.Dispose();
device = null;
The first few times, this doesn't work and returns nil (before the app has been opened on the iPhone), and sometimes it fails afterwards as well. I would like a loop or a timer that repeats this request until it gets a result.
Here is my code:
- (void)application:(UIApplication *)application handleWatchKitExtensionRequest:(NSDictionary *)userInfo reply:(void (^)(NSDictionary *))reply
{
    // Temporary fix, I hope.
    // --------------------
    __block UIBackgroundTaskIdentifier bogusWorkaroundTask;
    bogusWorkaroundTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
        [[UIApplication sharedApplication] endBackgroundTask:bogusWorkaroundTask];
    }];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        [[UIApplication sharedApplication] endBackgroundTask:bogusWorkaroundTask];
    });
    // --------------------

    __block UIBackgroundTaskIdentifier realBackgroundTask;
    realBackgroundTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
        reply(nil);
        [[UIApplication sharedApplication] endBackgroundTask:realBackgroundTask];
    }];

    // Kick off a network request, heavy processing work, etc.
    // Return any data you need to, obviously.
    // reply(nil);
    reply(@{@"Confirmation" : @"Text was received."});

    [[UIApplication sharedApplication] endBackgroundTask:realBackgroundTask];

    // NSLog(@"User Info: %@", userInfo);
}
Watch App Code
- (void)willActivate {
    // This method is called when the watch view controller is about to be visible to the user
    [super willActivate];

    NSDictionary *dictionary = [[NSDictionary alloc] initWithObjectsAndKeys:@"MyCamande", @"OK", nil];
    [InterfaceController openParentApplication:dictionary reply:^(NSDictionary *replyInfo, NSError *error) {
        NSLog(@"Reply received by Watch app: %@", replyInfo);
    }];
}
How can I retry the call so that I eventually get a result?
Well, I would not recommend using anything related to network operations on the watch itself, first of all because Apple does not recommend it, for obvious reasons. The only network task performed on the watch directly is loading images.
I struggled with network operations and the watch for about a week and came to the conclusion that the most stable way to do it right now is not obvious.
The main issue is that WKInterfaceController.openParentApplication(...) does not work as expected. You cannot simply ask to open the iPhone app and hand back the response as-is. There are tons of solutions stating that creating a background thread in - (void)application:(UIApplication *)application handleWatchKitExtensionRequest:(NSDictionary *)userInfo reply:(void (^)(NSDictionary *))reply would work just fine, but it actually does not. The problem is that this method has to call reply(...); right away. Even making synchronous requests won't help; you will keep getting "error -2 iPhone application did not reply..." about 5 times out of 10.
So, my solution is the following.
You implement:

func requestUserToken() {
    WKInterfaceController.openParentApplication(["request" : "token"], reply: responseParser)
}
and parse the response for the error that occurs if there is no reply from the iPhone.
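A sketch of what responseParser might look like (hypothetical, using the same pre-Swift-3 API as the code above):

func responseParser(replyInfo: [NSObject : AnyObject]!, error: NSError!) {
    if error != nil {
        // No reply from the iPhone (e.g. the infamous error -2): retry or report it.
        return
    }
    // Otherwise handle replyInfo["token"] here.
}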
On the iOS side:
- (void)application:(UIApplication *)application handleWatchKitExtensionRequest:(NSDictionary *)userInfo reply:(void (^)(NSDictionary *))reply
{
    __block UIBackgroundTaskIdentifier watchKitHandler;
    watchKitHandler = [[UIApplication sharedApplication] beginBackgroundTaskWithName:@"backgroundTask"
                                                                   expirationHandler:^{
        watchKitHandler = UIBackgroundTaskInvalid;
    }];

    NSString *request = userInfo[@"request"];
    if ([request isEqualToString:@"token"])
    {
        reply(@{@"token" : @"OK"});
        [PSWatchNetworkOperations.shared loginUser];
    }

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)NSEC_PER_SEC * 1), dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [[UIApplication sharedApplication] endBackgroundTask:watchKitHandler];
    });
}
This code just creates a background task that lets the iPhone app send a network request. Imagine you have a dedicated class in your iPhone app that sends these requests and passes the answers back to the watch. For now, this is only achievable using App Groups, so you have to create an App Group shared by your application and its WatchKit extension. After that, I recommend using MMWormhole to establish communication between the app and the extension; its manual is pretty self-explanatory.
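For reference, a minimal wormhole setup on both the iPhone and extension sides might look like this (the app-group identifier is a placeholder):

// The group identifier must match the App Group created for the app and extension.
self.wormHole = [[MMWormhole alloc] initWithApplicationGroupIdentifier:@"group.com.example.myapp"
                                                     optionalDirectory:@"wormhole"];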
Now, what's the point of all this? You have to implement sending the request to the server and passing the response back through the wormhole. I use ReactiveCocoa, so an example from my code looks like this:
- (void)fetchShoppingLists
{
    RACSignal *signal = [PSHTTPClient.sharedAPIClient rac_GET:@"list/my" parameters:@{@"limit":@20, @"offset":@0} resultClass:PSShoppingListsModel.class];
    [signal subscribeNext:^(PSShoppingListsModel *shoppingLists) {
        [self.wormHole passMessageObject:shoppingLists identifier:@"shoppingLists"];
    }];
    [signal subscribeError:^(NSError *error) {
        [self.wormHole passMessageObject:error identifier:@"error"];
    }];
}
As you can see, I send back either the response object or the error. Note that everything you send through the wormhole must conform to NSCoding.
Now, on the watch, you'll probably parse the response like this:
override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)

    PSWatchOperations.sharedInstance.requestUserToken()
    PSWatchOperations.sharedInstance.wormhole.listenForMessageWithIdentifier("token", listener: { (messageObject) -> Void in
        // parse the message object here
    })
}
So, to conclude: you send a request to the parent application to wake it from the background and start an async operation, and you send reply() back immediately. When the operation's answer arrives, you pass it through the wormhole, and meanwhile the watch extension listens for that response.
Sorry, that was a lot of text, but I hope it saves someone some trouble, because I've spent a lot of nerves on this.
Maybe you can try to explain the exact problem a little more clearly. But one thing you may want to do regardless is make the openParentApplication call in awakeWithContext: instead of willActivate.
I would like to have a service that connects via TCP to a server and then continuously listens for incoming data. I'm using CocoaAsyncSocket in the following way:
self.socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];

NSError *err = nil;
if (![self.socket connectToHost:@"..." onPort:... error:&err]) {
    return;
}
[self.socket readDataWithTimeout:-1 tag:1];
and then in the reading delegate method:
- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    NSLog(@"%@", data);
    [self.socket readDataWithTimeout:-1 tag:1];
}
Is it correct that I'm immediately calling readDataWithTimeout:tag: again? Or is there a (better) way to always listen for incoming messages?
For what you are doing, this is fine. You need to call readDataWithTimeout:tag: in didReadData:, because otherwise you would only receive the first message from the server. GCDAsyncSocket is designed this way because there are a few other ways you can receive incoming data.
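For example, if your protocol is line-based, one of those alternatives (a sketch, not from the original answer) is to read up to a delimiter instead of taking whatever bytes arrive:

// Deliver data to the delegate only once a full CRLF-terminated line has arrived.
[self.socket readDataToData:[GCDAsyncSocket CRLFData] withTimeout:-1 tag:1];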
I have a simple sketch on my Seeeduino Mega 1.22 that just displays the serial input on an LCD display. Using lynx term and the Arduino serial monitor works fine: sent input is displayed. The trouble starts when I want serial communication between my Java program, running in Eclipse on a Win7 x64 machine, and the Seeeduino. I'm using the RXTX x64 build. The program is intended to send and receive some String.getBytes() data via the open port. Receiving on the Java side works, but receiving on the Arduino side fails.
It seems that the problem is finding the proper flow control setting. I saw that some people had the same issue, e.g. Issues receving in RXTX.
But that solution does not work for me. If I set the flow control to none, I get only a block icon on the display, indicating that the serial connection has been established, but nothing else. If I set the flow control to RTSCTS_IN | RTSCTS_OUT, then the string bytes appear on the display only when I close the established connection.
Why is the data only sent when I close the connection (flushing the output stream did not help either)? What am I doing wrong with the flow control settings?
This is the modified connect() method I'm using.
void connect(String portName) throws Exception {
    CommPortIdentifier portIdentifier = CommPortIdentifier.getPortIdentifier(portName);
    if (portIdentifier.isCurrentlyOwned()) {
        System.out.println("Error: Port is currently in use");
    } else {
        CommPort commPort = portIdentifier.open(this.getClass().getName(), 2000);
        if (commPort instanceof SerialPort) {
            SerialPort serialPort = (SerialPort) commPort;
            serialPort.setSerialPortParams(115200, SerialPort.DATABITS_8,
                    SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
            try {
                serialPort.setFlowControlMode(
                        // SerialPort.FLOWCONTROL_NONE);
                        // OR, if CTS/RTS is needed:
                        SerialPort.FLOWCONTROL_RTSCTS_IN |
                        SerialPort.FLOWCONTROL_RTSCTS_OUT);
            } catch (UnsupportedCommOperationException ex) {
                System.err.println(ex.getMessage());
            }
            serialPort.setRTS(true);

            in = serialPort.getInputStream();
            out = serialPort.getOutputStream();

            (new Thread(new SerialWriter(out))).start();
            serialPort.addEventListener(new SerialReader(in, this));
            serialPort.notifyOnDataAvailable(true);
        } else {
            System.out.println("Error: Only serial ports are to be used!");
        }
    }
}
Thanks in advance for your time
Solved it. It was not the buffer, as many suggested. The problem was that the Seeeduino's RST switch on the board was set to automatic. Setting it to manual solved the problem.
No flow control needed.
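A related software-side note (my assumption, not part of the original fix): with automatic reset, opening the port restarts the board, so bytes written immediately afterwards can arrive while the bootloader is still running. Waiting briefly after connect() before the first write sidesteps this without touching the switch:

// After opening the port, give the board time to finish its
// auto-reset/bootloader cycle before sending the first bytes.
Thread.sleep(2000);
out.write("hello".getBytes());
out.flush();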