Background:
I am using the RxAndroidBle library and have a requirement to connect to multiple devices at a time, as quickly as possible, and start communicating. I used RxBluetoothKit for iOS, and have started to use RxAndroidBle on my Pixel 2. This worked as expected and I could establish connections to 6-8 devices, as required, in a few hundred milliseconds. However, broadening my testing to phones such as the Samsung S8 and Nexus 6P, it seems that establishing a single connection can now take upwards of 5-6 seconds instead of 50-60 milliseconds. I will assume for the moment that this disparity lies within the vendor-specific Bluetooth implementations. Ultimately, it means that connecting to, e.g., 5 devices now takes 30 seconds instead of under 1 second.
Question:
From what I understand from the documentation and other questions asked, RxAndroidBle queues all scanning, connecting, and communication requests and executes them serially in order to be safe and maintain stability across the variety of Bluetooth implementations in the Android ecosystem. However, is there currently a way to execute the requests (namely, connecting) in parallel, accepting this risk, to potentially cut my total time to establish multiple connections down to however long the slowest device takes to connect?
And side question: are there any ideas to diagnose what could possibly be taking 5 seconds to establish a connection with a device? Or do we simply need to accept that some phones will take that long in some instances?
However, is there currently a way to execute the requests (namely, connecting) in parallel, accepting this risk, to potentially cut my total time to establish multiple connections down to however long the slowest device takes to connect?
Yes. You may try to establish connections using autoConnect=true, which prevents locking the queue for longer than a few milliseconds. The last connection should be started with autoConnect=false to kick off a scan. Some stack implementations handle this quite well, but your mileage may vary.
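For illustration, here is a rough sketch of that approach with RxAndroidBle and RxJava. It is only a fragment: it assumes an Android context, the exact establishConnection() signature depends on the library version you are on, and the MAC addresses and per-device handling are placeholders.

```java
List<String> macAddresses = Arrays.asList(
        "AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02", "AA:BB:CC:DD:EE:03");
RxBleClient rxBleClient = RxBleClient.create(context);

List<Observable<RxBleConnection>> connections = new ArrayList<>();
for (int i = 0; i < macAddresses.size(); i++) {
    // autoConnect=true keeps the internal queue free while waiting for the device;
    // only the last connection uses autoConnect=false to kick off a scan.
    boolean autoConnect = i < macAddresses.size() - 1;
    connections.add(rxBleClient.getBleDevice(macAddresses.get(i))
            .establishConnection(autoConnect));
}

// merge() subscribes to all connection observables at once, so the connections
// are being established in parallel rather than one after another.
Subscription subscription = Observable.merge(connections)
        .subscribe(
                connection -> { /* start communicating with this device */ },
                throwable -> { /* handle connection/disconnection errors */ }
        );
```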
And side question: are there any ideas to diagnose what could possibly be taking 5 seconds to establish a connection with a device?
You can check the Bluetooth HCI snoop log. You may also try using a BLE sniffer to check what is actually happening "on-air" (e.g. an nRF51 Development Kit).
Or do we simply need to accept that some phones will take that long in some instances?
This is also an option, since usually there is little one can do about connection time. In my experience, BLE stack/firmware implementations differ wildly from each other.
Related
I want multiple IoT devices (say 50) communicating with a server directly and asynchronously via TCP. Assume all of them have a heartbeat pulse every 30 seconds and may drop off and reconnect at variable times.
Can anyone advise me on the best way to make sure no data is dropped or blocked when multiple devices are communicating simultaneously?
TCP by itself ensures no data loss during the communication between a client and a server. It does that by the use of sequence numbers and ACK messages.
Technically, before the actual data transfer happens, a TCP connection is created between the client (which can be an IoT device, or any other device) and the server. Then, the data is split into multiple packets and sent over the network through that connection. All TCP-related mechanisms like flow-control, error-detection, congestion-detection, and many others, take place once the data starts to flow.
The wiki page for TCP is a pretty good start if you want to learn more about how it works.
Apart from that, as long as your server has enough capacity to support the flow of requests coming from the devices, then everything should work (at least in theory).
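As a minimal illustration of that flow, here is a sketch of a plain Java server that accepts one TCP connection per device and reads heartbeat messages from it. The port, the thread-per-connection model, and the line-based framing are assumptions, not requirements.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HeartbeatServer {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        try (ServerSocket serverSocket = new ServerSocket(9000)) {   // port is an arbitrary choice
            while (true) {
                Socket client = serverSocket.accept();               // one TCP connection per device
                pool.submit(() -> handleDevice(client));
            }
        }
    }

    private static void handleDevice(Socket client) {
        // TCP guarantees ordered, loss-free delivery on this stream as long as the
        // connection stays up; the application only has to read and process it.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println("heartbeat from " + client.getRemoteSocketAddress() + ": " + line);
            }
        } catch (Exception e) {
            // The device dropped off; it is expected to reconnect on its own schedule.
        }
    }
}
```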
I don't think you are asking the right question. There is no way to make sure that no data is dropped or blocked. Networks do not always work (that is why the word "work" is in "network": to convince you otherwise).
The right question is: how do I make my distributed system as available and reliable as possible? The answer involves viewing interruption and congestion as part of normal operation, and building your software appropriately.
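For example, a rough client-side sketch of that mindset is to reconnect with exponential backoff instead of assuming the link stays up. The host, port, backoff limits, and heartbeat payload below are made-up values.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HeartbeatClient {
    public static void main(String[] args) throws InterruptedException {
        long backoffMs = 1_000;
        while (true) {
            try (Socket socket = new Socket("server.example.com", 9000);
                 OutputStream out = socket.getOutputStream()) {
                backoffMs = 1_000;                        // reset the backoff once connected
                while (true) {
                    out.write("heartbeat\n".getBytes(StandardCharsets.UTF_8));
                    out.flush();
                    Thread.sleep(30_000);                 // 30-second heartbeat, as in the question
                }
            } catch (Exception e) {
                // Connection failed or dropped: wait, then retry with a longer delay.
                Thread.sleep(backoffMs);
                backoffMs = Math.min(backoffMs * 2, 60_000);
            }
        }
    }
}
```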
There is a timeless USENIX/ACM paper from the late '70s/early '80s that invigorated the notion that end-to-end protocols are much more effective than over-featured middle-to-middle protocols, and that most guarantees made in the middle amount to best effort. If you rely upon those guarantees, you are bound to fail. Sorry, I cannot find the reference right now, but it is widely cited; it is most likely Saltzer, Reed, and Clark's "End-to-End Arguments in System Design".
I have read that Android 4.4+ supports 7 connections open at a time. My question is: does RxAndroidBle handle queuing of connection operations when this number is reached, or is it up to the user of the library to implement a queue for this?
The RxAndroidBle library doesn't handle queueing of connections. In fact, it is more or less impossible to do that reliably, since the number of connections is shared among every application in the system, and RxAndroidBle is not the only thing that may be opening them.
Besides, I have encountered some vendor devices (a Micromax Canvas, for instance) where, despite running Android 5.0, only one connection worked at any given time.
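If you do want to cap connections inside your own app, one possible sketch is RxJava's flatMap overload with a maxConcurrent argument. Note this only limits your own app (the system-wide pool is still shared), and MAX_CONNECTIONS, communicate(), and the device list below are assumptions.

```java
final int MAX_CONNECTIONS = 4;   // assumption: stay well under the platform limit

Subscription subscription = Observable.from(macAddresses)
        .flatMap(mac -> rxBleClient.getBleDevice(mac)
                        .establishConnection(false)
                        .flatMap(connection -> communicate(connection)), // communicate() is a hypothetical helper
                MAX_CONNECTIONS)   // at most 4 connections are attempted/held at once
        .subscribe(
                result -> { /* handle per-device result */ },
                throwable -> { /* handle errors */ }
        );
```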
Best Regards.
I am doing a measurement project where I send and receive data from numerous devices on my network. The send/receive can be considered fast and intensive, as there is almost no pause and a continuous flow of data. However, the data to/from each device is quite small, on the order of a couple of bytes each. For some reason, I am experiencing a reset of my entire ethernet connection where my internet connection also goes down, and I lose connection to all my devices as well. I have never experienced such a situation and am wondering what are some of the common situations that might lead to resets like this?
Actually, it turns out this had to do with the way I constructed my thread, in which I would continuously create a new socket without discarding the previous one. Stupid, right? Well, I fixed the code and the Ethernet no longer crashes.
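For reference, a minimal sketch of the fixed pattern in Java: keep one socket for the life of the polling loop (or close each one deterministically with try-with-resources) so handles don't pile up. The address, port, and request/response framing are placeholders.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        // Open one socket and keep it for the whole measurement run instead of
        // creating a new one on every iteration (which leaks handles and ports).
        try (Socket socket = new Socket("192.168.1.50", 5000);   // placeholder address/port
             OutputStream out = socket.getOutputStream();
             InputStream in = socket.getInputStream()) {
            byte[] request = {0x01, 0x03};                        // a couple of bytes per poll
            byte[] response = new byte[16];
            while (true) {
                out.write(request);
                out.flush();
                int read = in.read(response);                     // small reply from the device
                if (read < 0) break;                              // device closed the connection
                // ... process `read` bytes ...
            }
        } // the socket is closed here even if an exception is thrown
    }
}
```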
HELP PLEASE! I have an application that needs as close to real-time processing as possible, and I keep running into this unusual delay issue with both TCP and UDP. The delay occurs like clockwork and is always the same length of time (mostly 15 to 16 ms). It occurs when transmitting to any machine (even local) and on any network (we have two).
A quick run down of the problem:
I am always using Winsock in C++, compiled in VS 2008 Pro, but I have written several programs to send and receive in various ways using both TCP and UDP. I always use an intermediate program (running locally or remotely) written in various languages (MATLAB, C#, C++) to forward the information from one program to the other. Both Winsock programs run on the same machine, so they display timestamps for Tx and Rx from the same clock. I keep seeing a pattern emerge where a burst of packets gets transmitted and then there is a delay of around 15 to 16 milliseconds before the next burst, despite no delay being programmed in. Sometimes it may be 15 to 16 ms between each packet instead of between bursts of packets. Other times (rarely) I will see a delay of a different length, such as ~47 ms. I always seem to receive the packets back within a millisecond of them being transmitted, though, with the same pattern of delay exhibited between the transmitted bursts.
I have a suspicion that Winsock or the NIC is buffering packets before each transmit, but I haven't found any proof. I have a Gigabit connection to one network that gets various levels of traffic, but I also experience the same thing when running the intermediate program on a cluster that has a private network with no traffic (from users at least) and a 2 Gigabit connection. I will even experience this delay when running the intermediate program locally with the sending and receiving programs.
I figured out the problem this morning while rewriting the server in Java. The resolution of my Windows system clock is between 15 and 16 milliseconds. That means that packets showing the same millisecond as their transmit time are actually being sent at different points within a 16-millisecond interval, but my timestamps only increment every 15 to 16 milliseconds, so they appear the same.
I came here to answer my question and I saw the response about raising the priority of my program. So I started all three programs, went into task manager, raised all three to "real time" priority (which no other process was at) and ran them. I got the same 15 to 16 millisecond intervals.
Thanks for the responses though.
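For anyone hitting the same thing, here is a small Java sketch (Java because that is what the server was being rewritten in) that makes the clock granularity visible by timing how long the coarse millisecond clock stays constant, using the high-resolution counter. On affected Windows systems it typically prints steps of roughly 15.6 ms.

```java
public class ClockResolution {
    public static void main(String[] args) {
        long millis = System.currentTimeMillis();
        // Align to a tick boundary of the coarse clock first.
        while (System.currentTimeMillis() == millis) { /* spin */ }

        for (int i = 0; i < 10; i++) {
            long tickStart = System.nanoTime();
            long current = System.currentTimeMillis();
            // Busy-wait until the millisecond clock advances again, then report
            // how long that single "tick" lasted according to the high-res counter.
            while (System.currentTimeMillis() == current) { /* spin */ }
            System.out.printf("currentTimeMillis() tick length: %.3f ms%n",
                    (System.nanoTime() - tickStart) / 1_000_000.0);
        }
    }
}
```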
There is always buffering involved, and it varies between hardware, drivers, OS, etc. Packet schedulers also play a big role.
If you want "hard real-time" guarantees, you probably should stay away from Windows...
What you're probably seeing is a scheduler delay: your application is waiting for other process(es) to finish their timeslice and give up the CPU. Standard timeslices on multiprocessor Windows range from 15 ms to 180 ms.
You could try raising the priority of your application/thread.
Oh yeah, I know what you mean. Windows and its buffers... try adjusting the values of SO_SNDBUF on the sender and SO_RCVBUF on the receiver side. Also, check the networking hardware involved (routers, switches, media gateways): eliminate as many hops as possible between the machines to reduce latency.
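If it helps, the rough Java equivalent of those Winsock options looks like this (the buffer sizes and endpoint are arbitrary example values, not recommendations):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class BufferTuning {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.setReceiveBufferSize(64 * 1024);   // SO_RCVBUF - set before connect on the receiver
            socket.setSendBufferSize(64 * 1024);      // SO_SNDBUF on the sender
            socket.connect(new InetSocketAddress("example.com", 9000)); // placeholder endpoint
            // ... send / receive as usual ...
        }
    }
}
```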
I ran into the same problem.
In my case, I was using GetTickCount() to get the current system time; unfortunately, it always has a resolution of 15-16 ms.
When I used QueryPerformanceCounter instead of GetTickCount(), everything was all right.
In fact, the TCP socket receives data evenly, not in one batch every 15 ms.
When writing a custom server, what are the best practices or techniques to determine the maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
Dan Kegel put together a summary of techniques for handling large amounts of network connections from a single server, here: http://www.kegel.com/c10k.html
In general, modern servers can handle very large numbers of concurrent connections. I've worked on systems having over 8,000 concurrently open TCP/IP sockets.
You will need a high-quality servicing interface to handle that kind of load; check out libevent or libev.
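libevent and libev are C libraries; a rough Java analogue of that kind of event-driven servicing loop is java.nio with a Selector. A bare-bones sketch (the port and buffer size are arbitrary):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                                  // one thread waits on all sockets
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();     // new connection, no extra thread
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {
                        key.cancel();
                        client.close();                         // peer went away
                    }
                    // ... otherwise handle whatever was read ...
                }
            }
        }
    }
}
```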
That is a good question, and it is definitely situational. What is your computer? Do you have a 4-socket machine filled with quad-core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response time is acceptable?
These are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time.
That will tell you how many connections your computer can support.
The great thing about the load/unit test scenario is that you can put response-time expectations in your unit tests and increase the load until you fall outside of your response time. If you have a requirement of supporting X users with a Y-second response time, you will be able to demonstrate it with your load tests.
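As a starting point, a crude load-test sketch along those lines: open N concurrent connections, time a request/response round trip on each, and compare the worst case against your response-time budget. The host, port, payload, and 2-second budget are assumptions to replace with your own requirements.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.*;

public class LoadTest {
    public static void main(String[] args) throws Exception {
        int users = 1000;                                      // ramp this up: 10, 100, 1000, 10000 ...
        ExecutorService pool = Executors.newFixedThreadPool(users);
        CompletionService<Long> results = new ExecutorCompletionService<>(pool);

        for (int i = 0; i < users; i++) {
            results.submit(() -> {
                long start = System.nanoTime();
                try (Socket socket = new Socket("server.example.com", 9000);
                     OutputStream out = socket.getOutputStream();
                     InputStream in = socket.getInputStream()) {
                    out.write("ping\n".getBytes());
                    out.flush();
                    in.read();                                 // wait for the first byte of the reply
                }
                return (System.nanoTime() - start) / 1_000_000; // round-trip time in ms
            });
        }

        long worst = 0;
        for (int i = 0; i < users; i++) {
            worst = Math.max(worst, results.take().get());
        }
        pool.shutdown();
        System.out.println("worst response time: " + worst + " ms (budget: 2000 ms)");
    }
}
```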
One of the biggest setbacks with highly concurrent connections is actually the routers involved. Routers aimed at home users usually have a small NAT table, which prevents the router from actually delivering the connections to the server.
Be sure to research your router/network infrastructure setup just as well.
I think you shouldn't limit the number of connections your server will allow; just catch and properly handle any exceptions that might occur when accepting and closing connections, and you should be fine. You should leave that kind of lower-level programming to the underlying OS layers; that way you can port your server more easily, etc.
This really depends on your operating system.
Different Unix flavors will support an "unlimited" number of file handles/sockets; others have high limits such as 32768.
A typical per-user limit is 8192, but it can usually be set higher.
I think Windows is more limiting, but the server versions may have higher limits.