If I understand the Link Layer correctly, it is always in exactly one of the five states "Standby", "Advertising", "Scanning", "Initiating", or "Connected". So how is it possible that I can connect to two devices simultaneously? When I am connected to one device, the Link Layer is in the "Connected" state. To connect to another device, it would have to switch to the "Initiating" or "Advertising" state (depending on its GAP role) while maintaining the "Connected" state with the already connected device. But then it would be in two states simultaneously, which is forbidden.
Where am I wrong?
You are correct in your understanding of the Link Layer states; this is shown in the state machine figure in the Core Specification (1).
However, the specification also states that the Link Layer may optionally support multiple state machines as follows (2):-
The Link Layer in the Connection State may operate in the Master Role and Slave Role at the same time.
The Link Layer in the Connection State operating in the Slave Role may have multiple connections.
The Link Layer in the Connection State operating in the Master Role may have multiple connections.
All other combinations of states and roles may also be supported.
The Link Layer in the Connection State shall have at most one connection to another Link Layer in the Connection State.
The specification also gives a table listing a couple of possible combinations of Link Layer states (3).
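As a practical illustration, here is a minimal central-side sketch using Python's bleak library (the peripheral addresses are hypothetical), assuming the local controller supports multiple state machines: each client drives its own Link Layer connection, and both are held in the Connection State at once.

    import asyncio
    from bleak import BleakClient  # pip install bleak

    # Hypothetical peripheral addresses; substitute your own devices.
    ADDR_A = "AA:AA:AA:AA:AA:AA"
    ADDR_B = "BB:BB:BB:BB:BB:BB"

    async def main():
        # Each BleakClient drives one Link Layer connection; a controller
        # supporting multiple state machines keeps both links up at once.
        async with BleakClient(ADDR_A) as client_a, BleakClient(ADDR_B) as client_b:
            print("Connected to A:", client_a.is_connected)
            print("Connected to B:", client_b.is_connected)

    asyncio.run(main())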
I hope this helps.
(1) Bluetooth Specification, Version 5.0, Vol 6, Part B, General Description, Page 2553
(2) Bluetooth Specification, Version 5.0, Vol 6, Part B, General Description, Page 2554
(3) Bluetooth Specification, Version 5.0, Vol 6, Part B, General Description, Page 2555
I have set up Akka.net actors running inside an ASP.net application to handle some asynchronous & lightweight procedures. I'm wondering how Akka scales when I scale out the website on Azure. Let's say that in code I have a single actor to process messages of type FooBar. When I have two instances of the website, is there still a single actor or are there now two actors?
By default, whenever you call the ActorOf method you order the creation of a new actor instance. If you call it in two actor systems, you'll end up with two separate actors, having the same relative paths inside their systems but different global addresses.
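To make the distinction concrete, here is a hedged illustration (the actor system name, hosts and port are hypothetical): both website instances host an actor at the same relative path, but the actors' global addresses differ.

    /user/foobar                                       relative path, identical in both instances
    akka.tcp://websystem@10.0.0.1:4053/user/foobar     global address of the actor on instance A
    akka.tcp://websystem@10.0.0.2:4053/user/foobar     global address of the actor on instance B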
There are several ways to share information about actors between actor systems:
When using Akka.Remote you can call actors living in another actor system, given their addresses or IActorRefs. Requirements:
You must know the path to a given actor.
You must know the actual address (hostname/URL or IP) of the actor system on which that actor lives.
Both actor systems must be able to communicate with each other over TCP (i.e. the relevant ports must be open on any firewalls).
When using Akka.Cluster, actor systems (also known as nodes) can form a cluster. They will exchange information about their location in the network, track incoming nodes and eventually detect dead or unreachable ones. On top of that, you can use higher-level components, e.g. cluster routers. Requirements:
Every node must be able to open a TCP channel to every other one (so again, firewalls etc.).
A new incoming node must know at least one node that is already part of the cluster. This is easily achievable with the well-known Lighthouse pattern (a dedicated seed node) or via plugins and third-party services like Consul.
All nodes must use the same actor system name.
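For illustration, a minimal HOCON sketch of what a cluster node's configuration might look like (the system name, host, port and Lighthouse address are all hypothetical):

    akka {
        actor.provider = cluster
        remote.dot-netty.tcp {
            hostname = "10.0.0.1"
            port = 4053
        }
        cluster.seed-nodes = ["akka.tcp://websystem@lighthouse:4053"]
    }

Note that the seed-node address embeds the actor system name ("websystem" here), which is why every node must use the same one.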
Finally, when using the cluster configuration you can make use of Akka.Cluster.Sharding - it's essentially a higher-level abstraction over an actor's location inside the cluster. When using it, you don't need to explicitly say where to find or when to create an actor. Instead, all you need is a unique actor identifier. When you send a message to such an actor, it will be created ad hoc somewhere in the cluster if it didn't exist before, and rebalanced to spread the workload equally across the cluster. This plugin also handles all the logic associated with routing messages to that actor.
Is it possible to use one TWAIN driver to manage concurrent requests to two different multifunction printers?
I mean, if I have two MFPs, can I run two scan requests in parallel using the same TWAIN driver?
It depends on whether your driver supports it.
From the TWAIN Spec page 125:
If an application attempts to connect to a Source that only supports a single connection when the source is already opened, the Source should respond with TWRC_FAILURE and TWCC_MAXCONNECTIONS.
Also from the spec on page 212:
The Source is responsible for managing this, not the Source Manager (the Source Manager does not know in advance how many connections the Source will support).
I tested this with a Fujitsu fi-7260 scanner and got the TWCC_MAXCONNECTIONS error in Twacker.
It could be possible, the reason being that TWAIN just sits between the application and the images fed to it.
Imagine a scenario along the following lines:
1) The user clicks the scan button.
2) You initiate the network-layer calls to start the scan job.
3) Instead of starting a scan job on one printer, you start scan jobs on two printers from two threads.
4) Let's say each of those threads writes its raw BMP data into a single shared data structure.
5) Once both threads are complete, iterate over that shared data structure and pass the images to the application via the XFERIMAGE call.
The basic idea is to create an abstraction over the two printers behind the scenes, as in the sketch below.
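A minimal sketch of that threading pattern in Python (scan_from() and the device names are hypothetical stand-ins for the real network-layer scan calls):

    import threading

    results = []                  # shared structure collecting the raw image data
    lock = threading.Lock()

    def scan_from(device):
        # Hypothetical stand-in for the network-layer call that runs a scan
        # job on one printer and returns its raw BMP data.
        raw_bmp = b"..."
        with lock:
            results.append((device, raw_bmp))

    threads = [threading.Thread(target=scan_from, args=(d,))
               for d in ("printer-1", "printer-2")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Both threads are complete: iterate over the shared structure and hand
    # each image to the application (via the image transfer call in a real driver).
    for device, image in results:
        print(device, len(image), "bytes")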
Please let me know if my understanding of your question is not correct or if you need further clarification.
If you implement it in the described way, it usually works only with two different MFPs, as the majority of TWAIN drivers do not support two different USB devices at the same time.
I've been looking for BLE materials to get an answer to this, but I could not find it in any of them. Practically/logically speaking, the peripheral should decide, but I want to see if it is documented somewhere. Any links with this information will be very helpful.
On iOS, at least, if the peripheral specifies that encryption is required for a characteristic, then the iOS central will initiate a pairing operation when access is attempted. If encryption isn't required then no pairing takes place; the central can just initiate a connection.
So, in summary -
The peripheral 'decides' if pairing is required through the definition of its characteristics.
The central manages the pairing process when required.
On "Just Works"; my understanding is that this is pairing where neither device can display a passkey, or allow the user to input one. It is NOT an unencrypted connections.
While your general iOS device can do both, many devices, such as a toothbrush handle, can't, but can still require "Just Works" pairing and bonding. This is also called unauthenticated pairing with encryption. It appears that iOS will prompt the user to accept a pairing request if the peripheral can't display a passkey.
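As a hedged central-side sketch of that behaviour using Python's bleak library (the address and characteristic UUID are hypothetical): simply reading a characteristic whose permissions require encryption is what triggers the OS pairing flow.

    import asyncio
    from bleak import BleakClient  # pip install bleak

    ADDRESS = "AA:BB:CC:DD:EE:FF"                            # hypothetical peripheral
    PROTECTED_CHAR = "0000aaaa-0000-1000-8000-00805f9b34fb"  # hypothetical UUID

    async def main():
        async with BleakClient(ADDRESS) as client:
            # Reading a characteristic that requires encryption makes the OS
            # initiate pairing with the peripheral before the read completes.
            data = await client.read_gatt_char(PROTECTED_CHAR)
            print(data)

    asyncio.run(main())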
I started exploring openHAB for my home automation. It looks to be a great application for home automation. I want to automate two homes and want to run openHAB on one centrally placed server. Is it possible to segregate the data for my two homes and provide user-based access for the two homes?
Or will I have to have two instances running on my server?
Please suggest if anyone has done this before.
You can (I believe) provide different sitemaps, but the most important question is: how will a central openHAB instance communicate with the "other" home?
Especially if you're going to use bindings which require a piece of hardware, like Z-Wave etc.
You can potentially play with MQTT and have a small Raspberry Pi running in the "other" home feeding the MQTT broker.
Assuming that there is no hardware or range-based issue with using openHAB for two homes (e.g. a Z-Wave USB dongle where the second home is out of range) and there is network connectivity between the two houses, there are a number of ways you can accomplish this. Here is one.
The easiest would probably be to just use a naming convention for your items and groups so you can easily tell which house an item comes from. You would probably want to set up a separate sitemap for each house as well. If I understand your question correctly, this should segregate the data for you based on name and provide user-based access for each home.
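For example, a hedged sketch of what such an items file might look like (all names are made up):

    // Naming convention: every item is prefixed with its house and grouped per house.
    Group gHouse1 "House 1"
    Group gHouse2 "House 2"

    Switch House1_LivingRoom_Light "Living Room Light" (gHouse1)
    Number House1_Hallway_Temp "Hallway Temperature [%.1f °C]" (gHouse1)
    Switch House2_Kitchen_Light "Kitchen Light" (gHouse2)

A sitemap per house can then simply reference gHouse1 or gHouse2.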
If you want to segregate the data even more thoroughly, you can configure your persistence to save all the items from one house to one DB and all the others to a different one, though you will need two different persistence bindings set up (e.g. one uses rrd4j and another uses db4o). I'm not sure this provides any advantage.
The final step is getting the data from the remote house into openHAB. How this is accomplished will depend on the nature of the sensors and triggers in the other house. You can use the HTTP binding, TCP/IP binding, or an MQTT broker. I've personally exposed a couple of my Raspberry Pi based sensors to openHAB using a Python script and the paho library that publishes the sensor data read from the GPIO pins to an MQTT broker, and it works great.
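A minimal sketch of that kind of script, assuming a broker at a hypothetical hostname, a hypothetical topic and a sensor on a hypothetical GPIO pin:

    import time
    import paho.mqtt.client as mqtt   # pip install paho-mqtt
    import RPi.GPIO as GPIO           # available on the Raspberry Pi

    BROKER = "openhab-server.local"   # hypothetical broker hostname
    TOPIC = "house2/livingroom/motion"
    PIN = 17                          # hypothetical GPIO pin (BCM numbering)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIN, GPIO.IN)

    client = mqtt.Client()
    client.connect(BROKER, 1883, 60)
    client.loop_start()               # handle MQTT network traffic in the background

    while True:
        client.publish(TOPIC, GPIO.input(PIN))  # publish the current pin state
        time.sleep(5)

On the openHAB side, an item bound to that topic via the MQTT binding then picks the values up.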
Centralization vs. segregation: you have to decide which one has more advantages and less risk.
Both houses will store data on the server (openHAB 2, MQTT, DB/rrd4j) and each one will have access to it; that must be clarified.
Network connectivity is an obvious requirement, and it should be stable between the two sites. Security is another issue, not only digital but also safety of life (what happens to HVAC control or safety appliances during a network outage?).
Configuration is well supported in both approaches: separate config files (items, rules, persistence, etc.), and connectivity in a hierarchy has endless approaches and capabilities.
In the newest version of the Android app you can add multiple openHAB servers. Why not just use two instances of openHAB?
I have a question regarding WebSocket communications in mobile connections.
I was wondering how long-lived TCP connections can be maintained in mobile networks when the user migrates among different networks. What happens to already established TCP connections when a handover (hand-off) occurs?
Do different technologies (3G, 4G, etc.) behave differently in this case?
I would appreciate it if you could also share some online sources or articles where I can read more about this.
Thank you in advance :)
The hand-off is always transparent to the user: all TCP and voice connections are kept active when transitioning between towers on a commercial mobile network like LTE, UMTS, etc. You might experience some periods of time where the data stops flowing, but that's about it.
I've had several opportunities to verify this myself through an interesting experiment on T-Mobile USA's nationwide HSPA+ network. Take a 12-hour-plus drive from one major city to another without turning your phone off, then look at the area where your external IPv4 address terminates (by using traceroute). You will likely notice that it's still in the same area where you started your trip. Now reboot the phone and see where the external IPv4 address is routed now. You'll notice that it's likely terminated in a major metro area closer to where you are. That is, your connection within the core network of the operator follows you along, not just within a given city, metro or state, but also between states and time zones.
The reason for this is that the carrier has a Core Network, and all external connections are handled by the Packet Gateway of the Core Network, which keeps track of all the connections. More on this is documented in Chapter 7 of the book called High Performance Browser Networking (HPBN.co).
This is not really an SO question but more of a Programmers one, and I don't see what you have researched yourself, but you certainly can't rely on a connection staying alive, mobile or not.
In fact, mobile operators kill long-lived connections by resetting them after a certain amount of time or data, so you should be ready to reconnect upon a socket exception anyway.
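For example, a minimal reconnect loop sketched with the Python websocket-client package (the URL and the handler are hypothetical):

    import time
    from websocket import create_connection, WebSocketException  # pip install websocket-client

    URL = "wss://example.com/stream"    # hypothetical endpoint

    def handle(message):
        print(message)                  # placeholder for real message handling

    backoff = 1
    while True:
        try:
            ws = create_connection(URL)
            backoff = 1                 # reset the backoff after a successful connect
            while True:
                handle(ws.recv())       # recv raises once the connection is reset
        except (WebSocketException, OSError):
            time.sleep(backoff)         # wait, then reconnect with exponential backoff
            backoff = min(backoff * 2, 60)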