I recently purchased an assortment of sensors from a company and have been having little success in getting them to communicate with my software. I sent a note to the manufacturer asking about compatibility and was told that the devices use 'proprietary ZigBee'.
What does this mean? Do they use a different command set? Is the information encrypted somehow?
If they are "ZigBee certified" or have a ZigBee logo on the packaging, then they have to implement the standard ZigBee protocols, including ZCL (ZigBee Cluster Library) and ZDO/ZDP (ZigBee Device Object/Profile) on endpoint 0.
Their product could include Manufacturer-Specific clusters with undocumented commands.
If they're using ZCL, then standard ZDO discovery should still work, allowing you to enumerate all endpoints and the clusters that don't have the manufacturer-specific bit set. If you know the 16-bit manufacturer ID they're using, you can discover those attributes as well and display their values (though you won't know what they mean).
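For concreteness, here is a minimal sketch (in Python) of what building a manufacturer-specific ZCL Read Attributes frame looks like. The manufacturer code 0x1234 and the attribute IDs are hypothetical placeholders, and how you transmit the frame depends on your radio/stack:

    import struct

    def zcl_read_attributes(manufacturer_code, attribute_ids, seq=0x01):
        """Build a ZCL Read Attributes (command 0x00) frame with the
        manufacturer-specific bit set in the frame control field."""
        # Frame control: frame type 00 (global command), bit 2 set
        # (manufacturer specific), direction client-to-server.
        frame_control = 0b00000100
        header = struct.pack("<BHBB", frame_control, manufacturer_code,
                             seq, 0x00)          # 0x00 = Read Attributes
        body = b"".join(struct.pack("<H", a) for a in attribute_ids)
        return header + body

    # Hypothetical manufacturer code and attribute IDs:
    print(zcl_read_attributes(0x1234, [0x0000, 0x0001]).hex())

If the manufacturer-specific bit is cleared, the 16-bit manufacturer code field is simply omitted from the header.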
You should consider reading the ZCL specification at zigbee.org, as it may help you to understand how ZigBee devices communicate with each other. It also explains the manufacturer-specific extensions to the standard.
If you are a developer or are just curious to see the ZigBee traffic among the devices and sensors you have, you might want to try sniffing the traffic.
We use the Perytons sniffer. It supports many off-the-shelf dongles you can use as front-ends, and they provide a 30-day free evaluation of their application.
Proprietary ZigBee is usually called a Manufacturer Specific Profile (MSP) in ZigBee terms, and it is very commonly used by developers and companies. The ZigBee Alliance also used to certify MSPs, and issued certificates for them, until some time last year. Now that certification is limited to ZigBee compliance only and does not include logo usage.
https://www.udemy.com/internet-of-things-and-everything-a-workshop-on-zigbee/
Related
I'm looking into how one would create a network of embedded systems. What I'd like to achieve is for a device (basically a chip with network capabilities) to send data directly to a server without using the internet (TCP/IP) or cellular data (GSM etc.).
I don't have much expertise in this field. Most of the networking protocols I've seen, like ZigBee, are designed for Local Area Networks. A Wide Area Network could perhaps be achieved over mesh or hopping, etc. But is there a known protocol for long-range networking, say for sensors, assuming there aren't low-power constraints?
I am guessing you want to avoid the internet and GSM not because you have anything against the protocols, but because you want your solution to work without having to rely on these networks.
If so, then you don't have to rule out TCP/IP, as it can be used in private networks too.
From your description it sounds like the closest thing that would meet your needs would be a satellite-based communications system. So long as you are not worried about price, power and, to a certain extent, size, then your sensors can communicate from anywhere using satellite links.
There are also HAPs - High Altitude Platforms. These are essentially low-flying satellites, or high-flying planes/blimps, so they don't have the same coverage but need less power for a given communication bandwidth. If you search for 'High Altitude Platform Networking' you should find plenty of examples, such as the following, which was an up-to-date summary of the technology at the time of writing:
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S2175-91462016000300249
As mentioned above, many if not all of these systems will support IP based communication protocols on top of their lower layers. Unless you really have some issue with the protocols themselves, it seems sensible to use them as there is such a wealth of experience, tools etc associated with IP communications, and using them does not make you dependent on the wider 'Internet'.
It's also worth mentioning that a common pattern is to have local groups of sensors communicate with each other and/or with a gateway, and have the gateway then communicate over the long link back to your server. This allows the individual devices to be smaller, cheaper, lower power, etc. This may not match your requirements if you are not likely to have clusters of sensors, however.
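As a rough illustration of that gateway pattern (the function names and transports here are stand-ins, not a real API), the gateway's job boils down to a loop like this:

    import json, time

    def read_local_sensors():
        # Stub: poll the short-range network (ZigBee, LoRa, etc.)
        # for readings from nearby sensor nodes.
        return [{"node": 1, "temp_c": 21.4}, {"node": 2, "temp_c": 19.8}]

    def send_over_uplink(batch):
        # Stub: push one batch over the long link (satellite modem,
        # HAP link, ...). A real gateway would queue and retry here.
        print("uplink:", json.dumps(batch))

    while True:
        send_over_uplink(read_local_sensors())  # aggregate and forward
        time.sleep(60)                          # uplink duty cycle

The point is that only the gateway needs the expensive, power-hungry long-range radio; the sensors themselves stay cheap.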
If you search for satellite sensor networks you may find you get a lot of hits for the gateway case mentioned above. The article 'A Survey of Architectures and Scenarios in Satellite-Based Wireless Sensor Networks: System Design Aspects' looks to be a good overview, which also covers HAPs, and it was available to download from this site at the time of writing:
https://www.researchgate.net/publication/250003254_A_Survey_of_Architectures_and_Scenarios_in_Satellite-Based_Wireless_Sensor_Networks_System_Design_Aspects
I would look into LoRa, which is designed with these requirements in mind.
You still need infrastructure for that (gateways, network servers); you could roll out your own or (in urbanized areas) use a pre-existing one like TTN (The Things Network).
The choice of networking technologies depends on many factors, such as range, power constraints, bandwidth requirements and so on.
User Raber already pointed out that you could look into LoRa / LoRaWAN. Since nobody else has mentioned it yet, my suggestion is to also have a look at SigFox technology, which differs slightly from LoRa in what it offers and in its business model.
Let me start with full disclosure: I have been given a mission which is out of my league, and I am 'grasping at straws' here.
The back story:
I have 3 different pieces of hardware. All of them collect the same data, but they store it differently.
I want to make a 4th device that will collect the data from all of the others. To do so, I first need to choose which protocol is better suited for the job and implement it on those devices.
They are not connected to the internet, but they do have a connection between them.
In my studies I once learnt about the SNMP protocol, and from googling now I have come across the OPC protocol.
I can't understand the difference between them [as far as I understand, both have alarm events, are secure, etc.], and I can't find full information about OPC.
I am trying to understand which one is suited for me.
To clarify, I am planning to implement my own version of a DB in the hardware [for example, with SNMP I will need to build my own MIBs / some kind of my own agent].
I agree that SNMP is the better choice in this case. But the explanation of OPC given above is strange and, in my point of view, just wrong.
SNMP is designed to monitor devices connected to some sort of network, like TCP/IP. Nowadays it is indeed mainly used in network equipment like routers etc.
OPC is a protocol to retrieve data, alarms and historical data from a device.
An alarm is, in the case of a PLC, a real alarm, like tank 1 almost overflowing; action is/must be taken.
OPC is not only used in SCADA. It is mainly used for software to communicate with PLCs and with custom-written software. That can be SCADA software, but that is not always the case.
SNMP is a general purpose protocol which is widely used everywhere to manage/monitor all kinds of equipment, systems, devices and hardware in different domains. Nowadays, it is a de-facto standard protocol used for monitoring/management of any type of entities.
In contrast, OPC is only used in the SCADA domain, so it is kind of specific. I'd go with SNMP if I were you.
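To give a feel for the SNMP side, here is a minimal sketch using the pysnmp library to read one value from an agent; the address 192.0.2.1 and the 'public' community string are placeholders for your own setup:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # SNMPv2c GET of sysDescr.0 from a hypothetical agent at 192.0.2.1
    errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),          # v2c community
        UdpTransportTarget(('192.0.2.1', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    if errorIndication:
        print(errorIndication)
    else:
        for name, value in varBinds:
            print(name.prettyPrint(), '=', value.prettyPrint())

For your own data you would define your own MIB and run an agent on each device, with the 4th device polling them like this.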
SNMP = Simple Network Management Protocol. In my experience it is far from simple, so beware of using it unless you are completely sure it addresses your problem best, e.g. you have large and complex firmware and software and you need to sync interfaces between various departments of software engineers.
In simple cases such as yours, I would propose just implementing something proprietary, or using Prometheus, which is far simpler and more flexible to change.
Good luck.
We are working on a project to create a V2V (vehicle-to-vehicle) ad-hoc network. Basically we are deploying Raspberry Pis in vehicles, and we are using XBee modules and the Zigbee protocol in order to exchange data between vehicles when they are near each other.
Our ad-hoc network is completely dynamic and decentralized (we cannot have any static nodes in the network). Our problem is that the topology of the mesh network created by the Zigbee protocol requires a coordinator to always be present, and the network will fail if this coordinator goes down.
It seems that using the Zigbee protocol requires knowing the topology of the network beforehand.
We do not know whether it is feasible to create our dynamic network using the Zigbee protocol without knowing the topology beforehand and without requiring a coordinator. Can we have more than one coordinator in the network to overcome this problem? Thanks in advance.
Is there a particular reason you are looking to use Zigbee? If you are after a truly decentralised network you would be better off either using a different protocol (one that has no need for a coordinator) or defining your own using basic RF modems (which is a lot more complicated).
However, XBee modules are configurable by AT commands; with a bit of work you could probably set them up so that a node periodically changes mode (when it is running as coordinator) to check for other coordinators, stays a standard node if it finds one, and steps up to be coordinator if the existing coordinator drops out (stops replying). A sketch of driving the AT commands follows the list below.
This approach would require you to solve a few issues though:
how do the remaining modules in the network decide which becomes coordinator?
how often should the coordinator scan for other coordinators, so as to achieve a reasonable response time yet not disrupt data flow?
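For reference, the mode switching itself can be scripted. Here is a rough sketch (Python + pyserial) of setting the Coordinator Enable register on an XBee Series 1 (802.15.4 firmware), assuming the module sits on /dev/ttyUSB0 at the default 9600 baud:

    import serial, time

    ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
    time.sleep(1.1)          # guard time before the escape sequence
    ser.write(b"+++")        # enter AT command mode (no trailing CR)
    time.sleep(1.1)          # guard time after the escape sequence
    assert ser.read(3) == b"OK\r"
    ser.write(b"ATCE1\r")    # CE=1: run as coordinator (ATCE0 = end device)
    assert ser.read(3) == b"OK\r"
    ser.write(b"ATCN\r")     # drop out of command mode
    ser.close()

Your election logic would decide when to send ATCE1 versus ATCE0 on each node.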
I would suggest that you provide a bit more information relevant to the question, such as:
how much data is being transferred?
how often is this data being transferred?
how are you planning to define where the data gets sent (addresses? or multicast to everywhere?)
Hopefully this points you in the right direction, but in the meantime I'd suggest you look at the XBee datasheets for the AT commands and what is possible (it has been a while since I used them).
James
Digi sells XBee modules that implement protocols other than ZigBee. Both the plain 802.15.4 module and their proprietary DigiMesh module would be possible candidates for your project if you don't need to be ZigBee-compliant.
I think that you could send broadcast messages with 802.15.4.
With DigiMesh, all nodes are of the same node type, but I do not know how well it will handle networks coming together and fragmenting on a regular basis. You could contact Digi's technical support or sales support teams to see if they can provide any guidance.
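If you experiment with the plain 802.15.4 broadcast route, Digi's digi-xbee Python library keeps it short; the serial port and payload below are placeholders:

    from digi.xbee.devices import XBeeDevice

    # XBee on a hypothetical serial port, default 9600 baud
    xbee = XBeeDevice("/dev/ttyUSB0", 9600)
    xbee.open()
    try:
        # A broadcast reaches every node in range; no coordinator involved
        xbee.send_data_broadcast("position update")
    finally:
        xbee.close()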
This may not be the typical Stack Overflow question.
A colleague of mine has been speculating that flow-based routing is going to be the next big thing in networking. OpenFlow provides the technology to use low-cost switches in large applications, IT data centers, etc., replacing Cisco, HP, etc. switches and routers. The theory is that you can create a hierarchy of these OpenFlow switches with simple configuration, e.g. no spanning tree. OpenFlow will route each flow to the appropriate switch/switch-port, using only the knowledge of the hierarchy of switches (no routers). The solution is supposed to save enterprises money and simplify networking.
Q. He is speculating that this may dramatically change enterprise networking. For many reasons, I am skeptical. I would like to hear your thoughts.
OpenFlow is a research project from Stanford University led by professor Nick McKeown. In the original OpenFlow research paper, the goal of OpenFlow was to give researchers a way "to run experimental protocols in the networks they use every day." For years networking researchers have had an almost impossible task deploying and evaluating their ideas on real networks with real Ethernet switches and IP routers. The difficulty is that real switches and routers from companies like Cisco, HP, and others are all closed, proprietary boxes that implement standard "protocols" like Ethernet spanning tree and OSPF. There are business reasons why Cisco and HP won't let you run software on their switches and routers; there is no technical reason. OpenFlow was invented to solve a people problem: if Cisco is not willing to let you run code on their switch, maybe they can at least provide a very narrow interface to let you remotely configure their switch, and that narrow interface is called OpenFlow.
To my knowledge more than a dozen companies are currently implementing OpenFlow support for their switches. Some like HP are only providing the OpenFlow software for research purposes. Others like NEC are actually offering commercial support.
For academic researchers that want to evaluate new routing protocols in real networks, OpenFlow is a huge win. For switch vendors, it is less clear if OpenFlow support will help, hurt, or have no effect in the long run. After all, the academic research market is very small.
The reason why OpenFlow is most often discussed in the context of enterprise networks is that OpenFlow grew out of a previous research project called Ethane that used OpenFlow's mechanism of remotely programming switches in an enterprise network in order to centralize a security policy. Ethane, and by extension OpenFlow, has led directly to two startup companies: Nicira, founded by Martin Casado, and Big Switch Networks, founded by Guido Appenzeller. It would be easier to implement an Ethane-like system if all of the switches in the network supported OpenFlow.
Closely related to enterprise networks are data center networks, the networks that interconnect thousands to tens of thousands of servers in companies such as Google, Facebook, Microsoft, Amazon.com, and Yahoo!. One problem with Ethernet is that it does not scale to this many servers on the same Layer 2 network. We attempted to solve this problem in a research project called PortLand. We used OpenFlow to facilitate programming the switches from a central controller, which we called a Fabric Manager. We released the PortLand source code as open source.
However, we also found a limitation in OpenFlow's functionality. In another data center networking research project called Helios, we were not able to use OpenFlow because it did not provide a mechanism for bonding multiple switch ports into a Link Aggregation Group (LAG). Presumably one could extend the OpenFlow specification indefinitely until all possible switch features become exposed.
There are other networks as well such as the Internet access networks, Internet backbones, home networks, wireless networks, cellular networks, etc. Researchers are trying to see where OpenFlow fits into all of these markets. What it really comes down to is the question, "what problem does OpenFlow solve?" Ethane makes a case for enterprise networks but I have not yet seen a compelling case for any other type of network. OpenFlow might be the next big thing, or it might end up being a case of "don't solve a people problem with a technical solution."
In order to assess the future of flow-based networking and OpenFlow, here’s the way to think about it.
It starts with the silicon trends: Moore’s Law (2X transistors per 18-24 months), and a correlated but slower improvement in the I/O bandwidth available on a single chip (roughly 2X every 30-36 months). You can now buy full-featured 10GbE single chip switches with 64 ports, and chips which have a mix of 40GbE and 10GbE ports with comparable total I/O bandwidth.
There are a variety of ways to physically connect these in a mesh (ignoring the loop-free constraints of spanning tree and the way Ethernet learns MAC addresses). In the high performance computing (HPC) world, a lot of work has been done building clusters with InfiniBand and other protocols using meshes of small switches to network the compute servers. This is now being applied to Ethernet meshes. The geometry of a CLOS or fat-tree topology enables a two-stage mesh with a large number of ports. The math is thus: where n is the number of ports per chip, the number of devices you can connect in a two-stage mesh is n²/2, and the number you can connect in a three-stage mesh is n³/4 (with 64-port chips, that is 2,048 devices in two stages and 65,536 in three). While with standard spanning tree and learning, the spanning tree protocol will disable the multi-path links to the second stage, most of the Ethernet switch vendors have some sort of multi-chassis Link Aggregation protocol which gets around the multi-pathing limitation. There is also standards work in this area. Although it might not be obvious, the vast majority of Link Aggregation schemes allocate traffic so that all the frames of any given flow take the same path. This is done in order to minimize out-of-order frames, which would otherwise get dropped by some higher-level protocol. They could have chosen to call this "flow-based multiplexing" but instead they call it "link aggregation".
Although the devil is in the details, there are a variety of data center operators and vendors that have concluded they don’t need to have large multi-slot chassis switches in the aggregation/core layer for server connect, instead using meshes of inexpensive 1U or 2U switches.
People have also concluded that eventually you need some kind of management station to set up the configuration of all the switches. Again, drawing from the experience with HPC and InfiniBand, they use what is called an InfiniBand Controller. In the telecom world, most telecom networks have evolved to separate the management and part of the control plane from the boxes that carry the data traffic.
Summarizing the points above: meshes of Ethernet switches with an external management plane, carrying multipath traffic where flows are kept in order, are evolutionary, not revolutionary, and are likely to become mainstream. At least one major company, Juniper, has made a big public statement about their endorsement of this approach. I'd call all of these "flow-based routing".
Juniper and other vendors' proprietary approaches notwithstanding, this is an area that cries out for standards. The Open Networking Foundation (ONF) was founded to promote standards in this area, starting with OpenFlow. Within a couple of months, the sixty-plus members of the ONF will be celebrating their first anniversary. Each member has, I am led to believe, paid tens of thousands of dollars to join. While the OpenFlow protocol has a ways to go before it is widely adopted, it has real momentum.
@Nathan: OpenFlow 1.1 actually adds some primitives that enable the use of multiple links via the Multipath Proposal.
An excellent view of OpenFlow by Simon Crosby
http://community.citrix.com/display/ocb/2011/03/21/The+Rise+of+the+Software+Defined+Network
More context on SDN, discussing the IETF's SDN initiative and the ONF's OpenFlow. Working in conjunction, they are a powerful combination: http://bit.ly/A8xYso
Nathan, excellent historical account and overview of OpenFlow. Thanks!
You've hit on the points that I've been wrapping my head around as to why OpenFlow might not be widely adopted. Since it was designed to be open, to allow researchers to run experimental protocols, and not necessarily to be "compatible with" the big players Cisco/HP/etc., it puts itself into a niche (although potentially big) market; more on this later. And as you've stated, it has received some adoption in the "cloud data centers (CDC)", e.g. Google, Facebook, etc., because they need to exploit experimental protocols to gain a competitive advantage or to optimize for their application.
As you've stated, some switch vendors have added OpenFlow capability to capitalize on the niche need in academia and potentially to sell into the CDCs: Google, Facebook. This is potentially a big market (or a bubble, if you're pessimistic).
The problem that I see is that the majority of the market (80% or more) is enterprise IT data centers. The requirement here is for stable, compatible networking. Open and less expensive would be nice, but not at the cost of the former.
One could imagine a day when corporate IT is partially or completely cloud-sourced, with QoS maintained by the cloud provider. In that case, experimental protocols could be leveraged to provide a competitive advantage in speed or QoS, and OpenFlow could play a more dominant role. I personally think this scenario is many years off.
So, the conclusion I come to is that, other than in research and perhaps the CDCs (Google, Facebook), the market is pretty small. I suppose that if researchers use OpenFlow to come up with a better protocol for, say, link aggregation or congestion management, then eventually Cisco and HP will provide those in their standard offerings because their customers will demand it. So OpenFlow could be a market influencer (via the research community), but it would not be a market disruptor.
Do you agree with my conclusions? Thanks for your input.
I know that a protocol is a set of rules that governs communication between two computers on a network, but how are those rules implemented on the computer? Is a protocol basically a piece of code or, in other words, software?
Protocols are generally built upon each other. At the risk of sounding pedantic, here's an example of a protocol stack and where/how each layer is implemented:
Application Protocol - the way a particular application talks to another instance of itself or a corresponding server; this is implemented in the application code or a shared library
TCP (or UDP, or another layer) - the way that information is sent at the binary level and split up into usable chunks, then reassembled at the destination; this is usually implemented as part of the operating system, but it is still software code
IP - the way that information (having already been split or truncated by something like TCP or UDP) makes its way from one place to another by routing over one or more "hops"; this is always software code, but is sometimes implemented in the OS and sometimes implemented in the network device (your LAN card, for example)
base-T (Ethernet), token ring, etc. - here we are physically getting into how pieces of hardware talk to one another; i.e., which wire corresponds to a particular type of signal; this is always implemented in hardware
electricity /photons - the laws that govern (or at least define) how electrons (or photons) flow over a conductive material or over the air; this is usually implemented in hardware ;)
In a sense, these are all "protocols" (a set of rules or expected behaviors that allow communication to take place), and they're built on one another.
Bear in mind that (aside from electricity) this is not an exhaustive list of the sort of protocols that exist at any of these layers!
Edit: Thanks to dmckee for pointing out that electricity isn't the only physical process used in networking ;)
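To make the top layer concrete, here is a toy application protocol over TCP in Python; the 4-byte length-prefix rule is invented just for this example:

    import socket, struct

    def recv_exact(sock, n):
        """Read exactly n bytes, looping because TCP is a byte stream."""
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            data += chunk
        return data

    def send_msg(sock, payload: bytes):
        # Our toy rule: 4-byte big-endian length prefix, then the payload.
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_msg(sock) -> bytes:
        (length,) = struct.unpack(">I", recv_exact(sock, 4))
        return recv_exact(sock, length)

Both sides agreeing on that framing rule is the protocol; these functions are just one implementation of it.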
Networking protocols are not pieces of code or software; they are only a set of rules. When software uses a specific networking protocol, the software is known as an implementation. There can be many different software implementations of the same protocol (e.g. Windows and UNIX have different TCP/IP implementations). It is possible to understand networking protocols without any knowledge of programming.
EDIT: How are they implemented? Here's a paper on taking an abstract specification of a protocol and implementing it in C. You'll see that less-strict protocols leave out certain details that programmers have to guess at, which makes some implementations incompatible with others.
A network protocol is basically like a spoken language. It is implemented by code that sends and receives specially prepared messages over the network/internet, much like the vocal cords you need to speak (the network and hardware) and the brain to actually understand what someone said (the protocol stack/software).
Sometimes protocols are implemented directly in hardware [for speed reasons] (like the Ethernet protocol for LANs), but software/code is always required to do something useful with a protocol.
This might be interesting for you:
The OSI Model
Protocol (Computing)
Software implements the rules defined in the protocol; some protocols are formally defined and some informally.
A protocol is a set of rules governing the communication between two entities.
In the computer/programming context, a protocol is a set of rules governing the communication between two programs.
In the computer network context, a protocol is a set of rules governing the communication between two programs, well, over a network.
In computers, in the end, everything is embodied in code...
Protocols are basically sets of rules. The way to implement one is first of all to draw a state machine diagram, as it completely specifies what the current state is, how the state changes on the basis of input, and what output actions are performed.
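As a toy illustration (the state and event names are invented here, loosely modeled on TCP's handshake), a table-driven state machine can be as small as this:

    # Transition table: (current state, input event) -> next state
    TRANSITIONS = {
        ("CLOSED",      "send_syn"):     "SYN_SENT",
        ("SYN_SENT",    "recv_syn_ack"): "ESTABLISHED",
        ("ESTABLISHED", "send_fin"):     "CLOSED",
    }

    def step(state, event):
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        return TRANSITIONS[(state, event)]

    state = "CLOSED"
    for event in ["send_syn", "recv_syn_ack", "send_fin"]:
        state = step(state, event)
        print(event, "->", state)

Rejecting events that have no entry in the table is exactly how a protocol implementation enforces its rules.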
Your answer is a very short one:
BY READING THE RFC.
The main networking problem is sharing data between computers. Each networking protocol tries to solve a little part of that major problem. Some of the protocols are implemented as software, some others as hardware. In short, protocols, like algorithms, can be implemented in many programming languages.
Back to TCP: it is typically implemented by the operating system.