Wide Area Network for embedded systems without cellular or internet - networking

I'm looking into how one would create a network of embedded systems. What I'd like to achieve is for a device (basically a chip with network capabilities) to send data directly to a server without using the internet (TCP/IP) or cellular data (GSM and the like).
I don't have much expertise in this field. Most of the networking protocols I've seen, like ZigBee, are designed for Local Area Networks. A Wide Area Network could perhaps be achieved over mesh networking or hopping, etc. But is there a known protocol for long-range networking, say for sensors, assuming there are no tight low-power constraints?

I am guessing you want to avoid the internet and GSM not because you have anything against the protocols, but because you want your solution to work without having to rely on these networks.
If so, then you don't have to rule out TCP/IP, as it can be used in private networks too.
From your description, the closest thing that would meet your needs is a satellite-based communications system. So long as you are not worried about price, power and, to a certain extent, size, your sensors can communicate from anywhere using satellite links.
There are also HAPs (High Altitude Platforms). These are essentially like low-flying satellites, or high-flying planes/blimps, so they don't have the same coverage but need less power for a given communication bandwidth. If you search for 'High Altitude Platform Networking' you should find plenty of examples, such as the following, which is an up-to-date summary of the technology at the time of writing:
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S2175-91462016000300249
As mentioned above, many if not all of these systems will support IP-based communication protocols on top of their lower layers. Unless you really have some issue with the protocols themselves, it seems sensible to use them, as there is such a wealth of experience, tools, etc. associated with IP communications, and using them does not make you dependent on the wider 'Internet'.
It's also worth mentioning that a common pattern is to have local groups of sensors communicate with each other and/or with a gateway, and the gateway then communicates over the long link back to your server. This allows the individual devices to be smaller, cheaper, lower power, etc. It may not match your requirements if you are not likely to have clusters of sensors, however; a small sketch of the pattern follows.
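As an illustration that none of this needs the public Internet, here is a minimal Python sketch of a sensor node pushing readings over a private IP network to a gateway or server. The address and the JSON payload format are made up for the example:

    # Sketch: a sensor node sending one reading over a private IP network.
    # The gateway address and payload layout are hypothetical.
    import json
    import socket
    import time

    GATEWAY = ("10.0.0.1", 9000)   # private-range address, no public Internet

    def send_reading(sensor_id: str, value: float) -> None:
        """Fire-and-forget UDP datagram carrying one sensor reading."""
        payload = json.dumps({"id": sensor_id, "t": time.time(), "v": value}).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, GATEWAY)

    send_reading("sensor-42", 21.5)

The same code works whether the link underneath is Ethernet, Wi-Fi or a satellite modem presenting an IP interface, which is exactly why keeping TCP/IP is attractive.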
If you search for satellite sensor networks you may find you get a lot of hits for the gateway case mentioned above. This article, 'A Survey of Architectures and Scenarios in Satellite-Based Wireless Sensor Networks: System Design Aspects', looks to be a good overview which includes HAPs as well, and it is available to download from this site at the time of writing:
https://www.researchgate.net/publication/250003254_A_Survey_of_Architectures_and_Scenarios_in_Satellite-Based_Wireless_Sensor_Networks_System_Design_Aspects

I would look into LoRa which is designed with these requirements in mind.
You still need infrastructure for that (gateways, network servers); you could roll out your own or, in urbanized areas, use a pre-existing one like The Things Network (TTN).
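One practical note if you go this route: LoRaWAN payloads are tiny (tens of bytes), so compact binary encodings are the norm. A Python sketch, with a completely made-up field layout:

    # Sketch: packing/unpacking a compact LoRa uplink payload.
    # The layout (uint16 id, int16 temperature in 0.01 degC, uint16 battery mV)
    # is a made-up example; real deployments define their own codecs.
    import struct

    def encode_uplink(sensor_id: int, temp_c: float, battery_mv: int) -> bytes:
        return struct.pack("<HhH", sensor_id, int(round(temp_c * 100)), battery_mv)

    def decode_uplink(payload: bytes) -> dict:
        sensor_id, temp_raw, battery_mv = struct.unpack("<HhH", payload)
        return {"id": sensor_id, "temp_c": temp_raw / 100, "battery_mv": battery_mv}

    frame = encode_uplink(7, 21.37, 3300)   # 6 bytes on air
    print(decode_uplink(frame))             # {'id': 7, 'temp_c': 21.37, 'battery_mv': 3300}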

The choice of networking technologies depends on many factors, such as range, power constraints, bandwidth requirements and so on.
User Raber already pointed out that you could look into LoRa / LoRaWAN. Since nobody else has mentioned it yet, my suggestion is to also have a look at SigFox technology, which differs from LoRa both in what it offers and in its business model.


OPC vs SNMP protocol

Let me start with a full disclosure: I have been given a mission which is out of my league, and I am 'grasping at straws' here.
The back story:
I have 3 different pieces of hardware. All of them collect the same data but store it differently.
I want to build a 4th device which will collect the data from all of the others, and to do so I first need to choose which protocol is better suited for the job and implement it on those devices.
They are not connected to the internet, but they do have a connection between them.
In my studies I once learnt about the SNMP protocol, and from googling I have now come across the OPC protocol.
I can't understand the difference between them (as far as I understand, both have alarm events, are secure, etc.), and I can't find complete information about OPC.
I'm trying to understand which one is suited to my case.
To clarify: I am planning to implement my own version of a DB in the hardware (for example, with SNMP I would need to build my own MIBs and some kind of agent).
I agree that SNMP is the better choice in this case, but the explanation of OPC given here is strange and, from my point of view, just wrong.
SNMP is designed to monitor devices connected to some sort of network, such as a TCP/IP network. Nowadays it is indeed mainly used in network equipment such as routers.
OPC is a protocol to retrieve data, alarms and historical data from a device.
An alarm, in the case of a PLC, is a real alarm, like tank 1 almost overflowing: action is/must be taken.
OPC is not only used in SCADA. It is mainly used for software to communicate with PLCs and with custom-written software. That can be SCADA software, but that is not always the case.
SNMP is a general-purpose protocol which is widely used to manage/monitor all kinds of equipment, systems, devices and hardware in different domains. Nowadays it is a de facto standard for monitoring/management of any type of entity.
In contrast, OPC is only used in the SCADA domain, so it is rather specific. I'd go with SNMP if I were you.
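To give a flavour of what polling a device over SNMP looks like, here is a minimal sketch using the pysnmp Python library; the target address, community string and the polled OID are placeholders:

    # Minimal SNMP GET with pysnmp (pip install pysnmp).
    # The host, community and OID below are placeholders, not real devices.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    errorIndication, errorStatus, errorIndex, varBinds = next(
        getCmd(SnmpEngine(),
               CommunityData('public', mpModel=1),        # SNMPv2c
               UdpTransportTarget(('192.0.2.10', 161)),   # placeholder device
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
    )

    if errorIndication:
        print(errorIndication)
    else:
        for name, value in varBinds:
            print(f"{name} = {value}")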
SNMP = Simple Network Management Protocol. In my experience it is far from simple, so beware of using it unless you are completely sure it addresses your problem best, e.g. you have large and complex firmware and software and you need to sync interfaces between various departments of software engineers.
In simple cases like yours, I would propose implementing something proprietary, or using Prometheus, which is far simpler and more flexible to change.
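For what it's worth, the Prometheus pattern is that every device exposes its metrics over HTTP and a central server scrapes them. A minimal sketch with the prometheus_client Python library (the metric name and the fake sensor read are made up):

    # Sketch: a device exposing one metric for Prometheus to scrape.
    # pip install prometheus-client
    import random
    import time
    from prometheus_client import Gauge, start_http_server

    tank_level = Gauge('tank_level_percent', 'Fill level of the tank')

    start_http_server(8000)   # metrics served at http://<device>:8000/metrics
    while True:
        tank_level.set(random.uniform(0, 100))   # stand-in for a real sensor read
        time.sleep(5)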
Good Luck.

Can a million New York city devices be programmed for true peer-to-peer? [closed]

If Chris and Pat want to exchange a text message, they send and receive via their network providers, which charge them for a connection.
If Chris and Pat are both located in New York City, and there are enough wireless devices between Chris and Pat all close enough to each other to form a continuous chain, is it possible for all those devices to be programmed to cooperatively forward packets amongst each other, bypassing the need for network providers?
It would seem the "address" of each device would have to include current geographic coordinates, and devices would have to report their movements frequently enough so routing attempts could still find them, but the speed and capacity of devices nowadays could handle that, right?
Would such a network be viable? Does it already exist or has it been attempted? Is there some kind of inherent programming problem that is difficult to overcome?
There are a few interesting things here:
Reachability. At a minimum you need a technology that can do ad-hoc, peer-to-peer networking. Of those technologies, only Bluetooth, NFC and Wi-Fi are implemented at all commonly. Of those, again, only Wi-Fi currently has the signal strength to reach devices in other houses or on the street, and even there typical ranges are 30-60 m (and that's for APs; it may be lower for UEs).
Mobility. ANY short-range wireless communication protocol has difficulties with fast-moving devices. It's simple math: if your coverage is 50 m in diameter and you move at about 20 km/h (5.5 m/s), you have less than 10 s to detect, connect and send data while passing the link. And that doesn't even consider receiving traffic: you also have to let all devices know that, for the next 10 s, you want to receive data via this access network. To give an example, Wi-Fi connection setup with decent authentication (which you need for something like this) alone takes a few seconds. 10 s might be doable, but as soon as we talk about cars, trains, etc., it becomes almost impossible with current technology. And then again, if you can't connect to those, what are the odds you can cross a huge boulevard with your limited reachability?
Hop-to-hop delays. You need a lot of hops. We can fairly assume that you need a hop at least every 20-30 m; let's average it at 40 hops/km, so to send a packet over, say, 5 km you'd need 200 hops. Each hop needs to take in a packet (L2 processing), route it (L3 processing) and send it out again (L2 processing). While mobile devices are relatively powerful these days, I wouldn't assume they can do this in the microseconds routers take. On top of that, in a wireless network you have to wait for a transmission slot, which can take on the order of milliseconds (at each hop!). All in all, the odds are high that this would be a terribly slow network.
Loss. This depends a bit on the wireless protocol: either it has its own reliable-delivery mechanism (which makes the previous point worse) or it doesn't. In the latter case, suppose each wireless link has about 0.1% loss, i.e. 99.9% delivery; that compounds to an 18.1% end-to-end loss rate over the 200 hops considered above ((1 - 0.999**200) * 100). That is nearly impossible to work with in day-to-day communications.
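The numbers above are easy to sanity-check:

    # Back-of-the-envelope check of the mobility window, hop count and loss.
    coverage_m, speed_ms = 50, 20 / 3.6      # 50 m cell, 20 km/h pedestrian/cyclist
    window_s = coverage_m / speed_ms         # time spent inside one cell

    hops_per_km, distance_km = 40, 5
    hops = hops_per_km * distance_km         # 200 hops across the city

    per_link_loss = 0.001                    # 0.1% loss per wireless link
    end_to_end_loss = 1 - (1 - per_link_loss) ** hops

    print(f"{window_s:.1f} s window, {hops} hops, "
          f"{end_to_end_loss:.1%} end-to-end loss")   # 9.0 s, 200, 18.1%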
Routing. Let's say you need a few million devices and thus routes. Traditional routing at that scale usually takes some very heavy multicore routers with loads of processing power; let's just say mobile devices (today) can't cut it yet. A purely geographically based routing mechanism might work (a toy sketch follows below), but I can't personally think of any system for this, even a theoretical one, that works today. You would still have to distribute those routes, deal with (VERY) frequent route updates, avoid routing loops, and so on. Even with that, I'd guess you'd hit the same scaling issues as, for example, OSPF. All in all, though, I think this is something mobile devices will be able to handle somewhere in the not-so-far future; we're just talking about computing capacity here.
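For the curious, here is the greedy flavour of geographic forwarding in a few lines: each node hands the packet to whichever in-range neighbour is closest to the destination. Real protocols of this kind (GPSR, for instance) also need a recovery mode for dead ends, which this sketch omits:

    # Toy greedy geographic forwarding; coordinates are arbitrary units.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def next_hop(here, neighbours, dest):
        """Return the neighbour strictly closer to dest, or None (dead end)."""
        best = min(neighbours, key=lambda n: dist(n, dest), default=None)
        if best is None or dist(best, dest) >= dist(here, dest):
            return None
        return best

    # Node at (0, 0) forwarding towards (100, 0) with two neighbours in range:
    print(next_hop((0, 0), [(30, 10), (25, -40)], (100, 0)))   # (30, 10)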
There are some other reasons why such a network is very hard today, but these are the major ones I know of. Is it impossible? No, of course not, but I just wanted to show why I think it is almost impossible with current technologies and would require some very significant improvements, not just building the network.
If everyone has a device with sufficient receive/process/send capabilities, then backbones (ISPs) aren't really necessary. Start at mesh networking to find the huge web of implementations, devices, projects, etc. that have already been in development. The early ARPANET was essentially true peer-to-peer, but the number of net nodes grew faster than the nodes' individual capabilities, hence the growth of backbones and those damn fees everyone's paying to phone and cable companies.
Eventually someone will realize there are a million teenagers in NYC that would be happy to text and email each other for free. They'll create a 99-cent download to let everyone turn their phones and laptops and discarded devices into routers and repeaters, and it'll go viral.
Someday household rooftop repeaters might become as common as TV antennas used to be.
Please check: Wireless sensor network
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound and pressure, and cooperatively pass their data through the network to a main location.

What is 'proprietary ZigBee'?

I recently purchased an assortment of sensors from a company and have been having little success in getting them to communicate with my software. I sent a note to the manufacturer asking about compatibility and was told that the devices use 'proprietary ZigBee'.
What does this mean? Do they use a different command set? Is the information encrypted somehow?
If they are "ZigBee certified" or have a ZigBee logo on the packaging, then they have to implement the standard ZigBee protocols, including ZCL (ZigBee Cluster Library) and ZDO/ZDP (ZigBee Device Object/Profile) on endpoint 0.
Their product could include Manufacturer-Specific clusters with undocumented commands.
If they're using ZCL, then standard ZDO discovery should still work and allow you to enumerate all endpoints and the clusters on them that don't have the manufacturer-specific bit set. If you know the 16-bit manufacturer ID they're using, you can discover those attributes as well and display their values (though you won't know what the values mean).
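To show what that manufacturer-specific bit looks like on the wire, here is a Python sketch of packing a ZCL Read Attributes frame; the manufacturer code and attribute IDs are placeholders, and you should verify the byte layout against the ZCL specification before relying on it:

    # Sketch: ZCL "Read Attributes" frame with the manufacturer-specific bit set.
    # Values are placeholders; check the ZCL spec for the authoritative layout.
    import struct

    def zcl_read_attributes(mfg_code: int, seq: int, attr_ids: list) -> bytes:
        frame_control = 0b00000100   # global command + manufacturer-specific bit
        # frame control, 16-bit manufacturer code, sequence number, command 0x00
        header = struct.pack("<BHBB", frame_control, mfg_code, seq, 0x00)
        payload = b"".join(struct.pack("<H", a) for a in attr_ids)
        return header + payload

    frame = zcl_read_attributes(0x1234, seq=1, attr_ids=[0x0000, 0x0001])
    print(frame.hex())   # raw frame bytes as hex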
You should consider reading the ZCL specification at zigbee.org, as it may help you to understand how ZigBee devices communicate with each other. It also explains the manufacturer-specific extensions to the standard.
If you are a developer or are just curious to see the ZigBee traffic among the devices and sensors you have, you might want to try sniffing the traffic.
We use the Perytons sniffer. They support many off-the-shelf dongles you can use as front-ends, and they provide a 30-day free evaluation of their application.
Proprietary ZigBee is usually called a Manufacturer-Specific Profile (MSP) in ZigBee, and it is very commonly used by developers and companies. ZigBee also used to certify MSPs until some time last year, and used to issue certificates for them too; now, however, certification is limited to ZigBee compliance and does not cover logo usage.
https://www.udemy.com/internet-of-things-and-everything-a-workshop-on-zigbee/

NBAD, Netflow on layer 7

I'm developing Network Behavior Anomaly Detection, and I'm using the Cisco NetFlow protocol for collecting traffic information. I want to collect information about layer 7 of the ISO OSI Reference Model, especially the HTTPS protocol.
What is the best way to achieve this?
Maybe someone will find this helpful:
In my opinion you should try sFlow or Flexible NetFlow.
sFlow uses sampling to achieve scalability. The architecture consists of receiving devices that take two types of samples:
- randomly sampled packets
- counter samples taken at certain time intervals
Sampled packets are sent as sFlow datagrams to a central server running software for the analysis and reporting of network traffic: the sFlow collector.
sFlow may be implemented in hardware or software, and while the name "sFlow" suggests that this is a flow technology, it is not a flow technology at all: it transmits a picture of the traffic on the basis of samples.
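The arithmetic behind that sampling is simple: the collector scales sampled counts back up by the sampling rate to estimate the real traffic, for example:

    # How sampled counts turn into traffic estimates (numbers are illustrative).
    samples_seen = 1_250      # packets the collector actually received for a flow
    sampling_rate = 1_024     # 1-in-1024 packet sampling

    estimated_packets = samples_seen * sampling_rate
    print(f"~{estimated_packets:,} packets estimated on the wire")  # ~1,280,000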
NetFlow is a real flow technology. Flow entries are generated in the network devices and combined into export packets.
Flexible NetFlow allows customers to export almost everything that passes through the router, including entire packets, and to do it in real time, like sFlow.
In my opinion Flexible NetFlow is much better, and if you're worried about DDoS attacks, choose it.
If FNF is better, why use sFlow? Because many switches today only support sFlow; if FNF isn't available and you want real-time data, sFlow is the best option.

Flow based routing and openflow

This may not be the typical stackoverflow question.
A colleague of mine has been speculating that flow-based routing is going to be the next big thing in networking. OpenFlow provides the technology to use low-cost switches in large applications, IT data centers, etc., replacing Cisco, HP, etc. switches and routers. The theory is that you can create a hierarchy of these OpenFlow switches with simple configuration, e.g. no spanning tree. OpenFlow will route each flow to the appropriate switch/switch-port, using only knowledge of the hierarchy of switches (no routers). The solution is supposed to save enterprises money and simplify networking.
Q. He is speculating that this may dramatically change enterprise networking. For many reasons, I am skeptical. I would like to hear your thoughts.
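For context before the answers: the core abstraction under discussion is a match-action flow table that a central controller programs into each switch. A toy sketch of that concept (illustrative only, not the actual OpenFlow wire protocol):

    # Toy match-action flow table: a controller installs (match -> port) rules
    # and the switch just looks packets up. Real OpenFlow matches many header
    # fields and adds priorities, timeouts and counters.
    flow_table = {}   # (src_ip, dst_ip) -> output port

    def install_flow(src_ip: str, dst_ip: str, out_port: int) -> None:
        flow_table[(src_ip, dst_ip)] = out_port

    def forward(packet: dict) -> str:
        port = flow_table.get((packet["src"], packet["dst"]))
        if port is None:
            return "table miss: punt to controller"
        return f"output on port {port}"

    install_flow("10.0.0.1", "10.0.0.2", 3)
    print(forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))   # output on port 3
    print(forward({"src": "10.0.0.9", "dst": "10.0.0.2"}))   # punt to controller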
OpenFlow is a research project from Stanford University led by professor Nick McKeown. In the original OpenFlow research paper, the goal of OpenFlow was to give researchers a way "to run experimental protocols in the networks they use every day." For years networking researchers have had an almost impossible task deploying and evaluating their ideas on real networks with real Ethernet switches and IP routers. The difficulty is that real switches and routers from companies like Cisco, HP, and others are all closed, proprietary boxes that implement standard "protocols", like Ethernet spanning tree and OSPF. There are business reasons why Cisco and HP won't let you run software on their switches and routers; there is no technical reason. OpenFlow was invented to solve a people problem: if Cisco is not willing to let you run code on their switch, maybe they can at least provide a very narrow interface to let you remotely configure their switch, and that narrow interface is called OpenFlow.
To my knowledge more than a dozen companies are currently implementing OpenFlow support for their switches. Some like HP are only providing the OpenFlow software for research purposes. Others like NEC are actually offering commercial support.
For academic researchers that want to evaluate new routing protocols in real networks, OpenFlow is a huge win. For switch vendors, it is less clear if OpenFlow support will help, hurt, or have no effect in the long run. After all, the academic research market is very small.
The reason why OpenFlow is most often discussed in the context of enterprise networks is that OpenFlow grew out of a previous research project called Ethane that used OpenFlow's mechanism of remotely programming switches in an enterprise network in order to centralize a security policy. Ethane, and by extension OpenFlow, has led directly to two startup companies: Nicira, founded by Martin Casado, and Big Switch Networks, founded by Guido Appenzeller. It would be easier to implement an Ethane-like system if all of the switches in the network supported OpenFlow.
Closely related to enterprise networks are data center networks, the networks that interconnect thousands to tens of thousands of servers in companies such as Google, Facebook, Microsoft, Amazon.com, and Yahoo!. One problem with Ethernet is that it does not scale to this many servers on the same Layer 2 network. We attempted to solve this problem in a research project called PortLand. We used OpenFlow to facilitate programming the switches from a central controller, which we called a Fabric Manager. We released the PortLand source code as open source.
However, we also found a limitation in OpenFlow's functionality. In another data center networking research project called Helios, we were not able to use OpenFlow because it did not provide a mechanism for bonding multiple switch ports into a Link Aggregation Group (LAG). Presumably one could extend the OpenFlow specification indefinitely until all possible switch features become exposed.
There are other networks as well such as the Internet access networks, Internet backbones, home networks, wireless networks, cellular networks, etc. Researchers are trying to see where OpenFlow fits into all of these markets. What it really comes down to is the question, "what problem does OpenFlow solve?" Ethane makes a case for enterprise networks but I have not yet seen a compelling case for any other type of network. OpenFlow might be the next big thing, or it might end up being a case of "don't solve a people problem with a technical solution."
In order to assess the future of flow-based networking and OpenFlow, here’s the way to think about it.
It starts with the silicon trends: Moore’s Law (2X transistors per 18-24 months), and a correlated but slower improvement in the I/O bandwidth available on a single chip (roughly 2X every 30-36 months). You can now buy full-featured 10GbE single chip switches with 64 ports, and chips which have a mix of 40GbE and 10GbE ports with comparable total I/O bandwidth.
There are a variety of ways to physically connect these in a mesh (ignoring the loop-free constraints of spanning tree and the way Ethernet learns MAC addresses). In the high performance computing (HPC) world, a lot of work has been done building clusters with InfiniBand and other protocols, using meshes of small switches to network the compute servers. This is now being applied to Ethernet meshes. The geometry of a Clos or fat-tree topology enables a two-stage mesh with a large number of ports. The math is thus: where n is the number of ports per chip, the number of devices you can connect in a two-stage mesh is n^2 / 2, and the number you can connect in a three-stage mesh is n^3 / 4. While with standard spanning tree and learning, the spanning tree protocol will disable the multi-path links to the second stage, most of the Ethernet switch vendors have some sort of multi-chassis Link Aggregation protocol which gets around the multi-pathing limitation. There is also standards work in this area. Although it might not be obvious, the vast majority of Link Aggregation schemes allocate traffic so that all the frames of any given flow take the same path. This is done in order to minimize out-of-order frames so they don't get dropped by some higher-level protocol. They could have chosen to call this "flow-based multiplexing" but instead they call it "link aggregation".
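Plugging numbers into those formulas shows why meshes of small chips are attractive:

    # Fat-tree arithmetic from above: n-port chips give n**2/2 devices
    # in a two-stage mesh and n**3/4 in a three-stage mesh.
    for n in (48, 64):
        print(f"n={n}: two-stage={n**2 // 2}, three-stage={n**3 // 4}")
    # n=48: two-stage=1152, three-stage=27648
    # n=64: two-stage=2048, three-stage=65536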
Although the devil is in the details, there are a variety of data center operators and vendors that have concluded they don’t need to have large multi-slot chassis switches in the aggregation/core layer for server connect, instead using meshes of inexpensive 1U or 2U switches.
People have also concluded that eventually you need some kind of management station to set up the configuration of all the switches. Again, drawing from the experience with HPC and InfiniBand, they use what is called an InfiniBand Controller. In the telecom world, most telecom networks have evolved to separate the management and part of the control plane from the boxes that carry the data traffic.
Summarizing the points above, meshes of Ethernet switches with an external management plane with multipath traffic where flows are kept in order is evolutionary, not revolutionary, and is likely to become mainstream. At least one major company, Juniper, has made a big public statement about their endorsement of this approach. I'd call all of these "flow-based routing".
Juniper and other vendors’ proprietary approaches notwithstanding, this is an area that cries out for standards. The Open Networking Foundation (ONF), was founded to promote standards in this area, starting with OpenFlow. Within a couple of months, the sixty+ members of ONF will be celebrating their first year anniversary. Each member has, I am led to believe, paid tens of thousands of dollars to join. While the OpenFlow protocol has a ways to go before it is widely adopted, it has real momentum.
#Nathan: OpenFlow 1.1 actually adds some primitives that enable the use of multiple links via the Multipath Proposal.
An excellent view of OpenFlow by Simon Crosby
http://community.citrix.com/display/ocb/2011/03/21/The+Rise+of+the+Software+Defined+Network
More context on SDN, discussing the IETF's SDN initiative and ONF's OpenFlow; working in conjunction they make a powerful combination: http://bit.ly/A8xYso
Nathan, Excellent historical account and overview of openflow. Thanks!
You've hit on the points that I've been wrapping my head around as to why OpenFlow might not be widely adopted. Since it was designed to be open, to allow researchers the ability to run experimental protocols, and not necessarily to be "compatible with" the big players (Cisco/HP/etc.), it puts itself into a niche (although potentially big) market; more on this later. And as you've stated, it has received some adoption in "cloud data centers" (CDCs), e.g. Google, Facebook, etc., because they need to exploit experimental protocols to gain a competitive advantage or optimize for their application.
As you've stated, some switch vendors have added OpenFlow capability to capitalize on the niche need in academia and potentially sell into the CDCs (Google, Facebook). This is potentially a big market (or a bubble, if you're pessimistic).
The problem that I see is that the majority of the market (80% or more) is enterprise IT data centers. The requirement there is for stable, compatible networking. Open and less expensive would be nice, but not at the cost of the former.
One could imagine a day when corporate IT is partially or completely cloud-sourced, with QoS maintained by the cloud provider. In that case, experimental protocols could be leveraged to provide a competitive advantage in speed or QoS, and OpenFlow could play a more dominant role. I personally think this scenario is many years off.
So the conclusion I come to is that, other than in research and perhaps CDCs (Google, Facebook), the market is pretty small. I suppose that if researchers use OpenFlow to come up with a better protocol for, say, link aggregation or congestion management, then eventually Cisco and HP will provide those in their standard offerings, because their customers will demand it. So OpenFlow could be a market influencer (via the research community), but it would not be a market disruptor.
Do you agree with my conclusions? Thanks for your input.
