Let me start with full disclosure: I have been given a mission which is out of my league, and I am 'grasping at straws' here.
The back story:
I have 3 different pieces of hardware. All of them collect the same data but store it differently.
I want to make a 4th device which will collect the data from all of the others. To do so, I first need to choose which protocol is best suited for this job and implement it on those devices.
They are not connected to the internet, but they do have a connection between them.
In my studies I once learnt about the SNMP protocol, and from googling I have now come across the OPC protocol.
I can't understand the difference between them [as far as I understand, both have alarm events, security, etc.], and I can't find full information about OPC.
I am trying to understand which one is suited to my case.
To clarify: I am planning to implement my own version of a DB in the hardware [for example, with SNMP I would need to build my own MIBs and some kind of agent of my own].
I agree that SNMP is the better choice in this case, but the explanation of OPC given here is strange, in my view just wrong.
SNMP is designed to monitor devices connected to some sort of network like TCP/IP. Nowadays it is indeed mainly used for network equipment like routers, etc.
OPC is a protocol to retrieve live values, alarms and historical data from a device.
In the case of a PLC, an alarm is a real alarm, like tank 1 is almost overflowing: action is/must be taken.
OPC is not only used in SCADA. It is mainly used for software to communicate with PLCs and with custom-written software. That can be SCADA software, but that is not always the case.
SNMP is a general-purpose protocol which is widely used to manage and monitor all kinds of equipment, systems, devices and hardware in different domains. Nowadays it is a de facto standard for monitoring and managing any type of entity.
In contrast, OPC is only used in the SCADA domain, so it is rather specific. I'd go with SNMP if I were you.
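To give a feel for the collector side: polling a value over SNMP is only a few lines, e.g. with the third-party pysnmp library (API as of pysnmp 4.x; the community string, address and OID below are placeholders, and your own MIB's OIDs would go in their place):

    from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                              UdpTransportTarget, ContextData,
                              ObjectType, ObjectIdentity)

    # Poll a single value (standard sysDescr.0 here) from one of the devices.
    errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),            # SNMPv2c, placeholder community
        UdpTransportTarget(('192.168.0.10', 161)),     # placeholder device address
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))
    if errorIndication:
        print(errorIndication)
    else:
        for name, value in varBinds:
            print(name, '=', value)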
SNMP = Simple Network Management Protocol. In my experience it is far from simple, so beware of using it unless you are completely sure it addresses your problem best, e.g. you have large and complex firmware and software and you need to sync interfaces between various teams of software engineers.
In simple cases like yours I would propose just implementing something proprietary, or using Prometheus, which is far simpler and more flexible to change.
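As a rough sketch of how little the Prometheus route asks of each device: it only has to serve its readings as plain text over HTTP for the Prometheus server to scrape. A minimal example with Python's standard library (the metric names and values are made up):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Expose a couple of readings in Prometheus's text exposition format;
    # a Prometheus server would scrape http://<device>:8000/metrics.
    class Metrics(BaseHTTPRequestHandler):
        def do_GET(self):
            body = (
                'device_temperature_celsius 21.5\n'   # placeholder readings
                'device_samples_total 12345\n'
            ).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'text/plain; version=0.0.4')
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(('', 8000), Metrics).serve_forever()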
Good Luck.
I was wondering if DDS could be used over the internet, and if it would be a good choice for online gaming.
I have seen on the RTI website that they support WAN, but does that mean I can subscribe to a topic from another participant that is on the other side of the world?
What would happen to the QoS guarantees if this was the case?
Thanks.
Disclaimer: I work on OpenDDS full time, but have no experience in networked games programming.
An internet-enabled DDS could be used for connecting game clients. Whether or not it's a good idea is something I can't answer without more specific information, but the QoS part is a good question. In OpenDDS, as far as I'm aware, we try to adhere to the QoS defined by the user just as if it were a normal RTPS connection. This means using it over the Internet might require some tuning of the QoS, depending on which QoS you want to use. For example, if the deadline QoS was being used on a local network, the time period might have to be relaxed given the greater latency of the Internet.
For OpenDDS, internet-enabled RTPS is described in Chapter 15 of the OpenDDS Developer's Guide: http://download.objectcomputing.com/OpenDDS/OpenDDS-latest.pdf. In addition to using ICE to overcome NATs, we also have a feature called the RTPS Relay to enable connections when a client can't use ICE.
I'm not familiar with the specific capabilities of RTI Connext here, but as far as I'm aware they are similar, in that it uses ICE as well. It should also be noted that internet-enabled RTPS is not standardized, so Connext and OpenDDS wouldn't be able to talk to each other over a WAN.
OpenDDS would only be appropriate for games in very constrained environments because of the bandwidth requirements. If all users are on the same LAN, then the UDP multicast approach that RTPS uses would be effective for a peer-to-peer game architecture. However, if remote users are added, then the requirement that every peer send every update directly to every other peer will very quickly explode the bandwidth requirements.
Given that the RTPS relay is already another application that needs to be run, a game server that collates updates from peers and sends world state would be far more effective for cases where users are not all on a single LAN segment.
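A back-of-envelope calculation makes the difference concrete (the peer count, update size and rate below are assumptions, not measurements):

    # Peer-to-peer: each peer sends every update to every other peer.
    peers, update_bytes, rate_hz = 32, 200, 30
    p2p_up = (peers - 1) * update_bytes * rate_hz * 8 / 1e6   # Mbit/s per peer
    # Client/server: each peer sends one update per tick to the server.
    srv_up = update_bytes * rate_hz * 8 / 1e6
    print(f'p2p upstream per peer:    {p2p_up:.2f} Mbit/s')        # ~1.49
    print(f'server upstream per peer: {srv_up:.2f} Mbit/s')        # ~0.05
    print(f'p2p total on the wire:    {peers * p2p_up:.1f} Mbit/s')  # ~47.6

Per-peer upstream grows linearly with the peer count, and total traffic grows quadratically, which is why the peer-to-peer approach stops scaling once users leave the LAN.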
I'm looking into how one would create a network of embedded systems. What I'd like to achieve is for a device (basically a chip with network capabilities) to directly send data to a server without using the internet (TCP/IP) or cellular data (GSM, etc.).
I don't have much expertise in this field. Most of the networking protocols I've seen, like ZigBee, are designed for local area networks. A wide area network could perhaps be achieved with meshing or hopping, etc. But is there a known protocol for long-range networking, say for sensors, assuming there are no low-power constraints?
I am guessing you want to avoid the internet and GSM not because you have anything against the protocols, but because you want your solution to work without having to rely on those networks.
If so, then you don't have to rule out TCP/IP, as it can be used in private networks too.
From your description, it sounds like the closest thing that would meet your needs is a satellite-based communications system. So long as you are not worried about price, power and, to a certain extent, size, your sensors can communicate from anywhere using satellite links.
There are also HAPs (High Altitude Platforms). These are essentially like low-flying satellites, or high-flying planes/blimps, so they don't have the same coverage but need less power for a given communication bandwidth. If you search for 'High Altitude Platform Networking' you should find plenty of examples, such as the following, which was an up-to-date summary of the technology at the time of writing:
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S2175-91462016000300249
As mentioned above, many if not all of these systems will support IP-based communication protocols on top of their lower layers. Unless you really have some issue with the protocols themselves, it seems sensible to use them, as there is such a wealth of experience, tools, etc. associated with IP communications, and using them does not make you dependent on the wider 'Internet'.
It's also worth mentioning that a common pattern is to have local groups of sensors communicate with each other and/or with a gateway, and have the gateway then communicate over the long link back to your server (see the sketch below). This allows the individual devices to be smaller, cheaper, lower power, etc. This may not match your requirements if you are not likely to have clusters of sensors, however.
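A gateway in this pattern can be very small. Here is a minimal sketch with Python's standard library, assuming the local radio bridge delivers readings as UDP datagrams and the long link carries plain TCP; all addresses, ports and the wire format are placeholders:

    import socket

    # Collect readings from nearby sensors on the short-range link...
    local = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    local.bind(('0.0.0.0', 5005))

    # ...and forward them over the single long-haul link to the server.
    uplink = socket.create_connection(('server.example.org', 7000))
    while True:
        reading, sensor = local.recvfrom(256)
        uplink.sendall(reading + b'\n')   # one reading per line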
If you search for satellite sensor networks you may find you get a lot of hits for the gateway case mentioned above. The article 'A Survey of Architectures and Scenarios in Satellite-Based Wireless Sensor Networks: System Design Aspects' looks to be a good overview which also includes HAPs, and it is available to download from this site at the time of writing:
https://www.researchgate.net/publication/250003254_A_Survey_of_Architectures_and_Scenarios_in_Satellite-Based_Wireless_Sensor_Networks_System_Design_Aspects
I would look into LoRa, which is designed with these requirements in mind.
You still need infrastructure for that (gateways, network servers); you could roll out your own or (in urbanized areas) use a pre-existing one like TTN (The Things Network).
The choice of networking technologies depends on many factors, such as range, power constraints, bandwidth requirements and so on.
User Raber already pointed out that you could look into LoRa / LoRaWAN. Since nobody else has mentioned it yet, my suggestion is to also have a look at SigFox technology, which differs slightly from LoRa in what it offers and in its business model.
I recently purchased an assortment of sensors from a company and have been having little success in getting them to communicate with my software. I sent a note to the manufacturer asking about compatibility and was told that the devices use 'proprietary ZigBee'.
What does this mean? Do they use a different command set? Is the information encrypted somehow?
If they are "ZigBee certified" or have a ZigBee logo on the packaging, then they have to implement the standard ZigBee protocols, including ZCL (ZigBee Cluster Library) and ZDO/ZDP (ZigBee Device Object/Profile) on endpoint 0.
Their product could include Manufacturer-Specific clusters with undocumented commands.
If they're using ZCL, then standard ZDO discovery should still work and allow you to enumerate all endpoints and those of their clusters that don't have the manufacturer-specific bit set. If you know the 16-bit manufacturer ID they're using, you can discover those attributes as well and display their values (though you won't know what they mean).
You should consider reading the ZCL specification at zigbee.org, as it may help you to understand how ZigBee devices communicate with each other. It also explains the manufacturer-specific extensions to the standard.
If you are a developer or are just curious to see the ZigBee traffic among the devices and sensors you have, you might want to try sniffing the traffic.
We use the Perytons sniffer. They support many off-the-shelf dongles you can use as front-ends, and they provide a 30-day free evaluation of their application.
Proprietary ZigBee is usually called a Manufacturer Specific Profile (MSP) in ZigBee, and it is very commonly used by developers and companies. The ZigBee Alliance used to certify MSPs until some time last year, and used to issue a certificate for them too, but now certification is limited to ZigBee compliance and no longer covers logo usage.
https://www.udemy.com/internet-of-things-and-everything-a-workshop-on-zigbee/
I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Else, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all, I should call it minimal).
Just implement one server with a public address and start listening for UDP.
Peers located behind NATs contact the server, which can read from the received datagrams how their private IP addresses have been translated into public ones.
The server sends that information back to each peer, who can forward it to other peers. The server can also help exchange this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple and easy to implement, but it does not cover lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical stuff that TCP solves for you in the network stack).
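A minimal sketch of such a rendezvous server with Python's standard library (single-threaded, no error handling, port number arbitrary):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 9999))

    last_peer = None
    while True:
        data, addr = sock.recvfrom(1024)   # addr is the NAT-translated ip:port
        # Tell the peer how its address looks from the outside.
        sock.sendto('you are {}:{}'.format(*addr).encode(), addr)
        # Introduce the newcomer and the previous peer to each other.
        if last_peer and last_peer != addr:
            sock.sendto('peer {}:{}'.format(*addr).encode(), last_peer)
            sock.sendto('peer {}:{}'.format(*last_peer).encode(), addr)
        last_peer = addr

Once both peers know each other's translated addresses, they send datagrams directly and the NATs (in the friendly cases) let them through.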
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has a lot of effort going into it, but it is primarily geared towards XMPP. You can port parts of the ICE code off it and at least get hole-punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps", decentralized CouchDB-based applications that can spread in a viral fashion to other CouchDB servers. All you need to write CouchApps is JavaScript and a grasp of the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce to CouchDB is a Master-to-Master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (which is where you have a room full of people replicating to the person next to them simulating an ad hoc network).
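Kicking off such a replication is a single call to CouchDB's HTTP _replicate endpoint; for example, from Python's standard library (the node addresses and database name are placeholders, and a real deployment would also need admin credentials):

    import json, urllib.request

    # Ask the local node to continuously pull the 'posts' database
    # from a (hypothetical) remote node.
    req = urllib.request.Request(
        'http://localhost:5984/_replicate',
        data=json.dumps({
            'source': 'http://example.org:5984/posts',
            'target': 'posts',
            'continuous': True,
        }).encode(),
        headers={'Content-Type': 'application/json'})
    print(urllib.request.urlopen(req).read().decode())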
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!
I know that a protocol is a set of rules that governs communication between two computers on a network, but how are those rules implemented on the computer? Is a protocol basically a piece of code or, in other words, software?
Protocols are generally built upon each other. At the risk of sounding pedantic, here's an example of a protocol and where/how it's implemented:
Application Protocol - the way a particular application talks to another instance of itself or a corresponding server; this is implemented in the application code or a shared library
TCP (or UDP, or another layer) - the way that information is sent at the binary level and split up into usable chunks, then reassembled at the destination; this is usually implemented as part of the operating system, but it is still software code
IP - the way that information (having already been split or truncated by something like TCP or UDP) makes its way from one place to another by routing over one or more "hops"; this is always software code, but is sometimes implemented in the OS and sometimes implemented in the network device (your LAN card, for example)
base-T (Ethernet), token ring, etc. - here we are physically getting into how pieces of hardware talk to one another, i.e., which wire corresponds to a particular type of signal; this is always implemented in hardware
electricity /photons - the laws that govern (or at least define) how electrons (or photons) flow over a conductive material or over the air; this is usually implemented in hardware ;)
In a sense, these are all "protocols" (a set of rules or expected behaviors that allow communication to take place), and they're built on one another.
Bear in mind that (aside from electricity) this is not an exhaustive list of the sort of protocols that exist at any of these layers!
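You can watch two of these layers cooperate from a few lines of Python: the operating system provides TCP/IP, and the application protocol (HTTP in this example) is nothing more than agreed-upon bytes written on top of it:

    import socket

    # TCP connection courtesy of the operating system's protocol stack...
    with socket.create_connection(('example.com', 80)) as s:
        # ...and a hand-written HTTP/1.1 request as the application protocol.
        s.sendall(b'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n')
        response = b''
        while chunk := s.recv(4096):
            response += chunk
    print(response.split(b'\r\n')[0].decode())   # e.g. 'HTTP/1.1 200 OK'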
Edit: Thanks to dmckee for pointing out that electricity isn't the only physical process used in networking ;)
Networking protocols are not pieces of code or software; they are only a set of rules. When software uses a specific networking protocol, the software is known as an implementation. There can be many different software implementations of the same protocol (e.g. Windows and UNIX have different TCP/IP implementations). It is possible to understand networking protocols without any knowledge of programming.
EDIT: How are they implemented? Here's a paper on taking an abstract specification of a protocol and implementing it in C. You'll see that less strict protocols leave out certain details that programmers have to guess at, which makes some implementations incompatible with others.
A network protocol is basically like a spoken language. It is implemented by code that sends and receives specially prepared messages over the network/internet, much like the vocal cords you need to speak (the network and hardware) and the brain needed to actually understand what someone said (the protocol stack/software).
Sometimes protocols are implemented directly in hardware [for speed reasons] (like the Ethernet protocol for LANs), but software/code is always required to do something useful with a protocol.
This might be interesting for you:
The OSI Model
Protocol (Computing)
Software implements the rules defined in the protocol; some protocols are formally defined and some informally.
A protocol is a set of rules governing the communication between two entities.
In the computer/programming context, a protocol is a set of rules governing the communication between two programs.
In the computer network context, a protocol is a set of rules governing the communication between two programs, well, over a network.
In computers, in the end, everything is embodied in code...
Protocols are basically sets of rules. The way to implement them is first of all to draw a state machine diagram, as it completely specifies what the current state is, how the state changes on the basis of input, and what output actions are performed.
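As a toy illustration of the idea, a state machine can be as simple as a transition table; the states and events below are a simplified sketch in the spirit of TCP's handshake, not the real protocol:

    # (state, event) -> next state
    TRANSITIONS = {
        ('CLOSED',      'open'):    'SYN_SENT',
        ('SYN_SENT',    'syn_ack'): 'ESTABLISHED',
        ('ESTABLISHED', 'close'):   'FIN_WAIT',
        ('FIN_WAIT',    'ack'):     'CLOSED',
    }

    state = 'CLOSED'
    for event in ['open', 'syn_ack', 'close', 'ack']:
        state = TRANSITIONS[(state, event)]   # an unexpected event raises KeyError
        print(f'{event:>8} -> {state}')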
Your answer is a very short one:
BY READING THE RFC.
The main networking problem is to share data between computers. Each networking protocol tries to solve a little part of that major problem. Some of the protocols are implemented as software, some others as hardware. In short, protocols, like algorithms, can be implemented in many programming languages.
Back to TCP: it is implemented by the operating system.