I am aware of current Ethernet technology and its non-deterministic behavior due to CSMA/CA/CD. I also see a lot of news around Time Sensitive Networking.
Could anyone briefly explain how TSN could change or enhance timing and synchronization, and how it is related to IEEE 1588 (PTP), etc.?
Your question is way too broad, but anyway...
First:
All Ethernet links these days are full duplex, so collision handling mechanisms such as CSMA/CA/CD have been a thing of the past for about 15 years, I would say: Ethernet is no longer a shared medium (as the air is for Wi-Fi), so there simply aren't any collisions, and no need to avoid them. Did you get that "non-deterministic because of CSMA/CA/CD" from a book or a teacher? Then those would need a serious update.
(And so no, the collision risk and its avoidance mechanism are not the cause of "current Ethernet technology and its non-deterministic behavior", especially not with the word "current".)
Then:
About TSN (Time Sensitive Networking): TSN is just the new name for an IEEE 802.1 task group that was first called AVB, for "Audio Video Bridging", as you can see here (2005 or 2008 to 2012):
http://www.ieee802.org/1/pages/avbridges.html
Note: The Audio/Video Bridging Task Group was renamed the "Time-Sensitive Networking Task Group" in November, 2012. Please refer to that page for further information. The rest of this page has been preserved as it existed at that time, without some obsolete meeting information.
The change was made to reflect its now broader perspective: not only for pro audio/video distribution use, but also for automotive and, later, industrial applications as well. So the work continues here:
http://www.ieee802.org/1/pages/tsn.html
As a result, you'll actually find most information about the fundamentals of TSN by googling "Ethernet AVB" rather than "Ethernet TSN". The Wikipedia page, carefully maintained by people directly involved with the technology, is a good start:
https://en.wikipedia.org/wiki/Audio_Video_Bridging
Also, as with every technology, there is a technical side (that's the IEEE AVB->TSN group), and there is a marketing side, taking care of branding, use cases, and (very importantly) certification programs to label and guarantee the interoperability of products, so as to have a healthy ecosystem. For AVB/TSN, this marketing side is handled by the AVnu Alliance SIG (Special Interest Group), founded in 2009:
http://avnu.org/
There too, you can find a lot of information in the knowledge base section of the website (technologies, whitepapers, specifications, FAQs): why it was made (what problem it solves), how it works, and what the use cases are in the various fields it targets.
Some final words:
AVB/TSN is not a single protocol, but rather a set of protocols, put together with possible variants according to the use case. Example: it was originally designed with auto-configuration/plug-and-play built in (geared towards sound engineers, without the need for network engineers), but its automotive profile uses static configuration instead (for reduced boot and configuration time, and less code/hardware in cost-reduced embedded devices; you're not going to change a car's network topology or the roles of its nodes every day anyway).
And that makes a BIG pile of standards: the base IEEE AVB standards put together, the last one published in 2013 IIRC, were already about 1,500 pages, and TSN is now expanding that. Distribution of a common wall clock to participants, a prerequisite for synchronization, with sub-microsecond precision, is a big, complex problem in itself to start with. That is true with a statically designated clock reference ("PTP master") as in IEEE 1588, and even more so with an elected, constantly monitored and possibly re-elected GM ("Grand Master") chosen via a BMCA (Best Master Clock Algorithm) as in IEEE 802.1AS.
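To give a feel for just the timing part: at their core, both IEEE 1588 and 802.1AS estimate a slave clock's offset from a timestamped two-way exchange. Here is a minimal sketch of that arithmetic, heavily simplified (it assumes a symmetric path and ignores rate correction, hardware timestamping and the BMCA entirely):

    # Toy numbers: the slave clock runs 500 ns ahead, one-way path delay is 1000 ns.
    def ptp_offset_and_delay(t1, t2, t3, t4):
        """t1: master sends Sync, t2: slave receives it,
        t3: slave sends Delay_Req, t4: master receives it."""
        offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
        delay = ((t2 - t1) + (t4 - t3)) / 2   # estimated one-way path delay
        return offset, delay

    t1 = 0
    t2 = t1 + 1000 + 500     # delay + offset, read on the slave's clock
    t3 = t2 + 10_000         # slave replies some time later
    t4 = t3 - 500 + 1000     # minus offset, plus delay, on the master's clock
    print(ptp_offset_and_delay(t1, t2, t3, t4))  # (500.0, 1000.0)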
Also, all this requires implementation not only in the end nodes, but also in the switches (bridges), which take an active part in it at pretty much every level (clocking, but also bandwidth reservation, then admission and traffic shaping). So some parts of the standards are relevant to nodes, others to bridges.
So all this is really quite big and complex, and going from 1,500 pages (which I once read from cover to cover), and it's now more than that, to "briefly explain how TSN could change or enhance timing and synchronization, and how it is related to IEEE 1588 (PTP), etc." is a bit challenging... especially the "etc." part... :-)
Hope these few pointers help though.
I am researching about Peer-To-Peer network architecture for games.
What I have read from multiple sources is that the peer-to-peer model makes it easy for people to cheat: sending incorrect data about your game character, whether it is a wrong position or the amount of health points you have.
Now I have read that one of the things that makes peer-to-peer more secure is to put an anti-cheat system into your game, which checks things like how fast someone has moved from spot A to spot B, or whether someone's health points changed drastically without a reason.
I have also read about lockstep, which is described as a "handshake" between all the clients in a peer-to-peer network, in which clients promise not to do certain things, for instance "move faster than X" or "jump higher than Y", and then their actions are compared against the rules set in the "handshake".
To me this seems like an anti-cheat system.
What I am asking in the end is: what is lockstep in the peer-to-peer model? Is it an anti-cheat system or something else? Where should this system be placed in a peer-to-peer network: on every player's computer, or could it work if it is not on all of the players' computers? And should this system control the whole game, or only a subset?
Lockstep was designed primarily to save on bandwidth (in the days before broadband).
Question: How can you simulate (tens of) thousands of units, distributed across multiple systems, when you have only a vanishingly small amount of bandwidth (14400-28800 baud)?
What you can't do: Send tens of thousands of positions or deltas, every tick, across the network.
What you can do: Send only the inputs that each player makes, for example, "Player A orders this (limited size) group ID=3 of selected units to go to x=12,y=207".
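To make the bandwidth argument concrete, here is a hypothetical encoding (the field layout and sizes are invented purely for illustration):

    import struct

    # One input command: player id, group id, target x, target y = 6 bytes.
    def encode_move_command(player_id, group_id, x, y):
        return struct.pack("<BBhh", player_id, group_id, x, y)

    cmd = encode_move_command(0, 3, 12, 207)   # "Player A orders group 3 to (12, 207)"
    print(len(cmd))                            # 6 bytes; a handful of these per tick

    # Versus broadcasting raw state: 10,000 units x two 16-bit coordinates,
    # every tick -- hopeless at 28.8k.
    print(10_000 * struct.calcsize("<hh"))     # 40,000 bytes per tick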
However, the onus of responsibility now falls on each client application (or rather, on the developers of the P2P client code) to transform those inputs into exactly the same gamestate on every logic tick. Otherwise you get synchronisation errors and simulation failure, since no peer is authoritative. These sync errors can result from a great deal more than just cheaters; they can arise in many legitimate, non-cheating scenarios (and indeed, when I was a young man in the '90s playing lockstepped games, this was a frequent frustration even over LAN, which should be reliable).
So now you are using only a tiny fraction of the bandwidth. But the meticulous coding required to be certain that clients do not produce desync conditions makes this a lot harder to code than an authoritative server, where non-sane inputs or gamestate can be discarded by the server.
Cheating: It is easy to see things you shouldn't be able to see: every client has all the simulation data available. It is very hard to modify the gamestate without immediately crashing the game.
I accidentally stumbled across this question in Google search results, and thought I might as well answer it years later. For future generations, you know :)
Lockstep is not an anti-cheat system; it is one of the common P2P network models used to implement online multiplayer in games (most notably in strategy games). The basic concept is fairly straightforward:
The game simulation is split into fairly short time frames.
After each frame, players collect the input commands generated during that frame and send them over the network.
Once all the players have received the commands from all the other players, they apply them to their local game simulation during the next time frame.
If the simulation is deterministic (as it must be for lockstep to work), then after applying the commands all the players will have the same world state. Getting the determinism right is arguably the hardest part, especially for cross-platform games.
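For illustration, here is a minimal, hypothetical sketch of such a deterministic core (the class and field names are invented for this example): integer-only state, commands applied in a canonical order, and a state hash that peers can periodically compare to catch desyncs:

    import hashlib
    import struct

    class LockstepSim:
        def __init__(self):
            self.positions = {}  # unit id -> (x, y); integers only, for determinism

        def apply_frame(self, commands):
            # Apply commands in a canonical order so every peer agrees
            for player_id, unit_id, dx, dy in sorted(commands):
                x, y = self.positions.get(unit_id, (0, 0))
                self.positions[unit_id] = (x + dx, y + dy)

        def state_hash(self):
            # Peers exchange this hash periodically; a mismatch means desync
            h = hashlib.sha256()
            for unit_id in sorted(self.positions):
                h.update(struct.pack("<qqq", unit_id, *self.positions[unit_id]))
            return h.hexdigest()

    sim = LockstepSim()
    sim.apply_frame([(1, 7, 2, -1), (0, 7, 1, 0)])  # same inputs -> same hash on every peer
    print(sim.state_hash())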
Being a P2P model, lockstep is inherently weak against cheaters, since there is no agent in the network that can be fully trusted; as opposed to, for example, server-authoritative network models, where the developer can trust a privately-owned server that hosts the game. Lockstep does not offer any special protection against cheaters by itself, but it can certainly be designed to be less (or more) vulnerable to cheating.
Here is an old but still good write-up on lockstep model used in Age of Empires series if anyone needs a concrete example.
Can anyone tell me what major and minor (contained within the advertisement packet of BLE signals) are used for? I've heard they're used for differentiating signals with the same UUID, but that raises questions like "why use two?" and "is that just how certain receivers use it?". It would be useful to have a decent explanation.
As per @Larme's comment, I presume you are asking about iBeacon advertisements - these are a special use of BLE. Bluetooth Low Energy service advertisements have a different format and don't include the major/minor.
The iBeacon specification doesn't say how to use major and minor - this is defined by the people who implement solutions using iBeacon. Two numbers just give more flexibility.
A lot of effort went into making BLE use very little power. Accordingly, the iBeacon advertisement has to be quite small in order to minimise the transmission time. I guess the designers decided two 16-bit numbers was a reasonable compromise between power consumption and a usable amount of information.
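For reference, here is a sketch of how the 25-byte iBeacon manufacturer-specific payload is commonly parsed (the example UUID and values below are made up; the layout is Apple's company ID, a type/length pair, the 16-byte proximity UUID, then major, minor and the calibrated TX power):

    import struct
    import uuid

    def parse_ibeacon(mfg_data: bytes):
        company, btype, blen = struct.unpack_from("<HBB", mfg_data, 0)
        if company != 0x004C or btype != 0x02 or blen != 0x15:
            return None  # not an iBeacon frame
        proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
        major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
        return proximity_uuid, major, minor, tx_power

    frame = bytes.fromhex(
        "4c000215"                          # Apple, type 0x02, length 0x15
        "f7826da64fa24e988024bc5b71e0893e"  # example proximity UUID
        "0001"                              # major = 1 (e.g. the store)
        "000a"                              # minor = 10 (e.g. the department)
        "c5"                                # measured power at 1 m (-59 dBm)
    )
    print(parse_ibeacon(frame))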
A typical retail use case could use the major to indicate a store (New York, Chicago, London, etc.) and the minor to indicate the department (shoes, menswear, etc.). The app that detects a beacon can then pass this information to a server, which can send back relevant information - the user's location on a map, or specials for that department, etc. This was discussed in the guide that @Larme linked to.
A solution that presented information on museum exhibits might just use the major number to determine which exhibit the person was near and ignore the minor number. The minor number would still be in the advertisement, of course, the app just wouldn't use it for anything.
Background
Sweden is transitioning to a compulsory law for all business owners handling cash or card transactions, requiring them to implement/buy partially-encrypted POS (point of sale)/cash registers:
Signing and encryption are used to securely store the information from the cash register in the control unit. The control system with a certified control unit is based on the manufacturer for each control unit model obtaining a main encryption key from the Swedish Tax Agency. The manufacturer then uses the main key to create unique encryption keys that are placed in the control unit during the manufacturing process. In order to obtain main encryption keys, manufacturers must submit an application to the Swedish Tax Agency.
Source SKV
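The SKV text doesn't publish the actual algorithms, but a scheme of the shape it describes (a main key held at the manufacturer, unique per-unit keys derived from it, signed register entries) could look roughly like this sketch; the HMAC-based derivation and signing here are stand-ins, not the real SKV scheme:

    import hashlib
    import hmac

    def derive_unit_key(main_key: bytes, unit_serial: str) -> bytes:
        # Per-unit key, derived so the main key never leaves the manufacturer
        return hmac.new(main_key, b"unit-key:" + unit_serial.encode(), hashlib.sha256).digest()

    def sign_entry(unit_key: bytes, entry: bytes) -> bytes:
        # Authentication tag stored alongside the register entry
        return hmac.new(unit_key, entry, hashlib.sha256).digest()

    main_key = bytes(32)  # dummy stand-in for the key issued by the Tax Agency
    unit_key = derive_unit_key(main_key, "SE-0001")
    print(sign_entry(unit_key, b"2010-04-01;receipt 4711;SEK 129.00").hex())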
This has caused somewhat of an uproar among Swedish traders because of the complexity and strong encryption mandated, along with the highly sophisticated technical implementation required from the shop owner's perspective; the alternative is to buy the system from companies who have traversed the documentation, obtained their security keys, built the software and integrated it into the hardware.
So my first question is: does any other country in the world even come close to the preciseness that the Swedish Tax Agency requires of its companies (alongside having extensive guidelines for bookkeeping)?
I'd like to hear about any other encryption schemes of interest and how they are applied through legislation when it comes to verifying transactions and bookkeeping entries. Examples of such legislation could be similar to another Swedish rule: that bookkeeping entries (transactions) must be write-only, written at most 4 days after the occurrence, and only changeable through a tuple of (date, signature of the person making the change, new bookings).
Finally, what are your opinions on these rules? Are we heading towards always-on uplinks from bookkeeping + POS systems to the tax agency's servers, which verify and detect fraudulent patterns in real time, similar to the collective intelligence algorithms out there? Or will there be a backlash against the increased complexity of running a business?
Offhand, I can't think of anywhere else in the world that implements this strict of a requirement. However, some existing POS systems may implement this sort of encryption depending on what the definition of "control unit" is and where the differentiation between "control unit" and "cash register" lies.
The reason I say this is because many POS systems (at least the ones that I've worked with) are essentially a bunch of dumb terminals that are networked to a central database and transaction processing server. No data is actually stored in the cash register itself, so there is only a need to encrypt it on the server side (and over the wire, but I'm assuming a secure network protocol is being used). You would need to get a lawyer to interpret what exactly constitutes a "control unit", but if this can be defined as something on the server side (in a networked POS case like this) then the necessary complexity to implement such a system would not be too onerous.
The difficult case would be where each individual cash register requires a unique encryption key, whether to encrypt data to be stored inside the register itself or to encrypt data before sending it to a central server. This would require a modification or replacement of each cash register, which could indeed prove costly depending on the size of the business and the age of the existing equipment. In many countries (the US in particular), mandating such an extensive and costly change would likely have to be either accompanied by a bill providing funds to businesses (to help pay for the equipment conversion) or written in a manner more like "all Point-Of-Sale equipment manufactured or sold after {{{some future date}}} must implement the following features:". Implementing rules that place expensive burdens on businesses is a good way for politicians to lose a lot of support, so it's not likely that a rule like this will get implemented over a short period of time without some kind of assistance being offered.
The possibly interesting case would be the "old-fashioned" style of cash registers, which essentially consist of a cash drawer, calculator, and receipt printer and store no data whatsoever. This law may require such systems to start recording transaction information (ask your lawyer). Related would be the case where transactions are rung up by hand, written on a paper ticket (as is commonly done in some restaurants and small stores in the US). I often find it amusing how legislation focuses on such security for high-tech systems but leaves the "analog" systems unchanged and wide open for problems. Then again, Sweden may not be using older systems like this anymore.
I'm not sure exactly what US law requires in terms of encrypted records, but I do know that certain levels of security are required by many non-government entities. For example, if a business wants to accept credit card payments, then the credit card company will require them to follow certain security and encryption guidelines when handling and submitting credit card payment information. This is in part dictated by the local rules of legal liability. If a transaction record gets tampered with, lost, or hijacked by a third party the security of the transaction and record-keeping systems will be investigated. If the business did not make a reasonable effort to keep the data secure and verified, then the business may be held at fault (or possibly the equipment manufacturer) for the security breach which can lead to large losses through lawsuits. Because of this, companies tend to voluntarily secure their systems in order to reduce the incidence of security breaches and to limit their legal liability should such a breach happen.
Since device manufacturers can sell their equipment internationally, equipment complying with these Swedish restrictions will likely end up being used in other places as well over time. If the system ends up being successful, other businesses will probably volunteer to use such an encrypted system, even in the absence of legislation forcing them to do so. I compare it to the RoHS rules that the EU passed several years ago. Many countries that did not sign the RoHS legislation now manufacture and use RoHS-certified materials, not because of a legal mandate but because they are available.
Edit: I just read this in the linked article:
Certified control unit
A certified control unit must be connected to a cash register. The control unit must read registrations made by the cash register.
To me, this sounds like the certified control unit attaches to the register but is not necessarily built into it (or necessarily unique to one register). This definition alone doesn't (to my non-lawyer ears) sound like it prohibits existing cash registers from being connected over a network to a certified control unit on the server side. If so, this might be as simple as installing some additional software (and possibly a peripheral device) on the server side. The details link may clarify this, but it's not in English, so I'm not sure what it says.
These types of requirements are becoming more and more common across most of Europe (and to a lesser, but increasing, extent North America). I'm not sure exactly which Europe-based banks are moving fastest on this, but in North America one of the front-runners is First Data (who have already made available the fully-encrypted POS devices like you describe needing).
I would further postulate that most merchants will not develop systems internally that do this (due to the PCI requirements, and challenges in doing so), but will instead rely on their merchant providers for the required technology.
I am working on a client proposal and they will need to upgrade their network infrastructure to support hosting an ASP.NET application. Essentially, I need to estimate peak usage for a system with a known quantity of users (currently 250). A simple answer like "you'll need a dedicated T1 line" would probably suffice, but I'd like to have data to back it up.
Another question referenced NetLimiter, which looks pretty slick for getting a sense of what's being used.
My general thought is that I'll fire the web app up and use the system the way I anticipate it will be used at the customer, at a leisurely pace, over a certain time span, and then multiply the bandwidth usage by the number of users and divide by the time.
This doesn't seem very scientific. It may be good enough for a proposal, but I'd like to see if there's a better way.
I know there are load tools available for testing web application performance, but it seems like these would not accurately simulate peak user load for bandwidth testing purposes (too much at once).
The platform is Windows/ASP.NET and the application is hosted within SharePoint (MOSS 2007).
In the absence of a good reporting tool for bandwidth usage, you can always do a rough guesstimate.
N = Number of page views in busiest hour
P = Average Page size
(N * P) / 3600 = Average traffic per second.
The server itself will see a lot more internal traffic, probably to a db server/NAS/etc. But outward-facing, that should give you a very rough idea of utilization. Obviously you will need to provision far more than the above value, as you never want to be at 100% utilization, and to allow for other traffic.
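For example (every input number below is a placeholder assumption; as noted next, real log data beats an arbitrary user count):

    page_views_busiest_hour = 250 * 40    # N: e.g. 250 users x 40 page views each
    avg_page_size_bytes = 150 * 1024      # P: 150 KB per page, assets included

    avg_bytes_per_second = page_views_busiest_hour * avg_page_size_bytes / 3600
    avg_mbps = avg_bytes_per_second * 8 / 1_000_000

    headroom = 4  # never plan to run near 100% utilization
    print(f"average: {avg_mbps:.2f} Mbit/s, provision for: {avg_mbps * headroom:.2f} Mbit/s")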
I would also suggest not using an arbitrary number like 250 users. Use the heaviest production day/hour as a reference. Double or triple it if you like; with good log files/user auditing, that will give you the expected distribution of user behavior and make your guesstimate more accurate.
As another commenter pointed out, a data center is a good idea when redundancy and bandwidth availability become a concern. Your needs may vary, but do not dismiss the suggestion lightly.
There are several additional questions that need to be asked here.
Is it 250 total users, or 250 concurrent users? If concurrent, is that a peak of 250, or 250 typically? If it's 250 total users, are they all expected to use it at the same time (e.g., an intranet site, where people must use it as part of their job), or is it more of a community site where they may or may not use it? I assume from the way you've worded this that it is 250 total users, but that still doesn't tell enough about the site to make an estimate.
If it's a community or "normal" internet site, it will also depend on the usage; e.g., are people really going to be using this intensely, or is it something that some users will simply log into once and then forget? This can be a tough question from your perspective, since you will want to assume the former, but if you spend a lot of money on network infrastructure and no one ends up using it, that can be a very bad thing.
What is the site doing? At the low end of the spectrum, there is a "typical" web application, where you have reasonably sized (say, 1-2k) pages and a handful of images. A bit more intense is a site with a lot of media, e.g., flickr-style image browsing. At the upper end is a site with a lot of downloads: streaming movies, or just large files or datasets being downloaded.
This is getting a bit outside the scope of your question, but another thing to look at is the future of the site: is the usage going to double in the next year, or the next month? Be wary of locking into a long-term contract for something like a T1 or fiber connection without having some way to upgrade.
Another question is reliability: do you need redundancy in connections? It can cost a lot up front, but there are ways to do multi-homed connections where you can balance traffic across a couple of links, and then fall back to just one (albeit with reduced capacity) in the event of a failure.
Another option to consider, which effectively lets you avoid this entire question, is to just host the application in a datacenter. You pay a relatively low monthly fee (low compared to the cost of a dedicated high-quality connection), and you get as much bandwidth as you need (e.g., most hosting plans will give you something like 500 GB of transfer a month to start with, and some will just give you unlimited). The datacenter is also going to be more reliable than anything you can build (short of your own 6+ figure datacenter) because they have redundant internet, power backup, redundant cooling, fire protection, and physical security, and they have people who manage all of this for you, so you never have to deal with it.
I'm thinking about making a networked game. I'm a little new to this, and have already run into a lot of issues trying to put together a good plan for dead reckoning and network latency, so I'd love to see some good literature on the topic. I'll describe the methods I've considered.
Originally, I just sent the player's input to the server, simulated there, and broadcast changes in the game state to all players. This made cheating difficult, but under high latency things were a little difficult to control, since you don't see the results of your own actions immediately.
This Gamasutra article has a solution that saves bandwidth and makes local input appear smooth by simulating on the client as well, but it seems to throw cheat-proofing out the window. Also, I'm not sure what to do when players start manipulating the environment, pushing rocks and the like. These previously neutral objects would temporarily become objects the client needs to send PDUs about, or perhaps multiple players would at once. Whose PDUs would win? When would the objects stop being doubly tracked by each player (to compare with the dead-reckoned version)? Heaven forbid two players engage in a sumo match (e.g. start pushing each other).
This gamedev.net post presents the Gamasutra solution as inadequate and describes a different method, but it doesn't really fix my collaborative boulder-pushing example either. Most other things I've found are specific to shooters. I'd love to see something more geared toward games that play like SNES Zelda, but with a little more physics/momentum involved.
Note: I'm not asking about physics simulation here -- other libraries have that covered. Just strategies for making games smooth and reactive despite network latency.
Check out how Valve does it in the Source Engine: http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
If it's for a first person shooter you'll probably have to delve into some of the topics they mention such as: prediction, compensation, and interpolation.
I find this network physics blog post by Glenn Fiedler, and even more so the response/discussion below it, awesome. It is quite lengthy, but worthwhile.
In summary
The server cannot keep up with re-running the simulation whenever client input is received in a modern game physics simulation (i.e. vehicles or rigid body dynamics). Therefore the server orders all clients to run latency+jitter (in time) ahead of the server, so that all incoming packets come in just in time, before the server needs them.
He also gives an outline of how to handle the type of ownership you are asking about. The slides he showed at GDC are awesome!
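If it helps, that scheduling idea boils down to something like this toy sketch (the safety margin is my own assumption, not from the post):

    # Each client simulates far enough ahead that its input for server tick T
    # arrives just before the server executes T.
    def client_lead_ms(one_way_latency_ms, jitter_ms, safety_ms=5):
        return one_way_latency_ms + jitter_ms + safety_ms

    print(client_lead_ms(40, 10))  # a 40 ms / 10 ms-jitter client runs ~55 ms ahead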
On cheating
Mr Fiedler himself (and others) state that this algorithm suffers from not being very cheat-proof. This is not true. This algorithm is no easier or harder to exploit than traditional client/server prediction (see the article regarding traditional client/server prediction in @CD Sanchez' answer).
To be absolutely clear: the server is not easier to cheat simply because it receives networked physical positioning just in time (rather than x milliseconds late, as in traditional prediction). The clients are not affected at all, since they all receive the positional information of their opponents with exactly the same latency as in traditional prediction.
No matter which algorithm you pick, you may want to add cheat protection if you're releasing a major title. If you are, I suggest adding encryption against stooge bots (for instance an XOR stream cipher where the keystream is generated by a pseudo-random number generator) and simple sanity checks against cracks. Some developers also implement algorithms to check that the binaries are intact (to reduce the risk of cracking) or to ensure that the user isn't running a debugger (to reduce the risk of a crack being developed), but those are more debatable.
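As a sketch of that XOR-stream idea (deliberately simple; this only deters casual packet sniffing and stooge bots, it is not real cryptography):

    import random

    # XOR the payload with a PRNG keystream; both ends share the seed.
    def xor_stream(data: bytes, seed: int) -> bytes:
        keystream = random.Random(seed)
        return bytes(b ^ keystream.randrange(256) for b in data)

    packet = xor_stream(b"pos:12,207;hp:100", seed=0xC0FFEE)
    print(xor_stream(packet, seed=0xC0FFEE))  # symmetric: the same call decrypts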
If you're just making a smaller indie game that may only be played by a few thousand players, don't bother implementing any anti-cheat algorithms until 1) you need them, or 2) the user base grows.
We have implemented a multiplayer snake game based on a mandatory server and remote players that make predictions. Every 150 ms (in most cases) the server sends back a message containing all the consolidated movements sent by each remote player. If a remote client's movements arrive late at the server, the server discards them. The client will then replay the last movement.
Check out the Networking education topics at the XNA Creator's Club website. It delves into topics such as network architecture (peer-to-peer or client/server), network prediction, and a few other things (in the context of XNA, of course). This may help you find the answers you're looking for.
http://creators.xna.com/education/catalog/?contenttype=0&devarea=19&sort=1
You could try imposing latency on all your clients, based on the average latency in the area. That way the clients can work around the latency issues, and the game will feel similar for most players.
I'm of course not suggesting that you force a 500 ms delay on everyone, but people with 50 ms can be fine with 150 ms (an extra 100 ms added) in order for the gameplay to appear smoother.
In a nutshell; if you have 3 players:
John: 30ms
Paul: 150ms
Amy: 80ms
After the calculations, instead of sending the data back to all the clients at the same time, you account for their latency and start sending to Paul and Amy before John, for example (see the sketch below).
But this approach is not viable in extreme latency situations, where dial-up connections or wireless users could really mess things up for everybody. But it's an idea.
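For what it's worth, the offset calculation itself is trivial (a toy sketch using the numbers above):

    # Delay each send so updates arrive everywhere at roughly the same moment.
    latencies_ms = {"John": 30, "Paul": 150, "Amy": 80}

    slowest = max(latencies_ms.values())
    send_offsets = {name: slowest - lat for name, lat in latencies_ms.items()}
    print(send_offsets)  # {'John': 120, 'Paul': 0, 'Amy': 70} -> send to Paul first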