Part of the problem with this question is that I'm not even sure exactly what I need to ask, so I'll start with the situation and work outward from there.
One project I'm working on involves the use of Comet via the aspComet library. The use case is somewhat of a collaborative slideshow: one person runs the bulk of it, with one or more participants able to perform certain actions. Low latency between when an action is performed and when everyone else sees its effect is important.
Previously, it was just running on one server. Now, we're wanting to scale it out a bit, more for reliability than performance reasons. So, we have some boxes out in Rackspace's cloud and all that fun stuff.
I knew from the start that I was going to need to change the way the Comet stuff works, since different people in the same "show" might be on different servers, and I have no way of knowing which "show" they belong to until after they have already arrived on the site.
I initially tackled this using the WCF Mesh provider, which wasn't well documented to start with. Now I'm running into issues where messages dispatched to it sometimes get lost or delayed (I'm not 100% sure what is going on there), which screws up the long poll for Comet and breaks things in rather strange ways: clicking a button may trigger an event, or it may hang for 10 seconds (the long-poll duration) and not actually do anything.
More research leads me to believe one of the .NET service bus providers may do what I need. However, I can't find examples that cover what I need:
No single point of failure (outside of a database)
No hardcoding of peers.
Near-realtime (no polling, event based would be best)
My ideal solution would be that when a server comes up, it lets the other servers know of its existence (even if that's just a row in a table somewhere), and they start sending broadcast messages among each other, with each server being both a publisher and a subscriber. This is roughly what I had with the WCF Mesh provider, but I'm not overly confident in that code.
Can anyone point me in the right direction with this? Even proper terms to look for in the docs for service bus providers would be good at this point. Or are service buses not what I want? At this point I would settle for setting up a Jabber server on each web server and using that, if it could fit within my constraints.
I can't speak a ton to NServiceBus, but I expect the answers will be similar.
Single point of failure: MSMQ can use multicast, which means each endpoint broadcasts its existence and no DB table is needed. RabbitMQ uses an exchange-to-queue binding process, which means that as long as the Rabbit instance or cluster is up, messages still exist. RabbitMQ can be clustered; MSMQ cannot. *Note: you might have issues with multicast on Rackspace; I have no idea how they handle it. If so, you'll have to fall back on the runtime services for MSMQ (not RabbitMQ), and that would create a single point of failure, because everyone coordinates control messages through a single point.
Hardcoding of peers: discussed a bit above; MSMQ's multicast handles it. With Rabbit it can also be done: just bind queues to the exchange you want to listen to. MassTransit takes care of this for you.
Near-realtime: both use messaging, which is near-realtime. There's no polling in your message consumer code.
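To make that concrete, here's a minimal pub/sub sketch with MassTransit over RabbitMQ (this uses the current MassTransit API; the ShowEvent type, queue naming, and broker address are all made up for illustration). Each server gets its own queue bound to the message type's exchange, so a Publish from any box fans out to every other box with no peer list anywhere:

```csharp
// Minimal MassTransit-over-RabbitMQ sketch. RabbitMQ's exchange-to-queue
// binding does the "peer discovery"; nothing is hard-coded.
using System;
using System.Threading.Tasks;
using MassTransit;

public class ShowEvent
{
    public string ShowId { get; set; }
    public string Action { get; set; }
}

class Program
{
    static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost");
            // Unique queue name per server so every instance receives a copy.
            cfg.ReceiveEndpoint($"show-events-{Environment.MachineName}", e =>
            {
                e.Handler<ShowEvent>(ctx =>
                    Console.Out.WriteLineAsync($"{ctx.Message.ShowId}: {ctx.Message.Action}"));
            });
        });

        await bus.StartAsync();
        await bus.Publish(new ShowEvent { ShowId = "demo", Action = "next-slide" });
        await Task.Delay(1000);   // let the handler run before shutting down
        await bus.StopAsync();
    }
}
```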
I think a service bus is a reasonable fit for what you're trying to do. More details would likely be needed to say for sure, but the general messaging approach is correct. There are other, more lightweight messaging libraries if you decide you just want a thin layer on top of RabbitMQ and let Rabbit handle most of the work.
To get started with MassTransit, we have documentation up at http://readthedocs.org/projects/masstransit/ and a mailing list at http://groups.google.com/group/masstransit-discuss. Join the mailing list if you have future questions and someone will try to help you out.
Related
I want to fetch the data of a stock. Since the data changes very fast, is there any way to pull it 50-100 times a second from trading websites?
And can we implement that using a Raspberry Pi 4 8 GB model?
A RasPi 4 should be more than adequate for this task; both the Ethernet and WiFi hardware are capable of connections at these speeds (unless you're running a bunch of other stuff on it). Consider where your bottlenecks may be, likely your ISP or other network traffic. Consider avoiding WiFi in favor of Cat5e or Cat6 cable. Consider hanging this device off your router (edge) to keep LAN traffic lower, and consider QoS settings if you think this traffic may compete with other LAN traffic.
This appears to be a general question with no specific platform in mind. For stocks, there are lots of platforms to choose from.
APIs for trading platforms often include a method to open a stream. Instead of a full TCP conversation for each price check, a stream tells the server to just keep sending data. There are timeout mechanisms of course, but it is good to close the stream gracefully. It's polite, since you're consuming server resources at a different scale; I've seen some financial APIs monitor and throttle stream subscribers who leave sessions open.
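As a rough sketch, here's what consuming such a stream can look like over WebSockets in C# (the endpoint URL and subscribe payload are placeholders; your platform's API docs will have the real ones):

```csharp
// Sketch: subscribe to a hypothetical market-data stream and print updates.
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class StreamDemo
{
    static async Task Main()
    {
        using var ws = new ClientWebSocket();
        await ws.ConnectAsync(new Uri("wss://example-broker.test/stream"), CancellationToken.None);

        // Many APIs expect a subscribe message right after connecting.
        var sub = Encoding.UTF8.GetBytes("{\"action\":\"subscribe\",\"symbols\":[\"AAPL\"]}");
        await ws.SendAsync(new ArraySegment<byte>(sub), WebSocketMessageType.Text, true, CancellationToken.None);

        var buffer = new byte[8192];
        while (ws.State == WebSocketState.Open)
        {
            var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            if (result.MessageType == WebSocketMessageType.Close) break;
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }

        // Close gracefully; polite, and some APIs throttle clients that don't.
        await ws.CloseAsync(WebSocketCloseStatus.NormalClosure, "done", CancellationToken.None);
    }
}
```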
For some APIs/languages you can find solid classes already built on GitHub, although if you're simply pulling and reading a stream, the code snippets in the platform's API docs should be enough to get you going.
Be sure to find out what other overhead may be involved. For example, if an account or API key is needed to open a stream, then either a session must be opened first or the credentials must be passed along when the stream is opened; the API docs will say. If you're new to this sort of thing, just be a detective and try to infer what is needed. API docs usually try to be precise and technically correct with the absolute minimum word count.
Simply checking the stream should be easy. Depending on how the stream can be handled by your code or script, it may be harder to perform logic on the stream while it is being updated; that's usually a threading issue or a variable-scope issue, depending on the language. For what you're doing I would consider Python or PowerShell, depending on your skill set and other design parameters.
I feel like I'm getting mixed reviews about SignalR and its disconnection functionality, and I'm trying to figure out who is right. These packages move so fast it's hard to tell what information is current; something you find online could be two months old and already outdated.
I've seen many people set up pinging code to tell whether a client is still connected. Yet I see others talking about the Disconnect() function that gets fired on the Hub when a client disconnects. Then some say the Disconnect() method isn't reliable?
Does anyone have the details on this as it stands today? Should I avoid the Disconnect() method because in some cases (which maybe I haven't run into yet) it's not reliable? It's confusing trying to search for information when these things change so often, invalidating older information you find on the web.
There might be a couple of edge cases where you don't get timely notifications, but in general it is reliable. We also raise disconnect events on the client, and we have keep-alive functionality that ensures that if the client doesn't hear from the server within a specified timeout, it will try to reconnect and ultimately disconnect if the reconnects fail. So you can take appropriate action on the client as well.
You can read more about this here http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#connectionlifetime
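For reference, the server-side lifetime hooks look like this in an ASP.NET SignalR 2.x hub (the bool parameter on OnDisconnected arrived in 2.1; the hub name here is made up):

```csharp
// Connection-lifetime hooks in a SignalR 2.x hub.
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class ShowHub : Hub
{
    public override Task OnConnected()
    {
        // Context.ConnectionId identifies this client connection.
        return base.OnConnected();
    }

    public override Task OnReconnected()
    {
        // The client re-established its connection within the disconnect timeout.
        return base.OnReconnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // stopCalled == true : the client explicitly stopped the connection.
        // stopCalled == false: the disconnect timeout expired (this is the
        // case that can arrive late in the edge cases mentioned above).
        return base.OnDisconnected(stopCalled);
    }
}
```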
I am sending small messages consisting of XML (about 1-2 KB each) across the internet from a Windows application to an ASP.NET web service.
99% of the time this works fine, but sometimes a message will take an inordinate amount of time to arrive: 25-30 seconds instead of the usual 4-5 seconds. This delay also causes the messages to arrive out of sequence.
Is there any way I can solve this so that all the messages arrive quickly and in sequence, or is that not possible to guarantee when using a web service in this manner?
If it's not possible to resolve, can I please get recommendations for a low-latency messaging framework that can deliver messages in order over the internet?
Thanks.
Is there any way I can solve this so that all the messages arrive quickly and in sequence, or is that not possible to guarantee when using a web service in this manner?
Using just web services, this is not possible. You will always run into situations where something occasionally takes much longer than it "should". This is the nature of network programming, and you have to work around it.
I would also recommend XMPP for something like this. Have a look at xmpp.org for info on the standard, and at jabber-net for a set of client libraries for .NET.
Well, this is a little off target, but have you looked into the XMPP (Jabber) protocol?
It's the messaging system that GTalk uses, and it's quite simple to use. The only downside is that you will need a stateful service to receive and process the messages.
I also agree with @Mat's comment. It was the first solution that came to mind, then I remembered that I've used XMPP in the past to accomplish fast, small, and reliable messaging between servers.
http://xmpp.org/about-xmpp/
If you search Google you will easily find .NET libraries that support this protocol, and there are plenty of free Jabber servers out there.
One way to ensure your messages are sent in sequence and are resolved as a batch is to make one call to the web service carrying all the messages that depend on each other, as a single batch.
Traditionally, when you make calls to a web service, you don't expect them to occur in a specific order. It sounds like there is an implicit sequence in which the data needs to arrive at the destination application, which makes me think you need to group your messages and send them together to guarantee that order.
No matter how fast the messaging framework is, you cannot prevent a race condition that sends messages out of order, unless you send one message that carries your data in the correct order.
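A sketch of that single-batch idea for a classic ASMX service (the method name and processing are hypothetical):

```csharp
// The whole dependent group travels in one call, so processing order is
// the array order, not network timing.
using System.Web.Services;

public class MessageService : WebService
{
    [WebMethod]
    public void SubmitBatch(string[] xmlMessages)
    {
        // Processed strictly in array order; no cross-request race possible.
        foreach (var xml in xmlMessages)
            ProcessMessage(xml);
    }

    private void ProcessMessage(string xml)
    {
        // application-specific handling
    }
}
```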
If you are sending messages in a sequence across the internet, you will never know how long each message will take to arrive. One possible solution is to include in each message its position in the sequence, and to implement logic at each endpoint to order the messages before processing them. If you receive a message out of sequence, you can wait for the missing message, or ask the other endpoint to resend it.
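A minimal sketch of that reordering logic, assuming each message carries a sequence number:

```csharp
// A reorder buffer: out-of-order messages are held until the gap fills,
// then released in order.
using System.Collections.Generic;

public class Resequencer<T>
{
    private readonly SortedDictionary<long, T> _pending = new SortedDictionary<long, T>();
    private long _next;

    public Resequencer(long firstSequence = 0) { _next = firstSequence; }

    // Returns every message that has become deliverable, in sequence order.
    public IEnumerable<T> Accept(long sequence, T message)
    {
        if (sequence < _next) yield break;   // duplicate or already delivered
        _pending[sequence] = message;
        while (_pending.TryGetValue(_next, out var m))
        {
            _pending.Remove(_next);
            _next++;
            yield return m;
        }
    }
}
```

A resend request would sit on top of this: if `_pending` is non-empty but `_next` hasn't advanced for a while, ask the sender to retransmit sequence `_next`.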
I am looking for networking designs and tricks specific to games. I know about a few of the problems and have partial solutions to some of them, but there may be problems I can't see yet. I think there is no definitive answer to this, but I will accept an answer I really like. I can think of four categories of problems.
Bad network
The messages sent by the clients take some time to reach the server. The server can't just process them FCFS, because that is unfair to players with higher latency. A partial solution would be timestamps on the messages (a round-trip offset-estimation sketch follows this list), but you need two things for that:
Being able to trust the client's clock. (I think this is impossible.)
Constant latencies you can measure. What can you do about variable latency?
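For context, the usual partial workaround here is an NTP-style offset estimate; a sketch, assuming roughly symmetric latency:

```csharp
// The client records send time t0, the server stamps its own clock t1 into
// the reply, and the client records receive time t2 (all in ms). Repeat and
// filter (e.g., keep the sample with the smallest round trip) to cope with
// jitter; asymmetric routes still bias the estimate.
static class ClockSync
{
    public static (double OffsetMs, double RoundTripMs) Estimate(double t0, double t1, double t2)
    {
        double roundTrip = t2 - t0;                      // total RTT
        double offset = ((t1 - t0) + (t1 - t2)) / 2.0;   // server clock minus client clock
        return (offset, roundTrip);
    }
}
```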
A lot of games use UDP, which means messages can be lost. In that case the game tries to estimate the state based on the information it already has. How do you know whether the estimated state is correct once the connection is working again?
In MMO games the server handles a large number of clients. What is the best way to distribute the load? Based on location in the game? Bind groups of clients to servers? Can you avoid sending everything through the server?
Players leaving
I have seen two different behaviours when this happens. In most FPS games, if the player who hosted the game (I guess he is the server) leaves, the others can't play. In most RTS games, if any player leaves, the others can continue playing without him. How is that possible without a dedicated server? Does everyone know the full state? Are they transferring the role of the server somehow?
Access to information
The next problem can be solved by a dedicated server, but I am curious whether it can be done without one. In a lot of games the players should not know the full state of the game; fog of war in RTS and walls in FPS are good examples. However, they need to know whether an action is valid or not (e.g., can you shoot me from there, or are you on the other side of the map?). In this case clients need to validate changes against an unknown state. This sounds like something that could be solved with clever use of cryptographic primitives. Any ideas?
Cheating
Some of the above problems are easy in a trusted-client environment, but that cannot be assumed. Are there solutions that work in, say, an 80% normal user / 20% cheater environment? Can you really make anti-cheat software that works (and does not require ridiculous things like kernel modules)?
I did read this question and some of the answers: https://stackoverflow.com/questions/901592/best-game-network-programming-articles-and-books, but other answers link to unavailable/restricted content. This is a platform/OS-independent question, but solutions for specific platforms/OSs are welcome as well.
Thinking cryptography will solve this kind of problem is a very common and very bad mistake: the client itself, of course, has to be able to decrypt the data, so it is completely pointless. You are not adding security, you're just adding obscurity (and it will be cracked).
Cheating is too game-specific. There are kinds of games where it can't be totally eliminated (aimbots in FPS), and kinds where, if you didn't screw up, it won't be possible at all (server-based turn games).
In general, network problems like these are deeply related to prediction, which is a very complicated subject at best and is very well explained in the famous Valve article on it.
The server can't just process them FCFS, because that is unfair to players with higher latency.
Yes it can. Trying to guess exactly how much latency someone has is no fairer, since latency varies.
In that case the game tries to estimate the state based on the information it already has. How do you know whether the estimated state is correct once the connection is working again?
The server doesn't have to guess at all - it knows the state. The client only has to guess while the connection is down - when it's back up, it will be sent the new state.
In MMO games the server handles a large number of clients. What is the best way to distribute the load? Based on location in the game?
There's no "best way". Geographical partitioning works fairly well, however.
Can you avoid sending everything through the server?
Only for untrusted communications, which generally are so low on bandwidth that there's no point.
In most RTS games, if any player leaves, the others can continue playing without him. How is that possible without a dedicated server? Does everyone know the full state?
Many RTS games maintain the full state simultaneously across all machines.
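A sketch of the usual mechanism, deterministic lockstep: only player commands travel over the wire, every peer runs the full simulation, and a tick only advances once all active players' commands for it have arrived. If a player leaves, the rest already hold the full state and simply stop waiting for him (all names here are made up):

```csharp
using System.Collections.Generic;

class LockstepSim
{
    private readonly HashSet<int> _activePlayers;
    private readonly Dictionary<int, Dictionary<int, string>> _commands = new();
    private int _tick;

    public LockstepSim(IEnumerable<int> players) => _activePlayers = new HashSet<int>(players);

    public void Submit(int tick, int player, string command)
    {
        if (!_commands.TryGetValue(tick, out var byPlayer))
            _commands[tick] = byPlayer = new Dictionary<int, string>();
        byPlayer[player] = command;
    }

    // A departed player is simply no longer waited on.
    public void RemovePlayer(int player) => _activePlayers.Remove(player);

    // Advances only when every active player has submitted for the current tick.
    public bool TryAdvance()
    {
        if (!_commands.TryGetValue(_tick, out var byPlayer)) return false;
        foreach (var p in _activePlayers)
            if (!byPlayer.ContainsKey(p)) return false;
        // apply commands deterministically (e.g., ordered by player id) ...
        _tick++;
        return true;
    }
}
```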
Some of the above problems are easy in a trusted-client environment, but that cannot be assumed.
Most games open to the public need to assume a 100% cheater environment.
Bad network
Players with high latency should buy a new modem; I don't think it's a good idea to add even more latency just because one person in the game has a bad connection. And if you mean minor latency differences, who cares? You will only make things slower and more complicated if you refuse to process FCFS.
Cheating: aimbots and similar
Can you really make anti-cheat software that works? No, you cannot. You can't know whether they are running your program or another program that acts like yours.
Cheating: access to information
If you have a secure connection with a dedicated server you can trust, then cheating, like seeing more state than allowed, should be impossible.
There are a few games where cryptography can prevent cheating: card games like poker, where every player gets a chance to 'shuffle the deck'. Details on Wikipedia: Mental Poker.
With an RTS or FPS you could, in theory, encrypt your part of the game state, send it to everyone, and only hand out decryption keys for the parts they are allowed to see, or at the moment they are allowed to see them. However, I doubt that in 2010 we can do this in real time.
For example, suppose I want to verify that you could indeed be at location B. Then I need to know where you came from and when you were there. But if you told me that beforehand, I knew something I was not allowed to know. If you tell me afterwards, you can tell me anything you want me to believe. You could have told me beforehand, encrypted, and given me the decryption key only when I need to verify it. That would mean you'd have to encrypt every move you make with a different encryption key. Ouch.
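The primitive being described is commit-then-reveal. A hash-commitment sketch (the Commit/Verify helpers are hypothetical; RandomNumberGenerator.GetBytes needs .NET 6+, SHA256.HashData needs .NET 5+):

```csharp
// Publish Commit(move) now, reveal (move, nonce) later; anyone can then
// check you didn't change your story, without learning the move early.
using System.Security.Cryptography;
using System.Text;

static class Commitment
{
    public static (byte[] Hash, byte[] Nonce) Commit(string move)
    {
        byte[] nonce = RandomNumberGenerator.GetBytes(16);  // hides low-entropy moves
        return (HashOf(nonce, move), nonce);
    }

    public static bool Verify(byte[] hash, byte[] nonce, string claimedMove)
        => CryptographicOperations.FixedTimeEquals(HashOf(nonce, claimedMove), hash);

    private static byte[] HashOf(byte[] nonce, string move)
    {
        byte[] data = Encoding.UTF8.GetBytes(move);
        byte[] buf = new byte[nonce.Length + data.Length];
        nonce.CopyTo(buf, 0);
        data.CopyTo(buf, nonce.Length);
        return SHA256.HashData(buf);
    }
}
```

Note the catch from the paragraph above still applies: a fresh nonce/key per move is exactly the "every move with a different encryption key" overhead.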
If you're not implementing a poker site, cheating won't be your biggest problem anyway.
With a lot of people accessing games on mobile devices, a "bad network" can occur when a player is in an area of poor reception or connected to slow WiFi. So it's not just a problem for people connecting from sparsely populated areas; with mobile clients, "bad networks" occur very often and are usually extremely hard to diagnose.
UDP results in packet loss, but even games that use TCP or HTTP can experience problems where client/server communication slows to a crawl while packets are verified to have been sent. With UDP, compensation for packet loss usually depends on what the packets contain. If you're talking about motion data and packets aren't received, the server usually extrapolates along the previous trajectory and applies a position change. How this is handled is usually custom to the game, which is why people often avoid UDP unless their game type requires it. To handle high network latency, games often automatically degrade the features available to users, so that they can still interact with the game without being kicked or experiencing too many broken features.
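As a sketch of that trajectory idea, here's basic dead reckoning from the last known position and velocity (types and fields are made up):

```csharp
// Extrapolate a missing update; a real update later corrects the guess.
struct EntityState
{
    public double X, Y;        // last known position
    public double Vx, Vy;      // last known velocity, units/sec
    public double Timestamp;   // seconds, when this state was received
}

static class DeadReckoning
{
    // Where we assume the entity is 'now', until the next packet arrives.
    public static (double X, double Y) Extrapolate(EntityState last, double now)
    {
        double dt = now - last.Timestamp;
        return (last.X + last.Vx * dt, last.Y + last.Vy * dt);
    }
}
```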
Optimally, you want a logging tool like Loggly that can help you find errors related to bad connections and latency and show you the conditions on the clients and server at the time they happened. This visibility lets you diagnose common problems users experience and develop strategies to address them.
Players leaving
Most games these days have dedicated servers, so this issue is mostly moot. However, sometimes yes, the server can be changed to another client.
Cheating
It's extremely hard to anticipate how players will cheat and to create a cheat-proof system no one can hack. These days, a lot of cheat-detection strategies are based on heuristic analysis of logging and behavioral-analytics data to spot abnormalities when they happen and flag them for review. You should definitely cheat-proof as much as is reasonable, but you also really need an early-detection system that can spot new flaws people are exploiting.
All,
So, I made up a simple protocol that I want to use for a client to talk to a server. It's the typical (I think) three-phase layout:
Connection Establishment (will eventually include capability negotiation)
Actual Data Exchange - packets are happily travelling to and fro, getting interpreted by the respective receiver, which acts on them accordingly
Connection Teardown - one side says "don't wanna no more", the other side says "so be it" (will eventually allow the other side to keep sending data until it is done, instead of simply closing the conversation)
The framework is a simple setup: the server does java.net.ServerSocket.accept() and starts a thread to handle the incoming connection; the client creates a java.net.Socket() to the host/port where the server is waiting. Both sides use java.io.InputStream and java.io.OutputStream and spew data at each other, assembling outgoing and parsing incoming messages. Fine, so far.
So far, the protocol is hard-coded. Connection Establishment and Teardown are pretty much ok, while the Data Exchange part - which I want to be full-duplex - is pretty much a mess.
So, thinks me, let's do this the right way and set up a state machine using, surprise, the design pattern of the same name. I'm pretty clear about what the states should be for the server and the client respectively, what events should trigger transitions, and what actions should be taken when a transition happens. That looks good - on paper, that is. In practice, I've stumbled over a couple of questions that I can't solve on paper.
In particular, the inputs to the state machine are... a little diverse. How can I possibly write data, read data, and check the connection (it might have closed or may be broken) at the same time? Also, the first and third phases should get timers to avoid potentially infinite waits for answers.
So, I'd be grateful for any help that bridges my gap between the theory state machine and the code state machine.
BTW, I can read C/C++/C# too - no need to translate to Java (which is what I'm using).
The state for your machine needs to be stored per "Connection".
Each connecting client might be in a different state, so if you have an object tracking your state, you need an instance of that object for every connection.
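A minimal per-connection sketch (in C#, since you said that's fine), with states, events, and transitions invented to match your three-phase protocol:

```csharp
// Each accepted socket gets its own instance; invalid transitions throw
// instead of being silently ignored.
using System;

enum ConnState { Establishing, Exchanging, TearingDown, Closed }
enum ConnEvent { HelloOk, DataReceived, ByeRequested, ByeAcked, Timeout, IoError }

class ConnectionFsm
{
    public ConnState Current { get; private set; } = ConnState.Establishing;

    public void Handle(ConnEvent ev)
    {
        switch ((Current, ev))
        {
            case (ConnState.Establishing, ConnEvent.HelloOk):
                Current = ConnState.Exchanging; break;
            case (ConnState.Exchanging, ConnEvent.DataReceived):
                /* parse and dispatch the payload */ break;
            case (ConnState.Exchanging, ConnEvent.ByeRequested):
                Current = ConnState.TearingDown; break;
            case (ConnState.TearingDown, ConnEvent.ByeAcked):
                Current = ConnState.Closed; break;
            case (_, ConnEvent.Timeout):   // the phase 1/3 timers land here
            case (_, ConnEvent.IoError):
                Current = ConnState.Closed; break;
            default:
                throw new InvalidOperationException($"{ev} is invalid in {Current}");
        }
    }
}
```

This also answers the "diverse inputs" worry: the reader thread, writer, timers, and connection checks don't live inside the machine; each of them just translates whatever it observes into a ConnEvent and feeds it to Handle.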
I actually wrote a little library that abstracts out just about everything from the state machine, if you're interested. There is some test code in there as well that should show you how to use it: State Machine Code
It does some things you might forget, like ensuring that state transitions that are not "valid" are actually an error rather than possibly being missed, and logging of state transitions comes free.
P.S. (anyone): if you look at it and don't like it, please let me know why. I'd like to make it usable for anyone.