I'm having a lot of trouble with an EC2 instance and I can't figure out what's going on. We're using it as a web server, and it seems to work fine for single-connection tasks: loading a simple page, an RDP session, ping, etc. But as soon as a single client machine has more than one connection active with the server (for example, if I try to browse the web site while I'm also logged into the server via RDP), the whole connection becomes incredibly unstable.
The biggest and most annoying consequence of this is that the ASP.NET site we're running consistently fails to load some pages, since those pages use more than one connection. This wasn't a problem until a few days ago, when we were forced to migrate to different hardware because ours was apparently being retired by Amazon. Ever since then it's been flaky like this. Is it possible that there's a kink in Amazon's network, and that it could be resolved by stopping and starting the instance (and thus getting a different server)?
It turns out the problem was an underlying issue on Amazon's end. They investigated and found a problem that they're correcting. I hope I haven't wasted too much Stack Overflow brainpower with this dead end of a question!
Sorry if this is a dumb question that's already been asked, but I don't even know what terms to best search for.
I have a situation where a cloud app would deliver a SPA (single-page app) to a client web browser. Multiple clients would connect at once and would all work within the same network. An example would be an app a business uses to work together, with everyone in the same physical space and on the same network.
A concern is that the internet connection could be spotty. I know I can store the client changes locally and then push them all to the server once the connection is restored. The problem, however, is that some of the clients (display systems) will need to show up-to-date data from other clients (mobile input systems). Even an outage of a minute or two would be unacceptable.
My current line of thinking is that the local network would need some kind of "ThinServer" that all the clients would connect to. This ThinServer would then work as a proxy for the main cloud server. If the internet breaks then the ThinServer would take over the job of syncing data. Since all the clients would be full SPAs the only thing moving around would be the data - so the ThinServer would really just need to sync DB info (it probably wouldn't need to host the full SPA - though, that wouldn't be a bad thing).
However, a full dedicated server is obviously a big hurdle for most companies to set up.
So the question is, is there any kind of tech that would allow a web page to act as a web server? Could a business be instructed to go to thinserver.coolapp.com in a browser on any one of their machines? This "webpage" would then say, "All clients in this network should connect to 192.168.1.74:2000" (which would be the IP:port of the machine running this page). All the clients would then connect to this new "server" and that server would act as a data coordinator if the internet ever went down.
In other words, I really don't like the idea of a complicated server setup. A simple URL to start the service would be all that is needed.
I suppose the only option might have to be a binary program that would need to be installed? It's not an ideal solution, but perhaps the only one? If so, are there any programs out there that are single-click web servers? I've tried MAMP, LAMP, etc., but all of them are designed for developers. Are there any others that are more streamlined?
Thanks for any ideas!
There are a couple of fundamental ways you can approach this. The first is to host a server in a browser as you suggest. Some example projects:
http://www.peer-server.com
https://addons.mozilla.org/en-US/firefox/addon/browser-server/
Another is to use WebRTC peer-to-peer communication to allow the browsers to share information with each other (you could have them all share data, or have one act as a 'master', etc., depending on the architecture you want). It's likely not going to be that different under the skin, but your application design may be better suited to a more 'peer to peer' model or a more 'client server' one depending on what you need. An example peer-to-peer project:
https://developer.mozilla.org/en-US/docs/Web/Guide/API/WebRTC/Peer-to-peer_communications_with_WebRTC
I have not used any of the above personally, but I would say, from using similar browser extension mechanisms in the past, that you need to check the browser requirements before you decide whether they can do what you want. The first one above is Chrome-based (I believe) and the second is a Firefox add-on. The peer-to-peer one contains a list of compatible browser functions, but is effectively Firefox- and Chrome-based as well (see the table in the link). If you are in an environment where you can dictate the browser type, plugins, etc., then this may be fine for you.
The concept is definitely very interesting (peer-to-peer web servers) and it is great if you have the time to explore it. However, if you have an immediate business requirement, a simple on-site server-based approach may actually be more reliable, support a wider variety of browsers, and be easier to maintain (as the skills required are quite commonly available).
BTW, I should have said - 'WebRTC' is probably a good search term for you, in answer to the first line of your question.
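To make the WebRTC option more concrete, here's a minimal data-channel sketch in TypeScript. The signaling transport (how the offer/answer and ICE candidates travel between browsers) is deliberately stubbed out, since that part depends on your setup; everything else is the standard RTCPeerConnection API.

```typescript
// Minimal WebRTC data-channel sketch (browser TypeScript).
// The signaling transport is stubbed out: in a real app this message
// would travel to the other peer via your cloud server, copy/paste, etc.
function sendToOtherPeer(msg: unknown): void {
  console.log("signal to deliver to the other peer:", JSON.stringify(msg));
}

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// The "master" peer creates the channel; the other peer would instead
// receive it via pc.ondatachannel.
const channel = pc.createDataChannel("sync");
channel.onopen = () => channel.send(JSON.stringify({ type: "hello" }));
channel.onmessage = (e) => console.log("data from peer:", e.data);

pc.onicecandidate = (e) => {
  if (e.candidate) sendToOtherPeer({ ice: e.candidate });
};

async function startAsCaller(): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToOtherPeer({ sdp: pc.localDescription });
}

startAsCaller();
```

Note that WebRTC still needs some signaling channel to get the peers connected in the first place, which is one reason a small on-site coordinator can remain useful if the internet link is the thing that fails.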
httprelay.io vs. WebRTC
Pros:
Simple to use
Fast
Supported by all browsers and HTTP clients
Can be used over an unstable network
Open source and cross-platform
Cons:
Need to run a server instance
No data streaming is supported (yet)
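For comparison, the relay pattern looks roughly like this. Treat the channel URL and the "link" path as assumptions based on my reading of httprelay.io's producer/consumer pairing; verify the details against the current docs before relying on them.

```typescript
// Sketch of the HTTP relay pattern. The "link" path and channel URL are
// assumptions about httprelay.io's producer/consumer pairing; check the
// docs. One client's POST is paired with another client's waiting GET
// on the same channel URL, and the body passes through the relay.
const CHANNEL = "https://demo.httprelay.io/link/my-app-channel"; // hypothetical channel

async function produce(data: unknown): Promise<void> {
  await fetch(CHANNEL, { method: "POST", body: JSON.stringify(data) });
}

async function consume(): Promise<unknown> {
  const res = await fetch(CHANNEL); // blocks until a producer POSTs
  return res.json();
}
```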
I currently have a virtual dedicated server through Media Temple that I use to run several high-traffic WordPress blogs. They tend to receive sudden StumbleUpon traffic surges that (I'm assuming) cause the server CPU to run at 100% and slow everything down. I'm currently using WP Super Cache, S3, and CloudFront for most static files, but high traffic is still causing CPU slowdown.
From what I'm reading, it seems like I might want to use EC2 to help the existing server when traffic spikes occur. Since I'm currently using the top tier of virtual dedicated servers on Media Temple, I'd like to avoid jumping to a dedicated server if possible. I get the sense that AWS might help boost the existing server's power. How would I go about doing this?
I apologize if I'm using any of these terms incorrectly -- I'm relatively amateur when it comes to server administration. If this isn't the best way to improve performance, what is the recommended course of action?
The first thing I would do is move your database server to another Media Temple VPS. After that, look to see which one is hitting 100% CPU. If it's the web server, you can create a second instance, and use a proxy to balance the load. If it's the database, you may be able to create some indexes.
Alternatively, setting up a Squid caching server in front of your web server can take a lot of load off from anonymous users, since the page doesn't need to be re-rendered for each of them. This is the approach Wikipedia takes.
In either case, there isn't an easy way to spin up extra capacity on EC2 unless your site is on EC2 to begin with.
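To illustrate the caching idea (not as a replacement for Squid, which does this properly with cache-control handling, purging, and so on), here's a toy caching reverse proxy sketch in TypeScript for Node. The backend address and TTL are placeholders.

```typescript
// Toy caching reverse proxy (TypeScript on Node), illustrating the Squid
// idea: serve repeated anonymous GETs from memory instead of letting the
// backend (assumed here at 127.0.0.1:8080) re-render every page.
import http from "node:http";

const BACKEND = { host: "127.0.0.1", port: 8080 }; // assumed backend
const TTL_MS = 60_000; // illustrative cache lifetime
const cache = new Map<string, { body: Buffer; expires: number }>();

http.createServer((req, res) => {
  const key = req.url ?? "/";
  const hit = cache.get(key);
  if (req.method === "GET" && hit && hit.expires > Date.now()) {
    res.end(hit.body); // cache hit: no backend render needed
    return;
  }
  const upstream = http.request(
    { ...BACKEND, path: key, method: req.method, headers: req.headers },
    (up) => {
      const chunks: Buffer[] = [];
      up.on("data", (c: Buffer) => chunks.push(c));
      up.on("end", () => {
        const body = Buffer.concat(chunks);
        if (req.method === "GET" && up.statusCode === 200) {
          cache.set(key, { body, expires: Date.now() + TTL_MS });
        }
        res.writeHead(up.statusCode ?? 502, up.headers);
        res.end(body);
      });
    }
  );
  req.pipe(upstream);
}).listen(3000);
```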
There are just three instance types you can have; beyond that, they can't give you any more "server power". You will need to do some load balancing. There are software load balancers, such as HAProxy and nginx, which are not bad. If you don't want to deal with those, you can use DNS round robin after setting up the high-load blogs on different machines.
You should be able to scale them; that's the beauty of AWS: scaling.
I have an ASP.NET AJAX intranet application that has been running for a few months. It runs reasonably fast on the LAN.
However, when going over a VPN it slows down dramatically. Even taking line speed into account, it takes something like 60 seconds to change a page. I eventually got a VMware VM up and running to test the VPN speed; the connection is super fast, but it still takes the same amount of time. I can even use Remote Desktop over the VPN to the VM and it's perfect.
This makes me think that it has nothing to do with line speed. FYI, I am using Hamachi.
I have tried other VPN software and it gives the same results.
I am really stuck... any help would be much appreciated!
I found the problem was due to the code doing a DNS lookup. I changed it to use just the IP and now it's working!
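The original app was ASP.NET, but the fix generalizes: resolve once, cache the address, and skip the per-request lookup. Here's a sketch of the same idea in TypeScript for Node 18+ (the hostname is a placeholder).

```typescript
// Illustrative sketch (TypeScript on Node 18+): resolve the hostname once
// at startup and reuse the address, instead of paying for a DNS lookup on
// every request (the hidden cost over the VPN in this case).
// "intranet-app.example" is a placeholder hostname.
import { lookup } from "node:dns/promises";

let cachedAddress: string | undefined;

async function resolveOnce(host: string): Promise<string> {
  if (!cachedAddress) {
    cachedAddress = (await lookup(host)).address; // done once, then reused
  }
  return cachedAddress;
}

async function callApp(path: string): Promise<string> {
  const ip = await resolveOnce("intranet-app.example");
  // Note: a real setup would also send the proper Host header when
  // connecting by IP, if the server uses name-based virtual hosts.
  const res = await fetch(`http://${ip}${path}`); // no per-request DNS
  return res.text();
}
```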
Try adding system counters for "Bytes per second" and other similar parameters of your network. This helps you figure out the bottleneck in your system.
I have a website running a basic ASP.NET application that is mostly used from a single location, which is my client's office. The server is at a high-class datacenter.
Whenever I've been testing or using my application from outside their office I have consistently good connections but from their office the connection seems inconsistent. Sometimes requests just don't seem to make it to the server from the browser. I'm not familiar with the network hardware in the office, but they do have a T1 connection which should always be on.
I've tried ping and tracert and everything looks normal. When running Firebug during a failed request, the request shows up in the log, then just sits there without showing that any data is being sent, and eventually it times out.
My question is, what tools can I use to diagnose this connection problem and start narrowing it down to a specific cause so I can fix it? It's an intermittent problem, so a long-running tool would probably make more sense, if one is available.
Thanks for any help.
All of your standard ping and traceroute tools are probably your best bet. I'm not understanding though, where is the site located?
If you open a command prompt, run ping -t aspwebsiteurl.domain; this will show whether there is packet loss.
From the command prompt again, tracert aspwebsiteurl.domain will show you what route the packets take to reach the site. It may also show whether one particular hop is giving you the hiccup.
Is there a proxy between the office and the datacenter that could be causing issues?
Also you could try Wireshark to try to debug the problem in more detail.
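Since the problem is intermittent, a long-running probe tends to be more revealing than one-off tests. Here's a sketch of one in TypeScript for Node 18+ (which has global fetch); the URL, interval, and thresholds are placeholders to adjust. Leave it running from the office and correlate the logged failures with whatever ping -t shows.

```typescript
// Long-running HTTP probe (TypeScript on Node 18+): request the page
// every few seconds and log a timestamp for any failure or slow
// response, so intermittent drops can be correlated with other events.
// The URL, interval, and thresholds are placeholders.
const TARGET = "https://your-asp-site.example/"; // placeholder URL
const INTERVAL_MS = 5_000;
const TIMEOUT_MS = 10_000;
const SLOW_MS = 2_000;

async function probe(): Promise<void> {
  const started = Date.now();
  const stamp = new Date().toISOString();
  try {
    const res = await fetch(TARGET, { signal: AbortSignal.timeout(TIMEOUT_MS) });
    const ms = Date.now() - started;
    if (!res.ok || ms > SLOW_MS) {
      console.log(`${stamp} suspicious: status ${res.status} in ${ms}ms`);
    }
  } catch (err) {
    console.log(`${stamp} FAILED after ${Date.now() - started}ms: ${err}`);
  }
}

setInterval(probe, INTERVAL_MS);
```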
Speed Test - Internet Network Connection Speed may be of some help; it has links for testing the connection at the client's office to see how well it works.
Another question is how far away is the client and the datacenter? If one is in New York and the other in Los Angeles then the distance apart may be a factor. Also, have you examined any possible DNS issues?
Even on big-time sites such as Google, I sometimes make a request and the browser just sits there. The hourglass will turn indefinitely until I click again, after which I get a response instantly. So, the response or request is simply getting lost on the internet.
As a developer of ASP.NET web applications, is there any way for me to mitigate this problem, so that users of the sites I develop do not experience this issue? If there is, it seems like Google would do it. Still, I'm hopeful there is a solution.
Edit: I can verify, for our web applications, that every request actually reaching the server is served in a few seconds even in the absolute worst case (e.g. a complex report). I have an email notification sent out if a server ever takes more than 4 seconds to process a request, or if it fails to process a request, and have not received that email in 30 days.
It's possible that a request made from the client took a particular path which happened not to work at that particular moment. These failures are unavoidable; they're simply a result of the internet, which is built on unreliable components for which TCP can provide only certain guarantees.
Like someone else said - make sure when a request hits your server, you'll be ready to reply. Everything else is out of your hands.
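One concrete mitigation you do control is client-side: time out and retry requests yourself, so a request lost in transit becomes a short delay rather than an eternal hourglass. A minimal sketch (the retry count and timeout are illustrative):

```typescript
// Client-side mitigation sketch: time out and retry, so a request lost
// in the network becomes a short delay rather than an indefinitely
// spinning hourglass. Retry count and timeout are illustrative, and
// this is only safe for idempotent requests (blindly retrying a POST
// can cause duplicate actions).
async function fetchWithRetry(
  url: string,
  attempts = 3,
  timeoutMs = 4_000,
): Promise<Response> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    } catch (err) {
      lastError = err; // timed out or network error: try again
    }
  }
  throw lastError;
}
```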
They get lost because the internet is a big place and sometimes packets get dropped or servers get overloaded. To give your users the best experience make sure you have plenty of hardware, robust software, and a very good network connection.
You cannot control the pipe from the client all the way to your server. There could be network connectivity issues anywhere along the pipeline, including from your PC to your ISP's router, which is a likely place to look first.
The bottom line is if you are having issues bringing Google.com up in your browser then you are guaranteed to have the same issue with your own web application at least as often.
That's not to say an ASP application cannot generate the same sort of downtime experience completely on its own... Test often and code defensively are the key phrases to keep in mind.
Let's not forget browser bugs. They aren't nearly perfect applications themselves...
This problem/situation isn't only ASP-related; it covers the whole concept of keeping your apps up, informally called the "five nines" or "99.999% availability".
The Wikipedia article on high availability is the place to start.
If you look up the five nines you'll find tons of useful information, which you can apply as needed to your apps.
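The arithmetic behind the name is worth seeing once: each extra nine cuts the allowed downtime budget by a factor of ten. A quick sketch:

```typescript
// Downtime budget per year for a given availability target.
// 99.999% ("five nines") leaves only about 5.26 minutes per year.
function downtimeMinutesPerYear(availability: number): number {
  const minutesPerYear = 365.25 * 24 * 60;
  return (1 - availability) * minutesPerYear;
}

console.log(downtimeMinutesPerYear(0.999));   // ≈ 525.96 min (three nines)
console.log(downtimeMinutesPerYear(0.9999));  // ≈ 52.60 min  (four nines)
console.log(downtimeMinutesPerYear(0.99999)); // ≈ 5.26 min   (five nines)
```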