I need to get to a site which is currently under a DDoS attack or under heavy load. Is there some way to do that? Maybe some specific options in the browser, some ports, or anything else?
Thanks
There are various types of DDoS attack, but your chances of accessing the site during one come down mostly to dumb luck.
Memory DDoS - This happens when the attackers exploit a specific flaw in the code to cache large amounts of data and run the server out of RAM. The result is lots of slow connections that eventually get aborted. Nothing you can do here; just wait it out.
Network DDoS - This happens when the attackers flood the network with data. In this case you can sometimes reach the site, though patience is a virtue; chances are your connection will time out before the response comes back.
CPU DDoS - This happens when the attackers exploit a specific flaw in the code that forces the server to process large amounts of data, sending CPU usage skyrocketing. Again, this is a wait-it-out scenario, as chances are there isn't enough juice left to process your requests.
With a DDoS, the best way to deal with it is to wait it out, I'm afraid. Hitting an already-downed website with more requests isn't polite either ;) If you must keep checking, do it sparingly, as in the sketch below.
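If you do want to poll the site while it recovers, a minimal sketch (the URL is a placeholder, and it assumes the third-party requests package) would use long timeouts and exponential backoff so you add essentially no load:

    # Politely poll a struggling site: generous timeout, exponential backoff.
    import time
    import requests  # third-party: pip install requests

    def wait_for_site(url, max_attempts=10):
        delay = 30  # seconds between the first two attempts
        for _ in range(max_attempts):
            try:
                resp = requests.get(url, timeout=60)  # generous timeout
                if resp.ok:
                    return resp
            except requests.RequestException:
                pass  # timeouts and resets are expected during an attack
            time.sleep(delay)
            delay = min(delay * 2, 600)  # back off, capped at 10 minutes
        return None

    resp = wait_for_site("https://example.com/")  # placeholder URL
    print("reachable" if resp is not None else "still down")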
The whole point of a DDoS is to prevent access to the site :-) So contact the site's network administrator and have them configure the firewall to block access to the site from all IP addresses except yours.
Say I have a DNS server configured for a URL with a TTL of 5 minutes. The browser will cache the URL-to-IP-address mapping for 5 minutes.
But if the user clicks refresh for that URL, will its corresponding entry in the browser's cache be cleared? Will the browser query the DNS server again?
The case is the following: I need to set a proper TTL to avoid excessive DNS traffic (so it should not be too low), but in the case of a VM failure the traffic should be redirected to another IP address (so it should not be too high).
If a refresh clears the DNS mapping cache entry, then I might choose a higher value.
Clicking refresh in the browser does not trigger another DNS query if there is already an unexpired DNS entry in the browser's cache.
If your site relies on DNS failover, then in general you shouldn't use anything more than 60 seconds as your DNS TTL. Please note this is just a suggestion, not a foolproof guarantee; most of the top 100 websites use a TTL in this range. You can verify what TTL your records are actually served with, as in the sketch below.
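To check the TTL a resolver hands back for your record, a quick sketch using the third-party dnspython package (2.x API; example.com is a placeholder):

    # pip install dnspython
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "A")  # placeholder name
    print("addresses:", [r.address for r in answer])
    print("TTL (seconds):", answer.rrset.ttl)

Note that a recursive resolver reports its remaining cached TTL, so query your authoritative server directly if you want the configured value.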
HTTP and DNS operate at different layers. There is no reason to perform another DNS query when the user re-requests a URL if the entry is in the cache and not expired.
DNS alone is not a good failover mechanism. You need to add some kind of load balancing or master/slave virtual-IP handling to get an "immediate" switchover when a server dies, or use IP anycasting. In short, there are many solutions, and while they can use DNS to their advantage, DNS alone cannot solve the problem.
You need to define what amount of unavailability is acceptable in your setup; that determines how much time/energy/money you can invest in a setup that achieves this failover.
What I'm trying to do is send information in web requests between an application I've made for the computer and, obviously, a web server.
I want this information to be encrypted for security reasons; this software may be something people want to crack, and I don't want them seeing what's being exchanged between the client and the server.
So, my question is: what is the most efficient way to encrypt data on the client side, send it to the server, and have it decrypted there? And also the reverse, with the server encrypting and the client decrypting.
EDIT:
I just want a valid method of encrypting the data sent between the client and the server: a secure way to encrypt data on the client, send it to the server, and have it decrypted there. I described this poorly at first. Programs such as Fiddler can monitor the requests sent from the C++ application to the server, and the responses it gives back, all in plain text. I need this data and these responses to be encrypted, and decryptable on both sides.
The tool you want is a pinned TLS certificate. See the OWASP introduction to the topic.
The point of pinning a certificate is that your HTTPS session will not trust every root in the local keystore. Instead it will trust only a limited number of roots, specifically the ones you specify (and ideally control). With that, it is not possible to simply inject a rogue root certificate into the local keystore in order to monitor local traffic. A minimal sketch of the idea follows.
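Here is a minimal sketch of pinning using Python's standard library, where PINNED_SHA256 is a placeholder for the known-good SHA-256 fingerprint of your server's certificate:

    import hashlib
    import socket
    import ssl

    PINNED_SHA256 = "replace-with-your-certificate-fingerprint"  # placeholder

    def open_pinned_connection(host, port=443):
        ctx = ssl.create_default_context()
        sock = socket.create_connection((host, port))
        tls = ctx.wrap_socket(sock, server_hostname=host)
        der = tls.getpeercert(binary_form=True)        # raw DER certificate
        fingerprint = hashlib.sha256(der).hexdigest()
        if fingerprint != PINNED_SHA256:
            tls.close()
            raise ssl.SSLError("certificate fingerprint does not match the pin")
        return tls  # safe to speak HTTP over this socket now

Real implementations usually pin the public key rather than the whole certificate, so the pin survives certificate renewal; see the OWASP page above for the trade-offs.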
That said, it is not particularly difficult to circumvent pinned certificates if you control the machine the client is running on. But then, it is not particularly difficult to circumvent any simple mechanism if you control the machine the client is running on. The techniques used to circumvent certificate pinning (namely, modifying the client) will circumvent every client-side encryption scheme.
This is discussed regularly on StackOverflow, and has been for years. Here is one of the various answers that links to several others:
Secure https encryption for iPhone app to webpage
The key lesson is that "anti-cracking" is not "security." It is achieved through obfuscation and other anti-reverse-engineering techniques, and it is not a winnable problem; it requires ongoing improvements as attackers defeat your defenses. You should expect to allocate non-trivial resources to this on an ongoing basis, or apply modest resources (like pinning) and accept that they won't be very effective, but at least they aren't very costly to manage.
(I used to do this as part of a team of over a dozen full-time people committed to preventing these kinds of attacks. We spent millions of dollars a year on the problem, working with law enforcement around the world and deploying extensive custom security hardware. We still got beaten and had to adapt our methods as attacks improved. That's what I mean by "non-trivial resources.")
Use SSL/TLS to encrypt the traffic between the client and the server.
Of course I want to reach maximum performance.
What can I do to get it?
Use bundles for CSS & JS files? OK.
What kind of storage should I use? Right now it's SQL Database.
But the site and the DB are placed in different regions. The DB won't be big; 1 GB is enough. And how do I reduce query time? Right now it's too long.
Should I turn on the "Always On" feature for my site?
Is there anything else? Is there any article to read?
Thanks in advance.
There is only so much optimization you can do. If you really want "maximum performance" then you'd rewrite your site in C/C++ as a kernel extension or driver service and store all of your data in memcached, or maybe encode your entire website as millions of individual high-frequency logic gates etched into an integrated circuit and hooked up directly to a network interface...
...now that we're on realistic terms ;) your post names the main performance culprit right there: your database and web server are not local to each other. Every page a user requests is going to trigger a database request, and if the database is more than a few milliseconds away, that's a problem (SQL Server has a rather chatty network protocol too, which multiplies the latency effect considerably).
Ideally, total page-generation time from request sent to response arrived should be under 100 ms before users notice your site being "slow". Given that the web server might be 30 ms or more from the client, that leaves you roughly 50-60 ms to generate the page, which means your database server needs to be within 0-3 ms of your web server. Even 5 ms of latency is too much, because something as innocuous as 3-4 database queries incurs a delay of at least 4 × (5 ms + DB read time). DB read time itself ranges from ~0 ms (if the data is in memory) up to 20 ms on a slow platter drive, or slower still under heavy load. That's how a "simple" website can easily take over 100 ms just to generate on the server, let alone send to the client.
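To make the arithmetic concrete, a back-of-the-envelope sketch (all numbers are illustrative assumptions, not measurements):

    queries_per_page = 4
    db_rtt_ms = 5      # network round trip to the database
    db_read_ms = 10    # time to actually read the data
    render_ms = 15     # CPU time to render the page

    total_ms = queries_per_page * (db_rtt_ms + db_read_ms) + render_ms
    print(f"total server time: {total_ms} ms")
    # db_rtt_ms = 5  -> 75 ms: already most of the ~100 ms budget
    # db_rtt_ms = 50 (cross-region) -> 255 ms: clearly "slow"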
In short: move your DB to a server on the same local network as your web server to reduce the latency.
The immediate and simplest step in your situation is to move the database and the site into the same datacenter.
Later you may think about:
- Instrumenting your code
- Adding a cache (Azure Redis Cache), as in the sketch after this list
- Load-balancing your web site (if it gets enough traffic)
- And everything around compacting/bundling/minifying your code
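A minimal cache-aside sketch with the third-party redis package (the host name, key names, and db_lookup helper are hypothetical):

    import json
    import redis  # pip install redis

    r = redis.Redis(host="myapp.redis.cache.windows.net",  # hypothetical host
                    port=6380, ssl=True, password="...")

    def get_product(product_id, db_lookup):
        key = f"product:{product_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)       # cache hit: skip the database
        row = db_lookup(product_id)         # cache miss: hit the database
        r.setex(key, 300, json.dumps(row))  # keep it for 5 minutes
        return row

This turns repeated cross-network database round trips into local cache reads, which is exactly where the latency budget above goes.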
Hope it helps,
I am assigned to a project to broadcast an event live on the internet, which may have 50,000 users. It is a broadcast, so all users will see the same live video. My question is: what would my bandwidth requirement be, in terms of the number of users viewing the stream?
I am a little confused, and the reason is: does every user need a separate stream of bandwidth in a broadcast?
If I installed a streaming-capable server with 50 Mbps of bandwidth, would that be enough, considering it is a broadcast?
Do I necessarily need a Class C IP block to run a streaming server on the RTMP protocol?
Is it possible to achieve this through RTSP? How do services like Google Hangouts On Air work? What protocol do Hangouts and YouTube use for live broadcasts?
Kindly suggest a solution, especially if you have practical experience with this.
Thanks in advance
You need a ton of bandwidth and resources.
To calculate how much bandwidth you need, you need to know the average bitrate of your video. Let's say your live video's bitrate is 1 megabit per second (ignoring overhead, retransmissions, sequences that need more bandwidth, etc.). Your 50 Mbps covers only 50 users; that's 0.1% of what you require. You would need 1,000 of those connections just to barely handle the load, as spelled out below.
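The arithmetic, as a sketch (the 1 Mbps bitrate is the assumption from above):

    viewers = 50_000
    bitrate_mbps = 1.0                   # assumed average bitrate per viewer
    total_mbps = viewers * bitrate_mbps  # every viewer needs their own copy
    print(f"total egress: {total_mbps:,.0f} Mbps ({total_mbps / 1000:.0f} Gbps)")
    print(f"equivalent 50 Mbps uplinks: {total_mbps / 50:,.0f}")
    # total egress: 50,000 Mbps (50 Gbps); 1,000 uplinks

This also answers the first question: yes, in a unicast stream every viewer consumes their own slice of your bandwidth. IP multicast would avoid that, but it isn't generally available on the public internet.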
If you actually have a live event that 50,000 people will see, you no doubt have sponsors and should be able to afford a proper CDN. This isn't something you host yourself. You pay for a CDN so that capacity is available as you need it, and servers are close to your audience.
The best thing to do would be to get a YouTube account with live streaming, and let YouTube pay for the bandwidth.
Now, the protocol you use has nothing to do with what size of IP block you have. Those are unrelated, separate issues.
RTMP, RTSP, HTTP progressive, etc... if you use a CDN, you get to use all of them with little effort. You pick a streaming protocol based on device compatibility and capability.
Google Hangouts works using WebRTC, which is primarily peer-to-peer. When you stream to YouTube, a massive CDN handles distribution across multiple codecs, multiple protocols, and multiple points of presence.
I will be moving a high-load production system over to new hardware over the next few weeks. In the meantime, however, I would like to validate that the new hardware will handle the expected load. I would really like to put some kind of 'proxy' in front of the current web server and copy all of its HTTP traffic to the new environment, i.e. run them both in parallel.
Ideally this proxy would also validate that the responses are the same.
I can then monitor the new hardware's stats (CPU, memory, etc.) and see if it looks OK.
What is this kind of proxy called? Anyone have any suggestions? This is for a Windows .NET (ASP.NET) and SQL Server environment.
Thanks all
Varnish comes to mind - https://www.varnish-cache.org/
Edit
I'd actually use nginx (two years' more experience since answering this question). Varnish would be silly to use here; nginx would definitely be the better option.
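For what it's worth, this pattern is usually called traffic "mirroring" or "shadowing", and nginx ships a module for it (ngx_http_mirror_module). As an illustration of the idea, here is a minimal Python sketch with hypothetical backend URLs that forwards each GET to both environments and logs any response mismatch; treat it as a toy, not a production proxy:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import requests  # third-party: pip install requests

    OLD = "http://old-prod.internal"  # current production (hypothetical)
    NEW = "http://new-prod.internal"  # new hardware under test (hypothetical)

    class ShadowProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            old = requests.get(OLD + self.path, timeout=10)
            try:
                new = requests.get(NEW + self.path, timeout=10)
                if (new.status_code, new.content) != (old.status_code, old.content):
                    print(f"MISMATCH on {self.path}: {old.status_code} vs {new.status_code}")
            except requests.RequestException as exc:
                print(f"new environment failed on {self.path}: {exc}")
            # Only the old environment's response goes back to the client.
            self.send_response(old.status_code)
            self.send_header("Content-Type", old.headers.get("Content-Type", "text/html"))
            self.end_headers()
            self.wfile.write(old.content)

    HTTPServer(("", 8080), ShadowProxy).serve_forever()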
Have a look at JMeter. It's Java-based but lets you record user journeys and play them back in bulk for stress testing.