Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I have a question about downloading with IDM (Internet Download Manager) versus without it. Suppose we have the same client bandwidth, and the server offers the same bandwidth to both IDM and a simple download manager. Why can we download faster with IDM? What is the reason?
Thanks.
Without a download accelerator, you may not be hitting your own or the remote server's bandwidth bottleneck, which means either or both of you still have spare bandwidth that can be tapped.
Download accelerators tap this extra bandwidth in two ways:
By increasing the number of connections to the server, IDM consumes all or most of your bandwidth and increases the proportion of your total internet bandwidth that goes to the download.
The remote server divides its total bandwidth among the connections to it. So multiple connections to the server ensure that the total bandwidth you're tapping is the sum of those divided shares, removing another bottleneck.
See http://en.wikipedia.org/wiki/Download_manager#Download_acceleration for more.
Typically a download manager accelerates downloads by making multiple simultaneous connections to the remote server. I believe IDM actually makes multiple requests for different parts of the same file at the same time, and thus gets the server to provide higher total bandwidth through the multiple connections. Servers are often bandwidth-limited on a per-connection basis, so by making multiple connections you get higher total bandwidth.
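The mechanics behind both answers are simple: the client learns the file size, splits it into byte ranges, and fetches each range over its own connection using an HTTP Range header, then stitches the parts together. A minimal sketch of the range-splitting step (the part count here is arbitrary, just for illustration):

```python
def split_ranges(total_size, parts):
    """Split a file of total_size bytes into `parts` contiguous
    byte ranges, suitable for HTTP Range requests."""
    chunk = total_size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        # The last part absorbs any remainder.
        end = total_size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

# Each (start, end) pair becomes a request header like:
#   Range: bytes=start-end
# and the parts are downloaded concurrently on separate connections.
print(split_ranges(100, 4))  # → [(0, 24), (25, 49), (50, 74), (75, 99)]
```

This only works when the server advertises `Accept-Ranges: bytes`; otherwise the download falls back to a single connection.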
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 years ago.
I ran tests with iperf, iperf3, ssh + pv + /dev/zero, copying files, and netcat (nc).
Most of the tests show 940 Mbit/s, as expected, including netcat.
But when I saw that for some of these tools the CPU was actually the bottleneck, I moved to exposing multiple ports with netcat and using up to 8 parallel connections. This increased the speed from 800 Mbit/s to over 3 Gbit/s.
My router is a cheap one, a Hub 3 from Virgin Media. The Ethernet cables are good quality.
Could this be real? Or could netcat be compressing by default?
Thanks,
Nicu
Actually, it is indeed not real. I ran the test again just now and it averages a bit under 1 Gbit/s aggregated across connections, as expected. Previously I probably didn't run the test long enough to see the speeds of the earlier connections degrade as I opened more, and I incorrectly assumed the connections would quickly stabilize to equal throughputs, which isn't true either.
I did get 3 Gbit/s via a direct Thunderbolt connection between computers. However, it was 3 Gbit/s in one direction and 500 Mbit/s in the other.
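As an aside, the ~940 Mbit/s the single-connection tests report is the expected TCP goodput of gigabit Ethernet once per-frame overhead is accounted for. A back-of-the-envelope check (assuming the standard 1500-byte MTU and TCP timestamps enabled, as is common on Linux):

```python
# On-wire overhead per Ethernet frame (bytes):
# preamble 8 + header 14 + FCS 4 + inter-frame gap 12 = 38
WIRE_OVERHEAD = 38
MTU = 1500                 # IP packet size
IP_HDR, TCP_HDR = 20, 20
TCP_TIMESTAMPS = 12        # TCP timestamp option

payload = MTU - IP_HDR - TCP_HDR - TCP_TIMESTAMPS   # 1448 bytes of data
frame = MTU + WIRE_OVERHEAD                         # 1538 bytes on the wire

goodput = 1000.0 * payload / frame   # Mbit/s on a 1 Gbit/s link
print(round(goodput, 1))             # → 941.5
```

So any single TCP stream on the gigabit link tops out around 940 Mbit/s regardless of the tool, and parallel streams can only share that ceiling, not exceed it.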
Closed 4 years ago.
I have a lot of general networking knowledge, but I am unsure about some of the specifics. This topic came to me through a question in my statistics class discussing discrete versus continuous variables.
The example used the time to download a file from a website.
I know that the maximum speed at which a file can be transferred is directly related to the speed of the connection, the protocol used, and network conditions, and it can be slightly improved (or degraded) by the use of compression. Regardless, the fastest possible speed can be calculated for any given connection, but I have never found any information detailing what the minimum sustained transfer rate could be.
Is it possible to go below 1 bit per second?
The minimum for any kind of transfer is 0 unit/second.
If someone temporarily unplugs your router from the internet, then for the seconds the router is unplugged you will have 0 bps. If the disconnection is short, the connection may recover, but during the disconnect no data is transferred at all.
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 8 days ago.
I'm evaluating solutions for a new PBX, choosing between a local FreePBX/Asterisk setup and a hosted PBX.
The main question is the call quality between two local SIP peers.
Let's say two people in the same office use a hosted PBX: will the call quality depend on the local network speed or on the internet line speed?
Also, are there any hosted PBX providers that know how to connect two peers behind the same NAT locally, or does the call always have to go through the internet?
Call quality will depend on bandwidth (which should be enough), codec, and latency.
If you have a local PBX, the media usually stays on the local LAN, where you have much more bandwidth.
On any Asterisk-based PBX (hosted or local) you can set, in the SIP peer configuration:
canreinvite=yes
directrtpsetup=yes
(in newer Asterisk versions canreinvite has been renamed directmedia). After that, the RTP media will go directly between the peers. If the peers cannot reach each other on the same network, you will have no audio. As a result, no call recordings are possible for such calls.
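To put "bandwidth should be enough" in numbers, here is a rough per-call bandwidth estimate for the common G.711 codec, assuming 20 ms packetization and Ethernet transport (these are the usual textbook overhead figures, not measurements of any particular PBX):

```python
# G.711: 64 kbit/s of audio, packetized every 20 ms -> 50 packets/s
CODEC_RATE = 64_000            # bit/s of raw audio
PACKETS_PER_SEC = 50
payload = CODEC_RATE // 8 // PACKETS_PER_SEC   # 160 bytes of audio per packet

# Per-packet overhead (bytes): RTP 12 + UDP 8 + IP 20 + Ethernet 18
OVERHEAD = 12 + 8 + 20 + 18

bandwidth = (payload + OVERHEAD) * PACKETS_PER_SEC * 8  # bit/s, one direction
print(bandwidth / 1000)   # → 87.2 kbit/s per direction
```

Even a dozen simultaneous G.711 calls need only about 1 Mbit/s each way, so on a LAN the codec choice, latency, and jitter matter far more than raw bandwidth.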
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I need to reach a site that is currently under a DDoS attack or under huge load. Is there some way to do that? Maybe some specific options in the browser, or some ports, or anything else?
Thanks
There are various types of DDoS attacks, but your chance of accessing the site during one is pretty much down to dumb luck.
Memory DDoS - the attackers exploit a specific flaw in the code to cache large amounts of data and run the server out of RAM. The result is lots of slow connections that eventually abort. Nothing you can do here; just wait it out.
Network DDoS - a large amount of traffic floods the network from the attackers. In this case you can sometimes get through; patience is a virtue, though chances are your connection will time out before the data is sent back.
CPU DDoS - the attackers exploit a specific flaw in the code to make the server process large amounts of data, sending CPU usage skyrocketing. Again, this is a wait-it-out scenario, as chances are there's not enough juice left to process your requests.
With a DDoS, the best way to deal with something like this is to wait it out, I'm afraid; hitting an already-downed website with more traffic is also just not polite ;)
The whole point of DDoS is to prevent access to the site :-) So contact the site's network administrator so they can configure the firewall to block access to the site from all IP addresses except yours.
Closed 12 years ago.
Asked on Server Fault:
Load Balancing a UDP server
I have a UDP server; it is a central part of my business process. In order to handle the load I'm expecting in the production environment, I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless: it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from multiple server instances.
My question is: how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between them. I would also like some affinity: if client X was routed to server Y, I want all of X's subsequent requests to go to server Y, as long as that is sensible and does not overload Y.
By the way, it is a .NET system...
What would you recommend?
How many clients will be using these servers? If the number is reasonably high, DNS round-robin load balancing would probably be fine.
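Beyond DNS round robin, the affinity requirement can be met by hashing the client's source address to pick a server: the same client always maps to the same instance, and different clients spread roughly evenly. A minimal sketch of the idea (the server list is a placeholder; a real front end would apply this in its packet-forwarding path):

```python
import hashlib

# Hypothetical backend instances.
SERVERS = ["10.0.0.1:9000", "10.0.0.2:9000", "10.0.0.3:9000"]

def pick_server(client_addr, servers=SERVERS):
    """Map a client address to a server deterministically, so the
    same client always reaches the same instance (affinity), while
    the hash spreads different clients roughly evenly (balance)."""
    digest = hashlib.sha256(client_addr.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]

# The same client is always routed to the same server:
print(pick_server("203.0.113.7:51515") == pick_server("203.0.113.7:51515"))  # → True
```

One caveat: plain modulo hashing reshuffles most clients whenever a server is added or removed; if the server set changes often, consistent hashing avoids that.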