I appreciate there is no 'set' answer to this question. I am trying to assess the performance of our dedicated mail server for sending out emails. The server has the spec below:
2 GB RAM
Xeon 2.80 GHz CPU (x2)
Currently we're only managing to send out approximately 21,000 emails per hour from this, which I suspect is massively under-performing.
Are there any guidelines as to what capacity can be expected?
It also depends on the configuration. For example, if you use amavis, SpamAssassin, ClamAV, or another content filter, it will directly affect performance.
If you do not use any content filter, your capacity limit should be well above 21,000 emails/hour.
Another point is queue size. If the queue keeps growing, you have a problem; if it stays steady, there is no need to worry. Check the queue size with "mailq | tail -1".
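If you want to watch whether the queue is growing rather than eyeballing it, a small sketch along these lines could poll it periodically (a minimal Python sketch, assuming Postfix's mailq is on the PATH; the one-minute interval is arbitrary):

import subprocess, time

def queue_summary():
    # "mailq | tail -1" prints a summary line such as
    # "-- 1024 Kbytes in 42 Requests." (or "Mail queue is empty")
    out = subprocess.run("mailq | tail -1", shell=True,
                         capture_output=True, text=True)
    return out.stdout.strip()

while True:
    print(time.strftime("%H:%M:%S"), queue_summary())
    time.sleep(60)  # a count that keeps rising between samples means trouble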
Check these parameters in main.cf:
default_destination_concurrency_limit = 40
initial_destination_concurrency = 5
lmtp_destination_concurrency_limit = $default_destination_concurrency_limit
local_destination_concurrency_limit = 10
relay_destination_concurrency_limit = $default_destination_concurrency_limit
smtp_destination_concurrency_limit = $default_destination_concurrency_limit
virtual_destination_concurrency_limit = 35
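To confirm what your running Postfix actually uses for these, you can query them with postconf. A minimal sketch wrapping it from Python (assumes postconf is on the PATH):

import subprocess

# Concurrency parameters listed above; `postconf name` prints "name = value".
params = [
    "default_destination_concurrency_limit",
    "initial_destination_concurrency",
    "smtp_destination_concurrency_limit",
    "virtual_destination_concurrency_limit",
]

for name in params:
    result = subprocess.run(["postconf", name], capture_output=True, text=True)
    print(result.stdout.strip())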
Check master.cf:
smtp inet n - n - 300 smtpd
smtp unix - - n - - smtp
smtpd handles incoming mail; the 300 in the 7th field (maxproc) limits concurrent smtpd processes.
smtp handles outgoing mail. If the 7th field has a value, it limits the number of concurrent smtp processes; a "-" means the default process limit (normally 100) applies.
You can search Google for further reading:
http://www.google.com.tr/search?q=postfix+performance&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:tr:official&client=firefox-a
Use current network bandwidth and CPU usage to estimate capacity. If you are only using 25% of the bandwidth and CPU, you should be able to get at least 42,000 emails per hour. (Linear extrapolation would suggest 84,000; I just doubled the current figure to be on the safe side.)
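As a sanity check on that arithmetic (a quick sketch; the 25% utilization figure is this answer's assumption, not a measurement):

# Extrapolate mail throughput from observed utilization.
observed_rate = 21_000   # emails/hour currently achieved (from the question)
utilization = 0.25       # assumed fraction of bandwidth/CPU actually in use

theoretical = observed_rate / utilization   # 84,000/hour if scaling is linear
conservative = theoretical / 2              # halved to be on the safe side
print(f"theoretical: {theoretical:.0f}/h, conservative: {conservative:.0f}/h")
# -> theoretical: 84000/h, conservative: 42000/h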
I made tests with iperf, iperf3, ssh + pv + /dev/zero, copying files, and netcat (nc).
Most of the tests show 940 Mbit/s, as expected, including netcat.
But when I saw that the CPU was actually the bottleneck for some of these tools, I moved to exposing multiple netcat ports and using up to 8 parallel connections. This increased the speed from 800 Mbit/s to over 3 Gbit/s.
My router is a cheap one, a Hub 3 from Virgin Media. The Ethernet cables are good quality.
Could this be real? Or could netcat be compressing by default?
Thanks,
Nicu
Actually, indeed, it is not real. I ran the test again and it averages a bit under 1 Gbit/s aggregated across connections, as expected. Previously the test probably did not run long enough to show the degradation in the speeds of the earlier connections as I opened more, and I incorrectly assumed the connections would quickly stabilize to equal throughputs, which is not true either.
I did get 3 Gbit/s via a direct Thunderbolt connection between computers. However, it was 3 Gbit/s in one direction and 500 Mbit/s in the other.
I would like to understand network latency a bit better.
Let's say there's one client and two servers. The client sends 1000 bytes to each server, and each server responds instantly with 1000 bytes.
Ping round-trip times from the client:
To Server 1 - 2 ms
To Server 2 - 20 ms
Assume both the client and the servers are connected to a quality 1 Gbps pipe (but not via a dedicated line between them).
Question: how do I calculate the real time from when the client starts sending its 1000 bytes to when it fully receives the last byte of the response data? Will it be close to 2 ms for Server 1 and 20 ms for Server 2?
Yes, that's exactly right!
The ping round-trip delay measures how long it takes a small packet of data to travel from one host on the network to another, and back to the original host.
Keep in mind that the numbers you get fluctuate a bit based on network conditions and the load on the hosts' processors. Average the round-trip delay over a few samples, but be prepared for any individual packet to experience an unusual delay for a variety of reasons.
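To see why the total is essentially just the ping time: at 1 Gbps, serializing 1000 bytes takes only 8 microseconds, which is negligible next to a 2 ms or 20 ms round trip. A back-of-the-envelope sketch in Python, using the numbers from the question:

# Total time ~ RTT + serialization of the payload in each direction.
LINK_BPS = 1_000_000_000   # 1 Gbps pipe from the question
PAYLOAD_BYTES = 1000       # 1000 bytes each way

serialize_s = PAYLOAD_BYTES * 8 / LINK_BPS   # 8e-06 s = 8 microseconds

for name, rtt_ms in [("Server 1", 2), ("Server 2", 20)]:
    total_ms = rtt_ms + 2 * serialize_s * 1000  # both directions: ~0.016 ms
    print(f"{name}: ~{total_ms:.3f} ms total")  # dominated by the RTT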
I have a question about downloading with IDM (Internet Download Manager) versus without it. Given the same client bandwidth and the same server bandwidth for both IDM and a simple download manager, why can we download faster with IDM? What is the reason?
Thanks.
Without a download accelerator, you may not be hitting your own or the remote server's bandwidth bottleneck. This means that either or both of you still have more bandwidth that can be tapped.
Download accelerators tap this extra bandwidth in two ways:
By increasing the number of connections to the server, IDM consumes all or most of your bandwidth and increases the proportion of your total internet bandwidth that goes to the download.
The remote server divides its total bandwidth among the connections to it. So multiple connections to the server ensure that the total bandwidth you're tapping is the sum of those divided shares, removing another bottleneck.
See http://en.wikipedia.org/wiki/Download_manager#Download_acceleration for more.
Typically, a download manager accelerates by making multiple simultaneous connections to the remote server. I believe IDM may actually make multiple requests for the same file at the same time, and thus trick the server into providing higher bandwidth through the multiple connections. Servers are typically bandwidth-limited on a per-connection basis, so by making multiple connections you get higher total bandwidth.
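As an illustration of the mechanism, here is a minimal Python sketch of a segmented download using HTTP Range requests, which is the technique such accelerators rely on. The URL is a placeholder, and the server is assumed to support Range requests and to send a Content-Length header:

import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/bigfile.bin"  # placeholder
NUM_SEGMENTS = 4

def fetch_range(start, end):
    # Ask the server for just the bytes in [start, end] (inclusive).
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Find the total size, then split it into equal segments.
with urllib.request.urlopen(urllib.request.Request(URL, method="HEAD")) as resp:
    size = int(resp.headers["Content-Length"])

bounds = [(i * size // NUM_SEGMENTS, (i + 1) * size // NUM_SEGMENTS - 1)
          for i in range(NUM_SEGMENTS)]

# Fetch all segments in parallel, then stitch them back together in order.
with ThreadPoolExecutor(max_workers=NUM_SEGMENTS) as pool:
    parts = list(pool.map(lambda b: fetch_range(*b), bounds))

data = b"".join(parts)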
Around 24 hours ago I set a new IP address for the A record on my website, and it appears to be working well, pointing visitors to the new IP address. But sometimes it still points users to the old IP address, which is now set up as a restricted-access test environment. How can I ensure that only the new DNS A record is sent to clients? How can I refresh/flush the DNS on the server?
EDIT: Can one lower the TTL BEFORE the IP change so that resolvers flush the old record sooner? How?
Looking at the SOA record for the domain:
primary name server = ns21.ixwebhosting.com
responsible mail addr = admin.ixwebhosting.com
serial = 2011060963
refresh = 10800 (3 hours)
retry = 3600 (1 hour)
expire = 604800 (7 days)
default TTL = 86400 (1 day)
The default TTL says that anyone can cache the result for up to 1 day. Besides that, the refresh value says that a slave server should fetch new data from the master every three hours, so you have to wait at least 24 + 3 = 27 hours before you can trust everyone to have the new information.
The best way to handle this kind of DNS change is to prepare at least 24 hours (or whatever TTL you have) ahead by temporarily lowering the TTL (maybe to 600, which is 10 minutes). Then you can make the change and it takes effect within 10 minutes. When you see that everything works and you no longer need the option of a quick rollback, you can reset the TTL to 86400.
When you change the DNS on the server, the change is immediate, but for others around the world it can take 24-48 hours to see the change. So mainly, you have to wait :D
If you are close to your server's location it could take 2 or 3 hours, but that depends on when your ISP and other ISPs flush their DNS servers' caches.
You can't.
DNS is a distributed system, and clients and intermediary caching resolvers (such as your ISP's) will regard the cached values as correct until they time out.
An approach to make it faster is to reduce the TTL (time-to-live) on the record well in advance of the actual change and then raise it again afterwards. That way, once the old record with its long TTL times out, caching resolvers will refresh more frequently from the authoritative server. But if you've already made the change, it's too late for that and you can only wait.
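To check what TTL a resolver is currently handing out for your record, a quick sketch using the third-party dnspython package (2.x); example.com is a placeholder:

import dns.resolver  # third-party: pip install dnspython

# Query the A record and report the TTL a cache would hold it for.
answer = dns.resolver.resolve("example.com", "A")  # placeholder domain
for record in answer:
    print(record.address)
print("TTL:", answer.rrset.ttl, "seconds")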
I'm about to release an MMORPG. In my game, every second each player sends 30 TCP messages to the server and gets 30 back. The messages are short, around 20 characters each.
The thing is, I have never worked on multiplayer games before. I have programmed the whole server and client, but I don't know what server hardware I'm going to need: RAM, CPU, etc. I still don't know what to be ready for, but let's say 15K concurrent clients. As said, every second each client needs to send and receive 30 TCP messages, and in most cases I also need to update my non-SQL DB with the data.
Update: It's a multiplayer game, so I must have 30 msgs/sec. Most of the messages carry the player's current position. Also, I'm using C++.
It depends on what your (already implemented) server requires. You'll never know until you try some particular hardware. Rent a powerful dedicated server for a month and profile your game server; at the very least, check CPU usage. You'll need multithreaded asynchronous networking.
The details you provided only help to calculate how much bandwidth you need:
~94 bytes (TCP + IP + Ethernet headers) + ~20 bytes (your data) = 114 bytes per packet × 30 per second × 15,000 users = ~50 MB/s × 8 bits = ~400 Mbps of both incoming and outgoing traffic. It seems you're in trouble here. Consider something wiser than sending every message in a separate TCP segment: e.g., implement a buffer that collects data ready to be sent, filled by your game-logic threads, with a separate thread that writes it to the network. That way several small messages can be combined into one TCP packet, greatly reducing your bandwidth requirements.
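A minimal sketch of that buffering idea, in Python for brevity even though the asker's server is C++ (the 2-byte length prefix and the 30 Hz tick are illustrative choices):

import threading, time

class BatchingSender:
    """Collects small messages and flushes them in one TCP write per tick."""

    def __init__(self, sock, tick=1.0 / 30):
        self.sock = sock      # an already-connected TCP socket
        self.tick = tick
        self.buf = []
        self.lock = threading.Lock()
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def send(self, msg: bytes):
        # Game-logic threads call this; nothing touches the network yet.
        with self.lock:
            self.buf.append(msg)

    def _flush_loop(self):
        while True:
            time.sleep(self.tick)
            with self.lock:
                if not self.buf:
                    continue
                # Length-prefix each message so the peer can split them apart.
                payload = b"".join(len(m).to_bytes(2, "big") + m
                                   for m in self.buf)
                self.buf.clear()
            self.sock.sendall(payload)  # one write instead of 30 tiny ones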
But even after this you're still in trouble. I'd recommend waiting for users before investing in a complicated solution. Once you have them, you'll need to implement some kind of clustering; that's a separate story, much more complicated than plain networking.
Try to handle at least 1K users on a single server first. That can bring in some money to hire somebody experienced in game networking/clustering.
If you know you're sending 30 messages every second, why not bundle them into one request per second? It makes a lot of difference in terms of server resources...
And in which language are you going to write your server? I hope it's something dedicated to processing/managing these connections. If so, do some profiling and just measure what you need...
And what is your processor doing every second to process those 30 × 15K messages?
There is no generic answer to your question. It all depends on your software.