Checking latency between client and host - networking

The user is complaining that uploading a 1.2 GB file from Shanghai, China to our data center in Tokyo, Japan takes more than 1 hour. But when I try to upload the file from the US West it is faster and takes 1 minute. I am thinking it might be a latency issue; the user also has a bandwidth of 16-17 Mbps.
How do I perform a latency test? I could ask the user to run a latency test against my servers and conclude whether it is a latency issue.
I know this is a more generic question, but is there any way we can improve this upload performance?

Try using ping and compare the last column (time) between your results and your customer's:
Here is an example of ping result from 2 different servers:
First one has 17.3 ms
64 bytes from 173.194.222.113: icmp_seq=1 ttl=49 time=17.3 ms
The second one 9.35 ms
64 bytes from 216.58.217.206: icmp_req=1 ttl=58 time=9.35 ms
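As a sanity check on whether latency alone can explain the slow upload: a single TCP connection's throughput is bounded by window size divided by round-trip time. Assuming a typical 64 KB window and a ~150 ms RTT from Shanghai to Tokyo (an illustrative figure, not a measurement), that gives roughly 64 KB / 0.15 s ≈ 0.44 MB/s, so a 1.2 GB upload would take around 45 minutes regardless of the user's 16-17 Mbps link. Beyond ping, mtr shows per-hop latency and loss, and iperf3 measures achievable throughput; a minimal sketch, with the server hostname as a placeholder:

# round-trip latency, summary only (20 probes)
ping -c 20 -q upload.example.com

# per-hop latency and packet loss along the route (report mode)
mtr -rwc 100 upload.example.com

# achievable TCP throughput: start "iperf3 -s" on the server,
# then run this from the client (-f M reports MBytes/sec)
iperf3 -c upload.example.com -t 30 -f M

If latency is confirmed, the usual mitigations are parallel or multipart uploads, TCP window scaling, or an ingest point closer to the user, since each keeps more data in flight per round trip.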

Related

Downloading speed extremely slow in IIS for ASP.NET Core 5 Blazor server application

I am extremely frustrated and couldn't find a solution over the last week, so I ended up asking for help here.
I am hosting my business platform on IIS running on Windows Server 2021.
The server port speed is 10 Gbit/s, and I have 125 GB of RAM and 25 cores.
When a user downloads a file from my website, they get a speed of just 100 kbps, 500 kbps at maximum.
My internet speed is 200 Mbps, and I get that same speed on my PC from an online network speed test.
Please help me get the highest possible download speed from my server; I must be missing something, but I have tried everything I could think of to speed it up.
I am getting this speed for file downloads, whereas I get 4-5 Mbps when I download anything from Google Drive.
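One quick way to tell server-side throttling apart from network problems is to measure the raw download speed with curl, once from an outside machine and once on the server itself against localhost; a minimal sketch, with placeholder URLs:

# prints the average download speed in bytes/sec; the body is discarded
curl -s -o /dev/null -w '%{speed_download}\n' https://example.com/files/big.zip

# repeat on the server against localhost to take the network out of the picture
curl -s -o /dev/null -w '%{speed_download}\n' http://localhost/files/big.zip

If the localhost fetch is fast but the external one is slow, the bottleneck is the network path or IIS bandwidth throttling rather than the application itself.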

Having a Database in different data center

Our entire infrastructure is built on Linode, in the Singapore region. The problem is that, as of now, Linode does not provide any block storage option. We have a 3-node Cassandra cluster which is growing at a rate of 4-5 GB per day. Each node has a 192 GB SSD allotted to it. Adding a Cassandra node is simple, but it comes at the cost of maintaining it. At the rate we are growing, we would need 20-30 servers within three months.
Digital Ocean, Singapore region, on the other hand, does have a block storage option. We could simply add more SSD to our servers rather than provision 30 small servers.
Data is streamed from Kafka and stored in Cassandra.
What would be the pros and cons of having your Cassandra cluster in a different data center but in the same country? The latency between the two is about 2 ms. The workload is roughly 5% read ops and 95% write ops.
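A back-of-the-envelope view of the cost: local Cassandra writes are typically sub-millisecond (commitlog plus memtable), so a ~2 ms network hop between the application servers and the cluster would dominate per-write latency for the 95% write workload, even if the absolute numbers remain acceptable. It is worth verifying that the 2 ms figure holds under sustained load and comparing coordinator-level latencies before and after the move; a sketch, with a placeholder hostname:

# steady-state RTT between the two datacenters, summary only
ping -c 100 -q db1.sgp.example.com

# coordinator-level read/write latency percentiles (run on a Cassandra node)
nodetool proxyhistograms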

I can see that Cassandra is CPU-bound for write-heavy workloads, but is it network-bound as well?

SETUP 1
3-node Cassandra cluster. Each node is on a different machine with 4 cores, 32 GB RAM, an 800 GB SSD, and 1 Gbit/s = 125 MBytes/s network bandwidth.
2 cassandra-stress client machines with the exact same configuration as above.
Experiment 1: Ran one client on one machine, creating anywhere from 1 to 1000 threads at a consistency level of QUORUM. The max network throughput on a Cassandra node was around 8 MBytes/s, with CPU usage of 85-90% on both the Cassandra node and the client.
Experiment 2: Ran two clients on two different machines, creating anywhere from 1 to 1000 threads at a consistency level of QUORUM. The max network throughput on a Cassandra node was around 12 MBytes/s, with CPU usage of 90% on the Cassandra node and on both clients.
I did not see double the throughput even though my clients were running on two different machines, but I can understand that the Cassandra node is CPU-bound and that is probably why. That led me to setup 2.
SETUP 2
3-node Cassandra cluster. Each node is on a different machine with 8 cores, 32 GB RAM, an 800 GB SSD, and 1 Gbit/s = 125 MBytes/s network bandwidth.
2 cassandra-stress client machines with 4 cores, 32 GB RAM, an 800 GB SSD, and 1 Gbit/s = 125 MBytes/s network bandwidth.
Experiment 3: Ran one client on one machine, creating anywhere from 1 to 1000 threads at a consistency level of QUORUM. The max network throughput on a Cassandra node was around 18 MBytes/s, with CPU usage of 65-70% on the Cassandra node and >90% on the client.
Experiment 4: Ran two clients on two different machines, creating anywhere from 1 to 1000 threads at a consistency level of QUORUM. The max network throughput on a Cassandra node was around 22 MBytes/s, with CPU usage of <=75% on the Cassandra node and >90% on both clients.
So the question is: with one client node I was able to push 18 MB/s of network throughput, but with two client nodes running on two different machines I was only able to push a peak of 22 MB/s. Why is this the case, even though this time the CPU usage on the Cassandra node is only around 65-70% on an 8-core machine?
Note: I stopped Cassandra and ran a tool called iperf3 between two of the EC2 machines, and I measured a network bandwidth of 118 MBytes/s. I am converting everything into bytes rather than bits to avoid any sort of confusion.
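For reference, results in this shape can be reproduced with stock tooling; the node address and operation count below are placeholders:

# drive writes at QUORUM from one client machine
cassandra-stress write n=1000000 cl=QUORUM -rate threads=500 -node 10.0.0.1

# raw TCP bandwidth with Cassandra stopped: run "iperf3 -s" on the node,
# then from the client (-f M reports MBytes/sec)
iperf3 -c 10.0.0.1 -t 30 -f M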

Cached memory on unix machine continuously grows

On my Ubuntu 12 VPS I am running a full Bitcoin node. When I first start it up, it uses around 700 MB of memory. If I come back 24 hours later, free -m will look something like this:
             total       used       free     shared    buffers     cached
Mem:          4002       3881        120          0         32       2635
-/+ buffers/cache:       1214       2787
Swap:          255          0        255
But then if I clear "cached" using
echo 3 > /proc/sys/vm/drop_caches
and then do free -m again:
             total       used       free     shared    buffers     cached
Mem:          4002       1260       2742          0          1         88
-/+ buffers/cache:       1170       2831
Swap:          255          0        255
You can see the cached column has cleared, and I have way more free memory than it looked like I had before.
I have some questions:
what is this cached number?
my guess is that it's file data being cached for quicker access than going to the disk?
is it okay to let it grow and use all my free memory?
will other processes that need memory be able to evict the cached memory?
if not, should I routinely clear it using the echo 3 command I showed earlier?
Linux tries to utilize system resources efficiently: it caches data to reduce the number of I/O operations, thereby speeding up the system. Metadata about the data is stored in buffers, while the actual file data is stored in the page cache.
Dirty (not-yet-written) pages cannot be dropped, so run
sync
before clearing the cache so that pending data is flushed to secondary storage first.
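Putting that together, the safe sequence looks like this (note that the kernel reclaims page cache on its own when processes need memory, so routinely clearing it is generally unnecessary):

# flush dirty pages to disk; drop_caches only discards clean pages
sync
# 1 = page cache, 2 = dentries and inodes, 3 = both
echo 3 > /proc/sys/vm/drop_caches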

Windows 2003 Server CPU utilization

I have a Dell PowerEdge T410 server (dual quad-core Xeon 5500 series with 16 GB RAM) running Windows Server 2003.
I wrote a program in C# that works with a large amount of numbers; after certain calculations, the results are stored in a 6000 x 6000 matrix. Finally it writes this matrix (36 million entries) to a text file (172 MB).
When I run this program on my laptop, the CPU utilization goes to 100% and it takes about 40 hours to complete the task.
When I run this program on my server, the CPU utilization goes to just 10% and it takes almost the same 40 hours to complete the task.
Now my problem is that the server should obviously utilize more CPU, at least 70%, and complete the task in a shorter time. How can I achieve this goal?
Rewrite the code to take advantage of the greater capabilities of the server, such as the additional cores. A single-threaded program can only occupy one core at a time, and on a dual quad-core machine one busy core out of eight shows up as roughly 12% total CPU, which matches the ~10% you are seeing. Splitting the matrix computation across rows, for example with Parallel.For or PLINQ from the .NET Task Parallel Library, would let the remaining cores contribute.
