Unsynchronization and synchronization for NTP

I am using a GPS source for my NTP server, and I am using the Meinberg NTP program on the client as well.
My NTP client configuration is:
server ntpclock prefer iburst minpoll 2 maxpoll 5
If my NTP client's clock is set back 30 minutes or more, will it have trouble synchronising again? I have tested with offsets of 5, 10 and 15 minutes, but not with 30 minutes or more.
If synchronization takes longer for offsets of 30 minutes and more, what would be the reason or explanation?

Since in all of these cases the offset is greater than the step threshold of 128 ms, the clock should step in a little more than 10 minutes (the stepout threshold); how much more depends on the current poll interval (2^2 = 4 s to 2^5 = 32 s), which in turn depends on how stable the clock was before, and on when the clock fell out of sync relative to the polling. iburst speeds up the resynchronization once the stepout threshold timer triggers.
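If you need to adjust when and whether ntpd steps, the tinker directive exposes these thresholds; a minimal ntp.conf sketch follows (the numeric values are illustrative, and the built-in defaults differ between ntpd versions, so check your build's documentation). One caveat worth adding: a 30-minute offset (1800 s) also exceeds ntpd's default panic threshold of about 1000 s, so ntpd will normally log an error and exit rather than step at all, unless it was started with -g or the panic check is relaxed as shown.

server ntpclock prefer iburst minpoll 2 maxpoll 5

# step/stepout knobs: these are the thresholds discussed above
tinker step 0.128      # offsets above 128 ms are stepped rather than slewed
tinker stepout 600     # how long the large offset must persist before the step
tinker panic 0         # disable the ~1000 s panic check so a 30-minute offset
                       # does not make ntpd exit (alternative: start ntpd with -g)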

Related

ASP.NET application high CPU, high requests/sec

I am load testing an ASP.NET web application on IIS 7. The app server is dual core with 8 GB RAM; the web server is dual core with 8 GB RAM. Running with 50 users for 1 hour, requests/sec is 550 and the CPU maxes out. Requests queued and requests current are 4 and 21 on average. We run on the default configuration. The numbers of logical and physical threads on the web server are 55 and 48 on average. Private bytes consumed went up to 0.4 GB by the end of the load test. # of exceptions thrown/sec increases, but errors total/sec is 0. Cache total turnover rate is high. % GC time is 4.73 on average. There are no errors in the load test and the number of passed transactions is also good. Our concerns are:
- How can we improve the response time?
- Should we limit the requests/sec, and are we stressing the server because of the high number of requests?
- Is 50 users high for the current configuration? We don't have any business requirements or SLAs as of now.
I changed the process config to the following, and that improved the response time by 1-4 seconds:
memoryLimit="60"
autoConfig="false"
maxWorkerThreads="100"
maxIoThreads="100"
minWorkerThreads="40"
minIoThreads="30"
Any insights will be appreciated.
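For reference (not part of the original question), these attributes belong on the <processModel> element under <system.web> in machine.config; a sketch with the values listed above:

<system.web>
  <!-- note: the worker/IO thread settings here are interpreted per CPU -->
  <processModel
      autoConfig="false"
      memoryLimit="60"
      maxWorkerThreads="100"
      maxIoThreads="100"
      minWorkerThreads="40"
      minIoThreads="30" />
</system.web>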

Firebase concurrent connections?

I came across a post: Concurrent firebase.
As per the top answer on that post, if user John comes online at 12:00 and stays online for 24 hours, user Mike comes online at 12:01 and stays online for 24 hours, and user Jack comes online at 12:02 and stays online for 24 hours, then Firebase counts only 1 concurrent connection over those 24 hours.
Did I understand correctly?
I am confused because I thought a concurrent connection means connections to the server at the same time, but as per the explanation above, do concurrent connections mean connections that start at the same time?
I have no idea how you got from that accepted answer by @MikePugh to your conclusions. The text:
Concurrent connections are just that - connections established at the same time. So if you have 3 people using your app to check scores, but user 1's app goes online at 12:00 PM and the connection lasts for 5 seconds, then user 2's app goes online at 12:01 PM for 5 seconds, and user 3's app goes online at 12:02 PM for 5 seconds then you've only ever had 1 concurrent connection.
I added emphasis on the parts that you seem to have skipped in your copy.
In the example Mike gave, each user was connected only for a very short time. So there was never more than a single concurrent connection. In fact, for the majority of the day (23 hours 59 minutes 45 seconds) there were 0 connections. Given that Firebase bills at the 95th percentile, you'd be billed for 0 concurrent connections (if they'd offer such a tier).
You indicate that your users stay connected for 24 hours, which leads to 3 concurrent connections from 12:02 to midnight. So you'd have 3 concurrent connections for the majority of the time.
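A quick way to see the difference, using the connection windows from your version of the example (each user staying online a full 24 hours):

12:00-12:01   only John connected            -> 1 concurrent connection
12:01-12:02   John + Mike connected          -> 2 concurrent connections
12:02 onward  John + Mike + Jack connected   -> 3 concurrent connections

The 3-connection level holds for far more than 5% of the time, so a 95th-percentile measurement still reports 3. In Mike Pugh's 5-second version, the non-zero levels cover only about 15 seconds of the whole day, well under 5%, which is why it rounds down to 0.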

Odd Asp.Net threadpool sizing behavior

I am load testing a .NET 4.0 MVC application hosted on IIS 7.5 (default config, in particular processModel autoConfig=true), and am observing odd behavior in how .NET manages the threads.
http://msdn.microsoft.com/en-us/library/0ka9477y(v=vs.100).aspx mentions that "When a minimum is reached, the thread pool can create additional threads or wait until some tasks complete".
It seems the duration that threads are blocked for plays a role in whether the pool creates new threads or waits for tasks to complete, and the result is not necessarily optimal throughput.
Question: Is there any way to control that behavior, so threads are generated as needed and request queuing is minimized?
Observation:
I ran two tests on a test controller action that does not do much besides Thread.Sleep for an arbitrary time:
50 requests/second with the page sleeping 1 second
5 requests/second with the page sleeping for 10 seconds
For both cases .NET would ideally use 50 threads to keep up with incoming traffic. What I observe is that in the first case it does not do that; instead it chugs along executing some 20-odd requests concurrently, letting the incoming requests queue up. In the second case threads seem to be added as needed.
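One way to make the "50 threads" figure concrete is Little's law: the concurrency needed to keep up is the arrival rate multiplied by the time each request holds a thread.

50 requests/s x 1 s sleep  = 50 requests in flight
 5 requests/s x 10 s sleep = 50 requests in flight

So both tests need roughly 50 threads at steady state; they differ only in how quickly those threads have to be injected.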
Both tests generated traffic for 100 seconds. Here are corresponding perfmon screenshots.
In both cases the Requests Queued counter is highlighted (note the 0.01 scaling)
50/sec Test
For most of the test 22 requests are handled concurrently (turquoise line). As each takes about a second, that means almost 30 requests/sec queue up, until the test stops generating load after 100 seconds and the queue is slowly worked off. Briefly the concurrency jumps to just above 40, but never to 50, the minimum needed to keep up with the traffic at hand.
It is almost as if the threadpool management algorithm determines that it doesn't make sense to create new threads, because it has a history of ~22 tasks completing (i.e. threads becoming available) per second. Completely ignoring the fact that it has a queue of some 2800 requests waiting to be handled.
5/sec Test
Conversely in the 5/sec test threads are added at a steady rate (red line). The server falls behind initially, and requests do queue up, but no more than 52, and eventually enough threads are added for the queue to be worked off with more than 70 requests executing concurrently, even while load is still being generated.
Of course the workload is higher in the 50/sec test, as 10x the number of http requests is being handled, but the server has no problem at all handling that traffic, once the threadpool is primed with enough threads (e.g. by running the 5/sec test).
It just seems to not be able to deal with a sudden burst of traffic, because it decides not to add any more threads to deal with the load (it would rather throw 503 errors than add more threads in this scenario, it seems). I find this hard to believe, as a 50 requests/second traffic burst is surely something IIS is supposed to be able to handle on a 16 core machine. Is there some setting that would nudge the threadpool towards erring slightly more on the side of creating new threads, rather than waiting for tasks to complete?
Looks like it's a known issue:
"Microsoft recommends that you tune the minimum number of threads only when there is load on the Web server for only short periods (0 to 10 minutes). In these cases, the ThreadPool does not have enough time to reach the optimal level of threads to handle the load."
Exactly describes the situation at hand.
Solution: slightly increase minWorkerThreads in machine.config to handle the expected traffic bursts (the value is per core, so 4 gives us 64 threads on the 16-core machine).
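As a sketch, that change is a one-attribute edit on the same <processModel> element in machine.config (values are per core, so 4 on a 16-core box gives a floor of 64 worker threads); an alternative, if you prefer not to touch machine.config, is the ThreadPool.SetMinThreads API called at application start.

<system.web>
  <!-- machine.config sketch: leave the other processModel attributes as they are,
       only add/raise the per-core minimum thread counts -->
  <processModel minWorkerThreads="4" minIoThreads="4" />
</system.web>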

What specifically are wall-clock-time, user-cpu-time, and system-cpu-time in Unix?

I can take a guess based on the names, but what specifically are wall-clock-time, user-cpu-time, and system-cpu-time in Unix?
Is user-CPU time the amount of time spent executing user code, while kernel-CPU time is the amount of time spent in the kernel due to the need for privileged operations (like I/O to disk)?
What unit of time is this measurement in?
And is wall-clock time really the number of seconds the process has spent on the CPU or is the name just misleading?
Wall-clock time is the time that a clock on the wall (or a stopwatch in hand) would measure as having elapsed between the start of the process and 'now'.
The user-cpu time and system-cpu time are pretty much as you said - the amount of time spent in user code and the amount of time spent in kernel code.
The units are seconds (and subseconds, which might be microseconds or nanoseconds).
The wall-clock time is not the number of seconds that the process has spent on the CPU; it is the elapsed time, including time spent waiting for its turn on the CPU (while other processes get to run).
Wall clock time: time elapsed according to the computer's internal clock, which should match time in the outside world. This has nothing to do with CPU usage; it's given for reference.
User CPU time and system CPU time: exactly what you think. System calls, which include I/O calls such as read and write, are executed by jumping into kernel code and running there; that time counts as system time.
If wall clock time < CPU time, then you're executing a program in parallel. If wall clock time > CPU time, you're waiting for disk, network or other devices.
All are measured in seconds, per the SI.
$ time [WHAT-EVER-COMMAND]
real 7m2.444s
user 76m14.607s
sys 2m29.432s
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
real or wall-clock
real 7m2.444s
On a system with a 24-core processor, this command/process took a little more than 7 minutes to complete, and that was while utilizing as much parallelism as possible across all the given cores.
user
user 76m14.607s
The command/process utilized this much CPU time in total, summed across all the cores it ran on.
In other words, on a machine with a single-core CPU, real and user would be nearly equal, so the same command would take approximately 76 minutes to complete. (Here user/real is roughly 76.2 min / 7.04 min, about 10.8, so on average roughly 11 of the 24 cores were busy.)
sys
sys 2m29.432s
This is the time the kernel spent executing basic/system-level operations on behalf of this command, including context switching, resource allocation, system calls, and so on.
Note: The example assumes that your command utilizes parallelism/threads.
Detailed man page: https://linux.die.net/man/1/time
Wall clock time is exactly what it says: the time elapsed as measured by the clock on your wall (or wristwatch).
User CPU time is the time spent in "user land", that is time spent on non-kernel processes.
System CPU time is time spent in the kernel, usually time spent servicing system calls.
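If you want the same three numbers from inside a program rather than from the time command, the POSIX times() call reports them directly; a minimal C++ sketch (the busy loop is just a stand-in workload):

#include <sys/times.h>   // times(), struct tms
#include <unistd.h>      // sysconf()
#include <cstdio>

int main() {
    const long ticks_per_sec = sysconf(_SC_CLK_TCK);

    struct tms before;
    clock_t wall_before = times(&before);   // elapsed real time, in clock ticks

    volatile double x = 0;                  // stand-in workload
    for (long i = 0; i < 100000000L; ++i) x += i * 0.5;

    struct tms after;
    clock_t wall_after = times(&after);

    // real = wall clock, user = time in user code, sys = time in the kernel
    std::printf("real %.2f s\n", double(wall_after - wall_before) / ticks_per_sec);
    std::printf("user %.2f s\n", double(after.tms_utime - before.tms_utime) / ticks_per_sec);
    std::printf("sys  %.2f s\n", double(after.tms_stime - before.tms_stime) / ticks_per_sec);
    return 0;
}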

What's the delay for in TCP/UDP?

HELP PLEASE! I have an application that needs as close to real-time processing as possible, and I keep running into this unusual delay issue with both TCP and UDP. The delay occurs like clockwork and is always the same length of time (mostly 15 to 16 ms). It occurs when transmitting to any machine (even local) and on any network (we have two).
A quick run down of the problem:
I am always using winsock in C++, compiled in VS 2008 Pro, but I have written several programs to send and receive in various ways using both TCP and UDP. I always use an intermediate program (running locally or remotely) written in various languages (MATLAB, C#, C++) to forward the information from one program to the other. Both winsock programs run on the same machine so they display timestamps for Tx and Rx from the same clock. I keep seeing a pattern emerge where a burst of packets will get transmitted and then there is a delay of around 15 to 16 milliseconds before the next burst despite no delay being programmed in. Sometimes it may be 15 to 16 ms between each packet instead of a burst of packets. Other times (rarely) I will have a different length delay, such as ~ 47 ms. I always seem to receive the packets back within a millisecond of them being transmitted though with the same pattern of delay being exhibited between the transmitted bursts.
I have a suspicion that winsock or the NIC is buffering packets before each transmit but I haven't found any proof. I have a Gigabit connection to one network that gets various levels of traffic, but I also experience the same thing when running the intermediate program on a cluster that has a private network with no traffic (from users at least) and a 2 Gigabit connection. I will even experience this delay when running the intermediate program locally with the sending and receiving programs.
I figured out the problem this morning while rewriting the server in Java. The resolution of my Windows system clock is between 15 and 16 milliseconds. That means that packets showing the same millisecond as their transmit time were actually sent at different times within a roughly 16 millisecond window; my timestamps only increment every 15 to 16 milliseconds, so they appear identical.
I came here to answer my question and I saw the response about raising the priority of my program. So I started all three programs, went into task manager, raised all three to "real time" priority (which no other process was at) and ran them. I got the same 15 to 16 millisecond intervals.
Thanks for the responses though.
There is always buffering involved and it varies between hardware/drivers/os etc. The packet schedulers also play a big role.
If you want "hard real-time" guarantees, you probably should stay away from Windows...
What you're probably seeing is a scheduler delay - your application is waiting for other process(s) to finish their timeslice and give up the CPU. Standard timeslices on multiprocessor Windows are from 15ms to 180ms.
You could try raising the priority of your application/thread.
Oh yeah, I know what you mean. Windows and its buffers... try adjusting the values of SO_SNDBUF on the sender and SO_RCVBUF on the receiver side. Also, check the networking hardware involved (routers, switches, media gateways): eliminate as many hops as possible between the machines to avoid latency.
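The buffer tweak suggested here is just a pair of setsockopt calls; a winsock sketch (sock is assumed to be an already-created SOCKET, and the buffer size is only a placeholder; in this particular case the buffers did not turn out to be the cause):

#include <winsock2.h>

// Enlarge the send/receive buffers on an existing socket.
// Returns true if both setsockopt calls succeed (they return 0 on success).
bool SetSocketBuffers(SOCKET sock, int bytes)
{
    const char* opt = reinterpret_cast<const char*>(&bytes);
    bool sendOk = setsockopt(sock, SOL_SOCKET, SO_SNDBUF, opt, sizeof(bytes)) == 0;
    bool recvOk = setsockopt(sock, SOL_SOCKET, SO_RCVBUF, opt, sizeof(bytes)) == 0;
    return sendOk && recvOk;
}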
I met the same problem.
In my case, I was using GetTickCount() to get the current system time; unfortunately it always has a resolution of 15-16 ms.
When I used QueryPerformanceCounter instead of GetTickCount(), everything was all right.
In fact, the TCP socket receives data evenly, not in one batch every 15 ms.
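For anyone hitting the same thing, the difference is easy to see side by side; a minimal C++ sketch (Windows only) comparing the coarse GetTickCount() timestamps with QueryPerformanceCounter:

#include <windows.h>
#include <cstdio>

int main() {
    // GetTickCount() advances in steps of the system timer tick (~15.6 ms by
    // default), so events inside the same tick get identical timestamps.
    DWORD tick_before = GetTickCount();

    LARGE_INTEGER freq, qpc_before, qpc_after;
    QueryPerformanceFrequency(&freq);       // counts per second
    QueryPerformanceCounter(&qpc_before);

    Sleep(5);                               // stand-in for a send/recv round trip

    QueryPerformanceCounter(&qpc_after);
    DWORD tick_after = GetTickCount();

    double fine_ms = double(qpc_after.QuadPart - qpc_before.QuadPart) * 1000.0
                     / double(freq.QuadPart);
    std::printf("GetTickCount delta:            %lu ms (coarse)\n",
                static_cast<unsigned long>(tick_after - tick_before));
    std::printf("QueryPerformanceCounter delta: %.3f ms (fine)\n", fine_ms);
    return 0;
}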
