How to synchronize machines' time

I am working with four machines at a time. It is a real-time environment, so I have to synchronize their clocks to the millisecond. Is there any way to sync the system clock with millisecond accuracy?
Currently I am syncing the machines with the batch script command NET TIME \\TIMESRV /SET /YES, but I am not sure whether it syncs the machines' time to the millisecond. Is there any other way to do this?

Microsoft does not guarantee such precision:
We do not guarantee and we do not support the accuracy of the W32Time service between nodes on a network. The W32Time service is not a full-featured NTP solution that meets time-sensitive application needs.
The W32Time service is primarily designed to do the following:
Make the Kerberos version 5 authentication protocol work.
Provide loose sync time for client computers.
The W32Time service cannot reliably maintain sync time to the range of 1 to 2 seconds. Such tolerances are outside the design specification of the W32Time service.
http://support.microsoft.com/kb/939322/en-us
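If you need to verify how far apart the machines actually drift, one option is to query an NTP source directly and measure the offset yourself. Below is a minimal sketch in TypeScript (Node.js) that sends a raw SNTP request and estimates the local clock offset; the server name is a placeholder, and the midpoint estimate ignores asymmetric network delay.

```typescript
import dgram from "node:dgram";

// Seconds between the NTP epoch (1900) and the Unix epoch (1970).
const NTP_EPOCH_OFFSET = 2208988800;

function queryNtpOffset(server: string, port = 123): Promise<number> {
  return new Promise((resolve, reject) => {
    const packet = Buffer.alloc(48);
    packet[0] = 0x1b; // LI=0, Version=3, Mode=3 (client request)
    const socket = dgram.createSocket("udp4");
    const t1 = Date.now(); // local send time
    socket.once("message", (msg) => {
      const t4 = Date.now(); // local receive time
      // Server transmit timestamp: seconds at byte 40, fraction at byte 44.
      const secs = msg.readUInt32BE(40) - NTP_EPOCH_OFFSET;
      const frac = msg.readUInt32BE(44) / 2 ** 32;
      const serverMs = (secs + frac) * 1000;
      socket.close();
      // Offset relative to the midpoint of the round trip (crude but serviceable).
      resolve(serverMs - (t1 + t4) / 2);
    });
    socket.once("error", reject);
    socket.send(packet, port, server);
  });
}

// Placeholder server; point this at your own time source (e.g. TIMESRV).
queryNtpOffset("pool.ntp.org").then((ms) =>
  console.log(`clock offset ≈ ${ms.toFixed(1)} ms`)
);
```

Running this on all four machines against the same source gives a quick read on whether your current sync actually lands within a millisecond.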

Related

How to send 50,000 HTTP requests in a few seconds?

I want to create a load test for a feature of my app. It uses Google App Engine and a VM. Users send HTTP requests to the App Engine app, and it is realistic for it to receive thousands of requests in a few seconds. So I want to create a load test where I send 20,000-50,000 requests in a timeframe of 1-10 seconds.
How would you solve this problem?
I started by trying Google Cloud Tasks, because it seems perfect for this: you schedule HTTP requests for a specific point in time. The docs say that there is a limit of 500 tasks per second per queue, and that if you need more tasks per second, you can split the tasks across multiple queues. I did this, but Google Cloud Tasks does not execute all the scheduled tasks at the given time: one queue needs 2-5 minutes to execute 500 requests that are all scheduled for the same second.
I also tried a TypeScript script firing asynchronous node-fetch requests, but 5,000 requests take 77 seconds on my MacBook.
I don't think you can get 50,000 HTTP requests "in a few seconds" from "your MacBook"; it's better to consider a dedicated load testing tool (which can be deployed onto a GCP virtual machine in order to minimize network latency and traffic costs).
The tool choice is up to you: either you need a machine type powerful enough to fire 50k requests "in a few seconds" from a single virtual machine, or the tool needs to support a clustered mode so you can spin up several machines that send the requests together at the same moment in time.
Given that you mention TypeScript, you might want to try the k6 tool (it doesn't scale across machines, though; see the sketch below) or check out Open Source Load Testing Tools: Which One Should You Use? to see what the other options are. None of them provides a JavaScript API, but several don't require programming knowledge at all.
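For reference, a burst like the one described might look roughly like this as a k6 script; the constant-arrival-rate executor holds a fixed request rate regardless of response times. The URL, rate, and VU counts are placeholders to adjust.

```javascript
import http from "k6/http";

export const options = {
  scenarios: {
    burst: {
      executor: "constant-arrival-rate",
      rate: 5000,            // requests started per timeUnit
      timeUnit: "1s",
      duration: "10s",
      preAllocatedVUs: 2000, // VUs k6 keeps ready to sustain the rate
      maxVUs: 10000,
    },
  },
};

export default function () {
  http.get("https://your-app.appspot.com/endpoint"); // placeholder URL
}
```

Run it with `k6 run burst.js`; whether a single machine can actually sustain 5,000 requests per second still depends on its CPU and network limits, which is why a large GCP VM (or several) is suggested above.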
A tool you could consider using is siege.
It is Linux-based, and running it from inside GCP avoids the extra cost of sending test traffic from a system outside GCP. You could deploy siege on one relatively large machine, or on a few machines, inside GCP.
It is fairly simple to set up, but since you mention that you need 20-50k requests in a span of a few seconds, note that by default siege caps you at 255 concurrent users. You can raise that limit, though, so it can fit your needs.
You would need to play around with how many connections a machine can establish, since each machine will have a certain limit based on CPU, memory, and the number of network sockets. You can keep increasing the -c number until the machine reports something like "Error: system resources exhausted", and experiment with what your virtual machine on GCP can handle.
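As a rough illustration (URL and numbers are placeholders), a run could look like the following, after raising the default concurrency cap in siege's configuration file:

```sh
# Raise "limit = 255" in ~/.siege/siege.conf before using -c values above 255.
# -c: concurrent users, -t: duration (10S = 10 seconds), -b: no delay between requests
siege -c 1000 -t 10S -b https://your-app.example/endpoint
```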

Forcing a BizTalk Host to Throttle for Debugging Purposes

We're currently having an issue on our production servers and would like to try to replicate it in dev. I'm awaiting access to our performance monitoring tool, and while waiting I would like to experiment a little.
Since I suspect host throttling in prod, I'm thinking of forcing hosts to throttle in dev to see if that recreates the issue.
Is there a way to do this?
As others have mentioned, monitoring the throttling counters, along with other counters like memory and WIP messages, is a must to see what is going on in your production server. I would also recommend setting up a SCOM alert on throttling states of 3 and above (publishing and delivery states), if you have SCOM.
Message throughput can grind to a halt, especially in the memory (4, 5) and queue size (6) states. States 1 and 2 are generally short-lived (e.g. the arrival of a large batch of messages), and BizTalk recovers within a few seconds.
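To watch those states live, one option is to poll the BizTalk throttling performance counters, e.g. from PowerShell. A sketch, assuming the default host name BizTalkServerApplication (substitute your own host instance name):

```powershell
# Sample the publishing and delivery throttling states every 5 seconds.
Get-Counter -Counter @(
  "\BizTalk:Message Agent(BizTalkServerApplication)\Message publishing throttling state",
  "\BizTalk:Message Agent(BizTalkServerApplication)\Message delivery throttling state"
) -SampleInterval 5 -Continuous
```

A non-zero value that persists (especially 4, 5, or 6, as above) is the signal that throttling is biting.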
Simulating the memory state in your dev environment should be straightforward by tweaking the throttling thresholds (obviously not something to be taken lightly in production!).
For example, to trigger the memory threshold states: AFAIK the lowest memory usage threshold you can set is 101 MB. Running a load test in dev should then be able to reproduce the throttle.
There is also apparently a user-based throttling override to set states 10 and 11, although I haven't actually tried it.
Some other experience on avoiding throttling:
(Caveat: I don't have an active BizTalk 2006/R2 setup; this is for 2009/2010.)
If you do a lot of asynchronous processing (e.g. queue receives), make sure that you have split functionality into separate Receive, Processing, and Send hosts. This way you can configure throttling for the asynchronous receive hosts to trigger much earlier than for the processing and send hosts, which should have the effect of constricting new incoming messages to the MessageBox while allowing existing messages to complete processing.
On 64-bit hosts, the default 25% memory host usage throttling level is usually an unnecessary liability; we increased it, per Yossi Dahan's recommendation, to 50% on a 4 GB server.
Note that suspended messages count toward throttling state 6, so ensure that you have a strategy for dealing with suspended messages (and obviously ensure that the SQL Agent jobs are running!).

Azure - How many users can a web service role support?

I saw this question:
How many users on one azure instance before I hit performance issues?
It discusses how many users an Azure instance could support for a web page. I'm wondering whether this would be any different for a web page vs. a web service that client applications (such as mobile phones) call into to get data. For example, if you have a single Azure web role running that exposes a REST endpoint, how many devices could call into the service before it starts to buckle under the pressure?
How long is a string? :-)
If your app computes one million digits of pi on each web request, it will probably handle fewer concurrent web requests than an app that replies to each web request with "hello world."
(This is another, blunter, version of David's answer.)
A Web Role instance is merely a Windows Server 2008 R2 (or SP2) virtual machine of a given size (1-8 cores, 1.75-14 GB usable RAM, 100-800 Mbps network). You can run web sites, different web servers (Tomcat, for example), WCF services (through IIS or standalone ServiceHosts), etc.
Scaling is going to depend heavily on the app itself: is it CPU-constrained? Network-constrained? Do you have a queue-based workload whose backlog is growing?
Sometimes it's critical to scale up to larger VMs just to relieve one of the constraints mentioned. It's always wise to pick the smallest VM size that handles a baseline load (e.g. 1 or 2 users), then scale out to more instances as needed.
It's important to identify the key performance indicators (KPIs) for your app. You can then automate your scaling with something like the Autoscaling Application Block (WASABi), sketched below.
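As a purely illustrative sketch (hypothetical names and thresholds, not the WASABi rule format or a real Azure API), a reactive scaling rule boils down to something like this:

```typescript
// Decide the desired instance count from a couple of KPIs.
// Names, thresholds, and bounds are all illustrative.
interface Kpi {
  cpuPercent: number;   // average CPU across instances
  queueBacklog: number; // pending queue messages
}

function desiredInstanceCount(current: number, kpi: Kpi): number {
  const min = 2; // keep at least two instances for reliability
  const max = 8;
  if (kpi.cpuPercent > 75 || kpi.queueBacklog > 1000) {
    return Math.min(current + 1, max); // scale out under sustained load
  }
  if (kpi.cpuPercent < 25 && kpi.queueBacklog === 0) {
    return Math.max(current - 1, min); // scale back in when idle
  }
  return current;
}
```

WASABi expresses the same idea declaratively, as reactive rules over performance counters and queue lengths.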
Here's a reference page with all VM sizes, with details about CPU, local disk, network bandwidth, and RAM.

Web service that can withstand 1,000 concurrent users with responses in 25 milliseconds

Our client's requirement is to develop a WCF service which can withstand 1-2k concurrent website users with responses around 25 milliseconds.
The service reads a couple of columns from a database and will be consumed by different vendors.
Can you suggest an architecture, or any extra steps I should take while developing? And how do we calculate the server hardware configuration needed to cope with this?
Thanks in advance.
Hardly possible. You need a network connection to the service, service activation, business logic processing, a database connection (another network hop), and a database query. Because of the 2,000 concurrent users you need several application servers, so the network path is also affected by a load balancer. I can't imagine a network and hardware infrastructure able to complete such an operation within 25 ms for 2,000 concurrent users. Such a requirement is not realistic.
I suspect that if you simply try to run the database query from your computer against the remote DB, you will see that even that simple task does not complete in 25 ms.
A few principles:
Test early, test often.
Successful systems get more traffic
Reliability is usually important
Caching is often a key to performance
To elaborate: build a simple system right now. Even if the business logic is very simplified, if it's a web service with database access you can performance test it. Test with one user. What do you see? Where does the time go? As you develop the system, adding in real code, keep running that test. The reasons: a) you find out right now whether 25 ms is even achievable; b) you spot any code change that hurts performance immediately. Then test with lots of users: what degradation patterns do you hit? This starts to give you an indication of your platform's capabilities.
I suspect the outcome will be that a single machine won't cut it for you. And even if it does, success brings more traffic, so plan to use more than one server.
In any case, for reliability reasons you need more than one server, and all sorts of interesting implementation details fall out when you can't assume a single server - e.g. you don't have Singletons any more ;-)
Most of the time we get good performance by using a cache. Will many users ask for the same data? Can you cache it? Are there updates to consider, in which case you may need a distributed cache with clustered invalidation? That multi-server case emerges again.
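As a language-agnostic illustration of that read-through pattern (sketched in TypeScript here; in a WCF service the same shape applies with MemoryCache or a distributed cache such as memcached, and all names are hypothetical):

```typescript
// Tiny read-through cache: serve repeat requests from memory, hit the DB only on a miss.
const cache = new Map<string, { value: unknown; expires: number }>();
const TTL_MS = 5_000; // how stale the data may be; this drives your invalidation story

async function getColumns(
  key: string,
  loadFromDb: (k: string) => Promise<unknown> // the slow path: a DB round trip
): Promise<unknown> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value; // cache hit: no network, no DB, microseconds
  }
  const value = await loadFromDb(key);
  cache.set(key, { value, expires: Date.now() + TTL_MS });
  return value;
}
```

Once you have more than one server, this in-process map becomes per-machine state, which is exactly where a distributed cache with clustered invalidation comes in.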
Why do you need WCF?
Could you shift as much of that service as possible into static serving and cache lookups?
If I understand your question, thousands of users will be hitting your website and executing queries on your DB. You should definitely look into connection pooling for your WCF connections, but your best bet will be to avoid doing DB lookups altogether and have your service return data from cache hits.
I'd also look into why you can't just connect directly to the database for your lookups - do you actually need a WCF service in the way at all?
Look into Memcached.

What are the best ways of maintaining server time?

Many applications need the server's time frequently.
What are the best ways to maintain server time on a client machine while minimizing requests to the server?
The standard way is to use NTP. You can configure it to synchronize as often as needed; if your machines have decent clocks, and unless you want microsecond accuracy or anything like that, you should be fine synchronizing once a day or so.
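If you'd rather handle it at the application level, a common pattern is to estimate the offset once and derive server time locally between syncs. A minimal sketch in TypeScript, assuming a hypothetical endpoint that returns the server's epoch milliseconds:

```typescript
let offsetMs = 0; // estimated (server time - client time)

// Sync once (or on a long interval); one request covers hours of local reads.
async function syncOnce(url: string): Promise<void> {
  const t0 = Date.now();
  const res = await fetch(url); // assumed to return {"serverNow": <epoch ms>}
  const { serverNow } = await res.json();
  const t1 = Date.now();
  offsetMs = serverNow - (t0 + t1) / 2; // midpoint estimate of when the server answered
}

// Cheap local read: no further requests to the server.
function serverTime(): Date {
  return new Date(Date.now() + offsetMs);
}
```

This keeps server traffic to one request per sync interval, with accuracy bounded roughly by half the round-trip time plus local clock drift.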
