Multiple Azure Redis connections - asp.net-core-webapi

To reduce latency, in Startup.cs of an ASP.NET Core 2.1 Web API I am creating 2 static connections to Azure Redis and reusing those same connection instances for the application's entire life cycle.
Is it good practice to create multiple connections to one Azure Redis instance? What is the maximum number of connections? Will multiple connections have billing implications? Are Azure Redis usage charges based on the number of connections or on the amount of data transferred? Please confirm.

First, it is not good practice to create two static Azure Redis connections in the application.
In typical projects Redis is not used constantly; a connection is instantiated when the business logic needs it and released after use. If you do need it frequently, you can instantiate it once in Startup.cs when the project starts and expose that single instance globally, so instances are not repeatedly created and destroyed.
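For illustration, a minimal sketch of that single shared connection using StackExchange.Redis; the connection string is a placeholder for your own Azure Redis endpoint:

```csharp
// A lazily initialized singleton ConnectionMultiplexer, created once and
// reused for the application's lifetime. Endpoint and password are placeholders.
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                "your-cache.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False"));

    // All callers share the same multiplexer; StackExchange.Redis multiplexes
    // commands over it, so per-request connections are unnecessary.
    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Usage: IDatabase db = RedisConnection.Connection.GetDatabase();
```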
For Azure Redis billing, you can refer to the official documentation. Billing is based neither on the number of connections nor on the amount of data transferred; the cache is billed for the time it is provisioned, according to its tier.

It is actually recommended to use different connections to reflect varying payload sizes, i.e. you could set up a connection with a higher timeout for data that is bigger in size, as opposed to data that is small. This is recommended only when the values you store in Redis vary widely in size, e.g. 1 KB to 100 KB, and you cannot reduce the size of the payload.
Having separate connections ensures that the pipelining that usually happens when fetching data does not result in cascading timeouts. Multiple connections are also recommended in the Microsoft docs; scroll to the bottom of the page below and see point 3:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-troubleshoot-client#large-request-or-response-size
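A hedged sketch of that two-connection approach, assuming StackExchange.Redis; the endpoint and timeout values are illustrative only:

```csharp
// One multiplexer tuned for small, fast payloads and one with a larger
// SyncTimeout for big values, so slow large reads cannot stall small ones.
using StackExchange.Redis;

public static class RedisConnections
{
    private static ConfigurationOptions Options(int syncTimeoutMs)
    {
        var options = ConfigurationOptions.Parse(
            "your-cache.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False");
        options.SyncTimeout = syncTimeoutMs;   // ms to wait for synchronous operations
        return options;
    }

    // Small payloads (~1 KB): keep a modest timeout.
    public static readonly ConnectionMultiplexer Small =
        ConnectionMultiplexer.Connect(Options(5000));

    // Large payloads (~100 KB): allow more time before timing out.
    public static readonly ConnectionMultiplexer Large =
        ConnectionMultiplexer.Connect(Options(15000));
}
```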

Related

When will the Cosmos DB client's mapping cache be refreshed?

We are using Cosmos DB SDK version 2.9.2 and perform document CRUD operations. Usually the end-to-end P95 latency is 20 ms, but sometimes the latency is over 1000 ms, and the high-latency period lasts from 10 hours to 1 day. The collection is not throttling.
We have gotten some background information from:
https://icm.ad.msft.net/imp/v3/incidents/details/171243015/home
https://icm.ad.msft.net/imp/v3/incidents/details/168242283/home
There are some diagnostics strings in the tickets.
We know that the client maintains a cache of the mapping from logical partition to physical replica address. This mapping may become outdated because of replica movement or outages, so the client tries to read from the second/third replica. However, this retry has a significant impact on end-to-end latency. We also observe that the high latency/timeouts can last for several hours, even days. I expect there is some mechanism for refreshing the mapping cache in the client, but it seems the client stops visiting more than one replica only after we redeploy our service.
Here are my questions:
How can the client tell whether it's unable to connect to a certain replica? Will the client wait until a timeout, or does the server tell the client that the replica is unavailable?
Under which conditions will the mapping cache be refreshed? We are using Session consistency and TCP mode.
Will restarting our service force the cache to be refreshed? Or does refreshing only happen when the machine restarts?
When we find there is a replica outage, is there any way to mitigate it quickly?
What operations are performed (Document CRUD or query)?
And what are the observed latencies & frequencies? Also please check if the collection is throttling (with custom throttling policy).
The client does maintain some metadata and does handle its staleness efficiently, within SLA bounds.
Can you please create a support ticket with the account details and the 'RequestDiagnostics' output, and we shall look into it.
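A minimal sketch of capturing those diagnostics per request, assuming the .NET SDK 2.x in direct/TCP mode; the endpoint, key, database/collection/document ids, and the 100 ms threshold are placeholders:

```csharp
// Logs the per-request diagnostics string for slow reads, which is the
// information a support ticket will ask for.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class DiagnosticsSample
{
    static async Task Main()
    {
        var client = new DocumentClient(
            new Uri("https://your-account.documents.azure.com:443/"),
            "<auth-key>",
            new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,
                ConnectionProtocol = Protocol.Tcp
            },
            ConsistencyLevel.Session);

        var docUri = UriFactory.CreateDocumentUri("mydb", "mycoll", "doc-id");
        ResourceResponse<Document> response = await client.ReadDocumentAsync(
            docUri,
            new RequestOptions { PartitionKey = new PartitionKey("pk-value") });

        // RequestLatency and RequestDiagnosticsString expose the per-replica
        // timings behind a slow request.
        if (response.RequestLatency > TimeSpan.FromMilliseconds(100))
        {
            Console.WriteLine(response.RequestDiagnosticsString);
        }
    }
}
```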

BizTalk 2010 Determine Host Throttle settings for receive locations

Because a required pipeline component seems to have trouble hitting a database for details of messages, I am planning to use host throttling to limit the number of files BizTalk is processing at the receive location. I want to be able to indicate that X messages should be processed within Y seconds (or any other feasible timespan). Does anyone know which throttling settings can be used to force this behavior?
I know how to set the values; however, I cannot find the best configuration.
(Note: one solution might be to adjust the pipeline, but it contains third-party components which cannot be changed.)
From How BizTalk Server Implements Host Throttling, BizTalk looks at:
Amount of memory in use (both system-wide and host process memory).
Number of in-process messages being delivered or processed (threshold for outbound throttling).
Number of threads in use.
Database size, measured by the number of items in the queue tables for all hosts and the number of items in the spool and tracking tables.
Number of concurrent database connections.
Rate of message publishing (inbound) and delivery or processing (outbound).
The only one that throttles inbound is the rate of message publishing, but that kicks in after the pipeline/port has processed the message, so it may not be of any use in this scenario; you would have to test that.
You will probably want to set that process up under its own host, so that if it hits throttling thresholds it does not throttle everything else as well.
If possible you should move the component to a send port pipeline as throttling send ports is much more controllable. One way is to set the send port to ordered delivery, although that can cause a backlog especially if you get a suspended message.
I think your most straightforward approach here would be to write a custom adapter. Unfortunately, the out-of-the-box File adapter does not directly support throttling/polling intervals, and I don't think the suggestions already given would directly impact the custom pipeline processing if it's hitting the DB directly through ADO.NET (but it couldn't hurt to try). You can set the BatchSize property in the file adapter settings, but even then nothing stops it from submitting that batch size as fast as it possibly can, over and over again.
A custom adapter could be created to wait for a period before submitting additional files for processing. You could base it on the SDK File Adapter sample.
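As a rough illustration of the "X messages per Y seconds" gate such a custom adapter would need, here is a hedged sketch; the class and the point at which it would be called are hypothetical, not part of the adapter SDK API:

```csharp
// Blocks submissions once the per-window quota is used up. Assumes a single
// submission loop (not thread-safe), as in the SDK File Adapter sample's
// file-pickup loop.
using System;
using System.Threading;

class SubmissionThrottle
{
    private readonly int _maxMessages;      // X: messages allowed per window
    private readonly TimeSpan _window;      // Y: length of the window
    private int _sentInWindow;
    private DateTime _windowStart = DateTime.UtcNow;

    public SubmissionThrottle(int maxMessages, TimeSpan window)
    {
        _maxMessages = maxMessages;
        _window = window;
    }

    // Blocks until the current window has capacity for one more submission.
    public void WaitForSlot()
    {
        while (true)
        {
            var elapsed = DateTime.UtcNow - _windowStart;
            if (elapsed >= _window)
            {
                _windowStart = DateTime.UtcNow;   // start a new window
                _sentInWindow = 0;
                elapsed = TimeSpan.Zero;
            }
            if (_sentInWindow < _maxMessages)
            {
                _sentInWindow++;
                return;
            }
            Thread.Sleep(_window - elapsed);      // wait out the rest of the window
        }
    }
}
```

The adapter would call WaitForSlot() before submitting each file into BizTalk, giving the "X per Y seconds" behavior asked for above.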

Should each application have its own host instance / host instances?

Should each application in BizTalk have its own host instance / host instances?
I've read various blog posts and books that it's good practice to have different host instances for different tasks, e.g. one for receive, one for send, one for orchestrations and one for tracking.
But should each application get its own receive, send, orchestration and tracking host, or should there be just one of each for all applications?
Short answer: probably not, unless a particular aspect of that application is causing a performance issue for the other applications. If that is the case, you would create a host only for the particular part that is causing the issue, e.g. if it is the receive adapter, create a host for the receive locations that run under that adapter; if it is the orchestrations, create a processing host for that application's orchestrations to run under.
Long answer, from Microsoft's
High Availability for BizTalk Hosts
Disadvantages of Additional Hosts
While there are benefits of creating additional host instances, there are also potential drawbacks if too many host instances are created. Each host instance is a Windows service (BTSNTSvc.exe or BTSNTSvc64.exe), which generates additional load against the MessageBox database and consumes computer resources, such as CPU, memory, and threads. Other than these, you have the following reasons for not configuring too many additional host instances:
Several performance counters are reported per host with too much granularity. This affects the usability for the administrator who would need to traverse through a lot of data. This has a negative impact on the overall view the administrator has.
Each host consumes considerable amount of memory that might lead to a situation of throttling and reduced performance.
If the hosts have receive adapters that continuously perform polling, each host will poll the database at short intervals, thereby resulting in degraded performance.
As you've read, there are no definite rules.
Should each Application have a dedicated Host/Instance? Sure, if you've got a relatively small number of Applications for the hardware.
It's something to also consider if one Application is problematic.
Should each Application have separate Receive, Send or Orch hosts? No, not to start. If you observe a particular operation consuming a disproportionate amount of resources, then split it out.

Web service that can withstand 1000 concurrent users with responses in 25 milliseconds

Our client's requirement is to develop a WCF service which can withstand 1-2k concurrent website users with a response time of around 25 milliseconds.
This service reads a couple of columns from a database and will be consumed by different vendors.
Can you suggest an architecture, or any extra steps I need to take while developing? And how do we calculate the server hardware configuration needed to cope with this?
Thanks in advance.
Hardly possible. You need a network connection to the service, service activation, business logic processing, a database connection (another network hop), and the database query itself. Because of the 2,000 concurrent users you need several application servers, so the network path is also affected by a load balancer. I can't imagine a network and hardware infrastructure able to complete such an operation within 25 ms for 2,000 concurrent users. Such a requirement is not realistic.
I guess that if you simply try to run the database query from your computer against the remote DB, you will see that even that simple task will not complete in 25 ms.
A few principles:
Test early, test often.
Successful systems get more traffic
Reliability is usually important
Caching is often a key to performance
To elaborate: build a simple system right now. Even if the business logic is very simplified, if it's a web service with database access you can performance-test it. Test with one user. What do you see? Where does the time go? As you develop the system, adding in real code, keep running that test. Reasons: a) right now you know whether 25 ms is even achievable; b) you spot any code changes that hurt performance immediately. Then test with lots of users: what degradation patterns do you hit? This starts to give you an indication of your platform's capabilities.
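A minimal load-test sketch of that "test early, test often" advice: hammer one endpoint with N concurrent callers and report latency percentiles. The URL, concurrency level, and request counts are placeholders for your own service:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class LoadTest
{
    static async Task Main()
    {
        const int concurrency = 100;          // scale this up run by run
        const int requestsPerCaller = 50;
        var url = "http://localhost:8080/lookup?id=42";   // placeholder endpoint
        var latencies = new ConcurrentBag<double>();
        using var http = new HttpClient();

        // Each caller issues its requests sequentially; callers run in parallel.
        var callers = Enumerable.Range(0, concurrency).Select(async _ =>
        {
            for (int i = 0; i < requestsPerCaller; i++)
            {
                var sw = Stopwatch.StartNew();
                (await http.GetAsync(url)).EnsureSuccessStatusCode();
                latencies.Add(sw.Elapsed.TotalMilliseconds);
            }
        });
        await Task.WhenAll(callers);

        var sorted = latencies.OrderBy(x => x).ToArray();
        Console.WriteLine($"P50 {sorted[(int)(sorted.Length * 0.50)]:F1} ms, " +
                          $"P95 {sorted[(int)(sorted.Length * 0.95)]:F1} ms");
    }
}
```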
I suspect that the outcome will be that a single machine won't cut it for you. And even if it will, if you're successful you get more traffic. So plan to use more than one server.
And anyway, for reliability reasons you need more than one server. All sorts of interesting implementation details fall out when you can't assume a single server, e.g. you don't have singletons any more ;-)
Most times we get good performance using a cache. Will many users ask for the same data? Can you cache it? Are there updates to consider? In that case, do you need a distributed cache with clustered invalidation? That multi-server case emerges again.
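A minimal cache-in-front-of-the-database sketch, assuming the lookup reads a couple of columns by key as described above; the key shape, the 30-second TTL, and the LoadFromDatabase helper are placeholders:

```csharp
// In-process cache (System.Runtime.Caching) consulted before the DB. A
// distributed cache would replace MemoryCache but keep the same shape.
using System;
using System.Runtime.Caching;

class VendorLookupService
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public string GetVendorData(string key)
    {
        if (Cache.Get(key) is string cached)
            return cached;                       // cache hit: no DB round trip

        string value = LoadFromDatabase(key);    // the expensive path
        Cache.Set(key, value,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(30) });
        return value;
    }

    private string LoadFromDatabase(string key)
    {
        // Placeholder for the real ADO.NET query.
        return $"data-for-{key}";
    }
}
```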
Why do you need WCF?
Could you shift as much of that service as possible into static serving and cache lookups?
If I understand your question, thousands of users will be hitting your website and executing queries against your DB. You should definitely look into connection pooling for your WCF connections, but your best bet will be to avoid DB lookups altogether and have your website return data from cache hits.
I'd also look into why you couldn't just connect directly to the database for your lookups; do you actually need a WCF service in the way at all?
Look into Memcached.

Client and cache configuration for Oracle Coherence

I have a specific scenario in which we want to use Coherence as a distributed cache, which I am going to describe here.
I have 20+ standalone processes which will put data into the cache continuously. Their frequencies differ, though that's not a concern.
And 2 processes which will read data from those caches.
I don't need any underlying DB beyond what Coherence provides; data will be written to the cache and read from the cache.
I have a 4-node cluster at my disposal (cost constraint) and the Coherence cluster will be on different boxes (infrastructure constraint); both the processes populating the cache and the ones reading from it will be on different machines.
The daily peak memory size of the cache will hover around 6 GB max, with a minimum of 2 GB.
The cache will hold daily data only, and I will have separate archiving processes simultaneously archiving it; the point is that the cache will stay at this size for now. Let's say I am going to keep the date out of the key equation.
Though I would like to explore whether I can store more in those 4 nodes. Right now it's simple serialization; I could explore other binary formats. Or should I definitely do so at this cache size?
My read and write operations are fairly spread out over the day, meaning reads and writes keep happening from those 2 reading clients and 20+ writing clients; it's not as if one of them dominates. Each background process does have a startup batch phase which pushes more to the cache than the continuous pushing afterwards, but the continuous pushing moves a fair amount of data too.
Now my questions regarding the points above (and because of some confusion as well):
The biggest one: somebody told me that I can have only a limited number of connections depending on the number of nodes we have bought, so if it's 4, we ideally should have at most 4 connections and should therefore develop a gatekeeper kind of application, and so on, even if we use TCP Extend. From my reading so far I don't think so. Is that right? The point is, I don't want to go that way if it really is not a constraint.
In other words, is there a limit on connections through the proxy service depending on the number of nodes in the cluster?
Somewhat related to the above: at most, I will pay some performance penalty when pushing to the cache if I go the Extend way, right?
Partitioned cache vs. near cache, given that both read latency and cache freshness are extremely critical (the most important question I have).
I really want to see the benefit obtainable from going to POF instead of, let's say, Java serialization/Externalizable/protobuf. Can Coherence support protobuf out of the box? (Maybe for later on.)
There's no technical limitation to the number of connections a Coherence Extend proxy can support except normal network and hardware resource constraints. You will have to ask an Oracle sales person if there are licensing limitations.
There is some performance impact from using a proxy because you are adding an additional network hop (client to proxy to cluster). If you use POF serialization then the proxy does not have to serialize/deserialize values. It can just pass the object through in its serialized form. In most applications the performance impact of using a proxy is tiny because Coherence is highly optimized for network speed. You are not required to use a proxy unless your clients are .NET or C++, but there are advantages of isolating client performance from impacting the cache.
Near cache will improve retrieval performance dramatically if there are a number of frequently retrieved items for a client, since they will be found in-process.
POF offers performance improvements based on faster serialization/deserialization and more compact storage. It is always best to try with test data based on your real production data and measure the difference yourself. Coherence does not support protobuf out of the box.
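To make the POF point concrete, a hedged sketch of a POF-enabled type using the Coherence for .NET client (the Java PortableObject interface is analogous); the Trade class and its field indexes are hypothetical, and POF types must also be registered with a type id in the POF configuration file:

```csharp
// POF reads and writes fields by numeric index instead of by name, which keeps
// the wire format compact and lets an Extend proxy pass values through without
// deserializing them.
using Tangosol.IO.Pof;

public class Trade : IPortableObject
{
    public string Symbol { get; set; }
    public double Price { get; set; }
    public int Quantity { get; set; }

    public void ReadExternal(IPofReader reader)
    {
        Symbol = reader.ReadString(0);
        Price = reader.ReadDouble(1);
        Quantity = reader.ReadInt32(2);
    }

    public void WriteExternal(IPofWriter writer)
    {
        writer.WriteString(0, Symbol);
        writer.WriteDouble(1, Price);
        writer.WriteInt32(2, Quantity);
    }
}
```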
