Design approach to decouple DB write to reduce application latency - amazon-dynamodb

I want to use DynamoDB to save application operation data at the end of a session for auditing purposes, without affecting the actual application latency. Should I decouple the DB write activity with an SQS queue, so that the DB write delay has no effect on the actual application? Or what would be the best design technique for dealing with this scenario?
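A minimal sketch of the decoupling described above, using the AWS SDK for .NET (the queue URL and class names are placeholders, not from the question): the request path only enqueues the audit record, and a separate consumer reads the queue and performs the DynamoDB write at its own pace.

    using System.Text.Json;
    using System.Threading.Tasks;
    using Amazon.SQS;
    using Amazon.SQS.Model;

    public class AuditPublisher
    {
        // Hypothetical queue URL; a separate consumer (for example a Lambda function
        // or a worker service) reads this queue and performs the DynamoDB PutItem,
        // so the write latency never sits on the request path.
        private const string AuditQueueUrl =
            "https://sqs.us-east-1.amazonaws.com/123456789012/session-audit";

        private readonly IAmazonSQS _sqs = new AmazonSQSClient();

        public Task PublishAsync(object sessionAuditRecord)
        {
            var request = new SendMessageRequest
            {
                QueueUrl = AuditQueueUrl,
                MessageBody = JsonSerializer.Serialize(sessionAuditRecord)
            };
            // Enqueueing is a single fast call; the application does not wait
            // for the eventual DynamoDB write.
            return _sqs.SendMessageAsync(request);
        }
    }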

Related

Corda performance when calling a flow several times

We built our application based on cordapp-template-kotlin, and used the client folder infrastructure to provide REST services to a consumer, which is a web application.
The services receive a list of data items that serve as parameters to the flows, and we call the flows in a chain, once per item in the data list. For example, the web application provides a list of names, and with that list we create accounts on Corda.
We want to improve the performance of our scenario as a whole, and one aspect I thought could improve it would be to parallelize the flow calls. But when I parallelized the creation of accounts, for example, I did not get any performance gain, and I would like to know why.
Creating accounts one by one in a linear way had the same performance as creating them in parallel. Is this the expected behavior, or is there a problem in my development?
Corda Open Source doesn't have a multi-threaded flow state machine, so you won't be able to benefit from parallel flow execution on Corda Open Source.
A better approach on Corda Open Source would perhaps be to batch the account creations into a single transaction within one flow, rather than creating the accounts with multiple flow invocations.
With Corda Enterprise, however, you could benefit from the multi-threaded flow state machine. But I would still recommend exploring the batching approach and finding an optimal solution with maximum efficiency.

Where to use CosmosDB?

Cosmos DB's global distribution is a good feature that gives faster responses. This is useful for mobile applications that access Cosmos DB directly and have users spread across the globe.
However, I am using an ASP.NET web application hosted in Azure, so my application-to-database communication will always be over a fixed distance.
Can I benefit from Cosmos DB in this case?
This is for an Azure-hosted ASP.NET application.
You can utilize Cosmos DB when you understand NoSQL concepts and your code is written accordingly, when your reads and writes follow different paths, when you are planning to build microservices, or when you have other projects that depend on or communicate with your web app and use the same database.
There are some points you need to take into account before choosing CosmosDB as the database.
Pricing model! Cosmos DB is not a cheap database, and the pricing model is based on provisioned throughput. Requests that exceed the provisioned throughput will be rejected by the database, so first make sure you completely understand how that works.
Like other document-based databases, if you want to keep a graph of objects in a document, you should consider how to handle concurrent updates to those documents (if that is the case in your app). Make sure you know the differences between document-based and relational databases well.
But regarding the benefits:
It has great integration support with other PaaS services in Azure
It scales very well if you have a good partitioning strategy
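A rough sketch of what those two points look like with the Cosmos DB .NET SDK (Microsoft.Azure.Cosmos); the database, container, partition key, and throughput values here are placeholders:

    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    public static class CosmosSetup
    {
        public static async Task<Container> CreateContainerAsync(string connectionString)
        {
            var client = new CosmosClient(connectionString);
            Database database = await client.CreateDatabaseIfNotExistsAsync("appdb");

            // Throughput is provisioned per container (or per database); requests
            // beyond it get throttled, so size it deliberately. The partition key
            // determines how well the container scales out.
            return await database.CreateContainerIfNotExistsAsync(
                new ContainerProperties(id: "audit", partitionKeyPath: "/userId"),
                throughput: 400);
        }
    }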

ASP.NET Session limit best practice

We're running a PaaS ASP.NET application in an Azure App Service with 3 instances and managing session data out-of-proc in a SQL Server database.
The application is live, and we've noticed a large amount of session data for some users when following certain paths, e.g. some users have session data upwards of 500k (for a simple site visit with no login the average session is around the 750-3,000 mark, which is what I'd expect).
500k sounds excessive, but I was wondering what is normal in large enterprise applications these days and what the cons of holding so much data in session are.
My initial thoughts would be:
No effect on Web App CPU (possibly a decrease, in fact) because we're not constantly doing queries,
No effect on Web App memory because we're running out-of-proc,
Large spikes in DTUs on the SQL Server session database when session garbage collection (expired-session cleanup) runs,
The application may be a bit slower because it takes longer to read and write session data between requests,
May not be ideal for users with poor internet connections,
Possible increase in memory leaks if objects aren't scoped correctly.
Does my reasoning make sense or have I missed something?
Any thoughts and advice would be appreciated,
Many thanks.
I totally agree with your reasoning behind using out-of-proc session management for Azure App Service instances. Using in-proc sessions in the cloud is a strict no: the reason to host in the cloud is high availability, which is achieved with a distributed environment.
From your points, I assume that speed is a concern for you, as it is for most web applications. To address this, you might consider using Azure Redis Cache.
Here is the documentation for configuring ASP.NET session state with Azure Redis Cache: https://learn.microsoft.com/en-us/azure/redis-cache/cache-aspnet-session-state-provider
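As a rough sketch of what the provider from that article looks like in web.config (the host name and access key below are placeholders for your own cache instance):

    <sessionState mode="Custom" customProvider="RedisSessionStateStore">
      <providers>
        <!-- Requires the Microsoft.Web.RedisSessionStateProvider NuGet package. -->
        <add name="RedisSessionStateStore"
             type="Microsoft.Web.Redis.RedisSessionStateProvider"
             host="contoso.redis.cache.windows.net"
             port="6380"
             accessKey="..."
             ssl="true" />
      </providers>
    </sessionState>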

Architecture For A Real-Time Data Feed And Website

I have been given access to a real time data feed which provides location information, and I would like to build a website around this, but I am a little unsure on what architecture to use to achieve my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, so building a website that talks directly to the feed is out, as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to the website.
From a front-end connection perspective, web services sound like they may work, but would this also create multiple connections to the feed, one for each user? I would also like the back-end connection to be persistent, so that data is retrieved and processed even when the site is not being visited; I believe IIS recycles web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would be looking to push the data to the website every time the data changes or new data is received.
What is the best way of achieving this, and what technologies are there out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data onto a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then publish messages to the registered clients (PubSub; this SO post seems relevant, maybe you can ask there for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
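As a rough sketch of how components C and D could fit together with classic ASP.NET SignalR (the hub, group, and callback names here are made up for illustration):

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    // D: clients connect to this hub and subscribe to the channels they care about.
    public class LocationHub : Hub
    {
        public Task Subscribe(string channel)
        {
            return Groups.Add(Context.ConnectionId, channel);
        }
    }

    // C: after reading and processing a message from the queue, push it to subscribers.
    public class FeedPublisher
    {
        private readonly IHubContext _hub =
            GlobalHost.ConnectionManager.GetHubContext<LocationHub>();

        public void Publish(string channel, object locationUpdate)
        {
            // "locationUpdated" is a client-side callback name chosen for this sketch.
            _hub.Clients.Group(channel).locationUpdated(locationUpdate);
        }
    }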
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - could be a simple .NET service that reads from the persistence queue and stores the data.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/. It's not ASP.NET, but either can easily solve the issue of distributing real-time data to other users. Since user connections, subscriptions, and broadcasting to channels are built into each, that will make coding the rest super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates, then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system, depending on your requirements and number of users.
There may be solutions for .net that I am not aware of that do the same thing, but those are the 2 I know of.

How can I use caching to improve performance?

My scenario is: WebApp -> WCF Service -> EDMX -> Oracle DB
When I want to bind the grid, I fetch records from the Oracle DB using the EDMX, i.e. a LINQ query. But this degrades performance, as multiple layers sit between the WebApp and the Oracle DB. Can I use a caching mechanism to improve the performance? As far as I know, the cache is shared across the whole application, so if I update the cache, other users might receive wrong information. Can we use caching per user? Or is there any other way to improve the performance of the application?
Yes, you can definitely use caching techniques to improve performance. Generally speaking, caching is “application wide” (or it should be) and the same data is available to all users, but this really depends on your scenario and implementation. I don't see how adding the extra caching layer would degrade performance; it's a sound architecture and well worth the extra complexity.
ASP.NET Caching has a concept of "cache dependencies" which is a method to notify the caching mechanism that the underlying source has changed, and the cached data should be flushed and reloaded on the next request. ASP.NET has a built-in cache dependency for SQL Server, and a quick Google search revealed there’s probably also something you can use with Oracle.
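A rough sketch of that application-wide, cache-aside pattern with the built-in ASP.NET cache (the key, lifetime, and loader delegate are placeholders):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ReferenceDataCache
    {
        // Cache-aside: return the cached data if present, otherwise load and cache it.
        public static T GetOrLoad<T>(string key, Func<T> load, TimeSpan lifetime) where T : class
        {
            var cache = HttpRuntime.Cache;
            var cached = cache[key] as T;
            if (cached != null)
                return cached;

            var data = load();                 // e.g. the LINQ-to-Entities query against Oracle
            cache.Insert(key, data, null,
                         DateTime.UtcNow.Add(lifetime),
                         Cache.NoSlidingExpiration);
            return data;
        }
    }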
As Jakob mentioned, application-wide caching is a great way to improve performance. Generally, user-context-agnostic data (e.g. reference data) can be cached at the application level.
You can also cache user-context data by storing it in the user's session when they log in. The data is then cached for the duration of that user's session (HttpContext.Session).
Session data can be configured to be stored in the web application's process memory, in a state server (a separate Windows service), or in a SQL Server database, depending on the architecture and infrastructure.
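And a minimal per-user variant along those lines, caching a user's grid rows in session state (the key, element type, and loader are hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Web;

    public static class PerUserGridCache
    {
        // Caches the current user's grid rows in their session so the
        // WCF/EDMX/Oracle round trip happens once per session, not on every request.
        public static List<string> GetRows(HttpSessionStateBase session,
                                           Func<List<string>> loadFromService)
        {
            var rows = session["GridRows"] as List<string>;
            if (rows == null)
            {
                rows = loadFromService();
                session["GridRows"] = rows;
            }
            return rows;
        }
    }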
