Which kind of service is best to store data and calculate something (injected into a controller) between requests? - asp.net

I'm new to web development. I understand the dependency injection lifetimes: scoped, transient and singleton, but none of them meets my expectations. My controller has to calculate an arithmetic average: with each GET request I send the next number to include in the average. So I need a service (injected into the controller) that stores these numbers and calculates the average on every request.
If I use transient, a new instance of the service is created each time, so I can't keep a running sum of the values. Does anyone have any ideas?

In order to store the previous numbers (or the last result together with a weight) you can use cookies directly, or you can use TempData: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/app-state?view=aspnetcore-2.2#tempdata
I would not dismiss the session mechanism either, unless you run multiple server instances.

Related

Which way is best and worst to find and display customers living in New York? Stateful, stateless or singleton?

I am trying to get the number of customers who live in New York and display it on my page. But I have some doubt about which approach is best and which is worst: a stateless, stateful or singleton EJB? Any idea which way I should implement my application?
In this case it would be Stateless.
As long as there is no state across several invocations there is no need for a stateful bean.
A singleton could be a bottleneck: I would not use a method like getCustomersInNewYork() but rather getCustomers(City city), and a singleton is a single instance whose access usually has to be synchronized. It is possible to make the method concurrent and store a Map with results for each city, but consider that you then need to handle the concurrency yourself.
From my point of view I would keep the application stateless and let a stateless bean calculate the number of customers per city for each request.
If there is a need to improve performance because the same requests are repeated, I would use a cache like Infinispan to store the results, e.g. with an expiration, so the number is recalculated from time to time or dropped when not used for a while.
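For illustration, a minimal sketch of such a stateless bean; the Customer JPA entity, its city field, and the use of a plain String parameter are illustrative assumptions, not taken from the question:

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class CustomerService {

        @PersistenceContext
        private EntityManager em;

        // Generic per-city count: the bean keeps no state between calls,
        // so the container can pool instances and serve requests in parallel.
        public long countCustomers(String city) {
            return em.createQuery(
                    "SELECT COUNT(c) FROM Customer c WHERE c.city = :city",
                    Long.class)
                .setParameter("city", city)
                .getSingleResult();
        }
    }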
Make sense?

How to have a sequence number field for a parent in Google Datastore

I have an entity in Datastore that looks something like this:
import com.googlecode.objectify.Ref;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Parent;

@Entity
public class UserEntry {
    @Parent
    private Ref<User> parent;
    @Id
    private String id;
    private String seqNumber;
    private String name;
}
I am trying to maintain a sequence number for each user, i.e. the first entry for a user should have seqNumber 1, the next 2, and so on. What is the best way to achieve this?
Specifically:
1) How can I get the seqNumber of the last entry for a user?
2) How do I ensure, while writing, that another process has not written an entry for the user with the same seqNumber? I cannot make seqNumber the id of the entry.
I am afraid that the only way to achieve this is to use the Datastore's support for transactions. Note, however, that this solution comes with a considerable contention risk, and with a risk of skipping some values in the sequence when done incorrectly. Let me start with a naive approach to illustrate the basic idea.
A straightforward solution (NAIVE APPROACH):
You could create a dedicated entity, let's call it Sequence, which would have a single property, let's call it value. At the beginning, the value property would contain 0 (or 1, depending on where you want the sequence to start). Then, prior to creating any new UserEntry, you would have to execute a transaction which would:
obtain the current value,
increment value by one (within the same transaction).
The fact that you would be using transactions would prevent concurrent requests from obtaining the same sequential id. Note, however, that there would have to be exactly one "instance" of the Sequence entity kind stored in the datastore. Updating this entity too rapidly could lead to contention issues. Also, this approach uses non-idempotent transactions, which could lead to skipping some values from the sequence. A minimal sketch of this approach follows.
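Here is what the naive approach could look like with Objectify, since the question uses it; the helper name nextSeqNumber is illustrative, and Sequence would need to be registered with ObjectifyService like any other entity:

    import com.googlecode.objectify.annotation.Entity;
    import com.googlecode.objectify.annotation.Id;
    import static com.googlecode.objectify.ObjectifyService.ofy;

    @Entity
    class Sequence {
        @Id Long id;   // a single well-known id, e.g. 1L
        long value;    // the last sequence number handed out
    }

    class SequenceHelper {
        // Atomically claims the next sequence number. Objectify retries the
        // transaction on concurrent modification, so two requests can never
        // obtain the same value.
        static long nextSeqNumber() {
            return ofy().transact(() -> {
                Sequence seq = ofy().load().type(Sequence.class).id(1L).now();
                if (seq == null) {              // first ever call
                    seq = new Sequence();
                    seq.id = 1L;                // value starts at 0
                }
                seq.value++;                    // NOT idempotent, see below
                ofy().save().entity(seq).now();
                return seq.value;
            });
        }
    }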
Contention risk:
Beware that the straightforward solution described above would limit the throughput of your application. Your application wouldn't be able to handle creating more than one UserEntry per second for an extended period of time. This is because creating a UserEntry would require updating the Sequence entity, and "one write per second" is the approximate limit for writing to a single entity, see https://cloud.google.com/datastore/docs/concepts/limits
Danger of non-idempotent transactions:
Datastore can occasionally throw an error claiming that a transaction failed even though it did not, see https://cloud.google.com/datastore/docs/concepts/transactions. If you retried the transaction after such a "non-error", you would end up executing it twice. In your scenario, you would end up incrementing value twice for the creation of a single UserEntry, thus skipping one value from the sequence (or more, if you were extremely unlucky and got the "non-error" several times in a row).
This is why Google suggests making your transactions idempotent, meaning that executing the transaction a thousand times should have the same effect on the resulting state of the underlying data as executing it once. A good example of an idempotent transaction is renaming a user. If you tell someone to be renamed to "Carl" a thousand times, he will end up being called... well, "Carl". If, on the other hand, you tell our value counter to be incremented a thousand times... you get the picture.
Better solutions:
If you are OK with the above-mentioned risks of the straightforward solution, you are good to go. But here are some tips on how to avoid these issues:
Avoiding contention:
You could use a task queue to postpone the assignment of seqNumber. By making sure that the queue won't send requests more than once per second, you would easily avoid possible contention issues. The obvious downside of this solution is that there would be some delay before the seqNumber property is assigned to the newly created UserEntry. I don't know if this is acceptable for you. The enqueueing side is sketched below.
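For illustration, a sketch using the App Engine task queue API; the queue name, handler URL and parameter are assumptions, the actual seqNumber assignment would happen in the task handler, and the throttling itself is configured in queue.xml:

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;

    class SeqNumberQueue {
        // Enqueues the seqNumber assignment instead of doing it inline. With
        // the queue throttled in queue.xml (e.g. rate 1/s and
        // max-concurrent-requests 1), the Sequence entity is never written
        // more than once per second, staying under the contention limit.
        static void scheduleAssignment(String userEntryWebSafeKey) {
            Queue queue = QueueFactory.getQueue("seq-assign");  // illustrative name
            queue.add(TaskOptions.Builder
                    .withUrl("/tasks/assign-seq")               // illustrative handler
                    .param("entryKey", userEntryWebSafeKey));
        }
    }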
Design transactions to be idempotent:
Here is a simple modification which would make the transactions idempotent: instead of using the value property to hold the actual counter value, use it to store the id of the most recently created UserEntry. Then, when deciding what the seqNumber of the next UserEntry should be, retrieve the most recently added UserEntry, use its seqNumber to calculate the next value, and then update the Sequence entity (as many times as you want) telling it: your value property is now equal to "some particular id". A sketch of this variant follows.
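A sketch of this idempotent variant, again with Objectify. To keep the transaction within a single entity group it assumes a per-user sequence entity, here called UserSequence, whose value property now stores the id of the most recently created UserEntry; the entity, the helper, and the directly-accessible UserEntry fields are all illustrative assumptions:

    import com.googlecode.objectify.Key;
    import com.googlecode.objectify.Ref;
    import com.googlecode.objectify.annotation.Entity;
    import com.googlecode.objectify.annotation.Id;
    import com.googlecode.objectify.annotation.Parent;
    import static com.googlecode.objectify.ObjectifyService.ofy;

    @Entity
    class UserSequence {
        @Parent Key<User> parent;  // one sequence per user -> one entity group
        @Id Long id;               // fixed well-known id, e.g. 1L
        String value;              // id of the most recently created UserEntry
    }

    class EntryWriter {
        // Creates a UserEntry with the next seqNumber. Every write *sets*
        // values derived from reads, so re-executing the transaction after a
        // spurious failure produces the same result instead of skipping a
        // number. (UserEntry fields are assumed accessible for brevity.)
        static UserEntry addEntry(Key<User> user, String entryId, String name) {
            return ofy().transact(() -> {
                UserSequence seq = ofy().load()
                        .key(Key.create(user, UserSequence.class, 1L)).now();
                long next = 1;
                if (seq == null) {
                    seq = new UserSequence();
                    seq.parent = user;
                    seq.id = 1L;
                } else {
                    UserEntry last = ofy().load()
                            .key(Key.create(user, UserEntry.class, seq.value)).now();
                    next = Long.parseLong(last.seqNumber) + 1;
                }
                UserEntry entry = new UserEntry();
                entry.parent = Ref.create(user);
                entry.id = entryId;
                entry.seqNumber = Long.toString(next);
                entry.name = name;
                seq.value = entryId;  // "set", not "increment" -- idempotent
                ofy().save().entities(entry, seq).now();
                return entry;
            });
        }
    }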
Final note:
You are very correct in NOT using the seqNumber as the id of the entity. Using monotonically increasing values as entity ids is another well-known contention trap, see https://cloud.google.com/datastore/docs/best-practices
Hope this helps.

Periodic MongoDB operations with Meteor

I am building a voting system with Meteor where items can be up- or downvoted. To sort by voting scores more precisely later on, each item holds the fields dailyScore, monthlyScore and alltimeScore, which get incremented or decremented after a vote. I should also mention that both registered and unregistered users can vote every 24 hours (there are two "voters" arrays containing the userIds of the registered voters and the IP addresses of the unregistered voters, to keep track of who has voted and prevent them from voting more than once a day).
The problem I am facing right now is about finding a way to reliably reset:
1. the dailyScore every new day (let's say at UTC-0)
2. the monthlyScore every new month (in addition to 1., apparently)
3. the two voters arrays on a daily basis (at the same point in time as 1.)
My thoughts so far:
I could store a server-side global variable which always contains the lastUpdate date of any collection. By using the onConnection callback I can check if(currentTime.getDate() != lastUpdate.getDate()) on the server. If true, I can start the operations performing 1.-3. from above.
Using onConnection might be "too heavy".
Is some kind of cron job possible to perform 1.-3. every 24 hours at UTC-0?
I don't think an onLogin hook is sufficient, because unregistered users can vote as well.
Is there a common pattern or best practice for this? Performing periodic database operations (at a fixed 24-hour interval, or on the first onConnection of a new day) should be a well-known problem.
The percolate:synced-cron package works quite well for this kind of scheduled job.
Beware, though: SyncedCron probably won't work as expected on hosting providers that shut down app instances when they aren't receiving requests (like Heroku's free dyno tier or Meteor's free Galaxy tier).

Cost of creating a DbContext per web request in ASP.NET

I am using the Unit of Work and Repository patterns along with EF6 in my ASP.NET web application. A DbContext object is created and destroyed on every request.
I am thinking that creating a new DbContext on every request is costly (I have not done any performance benchmarking).
Can this cost of creating a DbContext on every request be ignored? Has anybody done some benchmarking?
Creating a new context is ridiculously cheap, on the order of about 137 ticks on average (0.0000137 seconds) in my application.
Hanging onto a context, on the other hand, can be incredibly expensive, so dispose of it often.
The more objects you query, the more entities end up being tracked in the context. Since entities are POCOs, Entity Framework has absolutely no way of knowing which ones you've modified except to examine every single one of them in the context and mark it accordingly.
Sure, once they're marked, it will only make database calls for the ones that need to be updated, but it's determining which ones need to be updated that is expensive when there are lots of entities being tracked, because it has to check all the POCOs against known values to see if they've changed.
This change tracking when calling save changes is so expensive that if you're just reading and updating one record at a time, you're better off disposing of the context after each record and creating a new one. The alternative, hanging onto the context, means every record you read results in a new entity in the context, and every time you call save changes it's slower by one entity.
And yes, it really is slower. If you're updating 10,000 entities for example, loading one at a time into the same context, the first save will only take about 30 ticks, but every subsequent one will take longer to the point where the last one will take over 30,000 ticks. In contrast, creating a new context each time will result in a consistent 30 ticks per update. In the end, because of the cumulative slow-down of hanging onto the context and all the tracked entities, disposing of and recreating the context before each commit ends up taking only 20% as long (1/5 the total time)!
That's why you should really only call save changes once on a context, ever, then dispose of it. If you're calling save changes more than once with a lot of entities in the context, you may not be using it correctly. The exceptional case, obviously, is when you're doing something transactional.
If you need to perform some transactional operation, then you need to manually open your own SqlConnection and either begin a transaction on it, or open it within a TransactionScope. Then, you can create your DbContext by passing it that same open connection. You can do that over and over, disposing of the DbContext object each time while leaving the connection open. Usually, DbContext handles opening and closing the connection for you, but if you pass it an already-open connection, it won't try to close it automatically.
That way, you treat the DbContext as just a helper for tracking object changes on an open connection. You create and destroy it as many times as you like on the same connection, where you can run your transaction. It's very important to understand what's going on under the hood.
Entity Framework is not thread safe, meaning you cannot use a context across more than one thread. IIS uses a thread for each request sent to the server. Given this, you have to have a context per request. Otherwise, you run a major risk of unexplained and seemingly random exceptions, and potentially of incorrect data being saved to the database.
Lastly, the context creation is not that expensive of an operation. If you are experiencing a slow application experience (not on first start, but after using the site), your issue probably lies somewhere else.

Spring: destroy temporary objects after a defined period of time

I have a Spring MVC form that uses a kaptcha to prove that the user is a human and not a machine. After the requested information is entered, a random value is generated which must be available in the servlet context for 120 seconds. The user can take that value and enter it as a key in another application (an add-on) in order to complete the registration process. That is the reason why the random value must have global scope and not session scope. If no request has been received from the other application during those 120 seconds, I would like to destroy that random string/object. I am looking for best practices here; one way I can think of implementing it in Spring is the following:
1) Create a collection that can hold multiple objects of that random value type.
2) Save the random value in the application context (not the session).
3) Define those objects with global-session scope.
4) When a request comes in, get the HttpServletRequest and extract the value sent by the user.
5) Iterate through the application context and, if such a value is found, proceed further; if not, end the process immediately.
Now, that might theoretically work, but how do I make sure that the value generated in step 1 is destroyed after 120 seconds if no request came in within the defined period of time, i.e. 120 seconds? I want to be sure that the memory is freed.
What would be the best practice to implement such a construct within Spring? Should I use a TaskScheduler or something else? Is the application context the best place to store that random value?
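A minimal sketch of the kind of construct being described (class and member names are illustrative, not prescribed): a singleton Spring bean, which is application-scoped by default and therefore shared across all sessions, that keeps the values in a concurrent map and schedules their removal 120 seconds after creation.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.springframework.stereotype.Component;

    @Component  // singleton by default: one instance for the whole application
    public class TokenStore {

        private final ConcurrentHashMap<String, Boolean> tokens =
                new ConcurrentHashMap<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public void put(String token) {
            tokens.put(token, Boolean.TRUE);
            // Drop the token after 120 s; removal is a no-op if it has
            // already been consumed, so the memory is freed either way.
            scheduler.schedule(() -> tokens.remove(token), 120, TimeUnit.SECONDS);
        }

        // Consumes the token at most once: true only while it is still live.
        public boolean consume(String token) {
            return tokens.remove(token) != null;
        }
    }

Spring's own TaskScheduler (e.g. a ThreadPoolTaskScheduler bean) could play the same role as the plain ScheduledExecutorService used here, with the advantage of being managed by the container.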
