When is the session abandoned during a typical session? [closed] - asp.net

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I'm trying to figure out an issue I'm having with Sitecore, and I'm wondering whether it comes down to Sitecore's reliance on Session.Abandon():
For performance reasons Sitecore only writes contact data to xDB (which is backed by MongoDB) when the session ends.
This logic seems somewhat flawed (unless I misunderstand how sessions are managed in Asp.Net).
At what point (without explicitly calling Session.Abandon()) is the session flushed in this model? That is, when will the Session_End event be triggered?
Can you guarantee that this logic will always be called, or can sessions be terminated without triggering an Abandon event, for example when the app pool is recycled?
I'm trying to figure this out because it would explain something I'm experiencing: the data is fine in session but is only written intermittently into MongoDB.
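For context, Session_End is wired up in Global.asax, and ASP.NET only raises it when session state is InProc; a sketch (with the caveat the question is getting at written into the comments):

```
// Sketch for context: Session_End in Global.asax.cs. ASP.NET raises this
// only when session state is InProc, on timeout or Session.Abandon();
// an app-pool recycle can end sessions without it ever firing.
protected void Session_End(object sender, EventArgs e)
{
    // Sitecore hooks the end of the session to flush tracked data to xDB,
    // so anything that kills the session without this event loses the flush.
}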

I think the strategy of building up the data in session and then flushing it to MongoDB is a good fit for xDB.
xDB is designed for high volume, so it makes sense for the data to be aggregated rather than constantly written to a database table. That is how DMS worked previously, and it doesn't scale very well.
Session end is, in my opinion, pretty reliable, and Sitecore gives you various options for persisting session state (InProc, MongoDB, SQL Server); MongoDB and SQL Server are the ones recommended for production environments. You can write contact data directly to MongoDB using the Contact Repository API, but for live capturing of data you should use the Tracker API. When using the Tracker API, as far as I am aware, the only way to get data into MongoDB is to flush the session.
If you need to flush the data to xDB for testing purposes, Session.Abandon() will work. I have a module here which you can use to create contacts and then flush the session, so you can check in MongoDB how reliable the session abandon is:
https://marketplace.sitecore.net/en/Modules/X/xDB_Contact_Creator.aspx
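For testing, force-flushing the current visit from a controller action might look like this (a sketch; the action name and response text are made up):

```
// Hypothetical MVC action that force-flushes the current visit to xDB
// by abandoning the session, as described above.
public ActionResult FlushVisit()
{
    Session.Abandon();   // triggers Sitecore's session-end pipeline
    return Content("Session abandoned; contact data should appear in xDB.");
}
```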

Related

Is the H2O R package safe to use for secured (patient) data? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
The H2O R package is a great resource for building predictive models, but I am concerned about its security.
Is it safe to use patient data with H2O in terms of security vulnerabilities?
After data ingestion into H2O-3, the data lives in memory inside the Java server process. Once the H2O process is stopped, the in-memory data vanishes.
Probably the main thing to be aware of is that your data is not sent to a SaaS cloud service or anything like that. The H2O-3 Java instance itself handles your data, and you can create models in a totally air-gapped, no-internet environment.
So the short answer is: it's perfectly safe if you know what threats you are trying to secure against and do the right things to avoid the relevant vulnerabilities (including data vulnerabilities like leaking PII, and software vulnerabilities like failing to enable passwords or SSL).
You can read about how to secure H2O instances and the corresponding R client here:
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/security.html
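As a sketch of what the R client side of a secured setup might look like (argument names as documented in the security guide linked above; the host and credentials are placeholders):

```r
library(h2o)

# Connect to an H2O-3 cluster that was started with SSL and
# authentication enabled (see the security docs linked above).
h2o.init(ip       = "h2o.internal.example",      # placeholder host
         port     = 54321,
         https    = TRUE,                        # encrypt R <-> H2O traffic
         username = "analyst",                   # placeholder credentials
         password = Sys.getenv("H2O_PASSWORD"))  # keep secrets out of scripts
```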
(Note if you have a high-value use case and want detailed personal help with this kind of thing, H2O.ai the company offers paid enterprise support.)

ASP.Net MVC and Database Connections [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I currently have an ASP.NET MVC application that uses a static class to connect to the database. In some tests, I noticed that when I use the application from several different sessions, it gets extremely slow. With only one session it works acceptably, but from three sessions on, usage becomes unfeasible.
The static connection is created in Application_Start in Global.asax. I believe the slowness is due to all the sessions competing for the same connection, right?
Given this, I decided to change how it works. I have two approaches in mind, but I would like an opinion on which would be better:
1) Establish one connection per session, started in Global.asax. However, I am afraid that, given certain actions the application executes almost simultaneously, this approach will also become slow at some point.
2) Establish a connection for each query action against the database: instantiate the connection, open it, execute the action, and close it. But again, given the high number of actions the application performs when loading some pages, I'm afraid of exhausting the connection pool by working this way.
Can you help me? Do you see another approach that could be used?
Currently we are using ADO.NET; in some tests we did with NHibernate, there was gigantic slowness even with just one user.
Thanks for your attention.
(Translated Post)
You can try to use Entity Framework.
http://www.entityframeworktutorial.net/what-is-entityframework.aspx
https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/creating-an-entity-framework-data-model-for-an-asp-net-mvc-application
Assuming that you already have the application (ASP.NET MVC) and the database created, open Visual Studio, select the Models folder > Add > New Item, and then choose ADO.NET Entity Data Model. After that you will see the Entity Data Model Wizard. In that window choose the first option, EF Designer from Database. Click Next, add the DB connection parameters (DB server, database name), click Next, choose the objects from the database you want to use (tables, stored procedures), and click Finish.
Hope it helps!
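Once the model is generated, queries go through the context, which opens and disposes its own connection per unit of work, so ADO.NET connection pooling handles the per-request churn the question worries about. A sketch, assuming the wizard produced a context called MyAppEntities with an Employees set (both names are illustrative):

```
// Hypothetical usage of the wizard-generated context; the context
// opens a pooled connection on demand and releases it on Dispose.
using (var db = new MyAppEntities())
{
    var active = db.Employees
                   .Where(e => e.Active)
                   .ToList();   // connection returns to the pool here
}
```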

Multiple simultaneous threads using SQLite in R [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
My question is basically whether it is safe, when using parallel processing in R, to have multiple threads accessing an SQLite database simultaneously.
I understand that SQLite is a file-level database, so every connection gets access to the whole database. It is therefore possible to have multiple connections open simultaneously (e.g., via the sqlite3 front end and, in R, via RSQLite's dbConnect() or dplyr's src_sqlite()). I guess this is OK as long as there is a single user who can ensure that commands submitted one way are completed before other commands are submitted.
But with multithreading, it would seem possible that one thread might submit a command to an SQLite db while a command submitted by another thread might not have completed.
Does the underlying SQLite engine serialize received commands so that it is assured that one command is completed before the next one is processed, so as to avoid creating an inconsistent status of the database?
I have read the SQLite documentation on locking and "ACID," and as I understand this documentation, the answer appears to be "Yes."
But I want to be sure that I have understood things correctly.
Another question is whether it is safe to have separate threads submitting commands simultaneously that actually change the database.
Since one can't control the exact timing by which the two threads submit their commands, I assume that using parallel processes that might change an SQLite data table in an inconsistent way would not be a good idea -- e.g., having one thread insert a record into a table and another thread doing a SELECT on the same table.
Reading the database is okay, but writing to it locks the database for at least a few milliseconds. If you try to read while it is writing (or write while it is writing), an error is returned, which you can use to decide whether to retry the read/write operation. If this is a relatively simple process, you should be fine with SQLite3. (Source)

Life cycle of business objects in a web application [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
This is quite a general question regarding software design in a web application meant to run on a company intranet. The basic design is simple: I have a backend (PHP) on a server, where a database (MySQL) holds all the data for the application. The application is used in a company and reflects employee-related tasks. The frontend web page (HTML, CSS) shows a UI for manipulating employee data, etc.
A basic business object here is the employee, represented by a PHP class. When the frontend asks to show data for an employee, the query goes to the PHP class, which loads the employee data from the database, instantiates an employee object, and sends the employee data back to the client. Further queries from the client involve logic that is done in the employee object, so I wanted to keep the just-instantiated employee object in memory and use it again. I wanted to avoid the database access to load the employee data and instantiating the object every time the client makes a request.
So I created an object manager in PHP on the server side which should store already-instantiated business objects and provide them on demand, or load objects at the moment they are needed (lazy loading). But I realized that (of course) the server does not keep instances in memory between different HTTP requests from the client, so my object manager does not work.
In a desktop application this would work, but not in a web application. Is this a bad approach for web application design? Is it normal for business objects to be loaded and instantiated on every client request?
Another possibility is to define and instantiate employee objects in a JavaScript class and do this logic on the client side, where objects can be kept in memory. But I thought it better to do business logic on the server so as not to stress the client too much.
I wanted to avoid the database access to load the employee data and instantiating the object every time the client makes a request.
You are headed for a great deal of complexity if this is really a requirement in your environment. I would benchmark the actual cost of just re-reading the data and judge if the added complexity is really needed.
You are trying to implement an object cache. If you solve the issue of keeping the object in memory between HTTP requests (it can be done), you will soon run into questions of concurrency (multiple clients acting on objects that you have cached) and of transactionality (changes to the object cache must be written to the database in a transactionally consistent manner), among other issues.
If you really need this functionality, look into existing object cache implementations.
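As an illustration of that advice, a shared-memory cache such as APCu keeps a decoded object across requests within one PHP process pool. This is only a sketch (fetchEmployeeFromDb() is an assumed helper, and the concurrency and invalidation caveats above still apply):

```php
<?php
// Sketch: per-server object cache for employee records using APCu.
function loadEmployee(int $id): array
{
    $key = "employee:$id";
    $hit = false;
    $row = apcu_fetch($key, $hit);
    if ($hit) {
        return $row;                    // served from shared memory
    }
    $row = fetchEmployeeFromDb($id);    // assumed DB helper, not shown
    apcu_store($key, $row, 300);        // cache for 5 minutes
    return $row;
}
```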

What are the common issues and best practices when using ASP.NET session state? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
For example, I make extensive use of the session in my ASP.NET application, but I have heard somewhere that objects stored in session can be removed by the system when server memory runs low. Is this true? Is there any session 'callback' functionality to let you re-populate scavenged objects?
More generally, what other potential problems of using session state are there, and what are the suggested best practices for dealing with them?
No matter which precautions you take, always assume your Session may disappear and double-check:

VB:
Dim sessionObj As Object = CType(Session("SessionKey"), Object)
If sessionObj Is Nothing Then
    sessionObj = ReCreateObj()
    Session("SessionKey") = sessionObj
End If

C#:
object sessionObj = Session["SessionKey"];
if (sessionObj == null)
{
    sessionObj = ReCreateObj();
    Session["SessionKey"] = sessionObj;
}
A Session wrapper works well for this, so you don't have to repeat the check everywhere you access your Session vars.
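A minimal version of such a wrapper might look like this (the class and key names are illustrative):

```
// Hypothetical Session wrapper: centralizes the null-check/recreate
// pattern so callers never touch Session keys directly.
public static class SessionStore
{
    public static T GetOrCreate<T>(string key, Func<T> recreate) where T : class
    {
        var value = HttpContext.Current.Session[key] as T;
        if (value == null)
        {
            value = recreate();
            HttpContext.Current.Session[key] = value;
        }
        return value;
    }
}

// Usage: var cart = SessionStore.GetOrCreate("Cart", () => new Cart());
```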
I've had good luck with custom Session-State Providers. A couple useful tweaks include:
When using a database-backed Session, fetch all of your Session vars at the beginning of the request and store them all at the end versus fetch per access/store per set. This is useful if you're using more than a couple Session vars as it cuts down on round trips to an external server.
Implement a two-step memory/database store. On save, store your var in memory on the local server and save to the database as well. When accessed, check memory first (if you find it here, you save a network hop), then the database.
I'd definitely see whether one of the three standard Session-State modes works before implementing something you'll have to support. Custom Session providers are useful once you know more about your app's quirky needs.
The session is also erased every time the application pool is restarted. For example, create a page that sets a session variable and shows it on the page; then update your BIN, App_Code, Global or Local Resources folder, and you will see that the session is erased.
Shared hosting environments are trickier still, because they usually lock the session timeout to 20 minutes, no matter what Session.Timeout value you set.
For these and other reasons I use SQL Server session state. The web application is a little slower, because all session data lives in SQL Server instead of in memory, but you have much more control over it and can set the session timeout to whatever you want. That is very good in my case, because I want users to keep working while I'm updating the application (it was very bad that every time I updated the App_Code folder, all the users would lose what they were working on and had to log in again), and I want to extend the shared-hosting 20 minutes to 45.
Read more about SQL Session State
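The switch itself is a config change; a sketch of the relevant web.config fragment (the connection string is a placeholder, and the session-state database must be provisioned first with aspnet_regsql.exe):

```xml
<system.web>
  <!-- Store session state in SQL Server; timeout is in minutes. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MYSERVER;Integrated Security=True"
                timeout="45" />
</system.web>
```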
Sorry, I don't know about the removal of items from session state, but in response to your general question: the most important consideration is whether you anticipate running your web app on just one web server machine or on many.
Typically, you keep the session state in the memory of one single machine. The load balancer that fronts your multiple web servers must then be configured to make things "sticky", so that the same user keeps being routed to the same machine.
Alternatively, you could store session state in a central location such as SQL Server, but then you pay a performance cost for the network and I/O interaction with SQL Server; you give up the speed of local, in-memory state.
Since the session timeout is 20 minutes by default, if you need more than that but your hosting company won't let you override it, you would have to have some Ajax or hidden-iframe JavaScript code running in your web pages that pings the server every (less than) 20 minutes to keep the session alive.
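A sketch of that keep-alive trick (the endpoint name is hypothetical; the helper just picks a ping interval safely under the timeout):

```javascript
// Compute a keep-alive ping interval (ms) from the server's session
// timeout in minutes: ping at half the timeout so a slow request still
// reaches the server before the session expires.
function pingIntervalMs(timeoutMinutes) {
  return (timeoutMinutes / 2) * 60 * 1000;
}

// In the page (hypothetical endpoint name):
//   setInterval(function () { fetch('/KeepAlive.ashx'); },
//               pingIntervalMs(20));   // i.e., ping every 10 minutes
```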
