Aside from creating SQL Server tables, is there a lightweight technology or method for adding persistent data to an ASP.NET website that works with LINQ and preferably doesn't require much in terms of installing packages into a project or learning a large framework?
Session state is one option, but only if it is run out of process and configured for SQL Server, which doesn't fit my needs.
Options to satisfy the question:
1. Session State -> Only if configured for out-of-process storage on SQL Server.
2. NoSQL database solutions -> MongoDB, RavenDB, SQLite (sqlite.org).
3. SQL Server key/value singleton -> Create a table and store key/value pairs as a single entry, or create a generic key/value table. Keys will need to be unique, and values will need to be scalars only, or multiple values packed into one value using a delimiter. A generic key/value table will need to store all keys as strings and rely on type conversion, either implicit in the program or recorded in an extra column. See:
http://en.wikipedia.org/wiki/Entity-attribute-value_model
How to design a product table for many kinds of product where each product has many parameters
4. Create an XML file or other flat file and store/write key/values to it. May require special permissions.
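As a minimal sketch of option #3 with LINQ to SQL (the table and column names AppSettings, SettingKey, SettingValue, ValueType are illustrative, not prescribed):

```csharp
// Illustrative DDL for the generic key/value table:
//   CREATE TABLE AppSettings (
//       SettingKey   NVARCHAR(128) NOT NULL PRIMARY KEY,
//       SettingValue NVARCHAR(MAX) NULL,
//       ValueType    NVARCHAR(32)  NULL  -- optional type hint for conversion
//   );

using System;
using System.Linq;
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "AppSettings")]
public class AppSetting
{
    [Column(IsPrimaryKey = true)] public string SettingKey { get; set; }
    [Column] public string SettingValue { get; set; }
    [Column] public string ValueType { get; set; }  // e.g. "int", "datetime"
}

public static class SettingsStore
{
    // Reads one scalar value; all values are stored as strings,
    // so the caller states the target type and we convert.
    public static T Get<T>(DataContext db, string key)
    {
        var row = db.GetTable<AppSetting>()
                    .SingleOrDefault(s => s.SettingKey == key);
        return row == null
            ? default(T)
            : (T)Convert.ChangeType(row.SettingValue, typeof(T));
    }
}
```

A matching Set<T> would simply write value.ToString() back to the same row; the type-conversion column is only needed if consumers cannot know the type from the key.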
I will likely go with option #3 because it best satisfies my current requirements, but I will explore the NoSQL solutions for future projects.
Related
Background info before question
I use session state a lot to store my complex objects, and I am using at most about 8 tables. Against those 8 tables I have about 25 stored procedures that join users based on user id and some key values that the user selects. All this is done on a SQL Server database.
zip codes spatial values
male or female
has pictures
approved profile
registered account(paying for services)
I store images on the file system of my application server and store the path in my db table.
Use Case
Dating website; mostly unique payloads, such as searching on certain criteria and updating and fetching a personal profile with photos. I am using ASP.NET MVC, and this is a website only (separate web pages with responsive designs for other devices).
Question
Can I just use Redis as my primary data store instead of a SQL Server database, based on my use case?
Key Points
I don't plan on having more than 10-12 total tables in the future. The data inputs are mostly strings. When I want to persist a complex object like profile information and image paths, I use session state. I love what I read about the speed of Redis, and I see it as counterproductive to duplicate updates to both Redis and a DB if I stack them.
I don't think you can easily replace your database with Redis, because you would be missing things like foreign keys, indexes, and constraints (Redis is a NoSQL store, so you don't have any relational features). You would end up building those yourself, especially the indexes for your 25 stored procedures, which can become pretty complex. Also, the fact that you have 25 stored procedures for your 8 tables tells me you have quite some logic here, which will be even harder to move to Redis or your application layer.
To be sure, adding Redis to your stack is not easy and will make your application more complex, so you must weigh the benefits against the drawbacks. And because Redis keeps everything in memory, it is best suited as a cache layer.
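If Redis ends up as a cache in front of SQL Server rather than the primary store, the usual pattern is cache-aside. A minimal sketch using the StackExchange.Redis client (the key convention and the 10-minute TTL are assumptions, not requirements):

```csharp
using System;
using StackExchange.Redis;

public class ProfileCache
{
    private readonly IDatabase _redis;

    public ProfileCache(ConnectionMultiplexer mux)
    {
        _redis = mux.GetDatabase();
    }

    // Cache-aside: try Redis first, fall back to SQL Server, then populate Redis.
    public string GetProfileJson(int userId, Func<int, string> loadFromSql)
    {
        string key = "profile:" + userId;          // assumed key convention
        RedisValue cached = _redis.StringGet(key);
        if (cached.HasValue)
            return cached;                         // cache hit

        string fresh = loadFromSql(userId);        // authoritative read from SQL Server
        _redis.StringSet(key, fresh, TimeSpan.FromMinutes(10)); // assumed TTL
        return fresh;
    }
}
```

With this shape, SQL Server remains the system of record and a Redis flush costs only latency, not data, which sidesteps the double-write concern raised above.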
I have developed a CRM for my company. Next, I would like to take that system and make it available for others to use in a hosted format, much like salesforce.com. The question is what type of database structure I should use. I see two options:
Option 1. Each time a company signs up, I clone the master database for them.
The disadvantage of this is that I could end up with thousands of databases. That's a lot of databases to back up every night. My CRM uses cron jobs for maintenance, and those jobs would have to run on all databases.
Each time I upgrade the system with a new feature, and need to add a new column to the database, I will have to add that column to thousands of databases.
Option 2. Use only one database.
Add a "CompanyID" column at the beginning of EVERY table, and add "AND companyid={companyid}" to EVERY SQL statement.
The advantage of this method is the simplicity of having only one database: just one database to back up each night, just one database to update when needed.
The disadvantage: what if I get 1,000 companies signing up, and each wants to store data on 100,000 leads? That's 100,000,000 rows in the leads table, which worries me.
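The single-database approach can be sketched as follows; the table and column names are illustrative, and the point is that the tenant predicate travels as a parameter rather than by string concatenation:

```csharp
using System.Data.SqlClient;

public static class LeadQueries
{
    // Every query against a shared table carries the tenant predicate.
    public static SqlCommand LeadsForCompany(SqlConnection conn, int companyId)
    {
        var cmd = new SqlCommand(
            "SELECT LeadID, Name, CreatedOn " +
            "FROM Leads " +
            "WHERE CompanyID = @companyId",   // tenant filter on every statement
            conn);
        cmd.Parameters.AddWithValue("@companyId", companyId);
        return cmd;
    }
}
```

On the row-count worry: if the index on Leads leads with CompanyID (clustered or covering), each tenant's queries touch only that tenant's slice, so 100,000,000 total rows behave much closer to 100,000-row lookups per company.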
Does anyone know how the online hosted CRMs like salesforce.com do this?
Thanks
Wouldn't you clone a table structure for each new database ID? All schemas would be archived in the master base, with each indexed client clone hash-verified to access its specific schema, run through a host machine at the front end of the master system that directs requests in the primary role. Internal access would be batched to groups of read/write slave systems. Obviously, set RAID configurations to copy both in real time and on a schedule, and balance request load against system resources. That way you separate the security exposure from the UI and from the connection to the retention schema. Simplified structures and reduced policy requests seem to cut down request overhead in query processing - or, put simply, a man-in-the-middle approach from the inside out.
1) Splitting your data into smaller groups in a safe, unthinking way (such as one database per grouping) is almost always best if you want to scale. In this case, unless for some reason you want to query across companies, keeping them in separate databases is best.
2) If you are updating all of the databases by hand, you are doing something wrong if you want to scale. You'd want to automate the process.
3) Ultimately, salesforce.com uses this as the basis of their DB infrastructure:
http://blog.database.com/blog/2011/08/30/database-com-is-open-for-business/
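Point 2 above, automating schema changes, can be as simple as iterating the tenant databases and running the same migration script against each. A rough sketch (the connection-string format and the example ALTER statement are placeholders):

```csharp
using System.Data.SqlClient;

public static class TenantMigrator
{
    // Applies one migration script to every tenant database in turn.
    public static void ApplyMigration(string[] tenantDbNames, string migrationSql)
    {
        foreach (string dbName in tenantDbNames)
        {
            // Same script, one tenant database at a time.
            var connStr = "Server=.;Database=" + dbName + ";Integrated Security=true";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(migrationSql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
                // e.g. migrationSql = "ALTER TABLE Leads ADD Source NVARCHAR(50) NULL"
            }
        }
    }
}
```

In practice you would also record which migrations each tenant database has already received, so a failed run can resume rather than restart, but the core loop is no more than this.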
I am planning on using sequential guids as primary keys/uuids as detailed in the post below
What are the performance improvement of Sequential Guid over standard Guid?
I am wondering if there are any gotchas as far as generating these guids across multiple web servers in a web farm environment. I suspect the chances of collision are impossibly low, but the fact that the MAC address of the web server and a timestamp would doubtless be involved in generating these guids gives me pause. I wonder whether, on a high-traffic website, the ordering could get messed up and the benefit of using sequential guids might be lost.
Please let me know what your experience is.
For what it is worth, my environment is ASP.NET 3.5 and IIS 7, using Oracle 11g.
Incidentally, what data type should I use for guids in Oracle? I know that SQL Server has "uniqueidentifier".
Thanks for your advice
-Venu
Because I was the creator of the post you're referring to, I can answer it.
We're using the C# code shown in the post (without the modification to the ordering detailed in one of the replies, which I feel could improve performance a little more) in web farms with 2 to 8 application servers, and we have never had concurrency problems. I believe the sequential-GUID function implemented in the Windows core DLLs already takes care of creating different guids on different machines.
In database terms, having different machines insert different guids means that each application server in the web farm writes data that resides in a specific region of the database (i.e. one application server writes guids starting with 12345 and another writes guids starting with 62373), so index updates still work efficiently, because page splits happen infrequently (or never).
So, from my experience, no specific problem arises if you use the same strategy to generate guids that I outlined in my original message, even in a web farm environment, as long as you use the proper method to generate them.
I would avoid generating guids in application code, and I would also avoid generating them in a centralized manner.
Regarding the data type, we used char(36) because we like to waste a lot of space! Joking aside, we chose a long, verbose format because having the data in a readable form greatly eases maintenance, but you can use Oracle's GUID type, or simply a RAW(16) column (they're basically the same), and save 20 bytes per row. To make browsing and editing the data easier, you can provide your customers a pair of functions to encode and decode raw guid data so that the textual representation of the guid is shown.
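For reference, a COMB-style generator along the lines the answer describes overwrites part of a random GUID with a timestamp, so values generated close in time sort close together. This is a sketch of the technique, not the exact code from the referenced post:

```csharp
using System;

public static class SequentialGuid
{
    // COMB-style: a random GUID whose tail is overwritten with the current
    // timestamp, so successive inserts land near each other in the index.
    public static Guid NewComb()
    {
        byte[] guidBytes = Guid.NewGuid().ToByteArray();
        long ticks = DateTime.UtcNow.Ticks;
        byte[] timeBytes = BitConverter.GetBytes(ticks);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(timeBytes); // most-significant byte first

        // The last 6 bytes are what SQL Server's uniqueidentifier ordering
        // compares first. For Oracle char(36)/RAW(16) storage, which compares
        // bytes left to right, the timestamp would go at the FRONT instead.
        Array.Copy(timeBytes, 2, guidBytes, 10, 6);
        return new Guid(guidBytes);
    }
}
```

The random portion keeps cross-machine uniqueness, while the timestamp portion gives the near-sequential index behavior; where in the 16 bytes the timestamp belongs depends entirely on how the target column type orders its bytes.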
Guid for Oracle
You might want to take a look at how NHibernate's Guid Comb generator works. I have never heard of a collision.
To ensure that you have unique GUIDs, only one server can be the creator of said GUIDs.
If memory serves, Oracle doesn't support the creation of MS's "GUID for OLE", but you should be able to generate something highly similar using this: RAWTOHEX(SYS_GUID())
Alternatively, you could have a separate application residing on a single server that is solely responsible for generating GUIDs (for example, call a web service located at a specific server whose sole purpose is to generate and return GUIDs).
Finally, GUIDs are not sequential. Even if you generate one right after another, they won't increment in the same fashion as an integer (i.e. the last digit won't go from C to D in one step). Sequencing requires integers or some other numeric data type.
I'm looking for a good way to visualize ASP.NET session state data stored in SQL server, preferably without creating a throwaway .aspx page. Is there a good way to get a list of the keys (and serialized data, if possible) directly from SQL server?
Ideally, I'd like to run some T-SQL commands directly against the database to get a list of session keys that have been stored for a given session ID. It would be nice to see the serialized data for each key as well.
Can you elaborate slightly? Is there no reference to an HttpContext available (you can use this from backend code as well, FYI), which prevents you from utilizing the built-in serialization and keys dictionary?
EDIT, in response to your update: I believe the ASPState database creates and destroys temporary tables as needed; it does not have permanent tables you can query. Take a look at the stored procedures and you should find one along the lines of "TempGetItem"; you can either use this sproc directly or examine its source for more insight.
When you run an ASP.NET application with SQL Server session mode, it creates two tables, dbo.ASPStateTempApplications and dbo.ASPStateTempSessions. You can find your application in the first table and use it to query open sessions from the second. The ASPStateTempSessions table stores session data in two columns, SessionItemShort and SessionItemLong. All session information is binary; you need to know the types of the objects stored in session if you want to deserialize them and view the contents.
I have tried this recently and it works fine. In fact, for a complex application it is worth having some tools to view and parse session data, to make sure we don't store unwanted objects or leave them in the database for long - that has the potential to slow things down.
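Building on the answer above, a quick look at live sessions can be done from C#, or the embedded SELECT can be run as-is in a query window. Column names here follow the standard ASPState schema; the connection string is a placeholder:

```csharp
using System;
using System.Data.SqlClient;

public static class SessionInspector
{
    // Lists each stored session with its expiry and serialized-payload sizes.
    public static void DumpSessions(string aspStateConnStr)
    {
        const string sql =
            @"SELECT SessionId, Created, Expires,
                     DATALENGTH(SessionItemShort) AS ShortBytes,
                     DATALENGTH(SessionItemLong)  AS LongBytes
              FROM dbo.ASPStateTempSessions";

        using (var conn = new SqlConnection(aspStateConnStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0}  expires {1}  ({2}/{3} bytes)",
                        reader["SessionId"], reader["Expires"],
                        reader["ShortBytes"], reader["LongBytes"]);
        }
    }
}
```

This shows which sessions exist and how large each payload is; actually decoding SessionItemShort/SessionItemLong still requires deserializing the binary blobs with knowledge of the stored types, as the answer notes.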
Does a new SessionFactory and Session object have to be created for each database? I have a data store for my application data and a separate data store for my employee security, which is used to validate users. Do I have to create a new SessionFactory and Session object for calls to the two different databases?
OK, so this doesn't answer your question directly, but it might offer insight as to why you should create multiple session objects, one for each datastore.
This article explains how you can implement a thread-safe lazy singleton for each type of session you need, so that you have only one session per datastore but it's shared across the entire application. So at most you're only ever going to have two session objects.
To directly answer your question, however: you will need one session object per database.
General case
The general-case answer is that you do not necessarily need a separate session factory, but you do at least need separate sessions.
You may use a single session factory via the OpenSession overload that takes an opened connection as an argument, allowing you to switch databases for the sessions that require it.
This has some drawbacks, such as the lack of connection auto-releasing after transactions, the disabling of the second-level cache, and so on. In my opinion, it is better to have two session factories than to supply your own connection on session opening.
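A sketch of the two-factory setup recommended above (the connection-string names are placeholders; this assumes a hibernate.cfg.xml carries the shared settings):

```csharp
using NHibernate;
using NHibernate.Cfg;

public static class SessionFactories
{
    // One factory per database, each built once and shared application-wide.
    public static readonly ISessionFactory AppData =
        BuildFactory("AppDataConnectionString");   // placeholder name
    public static readonly ISessionFactory Security =
        BuildFactory("SecurityConnectionString");  // placeholder name

    private static ISessionFactory BuildFactory(string connStringName)
    {
        var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
        // Point this factory at a named connection string from web.config.
        cfg.SetProperty("connection.connection_string_name", connStringName);
        return cfg.BuildSessionFactory();
    }
}
```

Usage is then the normal pattern per datastore, e.g. using (var session = SessionFactories.AppData.OpenSession()) { ... }, and each factory keeps its own second-level cache and connection handling intact.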
Database specific cases
Depending on the database server you use, you may be able to use a single connection string for accessing both databases with NHibernate. If you can, then you can use a single session factory and the same session for accessing your entities split between the two databases.
Simplest case
Using SQL Server, your two databases may be on the same SQL Server instance. In that case, you can use a single connection string and adjust the catalog attribute on your <class> mappings to tell NHibernate which database each table is found in. (schema can be used too, by appending a dot; it has been available in NHibernate for longer, so with an old version you may have only schema.)
Of course, the connection credentials must be valid for accessing both databases.
Other cases
Still using SQL Server, if the second database is on another server, you may use a linked server. You would again adjust the catalog attribute on the classes that require it, specifying the appropriate linkedServerName.DbName.
Maybe other databases could have similar solutions.