I am planning on using sequential guids as primary keys/uuids as detailed in the post below
What are the performance improvement of Sequential Guid over standard Guid?
I am wondering if there are any gotchas as far as generating these guids across multiple web servers in a web farm environment. I suspect the chances of collision are impossibly low, but the fact that the MAC address of the web server and a timestamp are doubtless involved in generating these guids gives me pause. I wonder whether, on a high-traffic website, the ordering could get messed up and the benefit of using sequential guids might be lost.
Please let me know what your experience is.
For what it is worth, my environment is ASP.NET 3.5, IIS 7 using Oracle 11g.
Incidentally, what data type should I use for guids in Oracle? I know that SQL Server has "uniqueidentifier".
Thanks for your advice
-Venu
Because I was the author of the post you're referring to, I can answer it.
We're using the C# code shown in that post (without the ordering modification described in one of the replies, which I feel could improve performance a little more) in web farms of anywhere from 2 to 8 application servers, and we have never had concurrency problems. I believe the sequential GUID function implemented in the Windows core DLLs already takes care of creating different GUIDs on different machines.
For database operations, having different machines insert different GUIDs means that each application server in the web farm writes data that ends up in a specific region of the database (i.e. one application server writes GUIDs starting with 12345 while another writes GUIDs starting with 62373), so index updates remain efficient because page splits happen rarely, or never.
So, in my experience, no specific problems arise from the GUID-generation strategy I outlined in my original message, even in a web farm environment, as long as you use the proper method to generate the GUIDs.
I would avoid generating GUIDs in application code, and I would also avoid generating them in a centralized manner.
Regarding the data type, we used char(36) because we like to waste a lot of space! Joking aside, we chose the long, verbose representation because having the data in a readable format greatly eases maintenance, but you can use the Oracle GUID type or simply a RAW(16) data type (they're basically the same) and save 20 bytes per row. To make browsing and editing the data easier, you can give your customers a couple of functions to encode and decode the raw GUID data so that the textual representation of the GUID is shown.
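For illustration, here is a minimal sketch of what such encode/decode helpers might look like on the application side (the class and method names are made up for this example, not taken from the original post):

using System;

// Hypothetical helpers for moving between .NET Guid values and the
// byte[]/hex representations that an Oracle RAW(16) column expects.
public static class OracleGuidHelper
{
    // Guid -> 16 raw bytes suitable for binding to a RAW(16) parameter.
    public static byte[] ToRaw(Guid value)
    {
        return value.ToByteArray();
    }

    // 16 raw bytes read from a RAW(16) column -> Guid.
    public static Guid FromRaw(byte[] raw)
    {
        if (raw == null || raw.Length != 16)
            throw new ArgumentException("Expected exactly 16 bytes.", "raw");
        return new Guid(raw);
    }

    // Guid -> 32-character hex string, the same textual form RAWTOHEX produces.
    public static string ToHex(Guid value)
    {
        return BitConverter.ToString(value.ToByteArray()).Replace("-", "");
    }
}

Note that Guid.ToByteArray() uses a mixed-endian layout, so the hex text will not match Guid.ToString() byte for byte; as long as you encode and decode with the same helpers the round trip stays consistent.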
Guid for Oracle
You might want to take a look at how NHibernate's Guid Comb generator works. I have never heard of a collision.
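For reference, the comb idea can be sketched in a few lines of C#. This is a simplified illustration of the general approach (overwriting the trailing bytes of a random GUID with a timestamp so values sort roughly in creation order), not NHibernate's exact implementation:

using System;

public static class CombGuid
{
    // Simplified "comb" GUID: random bytes up front, a timestamp in the
    // trailing bytes so values generated over time sort roughly in order.
    public static Guid NewComb()
    {
        byte[] guidBytes = Guid.NewGuid().ToByteArray();
        byte[] timeBytes = BitConverter.GetBytes(DateTime.UtcNow.Ticks);

        if (BitConverter.IsLittleEndian)
            Array.Reverse(timeBytes);   // most significant byte first

        // Overwrite the last 6 bytes of the GUID with part of the timestamp.
        Array.Copy(timeBytes, 2, guidBytes, 10, 6);
        return new Guid(guidBytes);
    }
}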
To ensure that you have unique GUIDs, only one server can be the creator of said GUIDs.
If memory serves, Oracle doesn't support the creation of MS's "Guid for OLE", but you should be able to generate something highly similar using this: RAWTOHEX(SYS_GUID())
Alternatively, you could have a separate application residing on a single server that is solely responsible for generating GUIDs (for example, call a web service located at a specific server whose sole purpose is to generate and return GUIDs).
Finally, GUIDs are not sequential. Even if you generate one right after another, they won't increment in the same fashion as an integer (i.e. the last digit won't go from C to D in one step). Sequencing requires integers or some other numeric data type.
My question is regarding aggregated data that needs fast access across several servers on Amazon EC2. In an ASP.NET application, I would probably store that data in an Application["somevar"] variable so it can be accessed quickly (in memory) by all users.
The problem starts when I want that aggregated data to be gathered and to have the same value on all servers. If I choose to deploy two servers, a user might be sending data to a different server on every request (the servers sit behind a load balancer or Elastic Beanstalk), so if, for example, I count the number of times the user asked for the page, each server's Application variable will have a different value.
For example:
Server 1:
Application["counter1"] = 120
Server 2:
Application["counter1"] = 130
What I want is a variable that is the same on all servers. The reason I want the data in an Application-like variable is that I want it in memory for fast access; later I might write that data to the database.
What I want to know is how I can achieve this. I thought about using Amazon ElastiCache: even if I have 10 servers under the load balancer, I can access the ElastiCache value via its API, and it doesn't matter from which server I access the memcached variable, it will get/set the same value, and therefore I can achieve my goal of keeping a cross-server global variable.
I wanted to know if this is good practice and whether there is a better way to implement such a feature.
I am developing my application in ASP.NET with C# and MySQL. Also take into consideration that some of the aggregated data should eventually be written to the database. I do that to avoid a burst of simultaneous writes: I accumulate, say, 20 updates and only then write the data to the database.
Just to clear up a few things. First, let's make sure we understand how to use ElastiCache. The ElastiCache API doesn't give us any CRUD operations on the cache cluster; Amazon's API is strictly for managing the servers and configuration. You will need to use a memcached library for .NET to connect to the cluster. Using a cache like memcached is a good solution to your first problem. It will easily and quickly store simple application variables in a distributed environment. Using a cache is generally good practice, even with smaller applications.
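As a rough sketch of what that looks like from .NET (this assumes the Enyim.Caching memcached client with the cluster endpoints configured in web.config; adjust for whichever client library you pick):

using Enyim.Caching;
using Enyim.Caching.Memcached;

public class SharedCounter
{
    // MemcachedClient reads its server list from configuration by default,
    // so every web server behind the load balancer talks to the same cluster.
    private static readonly MemcachedClient Client = new MemcachedClient();

    public long IncrementPageCounter()
    {
        // Atomic increment shared by all servers; starts at 1 if the key is new.
        return (long)Client.Increment("counter1", 1, 1);
    }

    public object GetCounter()
    {
        return Client.Get("counter1");
    }
}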
I'm not sure how many users you have or how many you expect to have, but one thing I've learned in my years of programming is that over-optimization is usually a bad idea. Over-optimization is when you start to optimize your code before it's really necessary. Take your proposed optimization, for example. We know that making 1 write to the database is quicker than making 20 writes, generally speaking of course. However, unless your database is the bottleneck of your application, implementing such a feature introduces a significant amount of complexity for no immediate benefit. If a memcached cluster server crashes, which it will, then the data waiting to be written to the database is lost. And if you really do have a lot of users, you then have to start thinking about concurrency and locks on the memcached items.
Without knowing more about your application I can't make any real recommendations, except to say: make sure your optimizations are actually required before you spend time increasing the complexity of your application for nothing.
I am using an AES encryption/decryption class that needs a key value and an initialization vector to encrypt and decrypt data in an MVC3 application.
On saving the record I encrypt the data and then store it in the database. When I retrieve the record I decrypt it in the controller and pass the unencrypted value to the view.
The concern is not protecting data as it traverses the network but to protect the database should it be compromised.
I have read many posts that say don't put the encryption keys in your code.
Ok so where should they be kept? File system? Another Database?
Looking for some direction.
Common sense says that if an intruder gets access to your database, they will most likely also have access to your file system. It really comes down to you. For one, you can try to hide the key: in configuration files, in plain files somewhere in the file system, encrypted with another key held within the application... and so on and so forth.
Configuration files are a logical answer, but why take a chance - mix it up. Feel free to combine keys with multi-level encryption: one layer requiring something from the record itself and unique to every record, another requiring a configuration value, a third requiring an application-specific value, and perhaps a fourth from a library hidden well within your application's references. This way, even if one layer somehow gets compromised, you will have several others protecting the data.
Yes, it adds overhead. Yes, it is relatively expensive. But is it worth it if you have sensitive data like user credit card details? You bet it is.
I'm using similar encryption and hashing techniques in one of my personal pet projects, which is highly security-focused and carefully controlled. It depends how much data you need to display at any one time - for example, mine will only ever fetch 10 records at a time, most likely fewer.
... To clarify what I mean by mixing: encrypt once, then encrypt that data again with a different key and, ideally, a different algorithm.
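A minimal sketch of that kind of layering, assuming two independently sourced keys (say, one derived from the record and one from configuration; the class and method names here are invented for illustration):

using System;
using System.IO;
using System.Security.Cryptography;

public static class LayeredCrypto
{
    // Layer 1 encrypts with a per-record key, layer 2 encrypts the result again
    // with an application-wide key; compromising one key alone reveals nothing.
    public static byte[] Protect(byte[] plaintext, byte[] recordKey, byte[] configKey)
    {
        return EncryptAes(EncryptAes(plaintext, recordKey), configKey);
    }

    public static byte[] Unprotect(byte[] ciphertext, byte[] recordKey, byte[] configKey)
    {
        return DecryptAes(DecryptAes(ciphertext, configKey), recordKey);
    }

    private static byte[] EncryptAes(byte[] data, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV();
            using (var ms = new MemoryStream())
            {
                ms.Write(aes.IV, 0, aes.IV.Length);   // prepend the IV so it can be recovered
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(data, 0, data.Length);
                    cs.FlushFinalBlock();
                }
                return ms.ToArray();
            }
        }
    }

    private static byte[] DecryptAes(byte[] data, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            byte[] iv = new byte[aes.BlockSize / 8];
            Array.Copy(data, iv, iv.Length);
            aes.IV = iv;
            using (var ms = new MemoryStream())
            using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Write))
            {
                cs.Write(data, iv.Length, data.Length - iv.Length);
                cs.FlushFinalBlock();
                return ms.ToArray();
            }
        }
    }
}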
I would use Registry Keys protected by ACL, so only the account under which your app pool is running can read them.
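If you go that route, reading the value back at runtime is a one-liner; a sketch (the key path and value name below are placeholders):

using Microsoft.Win32;

public static class KeyStore
{
    // Reads key material from a registry value whose ACL only grants read
    // access to the app pool identity. Stored as REG_BINARY in this example.
    public static byte[] ReadAesKey()
    {
        return (byte[])Registry.GetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\MyCompany\MyApp",
            "AesKey",
            null);
    }
}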
In order to improve the speed of a chat application, I am remembering the last message id in a static variable (actually, a Dictionary).
However, it seems that every thread has its own copy, because users do not see the updates in production (single-server environment).
private static Dictionary<long, MemoryChatRoom> _chatRooms = new Dictionary<long, MemoryChatRoom>();
No [ThreadStatic] attribute is used...
What is a fast way to share a few ints across all application processes?
Update
I know the web must be stateless. However, for every rule there is an exception. Currently all data is stored in MS SQL, and in this particular case a piece of shared memory would increase performance dramatically and avoid needless SQL requests.
I have not used statics for years, so I even missed the moment when multiple instances started appearing within the same application.
So, the question is: what is the simplest way to share in-memory objects between processes? For now, my workaround is Remoting, but it requires a lot of extra code and I am not 100% sure about the stability of this approach.
I'm assuming you're new to web programming. One of the key differences between a web application and a regular console or Windows Forms application is that it is stateless. This means that every page request is basically initialised from scratch. You're using the database to maintain state, but as you're discovering, this is fairly slow. Fortunately you have other options.
If you want to remember something frequently accessed on a per-user basis (say, their username) then you could use session. I recommend reading up on session state here. Be careful, however, not to abuse the session object - since each user has his or her own copy of session, it can easily use a lot of RAM and cause you more performance problems than your database ever did.
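For example (a trivial sketch using the current HTTP context; the key name is arbitrary):

using System.Web;

public static class UserState
{
    // Each visitor gets an independent copy of anything placed in session.
    public static void RememberName(string name)
    {
        HttpContext.Current.Session["username"] = name;
    }

    public static string CurrentName()
    {
        return (string)(HttpContext.Current.Session["username"] ?? "unknown");
    }
}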
If you want to cache information that's relevant across all users of your app, ASP.NET provides a framework for data caching. The simplest way to use this is like a dictionary, e.g.:
Cache["item"] = "Some cached data";
I recommend reading in detail about the various options for caching in ASP.NET here.
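Beyond the dictionary-style assignment above, the same Cache object also supports expirations; a small sketch (the key name and five-minute timeout are arbitrary choices):

using System;
using System.Web;
using System.Web.Caching;

public static class SiteCache
{
    // Caches a value for five minutes, shared by all users of the application.
    // Note this is per web server; it is not shared between machines in a farm.
    public static void Put(string key, object value)
    {
        HttpContext.Current.Cache.Insert(
            key,
            value,
            null,                             // no cache dependency
            DateTime.Now.AddMinutes(5),       // absolute expiration
            Cache.NoSlidingExpiration);
    }

    public static object Get(string key)
    {
        return HttpContext.Current.Cache[key];
    }
}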
Overall, though, I recommend you do NOT bother with caching until you are more comfortable with web programming. As with any type of globally shared data, it can cause unpredictable issues which are difficult to diagnose if misused.
So far, there is no easy way to communicate between processes (and maybe that is a good thing, for the sake of isolation and scaling). For example, this is mentioned explicitly here: ASP.Net static objects
When you really need a web application/service to remember some state in memory, and NOT in the database, you have the following options:
1. You can set the Maximum Worker Processes count to 1. This requires moving this piece of code into a separate web application. If you host it on a separate subdomain, you will have cross-site scripting issues when accessing it from JS.
2. Remoting/WCF - you can host the critical data in a remoting application and access it from the web application.
3. Store the data in every process and synchronize changes via memcached. Memcached doesn't hold the actual data, because it would take too long to transfer it; it only holds the last-changed date for each collection.
With #3 I am able to achieve more than 100 pages per second from a single server.
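A rough sketch of approach #3, assuming the Enyim memcached client (the key names and the local-store shape are invented for illustration): each process keeps its own copy of the heavy data and only consults memcached for a small per-collection version stamp.

using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

public class ChatRoomCache
{
    private static readonly MemcachedClient Memcached = new MemcachedClient();

    private DateTime _localVersion = DateTime.MinValue;
    private object _localData;   // the in-process copy of the room state

    public object GetRoomData(long roomId, Func<object> loadFromDatabase)
    {
        // Only the tiny "last changed" stamp travels over the wire.
        object stamp = Memcached.Get("room-version-" + roomId);
        DateTime remoteVersion = stamp is DateTime ? (DateTime)stamp : DateTime.MinValue;

        if (_localData == null || remoteVersion > _localVersion)
        {
            _localData = loadFromDatabase();   // refresh the heavy data locally
            _localVersion = remoteVersion;
        }
        return _localData;
    }

    public void MarkChanged(long roomId)
    {
        // Whichever process mutates the room bumps the shared stamp.
        Memcached.Store(StoreMode.Set, "room-version-" + roomId, DateTime.UtcNow);
    }
}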
We are building an extranet loan status check website using ASP.NET MVC with a WCF backend. It's a pretty standard design, with the MVC site using a WCF service reference to get customer objects. The service uses an Oracle backend + HTTP binding, and won't be hosted on the same server as the MVC site (so we can't use the TCP binding to reduce latency).
The problem we encountered is that every call to the service is resulting in a 7-8s response time which is unacceptable for an extranet site and much higher than the 2s magic mark. The service method(s) call 12 stored procedures to create the customer object. The database is, unfortunately, denormalized (we can't change it as its also used by other inhouse production systems) so most of the calls are basic select statements which populate the customer object and its associated objects. The service proxy is properly opened and closed/disposed in the MVC actions so there are no instances of any service connection leaks. A new client proxy is created for every request (i.e., we are not using the singleton pattern for the service).
Any ideas how we can speed this up?
Thanks
It sounds like you already know where the problem is - it's the database.
I've never heard of a WCF operation taking more than a fraction of a second to set up and tear down, excluding any logic inside. So even if you could shave off 1-2 seconds of latency (which is probably an optimistic estimate), that doesn't really help if the database operation takes 5-6 seconds by itself.
Honestly? Running 12 stored procedures to create a customer is completely off the wall. The purpose of a stored procedure is to encapsulate all of the logic necessary to perform a complex database operation. The very first thing you need to do is change this to be one stored procedure; then, if it's still slow, profile the database to see what's taking so long and fix it accordingly. Usually poor database performance is due to one or more missing indexes.
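As an illustration of cutting down the round trips, one common Oracle pattern is a single procedure that returns several ref cursors which the service reads in one call. A sketch using ODP.NET (the package, procedure and parameter names are hypothetical):

using System.Data;
using Oracle.DataAccess.Client;   // ODP.NET

public static class CustomerRepository
{
    public static void LoadCustomer(string connectionString, int customerId)
    {
        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand("pkg_customer.get_customer_graph", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("p_customer_id", OracleDbType.Int32).Value = customerId;

            // One output ref cursor per logical result set, all filled in a single round trip.
            cmd.Parameters.Add("o_customer", OracleDbType.RefCursor).Direction = ParameterDirection.Output;
            cmd.Parameters.Add("o_accounts", OracleDbType.RefCursor).Direction = ParameterDirection.Output;
            cmd.Parameters.Add("o_loans", OracleDbType.RefCursor).Direction = ParameterDirection.Output;

            conn.Open();
            using (OracleDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* map the customer row */ }

                // Subsequent cursors: associated collections.
                while (reader.NextResult())
                {
                    while (reader.Read()) { /* map accounts, then loans */ }
                }
            }
        }
    }
}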
Until you accurately measure what is really happening, don't be too quick to assume where the bottleneck is.
You really need to do an Oracle extended SQL trace to see where that slowness is coming from. Anything other than that is mostly guesswork. Here is a paper from Cary Millsap (of Method R and formerly of Hotsos) that you can download that details doing this:
http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap
So I have a challenge to build a site that people online can use to interact with organizations: an ASP.NET MVC customer application.
One of the requirements is financial processing and accounting.
I'm very comfortable using SQL transactions and stored procedures to do this; i.e., CreateCustomer also creates an entity and an account record. We have a stored procedure for that: it begins a transaction, creates some setup records we need, then commits. I'm not seeing a good way to do this with an ORM, and after reading some great blog articles I'm starting to wonder if I'm going down the wrong path.
Part of the complexity here is the data itself:
I'm querying x databases (one per existing customer) to get some of my data, though my app has its own data store as well. I need to query the x databases, run stored procedures on the x databases, and also to my own datastore.
I'm not seeing strong support for things like stored procedures and thereby transactions, though it does seem to be present.
Maybe I'm just trying to make my app a nail here, cause the MVC hammer is sooo shiny. I'm plenty comfortable with raw ADO.NET of course, but I'm in love with the expressive feel to writing Linq code in C# and I'd rather not give up on it.
Down to the question:
Is this a bad idea? Should I try to use Linq / Entity Framework, or something like nHibernate... and stick with the ORM pattern or should I trash it and use raw ADO.NET data access?
Edit: a note on scale; from a queries per second standpoint this app is not "huge". But, from a data complexity perspective, it does need to query against 50+ databases (all identical, or close to it) to read data from an external application and publish data back to that application. ORM feels right when dealing with "my" data store, but feels very wrong for accessing the data from the external application.
From a certain size (number of databases) up, you have to change the paradigm. Are you at that size?
When you deploy what is ultimately a distributed application and yet try to control it as an ordinary local application, you are going to run into a set of fundamental issues around availability, scalability and correctness. If you use concepts like 'distributed transactions', 'linked servers' and 'ORM', you are down the wrong path. True distributed applications use terms like 'message', 'queue' and 'service'. Terms like Linq, EF and nHibernate are all fine and good, but none will bring you anything beyond what a simple Transact-SQL SELECT statement brings. In other words, if a SELECT solves your issues, then the various client-side ORMs will work. If not, they won't add any miraculous value.
I recommend you go over the slides on the SQLCAT: High Performance Distributed Applications in Real World Deployments which explain how a site like MySpace manages to read and write into a store of nearly 500 servers and thousands of databases.
Ultimately what you need to internalize is this: one database can have 95% availability (uptime and acceptable service response time). A system consisting of 10 databases with 95% availability each has about 60% availability. A system of 100 databases, each with 99.5% availability, has about 60% availability. 1000 databases with 99.95% availability each (5 minutes of downtime per week) also have about 60% availability. And this is for an ideal situation. In reality there is always a snowball effect caused by resource consumption (e.g. threads blocked trying to access an unavailable or slow resource) that makes things far worse.
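The arithmetic behind those figures is just the product of the individual availabilities; a tiny sketch for checking them:

using System;

static class Availability
{
    // Compound availability of n independent components, each with availability p.
    public static double Compound(double p, int n)
    {
        return Math.Pow(p, n);
    }

    // Compound(0.95, 10)     = ~0.599   (10 databases at 95%)
    // Compound(0.995, 100)   = ~0.606   (100 databases at 99.5%)
    // Compound(0.9995, 1000) = ~0.607   (1000 databases at 99.95%)
}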
This means that one cannot write a large distributed system relying on synchronous, tightly coupled operations and transactions. It is simply impossible. You have to rely on asynchronous operations (usually messaging and queues), which is something completely different from your run-of-the-mill database application.
Use the TransactionScope object available in System.Transactions.
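A minimal sketch of that (the connection string and the two INSERT statements are placeholders; everything inside the scope commits or rolls back together):

using System.Data.SqlClient;
using System.Transactions;

public static class CustomerSetup
{
    public static void CreateCustomer(string connectionString, string name)
    {
        // If a second connection to another database were opened inside the scope,
        // it would escalate to a distributed transaction (MSDTC).
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            using (var cmd = new SqlCommand("INSERT INTO Entity (Name) VALUES (@name)", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                cmd.ExecuteNonQuery();
            }

            using (var cmd = new SqlCommand("INSERT INTO Account (EntityName) VALUES (@name)", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                cmd.ExecuteNonQuery();
            }

            scope.Complete();   // nothing commits unless this line is reached
        }
    }
}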
What I have chosen is to use Entity Framework to allow access to the application's main data store, and create a custom DAL for access to external application data and for access to stored procedures within the application.
Here's hoping Entity Framework 4.0 fixes the issue. For now, I'm using the concept listed here.
http://social.msdn.microsoft.com/forums/en-US/adodotnetentityframework/thread/44a0a7c2-7c1b-43bc-98e0-4d072b94b2ab/