How to generate a unique id per user? - asp.net

I have a webpage Default.aspx which generates an id for each new user; the id is then submitted to the database on a button click on Default.aspx.
If another user enters at the same time, both users get the same id until they press the button on Default.aspx.
How do I get rid of this issue, so that each user is allotted a unique id?
I am using read/write code to generate the unique id.

You could use a Guid as the id. To generate a unique id:
Guid id = Guid.NewGuid();
Another possibility is to use an automatically incremented primary column in the database so that it is the database that generates the unique identifiers.

Three options:
Use a GUID: Guid.NewGuid() will generate unique GUIDs. GUIDs are, of course, much longer than an integer.
Use interlocked operations to increment a shared counter. Interlocked.Increment is thread safe. This will only work if all the requests happen in the same AppDomain: either process recycling or a refresh of the code will create a new AppDomain and restart the count.
Use an IDENTITY column in the database. The database is designed to handle this; within the request that inserts the new row, use SCOPE_IDENTITY to select the value of the identity to update in-memory data (ORMs should handle this for you). (This is SQL Server; other databases have equivalent functionality.)
Of these, #3 is almost certainly the best; a sketch follows below.
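A minimal sketch of option #3 with plain ADO.NET, assuming a SQL Server table Users with an IDENTITY primary key Id and a Name column (all names here are illustrative):

using System;
using System.Data.SqlClient;

class UserRepository
{
    public static int InsertUser(string connectionString, string name)
    {
        // SCOPE_IDENTITY() returns the identity value generated by the
        // INSERT in this scope, so concurrent inserts cannot interfere.
        const string sql =
            "INSERT INTO Users (Name) VALUES (@name); " +
            "SELECT CAST(SCOPE_IDENTITY() AS int);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@name", name);
            connection.Open();
            // ExecuteScalar returns the first column of the first row:
            // here, the id the database just generated.
            return (int)command.ExecuteScalar();
        }
    }
}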

You could generate a Guid:
Guid.NewGuid()
Or you could let the database generate it for you upon insert. One way to do this is via a Sequence. See the Wikipedia article on Surrogate Keys.
From the article:
A surrogate key in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data.
The Sequence/auto-incremented column option is going to be simpler, and easier to remember when manually querying your DB (during debugging), but the DBA at my work says he's gotten 20% increases in performance by switching to Guids. He was using Oracle, and his database was huge, though :)

I use a static utility method to generate ids: take the full datetime (including seconds), append a random number of, say, 3 or 4 digits, and return the whole thing; then you can save it to the database.
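A rough sketch of that idea (names and format are illustrative). Note that a timestamp plus a few random digits is not collision-proof under concurrent load, which is why the Guid and IDENTITY approaches above are safer:

using System;

static class IdGenerator
{
    private static readonly Random Random = new Random();

    // Timestamp down to the second plus a 4-digit random suffix.
    // Two requests in the same second can still collide, so treat
    // this as illustrative rather than a uniqueness guarantee.
    public static string NewId()
    {
        string timestamp = DateTime.UtcNow.ToString("yyyyMMddHHmmss");
        int suffix;
        lock (Random)                       // Random is not thread safe
        {
            suffix = Random.Next(0, 10000);
        }
        return timestamp + suffix.ToString("D4");
    }
}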

Related

How to introduce a new column in DynamoDB running in production?

I have a use case where DynamoDB is running in production and I need to add a new column IDUpdatedAt, which will also serve as the sort key for one of the GSIs.
I tried it out in test: my application adds the new rows with IDUpdatedAt and it works fine, but what about the existing rows? How do I add the values for those?
Also, new rows will not be added without IDUpdatedAt, but how will search be impacted for the older rows?
PS: IDUpdatedAt is being used as a filter in the application, i.e., user can search for specific ID and can get results sorted by date. That's why IDUpdatedAt is also a part of GSI (sort key).
Please help.
You've got the right idea by adding the field to new items. After all, DynamoDB does not enforce a particular schema outside of the primary key.
This also happens to be a very useful feature, especially when defining a GSI on that attribute; if the attribute exists on the item, it ends up in the index! For example, imagine modeling an email inbox in DDB where each item represents an email. You could include an attribute 'is_read' and define a GSI using that attribute.
If the 'is_read' attribute exists on the item, it's in the index. Otherwise, it's not. A cool way to use GSIs to implement filtering.
Pretty neat stuff!
However, there is no way to retroactively update all items with a new attribute other than manually updating each item (or in batches). The equivalent in SQL databases is defining a new column. Unfortunately, an analogous operation in DDB does not exist.
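If you do need to backfill the existing items, the usual approach is a one-off script that scans the table and updates every item that lacks the attribute. A minimal sketch with the AWS SDK for .NET; the table name, the Id key, and how the IDUpdatedAt value is derived are all assumptions about your schema:

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class Backfill
{
    public static async Task RunAsync(IAmazonDynamoDB client)
    {
        Dictionary<string, AttributeValue> lastKey = null;
        do
        {
            // Page through the table, fetching only items that
            // still lack the new attribute.
            var page = await client.ScanAsync(new ScanRequest
            {
                TableName = "MyTable",
                FilterExpression = "attribute_not_exists(IDUpdatedAt)",
                ExclusiveStartKey = lastKey
            });

            foreach (var item in page.Items)
            {
                try
                {
                    await client.UpdateItemAsync(new UpdateItemRequest
                    {
                        TableName = "MyTable",
                        Key = new Dictionary<string, AttributeValue>
                        {
                            ["Id"] = item["Id"]
                        },
                        UpdateExpression = "SET IDUpdatedAt = :v",
                        // Don't clobber a value a live writer set meanwhile.
                        ConditionExpression = "attribute_not_exists(IDUpdatedAt)",
                        ExpressionAttributeValues = new Dictionary<string, AttributeValue>
                        {
                            // Placeholder: derive the real value from the
                            // item's existing attributes as appropriate.
                            [":v"] = new AttributeValue { S = "1970-01-01T00:00:00Z" }
                        }
                    });
                }
                catch (ConditionalCheckFailedException)
                {
                    // A concurrent writer already set the attribute; skip.
                }
            }

            lastKey = page.LastEvaluatedKey;
        } while (lastKey != null && lastKey.Count > 0);
    }
}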

How does String id = db.collection("myCollection").document().getId() give a document ID without hitting the Firestore database?

I read somewhere that db.collection("mycollection").document().getId(); gives a document ID in mycollection without hitting the Cloud Firestore database. But how is it possible to create a unique ID without knowing the document IDs of already existing documents or hitting Cloud Firestore?
The auto-ID that is generated when you call document() is a fairly basic UUID (universally unique identifier). Such identifiers are statistically guaranteed to be unique. In my words: there is so much random information in there that the chances of two calls generating the same value are infinitesimally small.
So Firestore doesn't actually call the server to check whether the ID it generates is unique. It instead relies on the mathematical properties of picking a single value out of a sufficiently large and random set to be very certain it is unique.
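As an illustration of the principle (this is not Firestore's actual source): picking 20 characters at random from a 62-character alphabet gives 62^20, roughly 7×10^35, possible ids, so a collision is astronomically unlikely and no server round trip is needed.

using System;
using System.Security.Cryptography;
using System.Text;

static class AutoId
{
    private const string Alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    // Generate a 20-character random id entirely on the client.
    public static string NewId()
    {
        var bytes = new byte[20];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);   // cryptographically strong randomness
        }
        var sb = new StringBuilder(20);
        foreach (byte b in bytes)
        {
            // Modulo introduces a slight bias; acceptable for a sketch.
            sb.Append(Alphabet[b % Alphabet.Length]);
        }
        return sb.ToString();
    }
}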

MVC3 routes - replace id with object name

I'm looking for a fast & elegant way of replacing my object IDs with descriptive names, so that my autogenerated routes look like:
/products/oak-table-25x25-3-1
instead of
/products/5bd8c59c-fc37-40c3-bf79-dd30e79b55a5
In this sample:
uid = "5bd8c59c-fc37-40c3-bf79-dd30e79b55a5"
name = "Oak table (25x25) 3/1"
I don't even know what that feature is called, so that I could google for it.
The problem that I see so far is the uniqueness of that "url-object-name": for example, if I have two oak tables 25x35 in the db, their names may differ too little to be uniquely url-named but enough to fool the unique constraint in the db.
I'm thinking of writing that name-transform function in SQL as a UDF, then adding a calculated field that returns it, then unique-constraining that field.
Is there some more mainstream way of achieving that?
One method is that employed by stackoverflow.com which in your case would be:
/products/5bd8c59c-fc37-40c3-bf79-dd30e79b55a5/oak-table-25x25-3-1
This ensures uniqueness; however, the length of the UUID may be a deterrent. You may consider adding a sequential int or bigint identity value to the products table in addition to the uniqueidentifier field. This, however, would require an additional index on that column for lookup, though a similar index would be required for a URL having only a descriptive string. Yet another method would be to use a hash value, seeded by date for instance, which you can compose with the descriptive name. It is simpler to rely on a sequential ID value generated by a database, but if you envision using NoSQL storage mechanisms in the future you may consider using an externally generated hash value to append.
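Generating the descriptive part of such a URL is a simple string transform. A rough sketch (the exact rules are up to you):

using System.Text.RegularExpressions;

static class Slug
{
    // "Oak table (25x25) 3/1" -> "oak-table-25x25-3-1"
    public static string From(string name)
    {
        string lower = name.ToLowerInvariant();
        // Collapse every run of characters that are not letters
        // or digits into a single hyphen, then trim the ends.
        return Regex.Replace(lower, "[^a-z0-9]+", "-").Trim('-');
    }
}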
Identity should have 2 properties: it should be unique and unchangeable. If you can guarantee that /products/oak-table-25x25-3-1 will never change to /products/oak-table-25x25-3-1-1 (remember, users can have bookmarks that shouldn't return a 404 status code), you can use the name as the url parameter and get the record by it.
If you can't guarantee uniqueness, or want to select the record faster, use the following:
/products/123/oak-table-25x25-3-1 - get the record by id (123)
/products/123/blablabla - should redirect to the first, because blablabla does not exist or has another id
/products/123 - should redirect to the first
And try to use shorter identities - remember that in Web 2.0 the url is part of the UI, and the UI should be friendly.
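A sketch of that id-plus-slug scheme as an MVC controller; the Product type and repository are illustrative placeholders, and the route is assumed to be products/{id}/{slug} with slug optional:

using System;
using System.Web.Mvc;

public class Product
{
    public int Id { get; set; }
    public string Slug { get; set; }
}

public interface IProductRepository
{
    Product GetById(int id);
}

public class ProductsController : Controller
{
    private readonly IProductRepository repository;   // injected elsewhere

    public ProductsController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Show(int id, string slug)
    {
        Product product = repository.GetById(id);
        if (product == null)
            return HttpNotFound();

        // Wrong or missing slug: permanently redirect to the canonical
        // url, so old bookmarks keep working instead of returning 404.
        if (!string.Equals(slug, product.Slug, StringComparison.Ordinal))
            return RedirectToActionPermanent("Show", new { id, slug = product.Slug });

        return View(product);
    }
}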
MVC routing (actions) will handle spaces and slashes in a name. It will encode them (spaces as %20, slashes as %2F) and then decode them correctly.
Thus your URL would be /products/oak%20table%2025x25-3%2F1
I have done something very similar in an eCommerce platform I am working on.
The idea is that the URL without the unique ID is better for SEO but we didn't want the unique ID to be the product name that can change often.
The solution was to implement .NET MVC "URL slug only" functionality. The product manager creates unique "slugs" that are assigned to products. These link to the product, but the product ID and name can be changed at any time.
This allows:
domain.com/oak-table-25x25-3-1
to point to:
/products/5bd8c59c-fc37-40c3-bf79-dd30e79b55a5
(The same functionality can be used on categories too so domain.com/tables can point to domain.com/category/5b38c79c-f837-42c3-bh79-dd405479b15b5)
I have documented how I did this at:
http://makit.net/post/3380143142/dotnet-slug-only-urls

Using IPrincipal.Identity.Name as a key in a database to identify a user's rows

I'm writing a small intranet app that uses Windows Authentication and Asp.Net MVC.
I need to store various bits of data in a db against each user.
As far as I can tell the IPrincipal object does not seem to have something like a unique id. So I was thinking I could just use User.Identity.Name as a unique value to identify rows in my db.
Is this a bad idea? Is there an alternative to this approach?
Thanks for any help.
I would create a User table that includes an identity column as the id. When a person accesses the site, check the user table for that individual's unique id: read it if it exists, or insert a new row if the user is new.
Login names can be long, and that could affect your indexes depending on the expected size of your data.
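A sketch of that lookup-or-insert step with plain ADO.NET, assuming a Users table with an identity column Id and a unique LoginName column (all names are illustrative):

using System.Data.SqlClient;

static class UserStore
{
    // Returns the surrogate id for a Windows login, inserting a row
    // on first visit. The unique constraint on LoginName prevents two
    // concurrent first visits from creating duplicate rows.
    public static int GetOrCreateUserId(string connectionString, string loginName)
    {
        const string sql =
            "IF NOT EXISTS (SELECT 1 FROM Users WHERE LoginName = @login) " +
            "    INSERT INTO Users (LoginName) VALUES (@login); " +
            "SELECT Id FROM Users WHERE LoginName = @login;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@login", loginName);
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}

You would then call GetOrCreateUserId(connectionString, User.Identity.Name) once per visit and key your other tables off the returned int.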

Efficiently maintaining a cache of distinct items in a huge DB table

I have a very large (millions of rows) SQL table which represents name-value pairs (one column for the name of a property, the other for its value). In my ASP.NET web application I have to populate a control with the distinct values available in the name column. This set of values is usually not bigger than 100, most likely around 20. Running the query
SELECT DISTINCT name FROM nameValueTable
can take a significant time on this large table (even with the proper indexing etc.). I especially don't want to pay this penalty every time I load this web control.
So caching this set of names should be the right answer. My question is how to promptly update the cache when a new name appears in the table. I looked into the SQL 2005 Query Notification feature, but the table gets updated frequently and only very seldom with an actually new distinct name. The notifications would flow in all the time, and the web server would probably waste more time handling them than it saves.
I would like to find a way to balance the time spent querying the data with the delay until the name set is updated.
Any ideas on how to efficiently manage this cache?
A little normalization might help. Break out the property names into a new table and FK back to the original table, using an int ID. You can then select from the new table to get the complete list, which will be really fast.
Figuring out your pattern of usage will help you come up with the right balance.
How often are new values added? are new values added always unique? is the table mostly updates? do deletes occur?
One approach may be to have a SQL Server insert trigger that checks the cache table to see if the new name is there and, if not, adds it.
Add a unique increasing sequence MySeq to your table. You may want to try clustering on MySeq instead of your current primary key so that the DB can build a small set and then sort it.
SELECT DISTINCT name FROM nameValueTable WHERE MySeq >= ?;
Set ? to the highest MySeq value your cache has already seen.
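A sketch of the incremental refresh under that scheme, assuming MySeq is a bigint (names are illustrative):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class DistinctNameCache
{
    private readonly HashSet<string> names = new HashSet<string>();
    private long lastSeq;   // highest MySeq already folded into the cache

    public ICollection<string> Names { get { return names; } }

    // Only rows added since the previous call are scanned, so the
    // expensive full-table DISTINCT never has to run again.
    public void Refresh(string connectionString)
    {
        const string sql =
            "SELECT name, MySeq FROM nameValueTable WHERE MySeq > @lastSeq";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@lastSeq", lastSeq);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));   // HashSet ignores duplicates
                    lastSeq = Math.Max(lastSeq, reader.GetInt64(1));
                }
            }
        }
    }
}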
You will always have a lag between your cache and the DB, so if this is a problem you need to rethink the flow of the application. You could try making all requests flow through your cache/application if you manage the data:
requests --> cache --> db
If you're not allowed to change the actual structure of this huge table (for example, due to huge numbers of reports relying on it), you could create a holding table of these 20 values and query against that. Then, on the huge table, have a trigger that fires on an INSERT or UPDATE, checks to see if the new NAME value is in the holding table, and if not, adds it.
I don't know the specifics of .NET, but I would pass all the update requests through the cache. Are all the update requests done by your ASP.NET web application? Then you could make a Proxy object for your database and have all the requests directed to it. Taking into consideration that your database only has key-value pairs, it is easy to use a Map as a cache in the Proxy.
Specifically, in pseudocode, all the requests would be as follows:
// the client invokes cache.get(key)
if (cacheMap.has(key)) {
    return cacheMap.get(key);
} else {
    value = database.retrieve(key);   // cache miss: fetch from the database
    cacheMap.put(key, value);         // remember it for subsequent gets
    return value;
}
// the client invokes cache.put(key, value)
cacheMap.put(key, value);
if (writeThrough) {
    database.put(key, value);         // propagate the write immediately
}
Also, in the background you could have an Evictor thread which ensures that the cache does not grow too big. In your scenario, where you have a set of frequently accessed values, I would set an eviction strategy based on Time To Idle: if an item is idle for more than a set amount of time, it is evicted. This ensures that frequently accessed values remain in the cache. Also, if your cache is not write-through, you need to have the evictor write to the database on eviction. A sketch follows below.
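A sketch of such a time-to-idle cache (thread safety and the background evictor thread itself are omitted for brevity):

using System;
using System.Collections.Generic;
using System.Linq;

class TtiCache<TKey, TValue>
{
    private class Entry
    {
        public TValue Value;
        public DateTime LastAccess;
    }

    private readonly Dictionary<TKey, Entry> map = new Dictionary<TKey, Entry>();
    private readonly TimeSpan maxIdle;

    public TtiCache(TimeSpan maxIdle) { this.maxIdle = maxIdle; }

    public bool TryGet(TKey key, out TValue value)
    {
        Entry entry;
        if (map.TryGetValue(key, out entry))
        {
            entry.LastAccess = DateTime.UtcNow;   // reading resets the idle clock
            value = entry.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    public void Put(TKey key, TValue value)
    {
        map[key] = new Entry { Value = value, LastAccess = DateTime.UtcNow };
    }

    // Called periodically by the evictor thread.
    public void Evict()
    {
        DateTime cutoff = DateTime.UtcNow - maxIdle;
        foreach (TKey key in map.Where(p => p.Value.LastAccess < cutoff)
                                .Select(p => p.Key)
                                .ToList())
        {
            // If the cache is not write-through, persist the value to
            // the database here before removing it.
            map.Remove(key);
        }
    }
}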
Hope it helps :)
-- Flaviu Cipcigan
