Alright, the reason I am asking is that I am analyzing a piece of software, and they have used a certain type of key. I know I have seen such keys before, so there must be a standard.
It looks like this:
ED19769F-D53C-49BB-B23F-CB67DF1A69B
I know this is a simple question, but it is difficult to formulate a Google search string for this problem, and it doesn't help that the key is totally unique.
Question: What is the standard called?
That looks to me like a Globally unique identifier, or GUID, and it is meant to be totally unique. They are sometimes used as database table primary keys when an auto-incrementing number is not appropriate. They are also used as identifiers in places like the Windows registry and as hardware device IDs.
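For what it's worth, .NET exposes this as the built-in Guid type, so if you're poking at one of these in code you can generate and parse them directly (the values below are just examples):

using System;

class GuidDemo
{
    static void Main()
    {
        // A GUID is a 128-bit value, conventionally written as 32 hex digits
        // in an 8-4-4-4-12 pattern.
        Guid id = Guid.NewGuid();
        Console.WriteLine(id);                    // e.g. 3f2504e0-4f89-41d3-9a0c-0305e82c3301

        // TryParse accepts the hyphenated form as well as braced and bare
        // 32-digit forms.
        if (Guid.TryParse(id.ToString(), out Guid parsed))
        {
            Console.WriteLine(parsed.ToString("N"));  // same value without hyphens
        }
    }
}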
Can anyone actually explain, in layman's terms, what is a real-world use case for the set operation's options?
While I fully understand what set with merge does, as well as merge being a boolean and mergeFields being an array of fieldPaths, I cannot think of cases in which mergeFields might be of any use.
I also understand that mergeFields basically acts like a mask for the object passed to the set operation, but I still cannot think of how it is so useful that it actually got implemented in the SDK.
Can someone shed some light?
After looking through the documentation, there seem to be two reasons why you might want to use one vs the other:
mergeFieldPaths/mergeFields trigger an error when passing in field values that don't currently exist on the document while merge will add in those fields if they don't exist. The error is good for safety purposes if you're concerned about typos/writing to incorrect field paths.
This one is just a guess, but the documentation indicates that mergeFieldPaths/mergeFields both ignores unlisted fields in the data AND leaves them untouched on the document, while merge ONLY leaves other fields untouched. It's possible there's some performance advantage to using mergeFieldPaths/mergeFields, especially for documents with a ton of fields. The difference might be direct access versus still needing to look at unspecified fields to identify the matches in some way.
SetOptions Reference
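To make the difference concrete, here is a small sketch using the .NET SDK (Google.Cloud.Firestore); the project, collection and field names are made up, and the other SDKs expose roughly the same options as { merge: true } / { mergeFields: [...] }:

using System.Collections.Generic;
using System.Threading.Tasks;
using Google.Cloud.Firestore;

public static class SetOptionsDemo
{
    public static async Task RunAsync()
    {
        FirestoreDb db = FirestoreDb.Create("my-project");
        DocumentReference doc = db.Collection("users").Document("alice");

        // MergeAll: every field in the passed data is written; other fields
        // already on the document are left untouched.
        await doc.SetAsync(
            new Dictionary<string, object> { ["age"] = 30, ["city"] = "Paris" },
            SetOptions.MergeAll);

        // MergeFields: only the listed field paths are written. The "city"
        // value in the data is ignored and the document keeps "Paris".
        await doc.SetAsync(
            new Dictionary<string, object> { ["age"] = 31, ["city"] = "Lyon" },
            SetOptions.MergeFields("age"));
    }
}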
Using Entity Framework Core (Code First) and SQLite, Guid is stored as binary but Decimal and Date fields are stored as text with Microsoft's provider.
I can understand they might not want the imprecision of DOUBLE for currency amounts and thus use text.
What happens if I need to sort? Is Entity Framework Core smart enough to make sorting work as expected (but slower because it needs to parse everything!), or will it sort alphabetically instead of sorting by number? I don't want it to return 100 before 2.
I'll have to do things like "give me the latest order" so what's the best approach for that? I want to make sure it's going to work.
Am I better off switching to the System.Data.SQLite provider to store dates in UNIX format (this is not supported by Microsoft's provider)? And would I then have to do the parsing back and forth myself, or could it take care of that automatically?
I am still learning System.Data.SQLite myself, but I am aware that you can create and assign a custom collation to your column. The collation can either be assigned to the table column, or only to a particular view or query using standard SQLite SQL syntax and the COLLATE keyword.
This is not a complete example/tutorial, but for starters visit the Microsoft.Data.Sqlite docs. Also see this Stack Overflow answer. These are just hints, but they provide a consistent method to do this. Remember that SQLite is an in-process DB engine, so it should still be rather efficient and still allow working with the database in a normal fashion without having to constantly inject custom logic between queries. Once you have the custom collation defined and properly registered, it should be rather seamless, with perhaps the only extra requirement being to append e.g. COLLATE customDecimal to the ORDER BY clauses.
The custom collation function would convert the string value to an appropriate numeric type and return the comparison. It's very similar to the native .Net IComparer and IComparison interfaces/implementations.
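To make that concrete, here is a hedged sketch with Microsoft.Data.Sqlite; the Orders table and Amount column are made-up names, and customDecimal is the collation name from above. The collation has to be registered on each connection, and you opt into it in the ORDER BY clause:

using System;
using System.Globalization;
using Microsoft.Data.Sqlite;

class DecimalCollationDemo
{
    static void Main()
    {
        using var connection = new SqliteConnection("Data Source=app.db");
        connection.Open();

        // Compare the TEXT-stored values as decimals instead of alphabetically,
        // so "100" no longer sorts before "2".
        connection.CreateCollation("customDecimal", (x, y) =>
            decimal.Parse(x ?? "0", CultureInfo.InvariantCulture)
                   .CompareTo(decimal.Parse(y ?? "0", CultureInfo.InvariantCulture)));

        // Opt into the collation explicitly where the numeric ordering matters.
        using var command = connection.CreateCommand();
        command.CommandText =
            "SELECT Id, Amount FROM Orders ORDER BY Amount COLLATE customDecimal DESC";

        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader.GetInt64(0)}: {reader.GetString(1)}");
        }
    }
}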
It appears that SQLite, apparently as a "compatibility feature", parses double quoted identifiers as string literals if no matching column is found.
I understand that it does so for people who write improper sql, and for backwards compatibility with legacy projects created by such people, but it makes debugging very difficult for those of us writing proper sql on brand new projects.
For example,
SELECT * FROM "users" WHERE "usernme" = 'joe';
returns a query with 0 rows, since the string 'usernme' does not equal the string 'joe'.
This leaves me scratching my head wondering why I'm not getting joe's row even when I know there's a user by that name, until I painstakingly backtrack through my code and realize that I left out an 'a'.
Is there any "strict mode" PRAGMA or API option to enforce quoting rules and treat all double-quoted strings as identifiers so that it will inform me immediately if one is misspelled?
(And please, no answers telling me not to quote identifiers if I don't need to, because any such answer is basically telling me that in order to get proper debugging, you have to write bad code in the first place.)
This is hardcoded in the SQLite parser and cannot be changed from the outside.
I also asked in the SQLite channel and someone there was kind enough to look through the source code and create a patch, and even started a thread on the mailing list describing the patch:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg73832.html
It's not an answer that works for the official builds, but it may be someday. For the moment, I'm just going to recompile it myself with this patch.
Ten years later: this doesn't completely meet your "strict mode" criteria, but here's a trick I used to make some queries safer, if you can remember to use it. Give your table an alias and reference the column through it:
SELECT t."nosuch_column" FROM some_table t;
I suppose in this form, it's clear to SQLite that a literal isn't desired.
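For what it's worth, here is a quick sketch of the difference using Microsoft.Data.Sqlite against an in-memory database (assuming a default SQLite build where the double-quote fallback is enabled):

using System;
using Microsoft.Data.Sqlite;

class DoubleQuoteDemo
{
    static void Main()
    {
        using var connection = new SqliteConnection("Data Source=:memory:");
        connection.Open();

        using (var setup = connection.CreateCommand())
        {
            setup.CommandText =
                "CREATE TABLE users (username TEXT); INSERT INTO users VALUES ('joe');";
            setup.ExecuteNonQuery();
        }

        using (var silent = connection.CreateCommand())
        {
            // Runs without error and returns 0 rows: "usernme" is treated as a string literal.
            silent.CommandText = "SELECT * FROM \"users\" WHERE \"usernme\" = 'joe';";
            using var reader = silent.ExecuteReader();
            Console.WriteLine(reader.HasRows);   // False
        }

        using (var strict = connection.CreateCommand())
        {
            // With the alias prefix, the misspelling fails loudly at prepare time.
            strict.CommandText = "SELECT u.\"usernme\" FROM users u;";
            try
            {
                strict.ExecuteReader();
            }
            catch (SqliteException ex)
            {
                Console.WriteLine(ex.Message);   // no such column: u.usernme (or similar)
            }
        }
    }
}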
I have been searching for some best practice guidance when using the QueryString in ASP.NET and haven't really found any.
I have found a helpful optimization article: http://dotnetperls.com/querystring
But I am more interested in answering the following questions:
Case rules? All lowercase? Pascal Case? Camel Case?
My personal preference is all lowercase, but consistency is most important.
Avoiding special characters in parameter names?
Should parameters and values be obfuscated for security purposes?
etc...
Any more guidelines would be appreciated!
Whatever is in your query string is viewable and changeable by the end user. This means they have the potential to change it to view or access data they shouldn't, or to influence the behavior of your site/app. So it goes without saying that you trust nothing on the query string, and check everything before you use it. When you check it, don't check for things that are wrong with it (that could be an infinite list), instead check for things that are correct. If even one of your checks fails then you should discard the query string data, or treat it as suspect. If you have encrypted or encoded the data on the query string it can still have unintended side effects if the user messes with it and you blindly trust it, even if the user's changes were nonsensical due to the encoding.
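As a small illustration of checking for what is correct rather than what is wrong, here is a sketch that whitelists a single parameter; the parameter name and the allowed pattern are assumptions for the example:

using System.Text.RegularExpressions;

public static class QueryStringValidation
{
    // Accept only a 32-digit hex key; anything else means the value has been
    // tampered with (or mistyped) and the query string is treated as suspect.
    private static readonly Regex KeyPattern =
        new Regex("^[0-9a-fA-F]{32}$", RegexOptions.Compiled);

    public static bool TryGetKey(string rawValue, out string key)
    {
        key = null;
        if (rawValue == null || !KeyPattern.IsMatch(rawValue))
        {
            return false;
        }
        key = rawValue.ToLowerInvariant();
        return true;
    }
}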
The one approach I take with storing sensitive data in the query string is to not do it; instead I will store the sensitive data server side (in the Session, Cache or a table in the database), and then I will have a randomly generated key (usually a GUID) in the query string to identify it, so the URL would look like this:
http://myurl.com/myPage.aspx?secretKey=73FA4A5A85A44C75ABB5E323569628D3
It is rather difficult to brute force a GUID and the chances of a GUID collision are infinitesimally small, so if the end user messes with the query string then they end up getting nothing.
This approach also works well when I need to store many things and the querystring starts to become too long - the data needing to be tracked can be kept in an object which is then stored in Session or Cache, and once again a GUID is used as its key.
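A rough sketch of that pattern in Web Forms code-behind (the page and class names are hypothetical):

using System;
using System.Web.UI;

public partial class CheckoutPage : Page
{
    protected void StoreAndRedirect(object sensitiveData)
    {
        // Keep the sensitive data server side; only a random GUID goes on the URL.
        string secretKey = Guid.NewGuid().ToString("N");
        Session[secretKey] = sensitiveData;
        Response.Redirect("myPage.aspx?secretkey=" + secretKey);
    }
}

public partial class MyPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Look the data up again; a guessed or tampered key simply finds nothing.
        string key = Request.QueryString["secretkey"];
        object data = key == null ? null : Session[key];
        if (data == null)
        {
            Response.StatusCode = 404; // unknown key - nothing to show
        }
    }
}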
My 5 cents:
if you have a page that can be called by other people, like
http://myurl.com/myPage.aspx?secretKey=73FA4A5A85A44C75ABB5E323569628D3
then you don't want them to run into problems just because they typed the 'K' in secretKey in lowercase.
So here my rules:
Do it all lowercase. Never uppercase, because some lowercase letters have no corresponding uppercase letter, such as the German sharp s (ß).
QueryString["mykey"].ToLower().Equals("73FA4A5A85A44C75ABB5E323569628D3")
is a bad idea, because QueryString["mykey"] might be null (NullReferenceException).
No complicated constructs like if (string.IsNullOrEmpty(...)) followed by else if (object.Equals(queryKey, "comparison")). Simply use StringComparer.OrdinalIgnoreCase.Equals(key, "73FA4A5A85A44C75ABB5E323569628D3"); it handles null (it just returns false), so no additional null/empty check is needed.
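Pulling those rules together, a small sketch (the secretkey name and the GUID value are just examples):

using System;
using System.Web;

public static class QueryStringChecks
{
    public static bool HasExpectedKey(HttpRequest request)
    {
        // Null-safe: a missing parameter yields null, and the comparer simply
        // returns false instead of throwing a NullReferenceException.
        string value = request.QueryString["secretkey"];
        return StringComparer.OrdinalIgnoreCase.Equals(
            value, "73fa4a5a85a44c75abb5e323569628d3");
    }
}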
I don't think you can pick a better answer between slugster's and Stefan's. Best to do both: refer to the data with GUIDs and use lowercase, so the example above would actually read http://myurl.com/mypage.aspx?secretkey=73fa4a5a85a44c75abb5e323569628d3
I know there are already some questions on this topic on the site...
I am just trying to understand if it's safe to use ASP.NET Profile Provider with a website with huge traffic?
The way I see it, it's laid out inefficiently. You store the property name (a string) and the property value (also a string). If you are just trying to store, say, age in the profile, you are unnecessarily storing the string "age" in the database over and over, whereas with a self-created table you could just add a column titled age, with no redundancy.
(I am just trying to make sure I am not missing something about it, because I am fairly new to it.)
The profile provider uses an EAV (Entity-Attribute-Value) design deliberately, because profiles in general very commonly have a sparsely populated schema - that is, there are many potential attributes, but only a few will be used for a given single entity, and the few that are used varies widely from one entity to the next.
Let's use a totally arbitrary example - let's say only one in 10 of your users want to provide their age. Making that a column now seems more like a waste, no?
But what if your application makes age mandatory? OK, that column gets populated for everyone. But what if you need to make a note in the profile "user doesn't want to see this obscure dialog anymore". Do you really want a column for every single dialog in your application whether a user wants to see it? Probably not. When you get into the little one-off details of an application of any significant scope, EAV actually becomes the more economical choice.
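To make the trade-off concrete, here is a rough sketch of the two shapes being compared; the class and column names are illustrative, not the actual profile provider schema:

using System;

// Wide table: one column per possible setting; every new one-off preference
// means a new column (and a schema change), and most cells stay NULL.
public class UserProfileWide
{
    public Guid UserId { get; set; }
    public int? Age { get; set; }
    public bool? HideWelcomeDialog { get; set; }
    public bool? HideExportWarning { get; set; }
}

// EAV: one row per (user, attribute) that is actually set; sparse and
// schema-free, at the cost of storing the attribute name with every row.
public class ProfileEntry
{
    public Guid UserId { get; set; }
    public string PropertyName { get; set; }   // e.g. "age", "hideWelcomeDialog"
    public string PropertyValue { get; set; }  // stored as text
}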
In general, it scales quite well (far better than you probably think). In your specific case, it doesn't matter - as always, use what works and fix performance problems when they come up. Whatever the scalability limitations of the profile provider are, you'll know when you hit them. I guarantee two things - (1) you'll have to fix a lot of other performance problems you didn't expect before you have to fix that; and (2) if your site is getting enough traffic to break the profile provider, it's a good problem to have.
I agree with Rex M, unless you have a need to do things like sort all your users by age or do other procedures with aggregate profile data. Then you could consider rolling your own. But for just storing properties that you access here and there on a user-by-user basis, Rex M is right.
I do know what you mean. Wouldn't it make sense to supplement the profile provider's table with another table that has columns for the mandatory fields? Or do you think the overhead of the join would make it not worth it?