I'm creating some URL rewriting for ASP.NET. Now I am debating whether I should include the id in the URL or just the title. Do you guys know if it's a significant performance hit to look up an item by title instead of id?
If you can, look up by the primary key, which is probably ID in your case.
However, if your titles are unique and you have an index on Title, the performance difference should be minimal.
Edit: Since this is URL rewriting, the title probably has better SEO mileage, FWIW.
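For illustration, a minimal T-SQL sketch (the Items table and its column names are just made up for the example):

-- ID stays the primary key; a unique index makes lookups by Title cheap.
CREATE TABLE Items (
    ID    INT IDENTITY(1,1) PRIMARY KEY,
    Title NVARCHAR(200) NOT NULL
);

CREATE UNIQUE INDEX IX_Items_Title ON Items (Title);

-- With the index in place, both lookups are index seeks:
SELECT ID, Title FROM Items WHERE ID = 42;
SELECT ID, Title FROM Items WHERE Title = N'my-article-title';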
It depends on how many rows you have in your table and many other factors, but generally, if you have an index on your title column, it shouldn't be too much of a performance hit. Ultimately, the only real way to see if it's a problem in your scenario is to try it and run some tests.
The most important factor is to make sure you have an index on the column you are attempting to do the lookup on. So, another way to say it: put an index on the columns in your where clause.
Enjoy!
That depends.
If the Id is used for the clustered index (the default for a PK), the difference can be significant,
because, in simple words, when you use a clustered index to retrieve the data you perform fewer operations.
Numeric vs. character types also matter, as does the size you have declared for the type: NUMERIC(20) is slower than VARCHAR(5).
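To illustrate the clustered-index point, a rough T-SQL sketch (table and index names are invented for the example):

-- The primary key gets the clustered index by default, so the row
-- data is stored in the clustered index itself: a lookup by ID
-- walks one index and is done.
CREATE TABLE Articles (
    ID    INT NOT NULL PRIMARY KEY CLUSTERED,
    Title VARCHAR(200) NOT NULL
);

-- A lookup by Title walks this nonclustered index first and then,
-- for any columns the index does not cover, follows the clustering
-- key back to the row: an extra step.
CREATE UNIQUE NONCLUSTERED INDEX IX_Articles_Title
    ON Articles (Title);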
I have a table (key=username, value=male or female) and an index on the values.
After I add an item to the table, I want to update the counts of males and females. However, after a successful write, as the index is a Global Secondary Index, the count query is not consistent.
Is there a way (DynamoDB Streams, Lambda, ...) to monitor when the index is up to date?
Note that I'm not looking for a solution that involves something else (keeping a count of increments in Redis, etc.); what I describe here is a simplified problem, specifically to ask how I can monitor an index in DynamoDB.
Thanks!
I am not sure if there is any mechanism currently provided to check this, but you can easily solve this problem by adding a single line of code to your query:
ConsistentRead = True
DynamoDB has a parameter which, when set to true, will make sure that you get the latest updated value.
Now, when you add/update the item and then query the data, add the ConsistentRead option to the query; this will ensure that you have the latest count value.
Here is the reference link.
If you are able to accomplish this using another technique, then please do share it.
Hope that helps.
On this page, you can see what I mean:
http://codex.wordpress.org/Database_Description#Table:_wp_users
It only creates a normal index for the user_login column, while I think a unique index should be created for it.
Well, since the unique index (primary key) is the ID column, the coder probably didn't see the need to define another column in the table as unique. As you point out, though, the user_login column is indexed, so that gives the performance advantages when querying that table.
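For illustration, the difference in MySQL terms (a simplified sketch, not the actual wp_users DDL from the Codex page):

CREATE TABLE wp_users_example (
    ID         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_login VARCHAR(60) NOT NULL DEFAULT '',
    PRIMARY KEY (ID),
    KEY user_login_key (user_login)  -- a plain (non-unique) index, as in the current schema
);

-- What the asker suggests instead: also enforce uniqueness.
ALTER TABLE wp_users_example
    DROP KEY user_login_key,
    ADD UNIQUE KEY user_login_key (user_login);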
Don't know wp, but maybe it needs to allow for multiple user_logins with different user_status?
I've got a form in AX 2009 showing filtered records of a table (about 5,000,000 records total, about 1,000 shown after filtering).
Selecting a couple of those records in the form and deleting them via the form control (Alt+F9) is very slow.
One record is deleted immediately; deleting about 20 takes several minutes!
There is only one deleteAction on the table - any idea what could thwart the operation?
Edit:
The table in question has two indexes, neither of which allows duplicates. The first is an index on an integer field; the second is a combined index over three fields.
CreateRecIdIndex is not activated.
The filter makes use of one column (employeeID) in a QueryBuildRange.
deleteAction: another table (B) references the id (indexed) of the mentioned table (A). A has a deleteAction on B; the setting is "Cascade".
The two tables are related via the id field.
The relations can be resolved by an index.
And it's only about 20 records I want to delete, so I don't buy the idea that the amount of data to delete is too big!
Also have a look at this:
http://blogs.msdn.com/b/emeadaxsupport/archive/2010/07/12/forms-with-a-high-number-of-records-take-a-significant-time-to-show.aspx
Consider adding
grid.autoSizeColumns(false);
as suggested in the article.
To diagnose database performance issues in AX, enable SQL tracing in Tools\Setup, on the SQL tab page.
Use the code profiler to see where the time is spent.
I'm looking to create a table for user preferences and can't figure out the best way to do it. The way ASP.NET does it by default seems extremely awkward, and I would like to avoid that. Currently, I'm using one row per user, where I have a different column for each user preference (not normalized, I know).
So, the other idea that I had come up with was to split the Preferences themselves up into their own table, and then have a row PER preference PER user in a user preferences table; however, this would mean each preference would need to be the exact same datatype, which also doesn't sound too appealing to me.
So, my question is: What is the best/most logical way to design a database to hold user preference values?
Some of the things I try to avoid in database work are data duplication and unnecessary complication. You also want to avoid "insert, update, and deletion anomalies". Having said that, storing user preferences in one table, with each row being one user and the columns being the different preferences that are available, makes sense.
Now, if you can see these preferences being used in any other form or fashion in your database, like multiple objects (not just users) using the same preferences, then you'll want to go down your second route and reference the preferences with FK/PK pairs.
As for what you've described, I see no reason why the first route won't work.
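In other words, something like this (a T-SQL sketch; the preference columns are made up):

-- One row per user; one column per preference.
CREATE TABLE UserPreferences (
    UserID       INT NOT NULL PRIMARY KEY,  -- references your Users table
    Theme        VARCHAR(20) NOT NULL DEFAULT 'light',
    ItemsPerPage INT         NOT NULL DEFAULT 25,
    EmailOptIn   BIT         NOT NULL DEFAULT 1
);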
I usually do this:
Users table (user_id, ... etc.)
Options table (option_id, data_type, ... etc.) - the list of things that can be set by the user
Preferences table (user_id, option_id, setting)
I use the sql_variant data type for the setting field so that it can hold different data types, and I record the data type of each option as part of the option definition in the Options table, for casting the setting back to the right type when queried.
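Roughly, in T-SQL terms (sql_variant is SQL Server-specific; the option names here are invented):

CREATE TABLE Users (
    user_id INT NOT NULL PRIMARY KEY
    -- ... etc.
);

CREATE TABLE Options (
    option_id   INT NOT NULL PRIMARY KEY,
    option_name VARCHAR(50) NOT NULL,
    data_type   VARCHAR(20) NOT NULL  -- e.g. 'int', 'varchar', 'bit'
);

CREATE TABLE Preferences (
    user_id   INT NOT NULL REFERENCES Users (user_id),
    option_id INT NOT NULL REFERENCES Options (option_id),
    setting   SQL_VARIANT NULL,
    PRIMARY KEY (user_id, option_id)
);

-- Reading a value back: cast according to the recorded data_type.
SELECT CAST(p.setting AS INT) AS items_per_page
FROM Preferences p
JOIN Options o ON o.option_id = p.option_id
WHERE p.user_id = 1 AND o.option_name = 'items_per_page';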
If you store all your user preferences in a single row of a User table, you will have a maintenance nightmare!
Use one row per preference, per user, and store the preference value as a varchar (length 255, say, or some value large enough to meet your requirements). You will obviously have to convert values in/out of this column.
The only situation where this won't work easily is if you want to store some large binary data as a user preference, but I have not found that to be a common requirement.
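For example (a sketch; the table and preference names are made up):

CREATE TABLE UserPreference (
    UserID          INT          NOT NULL,
    PreferenceName  VARCHAR(50)  NOT NULL,
    PreferenceValue VARCHAR(255) NULL,
    PRIMARY KEY (UserID, PreferenceName)
);

-- Converting out of the varchar column happens at query time:
SELECT CAST(PreferenceValue AS INT) AS ItemsPerPage
FROM UserPreference
WHERE UserID = 1 AND PreferenceName = 'items_per_page';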
Real quick, one method:
User(UserID, UserName, ...)
PreferenceDataType(PreferenceDataTypeID, PreferenceDataTypeName)
PreferenceDataValue(PreferenceDataValueID, PreferenceDataTypeID, IntValue, VarcharValue, BitValue, ...)
Preference(PreferenceID, PreferenceDataTypeID, PreferenceName, ...)
UserHasPreference(UserID, PreferenceID, PreferenceDataValueID)
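A hedged DDL sketch of that method (the typed value columns carry the actual setting; column sizes and constraints are assumptions):

CREATE TABLE [User] (
    UserID   INT NOT NULL PRIMARY KEY,
    UserName VARCHAR(50) NOT NULL
);

CREATE TABLE PreferenceDataType (
    PreferenceDataTypeID   INT NOT NULL PRIMARY KEY,
    PreferenceDataTypeName VARCHAR(20) NOT NULL  -- 'int', 'varchar', 'bit', ...
);

CREATE TABLE Preference (
    PreferenceID         INT NOT NULL PRIMARY KEY,
    PreferenceDataTypeID INT NOT NULL REFERENCES PreferenceDataType (PreferenceDataTypeID),
    PreferenceName       VARCHAR(50) NOT NULL
);

CREATE TABLE PreferenceDataValue (
    PreferenceDataValueID INT NOT NULL PRIMARY KEY,
    PreferenceDataTypeID  INT NOT NULL REFERENCES PreferenceDataType (PreferenceDataTypeID),
    IntValue     INT          NULL,
    VarcharValue VARCHAR(255) NULL,
    BitValue     BIT          NULL  -- only the column matching the data type is populated
);

CREATE TABLE UserHasPreference (
    UserID                INT NOT NULL REFERENCES [User] (UserID),
    PreferenceID          INT NOT NULL REFERENCES Preference (PreferenceID),
    PreferenceDataValueID INT NOT NULL REFERENCES PreferenceDataValue (PreferenceDataValueID),
    PRIMARY KEY (UserID, PreferenceID)
);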
Is there a way, in Axapta/Dynamics AX, to create an Extended Data Type of type integer which only allows entering values in a specified range (e.g., if the extended data type is meant for storing years, I should be able to set a range like 1900-2100), or do I have to manage the range using X++ code?
And if I need to use X++ code to manage the range, which is the best way to do it?
I suggest you use the validateField method of the corresponding table.
Search for the method under AOT\Data Dictionary\Tables to see many examples.
You can't specify the range on the extended data type itself. If the type is used for a table field, you can add code to the insert and update methods of the table in order to validate the value whenever the record is updated. This approach could, however, have a cost in terms of performance.
You can also choose to just add code to the validateWrite method of the table, if you are satisfied with the validation only taking place when the value is modified from the UI.