I want to have a column-specific SqlCacheDependency.
A row-specific SqlCacheDependency works fine, but I don't know how to make a column-specific one.
Example:
The query:
SELECT
[Extent1].[Price] AS [Price]
FROM [dbo].[Products] AS [Extent1]
WHERE [Extent1].[ID] = 31167
causes a notification if the row with ID = 31167 changes.
The problem is that the cache becomes invalid if any column of that row changes, but I want the cache to become invalid only when the Price of the row with ID 31167 changes.
I have searched for this for a long time without finding anything.
Any help is appreciated.
SqlDependency (which is used by SqlCacheDependency) does not offer column-level control. The semantics of the change notifications are that they are sent if a row returned or used by the query "might have changed."
If this is an important feature for you, you would need to implement it yourself, probably using triggers and either Service Broker to queue and deliver the change notifications, or through polling as with the old-style table-based notifications in .NET.
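For illustration, here is a minimal polling sketch in Python with pyodbc (the same loop is straightforward in C#). The connection string, poll interval, and cache structure are all assumptions; the point is that only a change in the watched column invalidates the cache entry:

import time

import pyodbc

# Hypothetical connection string; adjust for your environment.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=.;DATABASE=Shop;Trusted_Connection=yes"
)
cache = {}       # product_id -> cached Price
last_seen = {}   # product_id -> last Price observed by the poller

def poll_price(product_id, interval_seconds=5):
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    while True:
        cursor.execute(
            "SELECT [Price] FROM [dbo].[Products] WHERE [ID] = ?", product_id)
        row = cursor.fetchone()
        price = row[0] if row else None
        # Invalidate only when the watched column actually changed.
        if last_seen.get(product_id) != price:
            cache.pop(product_id, None)
            last_seen[product_id] = price
        time.sleep(interval_seconds)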
Another possibility is to pull the columns of interest out into a separate table (either an explicit copy, or one maintained with triggers) that you then join back to the original with views or stored procedures. Each row would then contain only the column you're interested in (plus the PK).
Related
I have a use case where DynamoDB is running in production and I need to add a new column, IDUpdatedAt, which will also serve as the sort key for one of the GSIs.
I tried this in a test environment: my application adds new rows with IDUpdatedAt, and it works fine. But what about the existing rows? How do I add the value for those?
New rows will always include IDUpdatedAt, but how will searches be affected for the older rows?
PS: IDUpdatedAt is used as a filter in the application, i.e. a user can search for a specific ID and get results sorted by date. That's why IDUpdatedAt is also part of the GSI (as its sort key).
Please help.
You've got the right idea by adding the field to new items. After all, DynamoDB does not enforce a particular schema outside of the primary key.
This also happens to be a very useful feature, especially when defining a GSI on that attribute: if the attribute exists on an item, the item ends up in the index! For example, imagine modeling an email inbox in DDB where each item represents an email. You could include an attribute 'is_read' and define a GSI on that attribute.
If the 'is_read' attribute exists on the item, it's in the index. Otherwise, it's not. A cool way to use GSIs to implement filtering.
Pretty neat stuff!
However, there is no way to retroactively update all items with a new attribute other than manually updating each item (or in batches). The equivalent in SQL databases is defining a new column. Unfortunately, an analogous operation in DDB does not exist.
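If you do need the attribute on existing items, the usual workaround is a one-off scan-and-update script. A minimal sketch with boto3 (the table name, key name, and the placeholder IDUpdatedAt value are all assumptions; derive the real value however makes sense for your legacy rows):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('MyTable')  # hypothetical table name

scan_kwargs = {}
while True:
    page = table.scan(**scan_kwargs)
    for item in page['Items']:
        if 'IDUpdatedAt' in item:
            continue  # already has the attribute
        try:
            table.update_item(
                Key={'ID': item['ID']},  # assumes 'ID' is the partition key
                UpdateExpression='SET IDUpdatedAt = :v',
                # Placeholder value; a real backfill would derive this per item.
                ExpressionAttributeValues={':v': '1970-01-01T00:00:00Z'},
                # Don't clobber a value the application wrote concurrently.
                ConditionExpression='attribute_not_exists(IDUpdatedAt)',
            )
        except ClientError as err:
            # The application wrote IDUpdatedAt first; nothing to do.
            if err.response['Error']['Code'] != 'ConditionalCheckFailedException':
                raise
    if 'LastEvaluatedKey' not in page:
        break
    scan_kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']

Until an item is backfilled it simply won't appear in the GSI, so queries against the index silently skip the older rows rather than failing.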
I am creating a leave tracker app where I want to store the user ID along with the from date and to date. I am using Amazon's DynamoDB as the database, and the user enters a leave through a custom command.
Eg: apply-leave from-date to-date
I want to avoid duplicate entries in the database. For example, if a user has already applied for a leave between 06-10-2019 and 10-10-2019 and applies for a leave between the same dates again, they should get a message saying the leave already exists, and no new record should be created.
However, a user can apply for multiple leaves and two users can take a leave between the same dates.
I tried using a conditional statement as follows:
table.put_item(
    Item={
        'leave_id': leave_id,
        'user_id': user_id,
        'from_date': from_date,
        'to_date': to_date,
    },
    ConditionExpression='attribute_not_exists(user_id) AND attribute_not_exists(from_date) AND attribute_not_exists(to_date)'
)
where leave_id is the partition key. However, this does not work: a new row is added every time, even when the dates are the same. I have looked through similar questions but haven't been able to work out how to configure this correctly.
Any ideas on how I should go about this, or if there is a different design that I should follow?
If you call your code with a leave_id that doesn't yet exist in the table, the item will always be inserted. If you call it with a leave_id that already exists in your table, you should get the error: An error occurred (ConditionalCheckFailedException) when calling the PutItem operation: The conditional request failed.
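For completeness, a sketch of catching that error with boto3 so the user gets the "already exists" message (note that a condition on put_item is evaluated only against the existing item with the same key, so attribute_not_exists(leave_id) is enough to guard against reusing a leave_id):

from botocore.exceptions import ClientError

try:
    table.put_item(
        Item={
            'leave_id': leave_id,
            'user_id': user_id,
            'from_date': from_date,
            'to_date': to_date,
        },
        ConditionExpression='attribute_not_exists(leave_id)',
    )
except ClientError as err:
    if err.response['Error']['Code'] == 'ConditionalCheckFailedException':
        print('A leave with this id already exists.')
    else:
        raise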
I have two suggestions:
If you don't want to change your table, you can create a secondary index with user_id as the partition key and then query the index for all of the given user's items that have from_date and to_date attributes.
Like this:
from boto3.dynamodb.conditions import Attr, Key

table.query(
    IndexName='user_id-index',
    KeyConditionExpression=Key('user_id').eq(user_id),
    FilterExpression=Attr('from_date').exists() & Attr('to_date').exists()
)
Then you will need to check for overlapping leave requests, etc. (e.g. a leave request that starts before an existing one finishes). After deciding that the leave request is valid, you call put_item.
Another suggestion, and probably a better one, would be to create a composite primary key on your table, with user_id as the partition key and leave_id as the sort key. That way you could query for all leave requests from a particular user without needing a secondary index; a sketch follows.
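A sketch of that second design, with the overlap check done in application code (it assumes dates are stored in a sortable format such as YYYY-MM-DD, and pagination of the query is omitted):

from boto3.dynamodb.conditions import Key

def apply_leave(table, user_id, leave_id, from_date, to_date):
    # With user_id as the partition key, one query returns all of this
    # user's leave requests.
    existing = table.query(
        KeyConditionExpression=Key('user_id').eq(user_id)
    )['Items']
    # Two date ranges overlap exactly when each starts before the other ends.
    for leave in existing:
        if from_date <= leave['to_date'] and leave['from_date'] <= to_date:
            return 'You already have a leave between these dates.'
    table.put_item(Item={
        'user_id': user_id,
        'leave_id': leave_id,
        'from_date': from_date,
        'to_date': to_date,
    })
    return 'Leave applied.'

Keep in mind the read-then-write is not atomic; if two requests from the same user can race, you would need a transaction or a lock around it.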
I have two tables in my app's schema: Event and Game (one-to-many). Games are ordered by a datetime field, but sometimes games are played in parallel (same datetime), and the user should be able to set their relative order.
I added an innerOrder (int) field with a simple idea: it would get an autogenerated value that can be changed on reorder (swapped with the neighboring record). But I can't achieve this with Doctrine: GeneratedValue can't be used twice, nor on a non-identifier field (it just doesn't work that way).
On the next attempt I tried to do it without autogeneration, but then I need some initial value on insert, for example MAX(innerOrder) (better yet, set automatically, of course).
I can't do it in prePersist or similar lifecycle methods, since I don't have access to the repository class there. And I don't want to do it with an additional query in the controller, not only because of the extra code I'd have to add each time (get the max value from the table, set the inner order) but because I'm afraid of conflicts when two users add Games in parallel.
How should I achieve the expected behavior (or maybe I'm on the wrong track entirely)?
There is no need to achieve this behavior with Doctrine; you can manage the value from the aggregate root. I.e. when you attach a Game to the Event, you can set its innerOrder to the maximum of the currently attached games + 1. Conflicts can easily be avoided with some kind of lock on the Event you edit (e.g. fetching it with a Doctrine write lock, or a shared lock or mutex; see symfony/lock).
After that, you can configure the relation to be fetched in the given order using this documentation:
https://www.doctrine-project.org/projects/doctrine-orm/en/2.6/tutorials/ordered-associations.html
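The aggregate-root idea, sketched here in plain Python since the pattern is language-agnostic (class and method names are placeholders; in Doctrine this logic would live in the Event entity's addGame method):

class Game:
    def __init__(self, starts_at):
        self.starts_at = starts_at
        self.inner_order = 0

class Event:
    def __init__(self):
        self.games = []

    def attach_game(self, game):
        # Maximum innerOrder of the currently attached games + 1;
        # the first game of an event gets 0.
        game.inner_order = max(
            (g.inner_order for g in self.games), default=-1) + 1
        self.games.append(game)

    def swap_order(self, a, b):
        # Reordering is just an exchange with the neighbour record.
        a.inner_order, b.inner_order = b.inner_order, a.inner_order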
My two cents: when creating/modifying a game, you can check whether another one already exists at the same time (the default innerOrder being 0, or even the count of games at that time). You can issue a warning when there's a clash, ask for the order, or take the user to a form where the order of the games can be reassigned manually.
I have a very large (millions of rows) SQL table which represents name-value pairs (one column for the name of a property, the other for its value). In my ASP.NET web application I have to populate a control with the distinct values available in the name column. This set of values is usually not bigger than 100; most likely around 20. Running the query
SELECT DISTINCT name FROM nameValueTable
can take a significant time on this large table (even with the proper indexing etc.). I especially don't want to pay this penalty every time I load this web control.
So caching this set of names should be the right answer. My question is how to promptly update the cache when a new name appears in the table. I looked into the SQL Server 2005 Query Notification feature, but the table gets updated frequently, and only very seldom with a genuinely new distinct name. The notifications would flow in all the time, and the web server would probably waste more time handling them than the caching saves.
I would like to find a way to balance the time used to query the data, with the delay until the name set is updated.
Any ideas on how to efficiently manage this cache?
A little normalization might help. Break the property names out into a new table, with an int ID that the original table FKs back to. You can then read the new table to get the complete list, which will be really fast.
Figuring out your pattern of usage will help you come up with the right balance.
How often are new values added? Are new values always unique? Is the table mostly updates? Do deletes occur?
One approach may be a SQL Server insert trigger that checks the lookup table to see whether the new name is already there and, if not, adds it.
Add a unique, increasing sequence MySeq to your table. You may want to try clustering on MySeq instead of your current primary key so that the DB can build a small result set and then sort it.
SELECT DISTINCT name FROM nameValueTable Where MySeq >= ?;
Set ? to the highest MySeq value your cache has already seen.
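A sketch of that incremental refresh in Python (sqlite3 here so it runs as-is against a local file, assuming the table already exists; the query is the same against SQL Server, and the table/column names follow the answer):

import sqlite3

conn = sqlite3.connect('example.db')   # placeholder database
cached_names = set()
last_seq = 0

def refresh_name_cache():
    # Only rows added since the last refresh are scanned, so the query
    # touches a small set instead of the whole table.
    global last_seq
    cur = conn.execute(
        "SELECT name, MySeq FROM nameValueTable WHERE MySeq > ?", (last_seq,))
    for name, seq in cur:
        cached_names.add(name)
        last_seq = max(last_seq, seq)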
You will always have a lag between your cache and the DB so, if this is a problem you need to rethink the flow of the application. You could try making all requests flow through your cache/application if you manage the data:
requests --> cache --> db
If you're not allowed to change the actual structure of this huge table (for example, due to huge numbers of reports relying on it), you could create a holding table of these 20 values and query against that. Then, on the huge table, have a trigger that fires on an INSERT or UPDATE, checks to see if the new NAME value is in the holding table, and if not, adds it.
I don't know the specifics of .NET, but I would pass all the update requests through the cache. Are all the update requests done by your ASP.NET web application? Then you could make a Proxy object for your database and have all the requests directed to it. Taking into consideration that your database only has key-value pairs, it is easy to use a Map as a cache in the Proxy.
Specifically, sketched in Python, the requests would be handled as follows (cache_map and database stand in for the real cache and data store):
cache_map = {}   # the in-memory cache held by the Proxy

# the client invokes cache.get(key)
def get(key):
    if key not in cache_map:
        # cache miss: load from the database and remember the value
        cache_map[key] = database.retrieve(key)
    return cache_map[key]

# the client invokes cache.put(key, value)
def put(key, value):
    cache_map[key] = value
    if write_through:
        database.put(key, value)
Also, in the background you could have an evictor thread which ensures that the cache does not grow too big. In your scenario, where a set of values is frequently accessed, I would use an eviction strategy based on Time To Idle: if an item is idle for more than a set amount of time, it is evicted. This ensures that frequently accessed values remain in the cache. Also, if your cache is not write-through, the evictor needs to write to the database on eviction.
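A sketch of such an evictor, building on the cache above (it assumes get and put also record last_access[key] = time.time() on every hit; the interval and idle threshold are arbitrary):

import threading
import time

TTI_SECONDS = 300    # evict after five idle minutes
last_access = {}     # key -> timestamp of the last get/put

def evict_idle_entries():
    while True:
        now = time.time()
        for key, seen in list(last_access.items()):
            if now - seen > TTI_SECONDS:
                value = cache_map.pop(key, None)
                del last_access[key]
                if not write_through and value is not None:
                    database.put(key, value)   # flush on eviction
        time.sleep(60)

threading.Thread(target=evict_idle_entries, daemon=True).start()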
Hope it helps :)
-- Flaviu Cipcigan
We are using the standard ASP.NET security features, and we need to set the order of the roles, purely for display purposes.
We could just add a sequence number onto the end of the aspnet_roles table, but that feels kind of hacky to me. Also, if the security schema changes in a future version of ASP.NET, we will be in trouble.
Is there a better way to do this that won't make me lose sleep each time a new version is pushed out?
If you don't want to change the existing schema, store the sequence data in another table and use a custom stored procedure to join the two together and return the values. Use the result of this stored procedure to populate the drop-down.
Don't forget to add a "special case" in the stored procedure to place roles that aren't contained in the second table at a "default" position in the returned set, rather than ignoring them, unless that's desired =)
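The same join-with-default logic can also live application-side; a small Python sketch for illustration (the role names, sequence values, and default position are all made up):

role_sequence = {'Admin': 0, 'Editor': 1, 'Viewer': 2}  # from the extra table
DEFAULT_POSITION = 999   # roles missing from the sequence table sort last

def ordered_roles(roles):
    # Sort by the stored position, falling back to the default, then by name.
    return sorted(roles, key=lambda r: (role_sequence.get(r, DEFAULT_POSITION), r))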