I have a Java Spring application where I use DynamoDB, so far without transactions. However, I have recently come across a new use case which requires persisting related objects together. I am new to this topic and have therefore read about DynamoDB transactions, but I cannot find a way to use the API in a proper object-oriented fashion. What am I missing?
For now, when I need to update an object, I proceed as follows, building an UpdateItemRequest:
Map<String, AttributeValueUpdate> updates = new HashMap<>();
// fill updates
Map<String, ExpectedAttributeValue> expected = new HashMap<>();
// fill expectations
UpdateItemRequest request = new UpdateItemRequest()
        .withTableName(TABLE_NAME)
        .withKey(key)
        .withAttributeUpdates(updates)
        .withExpected(expected);
dynamoDBClient.updateItem(request);
However, the documentation for building a transaction recommends the following syntax:
Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
expressionAttributeValues.put(":new_status", new AttributeValue("SOLD"));
expressionAttributeValues.put(":expected_status", new AttributeValue("IN_STOCK"));
Update markItemSold = new Update()
        .withTableName(PRODUCT_TABLE_NAME)
        .withKey(productItemKey)
        .withUpdateExpression("SET ProductStatus = :new_status")
        .withExpressionAttributeValues(expressionAttributeValues)
        .withConditionExpression("ProductStatus = :expected_status")
        .withReturnValuesOnConditionCheckFailure(ReturnValuesOnConditionCheckFailure.ALL_OLD);
If I were to follow the documentation, I would need to build an update expression - but what I persist can be quite complex, and this would require some heavy string manipulation, which is not satisfactory (I would essentially have to write my own expression builder, which would be slow to develop and prone to errors).
Is there a way to obtain this update expression from an UpdateItemRequest? Or any recommended way to build such complex expressions, e.g. as a serialized form of something? Or, even better, some object-oriented way to use transactions, passing a map of update objects rather than one big string?
Thanks.
I would like to get the gyroscope value of a smartphone and send it to another one. I managed to set a value and retrieve it, but the result is very laggy. Is the following method correct?
If not, what can I change?
If yes, is there another way to set a value and retrieve it in realtime in Unity?
// UPDATING THE VALUE
reference = FirebaseDatabase.DefaultInstance.RootReference;
Dictionary<string, object> gyro = new Dictionary<string, object>();
gyro["keyToUpdate"] = valueToUpdate;
reference.Child("parentKey").UpdateChildrenAsync(gyro);

// RETRIEVING THE VALUE
FirebaseDatabase.DefaultInstance
    .GetReference("parentKey")
    .Child("keyToUpdate")
    .GetValueAsync().ContinueWith(task => {
        if (task.IsFaulted) {
            Debug.Log("error");
        }
        else if (task.IsCompleted) {
            DataSnapshot snapshot = task.Result;
            float valueUpdated = float.Parse(snapshot.Value.ToString());
            Debug.Log(valueUpdated);
        }
    });
Firebase is fundamentally slower than you think it is. What you are seeing is within its normal performance envelope.
With any asynchronous call, you can never be sure how quickly or slowly you will receive a response. Keep in mind that every Firebase request is routed through a layered system that handles things like authentication and distributed data.
If you continue to use Firebase, you'll need to make sure your code and UI are set up to tolerate possibly long delays which are out of your control. Or you could spend lots of time building your own infrastructure, as DoctorPangloss mentioned.
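If you do stay with Firebase, one thing worth trying is subscribing to changes instead of issuing a one-shot GetValueAsync for every read, so the SDK pushes updates to you. A minimal sketch, assuming the same "parentKey"/"keyToUpdate" layout as in the question (the exact event surface depends on your Firebase Unity SDK version):
// Subscribe once; the handler runs whenever the value changes,
// which removes one request/response round trip per read.
FirebaseDatabase.DefaultInstance
    .GetReference("parentKey")
    .Child("keyToUpdate")
    .ValueChanged += (sender, args) => {
        if (args.DatabaseError != null) {
            Debug.Log(args.DatabaseError.Message);
            return;
        }
        float valueUpdated = float.Parse(args.Snapshot.Value.ToString());
        Debug.Log(valueUpdated);
    };
This won't make the network itself faster, but it removes the polling latency on the receiving side.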
I'm trying to implement a simple DDD/CQRS architecture, without event sourcing for now.
Currently I need to write some code for adding a notification to a document entity (a document can have multiple notifications).
I've already created a NotificationAddCommand, an ICommandService and an IRepository.
Before inserting a new notification through IRepository, I have to query the current user_id from the db using the NotificationAddCommand.User_name property.
I'm not sure how to do this correctly, because I could either:
Use IQuery from the read flow.
Pass user_name to the domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??
        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;
            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?
Reusing your IQuery somewhat defeats the purpose of CQRS. Your read side is supposed to be optimized for pulling data for display/query purposes, meaning it can be denormalized, distributed, etc. in any way you deem necessary, without being restricted by - or having implications for - the command side. A key example is that the read side might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes.
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(IRepository<Notification, long> notifsRepo,
                               IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
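For example, a minimal sketch of such an implementation (the connection handling and the table/column names are placeholders for whatever your write side actually uses):
public class SqlUserIdResolver : IUserIdResolver
{
    private readonly IDbConnection _connection; // assumed open, managed elsewhere

    public SqlUserIdResolver(IDbConnection connection)
    {
        _connection = connection;
    }

    public string ByName(string username)
    {
        // A simple keyed lookup against the write-side store.
        using (var command = _connection.CreateCommand())
        {
            command.CommandText = "SELECT Id FROM Users WHERE UserName = @name";
            var parameter = command.CreateParameter();
            parameter.ParameterName = "@name";
            parameter.Value = username;
            command.Parameters.Add(parameter);
            return command.ExecuteScalar()?.ToString();
        }
    }
}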
Of course, if both this and the query side use the same low-level data access implementation (e.g. an immediately-consistent repository), that's fine. What matters is that, if you later need to swap out where your read side gets its data (for example, to facilitate a slow offline process), your read and write sides are sufficiently separated that you can do so without having to untangle the reads from the writes.
Ultimately the most important thing is to know why you are making the architectural decisions you're making in your scenario - then you will find it much easier to make these sorts of decisions one way or another.
In a project I'm working on I have similar issues. I see three options to solve this problem:
1) What I did was create a UserCommandRepository that has a query option, and inject that repository into the service (see the sketch after this list).
Since the few queries I did need were so simple (just returning single values), it seemed like a fine tradeoff in my case.
2) Another way of handling it is to force the caller to raise the command with the user_id already resolved, letting them do the querying.
3) A third option is to ask yourself why you need a user_id at all. If it's to build some relations when querying the data, you could also handle this at query time (or when propagating your write DB to your read DB).
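For illustration, option 1 might look something like this (just a sketch - the member names are made up, not from the original post):
// A command-side repository that also exposes the few simple,
// single-value lookups the write flow needs.
public interface IUserCommandRepository
{
    long GetUserIdByName(string userName); // the "query option"
    void Add(User user);
}
The command service then takes an IUserCommandRepository for the lookup instead of reaching into the read side.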
I have a primary node in my database called 'questions'. When I create a ref to that node and bring it into my project as a $asObject(), I can modify the individual questions and $save() the collection without any problems. However, as soon as I try to limit the object by priority, the $save() deletes everything off of the object!
This works fine:
db.questions = $firebase(fb.questions).$asObject();
// later :
db.questions.$save();
// db.questions is an object with many 'questions', which I can edit and resave as I please
but as soon as I switch my code to this:
db.questions = $firebase(fb.questions.startAt(auth.user.id).endAt(auth.user.id)).$asObject();
// later :
db.questions.$save();
// db.questions is an empty firebase object without any 'questions!'
Is there some limitation to limited objects (pun not intended) and their ability to be changed and saved? Saving actually writes the question updates to the database, but somehow nukes the local $firebase object...
First line of synchronized arrays ($asArray) documentation:
Synchronized arrays should be used for any list of objects that will be sorted, iterated, and which have unique ids.
First line of synchronized objects ($asObject) documentation:
Objects are useful for storing key/value pairs, and singular records that are not used as a collection.
As demonstrated, if you are going to work with a collection and employ limit, it would behoove you to use a tool designed for collections (i.e. $asArray).
If you were to recreate the behavior of $save using the Firebase SDK, it would look like this:
var ref = new Firebase(URL).limit(10);
// ref.set(data); // throws an error!
ref.ref().set(data); // replaces the entire path; same as $save
Thus, the behavior here exactly matches the SDK. You cannot, technically, call set() on a query instance, and it wouldn't make sense if you could. What does limit(10) mean to a JSON object? If you call set, which 10 unordered keys should be set? There is no meaningful answer; limit() only makes sense for a collection of data, not for a list of key/value pairs.
Hope that helps.
I am using ASP.NET Identity 2.0 with a user id of an integer. Performing a password update is an incredibly expensive database operation, with 2 (unneeded) queries both averaging 128,407 db time units, or a cost of about 7 in the query plan, based on the amount of data I have.
Code I am calling (async and sync behave the same):
var result = await UserManager.ChangePasswordAsync(userId, oldPassword, newPassword);
// or
var result = UserManager.ChangePassword(userId, oldPassword, newPassword);
In the database this causes two large SQL queries whose guts contain:
AspNetUserRoles ... WHERE ((UPPER([Extent1].[Email])) = (UPPER(@p__linq__0))) ...
query 2:
AspNetUserRoles ... WHERE ((UPPER([Extent1].[UserName])) = (UPPER(@p__linq__0))) ...
From my perspective:
There is no reason to issue this SQL at all - a lookup by the int ID is fast, and the SQL being run is looking up role data.
Using UPPER is probably what makes it slow, and if there is no better solution I can add a computed index (see "System.Web.Providers.DefaultMembershipProvider having performance issues/deadlocks").
At a high level my question is - is there a work around for this, or can someone from the Identity team fix the code (if it is indeed broken).
Update
The same behavior can be observed for the following calls (and probably many others)
UserManager.ResetPasswordAsync
UserManager.CreateAsync
Well you can certainly do it yourself instead of calling that method.
Imagine you have, in scope, a UserManager<ApplicationUser> called userManager and a DbContext called context.
// Single isn't awaitable in EF6, so use SingleAsync (from System.Data.Entity)
var user = await context.Users.SingleAsync(u => u.Id == knownId);
// You can skip this if you don't care about checking the old password...
if (userManager.PasswordHasher.VerifyHashedPassword(user.PasswordHash, "myOldPassword") == PasswordVerificationResult.Failed) { return; }
// IdentityUser stores the hash in PasswordHash
user.PasswordHash = userManager.PasswordHasher.HashPassword("myNewPassword");
await context.SaveChangesAsync();
If that's still not optimized enough for you, DbContext lets you execute raw SQL (via Database.ExecuteSqlCommand), so you could just use the password hasher to get a hash, then issue a single UPDATE statement.
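A minimal sketch of that approach, assuming the default AspNetUsers schema (adjust the table and column names to your own, and note that Identity normally also rotates the SecurityStamp when a password changes):
// Hash the new password with the same hasher Identity uses,
// then write it in one statement keyed on the int ID.
var newHash = userManager.PasswordHasher.HashPassword("myNewPassword");
await context.Database.ExecuteSqlCommandAsync(
    "UPDATE dbo.AspNetUsers SET PasswordHash = @p0, SecurityStamp = @p1 WHERE Id = @p2",
    newHash, Guid.NewGuid().ToString(), knownId);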
I wouldn't do either of these things unless this is really a bottleneck or you have some other good reason, though. I'm doing something similar in my application because, at least in ASP.NET Identity v1, there's no method to just change the password without checking the old one.
I believe the answer provided by @emodendroket correctly and efficiently answers my original question. However, due to the implications of needing to modify numerous parts of the API in the short term, I am going with a simpler (but not as good) solution:
Add computed, persisted columns for email and username, and index them:
ALTER TABLE dbo.aspnetusers ADD UpperFieldEmail AS UPPER(email) PERSISTED
CREATE NONCLUSTERED INDEX IX_aspnetusers_UpperFieldEmail ON dbo.aspnetusers(UpperFieldEmail)
ALTER TABLE dbo.aspnetusers ADD UpperFieldUsername AS UPPER(username) PERSISTED
CREATE NONCLUSTERED INDEX IX_aspnetusers_UpperFieldUsername ON dbo.aspnetusers(UpperFieldUsername)
Is there a way to determine the arguments to a workflow prior to executing it?
I've developed an application that rehosts the designer, so end users can develop their own workflows. In doing this, a user is able to add their own arguments to the workflow.
I'm looking for a way to inspect the workflow prior to execution, and try to resolve the arguments. I've looked at the WorkflowInspectionServices class, but I can't seem to ask for a particular type of item from it.
Ideally, I'd like to construct a workflow from metadata stored in the database using something like:
var workflow = ActivityXamlServices.Load(new StringReader(xamlText));
var metadata = SomeUnknownMagicClass.Inspect(workflow);
var inputs = new Dictionary<string, object>();
foreach (var argument in metadata.Arguments)
{
    inputs.Add(argument.Name, MagicArgumentResolver.Resolve(argument.Name));
}
WorkflowInvoker.Invoke(workflow, inputs);
I might be missing something, but WorkflowInspectionServices doesn't seem to do this. It has a method called CacheMetadata that sounds promising when you read the MSDN docs, but it basically turns up nothing.
Thanks for any help.
I guess that when you talk about metadata stored in the database you're referring to the XAML from the designer.
You can load that XAML as a DynamicActivity like this:
DynamicActivity dynActivity;
using (var reader = new StringReader(xamlString))
{
    dynActivity = ActivityXamlServices.Load(reader) as DynamicActivity;
}
Then you have access to all its arguments through DynamicActivity.Properties.
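To connect that back to the pseudocode in the question, a rough sketch (DynamicActivityProperty exposes Name and Type, and workflow arguments surface as InArgument<T>/OutArgument<T> properties; ResolveArgument here stands in for your own lookup):
// Build the inputs dictionary from the workflow's declared in-arguments.
var inputs = new Dictionary<string, object>();
foreach (DynamicActivityProperty property in dynActivity.Properties)
{
    // Only in-bound arguments can be supplied to WorkflowInvoker.
    if (typeof(InArgument).IsAssignableFrom(property.Type))
    {
        inputs.Add(property.Name, ResolveArgument(property.Name));
    }
}
WorkflowInvoker.Invoke(dynActivity, inputs);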