I have a database with exercises, and each exercise has a userId field that references the user the exercise belongs to. I want to retrieve every exercise that belongs to the current user, so I have assigned each exercise a priority corresponding to its userId. I can then retrieve every exercise for a specific user like this:
collection: function(user_id) {
  return angularFireCollection(new Firebase(FBURL + '/exercises').startAt(user_id).endAt(user_id));
},
Which is called from a controller like this:
$scope.findExercises = function() {
  $scope.exercises = Exercises.collection($scope.auth.id);
};
The problem is that when I update an exercise, the priority seems to be removed, since the exercise no longer turns up in the collection. The exercise is updated via the realtime binding angularFire(). How can I make updates while maintaining the priority, or reset the priority afterwards?
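One possible way to restore the priority after such an update, sketched with the plain Firebase JavaScript API of that era (exerciseId and updatedExercise are hypothetical names, not taken from the code above):

// Hypothetical sketch: re-apply the priority after the bound update has run,
// so the startAt(user_id)/endAt(user_id) query keeps matching the exercise.
var exerciseRef = new Firebase(FBURL + '/exercises/' + exerciseId);
exerciseRef.setPriority(user_id);

// Alternatively, write the value and the priority together in a single call:
// exerciseRef.setWithPriority(updatedExercise, user_id);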
Is it possible to include a user-friendly ID field in Cosmos DB documents? It doesn't need to override the default id field that is generated when adding a document; it can be a custom field that is simple for an end user to know and search for.
Example document; the ref field is what I want to generate as a simple, human-readable identifier:
{
  "id": "57275754475457-5445444-44420478",
  "ref": "45H7GI",
  "userId": "48412",
  "whenCreated": "D2021-11-09T21:56:31.630",
  "tenantId": "5566HH"
}
I'm looking at building a ticketing system and would like a simple ID that can be sent to a user, which they can then reference when updating or searching for a ticket.
Any help with this would be appreciated.
For your own purposes, you can choose to either use id (which is guaranteed to be unique within a partition) or your own property (such as ref as you defined in your example). For any property other than id, you'd need to add a unique-key constraint when creating the container (and at that point, ref would be unique within any partition, just like id).
It's really your choice whether you store your custom IDs in id or ref. Just know that, if you ever want to do a direct read (instead of a query), you can only do a direct read against id (together with the partition key), not against any other property.
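As a rough sketch (not part of the original answer), here is how the unique-key constraint and a point read might look with the JavaScript @azure/cosmos SDK; the database name, container name, and partition key are assumptions for illustration:

const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({ endpoint: process.env.COSMOS_ENDPOINT, key: process.env.COSMOS_KEY });

async function demo() {
  const { database } = await client.databases.createIfNotExists({ id: "ticketing" });

  // Enforce per-partition uniqueness of /ref at container-creation time.
  const { container } = await database.containers.createIfNotExists({
    id: "tickets",
    partitionKey: "/tenantId",
    uniqueKeyPolicy: { uniqueKeys: [{ paths: ["/ref"] }] }
  });

  // A direct (point) read is only possible by id plus the partition key value.
  const { resource: byId } = await container.item("57275754475457-5445444-44420478", "5566HH").read();

  // Looking a document up by ref requires a query instead.
  const { resources: byRef } = await container.items
    .query({
      query: "SELECT * FROM c WHERE c.ref = @ref",
      parameters: [{ name: "@ref", value: "45H7GI" }]
    })
    .fetchAll();
}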
I have a DynamoDB table that I declared using Dynamoose as below:
const schema = new dynamoose.Schema({
  "email": String,
  "name": String,
  "vehicleMotor": {
    "type": Number,
    "default": 0
  },
  "vehicleMotorId": String,
  "vehicleMotorImage1File": String,
  "vehicleMotorImage2File": String
}, {
  "saveUnknown": true,
  "timestamps": true
});
From my understanding, when I have "timestamps": true declared, it should add both createdAt and updatedAt fields.
So when I run my code that looks like this
if (isNew) {
  const newSeller = new Seller({
    "email": email,
    "name": name
  });
  var saveResult = await newSeller.save();
} else {
  var updateResult = await Seller.update({ "email": email, sellerType: 1 }, {
    "name": name
  });
}
and when I check the inserted/updated data inside the Amazon DynamoDB Management Console, there's no createdAt, only updatedAt. Shouldn't I also have createdAt? If not, how can I make sure createdAt is always there?
Based on the comment from the original poster, it looks like this is only occurring for the update call.
There isn't quite enough information in the original question for me to give a concrete answer as to what I believe is the best solution. So I'm going to make a few assumptions, and give a lot of high level detail about how Dynamoose handles this situation.
First, a little bit of the behind-the-scenes detail that will help make my answer clearer. From Dynamoose's perspective, it has no idea whether the document/item already exists in the database or not. That leads to situations where createdAt is difficult to get 100% accurate, and you are running into one of them. For the update call, Dynamoose assumes that the document already exists, and therefore doesn't set the createdAt timestamp. This makes sense, because createdAt doesn't really match an update call. However, DynamoDB and Dynamoose technically allow using update to create a new document/item. Dynamoose has no way of knowing which it is, so in this context it assumes that update means you are not creating a new document.
As for a possible solution: you have an isNew variable, and I'm curious how you are defining it. One option would be to check the table using a get call to see if the document already exists, as in the sketch below. If you use that as your isNew variable, it should work fine, because the item will be saved if it doesn't exist, and if it already exists it should have the createdAt attribute already. The major downside is that you always have to do a read operation before writing, which increases the latency of the application and slows things down. But it will achieve what you want.
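A minimal sketch of that read-before-write approach, assuming (based on the update call above) that the table's key is the combination of email and sellerType:

// Key shape { email, sellerType } is an assumption taken from the update call in the question.
const existing = await Seller.get({ "email": email, sellerType: 1 });
const isNew = !existing; // Model.get resolves to undefined when no item is found

if (isNew) {
  // save() writes both createdAt and updatedAt
  const saveResult = await new Seller({ "email": email, "name": name, sellerType: 1 }).save();
} else {
  // update() only touches updatedAt
  const updateResult = await Seller.update({ "email": email, sellerType: 1 }, { "name": name });
}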
Now, in the event you have documents in your table that don't have the createdAt timestamp (e.g. you created them outside of Dynamoose, or before adding the timestamps option), the above solution won't work. This is because even after checking that the item exists, the update method will be run, which Dynamoose treats as an update rather than a creation. In this case, any solution really depends on what your application wants to do. The item already exists in the table, so it's impossible to know what the true createdAt timestamp was (unless you keep logs and the like). You could run a one-time operation to add the current timestamp to the createdAt field of each document that doesn't have it (but again that won't be truly accurate). Or, of course, you could just ignore it and not always rely on that field.
To summarize, the timestamps feature in Dynamoose is truly a client-side feature. Dynamoose has limited insight into the state of the data, and DynamoDB doesn't provide this functionality built in. This means Dynamoose has to make assumptions about how to handle these situations. However, if you follow Dynamoose's patterns for timestamps (e.g. update won't add the createdAt timestamp and should only be used for updating existing items, all items are created through Dynamoose, etc.), it will be totally accurate and you won't run into any of these pitfalls.
If you have any creative solutions for how to improve Dynamoose's knowledge here, feel free to create a pull request on the repo or create an issue to discuss your ideas.
Using the Firebase Realtime Database, I want to move points from one user to another. To keep conflicts away (a user may receive coins from multiple other users at the same time), I have to use transactions.
My data structure:
{
  "uid-1": {
    "points": 30
  },
  "uid-2": {
    "points": 60
  }
}
So I need two transactions: one subtracts from uid-1 and the second increases uid-2.
But I'm afraid that one transaction might succeed while the other one fails. Is there any solution to revert the operation, or to update both at the same time?
There is no secure way to implement conditionality between multiple transactions.
If both operations depend on each other, they should be run as a single transaction. That means you take an optimistic lock on the entire users node, but with your current data structure and requirements that is what's required.
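As a rough illustration (the keys and the amount of 20 points are just placeholders), a single transaction on the shared parent node could look like this:

var usersRef = firebase.database().ref("users");

usersRef.transaction(function(users) {
  // On the first (local) run users may be null; returning it unchanged
  // lets the SDK retry with the actual server value.
  if (users === null) return users;

  users["uid-1"].points -= 20;
  users["uid-2"].points += 20;
  return users;
});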
An alternative is to not update the balance, but just keep a list of transactions. In that case you can ensure both the addition for the first user and subtraction for the second user are written atomically by using a multi-location update. In JavaScript this would look something like:
var ref = firebase.database().ref("users");
var updates = {};
let transactionID = ref.push().key;
updates["uid1/transactions/"+transactionID] = 20;
updates["uid2/transactions/"+transactionID] = -20;
ref.update(updates);
The above write operation will either succeed completely, or fail completely. This ensures your database is always correct.
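To obtain a user's current balance under this model, you would sum their transaction amounts when reading, for example (path names follow the snippet above):

firebase.database().ref("users/uid1/transactions").once("value").then(function(snapshot) {
  var balance = 0;
  snapshot.forEach(function(child) {
    balance += child.val();
  });
  console.log("Current balance:", balance);
});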
I've been working through Adrian Hall's book on integrating Xamarin and Azure Mobile Apps. In Chapter 3 he adds a User table to facilitate "Friends" data. In his implementation, the client authenticates the user and then makes a request to a custom endpoint that either adds the user to the database or updates their record. Here's an abridged version of the method in the custom controller:
[HttpGet]
public async Task<IHttpActionResult> Get()
{
    // ...Obtain user info
    User user = new User()
    {
        Id = sid,
        Name = name,
        EmailAddress = email
    };
    dbContext.Users.AddOrUpdate(user);
    dbContext.SaveChanges();
    // ...
}
The trouble is, the 2nd time the same user logs in to the app, this code throws an exception saying
Modifying a column with the 'Identity' pattern is not supported. Column: 'CreatedAt'. Table: 'CodeFirstDatabaseSchema.User'.
This StackOverflow Answer explains that this is because the AddOrUpdate() method nulls out any properties not set on the entity, including CreatedAt, which is an identity column. This leaves me with a couple of questions:
What is the right way to Add or Update an entity if the CreatedAt value cannot be edited? The same SO thread suggests a helper method to look up the existing CreatedAt and apply it to the entity before trying to save it. This seems cumbersome.
Why is this implemented as a custom auth controller that returns a new Auth token when it only needs to add or update a User in a database? Why not use a normal entity controller to add/update the new user and allow the client to continue using the Auth token it already has?
For the CustomAuthController.cs code, see here.
If you focus on what you are trying to do from a SQL perspective, it would be something like:
update dbo.some_table set some_primary_key = new_primary_key where some_primary_key = ...
which would result in a "cannot update identity column 'some_primary_key'" error, which makes sense.
But if you do have a reason to supply explicit values for such a column yourself, you can still do it on insert by turning identity insert on:
SET IDENTITY_INSERT dbo.some_table ON;
Then, after you have made the insert, you turn it off again using the same syntax with OFF.
But this is a rather exceptional scenario.
Usually there is no need to manually insert PKs.
Now going back to EF.
The error you are getting is telling you that you cannot modify a column that uses the Identity pattern; in your case that is the CreatedAt column, and the same would apply to the PK or to other such columns if you have a composite key.
So, first time round a new user gets created. Second time round, because you are using AddOrUpdate, the user gets updated, but because identity-column values are being written as part of that update, it breaks.
Solution?
AddOrUpdate was only meant to help with seeding data during migrations.
Given its destructive nature, I would not recommend using AddOrUpdate anywhere near production.
You can replace AddOrUpdate with two explicit operations: fetch the user, then if it does not exist create a new one, or if it does exist update it.
I'm creating an order form and have a schema defined for an Order (with certain required fields such as address, customer info, items selected and their quantities, etc.).
a. User visits site.
b. A unique ID is generated for their session as well as a timestamp.
var userSession = {
  _id: createId(),
  timestamp: new Date(),
};
var sessionId = userSession._id;
c. The userSession is placed in local storage.
storeInLocalStorage('blahblah', userSession);
d. An Order object is created with the sessionId as the only field so far.
var newOrder = {
  sessionId: sessionId,
};
e. Obviously at this point the Order object won't validate according to the schema so I can't store it in Mongo. BUT I still want to store it in Mongo so I can later retrieve incomplete orders, or orders in progress, using the sessionID generated on the user's initial visit.
This won't work because it fails validation:
Orders.insert(newOrder);
f. When a user revisits the site I want to be able to get the incomplete order from Mongo and resume:
var sessionId = getLocalStorage('blahblah')._id;
var incompleteOrder = Orders.findOne({sessionId: sessionId});
So I'm not sure how to go about doing this while accomplishing these points.
I want full SimpleSchema validation on the Orders collection when the user is entering items in the forms and when the user intends to submit a full, complete order.
I want to disable SimpleSchema validation on the Orders collection and still allow storing into the DB, so that partial orders can be stored for resumption at a later time.
I know I can make a field conditionally required, but that would mean 50+ fields being conditionally required just for this scenario, which seems super cumbersome.
It sounds like you want to have your cake, and eat it too!
I think the best approach here would be to keep your schema and validation on the Orders collection, but store incomplete orders elsewhere.
You could store them in another collection (with a more relaxed schema) if you want them on the server (possibly to enable resuming on another device for a logged-in user), or more simply in Local Storage, and still enable the resume-previous-order behaviour you want.
Only write to the Orders collection when the order is complete (and passes validation).
Here's a variation on #JeremyK's answer: add an inProgress key to your order of type [Object], with no deeper validation. Keep your in-progress order data in there until the order is final, then copy/move all the relevant data into the permanent keys and remove the inProgress key. This does require that you make all the real keys optional, of course. The advantage is that the order would keep its primary key throughout the life cycle (a possible schema shape is sketched below).
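A minimal sketch of what that could look like with the simpl-schema package, using a blackbox Object field for the unvalidated in-progress data (the blackbox approach and the field names are my assumptions, not part of the original answer):

import SimpleSchema from 'simpl-schema';

const OrderSchema = new SimpleSchema({
  sessionId: { type: String },

  // Unvalidated scratch space for the in-progress order.
  inProgress: { type: Object, blackbox: true, optional: true },

  // The "real" keys have to stay optional until the order is finalised.
  address: { type: String, optional: true },
  customerInfo: { type: Object, blackbox: true, optional: true },
  // ...remaining order fields, all optional...
});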
I think this particular case has been solved by now; but just in case, you can skip SimpleSchema validation by accessing the native MongoDB API via Collection#rawCollection():
Orders.rawCollection().insert(newOrder);
While this question is very old, in the meantime there is a better solution. You probably use SimpleSchema together with Collection2. Collection2 has the ability to attach multiple schemas to the same collection based on a selector, and then validate against the correct schema based on that selector.
https://github.com/Meteor-Community-Packages/meteor-collection2#attaching-multiple-schemas-to-the-same-collection
For example, you could have a selector {state: 'finished'} and only apply the full schema to those documents, while having another selector, e.g. {state: 'in-progress'}, for unfinished orders with a schema whose fields are optional.
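A rough sketch of what attaching the two schemas could look like (the schema variable names are placeholders):

import { Mongo } from 'meteor/mongo';

const Orders = new Mongo.Collection('orders');

// Relaxed schema (mostly optional fields) for orders still being filled in.
Orders.attachSchema(draftOrderSchema, { selector: { state: 'in-progress' } });

// Full schema with all required fields for completed orders.
Orders.attachSchema(fullOrderSchema, { selector: { state: 'finished' } });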