Storing data in Riak with automatic ID? - riak

If I make an HTTP POST to Riak, e.g. to http://localhost:8098/riak/mybucket, along with the JSON-encoded data {name: "John Doe"}, the object is saved as expected.
However, the object is assigned an ID generated automatically by Riak, something like WAqRNgxZl10FK0F3FLuorByNHgN.
Is it possible to make Riak return this ID in the response to the HTTP POST?

According to the Riak documentation, it returns the new key/ID in the Location header.
In the output, the Location header will give you the key for that object. To view the newly created object, go to "http://127.0.0.1:8091/Location" in your browser.
You can see the docs here - scroll down to "Store a new object and assign random key".
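For example, a quick check with curl might look roughly like this (bucket name and default port taken from the question); the generated key is the last path segment of the Location header in the 201 response:

curl -i -H "Content-Type: application/json" \
     -d '{"name": "John Doe"}' \
     http://localhost:8098/riak/mybucket

HTTP/1.1 201 Created
Location: /riak/mybucket/WAqRNgxZl10FK0F3FLuorByNHgN
...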

Related

Audit.EntityFramework.Core 16.2.1 not tracking Foreign Object changes

In a POST request we send a payload with its foreign object data, and the audit GetEntityFrameworkEvent() shows the correct values.
But when we make a PUT request, Audit.EntityFramework.Core 16.2.1 does not track foreign object changes, i.e. the Changes array has the same values in the New and Old fields of every entry.
That could be because of the nature of the update operation.
If you don't explicitly retrieve the object before the update, there is no way for the EF ChangeTracker to know the previous values.
Please check https://github.com/thepirat000/Audit.NET/issues/53
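A rough sketch of that approach, assuming an ASP.NET Core controller: load the entity (and its foreign object) first and apply the incoming changes to the tracked instance, so the ChangeTracker has the original values. The PeopleController, AppDbContext, PersonDto, Person and Address names here are made up for illustration.

using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class PeopleController : ControllerBase
{
    private readonly AppDbContext _context;   // hypothetical DbContext with a Persons DbSet
    public PeopleController(AppDbContext context) => _context = context;

    [HttpPut("api/people/{id}")]
    public async Task<IActionResult> Put(int id, PersonDto dto)
    {
        // Load the current row (including the foreign object) so EF's ChangeTracker
        // knows the original values; the audit event can then report real Old/New pairs.
        var person = await _context.Persons
            .Include(p => p.Address)
            .FirstOrDefaultAsync(p => p.Id == id);
        if (person == null) return NotFound();

        person.Name = dto.Name;               // apply incoming changes to the tracked entity
        person.Address.City = dto.City;

        await _context.SaveChangesAsync();    // Audit.EntityFramework intercepts this call
        return NoContent();
    }
}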

Is an HTTP request considered idempotent if it changes a record's last modified time?

Suppose that I have a table called persons and that a request to change any information about a person also updates that record's last_modified column. Would such a request still be considered idempotent? What I'm trying to find out is if auxiliary fields can be exempted from the criteria of idempotence.
If any information is changed in the database by a request (a POST request obviously; you would not alter a person record on a GET request) then it's not idempotent, by definition. Unless you only store stats (like logs).
Here it's not the last_modified column that is important, it's the "change any information about a person" part.
A GET request is idempotent: you can take any URI and put it in an <IMG> in a web page, and browsers will load it without asking; it must not alter anything in the database or in the session (destroying a session, for example, is not idempotent). An idempotent request can be prefetched, can run at any priority (no need to care about the order of several idempotent queries, since none of them can impact the others), etc.

Marketo REST API - what is "dedupeFields" for custom objects?

When it comes to creating/updating custom objects, can I use both dedupeFields or lookupField when pushing the data to Marketo?
What is the difference between the two?
I'm not sure what you mean by lookupField, as there is no such input field described in the API documentation of the Sync Custom Objects endpoint. (That is the endpoint to create or update custom objects.)
On the other hand, you do not need such a standalone lookup field, as the input array is where you provide the objects you want to create or update, with all their important values. Have a look at the sample payload in the docs.
When input is used together with the optional dedupeBy and action fields, you have full control over which object you want to create or update.
Also, the endpoint expects the name of the dedupe field under the dedupeBy key, as opposed to dedupeFields. So the name is singular: you can provide a single field name, and it does what you would expect: if the value in that field for a given record is not unique, an error is returned for that individual record.

How to entirely skip validation in simple schema and allow incomplete documents to be stored?

I'm creating an order form and have a schema defined for an Order (certain required fields such as address, customer info, items selected and their quantities, etc.).
a. User visits site.
b. A unique ID is generated for their session as well as a timestamp.
var userSession = {
  _id: createId(),
  timestamp: new Date(),
};
var sessionId = userSession._id;
c. The userSession is placed in local storage.
storeInLocalStorage('blahblah', userSession);
d. An Order object is created with the sessionId as the only field so far.
var newOrder = {
  sessionId: sessionId,
};
e. Obviously at this point the Order object won't validate according to the schema so I can't store it in Mongo. BUT I still want to store it in Mongo so I can later retrieve incomplete orders, or orders in progress, using the sessionID generated on the user's initial visit.
This won't work because it fails validation:
Orders.insert(newOrder);
f. When a user revisits the site I want to be able to get the incomplete order from Mongo and resume:
var sessionId = getLocalStorage('blahblah')._id;
var incompleteOrder = Orders.findOne({ sessionId: sessionId });
So I'm not sure how to go about doing this while accomplishing these points:
I want full simpleschema validation on the Orders collection when the user is entering in items on the forms and when the user is intending to submit a full, complete order.
I want to disable simpleschema validation on the Orders collection and still allow storing into the DB so that partial orders can be stored for resumption at a later time.
I can make a field conditionally required using this here but that would mean 50+ fields would be conditionally required just for this scenario and that seems super cumbersome.
It sounds like you want to have your cake, and eat it too!
I think the best approach here would be keep your schema and validation on the Orders collection, but store incomplete orders elsewhere.
You could store them in another collection (with a more relaxed schema) if you want them on the server (possibly to enable resuming on another device for a logged-in user), or more simply in Local Storage, and still enable the resume-previous-order behaviour you want.
Only write to the Orders collection when the order is complete (and passes validation).
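A minimal sketch of that idea, assuming Meteor with aldeed:simple-schema/collection2 (the IncompleteOrders collection and field names are hypothetical):

// Relaxed collection for drafts: only the session id is required,
// the rest of the order lives in an unvalidated blackbox object.
IncompleteOrders = new Mongo.Collection('incompleteOrders');
IncompleteOrders.attachSchema(new SimpleSchema({
  sessionId: { type: String },
  draft: { type: Object, blackbox: true, optional: true }
}));

// Save/update the in-progress order as the user fills in the form.
IncompleteOrders.upsert({ sessionId: sessionId }, { $set: { draft: newOrder } });

// On final submit, write to Orders (full schema validation runs here)
// and clean up the draft.
Orders.insert(completedOrder);
IncompleteOrders.remove({ sessionId: sessionId });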
Here's a variation on #JeremyK's answer: add an inProgress key to your order of type [Object]. This object would have no deeper validation. Keep your in progress order data in there until the order is final then copy/move all the relevant data into the permanent keys and remove the inProgress key. This would require that you make all the real keys optional of course. The advantage is that the object would maintain its primary key throughout the life cycle.
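In SimpleSchema terms that inProgress key could be declared as a blackbox object, so nothing nested inside it is validated (a sketch, with the other keys elided):

OrderSchema = new SimpleSchema({
  // ...all the real order keys, each marked optional: true...
  inProgress: {
    type: Object,
    blackbox: true,   // skip validation of anything inside
    optional: true
  }
});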
I think this particular case has been solved, but just in case: you can skip SimpleSchema validation by accessing the native MongoDB API via Collection#rawCollection():
Orders.rawCollection().insert(newOrder);
While this question is very old, in the meantime there is a better solution. You probably use SimpleSchema together with collection2. Collection2 has the ability to attach multiple schemas based on a selector and then validate against the correct schema.
https://github.com/Meteor-Community-Packages/meteor-collection2#attaching-multiple-schemas-to-the-same-collection
E.g. you could have a selector {state: 'finished'} and only apply the full schema to those documents, while having another selector, e.g. {state: 'in-progress'}, for unfinished orders with a schema whose fields are optional.
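A rough sketch of that setup, assuming the multiple-schema API linked above (the schema variables and the state field are illustrative):

// Strict schema for finished orders, relaxed schema for drafts.
Orders.attachSchema(fullOrderSchema, { selector: { state: 'finished' } });
Orders.attachSchema(draftOrderSchema, { selector: { state: 'in-progress' } });

// The selector field in the document decides which schema is applied.
Orders.insert({ state: 'in-progress', sessionId: sessionId });
Orders.insert({ state: 'finished', sessionId: sessionId /* ...full order fields... */ });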

Grails: DELETE in autobinding 1-N object arrays

In Grails you can have a 1-N object relationship and you can manage the many side on the same page as the one side, like this:
Author has many Books
Client side:
input name=authorName
input name=books[0].bookName, hidden name=books[0].id
input name=books[1].bookName, hidden name=books[1].id
Server side:
new Author(params).save()
This will save (or update, if the id is not null) both the Author and the Book collection, which is fantastic!
But is there a way to also issue a DELETE for a book if, for example, books[1] no longer exists or its id has been set to null?
The best thing is to handle this on the client: don't send the empty records, and re-index the records you do want to save.
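A minimal client-side sketch of that idea in plain JavaScript (the form id, row class and field names are assumptions about your markup):

// Before submitting, drop rows whose bookName is empty and re-index the rest
// so the server receives a contiguous books[0..n] list.
document.querySelector('#authorForm').addEventListener('submit', function () {
  var index = 0;
  this.querySelectorAll('.book-row').forEach(function (row) {
    var nameInput = row.querySelector('input[name$=".bookName"]');
    if (!nameInput || !nameInput.value.trim()) {
      row.remove();                       // removed inputs are not submitted
      return;
    }
    row.querySelectorAll('input').forEach(function (input) {
      // rewrite e.g. books[3].bookName -> books[0].bookName
      input.name = input.name.replace(/books\[\d+\]/, 'books[' + index + ']');
    });
    index++;
  });
});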
