Should I check that a DynamoDB map update path exists before sending?

I am storing data in DynamoDB as a Map attribute type and implementing a RESTful PATCH endpoint (RFC 6902) to modify that data.
In my validation routine, I am currently NOT making sure that the Map exists before translating the patch into an update expression and sending it to DynamoDB.
This means that if the Map is not already set in DynamoDB, the update will fail with a ValidationException, since the document path does not exist.
My question: is it acceptable to rely on DynamoDB rejecting the update in this way, or should I fetch a copy of the item and reject the patch in my own validation routine?
I have not been able to think of a reason not to allow DynamoDB the pleasure of rejecting the patch (and it saves me a GET call), but relying on third-party validation like this makes me a little nervous (although we specify the API version when accessing AWS now, so in theory this should always work...).

This seems like a highly subjective question, but personally I don't see the benefit of doing your own validation at the cost of an extra round-trip to DynamoDB when, the way I see it, the worst that can happen is DynamoDB rejecting the update. This is no different from treating a local database update that returns success as authoritative.
And if you're worried about long-term backwards compatibility (i.e. the DynamoDB API contract changing from underneath you), then hopefully you've written some functional/integration tests that run periodically, and specifically whenever you update your software dependencies.
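For illustration, here is a minimal sketch of what relying on that rejection might look like with the AWS SDK for JavaScript v3 (the table, key, and attribute names are all hypothetical):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Hypothetical: the patch writes one key inside a "settings" Map attribute.
async function applyPatch(id: string, mapKey: string, value: string) {
  try {
    await ddb.send(new UpdateCommand({
      TableName: "Documents",
      Key: { id },
      // Fails with a ValidationException if the "settings" Map (the document
      // path) does not already exist on the item.
      UpdateExpression: "SET #map.#key = :value",
      ExpressionAttributeNames: { "#map": "settings", "#key": mapKey },
      ExpressionAttributeValues: { ":value": value },
    }));
  } catch (err: any) {
    if (err.name === "ValidationException") {
      // Translate DynamoDB's rejection into the PATCH endpoint's own 4xx
      // response instead of paying for a GET round-trip up front.
      throw new Error("422: patch path does not exist");
    }
    throw err;
  }
}

If you instead wanted the first write to create the map, DynamoDB's if_not_exists function in a SET expression is the usual route.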

Related

Does Firebase Realtime Database guarantee FCFS order when serving requests?

This is a fairly straightforward question.
Does the Firebase Realtime Database guarantee 'first come, first served' ordering when handling requests?
When a write request is immediately followed by a read request, will the read fetch the updated data?
In a write-read-write sequence of requests, is the read guaranteed to fetch the data written by the first write?
Suppose a write request could not be performed (due to some connection issue). Since Firebase works offline, that change is saved locally. Meanwhile, another write request made from somewhere else completes successfully. When the first device comes back online, does its write modify the values (since it arrived last), or not (since it was initiated before the latest change)?
There are a lot of questions in your post, and many of them depend on how you implement the functionality. So it's not nearly as straightforward as you may think.
The best I can do is explain a bit of how the database works in the scenarios you mention. If you run into more questions from there, I recommend implementing the use-case and posting back with an MCVE for each specific question.
Writes from a single client are handled in the order in which that client makes them.
But writes from different clients are handled with a last-write-wins logic. If your use-case requires something else, include a client-side timestamp in the write and use security rules to reject writes that are older than the current state.
Firebase synchronizes state to the listeners, and not necessarily all (write) events that led to this state. So it is possible (and fairly common) for listeners to not get all state changes that happened, for example if multiple changes to the same state happened while they were offline.
A client reading data that it has itself changed will always see the state including its own changes.
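As a hedged sketch of the client-side timestamp approach from the last-write-wins point above (the paths, field names, and config are hypothetical, using the modular Firebase JS SDK):

import { initializeApp } from "firebase/app";
import { getDatabase, ref, set, serverTimestamp } from "firebase/database";

// Hypothetical app config and data path.
const app = initializeApp({ databaseURL: "https://example.firebaseio.com" });
const db = getDatabase(app);

// Include a timestamp in every write; serverTimestamp() is resolved when the
// write reaches the server, so writes queued offline still carry an honest time.
function writeStatus(userId: string, status: string) {
  return set(ref(db, `status/${userId}`), {
    status,
    updatedAt: serverTimestamp(),
  });
}

// A security rule can then reject stale writes, along the lines of:
// ".write": "newData.child('updatedAt').val() > data.child('updatedAt').val()"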

Database rollback on API response failure

A customer of ours is very insistent that they expect this "from any API" (meaning they don't want to pay for the changes). I have had trouble finding clear information on this, though.
Say we have an API that creates an appointment for a calendar. Server-side everything was successful, data is committed to the database. API tries to send the HTTP 201 (Created) response, but something goes wrong there. Client ignores the response, or connection dropped, ...
They want our API to undo the database changes in that particular situation.
The question is not how to do this, but rather whether this is something most APIs do. Is this standard behavior? Or is something similar standard, like refusing duplicate create requests?
The difficult part, of course, is actually knowing whether the API failed to deliver the response; and as far as the crux of the question is concerned, no, this is not usual behavior to implement. If the user willingly submits the data, you can go ahead and store it. If the response does not come back due to a timeout (you are not responsible for the user "ignoring" the response), then the client-side code can refresh on failure and load fresh data, and the user can delete the inputted data themselves (given you provide an endpoint for that).
Depending on the database, it is possible to make all database changes of an API call reversible. For example, with SQL you use transactions, with commit, rollback and savepoints. There is most likely a similar mechanism available in NoSQL databases.
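As a hedged illustration of the commit/rollback pattern (a minimal sketch using node-postgres with a hypothetical appointments table; most SQL drivers look similar):

import { Client } from "pg";

// Hypothetical connection and table; the point is the commit/rollback shape.
async function createAppointment(client: Client, title: string, at: Date) {
  await client.query("BEGIN");
  try {
    const res = await client.query(
      "INSERT INTO appointments (title, starts_at) VALUES ($1, $2) RETURNING id",
      [title, at],
    );
    await client.query("COMMIT"); // changes become durable only here
    return res.rows[0].id;
  } catch (err) {
    await client.query("ROLLBACK"); // undo everything since BEGIN
    throw err;
  }
}

Note that a rollback is only possible before the commit. Undoing changes after a successful commit, as the customer requests, requires a compensating action such as a delete endpoint, which is one reason this is rarely implemented.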

Would real world Meteor application use server side Methods almost exclusively?

I'm learning Meteor and fundamentally enjoy how fast I can build data-driven applications. However, as I went through the Creating Posts chapter in the Discover Meteor book, I learned about using server-side Methods. The primary reason given (and there are a number of very valid reasons to use these) was the timestamp: you wouldn't want to rely on the client's date/time, you'd want to use the server's.
Makes sense, except that in almost every application I've ever built, we store the create/update date/time of a row in a column. Effectively every single create or update to the database records a date/time, which in Meteor now looks like it requires server-side Methods to ensure data integrity.
If I'm understanding correctly, that pretty much eliminates the ease of use and real-time nature of a client-side Collection, because I'll need to use Methods for almost every single update and create to our databases.
Just wanted to check and see how everyone else is doing this in the real world. Are you calling a server-side Method that just returns the date/time and then using a client-side Collection, or something else?
Thanks!
The short answer to this question is that yes, every operation that affects the server's database will go through a server-side method. The only difference is whether you are defining this method explicitly or not.
When you are just getting started with Meteor, you will probably do insert/update/remove operations directly on client collections using validators, which check whether the operation is allowed. This usage actually calls predefined methods on both the server and client (for a collection named foo, you have /foo/insert, for example), which simply check the specified validators before doing the operation. As you become more familiar with Meteor, you will probably override these default methods, for the reasons you described (among others).
When using your own methods, you will typically want to define a method both on the server and the client, just as the default collection functions do for you. This is because of Meteor's latency compensation, which allows most client operations to be reflected immediately in the browser without any noticeable lag, as long as they are permitted. Meteor does this by first simulating the effect of a method call in the client, updating the client's cached data temporarily, then sending the actual method call to the server. If the server's method causes a different set of changes than the client's simulation, the client's cache will be updated to reflect this when the server method returns. This also means that if the client's method would have done the same as the server, we've basically allowed for an instant operation from the perspective of the client.
By defining your own methods on the server and client, you can extend this to fit your own needs. For example, if you want to insert timestamps on updates, have the client insert its own timestamp in the simulated method. The server will insert an authoritative timestamp, which will replace the client's timestamp when the method returns. From the client's perspective, the insert operation will be instant, except for an update to the timestamp if the client's clock happens to be way off. (By the way, you may want to check out my timesync package for displaying relative server time accurately on the client.)
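A rough sketch of that pattern, assuming a hypothetical Posts collection and a method file loaded on both client and server (so the simulation runs too):

import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

const Posts = new Mongo.Collection("posts");

// Shared code: runs as a simulation on the client and authoritatively on the server.
Meteor.methods({
  insertPost(title: string) {
    // On the client this uses the client's clock (instant feedback); on the
    // server it is the authoritative time, which replaces the simulated value
    // once the method returns.
    Posts.insert({ title, createdAt: new Date() });
  },
});

Calling Meteor.call("insertPost", "Hello") from the client then shows the post immediately, with createdAt corrected afterwards if the two clocks disagree.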
A final note: it's good to understand what scope you are doing collection operations in, as this was one of the things that originally confused me about Meteor. For example, if you have a collection instance Foo on the client, Foo.insert() in normal client code will call the default pair of client/server methods. However, Foo.insert() in a client method will run only in a simulation and will never call server code, so you will need to define the same method on the server and make sure you do Foo.insert() there as well for the method to work properly.
A good rule of thumb for moving forward is to replace groups of validated collection operations with your own methods that do the same operations, then add specific extra features on the server and client respectively.
In short: yes!
Publications exist to send out a 'live', dynamic subset of the database to the client, sending DDP added messages for existing records, followed by a ready message, and then added, changed, and removed messages to keep the client's cache consistent.
Methods exist to cause Mongo updates, directly or indirectly, and as Andrew mentioned, they are always in use.
But truly, because of Meteor's publication architecture, any edit to a collection that is currently being published to at least one client will be pushed out via DDP, regardless of the source of the change to Mongo, even an outside process.
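A bare-bones sketch of that publication flow (hypothetical collection and publication names):

import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

const Items = new Mongo.Collection("items");

if (Meteor.isServer) {
  // Publishes a live cursor: Meteor sends DDP added/changed/removed messages
  // whenever the underlying Mongo data changes, whatever the source of the change.
  Meteor.publish("items", function () {
    return Items.find();
  });
}

if (Meteor.isClient) {
  Meteor.subscribe("items"); // keeps the client-side cache consistent
}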

Is PUT/DELETE idempotency with REST automatic?

I am learning about REST and PUT/DELETE. I have read that both of these (along with GET) are idempotent, meaning that multiple identical requests put the server into the same state.
Does a duplicate PUT/DELETE request ever leave the web browser (when using XMLHttpRequest)? In other words, will the server be updating the same database record for each PUT request, or will duplicate requests be ignored automatically?
If yes, how is using PUT or DELETE different from just using POST?
I read an article which suggested that RESTful web services were the way forward. Is there any particular reason why HTML5 forms do not support PUT/DELETE methods?
REST is just a design structure for data access and manipulation. There are no set-in-stone rules for how a server must react to data requests.
That being said, typically a REST request of PUT or DELETE would be as follows:
DELETE /item/10293
or
PUT /item/23848
foo=bar
fizz=buzz
herp=derp
The requests given are associated with a specific ID. Because of this, telling the server to delete the same ID 15 times will end up with pretty much the same result as calling it once, unless there's some sort of re-numbering going on.
With the PUT request, telling the server to update a specific item to specific values will also lead to the same result.
A case where a command would be non-idempotent would typically involve some sort of relative value:
DELETE /item/last
Calling that 15 times would likely remove 15 items, rather than the same last item. An alternative using HTTP properly might look like:
POST /item/last?action=delete
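To make the idempotency distinction concrete, here is a hedged, minimal Express sketch (hypothetical in-memory store) in which DELETE by ID is idempotent while POST is not:

import express from "express";

const app = express();
app.use(express.json());

const items = new Map<string, object>();
let nextId = 1;

// POST is not idempotent: every call creates a new item.
app.post("/item", (req, res) => {
  const id = String(nextId++);
  items.set(id, req.body);
  res.status(201).json({ id });
});

// DELETE by ID is idempotent: the 1st and the 15th call leave the same state.
app.delete("/item/:id", (req, res) => {
  items.delete(req.params.id);
  res.status(204).end();
});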
Again, REST isn't an official spec, it's just a structure with some common qualities. There are many ways to implement a RESTful structure.
As for HTML5 forms supporting PUT & DELETE, it's really up to the browsers to start supporting different methods rather than the spec itself. If all the browsers started implementing different methods for form submission, I'm sure they'd be added to the spec.
With the web going the way it is, a good RESTful implementation is liable to also incorporate some form of AJAX anyway, so to me it seems largely unnecessary.
Does a duplicate PUT/DELETE request ever leave the web browser (when using XMLHttpRequest)?
Yeah, sure. Idempotence is only a convention and it's not enforced. If you make a request, duplicate or not, it will run through.
In other words, will the server be updating the same database record for each PUT request, or will duplicate requests be ignored automatically?
If it conforms to REST, it should update the same database record twice, for example running UPDATE user SET name = 'John' twice. There is no guarantee what it will or will not do, though; it depends on how it's implemented.
If yes, how is using PUT or DELETE different from just using POST?
It's just a convention. PUT and DELETE requests may or may not be handled differently from POST in the site's code.
I read an article which suggested that RESTful web services were the way forward. Is there any particular reason why HTML5 forms do not support PUT/DELETE methods?
I'm not really sure, to be honest. You can work around this by using a hidden <input> field called _method or similar, setting it to DELETE or PUT, and then handling that server-side.
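A rough sketch of that workaround, hand-rolled on the server side (the _method name is just a convention; the field and route names here are hypothetical):

import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Rewrite the method before routing when a form posts a _method override:
// <form method="POST"><input type="hidden" name="_method" value="DELETE"></form>
app.use((req, _res, next) => {
  const override = req.body?._method;
  if (req.method === "POST" && (override === "PUT" || override === "DELETE")) {
    req.method = override;
  }
  next();
});

app.delete("/item/:id", (_req, res) => res.status(204).end());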
PUT operations are idempotent but not safe operations. On success, repeating a PUT operation will not insert duplicate records. Repeat a PUT operation in case of network-failure errors, after verifying conditional headers like If-Unmodified-Since and/or If-Match. Don't repeat it in case of 4XX or 5XX error codes.
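For instance, a retry guarded by If-Match might look like this sketch (hypothetical URL; assumes the caller already holds the resource's current ETag):

// Retry a PUT only on network failure, guarded by If-Match so a concurrent
// change on the server turns the retry into a 412 instead of a blind overwrite.
async function putWithRetry(url: string, body: object, etag: string) {
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      const res = await fetch(url, {
        method: "PUT",
        headers: { "Content-Type": "application/json", "If-Match": etag },
        body: JSON.stringify(body),
      });
      return res; // 2xx, 412 Precondition Failed, or another 4XX/5XX: stop retrying
    } catch {
      // fetch throws only on network failure; safe to repeat an idempotent PUT
    }
  }
  throw new Error("network failure after retries");
}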
REST aims to establish a convention regarding which HTTP method to use; each back end is free to implement anything it wants. Developers can break the convention, but doing so will cause unnecessary confusion for anyone else who consumes the API without being involved in its development.
For DELETE: if you delete an item by ID, the server responds that it's deleted; if you delete it again, it's no longer there, so the server responds "already removed", which is also fine because your purpose is fulfilled. The same goes for PUT: you provide the desired new state of the resource; if it's already in that state, the mission is complete and the result is the same for the client.

How to implement locking across a server farm?

Are there well-known best practices for synchronizing tasks across a server farm? For example, if I have a forum-based website running on a server farm, and two moderators try to take an action that requires writing to multiple tables in the database, and their requests are handled by different servers in the farm, how can one implement locking to ensure they can't take that action on the same item at the same time?
So far, I'm thinking about using a table in the database to synchronize, e.g. check for the item's ID in the table; if it doesn't exist, insert it and proceed, otherwise return. A shared cache could probably also be used for this, but I'm not using one at the moment.
Any other way?
By the way, I'm using MySQL as my database back-end.
Your question implies data-level concurrency control; in that case, use the RDBMS's concurrency-control mechanisms.
That will not help you if you later wish to control application-level actions that do not necessarily map one-to-one to a data entity (e.g. table-record access). The general solution there is a reverse-proxy server that understands application-level semantics and serializes requests accordingly when necessary. (That will negatively impact availability.)
It probably wouldn't hurt to read up on the CAP theorem as well!
You may want to investigate a distributed locking service such as ZooKeeper. It's a reimplementation of a Google service that provides very high-speed distributed resource-locking coordination for applications. I don't know how easy it would be to incorporate into a web app, though.
If all the state is in the (central) database then the database transactions should take care of that for you.
See http://en.wikipedia.org/wiki/Transaction_(database)
It may be irrelevant for you because the question is old, but it may still be useful for others, so I'll post it anyway.
You can use a "SELECT FOR UPDATE" query on a locking object, so you actually use the database to achieve the lock mechanism.
If you use an ORM, you can also do that. For example, in NHibernate you can do:
session.Lock(Member, LockMode.Upgrade);
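In raw SQL, the same idea looks like the sketch below (written with node-postgres against a hypothetical item_locks table; MySQL's InnoDB supports the same SELECT ... FOR UPDATE syntax):

import { Client } from "pg";

// Hypothetical lock row per forum item; both moderators' servers contend on it.
async function withItemLock(client: Client, itemId: number, work: () => Promise<void>) {
  await client.query("BEGIN");
  try {
    // Blocks until any other transaction holding this row's lock commits.
    await client.query("SELECT id FROM item_locks WHERE id = $1 FOR UPDATE", [itemId]);
    await work(); // the multi-table writes, serialized across the farm
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}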
Having a table of locks is an OK way to do it; it is simple and it works.
You could also run the code as a service on a single server, more of an SOA approach.
You could also use a timestamp field with transactions: if the timestamp has changed since you last read the data, you revert the transaction. That way, whoever gets in first has priority.
