I'm writing a sync process to mirror a SQLite DB to one on a server.
I have a master-detail table, where the details must be sent to the server ASAP. However, it is possible that detail 3 arrives before detail 2. I need to replay the steps made to the document and respect the order of the operations.
When a record is saved locally, I send a notification and then post the data. How can I guarantee a strict sequential order using AFNetworking?
By default, operations run concurrently, with no guarantee of order. The only way to ensure they run strictly one after another is to prevent more than one request operation from running at a given time, by setting the manager's operationQueue.maxConcurrentOperationCount property to 1 (or, if you're not using a manager, by enqueuing the operations onto an operation queue with that property set to 1).
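For readers outside the iOS world, the same idea can be sketched language-agnostically as a queue whose concurrency is capped at one; this TypeScript sketch only illustrates the principle (the class and endpoints are made up, it is not AFNetworking code):

```typescript
// A tiny serial queue: each request starts only after the previous one has
// finished, so the server receives the details in the order they were enqueued.
class SerialRequestQueue {
  private tail: Promise<unknown> = Promise.resolve();

  enqueue<T>(request: () => Promise<T>): Promise<T> {
    const next = this.tail.then(request, request); // run even if the previous request failed
    this.tail = next.catch(() => undefined);       // keep the chain alive after errors
    return next;
  }
}

// Usage: details 1, 2, 3 are posted strictly in this order.
const queue = new SerialRequestQueue();
queue.enqueue(() => fetch('/details/1', { method: 'POST' }));
queue.enqueue(() => fetch('/details/2', { method: 'POST' }));
queue.enqueue(() => fetch('/details/3', { method: 'POST' }));
```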
I need to persist my domain objects into two different databases. This use case is purely write-only. I don't need to read back from the databases.
Following Domain Driven Design, I typically create a repository for each aggregate root.
I see two alternatives. I can create a single repository for my aggregate root, and implement it so that it persists the domain object into the two databases.
The second alternative is to create two repositories, one for each database.
From a domain driven design perspective, which alternative is correct?
My requirement is that it must persist the AR in both databases - all or nothing. So if the first one goes through and the second fails, I would need to remove the AR from the first one.
If you had a transaction manager that were to span across those two databases, you would use that manager to automatically roll back all of the transactions if one of them fails. A transaction manager like that would necessarily add overhead to your writes, as it would have to ensure that all transactions succeeded, and while doing so, maintain a lock on the tables being written to.
If you consider what the transaction manager is doing, it is effectively writing to one database and ensuring that write is successful, writing to the next, and then committing those transactions. You could implement the same type of process using a two-phase commit process. Unfortunately, this can be complicated because the process of keeping two databases in sync is inherently complex.
You would use a process manager or saga to manage the process of ensuring that the databases are consistent:
Write to the first database and leave the record in a PENDING status (not visible to user reads).
Make a request to the second database to write the record in a PENDING status.
Make a request to the first database to leave the record in a VALID status (visible to user reads).
Make a request to the second database to leave the record in a VALID status.
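A minimal sketch of that sequence, assuming hypothetical write-only gateways over the two databases (the interface and status names are illustrative):

```typescript
type Status = 'PENDING' | 'VALID';

// Hypothetical write-only gateway over one of the two databases.
interface Store {
  write(id: string, payload: unknown, status: Status): Promise<void>;
  setStatus(id: string, status: Status): Promise<void>;
}

// Process manager / saga step sequence: walk the record through
// PENDING -> VALID in both stores. Any await can fail part-way through,
// which is exactly what the PENDING sweeper described below handles.
async function persistToBoth(first: Store, second: Store, id: string, payload: unknown): Promise<void> {
  await first.write(id, payload, 'PENDING');   // 1. first DB, not visible to reads
  await second.write(id, payload, 'PENDING');  // 2. second DB, not visible to reads
  await first.setStatus(id, 'VALID');          // 3. first DB, visible
  await second.setStatus(id, 'VALID');         // 4. second DB, visible
}
```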
The issue with this approach is that the process can fail at any point. In this case, you would need to account for those failures. For example,
You can have a process that comes through and finds records in PENDING status that are older than X minutes and continues pushing them through the workflow.
You can have a process that cleans up any PENDING records after X minutes and purges them from the database.
Ideally, you are using something like a queue-based workflow that allows you to fire and forget these commands, plus a saga or process manager to identify and react to failures.
The second alternative is to create two repositories, one for each database.
Based on the above, hopefully you can understand why this is the correct option.
If you don't need to read the data back, why not build some sort of command log?
The log acts as a queue: you write the operation to it, and two processes pull new commands from it, each one updating one database. If you can accept that, in the worst case, the two DBs can temporarily hold different versions of the data, with the guarantee that they will eventually be consistent, this seems much easier to me than transactions spanning two different DBs.
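A rough sketch of that shape, assuming an append-only log interface and one consumer loop per database (all names here are made up):

```typescript
// Append-only command log; `seq` is a monotonically increasing sequence number.
interface Command { seq: number; type: string; payload: unknown; }

interface CommandLog {
  append(type: string, payload: unknown): Promise<void>;
  readAfter(seq: number, limit: number): Promise<Command[]>;
}

// One consumer per database. Each keeps its own offset, so the two databases
// catch up independently and eventually converge on the same data.
interface Consumer {
  loadOffset(): Promise<number>;
  saveOffset(seq: number): Promise<void>;
  apply(cmd: Command): Promise<void>;  // write the command's effect to this database
}

async function runConsumer(log: CommandLog, consumer: Consumer): Promise<void> {
  let offset = await consumer.loadOffset();
  for (;;) {
    const batch = await log.readAfter(offset, 100);
    for (const cmd of batch) {
      await consumer.apply(cmd);
      offset = cmd.seq;
      await consumer.saveOffset(offset);
    }
    if (batch.length === 0) {
      await new Promise((resolve) => setTimeout(resolve, 1000)); // idle: poll again shortly
    }
  }
}
```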
I'm not sure how much of a DDD use case this is: if you don't need to read the data back, you don't have any state to manage, so there is no need for entities/aggregates.
I am building a system that processes orders. Each order follows a workflow, so an order can be, e.g., booked, accepted, payment approved, cancelled, and so on.
Every time the status of an order changes, I will post the change to SNS. To know whether an order's status has changed, I need to make a request to an external API and compare the result to the last known status.
The question is: What is the best place to store the last known order status?
1. An SQS queue. Every time I read a message from the queue, I check the status using the external API, delete the message, and insert another one with the new status.
2. Use a database (like DynamoDB) to control the order status.
You should not use the word "store" to describe something happening with stateful facts and a queue. Stateful, factual information should be stored -- persisted -- to a database.
The queue messages should be treated as "hints" on what work needs to be done -- a request to consider the reasonableness of a proposed action, and if reasonable, perform the action.
What I mean by this, is that when a queue consumer sees a message to create an order, it should check the database and create the order if not already present. Update an order? Check the database to see whether the order is in a correct status for the update to occur. (Canceling an order that has already shipped would be an example of a mismatched state).
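A minimal sketch of that check-then-act handler, assuming a repository interface over the authoritative database and illustrative status names (none of these come from the question):

```typescript
type OrderStatus = 'BOOKED' | 'ACCEPTED' | 'PAYMENT_APPROVED' | 'SHIPPED' | 'CANCELLED';

// Assumed repository over the authoritative database.
interface OrderRepo {
  getStatus(orderId: string): Promise<OrderStatus | null>;
  create(orderId: string): Promise<void>;
  setStatus(orderId: string, status: OrderStatus): Promise<void>;
}

// The queue message is only a hint; the database decides what is reasonable.
async function handleMessage(
  repo: OrderRepo,
  msg: { orderId: string; action: 'CREATE' | 'CANCEL' },
): Promise<void> {
  const current = await repo.getStatus(msg.orderId);

  if (msg.action === 'CREATE') {
    if (current === null) {
      await repo.create(msg.orderId);   // idempotent: a duplicate delivery is a no-op
    }
    return;
  }

  // CANCEL
  if (current === null || current === 'SHIPPED' || current === 'CANCELLED') {
    return;                             // mismatched or already-applied state: ignore
  }
  await repo.setStatus(msg.orderId, 'CANCELLED');
}
```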
Queues, by design, can't be as precise and atomic in their operation as a database should be. The Two Generals Problem is one of several scenarios that become an issue in dealing with queues (and indeed with designing a queue system) -- messages can be lost or delivered more than once.
What happens in a "queue is authoritative" scenario when a message is delivered (received from the queue) more than once? What happens if a message is lost? There's nothing wrong with using a queue, but I respectfully suggest that in this scenario the queue should not be treated as authoritative.
I would go with the database option instead of SQS:
1) SQS option:
You will have one application which changes the status.
It adds the status value to SQS.
Another application will then check your messages, send the notification, and delete the message.
2) DynamoDB option:
Insert your updated status into DynamoDB.
Configure a Lambda function to trigger on updates to that field.
The Lambda function will send the notification.
The database option looks cleaner. Additionally, you don't have to worry about maintaining a queue, and with a queue you can only read one message at a time unless you implement parallel readers. With a database, you can update multiple rows, each update will trigger the Lambda, and you don't have to worry about it.
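A hedged sketch of that wiring, assuming the orders table has a DynamoDB stream configured with NEW_AND_OLD_IMAGES, a status attribute on each item, and a TOPIC_ARN environment variable on the Lambda (all of these names are assumptions, not from the question):

```typescript
import { DynamoDBStreamEvent } from 'aws-lambda';
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';

const sns = new SNSClient({});

// Fires on every write to the orders table's stream; publishes to SNS only
// when the status attribute actually changed.
export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== 'MODIFY') continue;

    const oldStatus = record.dynamodb?.OldImage?.status?.S;
    const newStatus = record.dynamodb?.NewImage?.status?.S;
    if (!newStatus || newStatus === oldStatus) continue;

    await sns.send(new PublishCommand({
      TopicArn: process.env.TOPIC_ARN,
      Message: JSON.stringify({
        orderId: record.dynamodb?.Keys?.orderId?.S,
        oldStatus,
        newStatus,
      }),
    }));
  }
};
```

If the very first status of a new order should also notify, the handler would additionally need to handle INSERT events.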
Hope that helps
I'm learning Meteor and fundamentally enjoy how fast I can build data-driven applications. However, as I went through the Creating Posts chapter in the Discover Meteor book, I learned about using server-side Methods. Specifically, the primary reason (and there are a number of very valid reasons to use these) was the timestamp: you wouldn't want to rely on the client date/time, you'd want to use the server date/time.
Makes sense, except that in almost every application I've ever built we store the create/update date/time of a row in a column. Effectively every single create or update to the database records a date/time, which in Meteor now looks like it would require server-side Methods to ensure data integrity.
If I'm understanding correctly, that pretty much eliminates the ease of use and real-time nature of a client-side Collection, because I'll need to use Methods for almost every single update and create to our databases.
Just wanted to check and see how everyone else is doing this in the real world. Are you just calling a server-side Method that returns the date/time and then using a client-side Collection, or something else?
Thanks!
The short answer to this question is that yes, every operation that affects the server's database will go through a server-side method. The only difference is whether you are defining this method explicitly or not.
When you are just getting started with Meteor, you will probably do insert/update/remove operations directly on client collections using validators, which check whether the operation is allowed. This usage actually calls predefined methods on both the server and client (for a collection named foo you have /foo/insert, for example), which simply check the specified validators before doing the operation. As you become more familiar with Meteor you will probably override these default methods, for reasons you described (among others).
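For reference, those validators are the collection's allow/deny rules; a minimal sketch with an illustrative collection and an assumed owner field:

```typescript
import { Mongo } from 'meteor/mongo';

interface Post { _id?: string; owner: string; title: string; }

// Clients calling the predefined /posts/insert, /posts/update, and
// /posts/remove methods are checked against these rules on the server.
const Posts = new Mongo.Collection<Post>('posts');

Posts.allow({
  insert(userId, doc) {
    return !!userId;                 // only logged-in users may insert
  },
  update(userId, doc, fieldNames, modifier) {
    return doc.owner === userId;     // only the owner may update
  },
  remove(userId, doc) {
    return doc.owner === userId;     // only the owner may remove
  },
});
```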
When using your own methods, you will typically want to define a method both on the server and the client, just as the default collection functions do for you. This is because of Meteor's latency compensation, which allows most client operations to be reflected immediately in the browser without any noticeable lag, as long as they are permitted. Meteor does this by first simulating the effect of a method call in the client, updating the client's cached data temporarily, then sending the actual method call to the server. If the server's method causes a different set of changes than the client's simulation, the client's cache will be updated to reflect this when the server method returns. This also means that if the client's method would have done the same as the server, we've basically allowed for an instant operation from the perspective of the client.
By defining your own methods on the server and client, you can extend this to fit your own needs. For example, if you want to insert timestamps on updates, have the client insert its own timestamp in the simulation method. The server will insert an authoritative timestamp, which will replace the client's timestamp when the method returns. From the client's perspective, the insert operation will be instant, except for an update to the timestamp if the client's clock happens to be way off. (By the way, you may want to check out my timesync package for displaying relative server time accurately on the client.)
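A minimal sketch of that timestamp pattern, assuming a posts collection and a method defined in code loaded on both client and server (the names are illustrative):

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

interface Post { _id?: string; title: string; createdAt: Date; }

const Posts = new Mongo.Collection<Post>('posts');

// Shared code: on the client this runs as a latency-compensated simulation,
// on the server it runs for real and its result is authoritative.
Meteor.methods({
  insertPost(title: string) {
    check(title, String);
    return Posts.insert({
      title,
      // Client clock in the simulation, server clock on the server; the
      // server's value overwrites the simulated one when the method returns.
      createdAt: new Date(),
    });
  },
});
```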
A final note: it's good to understand what scope you are doing collection operations in, as this was one of the things that originally confused me about Meteor. For example, if you have a collection instance Foo on the client, Foo.insert() in normal client code will call the default pair of client/server methods. However, Foo.insert() in a client method will run only in a simulation and will never call server code, so you will need to define the same method on the server and make sure you do Foo.insert() there as well for the method to work properly.
A good rule of thumb for moving forward is to replace groups of validated collection operations with your own methods that do the same operations, and then add specific extra features on the server and client respectively.
In short— yes!
Publications exist to send out a 'live', dynamic subset of the database to the client, sending DDP added messages for existing records, followed by a ready message, and then added, changed, and removed messages to keep the client's cache consistent.
Methods exist to cause Mongo updates, directly or indirectly, and as Andrew mentioned, they are always in use.
But truly, because of Meteor's publication architecture, any edits to collections that are currently being published to at least one client will be sent out via DDP, regardless of the source of the change to Mongo - even an outside process.
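For illustration, a minimal publication/subscription pair (the names are made up); any change to the published cursor's results, whatever its source, reaches subscribed clients as DDP added/changed/removed messages:

```typescript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Posts = new Mongo.Collection('posts');

if (Meteor.isServer) {
  // Server: publish a live cursor; Meteor observes it and pushes DDP messages.
  Meteor.publish('recentPosts', function () {
    return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
  });
}

if (Meteor.isClient) {
  // Client: subscribing keeps the local minimongo cache in sync.
  Meteor.subscribe('recentPosts');
}
```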
I have a normal receive port using the WCF adapter for Oracle with a polling query. The problem is that the receive port not only needs to run when the polling query has a hit, but also once per day, regardless of the polling statement.
Is there a way to make it possible without creating the entire process again?
The cleanest way will be to use an additional receive location. So you will end up with one receive port that contains two receive locations, one for each query.
In the past I have done this with the WCF adapter when polling SQL Server. The use of two locations did require duplicating the schema, unfortunately, to account for the different namespaces. You will probably need two different (and essentially identical) schemas as well.
WCF-SQL polling locations require distinct InboundId values, while WCF Oracle polling (as you have noted in the comments) requires a different PollingId for each receive location.
The ESB Toolkit includes pipeline components to remove and add namespaces, if you need downstream applications to work with only a single schema on the messages coming from both locations and/or do not also want to duplicate a BizTalk map.
Change your polling statement so that it has an OR CURRENT_TIME() BETWEEN ....
That way it will trigger at the time you want.
I have two workflow services (state machines) that should cooperate and exchange information to accomplish the desired behavior.
The problem I have (but I also had it with only one state machine) is that sometimes I try to send an operation which is not allowed by the current state.
There are two problems: 1) I have to wait for the operation timeout to know that the operation was not allowed, and 2) I'm "masking" real timeouts caused by other problems.
So far, I have found two possible solutions: 1) I can change the signatures to return true (allowed) or false (not allowed) and add all operations to all states (not-allowed operations would trigger a self-transition); 2) I always add all transitions to all states (not-allowed ones would trigger a self-transition), but for transitions that are not allowed I reply with an exception.
I would like to know which is the best option (and, of course, I'd appreciate other possible solutions too).
I would also like to know how I could reply to a request with an exception (maybe throwing it within a try/catch?).
Thanks
Another option here is to use the information in the workflow persistence store. One of the columns contains the active bookmarks and in the case of a Receive activity this is the SOAP operation. You could have a separate service that exposes that information for a given workflow instance.
You still need to cater for the fact that you might send messages to a workflow that is in a different state, because the workflow persistence store isn't updated right away (unless you make it do so) and because multiple people might send messages to the same workflow instance. Still, this basic technique works really well, and I have used it to enable/disable buttons in the UI based on the state of a workflow.