Is there any way to control the execution order of repeatable migration scripts in Flyway?
I want a repeatable migration script to run after all other repeatable and versioned scripts whenever its checksum changes.
The execution order of repeatable scripts appears to be controlled by the name following the R__ prefix: first digits, then upper-case letters, then lower-case letters.
Instead of being run just once, they are (re-)applied every time their checksum changes.
Within a single migration run, repeatable migrations are always applied last, after all pending versioned migrations have been executed.
https://flywaydb.org/documentation/migrations#repeatable-migrations
Maybe naming the scripts in the right order is not enough. If you name them R__A, R__B and R__C, it works the first time, but when you later change only R__B, then only R__B will be executed. This can be a problem if subsequent scripts should be re-executed even though they did not change. For example, if R__B creates (or re-creates) a table and R__C inserts some static data, a change to R__B alone rebuilds the table, but R__C will not run again because its checksum is unchanged.
I would like to manually trigger an Airflow DAG and pass parameters in that call. How can I create DAG tasks based on the passed parameters?
Unfortunately, you can't dynamically create tasks in the sense you are asking about. That is, you cannot add or delete tasks for a single run based on the parameters you pass: the DAG structure is always the full DAG, and any change to it applies to all subsequent DagRuns as well.
You have two options that might still get you to a satisfactory solution though.
Mapped tasks, as explained here: https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#simple-mapping (see the sketch after this list).
Generate a DAG based on some external resource. Example: we have a library on which we call my_config.get_dag(), which will then create the DAG based on a JSON file.
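For the first option, here is a minimal sketch of dynamic task mapping driven by parameters passed at trigger time. The DAG id, the task names, and the "items" key in dag_run.conf are illustrative assumptions, not something from the original answer:

```
# Sketch: dynamic task mapping driven by parameters passed when manually
# triggering the DAG. DAG id, task names and the "items" conf key are
# illustrative assumptions.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.python import get_current_context


@dag(start_date=datetime(2023, 1, 1), schedule_interval=None, catchup=False)
def mapped_from_conf():

    @task
    def read_items():
        # Read the list passed via `airflow dags trigger --conf '{"items": [...]}'`.
        context = get_current_context()
        return (context["dag_run"].conf or {}).get("items", [])

    @task
    def process(item):
        print(f"processing {item}")

    # One mapped task instance is created per element at run time;
    # the DAG structure itself stays the same.
    process.expand(item=read_items())


mapped_from_conf()
```

The number of mapped process instances is decided at run time from the value returned by read_items, which is the closest Airflow gets to per-run "dynamic" tasks; the DAG definition itself never changes.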
Our team is transitioning to Robot Framework and we have some conceptual questions. Our biggest at the moment is how to set up a database insert/delete dynamically, depending on the test we are trying to run. For example, we may have a test suite where we want to test an endpoint like so:
Record exists, endpoint is valid
Record exists with data flaw, error is handled correctly
Record does not exist, error is handled correctly
...etc
Each of these requires a different kind of document to be inserted into our database, then deleted, in setup/teardown.
Currently, we're setting up each test as a separate suite and explicitly setting the insert statements (in this case, the JSON, since we're using MongoDB) in each Suite Setup keyword. We have some factory-method resources to help reduce redundancy, but it's still very repetitive: we are copying and pasting entire .robot files and changing a value or two in each one.
We have tried the DataDriver library, but we haven't been able to get it working with the variable scoping. And we can't make these setup/teardown steps simple test steps, since we need to be sure that each document is created and destroyed before and after each test.
I'm looking to integration test my Couchbase implementation and I'm running into a problem with the eventually consistent nature of Couchbase. In production, it's perfectly OK for my data to be stale, but at test time I'd like to insert some data and then verify that I'm getting it back via my various services. This doesn't work if the data is stale, because my test expectations can't account for that.
I can work around this by setting staleState to false in the Couchbase client, but that means every test I have is going to trigger a rebuild of the indexes, which increases their running time.
Is there a way to force Couchbase to trigger a one-time rebuild of the indexes for a design doc? Essentially, I'd like to upload all of my test data, trigger a rebuild, and then execute my test cases.
Also, if there's a better pattern for integration testing with Couchbase, I'd love to hear it.
Thanks,
M.
Couchbase only rebuilds the view indexes on a stale=false call if there is actually new data that needs to go into the index. Your first stale=false call may take some time, but the rest of the calls should be fast even with stale=false set, as long as you're not putting more data into your cluster.
For all of the subsequent calls there will be a small (millisecond or smaller) delay due to the index checking to make sure it is up to date. If you don't want this, you can just run the queries with stale=true, and again, as long as you're not inserting any more data, you should get the correct results.
The last thing to note is that view index builds are incremental so they never rebuild the entire index.
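To illustrate the pattern from the question and answer (load the test data, absorb the one-time index build with a single stale=false query, then run the tests against the already-built index), here is a minimal sketch against the view REST API on port 8092. The host, bucket, design document, view name, and credentials are illustrative assumptions, and an SDK view query with the same stale settings would work equally well:

```
# Sketch: warm the view index once after loading test data, then query with
# stale=ok during the tests. Host, port, bucket, design doc, view name and
# credentials are illustrative assumptions; this talks to the view REST API
# directly instead of going through an SDK.
import requests

VIEW_URL = "http://localhost:8092/testbucket/_design/mydesign/_view/myview"
AUTH = ("Administrator", "password")  # adjust for your cluster


def warm_index():
    # stale=false forces the index to be brought up to date before returning,
    # so this single call absorbs the one-time rebuild cost.
    resp = requests.get(VIEW_URL, params={"stale": "false", "limit": 0}, auth=AUTH)
    resp.raise_for_status()


def query_view(**params):
    # During the tests, stale=ok returns whatever is already indexed, which is
    # complete because no further data has been inserted since warm_index().
    params.setdefault("stale", "ok")
    resp = requests.get(VIEW_URL, params=params, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["rows"]


# Typical integration-test flow:
# 1. load all test documents into the bucket
# 2. warm_index()                  # triggers the one-time index build
# 3. run the test cases, querying with query_view(...)
```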
We have implemented 6-7 triggers on a table, 4 of which are UPDATE triggers. All 4 of these triggers require long processing because of data manipulation and conditions. Whenever a trigger executes, all the pages on the website stop responding and hang for every other user on other systems as well. Even when we execute an UPDATE statement in SQL Server Management Studio on the table that holds the triggers, it also hangs. Can we resolve this hanging issue by moving the trigger code into stored procedures and calling those stored procedures after the UPDATE statement on the table?
I think the triggers block table access for other users while they execute. If that's not the case, can anyone suggest a solution?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast and lean.
If you need processing - let the trigger record the info needed into a separate "command" table, and have another process (e.g. a scheduled SQL Agent job) that checks that table for commands to be executed, and then executes those commands - separately, independently of the main application, in a separate execution path.
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
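As a sketch of the "command table plus separate process" idea above: the trigger only inserts a small row into a queue table, and something else does the heavy lifting later. The table dbo.TriggerCommandQueue, the procedure dbo.usp_DoHeavyProcessing, and the connection string are hypothetical names, and the worker is shown here as a standalone Python/pyodbc loop purely for illustration; a scheduled SQL Agent job doing the same polling, as suggested above, works just as well:

```
# Sketch of the "command table + separate worker" pattern described above.
# Table name, stored procedure and connection string are hypothetical; the
# same polling logic could live in a scheduled SQL Agent job instead.
import time

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=.;DATABASE=MyDb;Trusted_Connection=yes"
)


def process_pending_commands():
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        # The trigger only INSERTs a small row here, so it stays fast and lean.
        cursor.execute(
            "SELECT Id, RecordId FROM dbo.TriggerCommandQueue WHERE Processed = 0"
        )
        for command_id, record_id in cursor.fetchall():
            # The heavy data manipulation happens here, outside the trigger,
            # so the original UPDATE is not held up while it runs.
            cursor.execute("EXEC dbo.usp_DoHeavyProcessing ?", record_id)
            cursor.execute(
                "UPDATE dbo.TriggerCommandQueue SET Processed = 1 WHERE Id = ?",
                command_id,
            )
        conn.commit()


if __name__ == "__main__":
    while True:
        process_pending_commands()
        time.sleep(30)  # poll every 30 seconds
```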
Can we resolve this hanging issue by moving the trigger code into stored procedures and calling those stored procedures after the UPDATE statement on the table?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
I think the triggers block table access for other users while they execute.
Well, triggers do NO SUCH THING - so you think wrong.
A trigger does what it is told to do, and an empty trigger takes no locks of its own (the locks come from whatever statement fired it). If you do set up a table-wide lock, fire whoever did that and redesign.
Triggers should be fast and light, and should finish quickly. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently, but here goes...
The trigger won't set up a lock as such, but if it runs other UPDATE statements, those will require locks, and if those UPDATE statements fire other triggers, you can get a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening, then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need updating.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your ASP.NET skills are stronger than your T-SQL skills, then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that, between the UPDATE completing and the stored procedure completing, the records will be in an intermediate state showing the initial change but not the remaining ones. This may or may not be a problem in your case.
I am submitting jobs to the batch process one after the other.
How do I ensure that the second batch job runs only when the first one has finished?
Right now both jobs execute simultaneously, which I don't want to happen.
There are two options: you can do this through code, or just via manual setup. The manual method is fairly easy: go to Basic > Inquiries > Batch Job, create a new batch job and save it. Then click "View Tasks" and create a new task; this will be your first batch task. Choose your class, description, batch group, etc., then save. Click "Parameters" to set up the parameters.
After that, you can set up your dependent task. Make sure both tasks have descriptions. Add your second batch task and save. Then, in the lower left corner, click on the task that you want to have a condition, add a row there, and set up your conditions so that the dependent task won't run until the other one has completed.
Via X++ code, you would create a BatchHeader where you set up basically the same thing we just did manually. You use the .addDependency method to make one task dependent on the completion of the other. This walkthrough will get you started with a job to create the batch header, and you'll just have to play around to get the dependency working.