I use Doctrine Migrations in my Symfony 2 project to safely migrate my database. Most of the time it's all about adding columns, but sometimes I split columns (because of normalization) or change relationships (one-to-one -> one-to-many).
Creating the new column itself is simple, but deleting or renaming the old column/property is not: that may only happen once the data has been transformed from the old format to the new one.
Example:
Current situation: Task entity with one tag
Future situation: Task entity with a one-to-many relationship to Tags
I tried the preUp/postUp functions in the migration script.
Pseudo-code of the transformation:

$tasks = $em->getRepository('Task')->findAll();
foreach ($tasks as $task) {
    $tag = new Tag();
    $tag->setName($task->getTag());   // copy the old single-tag value
    $em->persist($tag);
    $task->setNewtag($tag);           // attach it via the new relationship
    $em->persist($task);
}
$em->flush();
In the second migration script I can safely delete the old Tag column and rename Newtag to Tag.
Problem:
When I run the migration with --dry-run, the SQL statements are (of course) not executed, but the 'transformation' code in preUp/postUp is, which results in an error because the columns do not exist.
How can I overcome this error? Can I intercept the result of the SQL script and only execute the transformation if the SQL was actually applied?
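What I have in mind is something like the following sketch (assuming the new column is called newtag_id and the tables are task and tag, which are just placeholder names; it uses the DBAL connection that the migration class already provides):

public function postUp(Schema $schema)
{
    // Only transform data when the new column really exists,
    // i.e. when the SQL part of the migration was executed (not a dry-run).
    $columns = $this->connection->getSchemaManager()->listTableColumns('task');
    if (!isset($columns['newtag_id'])) {
        return;
    }

    foreach ($this->connection->fetchAll('SELECT id, tag FROM task') as $row) {
        // create the new Tag row and link the Task to it
        $this->connection->insert('tag', array('name' => $row['tag']));
        $tagId = $this->connection->lastInsertId();
        $this->connection->update(
            'task',
            array('newtag_id' => $tagId),
            array('id' => $row['id'])
        );
    }
}

But I'm not sure this is the intended way to make the transformation conditional.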
I want to know the last update time of an InterSystems Caché DB table. Please let me know the relevant command. I ran through their command documentation:
http://docs.intersystems.com/latest/csp/docboo/DocBook.UI.Page.cls?KEY=GTSQ_commands
But I don't see any such command there. I also tried searching through this:
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_currenttimestamp
Is this not the complete documentation of commands?
Caché does not maintain "last updated" information by default, as it might introduce an unnecessary performance penalty on DML operations.
You can add this field manually to every table of interest:
Property LastUpdated As %TimeStamp [ SqlComputeCode = { Set {LastUpdated}= $ZDT($H, 3) }, SqlComputed, SqlComputeOnChange = (%%INSERT, %%UPDATE) ];
This way it would keep the time of the last update/insert for every row, but it still would not help you with deletes.
Alternatively, you can set up triggers for every DML operation that maintain the timestamp in a separate table.
Without additional coding, the only way to gather this information is to scan the journal files, which is not really their intended use and would be slow at best.
I have an MLOAD job that inserts data from an Oracle database into a Teradata database. One of the things it does is drop the destination table and recreate it. Our production website populates a dropdown list based on what's in the destination table.
If the MLOAD script is not run as a single transaction, then it's possible that the dropdown list could fail to populate properly if the binding occurs during the MLOAD job. If it is transactional, however, it would be a seamless process because the changes would not show until the transaction is committed.
I checked the dbc.DBQLogTbl and dbc.DBQLQryLogsql views after running the MLOAD job and it appears there are several transactions occurring within the job, so it would seem that the entire job is not done in a single transaction. However, I wanted to verify that this is indeed the case before I make assumptions.
A transaction in Teradata cannot include multiple DDL statements; each DDL statement must be committed separately.
An MLoad is logically treated as a single transaction, even if you see multiple transactions in DBQL; these are steps to prepare and clean up.
When your application tries to select from the target table, everything will be OK (unless it's doing a dirty read using LOCKING ROW FOR ACCESS).
Btw, there might be another error message, "table doesn't exist", when the application tries to select. Why do you drop/recreate the table instead of doing a simple DELETE?
Another solution would be to load a copy of the table and use view switching:
mload tab2;
replace view v as select * from tab2;
delete from tab1;
The next load will do:
mload tab1;
replace view v as select * from tab1;
delete from tab2;
And so on. Of course your load job needs to implement the switching logic.
I'd like to use Flyway for a DB update in a situation where a DB already exists with production data in it. The problem I'm looking at now (and I have not found a nice solution for yet) is the following:
There is an existing DB table with numeric IDs, e.g.
create table objects ( obj_id number, ...)
There is a sequence "obj_seq" to allocate new obj_ids
During my DB migration I need to introduce a few new objects, hence I need new object IDs. However, I do not know at development time what these ID numbers will be.
There is a DB trigger which later references these IDs. To improve performance I'd like to avoid determining the actual IDs every time the trigger runs, and rather put the IDs directly into the trigger.
Example (very simplified) of what I have in mind:
insert into objects (obj_id, ...) values (obj_seq.nextval, ...)
select obj_seq.currval from dual
-> store this in variable "newID"
create trigger on some_other_table
when new.id = newID
...
Now, is it possible to dynamically determine/use such variables? I have seen the Flyway placeholders, but my understanding is that I cannot set them dynamically as in the example above.
I could use a Java-based migration script and do whatever string magic I like, so that would be a way of doing it, but maybe there is a more elegant way using SQL?
Many thx!!
tge
If the table you are updating contains only reference data, get rid of the sequence and assign the IDs manually.
If it contains a mix of reference and user data, you need to select the id based on values in other columns.
I'm learning migrations and I'm curious how the migration tool figures out which changes to our model were made after the last migration was created.
For example, assume we created a migration M1 and applied it by issuing the command Update-Database. After applying M1, if we add a new property P to a class C and create another migration M2 by issuing the command Add-Migration M2, then the migration tool will somehow figure out that the only change we made to the model (after M1 was created) was adding the new property P to class C. How does the migration tool figure that out?
thank you
REPLY:
Migrations uses the __MigrationHistory table to figure out which migrations have already been applied and which have yet to be applied, but I thought it doesn't use this table to also figure out what changed from one migration to another, since the model data in the migrations table is a hash, which means it can't be decoded, and I assume decoding would be necessary so that the current model metadata can be compared with the latest metadata stored in the migrations table?!
Or are you implying that it is able to figure out, just by comparing the hash values (of the current and stored versions), which properties were changed, deleted, or added to an entity?
It stores your model versions in the database (the migrations history table) and compares your current model with the model stored in your database.
The model is stored in the .resx file under each migration, in the Target resource value. It is an encoded (serialized) model. It is used to compare against your current model and generate the next migration.
Imagine this scenario:
I have an array of IDs for some entities that have to be deleted from the database (i.e. a couple of external keys that identify a record in a third table), and an array of IDs for some entities that have to be updated/inserted (based on some criteria that, at this moment, doesn't matter).
What can I do to delete those entities?
Load them from the DB (the repository way)
Call remove() on the obtained objects
Call flush() on my entity manager
In that scenario I can make all my operations atomic, as I can update/insert other records before calling flush().
But why do I have to load some records from the DB just to delete them? So I wrote my own DQL delete query (in the repository) and call it.
The problem is that if I call that function in my repo, the operation is executed immediately, and so my "atomicity" can't be guaranteed.
So, how can I "jump" over this obstacle while following the second "delete" option?
By using flush() you're making Doctrine start transactions implicitly. It is also possible to use transactions explicitly, and that approach should solve your problem.
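A minimal sketch (the entity class and the $idsToDelete variable are just placeholders here): wrap the DQL delete, the other persist() calls and the flush() in one explicit transaction on the connection:

$conn = $em->getConnection();
$conn->beginTransaction();
try {
    // bulk delete by ID without loading the entities first
    $em->createQuery('DELETE FROM AppBundle\Entity\Item i WHERE i.id IN (:ids)')
       ->setParameter('ids', $idsToDelete)
       ->execute();

    // ... persist() the entities to update/insert here ...

    $em->flush();
    $conn->commit();
} catch (\Exception $e) {
    $conn->rollBack();
    throw $e;
}

This way the DQL delete only becomes permanent together with the rest of the unit of work, so the atomicity you want is preserved.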