My project runs on Symfony 4.4 with doctrine-migrations-bundle 3.2.1.
I used columnDefinition to declare a special column:
columnDefinition="VARCHAR(50) GENERATED ALWAYS AS (IF(ISNULL(`cabinet_id`), 'null', `cabinet_id`)) VIRTUAL"
It works perfectly, but now every time I call doctrine:migrations:diff, the generated migration tries to change the column to the very same definition:
ALTER TABLE MonitoringReportUpdate CHANGE virtual_null `virtual_null` VARCHAR(50) GENERATED ALWAYS AS (IF(ISNULL(`cabinet_id`), \'null\', `cabinet_id`)) VIRTUAL
Even if I run this ALTER and then call doctrine:migrations:diff again, I see the same query to execute:
ALTER TABLE MonitoringReportUpdate CHANGE virtual_null `virtual_null` VARCHAR(50) GENERATED ALWAYS AS (IF(ISNULL(`cabinet_id`), \'null\', `cabinet_id`)) VIRTUAL
Did I use columnDefinition wrong, or is this just a bug? Or is it possible to ignore this column when generating a migration?
This is something Doctrine is not able to analyze properly, so it cannot consider the column "already done" and treats it as missing each time you generate a diff.
I have already had to try such "complex" Doctrine column definitions and did not find a good workaround.
My best answer would be: something in your code requires that kind of behavior from your database; avoid it and adapt your code instead.
I'm adding Liquibase to an existing project where tables and data already exist. I would like to know how I might limit the scope of changelogSync[SQL] to a subset of the available changes.
Background
I've run liquibase generateChangeLog to capture the current state and placed this into say src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__init01.yaml.
I've also added another changeset to cover some new requirements in a new file. Let's call it src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__new-feature.yaml.
I've added a main changelog file src/main/resources/db/changelog/db.changelog-master.yaml with the following contents:
databaseChangeLog:
  - includeAll:
      path: changes
      relativeToChangelogFile: true
I now want to ensure that when I run liquibase changelogSync[SQL] against a particular version of the db, the scope is limited to the first changelog, init01, so that from that point on a liquibase update or updateToTag et al. can continue with the changes following init01.
I'm surprised to see that the changelogSync[SQL] commands don't seem to offer a way (that I can see from the docs) to do this.
Besides printing the SQL and manually changing it, is there something I've missed? Any suggested approaches welcome. Thanks!
What about changelogSyncToTagSQL? Wouldn't it cover your needs?
Or maybe you could try changelogSyncSQL with the additional "labels" and/or "context" parameters?
See the docs: changelogSyncToTagSQL, context, labels.
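For example, a sketch of the tag route (the changeset id/author are placeholders, and the kebab-case command syntax assumes Liquibase 4.4+; older versions use the camelCase changelogSyncToTag equivalent): append a tagDatabase changeset to the end of the init01 changelog:

  - changeSet:
      id: tag-init01
      author: yourname
      changes:
        - tagDatabase:
            tag: init01

Then mark everything up to and including that tag as already applied, and continue with normal updates afterwards:

liquibase changelog-sync-to-tag --tag=init01
liquibase update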
As it stands, the only solution I've found is to generate the SQL and then manually edit its contents to filter out the change sets which don't correspond to your current schema.
Then you can apply the SQL to the database.
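A sketch of that flow (the output file name and the psql client are assumptions; use whatever SQL client fits your database):

liquibase changelogSyncSQL > sync.sql
# edit sync.sql so only the DATABASECHANGELOG inserts for the init01 change sets remain
psql -d mydb -f sync.sql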
In PhpStorm, I can change global, project, or directory settings with the existing SQL dialects, but is there a way to configure a Symfony/DQL dialect in PhpStorm, or a way to make it detect that App:Panel is a valid entity, not a table? (The App:Panel table name is te_panel.)
I read this answer which explains that we have to add a Java plugin, because it's currently not possible to add a new SQL Dialect on PhpStorm.
As an example, this is an error that PhpStorm throws:
The : between App and Panel is not understood. PhpStorm cannot resolve the table name provided (because I am providing the name of the Symfony entity, not the table).
DQL is not supported.
https://youtrack.jetbrains.com/issue/WI-9948 -- watch this ticket (star/vote/comment) to get notified on any progress.
You may try to treat App:Panel as a placeholder (similar to how it was described in that linked question). But I have no idea if it will help (I have not really worked with Symfony/DQL so cannot test it myself).
What I may suggest though: treat the whole query as plain text. Yes, no syntax highlighting and such, but it will not show errors either.
How? One way is by placing a special comment just before the string, e.g.
->query(/** @lang text */'SELECT ...');
Or by disabling the Language Injection rule for SQL altogether.
Alternatively, try what has been suggested in this comment -- custom SQL detection syntax(?): https://gist.github.com/willemnviljoen/d20ad8ad0cc365a7e80744328246610f
I changed persistence.xml, and I also changed the column definition (columnDefinition="XDB.XMLType") for the XML fields.
I checked the OpenJPA (http://openjpa.208410.n2.nabble.com/Oracle-XMLType-fetch-problems-td6208344.html) and IBM (http://www.ibm.com/support/knowledgecenter/SS7J6S_7.5.0/com.ibm.wsadapters.jca.jdbc.doc/env/doc/rjdb_problemsolutions.html) sites.
My environment is OpenJPA 2.0 on WAS 7.
It's throwing this exception:
org.apache.openjpa.persistence.PersistenceException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.XMLTYPE", line 169
Please suggest how I can handle SYS.XMLTYPE data without changing OpenJPA 2.0 (as it is part of IBM WebSphere Application Server V7.0). I am migrating my application from DB2 to Oracle in the same environment.
Writing XML data can be tricky sometimes! Getting the correct drivers and things defined properly can have its challenges. I cannot say exactly what you need to do given the lack of info on your domain model and such, but let me give some general things to look for. First, there is an XML test in the OpenJPA test framework if you want to make reference to it. It can be seen publicly here:
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/jdbc/oracle/
Or, another test using an "XMLValueHandler" (likely this is beyond the scope of what you are looking for):
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/xmlmapping/query/
Second, (stating the obvious) I assume you have a column in Oracle defined as "XMLTYPE". Also, I see you are using the SYS schema. I'm sure you are aware, but this is a system/admin schema; just for sanity's sake you might want to first get things running using a non-system/admin schema, so we don't get hung up on permission issues with your OpenJPA client.
Next, you need the following definition:
@Lob @Basic
@Column(name = "ANXMLCOLUMN", columnDefinition="XMLCOLUMN XMLType")
private String anXMLString;
The @Lob will, I think, be necessary if you are using data greater than 4000 chars (this was mentioned in one of the comments). To start I'd use a very small set of data (a couple of characters); once that works, then experiment with > 4k.
Next, make sure to use the correct JDBC driver. The last time I experimented with an XMLType I used the Oracle JDBC 11.2.0.2 driver.
Finally, you might need to use the property "openjpa.jdbc.DBDictionary" with value "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)". Again, experiment with this AND look at the OpenJPA documentation on these properties to determine whether they are necessary in your scenario. I think supportsSetClob=true will only be necessary for older versions (pre-2.2.x) of OpenJPA. You might also need to use the property "openjpa.jdbc.SchemaFactory" with value "native". I would suggest you first try without either of these two properties; if that doesn't help, then experiment with them. I know this is vague, but I don't know what your DDL or domain model looks like, so I have to keep it vague.
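For reference, those switches would live in persistence.xml as standard JPA property entries; a sketch (apply only the ones you turn out to need):

<properties>
    <property name="openjpa.jdbc.DBDictionary"
              value="oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)"/>
    <property name="openjpa.jdbc.SchemaFactory" value="native"/>
</properties>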
Thanks,
Heath Thomann
Using Django 1.8 with a SQLite database.
I made a change in models.py that consisted of eliminating two of the four elements from a field's choices and editing one of the remaining two:
Original value:
OPTION_INSPECTOR = (
    ('John Smith', 'John Smith'),
    ('Taylor', 'Taylor'),
    ('Hale', 'Hale'),
    ('James', 'James'),
)
New value:
OPTION_INSPECTOR = (
    ('John Smith', 'John Smith'),
    ('Greg Taylor', 'Greg Taylor'),
)
I then ran:
python manage.py makemigrations my_app
I saw nothing unusual, only the changes that I made, if I recall correctly. I do not have the actual output.
Then I ran:
python manage.py migrate my_app
Again, I don't remember anything unusual. But the dropdown menu on the admin site still shows the old choices. Can anyone suggest a way to make this change successfully?
Thanks!
I am continually changing my app based on feedback from researchers, and I have had a lot of issues like what you are describing. I have found that there are a lot of changes I can make to the models before the app totally breaks and I have to go back and "remake" it with a fresh models.py file, but eventually that is what I have had to do. In the meantime (before I remake the app), I make the changes to models.py, then run makemigrations and then migrate. I look at what the errors are, and I make the changes to fix them directly in the database (in my case, MySQL). Typically, I have to do this to add or alter fields. Then I re-run makemigrations and migrate, and I get warnings about duplicate fields, but the app keeps working. Like I said though, eventually I have to remake the app with a fresh models.py file to get rid of all the warnings.
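The kind of direct database fix described above might look like this (purely illustrative; the table and column names are hypothetical, and MySQL syntax is assumed):

ALTER TABLE my_app_inspection MODIFY inspector VARCHAR(100) NOT NULL;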
I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:
class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema': 'winter'}
    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))
This code works great with PostgreSQL, but fails on the table and foreign key names when using SQLite (due to SQLite's lack of schema support):
sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
It's hard to know where to start with that kind of question. So . . .
Stop it. Just stop it.
There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one--moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.
Install PostgreSQL.
When you can use the same database environment for development, testing, and deployment, you should.
Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.
Seriously.
I'm not sure if this works with foreign keys, but one could try SQLAlchemy's Multi-Tenancy Schema Translation for Table objects. It worked for me, though I used custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.
The schema translation map can be passed directly to the engine creator:
...
if dialect == "sqlite":
    url = "sqlite:///:memory:"
    # map both schema names to None so SQLite ignores the qualifiers
    execution_options = {"schema_translate_map": {"winter": None, "summer": None}}
else:
    # note: 'pass' is a reserved word in Python, hence 'password'
    url = f"postgresql://{user}:{password}@{host}:{port}/{name}"
    execution_options = {}
engine = create_engine(url, execution_options=execution_options)
...
Here is the doc for create_engine. There is another question on SO which might be related in that regard.
But one might get colliding table names if all schema names are mapped to None.
I'm just a beginner myself, and I haven't used Pylons, but...
I notice that you are combining the table and the associated class together. How about separating them?
import sqlalchemy as sa
meta = sa.MetaData('sqlite:///tutorial.sqlite')
schema = None
hockey_table = sa.Table('hockey', meta,
    sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
    sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
    schema=schema,
)
meta.create_all()
Then you could create a separate
class Hockey(object):
...
and
mapper(Hockey, hockey_table)
Then just set schema = None above everywhere if you are using SQLite, and to the value(s) you want otherwise; a tiny sketch of that switch follows.
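For instance (the environment variable name is made up):

import os

# real schema names on PostgreSQL, no schemas on SQLite
schema = None if os.environ.get("DB_DIALECT") == "sqlite" else "winter"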
You didn't post a working example, so the example above isn't a working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
HTH, Regards.
I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and sqlite in development.
The solution was to register an event listener for when the engine calls the "connect" method.
@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')
Running the ATTACH statement only once will not work, because it affects only a single connection. This is why we need the event listener: to issue the ATTACH statement on every connection.
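Putting the pieces together for the schemas from the original question, a minimal sketch (the file names main.db and winter.db are made up; note that SQLite does not enforce foreign keys across attached database files):

import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {"schema": "winter"}
    hockey_id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine("sqlite:///main.db")

@event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # runs for every new connection in the pool, so each one
    # sees "winter" as a valid schema name; repeat per schema
    dbapi_connection.execute('ATTACH DATABASE "winter.db" AS "winter"')

Base.metadata.create_all(engine)  # creates winter.hockey inside winter.db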