Is there a way or a plugin to configure SQL Dialects as DQL in PhpStorm? - symfony

In PhpStorm, I can change global, project, or directory settings to one of the existing SQL Dialects, but is there a way to configure a SQL Dialect for Symfony/DQL in PhpStorm, or a way to make it detect that App:Panel is a valid entity rather than a table? (The App:Panel entity's table name is te_panel.)
I read this answer, which explains that we would have to write a Java plugin, because it's currently not possible to add a new SQL Dialect to PhpStorm.
As an example, this is an error that PhpStorm throws:
The : between App and Panel is not understood. PhpStorm cannot resolve the table name provided (because I am providing the name of the Symfony entity, not the table).

DQL is not supported.
https://youtrack.jetbrains.com/issue/WI-9948 -- watch this ticket (star/vote/comment) to get notified of any progress.
You may try to treat App:Panel as a placeholder (similar to how it was described in that linked question). But I have no idea if it will help (I have not really worked with Symfony/DQL, so I cannot test it myself).
What I may suggest, though -- treat the whole query as plain text. Yes, no syntax highlighting and such, but it will not show errors either.
How? One way is by placing a special comment just before the string, e.g.
->query(/** @lang text */ 'SELECT ...');
Or by disabling the Language Injection rule for SQL altogether (under Settings | Editor | Language Injections).
Alternatively try what has been suggested in this comment -- custom SQL detection syntax(?): https://gist.github.com/willemnviljoen/d20ad8ad0cc365a7e80744328246610f


How to run liquibase changelogSyncSQL or changelogSync up to a tag and/or labels?

I'm adding liquibase to an existing project where tables and data already exist. I would like to know how I might limit the scope of changelogSync[SQL] to a subset of available changes.
Background
I've run liquibase generateChangeLog to capture the current state and placed this into say src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__init01.yaml.
I've also added another changeset to cover some new requirements in a new file. Let's call it src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__new-feature.yaml.
I've added a main changelog file src/main/resources/db/changelog/db.changelog-master.yaml with the following contents:
databaseChangeLog:
  - includeAll:
      path: changes
      relativeToChangelogFile: true
I now want to ensure that when I run liquibase changelogSync[SQL] against a particular version of the db, the scope is limited to the first changelog, init01, thereby allowing a liquibase update or updateToTag et al. to continue from that point with the changes following init01.
I'm surprised to see that the changelogSync[SQL] commands don't seem to offer any way (that I can see from the docs) to do this.
Besides printing the SQL and manually changing it, is there something I've missed? Any suggested approaches welcome. Thanks!
What about changelogSyncToTagSQL? Wouldn't it cover your needs?
Or maybe you could try changelogSyncSQL with the additional "labels" and/or "contexts" parameters?
See the Liquibase docs for changelogSyncToTagSQL, context, and labels.
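For example, you could add a tagDatabase changeset at the end of the init01 changelog and then sync to that tag. This is only a sketch: the id, author, and tag names are illustrative, and the exact CLI spelling (changelogSyncToTagSQL vs. changelog-sync-to-tag-sql --tag=...) depends on your Liquibase version.
databaseChangeLog:
  # ... the generated init01 changesets ...
  - changeSet:
      id: tag-init01
      author: yourname
      changes:
        - tagDatabase:
            tag: init01
Running changelogSyncToTagSQL with tag init01 should then emit sync SQL only for the changesets up to and including that tag, leaving the new-feature changesets available for a later update.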
As it stands, the only solution I've found is to generate the SQL and then manually edit its contents to filter out the change sets which don't correspond to your current schema.
Then you can apply the SQL to the db.

AEM 6.2 replication issue for node containing invalid JCR names

I am working on an AEM 5.6 to 6.2 upgrade project. There are some nodes in the AEM 5.6 environment whose names contain invalid characters per the JCR naming convention (rte[2] is one such node name), but somehow we were able to replicate those nodes in the 5.6 environment. After upgrading to AEM 6.2, it seems JCR is more restrictive and won't allow nodes with invalid characters to be replicated.
Getting the below error in AEM 6.2:
com.day.cq.replication.ReplicationException: Repository error during node import: Cannot create a new node using a name including an index
Is there any way we can configure AEM 6.2 to stop checking JCR node names? Or any other solution?
JCR 2.0 does not allow [ as a valid character in node names, so you won't get an easy workaround for this. It's one of the limitations, just like same-named siblings.
My recommendation would be to modify these nodes before the upgrade/migration to 6.2. This can be complicated and costly for the business, but 6.2 won't allow them.
As background, [ was allowed in older versions due to twisted support for the grammar syntax of same-named siblings.
I'm assuming these are all content nodes, as nothing out-of-the-box in AEM 5.x follows this naming convention.
Some ways to fix it:
Write a custom servlet to query and rename the paths across all references. You will have to test your content after these renames.
Use Groovy console (https://github.com/OlsonDigital/aem-groovy-console) to rename the nodes.
In either case, you will need to modify the nodes before the migration, as the structure is not Oak-compliant, so you cannot use crx2oak commit hooks either. This can be done with both an in-place upgrade and a side-by-side migration. It is similar to the problem with same-named siblings, which must also be corrected before the migration.
Some efficiency techniques that might help:
Write queries to find invalid node names under top-level nodes like /content/mysite-a, /content/mysite-b, etc. Don't run root-level queries on /content, as they might degrade to a traversal and halt the execution.
Ensure that all references are updated in the same commit. If you are using a custom servlet, call session.save() only after updating all the node names and their corresponding references.
As I mentioned in the comment, this replication failure is caused by the Oak workspace restriction in the code snippet below:
// handle index
if (oakName.contains("[")) {
    throw new RepositoryException("Cannot create a new node using a name including an index");
}
I feel you can't escape this constraint, as it is required by the repository to maintain consistency.
You can find nodes whose names contain '[' with the query below:
SELECT [jcr:path] FROM [nt:base] WHERE ISDESCENDANTNODE('/content/path/') AND [jcr:path] LIKE '%\[%'
To modify the JCR/CRX nodes, you can use curl or the SlingPostServlet.
Some helpful posts are below:
https://github.com/paulrohrbeck/aem-links/blob/master/curl_cheatsheet.md
http://sling.apache.org/site/manipulating-content-the-slingpostservlet-servletspost.html
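For example, a SlingPostServlet move with curl might look like the sketch below. The instance URL, credentials, and paths are illustrative, and whether the bracketed source path resolves at all depends on how the invalid name is stored in your repository.
curl -u admin:admin \
  -F ":operation=move" \
  -F ":dest=/content/mysite/en/rte-2" \
  "http://localhost:4502/content/mysite/en/rte%5B2%5D"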
Can you try migrating with a tool like oak-upgrade and let us know if you are still facing this issue?
The tool is robust, and it gives you the flexibility to configure specific sub-trees for migration.

Disabling Flyway Placeholder Validation

What I understood after upgrading my Flyway version (because of some requirements) is that flyway-core 2.2 introduced validation for Flyway placeholders.
Now, the placeholder syntax convention, ${name}, is uniform across most libraries. In our migration scripts, we insert a string into a MySQL table column called stretchySql, and that string holds some placeholder elements of our own, which are meant to be interpreted at runtime by the application layer.
UPDATE `stretchy_parameter` SET `parameter_sql`='select r.id as report_id from stretchy_report r where r.report_category = \'${reportCategory}\'
I don't want Flyway to interpret something embedded in a string as its own placeholder and throw an error. So basically, is there some way to switch off Flyway placeholder validation (since we don't use it) without reverting to an older version?
As of Flyway 3.2.1 (not sure exactly when it was introduced), there is a boolean setting called flyway.placeholderReplacement, which defaults to true; you can use it to effectively disable the feature. On the API it can be accessed via the setPlaceholderReplacement(value) and isPlaceholderReplacement() methods of a Flyway object.
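For example, in flyway.conf (the same setting is available as -placeholderReplacement=false on the command line):
flyway.placeholderReplacement=false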
I know 2.2 probably didn't have that option and the accepted answer was probably the best way to solve it, but I decided this alternative could be useful to users of newer versions bumping into this page.
While you can't disable it, you can set the placeholder prefix or suffix to something that will never match. This will effectively achieve the same thing.
http://flywaydb.org/documentation/api/javadoc/org/flywaydb/core/Flyway.html#setPlaceholderPrefix(java.lang.String)
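For example (the value is arbitrary; it just needs to be something that never occurs in your scripts):
flyway.placeholderPrefix=$$UNUSED$${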

Plone Conditional - If Content Type is Versionable

Is there a simple way to check if a content-type, or a specific object, has Versioning enabled/disabled in Plone (4.3.2)?
For context, I am making some unique conditionals around portal_actions. So instead of checking path('object/@@iterate_control').checkout_allowed(), I need to first see if versioning is even enabled. Otherwise, the action in question does not display for items that have versioning disabled, because obviously checkout isn't allowed for them.
I didn't have any luck with good ole Google, and couldn't find this question anywhere here, so I hope it's not a dupe. Thanks!
I was able to get this working by creating a new script, importing getToolByName, and checking the current content type against portal_repository.getVersionableContentTypes(), then including that script in the conditional.
I was looking for something like this that already existed, so if anyone knows of one, let me know. Otherwise, I've got my own now. Thanks again!
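For reference, a minimal sketch of such a Script (Python) -- the names are illustrative, and it assumes the standard portal_repository tool from Products.CMFEditions:
# Script (Python) body: return True when versioning is enabled
# for the current object's content type.
from Products.CMFCore.utils import getToolByName

portal_repository = getToolByName(context, 'portal_repository')
return context.portal_type in portal_repository.getVersionableContentTypes()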
The first thing that checkout_allowed does is check if the object in question supports versioning at all:
if not interfaces.IIterateAware.providedBy(context):
    return False
(the interface being plone.app.iterate.interfaces.IIterateAware):
class IIterateAware(Interface):
    """An object that can be used for check-in/check-out operations.
    """
The semantics of Interface.providedBy(instance) are a bit unfortunate for use in conditions or TAL expressions, because you'd need to import the interface, but there's a reversal helper:
context.portal_interface.objectImplements(context,
    'plone.app.iterate.interfaces.IIterateAware')
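With that, a portal_actions condition could look something like this (illustrative):
python: context.portal_interface.objectImplements(context, 'plone.app.iterate.interfaces.IIterateAware')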

Schema qualified tables with SQLAlchemy, SQLite and Postgresql?

I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:
class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema': 'winter'}

    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))
This code works great with PostgreSQL but fails on the table and foreign key names when using SQLite (due to SQLite's lack of schema support):
sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
I'd like to continue using SQLite for dev and testing. Is there a way to have this fail gracefully on SQLite?
It's hard to know where to start with that kind of question. So . . .
Stop it. Just stop it.
There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one--moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.
Install PostgreSQL.
When you can use the same database environment for development, testing, and deployment, you should.
Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.
Seriously.
I'm not sure if this works with foreign keys, but someone could try SQLAlchemy's Multi-Tenancy Schema Translation for Table objects. It worked for me, though I used custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.
The schema translation map can be passed directly to the engine creator:
...
if dialect == "sqlite":
    url = "sqlite:///:memory:"
    execution_options = {"schema_translate_map": {"winter": None, "summer": None}}
else:
    # "pass" is a Python keyword, so use a different name for the password
    url = f"postgresql://{user}:{password}@{host}:{port}/{name}"
    execution_options = None
engine = create_engine(url, execution_options=execution_options)
...
Here is the doc for create_engine. There is another question on SO which might be related in that regard.
But one might get colliding table names if all schema names are mapped to None.
I'm just a beginner myself, and I haven't used Pylons, but...
I notice that you are combining the table and the associated class together. How about if you separate them?
import sqlalchemy as sa

meta = sa.MetaData('sqlite:///tutorial.sqlite')
schema = None

hockey_table = sa.Table('hockey', meta,
    sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
    sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
    schema=schema,
)

meta.create_all()
Then you could create a separate
class Hockey(object):
    ...
and
mapper(Hockey, hockey_table)
Then just set schema = None everywhere above if you are using SQLite, and to the value(s) you want otherwise.
You don't have a working example, so the example above isn't a working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
HTH, Regards.
I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and sqlite in development.
The solution was to register an event listener for when the engine calls the "connect" method.
@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')
Using the ATTACH statement only once will not work, because it affects only a single connection. This is why we need the event listener: to run the ATTACH statement on every connection.
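Putting it together, a minimal sketch -- the file and schema names are illustrative, and Base refers to the declarative base from the model in the question:
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///main.db")

@sa.event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # Runs for every new DBAPI connection in the pool, so that
    # schema-qualified names like "winter.hockey" resolve everywhere.
    dbapi_connection.execute('ATTACH DATABASE "winter.db" AS "winter"')
    dbapi_connection.execute('ATTACH DATABASE "summer.db" AS "summer"')

# "winter.hockey" and "summer.baseball" now map to the attached files.
Base.metadata.create_all(engine)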
