How do you delete classes from a Jade database?

When I try to delete a class using the "Remove" option in the Jade class browser, I get the error:
"Class xxx cannot be deleted because:
Classes in an SDS Primary database cannot be deleted from the current schema context".
How can I remove a class?

In the context of an SDS environment, you need to version the schema before you can remove a class (using the 'Remove' option via the IDE for the latest schema version).
The re-org used to transition schema versions is then replayed in the SDS environment, and as part of that its cached metadata is refreshed to reflect the structural changes. I believe class removal is included in this (even if there are no persistent instances), because it would need to discard the redundant class number.

You will want to use the Jade Schema Loader with a command file.
According to the JADE Schema Load User's Guide, the syntax for the command file is:
JadeCommandFile
JadeVersionNumber 7.1.00
Commands
Delete Class ErewhonInvestmentsModelSchema::TenderSale
And you load it on your database server using:
jade.exe schema=RootSchema app=JadeSchemaLoader path=d:\jade\system ini=d:\jade\myjade.ini startAppParameters commandFile=d:\temp\DeleteClass.jcf loadStyle=currentSchemaVersion
Be sure to shut down your database before running the command, or it will not run.

Related

Unable to compile SSDT Database Project with a view that has fully qualified name of table in view definition

We have a SQL 2019 database where all table names are fully qualified in views starting with the database name. We do NOT have the option of avoiding the fully qualified reference as the view definition is auto-generated (otherwise I would simply not fully qualify them). When views are defined by referencing tables within the same database as the view, the SSDT project complains that it has an unresolved reference.
Visual Studio does not allow adding a database reference to itself. The only way I can get it to compile is to create a DACPAC of the same database and then add that as a reference along with removing the database variable ($Name).
Is there any other method of providing fully qualified table names in views without having to create a DACPAC in SSDT project?
The only way I'm aware of would be to take the view code out of the project and handle it in post-deploy scripts. This restriction is by design, because the deployed database name may not be what was defined in the original code.
You can't normally use three- or four-part naming in SSDT. You can work around this by using SQLCMD variables in the code. So, say you have [localhost].[reports].[dbo].[your_table]; you'll need to use [$(ReportServer)].[$(ReportDatabase)].[dbo].[your_table] instead.
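For illustration, a view rewritten that way might look like the sketch below (the object names are the hypothetical ones from above, and ReportServer/ReportDatabase must be declared as SQLCMD variables in the project's properties):
-- Hypothetical view using SQLCMD variables in place of the
-- hard-coded server and database names.
CREATE VIEW [dbo].[vw_report_data]
AS
SELECT t.*
FROM [$(ReportServer)].[$(ReportDatabase)].[dbo].[your_table] AS t;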
I have a DacPac project containing objects which use three-part naming to refer to the containing database (hundreds of instances such as [thisDb].[dbo].[obj] exist). I need to compare and update this database, but the db project fails to build due to 200+ SQL71561 errors.
I did not want to remove the unnecessary database name part or switch to using a database name variable. To successfully build (or compare, and then update) a database that uses three-part or fully qualified naming to refer to itself, there is a way I found to pacify Visual Studio. It's not what I'd prefer, but it works.
1. Create a copy of the original db project.
2. In the copy, update all local database object references to use just two-part names ([dbo].[obj]) instead of three-part names (I used find & replace).
3. Make sure the copy targets the same SQL Server version and builds successfully.
4. Reference the copy from the original db project (whether via database variable, database name only, or dacpac).
The original db project can now build because its references can be resolved. You'll end up with a dacpac for both the original and the copy, but at least the errors are gone and it compiles.

Quarkus Flyway Placeholders Configuration issue

I am having trouble getting quarkus.flyway.placeholders working in my Quarkus app.
I have this line defined in my application.properties file
quarkus.flyway.placeholders.myuser=my_user
in my sql file I have this line
GRANT DELETE, INSERT, SELECT, UPDATE ON survey.answers TO ${myuser};
the error I'm getting is
org.flywaydb.core.api.FlywayException: No value provided for placeholder: ${myuser}. Check your configuration!
Here are the things I've tried:
upgraded to Quarkus 1.13.6.Final
Tried setting
quarkus.flyway.placeholder-prefix=#[
quarkus.flyway.placeholder-suffix=]
As shown in the integration test:
https://github.com/quarkusio/quarkus/tree/main/integration-tests/flyway
Thank you
Matthew
I see you have a typo in your configuration or SQL script.
With the following settings:
quarkus.flyway.placeholder-prefix=#[
quarkus.flyway.placeholder-suffix=]
With those settings, the placeholder in your SQL file should be written as #[myuser], not ${myuser}.
Alternatively, you can change the placeholder-prefix definition in your application.properties file to match the prefix you already have in the SQL file.
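In other words, the prefix/suffix settings and the migration script have to agree; a minimal sketch using the settings above:
# application.properties
quarkus.flyway.placeholders.myuser=my_user
quarkus.flyway.placeholder-prefix=#[
quarkus.flyway.placeholder-suffix=]

-- migration script, using the matching #[...] delimiters
GRANT DELETE, INSERT, SELECT, UPDATE ON survey.answers TO #[myuser];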
In my setup I have two Flyway users:
quarkus.flyway.owner (this one has create privileges)
quarkus.flyway.user (this one has fewer privileges)
The placeholder was not associated with either of them:
quarkus.flyway.placeholders.myuser=my_user
After changing it to quarkus.flyway.owner.placeholders.myuser=my_user
it started working.

Can EF Code First work with LocalDB in a ClickOnce application?

So, I'm trying out EF Code First so that I can have the code drive updates to the database. I'm working on a ClickOnce app using LocalDB, so I figured this may be the best solution for me, since otherwise changes to the MDF file will cause it to be overwritten on the client when deployed, thus losing everything entered before.
However, I'm now having my fair share of all-new problems around Code First Migrations. I've followed a Code First Migrations walkthrough on MSDN, and I've managed to get the initial Configuration created, as well as the initial database creation.
The problems begin when I try to make my first actual migration. I added one single field to one of my models, and tried to make an explicit migration to handle that schema change for the next time I publish. Well...
PM> Add-Migration AddIsPercentField
Unable to generate an explicit migration because the following explicit migrations are pending: [201601052011180_InitialCreate]. Apply the pending explicit migrations before attempting to generate a new explicit migration.
Ok... I'll run update and try again:
PM> Update-Database
Specify the '-Verbose' flag to view the SQL statements being applied to the target database.
Applying explicit migrations: [201601052011180_InitialCreate].
Applying explicit migration: 201601052011180_InitialCreate.
Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending model changes to a code-based migration or enable automatic migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to true to enable automatic migration.
You can use the Add-Migration command to write the pending model changes to a code-based migration.
PM> Add-Migration AddIsPercentField
Unable to generate an explicit migration because the following explicit migrations are pending: [201601052011180_InitialCreate]. Apply the pending explicit migrations before attempting to generate a new explicit migration.
That's familiar, as that's the error (blatant lie?) it just told me earlier. Well, maybe if I undo my changes and update again, it will move to a valid state:
PM> Update-Database
Specify the '-Verbose' flag to view the SQL statements being applied to the target database.
Applying explicit migrations: [201601052011180_InitialCreate].
Applying explicit migration: 201601052011180_InitialCreate.
Running Seed method.
Ok, no warning this time. Should be golden. Field added back, project rebuilt. Here we go:
PM> Add-Migration AddIsPercentField
Unable to generate an explicit migration because the following explicit migrations are pending: [201601052011180_InitialCreate]. Apply the pending explicit migrations before attempting to generate a new explicit migration.
So... is there actually a working way to generate explicit migrations for any changes beyond the first?
EDIT: I made some forward progress on this, I believe. I did notice that the __MigrationHistory table was not generated in my .mdf after running Update-Database, even though it said everything completed just fine. I believe the issue is actually around how the local database works within the application. The connection string references AttachDbFilename=|DataDirectory|. What I think is going on is that it is deploying the .mdf temporarily, updating that temporary deployment, thus ultimately not committing the changes.
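For reference, the connection string in question had roughly this shape (the names here are hypothetical):
<connectionStrings>
  <add name="MyDataConnectionString"
       connectionString="Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\MyData.mdf;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>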
I'm working on a solution I have in mind, which is to have migrations work against a copy of the blank .mdf put in a static location, so that the static .mdf will be used to track and determine changes, while the blank .mdf will be what goes out to clients with the deployment.
I found that the root of my problem was that the console commands were not actually able to make changes to my data file, and thus could not track migrations in it. This was because the connection string referenced a deployed location, so the file being updated was merely temporary.
This was in part a good thing, because the whole point of using Code First Migrations in my project was to avoid a hash signature change to my .mdf (which should have simply remained blank, as a placeholder) when publishing, so that the data from previous versions would never be overridden and discarded. However, that also introduced the obvious (in retrospect) problem that EF could not track changes due to there never being a __MigrationHistory table.
The solution upon which I arrived was to have two .mdf files. The blank one, for deployments, and a second one, to which I would interact with Code First Migrations. So, I have the initial MyData.mdf of Build Action Content, and a second MyDataDesignTime.mdf of Build Action None. (The "Design Time" migration database shouldn't be deployed.)
Using this approach, I found that I could now work successfully with migrations, calling Update-Database and Add-Migration, making sure to pass the -ConnectionString parameter with AttachDbFilename pointed to the full path to my design time database.
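For example, such a call might look like this (the path is hypothetical; EF's migration commands accept -ConnectionString together with -ConnectionProviderName):
PM> Update-Database -ConnectionString "Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=C:\dev\MyApp\MyDataDesignTime.mdf;Integrated Security=True" -ConnectionProviderName "System.Data.SqlClient"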
Later, tired of supplying a long -ConnectionString parameter on every migration command, I added the design-time path to my config connection strings and updated my DbContext so that it uses the design-time connection by default, which I then change at the beginning of run time to point at my actual target data file:
public partial class MyData : DbContext
{
    // Connection string name used by the design-time migration commands.
    public const string DesignTimeConnection = "MyDataConnectionStringDesignTime";

    // Defaults to the design-time connection; reassigned at application startup.
    public static string ConnectionName { get; set; } = DesignTimeConnection;

    public MyData()
        : base("name=" + ConnectionName)
    {
    }
    ...
}
And at application initialization:
MyData.ConnectionName = "MyDataConnectionString";
This works, and it makes things simpler for me. However, the one minor issue I'm left with is that a full static path, which applies only to my environment, is left in the app.config file. It's not currently a problem, as I'm the only dev on this project, but it's a code smell I'm not happy with. Is there some path variable I can use such that it still points to the actual design-time data (not any temporary, deployed file), but does so relative to the active, open project?

Can I run code at Alfresco startup?

I have an Alfresco module that I would like to have do some cleanup when a new version of it is installed.
In the current situation, an older version of the module created a folder node with custom properties at the root of the repository. We've since decided to have multiple such nodes, and none of them at that location. I'd like to put into the next version of the module code that would run at Alfresco startup, check for the existence of the old node, copy its properties into the appropriate new nodes, and delete the old node.
Is such a thing possible? I've looked at the Bootstrap configuration file, but that appears to only allow one to add things to the repository, not modify or delete them.
My suggestion is that you write a patch. That is, a class that extends
org.alfresco.repo.admin.patch.AbstractPatch
Then you can do pretty much anything you want on bootstrap (except executing searches against Solr, since it won't be available).
Add some spring configuration, take a look at the file patch-services-context.xml for inspiration.
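A minimal sketch of such a patch (the class name, message key, and cleanup logic here are hypothetical placeholders):
import org.alfresco.repo.admin.patch.AbstractPatch;
import org.springframework.extensions.surf.util.I18NUtil;

public class FolderCleanupPatch extends AbstractPatch
{
    private static final String MSG_SUCCESS = "patch.folderCleanup.result";

    @Override
    protected String applyInternal() throws Exception
    {
        // Locate the old folder node, copy its custom properties onto
        // the new nodes, then delete it (e.g. via an injected NodeService).
        return I18NUtil.getMessage(MSG_SUCCESS);
    }
}
The bean definition then wires in the patch id, description, and schema numbers, following the examples in patch-services-context.xml.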
Yes, you can do that; you probably missed the correct place in the documentation:
If you open Import Strategy you'll find a section Per BootstrapView; you should be using something like REPLACE_EXISTING or UPDATE_EXISTING for your ACP-packaged content (if you're using ACPs as your bootstrap importing strategy).
Here is a more detailed description of the UUID Bindings values.
Hope that helps.
You can use patches.
When the Alfresco server starts, it applies patches and executes database updates, etc.
Definition:
A patch is a piece of Java code that executes once when Alfresco Content Services starts. Custom patches can be implemented.
Documentation Link

How a bundle can provide "default data" i.e. pre-filled tables in Symfony 2?

I think I have a good understanding of Symfony and how bundles work.
However, I've never found how to solve a simple problem: making a reusable bundle that provides data, such as tables/Doctrine entities pre-filled with, e.g., all country names in the world, all provinces of Italy, or tax rate history in England.
Of course, the purpose is to provide forms, services, and controllers relying on this data source, without the need to copy and paste tables and entities across projects.
How would you do that?
Data fixtures IMHO are not an option, for an obvious reason: you would be purging your database while it's running.
A custom command reading from a static data source (JSON, YAML) and performing inserts/updates?
The first step is declaring a Doctrine entity in your bundle. I think you should create DataFixtures to populate your data into the db.
You should maybe consider using seeds instead of fixtures.
Fixtures are fake data, used to test your application.
Seeds are the minimal data required for your application to work.
Technically, these are exactly the same thing: you declare them under the DataFixtures/ folder and import them with the doctrine:fixtures:load command.
You can create a folder "Fixtures/", and a folder "Seeds/" under the folder "DataFixtures", then load your seeds with the command
php app/console doctrine:fixtures:load --fixtures=/path/to/seeds/folder --append
It was suggested in the comments that it may be safer, especially in a production environment, to create a custom Symfony2 command to force the "--append" mode. Without this mode, your database will be purged, and you could lose your production data.
This answer assumes you're using Composer to install your bundles (and that you really do exclude fixtures as an option).
What you can do is make an SQL export of the data you want, making sure it uses INSERT IGNORE INTO and has the correct unique constraints.
Then you save that file somewhere in your bundle, in a "data" or "fixtures" folder.
so your path to that file will be like:
"vendor/company/epicbundle/data/countries.sql"
What you can then do is add post-install and post-update commands in your composer.json, which look like this:
"post-install-cmd": [
"php app/console doctrine:query:sql \"$(cat vendor/company/epicbundle/data/countries.sql)\""
]
If you only want it to run on install, you add it only there; if you sometimes update the SQL file, you also add it to the post-update-cmd.
Please note that this solution only works if people don't tamper with the table names; otherwise the queries will fail.
If you want a safer/more stable solution, you can write your own post-install script in Symfony that uses the entity manager, and there you can use, for example, a CSV file and insert/update it row by row.
Basically, anything you could implement would surely rely on the persistence mechanisms used in your ORM/ODM/whatever. So you'll end up implementing a typical fixture-loading mechanism, at least partially: you'd execute code that saves some provided data; if it's serialized, you'd do XML/JSON/YAML parsing (but this is just a technicality) and persist the results into the database.
Thus, it's not bad to stick with Doctrine Fixtures. They are programmable and extensible (you can even fetch your data from the web upon loading).
As stated in #paul-andrieux's answer, if you are worried about data loss (e.g. your bundle's seeds are loaded when the end user's DB is already up), you should use doctrine:fixtures:load --append and let the constraints do their job (e.g. in a country names table you'd have a unique constraint on the country name, or even a 'slug'), so that inserting a duplicate row silently fails for that single entity, in case your bundle has updated the country list and the end user had a previous version.
If you really worry about your end users' data you could write a wrapper for the doctrine:fixtures:load command that would have the --append flag always on and register it as a separate command. (You could run needed migrations there, too)
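A sketch of such a wrapper under Symfony 2 conventions (the bundle namespace, command name, and seeds path are hypothetical):
<?php
namespace Acme\SeedBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\ArrayInput;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class LoadSeedsCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('acme:seeds:load')
             ->setDescription('Loads bundle seed data, always in append mode');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // Delegate to doctrine:fixtures:load with --append forced on,
        // so the end user's database is never purged.
        $command = $this->getApplication()->find('doctrine:fixtures:load');
        $arguments = new ArrayInput(array(
            '--append'   => true,
            '--fixtures' => __DIR__ . '/../DataFixtures/Seeds',
        ));
        $arguments->setInteractive(false);

        return $command->run($arguments, $output);
    }
}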
#lxg's hard-coded IDs problem is solvable, too. Try using natural keys where applicable (e.g. the countries table would have a slug primary key that would be great-britain for Great Britain). This way your lookups would be pretty easy: $em->find('\MyBundle\Country', 'great-britain');. If you cannot come up with a natural key, then maybe the entity is not really needed by the end user.
UPD. Here's an article that could be useful: http://www.craftitonline.com/2014/09/doctrine-migrations-with-schema-api-without-symfony-symfony-cmf-seobundle-sylius-example/
Generally speaking, the bundle embeds the entities that will be loaded via the ORM/ODM using its built-in commands (like doctrine:schema:update, doctrine:migrations:diff, ...) and provides a custom command that loads the required fixtures using the ODM/ORM.
This command can read the fixtures in multiple ways (parsing YAML, XML, raw SQL, DQL, ...); it is just a matter of taste. Tons of bundles and parsers exist for those tasks.
In your documentation, you just have to state clearly that the developer must run this command after your bundle installation and schema update.
