How to generate a SQLite database from a CDM model in PowerDesigner?

I have a Conceptual Data Model in PowerDesigner and I want to generate a SQLite database from it. How can I do that?

I have started creating a DBMS definition for SQLite 3.

When you generate the Physical Data Model, you can select ANSI Level 2 in the drop-down for database selection. It works flawlessly that way. Confirm your choice when you generate the script.
Just make sure to remove or comment out the DROP statements at the beginning of the resulting script, and you should not get any errors when running the script in a database client.
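For illustration only, here is a minimal sketch of what the top of such a generated script might look like once the DROP statements are commented out; the table and column names are invented and will depend entirely on your model:

-- drop statements emitted at the top of the generated script,
-- commented out so a first run against an empty SQLite file does not fail
-- drop table CUSTOMER;

create table CUSTOMER (
   CUSTOMER_ID   integer       not null,
   NAME          varchar(100)  not null,
   constraint PK_CUSTOMER primary key (CUSTOMER_ID)
);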

Use the Tools -> Generate Physical Data Model... command and select an appropriate database from the list (probably the ODBC or ANSI options, since SQLite isn't an out-of-the-box option).
Or, you could first create a database XEM for SQLite, but that's a pretty advanced task. I'd stick with the generic option if possible.

Related

How to keep track of database changes

I'm working with Progress 11.6, the AppBuilder and the Procedure Editor (and the Data Dictionary).
We regularly make modifications to the customer's database; there are two types of modifications:
Modifications of the structure: these are done using the interactive GUI of the Data Dictionary.
Modifications of the data: these are done using the Procedure Editor.
An example of a data modification in the procedure typically looks like this:
FOR EACH Table1 WHERE Table1.Field1 = <value>:
    CREATE Table2.
    Table2.Field1 = <value>.
    Table2.Field2 = <some-other-value>.
END.
This completely contradicts one of the basics of software delivery quality, repeatability: there is no way to return to the previous situation!
Therefore I'm looking for ways to do this in an (automatable) repeatable way, hence my questions:
What can we use instead of the interactive GUI of data dictionary (without undo feature) in order to perform/undo database structure modifications?
What can we do in order to undo database data modifications? (Is there something like an Oracle redo log or an Oracle archive log in Progress?)
In case you say, "What are you talking about? You can do 'Undo transaction' in the data dictionary.", I mean the following:
I perform a transaction using the data dictionary, I leave the data dictionary, and a day later the customer complains. When I open the data dictionary at that moment, the "Undo transaction" feature is disabled.
At a high level you should be creating "df files" (DDL scripts) and applying those to the customer database rather than manually making changes. There are many ways to create those files and you can automate the entire process with the appropriate tooling.
One of the most common ways to create a df file is to create whatever new schema you need in your development database and then use the "create an incremental df" facility in the data dictionary tool. This tool compares the development database schema to the target schema and builds a "df file" (DDL script) of the differences. You could connect directly to the target db for this process or you could have an empty skeleton db that you use for this.
How to create an incremental df file
(If you then reverse the comparison you can also create a reversing df file to undo the changes.)
Most df files consist of additions - new tables, new fields, new indexes. These can all be added online and that can all be completely scripted. And, of course, the individual df files and all of the supporting scripts can (and should) be stored in a repository (like git or whatever).
As for the data change scripts... there's no reason that those programs cannot be written as actual programs and saved in a repository. You can enclose the whole update in a transaction and UNDO it if that is appropriate. For what it is worth, I personally do not think that is a very good idea. Especially when large amounts of data are involved you really don't want to be creating monstrous multi-gigabyte undo logs. You're better off with a second "reversing transaction" script that will roll things back piecemeal. A side benefit is that you can still use that if you decide to back out the change a day or three afterwards.
The really gory details are going to depend on your development process, the customer's change management process, and the tooling available. It kind of sounds like there is not much process or tooling at either end of this relationship, so you probably have a lot of adventures ahead of you!

Create backup of BigQuery clustered table

I have a clustered, partitioned table exported from GA 360 (image attached). I would like to create an exact replica of it. Using the Web UI it's not possible, and I created a backup table using the bq command-line tool, still with no luck.
Also, whenever we check the preview it has a day filter, which looks like this:
Whenever data is appended to the backup table, I don't find this filter there, even though this option was set to true while creating the table.
If you can give more context about handling this kind of table, it would be helpful.
Those are indeed sharded tables. As explained by N. L, they follow a time-based naming approach: [PREFIX]_YYYYMMDD, and they then get grouped together. The backup procedure already explained seems correct. Anyhow, I would recommend using partitioned tables instead, as they are easier to back up and generally perform better.
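As a rough sketch of that recommendation (the project and dataset names below are invented), the daily shards can be consolidated into a single date-partitioned table with Standard SQL, deriving the partition date from the wildcard suffix:

CREATE TABLE `my-project.analytics.ga_sessions_partitioned`
PARTITION BY session_date AS
SELECT PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS session_date, *
FROM `my-project.analytics.ga_sessions_*`
-- keep only the daily YYYYMMDD shards (skips e.g. intraday tables)
WHERE REGEXP_CONTAINS(_TABLE_SUFFIX, r'^[0-9]{8}$');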
This is not a clustered / partitioned table. It is a sharded, non-partitioned table with a common prefix. Once you create multiple tables with the same prefix, they are shown grouped under that prefix.
Ex:
ga_session_20190101
ga_session_20190102
both these tables will be grouped together.
To back up these tables you need to create a script that copies each source table to a destination table with the same name, and execute that script with the bq command-line tool under the same project.
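If you would rather stay in SQL than script the bq copy command, a rough per-shard equivalent (hypothetical project and dataset names) is a CREATE TABLE ... AS SELECT into a backup dataset, repeated for each shard:

CREATE TABLE `my-project.backup.ga_sessions_20190101` AS
SELECT *
FROM `my-project.analytics.ga_sessions_20190101`;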

Reorganise Stored Procedures display in SQL Server Management Studio

I'm currently working with an ASP.NET web application which involves a lot of stored procedures.
Since I'm also using the ASP.NET Membership/Roles/Profile system, the list of stored procedures displayed in Microsoft SQL Server Management Studio is really becoming something of a pest to navigate. As soon as I open the Programmability/Stored Procedures tree, I have a long list of dbo.aspnet_spXXX stored procedures with my own procedures loitering at the end.
What I would like to do is shuffle all those aspnet stored procedures into a folder of their own, leaving mine floating loose as they are now. I don't want to dispense with the current organisation, I just want to refine it a little.
Is it possible to do this?
I think the best you can do in SSMS is to use a filter to exclude the aspnet stored procedures.
Right click the Stored Procedures folder
Select Filter -> Filter Settings
Filter by Name, Does not contain, 'aspnet_sp'.
I would recommend Redgate's SQL Search tool - handy for finding a particular proc rather than scrolling through a large list. It lets you double-click and go straight to it:
http://www.red-gate.com/products/sql-development/sql-search/
Management Studio doesn't support the ability to sort these objects other than alphabetically.
I like the filter and 3rd party add-in ideas, but another idea you can explore is using a different schema for your objects. If you name the schema 'abc' or something more logical, they will always sort first and none of your users will have to apply the filter.
CREATE SCHEMA abc AUTHORIZATION dbo;
GO
ALTER SCHEMA abc TRANSFER dbo.proc1;
ALTER SCHEMA abc TRANSFER dbo.proc2;
ALTER SCHEMA abc TRANSFER dbo.proc3;
...
Of course you will need to update your code to reference this schema and you should also change all of your users' default schema.
This isn't really one of the primary purposes of schemas, but short of putting your objects in a different database, this is one way to visually separate them.
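If you do go down this route, changing a user's default schema is a one-liner (the user name below is hypothetical):

ALTER USER app_user WITH DEFAULT_SCHEMA = abc;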

What is the best way to export data from FileMaker Pro 6 to SQL Server?

I'm migrating/consolidating multiple FMP6 databases to a single C# application backed by SQL Server 2008. The problem I have is how to export the data to a real database (SQL Server) so I can work on data quality and normalisation, which will be significant: there are a number of repeating fields that need to be normalised into child tables.
As I see it there are a few different options, most of which involve either connecting to FMP over ODBC and using an intermediary to copy the data across (either custom code or MS Access linked tables), or exporting to a flat file format (CSV with no header, or XML) and either using Excel to generate insert statements or writing some custom code to load the file.
I'm leaning towards writing some custom code to do the migration (like this article does, but in C# instead of Perl) over ODBC, but I'm concerned about the overhead of writing a migrator that will only be used once (as soon as the new system is up, the existing DBs will be archived)...
A few little joyful caveats: in this version of FMP there's only one table per file, and a single column may have multi-value attributes, separated by hex 1D, which is the ASCII group separator, of course!
Does anyone have experience with similar migrations?
I have done this in the past, but using MySQL as the backend. The method I use is to export as CSV or merge format and then use the LOAD DATA INFILE statement.
SQL Server may have something similar; maybe this link on BULK INSERT would help.
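As a rough sketch of that SQL Server route (the staging table and file path are invented, and the options depend on how FMP exports the file), BULK INSERT can load the flat-file export directly:

BULK INSERT dbo.fmp_staging
FROM 'C:\exports\contacts.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK
);

The hex 1D multi-value columns can then be split out into child tables in a second pass once the raw rows are staged.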

Quickest way to delete all content in a database and rebuild from scratch?

I am designing a standard ASP.NET site with a SQL database. I have a database schema, and during testing I am changing data types, among other tasks, and the data contained inside really is not that important.
I keep getting errors because the old data does not match the new rules. This is not important and I am happy to clear everything, but currently I have to export/publish the database to a .sql file and then import it from scratch, which is time-consuming.
Is there a quick button / feature that I have missed that allows you to reset autonumbers / IDs to 1 and delete all content, or just speed up what I currently do?
There are a few options you could take; the "fastest" really depends on your database.
To first answer your question on reseeding: TRUNCATE TABLE will delete all information in a table (very fast, as it is minimally logged) and will reset your identity column.
eg:
TRUNCATE TABLE dbo.table
http://msdn.microsoft.com/en-us/library/aa260621(SQL.80).aspx
The significant restriction here is that you cannot use it on a table that is referenced by a foreign key in another table. In that case you can use a standard DELETE and then DBCC CHECKIDENT.
eg:
DELETE FROM dbo.table
GO
DBCC CHECKIDENT ('dbo.table', RESEED, 0)
http://msdn.microsoft.com/en-us/library/ms176057.aspx
Remember with delete to make sure you delete information in the correct order (i.e. taking into account foreign keys).
Another approach I often use is simply writing a complete tear-down / rebuild script when I want to reset the database. The basic premise is to tear down, or drop, all database objects at the beginning of the script and then recreate them. This is not necessarily a solution for all scenarios, but it works well for me for basic tasks. To avoid errors I usually wrap my drop statements in IF statements, e.g.:
IF EXISTS
(
    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'table' AND table_schema = 'dbo'
)
BEGIN
    DROP TABLE dbo.table
END
Why don't you write some T-SQL code to delete (or truncate, which is even quicker) all your tables? Be careful to take your integrity rules into account while clearing the tables: always clean the tables containing the foreign keys before cleaning the ones containing the primary keys.
If you just need to clear out data, then write a script to truncate all the data in each table. The TRUNCATE command resets any IDENTITY fields as well.
TRUNCATE TABLE myTable
Repeat for each table you have, then just run that script each time.
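One way to avoid maintaining that script by hand, assuming SQL Server, is to generate the TRUNCATE statements from the catalog views and run the output in a new query window:

SELECT 'TRUNCATE TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

Tables that are referenced by foreign keys will still reject TRUNCATE and need the DELETE / DBCC CHECKIDENT approach described above.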
Here's a quick way to delete all of the data in a table:
TRUNCATE TABLE YourTableName
You could write a script that would truncate all of your tables.
The alternative is to just DROP the table and re-create it.
If you really want to drop all data, then you could detach the database and create a brand new one; it's a bit extreme, but possibly faster than dropping everything first.
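A minimal sketch of that idea, assuming a hypothetical database name and that no other connections are open (the old MDF/LDF files remain on disk and may need to be removed or renamed first):

USE master;
EXEC sp_detach_db @dbname = N'MyAppDb';
CREATE DATABASE MyAppDb;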
As others have suggested I find it preferable to maintain a script that builds the database from scratch and can tear down the database prior to rebuilding it. Develop this script just as you'd develop the rest of the application. I find it easier to understand the database through a script than by building it through a GUI, especially where there are complex relationships, triggers and so on.
It's also useful if you have other developers, and perhaps quicker and less prone to errors than copying your working database and handing it to another developer.
On release you can freeze that script and then create delta scripts for the next release which has just the changes from the initial schema to the new. This could also tear down the new objects created in the delta before recreating them so it can be easily re-run without having to wipe the entire database.
If you use Visual Studio 2010:
Open the App_Data folder of the solution and double-click the MDF file.
Right-click your table and, from the menu, select "Show Table Data".
Select all rows and delete them.
