SQL Server Deploy Script - asp.net

I use SQL Server 2005 for an ASP.NET project. I want to run a SQL file that contains all DB changes since the last release, to easily bring a database up to the latest version.
I basically just have a bunch of ALTER TABLE, CREATE TABLE, CREATE INDEX, ALTER VIEW, stored procedure call, etc. statements. But I would like to wrap it in a transaction so that if any part of it fails, none of the changes go through. Otherwise it could make for some really messy debugging to work out where it stopped.
Also, if you know of a better way to manage DB deployment, let me know!

I do something similar with a Powershell script using SMO.
Pseudocode would be:
$SDB = SourceDBObject
$TDB = TargetDBObject
foreach ($table in $SDB.Tables)
{
    # Add an entry to a hash table with the name
    # and some attributes (rowcount, columns, datasize)
}
# Same thing for $TDB
# Compare the two hash tables, and make a list of all the objects that exist in the source but not in the target, or that are different
# Same thing for Procs and Views
# Pass this list to an SMO.Scripter as an UrnCollection object, and it will script them out in dependency order (it's an option), with drops
# Wrap the script in a transaction and execute it on the target server
# Use the SqlBulkCopy class to transfer data server-to-server

What version of Visual Studio do you use? In Visual Studio 2010 (and, from what I can remember, Visual Studio 2008) the "Data" menu has two options: "Schema Compare" and "Data Compare". That should move you in the right direction.

DECLARE @TranName VARCHAR(20);
SELECT @TranName = 'MyTransaction';
BEGIN TRANSACTION @TranName;
USE AdventureWorks2008R2;
DELETE FROM AdventureWorks2008R2.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
COMMIT TRANSACTION @TranName;
You should execute everything within a transaction.

Note that some DDL statements have to be the first statement in a batch; batches are separate from transactions. (GO is the default batch separator in SSMS and SQLCMD.)
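Putting those two points together, a deploy script might be structured roughly like this sketch (the object names are hypothetical):
SET XACT_ABORT ON;  -- a run-time error in any batch dooms and rolls back the whole transaction
BEGIN TRANSACTION;
GO

ALTER TABLE dbo.Customer ADD LastLogin datetime NULL;
GO

CREATE INDEX IX_Customer_LastLogin ON dbo.Customer (LastLogin);
GO

-- ALTER VIEW must be the first statement in its batch, hence the GO above
ALTER VIEW dbo.ActiveCustomers AS
SELECT CustomerID, LastLogin FROM dbo.Customer WHERE LastLogin IS NOT NULL;
GO

IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;
GO
This only covers run-time failures; a batch that fails to compile is skipped and the remaining batches still run, so for a fully fail-safe script you would add @@ERROR/@@TRANCOUNT checks after each batch or run it through SQLCMD with :on error exit.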

Related

Which tool of Progress Developer Studio for OpenEdge can I use for testing ABL queries?

I've just written following erroneous ABL query:
FOR EACH T1 (WHERE T1.some-date = ' ') AND
(integer(T1.num1) >= 100) AND
(integer(T1.num1) <= 200) AND
(T1.some-other-date = 01/12/2021) AND
(T2.Name = itsme),
EACH T2 WHERE T2.num2 = T1.num2
BY T1.num1
As you can see, this is wrong because I've put the opening bracket in front of "WHERE" instead of behind it. On top of that, my name, "itsme", is not put between quotes, so the ABL query will never work.
I've been looking in my development environment ("Tools" menu), but I couldn't find an ABL query tester. I've also checked the directory "C:\Progressx86\OpenEdge\bin", but being a newbie I didn't find anything.
I have downloaded the "DataDigger" application, which contains a so-called "MCF Query Tester", but this only works on single table and only checks criteria, not entire ABL queries.
Does anybody know where I can find an ABL query tester, first for syntax checking (the bracket in front of the "WHERE") and (if possible) for data testing (01/12/2021, is that January 12th or December 1st?)?
Thanks in advance
Dominique
Create a new OpenEdge project in Progress Developer Studio for Openedge. Create a new ABL procedure under the project with the necessary database connection. Copy the above ABL code into the procedure file and you should be able to see the errors and warnings in your procedure file.
The ABL Scratch Pad view of Progress Developer Studio allows you to execute ad-hoc queries:
https://knowledgebase.progress.com/articles/Knowledge/000055088
You don't mention which editor you're using, but in general terms, the ABL COMPILE statement performs syntax checking. There's no separate/independent executable that compiles/checks syntax. Generally you'll need to write a .P to perform the compilation.
You can use the PCTCompile Ant task if you'd like to use an Ant- or Gradle-based build system.
As to date formats, I thought that was in the doc, but no. Dates always have a MM/DD/YYYY format when hard-coded; decimals always use a . decimal separator (ie 123.45) when hard-coded.
Another way to test a query you're working on would be to use query-prepare which accepts a string.
DEFINE QUERY q-Test FOR T1, T2.
QUERY q-test:HANDLE:QUERY-PREPARE("your query string here").
One of my colleagues showed me an incredibly easy solution:
Copy the query into a separate procedure window and add the results you want to see, something like this:
FOR EACH T1 (WHERE T1.some-date = ' ') AND
(integer(T1.num1) >= 100) AND
(integer(T1.num1) <= 200) AND
(T1.some-other-date = 01/12/2021) AND
(T2.Name = itsme),
EACH T2 WHERE T2.num2 = T1.num2
BY T1.num1
/* To be added for viewing results */
DISPLAY T1.Field1 T1.Field2 T2.Field5
END.
And launch this inside the programming environment (F2).
In case of syntax mistakes, the compiler shows the errors. If the syntax is correct, a pop-up window opens showing the results.

Marklogic - Delete Versioned Collections

I have around 43 million documents. The latest version of each document is in the LIVE collection, and the same versioned documents are also in a version collection named /collection/versionNumber. I want to delete the versioned collections, which hold around 34 million documents. What is the best approach to delete them all in one go?
You could try using xdmp:collection-delete() to delete all documents in the collection in a single transaction.
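For example, with the collection name from the question:
xdmp:collection-delete("/collection/versionNumber")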
If that doesn't work and it isn't able to delete in one shot, then I would look to utilize batch tools. For instance, a CoRB job.
An example job options file with the properties needed (except for the XCC-CONNECTION-URI):
# Inline module to select all URIs from the collection
URIS-MODULE=INLINE-XQUERY|let $uris := cts:uris("",(),cts:collection-query("/collection/versionNumber")) return (count($uris), $uris)
# Inline module to delete the docs
PROCESS-MODULE=INLINE-XQUERY|declare variable $URI as xs:string external; xdmp:document-delete($URI)
THREAD-COUNT=10
I think your application is using the DLS library for versioning. If so, and if you will never need to look into any of the versions in the future, then delete only the versioned documents. You can use the dls:document-unmanage API in that case.
Also, explore dls:purge and dls:document-purge before proceeding. I am not very sure of these two.
Anyway, even if it's not DLS, processing them in one go (a single transaction) would not be the recommended way. Either process them in batches or spread them across different threads on the task server through spawn.
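A rough sketch of the batched variant (plain document deletes, not DLS-aware; the 1,000 batch size is arbitrary, and the snippet can be re-run or spawned until the collection is empty):
for $uri in cts:uris((), "limit=1000", cts:collection-query("/collection/versionNumber"))
return xdmp:document-delete($uri)
Adjust the limit to whatever your forests and transaction timeouts handle comfortably.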

File Exists and Exception Handling in U-Sql

Two questions
How do I check whether a file exists before EXTRACT?
We have a scenario where a new input file is generated every day for catalog data. We need to merge the new input with the d-1 file. Before the merge we want to make sure that the new input file exists at the source location.
Does U-SQL support a try...catch block?
Regarding checking if a file exists: we recently released a compile-time IF statement that indeed can check for partition existence (other objects such as files and tables are on the roadmap).
Once that feature is released (still one or two refreshes out at the time of this answer), it may look something like this (syntax subject to change):
IF FILE.EXISTS("/mydir/myfile.csv") THEN
    @data = EXTRACT ... FROM "/mydir/myfile.csv" USING ...;
    ...
    @jobstate = SELECT * FROM (VALUES("job completed")) AS T(status);
ELSE
    @jobstate = SELECT * FROM (VALUES("file not ready. Job not executed.")) AS T(status);
END;
OUTPUT @jobstate TO "/jobs/myjobstate.csv" USING Outputters.Csv();
You will be able to provide the name as a parameter as well. Please let me know if that will work for your scenario.
Another alternative is to use the file set syntax, especially if you want to use a dynamic value to determine the process. That would simply create an empty rowset if the file is not there:
@data = EXTRACT ..., date DateTime
        FROM "/mydir/{date:yyyy}/{date:MM}/{date:dd}/data.csv"
        USING ...;
@data = SELECT * FROM @data WHERE date == DateTime.Now.AddDays(-1);
... // continue processing @data, which is empty if yesterday's file is not yet there
Having said that, you may want to check whether your job orchestration framework (such as ADF) is a better place to check for existence before submitting the job in the first place.
As to the try/catch block: U-SQL itself is a declarative language that is optimized at the script level, with the plan generated and optimized over the whole script. Thus a dynamic TRY/CATCH is currently not available, since it would severely impact the ability to optimize the script (e.g., you cannot move predicates or column pruning outside of a try/catch block). Also, TRY/CATCH can lead to some very hard to understand and debug code, especially if it is used to mimic procedural workflows in an otherwise declarative environment.
However, you can use try/catch inside your C# functions without problems if you need to catch C# runtime errors.
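For example, a hypothetical code-behind helper (all names made up) that traps a runtime error and returns null instead of failing the job:
namespace MyHelpers
{
    public static class SafeConvert
    {
        // Return null instead of throwing when the input is not a valid integer.
        public static int? ToIntOrNull(string value)
        {
            try
            {
                return int.Parse(value);
            }
            catch (Exception)
            {
                return null;
            }
        }
    }
}
It can then be called from the script like any other C# expression, e.g. @rows = SELECT MyHelpers.SafeConvert.ToIntOrNull(rawValue) AS value FROM @input; (int.TryParse would avoid the exception entirely; the try/catch here is just to illustrate the point).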
FILE.EXISTS() always returns True when executed locally. However, it works when executing against Azure Data Lake.
I tried the MSDN example, and the following returns True, True:
DECLARE @filepath_good string = "/Samples/Data/SearchLog.tsv";
DECLARE @filepath_bad string = "/Samples/Data/zzz.tsv";

@result =
    SELECT FILE.EXISTS(@filepath_good) AS exists_good,
           FILE.EXISTS(@filepath_bad) AS exists_bad
    FROM (VALUES (1)) AS T(dummy);

OUTPUT @result
TO "/Output/FileExists.txt"
USING Outputters.Csv();
I have Microsoft Azure Data Lake Tools for Visual Studio version 2.2.5000.0

Publish data with SSDT?

I have an SSDT project. When publishing a new version, I want to publish/initialize some data in the database as well. Can that be done using SSDT?
It can be done, but could be tricky. If you set up a variable in the project that can be used for "New" releases, you could put that in your post-deploy script as a section that would run a series of inserts, but only for that "New" type.
As David mentioned, the better way would likely be to use something like Red-Gate's data compare or run the scripts after creating the database. It's possible to do it in post-deploy scripts, but could prove tricky.
Something like this could work:
IF '$(DeployType)' = 'New'
BEGIN --"New" release scripts
PRINT 'Post-Deploy Scripts for release.'
:r .\InsertScript1.sql
:r .\InsertScript2.sql
--etc
END --"New" release scripts
This isn't possible in SSDT. The current guidance is to use a post-deployment script.
Redgate ReadyRoll provides many experiences familiar to SSDT users, but with improved static data management, among many other improvements.
We include merge scripts automatically when they are placed in a specific subfolder of the project.
Depending on what you do, table value constructors might be something to have a look at:
SELECT *
FROM
(VALUES
(101, 'Bikes'),
(102, 'Accessories'),
(103, 'Clothes')
) AS Category(CategoryID, CategoryName);
These are easily transported and compared by SSDT.
For more information see https://www.simple-talk.com/sql/sql-training/table-value-constructors-in-sql-server-2008/
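Combining the post-deployment script idea above with such a table value constructor, reference data can be kept in sync with a MERGE. A rough sketch, assuming a hypothetical dbo.Category table with these two columns:
MERGE INTO dbo.Category AS target
USING (VALUES
    (101, 'Bikes'),
    (102, 'Accessories'),
    (103, 'Clothes')
) AS source (CategoryID, CategoryName)
ON target.CategoryID = source.CategoryID
WHEN MATCHED THEN
    UPDATE SET CategoryName = source.CategoryName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CategoryID, CategoryName) VALUES (source.CategoryID, source.CategoryName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
The WHEN NOT MATCHED BY SOURCE THEN DELETE clause is optional; leave it out if rows added outside the deployment should survive.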

Schema qualified tables with SQLAlchemy, SQLite and Postgresql?

I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:
class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema': 'winter'}

    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))
This code works great with PostgreSQL but fails when using SQLite on table and foreign key names (due to SQLite's lack of schema support):
sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
"I'd like to continue using SQLite for dev and testing. Is there a way to have this fail gracefully on SQLite?"
It's hard to know where to start with that kind of question. So . . .
Stop it. Just stop it.
There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one--moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.
Install PostgreSQL.
When you can use the same database environment for development, testing, and deployment, you should.
Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.
Seriously.
I'm not sure if this works with foreign keys, but you could try to use SQLAlchemy's Multi-Tenancy Schema Translation for Table objects. It worked for me, but I have used custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.
The schema translation map can be passed directly to the engine creator:
...
if dialect == "sqlite":
url = lambda: "sqlite:///:memory:"
execution_options={"schema_translate_map": {"winter": None, "summer": None}}
else:
url = lambda: f"postgresql://{user}:{pass}#{host}:{port}/{name}"
execution_options=None
engine = create_engine(url(), execution_options=execution_options)
...
Here is the doc for create_engine. There is another question on SO which might be related in that regard.
But one might get colliding table names if all schema names are mapped to None.
I'm just a beginner myself, and I haven't used Pylons, but...
I notice that you are combining the table and the associated class together. How about if you separate them?
import sqlalchemy as sa

meta = sa.MetaData('sqlite:///tutorial.sqlite')
schema = None

hockey_table = sa.Table('hockey', meta,
    sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
    sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
    schema=schema,
)

meta.create_all()
Then you could create a separate
class Hockey(object):
...
and
mapper(Hockey, hockey_table)
Then just set schema above to None if you are using SQLite, and to the value(s) you want otherwise.
You don't have a working example, so the example above isn't a working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
HTH, Regards.
I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and sqlite in development.
The solution was to register an event listener for when the engine calls the "connect" method.
@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')
Using the ATTACH statement only once will not work, because it affects only a single connection. This is why we need the event listener: to run the ATTACH statement on every connection.
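Applied to the schemas from the question, it could look something like this (the database file names are made up):
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite:///main.db")

@sqlalchemy.event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # Attach one database file per schema so that winter.hockey and
    # summer.baseball resolve under SQLite as well.
    dbapi_connection.execute('ATTACH DATABASE "winter.db" AS "winter"')
    dbapi_connection.execute('ATTACH DATABASE "summer.db" AS "summer"')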
