Uniqueness constraint on insert in a native XML database - XQuery

I am developing a project based on XML. I use the Sedna database to store my collection (which contains XML files, and their XSD schema files).
I define primary/unique keys in those schemas, but so far I can still insert duplicate values (via the XQuery update insert command) into a primary key field.

To enforce a uniqueness constraint, you should create a BEFORE INSERT FOR EACH NODE trigger on the proper path. In the trigger action, the $NEW transition variable can be used to fetch the new key and check whether it already exists in the document (see the examples in the manual). To raise an error, the fn:error function can be used.
Note the following restriction regarding triggers:
"It is prohibited to use prolog in statements of the trigger action" — Sedna Programmer's Guide, XQuery Triggers.
See also bug 51 (although it is already closed).
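A trigger along these lines could look like the following sketch, based on the trigger syntax described in the Sedna Programmer's Guide. The document name, path, and attribute here are placeholders for whatever your schema actually uses:

```xquery
CREATE TRIGGER "unique_person_id"
BEFORE INSERT
ON doc("people")/persons/person
FOR EACH NODE
DO
{
  (: reject the new node if its key already exists in the document :)
  if (exists(doc("people")/persons/person[@id = $NEW/@id]))
  then fn:error(fn:QName("http://example.org/err", "DUPLICATE-KEY"),
                "duplicate primary key value")
  else $NEW;
}
```

For a BEFORE INSERT FOR EACH NODE trigger, the value returned by the trigger body is what actually gets inserted, so returning $NEW lets a non-duplicate insertion proceed. Per the restriction quoted above, the trigger body must not contain a prolog.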

Related

Will the SQLite importer overwrite my database when I load my application?

I have an Ionic app using SQLite. I don't have any problems with the implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But the same database also holds user info, so my question is:
Every time I start the app, will it import the SQL file, fill the database, and probably overwrite my user data too, since it is all in the same database?
I assume that you can always initialize your tables using string queries inside your code, so the problem is not that you are importing a .sql file. Right?
According to https://www.sqlitetutorial.net/sqlite-create-table/, you should always create a table with the IF NOT EXISTS clause. Writing a query like:
CREATE TABLE [IF NOT EXISTS] [schema_name].table_name (
    column_1 data_type PRIMARY KEY
);
you let SQLite decide whether to create the table, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you wrap the statements in a BEGIN TRANSACTION … COMMIT block.
This answer assumes that your imported data and user data live in distinct tables, so you can control what you populate and what you don't. Is that right?
What I usually do is have an SQL file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I refresh the configuration tables with the configuration data I have at that time (that is why, in the future, we use http.get to fetch a configuration file from a remote repo), and I create the user_data table only if it is not already there (hopefully only on the initial start).
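The effect of IF NOT EXISTS can be verified with a few lines of Python's built-in sqlite3 module (an illustrative sketch; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the demo

# Simulates the app's first start: the table is created and user data inserted.
conn.execute("CREATE TABLE IF NOT EXISTS user_data (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO user_data (name) VALUES ('alice')")
conn.commit()

# Simulates every later start: the same CREATE runs again but is a no-op,
# so the existing rows are left untouched.
conn.execute("CREATE TABLE IF NOT EXISTS user_data (id INTEGER PRIMARY KEY, name TEXT)")
print(conn.execute("SELECT name FROM user_data").fetchall())  # [('alice',)]
```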
Conclusion: In my opinion, it is good practice to trust the database product and let it handle any operation that would be risky to implement yourself in your code, since it provides the tools for that. For example, the IF NOT EXISTS keyword is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to the create-database step: when SQLite connects to a database file that does not exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
connects you to this db, and if the file is not there, it creates it.

Referenced table is not in dictionary

When I scaffold my database, I get the following errors:
Referenced table `contentcategory` is not in dictionary.
Referenced table `contentcategory` is not in dictionary.
Referenced table `contenttype` is not in dictionary.
Referenced table `content` is not in dictionary.
I use MySQL and Pomelo.EntityFrameworkCore.MySql.
This is very likely a casing issue: MySQL assumes the table name for the reference is contentcategory, while it is actually something like ContentCategory.
We have had a pull request for this open since April, which looks abandoned by the original contributor.
I will fix the PR and merge it, so that the workaround for this issue will be part of our nightly builds as of tomorrow.
The linked PR also contains information on how this issue can arise:
Okay, that is in line with what I experienced as well. So manually adding a reference with different casing (either by writing the name in the GUI or by using an ALTER TABLE statement directly) on a server with case-insensitive file name handling, or disabling SQL Identifiers are Case Sensitive [in MySQL Workbench], can lead to this result.
Technically, this is a MySQL or Workbench bug, but we will implement a workaround for it anyway.
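As a sketch of how such a mismatch can be produced (illustrative table names; this assumes a server where lower_case_table_names makes table names case-insensitive):

```sql
CREATE TABLE `ContentCategory` (
    `Id` INT PRIMARY KEY
);

-- The REFERENCES clause uses different casing than the CREATE TABLE above.
-- On a case-insensitive server this succeeds, and the foreign key is stored
-- under the lowercase name that scaffolding later fails to find.
CREATE TABLE `Content` (
    `Id` INT PRIMARY KEY,
    `CategoryId` INT,
    CONSTRAINT `FK_Content_ContentCategory`
        FOREIGN KEY (`CategoryId`) REFERENCES `contentcategory` (`Id`)
);
```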

Cosmos DB - Bulk Import (single partition collections) with customized partition key

I am trying to migrate some data from a JSON file to Cosmos DB using the Data Migration Tool. When I try to define the partition key with either a single column name or a combination of my column names, I always end up with an undefined partition key after migration. How can I correct this issue?
Note that I have to use the Bulk import (single partition collections) option, because I need to execute my customized stored procedure for the nested-array import. I cannot use Sequential record import, although I know the same partition function works very well there.
So here I am setting my partition key to be "/item/vid":
After migration, my collection shows "_partitionKey" instead of "/item/vid" there:
If you use Bulk import with the migration tool, the partition key setting is for the scenario with more than one collection. Please see the statement in this link:
When you import to more than one collection, the import tool supports
hash-based sharding. In this scenario, specify the document property
you wish to use as the Partition Key. (If Partition Key is left blank,
documents are sharded randomly across the target collections.)
Back to your requirement, you could use Sequential Record Import.
You need to create collection first and set the partition key as /item/vid.
My test json file:
Result:

Is there any way to check the presence and the structure of tables in a SQLite3 database?

I'm developing a Rust application for user registration via SSH (like the one working for SDF).
I'm using the SQLite3 database as a backend to store the information about users.
I'm opening the database file (or creating it if it does not exist), but I don't know the approach for checking whether the necessary tables with the expected structure are present in the database.
I tried to use PRAGMA schema_version for versioning purposes, but this approach is unreliable.
I found that there are posts with answers that are heavily related to my question:
How to list the tables in a SQLite database file that was opened with ATTACH?
How do I retrieve all the tables from database? (Android, SQLite)
How do I check in SQLite whether a table exists?
I'm opening the database file (or creating it if it does not exist)
but I don't know the approach for checking if the necessary tables
I found that you can query sqlite_master to check for tables, indexes, triggers, and views, and use PRAGMA table_info(the_table_name) to check for columns.
e.g. the following allows you to get the core information and then process it with relative ease (tables only, for demonstration):
SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name LIKE 'my%';
with expected structure
PRAGMA table_info(mytable);
The first results in (for example):
Whilst the second results in (for mytable):
Note that type is blank/null for all columns as the SQL to create the table doesn't specify column types.
If you are using SQLite 3.16.0 or greater, then you could use PRAGMA functions (e.g. pragma_table_info(table_name)) rather than the two-step approach needed prior to 3.16.0.
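Since the question is about driving these checks from application code, here is the same two-step idea in Python's built-in sqlite3 module (an illustrative sketch; mytable and its columns are made up, and from Rust the equivalent queries could be issued through a crate such as rusqlite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT)")

# 1. Presence: does a table with the expected name exist?
row = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?",
    ("mytable",),
).fetchone()
print(row is not None)  # True

# 2. Structure: do the declared column types match what the app expects?
# Each table_info row is (cid, name, type, notnull, dflt_value, pk).
columns = {r[1]: r[2] for r in conn.execute("PRAGMA table_info(mytable)")}
print(columns)  # {'id': 'INTEGER', 'name': 'TEXT'}
```

Comparing the resulting dict against a hard-coded expected schema is then a plain equality check.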

How to create Indexes with SSDT

Creating a SQL Function in SSDT works smoothly. When I modify the function's SQL, I can generate the DACPAC, then publish the DACPAC and the database is automatically updated. My function SQL looks like:
CREATE FUNCTION [dbo].[foo]
(
...
)
RETURNS int
AS
BEGIN
...
END
This is in a file called foo.sql with its build action set to Build.
When I need to add a database index, I add an Index file to my project and put in:
CREATE NONCLUSTERED INDEX [idxFoo]
ON [dbo].[tblFoo] ([id])
INCLUDE ([fooVal])
If I try to build it I get several SQL71501 errors.
I was forced to put all indexes in a common file set to PostDeploy.
I have found numerous references to adding a DACPAC reference to the project, which I did. This works for most items, but not indexes. I have no idea why.
I needed to add table definitions for the "missing" objects referenced by the indexes to the project. To get the script that creates the tables, I used the VS SQL Server Object Explorer: right-click the table and select View Code (which includes the table's existing indexes and other elements), NOT Script As->(table create sql only). If you don't, SQLPackage.exe will delete the indexes not defined in your project.
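For example, alongside the index file, the project would also need a table definition like the following (a hypothetical definition matching the index above; the real one should come from View Code so that it includes every existing element of the table):

```sql
CREATE TABLE [dbo].[tblFoo]
(
    [id]     INT           NOT NULL PRIMARY KEY,
    [fooVal] NVARCHAR (50) NULL
);
```

With this file in the project (build action Build), the CREATE NONCLUSTERED INDEX statement can resolve [dbo].[tblFoo] and its columns, and the SQL71501 errors go away.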
Please ensure that all referenced objects are defined in the project.
The definitions of all referenced objects must be present in your database project. This is because the database project is intended to represent the database schema as a valid stand-alone entity. Having this allows the tools to verify that your objects are correct, i.e. that any references they contain refer to objects that exist.

Resources