FOR SYSTEM_TIME AS OF is not supported with table wildcards - firebase

I need to recover some deleted tables from my Firebase export, but I don't know the table names. So I am using:
SELECT * FROM `MyProject.MyDataSet.events_*`
FOR SYSTEM_TIME AS OF '2022-04-11 08:00:00-04:00'
But it fails with:
FOR SYSTEM_TIME AS OF is not supported with table wildcards
I know some tables were deleted, but I don't know which ones. How can I find out which tables were deleted?
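Since wildcard tables don't support time travel, one workaround is to exploit the fact that Firebase Analytics exports are date-sharded tables named events_YYYYMMDD: generate the shard names you would expect for a date range and diff them against the tables that still exist. This is a sketch under that assumption; the set of existing tables would come from the BigQuery API in practice.

```python
from datetime import date, timedelta

def missing_shards(existing, start, end, prefix="events_"):
    """Return expected date-shard names in [start, end] that are absent
    from the currently existing tables (candidates for deleted tables)."""
    expected, d = set(), start
    while d <= end:
        expected.add(prefix + d.strftime("%Y%m%d"))
        d += timedelta(days=1)
    return sorted(expected - set(existing))

# 'existing' would normally come from the BigQuery client, e.g.
#   existing = {t.table_id for t in client.list_tables("MyProject.MyDataSet")}
deleted = missing_shards({"events_20220409", "events_20220411"},
                         date(2022, 4, 9), date(2022, 4, 11))
print(deleted)  # → ['events_20220410']
```

Each recovered name can then be restored individually, where per-table time travel still works, e.g. with a snapshot decorator: `bq cp 'MyDataSet.events_20220410@<snapshot_ms>' MyDataSet.events_20220410_restored`.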


Is there any way to check the presence and the structure of tables in a SQLite3 database?

I'm developing a Rust application for user registration via SSH (like the one working for SDF).
I'm using the SQLite3 database as a backend to store the information about users.
I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables with expected structure are present in the database.
I tried to use PRAGMA schema_version for versioning purposes, but this approach is unreliable.
I found that there are posts with answers that are heavily related to my question:
How to list the tables in a SQLite database file that was opened with ATTACH?
How do I retrieve all the tables from database? (Android, SQLite)
How do I check in SQLite whether a table exists?
I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables
You can query sqlite_master to check for tables, indexes, triggers and views, and use PRAGMA table_info(the_table_name) to check for columns.
For example, the following lets you get the core information and then process it with relative ease (shown just for tables):
SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name LIKE 'my%';
with expected structure
PRAGMA table_info(mytable);
The first returns, for each matching table, its name and the SQL that created it; the second returns one row per column of mytable (cid, name, type, notnull, dflt_value, pk).
Note that type is blank/null for all columns as the SQL to create the table doesn't specify column types.
If you are using SQLite 3.16.0 or greater, you can use pragma functions (e.g. pragma_table_info(table_name)) rather than the two-step approach needed prior to 3.16.0.
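The two queries above combine into a presence-and-structure check. Here is a minimal sketch using Python's sqlite3 module (the question's application is in Rust, but the SQL is identical from e.g. rusqlite; the table names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id, name)")  # columns without declared types

# Presence: which expected tables are missing?
expected = {"mytable", "missing_table"}
present = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
missing = expected - present
print(missing)  # → {'missing_table'}

# Structure: one row per column (cid, name, type, notnull, dflt_value, pk).
columns = conn.execute("PRAGMA table_info(mytable)").fetchall()
print([(c[1], c[2]) for c in columns])  # → [('id', ''), ('name', '')]
```

Note that the type comes back empty here, matching the answer's point that it is blank when the CREATE statement doesn't declare column types.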

How to do a batch update in sqlite3 database

All:
I am pretty new to SQL. How can I update a certain field's value across multiple tables in an SQLite3 database?
For example:
The database is company.db; inside it there are 50 tables, and each table has a column called company_name. Some companies' names have changed, so I need to update that info in all the tables. How do I do it in SQL?
Thanks
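One approach (a sketch using Python's sqlite3 module; the table and company names are made up) is to list the tables from sqlite_master, then run one UPDATE per table that actually has a company_name column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for company.db
for t in ("orders", "invoices"):
    conn.execute(f"CREATE TABLE {t} (company_name TEXT)")
    conn.execute(f"INSERT INTO {t} VALUES ('Old Name Inc')")

# Enumerate the tables, then update each one that has the column.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for t in tables:
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({t})")]
    if "company_name" in cols:
        # Identifiers can't be bound as parameters, hence the f-string;
        # the values themselves are bound safely.
        conn.execute(f"UPDATE {t} SET company_name = ? WHERE company_name = ?",
                     ("New Name Ltd", "Old Name Inc"))
conn.commit()
```

Pure SQL has no loop over tables, so some driving code (or a generated script of 50 UPDATE statements) is needed either way.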

Getting IDs of inserted rows via Doctrine DBAL

I'm working on a Symfony application and I need to insert multiple rows at once. Doctrine ORM is not a good option, because it will open a connection to execute the query for each row; to avoid this and insert all the rows over one connection, I used a Doctrine DBAL prepared statement, and it works fine. Except that I need to get the IDs of the inserted rows, and the only available function seems to be lastInsertId, which returns only the last ID, not all the inserted ones. How can I achieve this?
Any help would be appreciated!
This is actually not related to Doctrine at all; if you want all the inserted IDs, it must be possible in MySQL itself. "It's unlikely that Doctrine, which doesn't have batch inserts, would support returning a list of IDs after a batch insert :)"
Check these answers related to MySQL:
How can I Insert many rows into a MySQL table and return the new IDs?
MySQL LAST_INSERT_ID() used with multiple records INSERT statement
But it is possible in PostgreSQL (since you didn't mention your DB):
Retrieving serial id from batch inserted rows in postgresql
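For auto-increment keys there is also a middle ground: when a single multi-row INSERT allocates consecutive IDs, you can reconstruct the whole range from one returned ID plus the row count. Here is a sketch with Python's sqlite3 (note the hedges: SQLite reports the last allocated ID, while MySQL's LAST_INSERT_ID() reports the first, so the arithmetic flips there, and the trick only holds when one statement's IDs are guaranteed contiguous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE company (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# One statement, three rows: the allocated ids are consecutive.
cur = conn.execute("INSERT INTO company (name) VALUES (?), (?), (?)",
                   ("a", "b", "c"))
# SQLite exposes the LAST id; walk back rowcount-1 steps to get them all.
ids = list(range(cur.lastrowid - cur.rowcount + 1, cur.lastrowid + 1))
print(ids)  # → [1, 2, 3]
```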
You can actually generate the IDs before inserting the content into the database, for example using random UUIDs.
This library might be of use: https://github.com/ramsey/uuid
use Ramsey\Uuid\Uuid;

$uuid4 = Uuid::uuid4();
echo $uuid4->toString();

How do you remove old records from the BAMPrimaryImport TDDS_FailedTrackingData table?

How can you remove old records from the BAMPrimaryImport TDDS_FailedTrackingData table?
... not the TDDS_FailedTrackingData in the BizTalkDTADb database
Our production system has 2+ million records in the BAMPrimaryImport.dbo.TDDS_FailedTrackingData table, and the various BizTalk SQL Agent jobs are running fine, but these records are still there.
UPDATE: We sorted the issue that was generating the fails (fingers crossed), so there are no new records.
This might be helpful for you as well:
http://www.codit.eu/blog/2014/07/03/maintaining-biztalk-bam-databases/
I'm not claiming this is an actual answer to your question, but it is about maintaining the BAM databases using NSVacuum.
Looks like it's a case of manually deleting the records (TRUNCATE TABLE or DELETE FROM) ...
I've used Red Gate's SQL Search and looked for TDDS_FailedTrackingData throughout the database ...
all objects and all databases
Found 8 references in the entire system ... see below
Records are removed from the [BizTalkDTADb].[dbo].[TDDS_FailedTrackingData] in two stored procedures ...
[dtasp_CleanHMData] does a TRUNCATE TABLE
[dtasp_PurgeTrackingDatabase_Internal] does a DELETE FROM for 100 records at a time
However the [BAMPrimaryImport] database only has one stored procedure that has any mention of the [BAMPrimaryImport].[dbo].[TDDS_FailedTrackingData] table ...
[BAMPrimaryImport].[dbo].[TDDS_InsertFailedTrackingData]
and it just inserts records, with the addition of current date & time from GETUTCDATE()
Found lots of posts about clearing down the [BizTalkDTADb] database, but very few on clearing down [BAMPrimaryImport].
This on TechNet from a BizTalk MVP
And this on MSDN from another BizTalk expert.
You can manually run a simple T-SQL DELETE:
DELETE FROM [BAMPrimaryImport].[dbo].[TDDS_FailedTrackingData]
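If two million rows is too much for one DELETE (lock escalation, log growth), you can copy the chunked pattern that [dtasp_PurgeTrackingDatabase_Internal] uses. Here it is sketched against a stand-in SQLite table; in T-SQL you would loop over DELETE TOP (100) instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TDDS_FailedTrackingData (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO TDDS_FailedTrackingData (payload) VALUES (?)",
                 [("junk",)] * 250)

BATCH = 100
while True:
    # Delete at most BATCH rows per transaction to keep each unit of work small.
    cur = conn.execute(
        "DELETE FROM TDDS_FailedTrackingData WHERE id IN "
        "(SELECT id FROM TDDS_FailedTrackingData LIMIT ?)", (BATCH,))
    conn.commit()
    if cur.rowcount < BATCH:
        break
```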

SQL Delete taking too long

We have a table(say T1) that is referenced by about 16 other tables with foreign keys in our SQL Server database. The data is accessed through an ASP.NET application with LINQToSQL. When the user tried to delete a record from T1 the statement would time out. So we decided to first delete the records from the tables that reference T1 and only then delete the record in T1. The problem is that deletion from T1 does not work as fast as expected.
My question is: is it normal that deletion from a table referenced by many other tables to be so time-consuming even if the record itself does not have any 'children' records?
EDIT: Apparently the cause for the timeout was not the delete itself but another query that retrieved data from the same DataContext. Thank you for your suggestions, I have marked as answer the suggestion to add indexes for all foreign keys because it improved our script's execution plan.
I suspect that you may need to look into the indexing on your child tables.
It sounds as if your FKs are set to cascade deletes, so I would suspect that some of your child tables lack an index whose leading column is the key to the parent.
Without that index, your delete will full-scan the child tables; even if you've already deleted the child records, they are still checked because the cascade is still in place.
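The advice above can be shown end to end. This sketch uses SQLite (the question is about SQL Server, where the index matters for the same reason) with an ON DELETE CASCADE foreign key and an index whose leading column is that key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE T1 (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
                    id INTEGER PRIMARY KEY,
                    t1_id INTEGER REFERENCES T1(id) ON DELETE CASCADE)""")
# Without this index, every DELETE on T1 must scan child for matching rows.
conn.execute("CREATE INDEX idx_child_t1 ON child(t1_id)")

conn.execute("INSERT INTO T1 VALUES (1)")
conn.executemany("INSERT INTO child (t1_id) VALUES (?)", [(1,)] * 5)
conn.execute("DELETE FROM T1 WHERE id = 1")  # cascades via the indexed key
print(conn.execute("SELECT COUNT(*) FROM child").fetchone()[0])  # → 0
```

In SQL Server the equivalent would be CREATE INDEX on each child table's foreign-key column, one index per referencing table.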
When you define a relationship in DB, you can set the Delete rule as Cascade in SQL server. In this way, when you delete the record from the parent table, it will be automatically deleted from the child tables.
If it is taking a long time, you may have other constraints set that slow down the deletion.
LINQ does not do bulk deletes if you have it operate directly on the record set; instead, it is probably deleting one record at a time.
To improve performance, use a stored procedure for any bulk insert, update or delete operations.
