I have created a .p which contains a database CONNECT statement. After that I'm running a .p which creates a temp-table, and after that the database is disconnected. Will the data still be available in the temp-table after disconnecting from the database?
Yes, it will be available. Data collected in a temp-table is not linked to the DB and can be manipulated freely. Conversely, in order to make actual changes to your DB, you need to assign those values back, probably by setting up another connection and doing that.
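For instance, a minimal ABL sketch (the database name, connection parameters, and fields here are only illustrative):
DEFINE TEMP-TABLE ttCust NO-UNDO
    FIELD CustNum  AS INTEGER
    FIELD CustName AS CHARACTER.

CONNECT -db sports2000 -H localhost -S 8600.

/* in practice this block lives in its own .p, compiled and run
   while the database is connected */
FOR EACH Customer NO-LOCK:
    CREATE ttCust.
    ASSIGN ttCust.CustNum  = Customer.CustNum
           ttCust.CustName = Customer.Name.
END.

DISCONNECT sports2000.

/* the temp-table lives in session memory, so it is still readable here */
FOR EACH ttCust:
    DISPLAY ttCust.CustNum ttCust.CustName.
END.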
I have an Ionic App using SQLite. I don't have any problems with implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app it will import the SQL file and fill the database; will it overwrite my user data too, since it is all in the same database?
I assume that you could always init your tables using string queries inside your code, so the problem is not that you are importing a .sql file as such. Right?
According to https://www.sqlitetutorial.net/sqlite-create-table/, you can always create a table with the optional IF NOT EXISTS clause (the square brackets in the tutorial's syntax just mark the clause as optional). Writing a query like:
CREATE TABLE IF NOT EXISTS schema_name.table_name (
    column_1 data_type PRIMARY KEY
);
you let SQLite decide whether to create the table, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you use a 'BEGIN TRANSACTION' - 'COMMIT' procedure around the import.
I'm giving my answer assuming that you keep the imported data and the user data in distinct tables, so you can control what you populate and what you don't. Is that right?
What I usually do is have an SQL file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I update the configuration tables with the configuration data I have at that time (that is why we use http.get: so that in the future we can fetch an updated configuration file from a remote repo), and create the user data table only if user_data is not there yet (hopefully only on the initial start).
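For example, a sketch of that flow in Ionic, assuming @ionic-native/sqlite and @ionic-native/sqlite-porter are injected as this.sqlite and this.sqlitePorter (the URL and database name are placeholders):
// fetch the latest configuration script, then run it through SQLitePorter
this.http.get('https://example.com/config.sql', { responseType: 'text' })
  .subscribe(sql => {
    this.sqlite.create({ name: 'app.db', location: 'default' })
      .then(db => this.sqlitePorter.importSqlToDb(db, sql))
      .then(() => console.log('configuration tables refreshed'))
      .catch(e => console.error('import failed', e));
  });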
Conclusion: It's always good practice, in my opinion, to trust the database product and let it carry out any operation that would be risky if you implemented it yourself in your code, since it gives you tools for exactly that. For example, the IF NOT EXISTS keyword is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to the create-database procedure: when SQLite connects to a database file that doesn't exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this db, and if the file is not there, it will create it.
I have a performance issue with multiple temporary tables that I'm trying to solve with RecordSortedList, but I'm getting strange results. I have a temporary table that has a couple hundred thousand records inserted into it, which is then used elsewhere for joins to other temporary tables. After trace-parsing this solution, the problem is that all the individual inserts take too long, so I was hoping to use a RecordSortedList to bulk-insert into the staging table. However, I can't find a handle to the temporary table after the RecordSortedList.insertDatabase() call.
I've tried something like this:
RecordSortedList tmpTableSortedList;
MyTempTable      myTempTable;
AssetTrans       assetTrans;
int              i = 1;

tmpTableSortedList = new RecordSortedList(tableNum(MyTempTable));
tmpTableSortedList.sortOrder(fieldNum(MyTempTable, LineNum));

// the real scenario gathers data in a much more complicated way; this is just a sample
while select * from assetTrans
{
    myTempTable.AssetGroup = assetTrans.AssetGroup;
    myTempTable.LineNum = i;
    tmpTableSortedList.ins(myTempTable);
    i++;
}
tmpTableSortedList.insertDatabase();
// strange things happen here
MyTempTable     myTempTableCopy;
AnotherTmpTable anotherTmpTable;

tmpTableSortedList.first(myTempTableCopy); // returns a buffer, but not a buffer usable in a join

// does not work, I imagine because myTempTableCopy isn't actually pointing to the
// records inserted above; somehow the temp table is out of scope
while select * from anotherTmpTable
    join myTempTableCopy
    where anotherTmpTable.id == myTempTableCopy.id
{
    // logic
}
Is there a way to get a pointer to the temp table after the call to RecordSortedList.insertDatabase()? I've also tried linkPhysicalTable() and a few other things, but maybe RecordSortedList was not supposed to be used with tempDb tables?
Edit: as Aliaksandr points out below, this works with RecordInsertList instead of RecordSortedList.
but maybe RecordSortedList was not supposed to be used with tempDb tables?
Error message when using TempDb tables:
RecordInsertList or RecordSortedList operations are not allowed with database temporary tables.
So it's not allowed, which might make sense because RecordSortedList is a memory-based object and TempDb tables are not. I would have thought you could, though, because I'm not sure there's a huge difference between a TempDb table and a regular table when both are stored on disk.
If you wanted to use an InMemory table, look at \Classes\CustVendSettle, specifically the variable rslTmpOverUnderReverseTax, which uses an InMemory table.
If TempDb tables were allowed, you would use getPhysicalTableName() to get the handle, combined with useExistingTempDBTable().
Or did I misread your question?
does not work, I imagine because the myTempTableCopy isn't actually pointing to the inserted records above; somehow the temp table is out of scope.
The new method of RecordSortedList has an additional Common parameter where you should pass your tempDb table buffer.
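For example (a sketch only; verify the exact constructor signature in your AX version):
MyTempTable      myTempTable; // tempDb table buffer
RecordSortedList tmpTableSortedList;

// Passing the buffer to new() ties the list to the same physical tempDb
// table instance, so the buffer stays usable after insertDatabase().
tmpTableSortedList = new RecordSortedList(tableNum(MyTempTable), myTempTable);
tmpTableSortedList.sortOrder(fieldNum(MyTempTable, LineNum));

// ... fill with ins() and call insertDatabase() as in the question ...

// myTempTable now points at the populated tempDb table and can be joined.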
Error message when using TempDb tables:
RecordInsertList or RecordSortedList operations are not allowed with database temporary tables.
So it's not allowed, which might make sense because RecordSortedList is a memory-based object and TempDb tables are not.
Although the message says we can't use temporary tables for such operations, we in fact can. We just need to be careful, because the code must be executed on the server.
RecordSortedList objects must be server-located before the insertDatabase method can be called. Otherwise, an exception is thrown.
I have a temporary table that has a couple hundred thousand records being inserted into it
There is no limit to the size of a RecordSortedList object, but they are completely memory-based, so there are potential memory-consumption problems; this may not be the best solution in your case.
What is the purpose of the NODE_PROPERTIES table in the database? How do we get this table populated with key-value pairs, and how do we query it? And how do we query data in the other NODE tables, like NODE_INFOS, NODE_NAMED_IDENTITIES and NODE_INFO_HOSTS? Is there any service-level function available in CordaRPCClient to do that? We would like to store some extra properties for each node.
The NODE_PROPERTIES table is used for internal purposes to store information that doesn't justify having its own table (currently, whether or not the node was in flow-drain mode when it was last stopped).
Feel free to store additional key-value pairs there, as long as they don't clash with keys used for internal purposes (a clash is unlikely, as we currently use long key-names to store information in this table).
You can get access to the node's database via the node's ServiceHub, which is available inside flows and services. The Flow DB sample shows an example of a service that connects, reads and writes directly to the node's database: https://github.com/corda/samples.
You can also connect directly to the node via JDBC (e.g. from a client or server). The node lists its JDBC database connection string at start-up. You can also set it in the node's configuration file, as shown here: https://docs.corda.net/corda-configuration-file.html#examples.
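For instance, a sketch of writing one of those key-value pairs from inside a flow (the property_key/property_value column names are an assumption based on the default node schema; verify them against your node's database):
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC

@StartableByRPC
class SetNodePropertyFlow(private val key: String,
                          private val value: String) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // jdbcSession() hands back a plain java.sql.Connection into the node's DB
        serviceHub.jdbcSession()
            .prepareStatement("INSERT INTO NODE_PROPERTIES (property_key, property_value) VALUES (?, ?)")
            .use { stmt ->
                stmt.setString(1, key)
                stmt.setString(2, value)
                stmt.executeUpdate()
            }
    }
}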
I have been using DBExpress connections to various databases (mostly MSSQL, Sybase SQL) with:
SQLConnection -> SQLDataSet -> DataSetProvider -> ClientDataSet.
I need to connect to the databases in a fashion that does NOT write changes back to the tables.
So, the DataSetProvider has ResolveToDataSet:=false, and the ClientDataSet has LogChanges:=false (for performance).
In use I connect the SQLConnection, open the ClientDataSet, and then close the SQLConnection.
I can then manipulate the ClientDataSet without fear of changing the underlying table.
I'm new to FireDAC (XE8), and I'm looking to establish the same sort of scenario - load data into memory from a SQL query, and safely manipulate this data in memory without accidentally updating the source table(s).
I'm currently using:
FDConnection -> FDQuery and a FDMemTable
The FDQuery has CachedUpdates := true and I perform:
FDQ.Open;
FDQ.FetchAll;                          // pull the entire result set into the query
FDMemT.CloneCursor(FDQ, True, False);  // share the fetched data with the mem table
FDQ.Close;
I think this is pretty much equivalent - I end up with the data in an FDMemTable such that editing the data will not be able to "write back" to tables.
One other issue - in the dbExpress scenario, I often add InternalCalc Fields to the ClientDataSet. It isn't clear to me that I can do that (and have persistent field names) if I'm performing a CloneCursor operation.
Is there a simpler way of ensuring the data never updates the database? Setting the FDQuery to read-only doesn't work - I often have to modify records (but do not wish to persist these changes).
TIA.
EdB
There is a much easier way: use the FDMemTable's CopyDataSet method. This will copy both the data and the metadata. Changes to the FDMemTable will not be written to the underlying dataset, and internal calc fields (and calculated fields) will be copied as well, though you'll have to wire up the OnCalcFields event handler.
FDMemTable1.CopyDataSet(FDQuery1, [coStructure, coRestart, coAppend]);
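Re-wiring the calculation might look like this (the field names are illustrative):
procedure TForm1.FDMemTable1CalcFields(DataSet: TDataSet);
begin
  // same calculation the InternalCalc fields performed on the ClientDataSet
  DataSet.FieldByName('NetAmount').AsCurrency :=
    DataSet.FieldByName('Amount').AsCurrency -
    DataSet.FieldByName('Discount').AsCurrency;
end;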
I'm reading in data from an SQLite database table into a data.frame with R's DBI. Often (as often as every 5 secs), new records get added into the database table externally, or existing ones updated/deleted, at which point I need to propagate these changes to my data.frame.
So the question is how can I hook onto and respond to these database events in R? I don't want to have to keep querying the database every 5 secs just to make sure nothing has changed. Is there some callback mechanism at my disposal?
If you have access to the C code that is writing your SQL data, then you can implement a callback:
http://www.sqlite.org/c3ref/update_hook.html
and then in your callback function you could update the timestamp of a file if the table being modified is one your R code cares about. Then your R code checks the timestamp of that file, and only if it has changed does it need to query the SQLite database.
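A sketch of that callback (the table and marker-file names are illustrative):
#include <sqlite3.h>
#include <string.h>
#include <utime.h>

/* Touch a marker file whenever a watched table changes through THIS
   connection; the R side then polls only the file's mtime. */
static void on_update(void *arg, int op, const char *db_name,
                      const char *table, sqlite3_int64 rowid)
{
    if (strcmp(table, "my_table") == 0)
        utime("/tmp/db_changed.marker", NULL); /* NULL = set mtime to now */
}

void register_hook(sqlite3 *db)
{
    sqlite3_update_hook(db, on_update, NULL);
}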
Now I don't know whether you could add a callback to the SQLite connection held by R and expect to get a callback if another SQLite connection/process changes the database table. I doubt it; I suspect the callbacks are only triggered if the connection they are registered with does the update, because otherwise all sorts of asynchronous things would have to happen, and there's no event handler.
Another idea is to use triggers to maintain a table of modification times. Define triggers on all the tables you care about so that they update a row in a "last modified" table. Then use the file modification time to check for any change to the database, and your R code only has to query the "last modified" table to see which specific table has changed since the last check.
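For example, a sketch for one watched table (the names are illustrative; you'd add matching AFTER UPDATE and AFTER DELETE triggers as well):
CREATE TABLE IF NOT EXISTS last_modified (
    table_name  TEXT PRIMARY KEY,
    modified_at INTEGER NOT NULL
);

CREATE TRIGGER IF NOT EXISTS orders_touch AFTER INSERT ON orders
BEGIN
    INSERT OR REPLACE INTO last_modified (table_name, modified_at)
    VALUES ('orders', strftime('%s', 'now'));
END;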