FireDAC equivalent of DBExpress briefcase model (Delphi XE8)

I have been using DBExpress connections to various databases (mostly MSSQL, Sybase SQL) with:
SQLConnection -> SQLDataSet -> DataSetProvider -> ClientDataSet.
I need to connect to the databases in a fashion that does NOT write changes back to the tables.
So, the DataSetProvider has ResolveToDataSet:=false, and the ClientDataSet has LogChanges:=false (for performance).
In use I connect the SQLConnection, open the ClientDataSet, and then close the SQLConnection.
I can then manipulate the ClientDataSet without fear of changing the underlying table.
I'm new to FireDAC (XE8), and I'm looking to establish the same sort of scenario - load data into memory from a SQL query, and safely manipulate this data in memory without accidentally updating the source table(s).
I'm currently using:
FDConnection -> FDQuery, and an FDMemTable
The FDQuery has CachedUpdates := true and I perform:
FDQ.Open;
FDQ.FetchAll;
FDMemT.CloneCursor(FDQ,true,false);
FDQ.Close;
I think this is pretty much equivalent - I end up with the data in an FDMemTable such that editing the data will not be able to "write back" to tables.
One other issue - in the dbExpress scenario, I often add InternalCalc Fields to the ClientDataSet. It isn't clear to me that I can do that (and have persistent field names) if I'm performing a CloneCursor operation.
Is there a simpler way of ensuring the data never updates the database? Setting the FDQuery to read-only doesn't work - I often have to modify records (but do not wish to persist these changes).
TIA.
EdB

There is a much easier way: use FDMemTable's CopyDataSet method. This will copy both the data and the metadata. Changes to the FDMemTable will not be written to the underlying dataset, and internal calc fields (and calculated fields) will be copied as well, though you'll have to wire up the OnCalcFields event handler yourself.
FDMemTable1.CopyDataSet(FDQuery1, [coStructure, coRestart, coAppend]);
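For example, a minimal sketch (the method name is illustrative; it assumes FDQuery1 has a persistent fkInternalCalc field and an existing FDQuery1CalcFields handler on the form):
procedure TForm1.LoadSnapshot;
begin
  FDQuery1.Open;
  FDQuery1.FetchAll;
  // CopyDataSet copies the structure (including calc field definitions) and the data.
  FDMemTable1.CopyDataSet(FDQuery1, [coStructure, coRestart, coAppend]);
  FDQuery1.Close;
  // Event handlers are not copied, so re-attach the calc handler; the
  // memory table then computes its calc fields independently of the query.
  FDMemTable1.OnCalcFields := FDQuery1CalcFields;
end;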


RecordSortedList and temporary table

I have a performance issue with multiple temporary tables that I'm trying to solve with RecordSortedList, but I'm getting strange results. I have a temporary table that has a couple hundred thousand records inserted into it, which is then used elsewhere for joins to other temporary tables. After trace-parsing this solution, the problem is that all the individual inserts take too long, so I was hoping to use a RecordSortedList to bulk-insert into the staging table. However, I can't find a handle to the temporary table after the RecordSortedList.insertDatabase() call.
I've tried something like this:
RecordSortedList tmpTableSortedList;
MyTempTable myTempTable;
AssetTrans assetTrans;
int i = 1;
tmpTableSortedList = new RecordSortedList(tableNum(MyTempTable));
tmpTableSortedList.sortOrder(fieldNum(MyTempTable, LineNum));
//the real scenario has a much more complicated data gathering, but just for sample
while select * from assetTrans
{
    myTempTable.AssetGroup = assetTrans.AssetGroup;
    myTempTable.LineNum = i;
    tmpTableSortedList.ins(myTempTable);
    i++;
}
tmpTableSortedList.insertDatabase();
//strange things happen here
MyTempTable myTempTableCopy;
AnotherTmpTable anotherTmpTable;
tmpTableSortedList.first(myTempTableCopy); // returns a buffer, but not a usable buffer for the join
// The join below does not work, I imagine because myTempTableCopy isn't
// actually pointing at the records inserted above; somehow the temp table
// is out of scope.
while select * from anotherTmpTable
    join myTempTableCopy
    where anotherTmpTable.id == myTempTableCopy.id
{
    //logic
}
Is there a way to get a pointer to the temp table after the call to RecordSortedList.insertDatabase()? I've also tried linkPhysicalTable() and a few other things, but maybe RecordSortedList was not supposed to be used with tempDb tables?
Edit: As Aliaksandr points out below, this works with RecordInsertList instead of RecordSortedList.
"but maybe RecordSortedList was not supposed to be used with tempDb tables?"
Error message when using TempDb tables:
RecordInsertList or RecordSortedList operations are not allowed with database temporary tables.
So it's not allowed, which might make sense because RecordSortedList is a memory-based object and TempDb tables are not. I would have thought you could, though, because I'm not sure there's a huge difference between a TempDb table and a regular table when they're both stored on disk.
If you wanted to use an InMemory table, look at \Classes\CustVendSettle, specifically the variable rslTmpOverUnderReverseTax, which uses an InMemory table.
If TempDb tables were allowed, you would use getPhysicalTableName() to get the handle, combined with useExistingTempDBTable().
Or did I misread your question?
"does not work, I imagine because the myTempTableCopy isn't actually pointing to the inserted records above; somehow the temp table is out of scope."
The new method of RecordSortedList has an additional Common parameter where you should pass your tempDB table buffer.
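A sketch of the idea (hedged: the exact parameter list of RecordSortedList's new method varies by version, so check your kernel's documentation; here the tempDB buffer is assumed to be accepted after the table id):
MyTempTable myTempTable; // tempDB table buffer, instantiated/linked first
RecordSortedList tmpTableSortedList;
// Passing the buffer ties the list to this physical temp table instance,
// so reads through myTempTable after insertDatabase() see the same rows.
tmpTableSortedList = new RecordSortedList(tableNum(MyTempTable), myTempTable);
tmpTableSortedList.sortOrder(fieldNum(MyTempTable, LineNum));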
"Error message when using TempDb tables:
RecordInsertList or RecordSortedList operations are not allowed with database temporary tables.
So it's not allowed, which might make sense because RecordSortedList is a memory-based object and TempDb tables are not."
Although the message says we can't use temporary tables for such operations, in fact we can. We just need to be careful, because the code must be executed on the server.
"RecordSortedList objects must be server-located before the insertDatabase method can be called. Otherwise, an exception is thrown."
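One way to satisfy that is to keep the whole operation on the server tier, for example (a sketch; the class and method names are illustrative):
class TmpTableLoader
{
    // Construct, fill, and flush the list entirely on the server, so it
    // is server-located by the time insertDatabase() is called.
    server static void fillAndInsert()
    {
        MyTempTable myTempTable;
        RecordSortedList rsl = new RecordSortedList(tableNum(MyTempTable));
        myTempTable.LineNum = 1;
        rsl.ins(myTempTable);
        rsl.insertDatabase();
    }
}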
"I have a temporary table that has a couple hundred thousand records being inserted into it"
There is no limit to the size of a RecordSortedList object, but it is completely memory-based, so there are potential memory consumption problems. This may not be the best solution in your case.

Write to intended DB location with odbc::dbWriteTable() function

There's a bug-like phenomenon in the odbc library that has been a known issue for years with the older, slower RODBC library; however, the work-around solutions for RODBC do not seem to work with odbc.
The problem:
Very often a person may wish to create a SQL table from a 2-dimensional R object. In this case I'm doing so with SQL Server (i.e. T-SQL). The account used to authenticate, e.g. "sysadmin-account", may be different from the owner and creator of the database that will house the tables being created, but the account has full read/write permissions on the targeted DB.
The odbc call to do so goes like this, and it runs "successfully":
library(odbc)
db01 <- odbc::dbConnect(odbc::odbc(), "UserDB_odbc_name")
odbc::dbWriteTable(db01, "UserDB.dbo.my_table", r_data)
This connects and creates a table, but instead of creating the table in the intended location of UserDB.dbo.my_table, it gets created as UserDB.sysadmin-account.dbo.my_table.
Technically, .dbo is a child of the UserDB database. What this is doing is creating a new child object of UserDB called sysadmin-account, with a child .dbo of its own, and then creating the table within there.
With RODBC and some other libraries/languages we found that a work-around was to change the reference to the target table location to ".dbo.my_table" or, in some cases, "..dbo.my_table". Also, I think running a query to USE UserDB sometimes used to help with RODBC.
None of these solutions seems to have any effect with odbc.
Updates:
- Tried the DBI library as a potential substitute, to no avail.
- Found a work-around: send the data to a global temp table, then use a SQL statement to copy it from the temp table to the intended location (see the sketch below).
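A minimal sketch of that work-around, assuming SQL Server (the ## prefix makes the staging table global and visible across sessions; table names are illustrative):
library(odbc)
db01 <- odbc::dbConnect(odbc::odbc(), "UserDB_odbc_name")
# Stage the data in a global temp table; it lands in tempdb, so the
# schema mangling described above does not apply.
odbc::dbWriteTable(db01, "##my_table_stage", r_data)
# Copy into the intended location with explicit three-part naming,
# then drop the staging table.
odbc::dbExecute(db01, "SELECT * INTO UserDB.dbo.my_table FROM ##my_table_stage")
odbc::dbExecute(db01, "DROP TABLE ##my_table_stage")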

R - how to react to database inserts/updates/deletes?

I'm reading data from an SQLite database table into a data.frame with R's DBI. Often (as often as every 5 seconds), new records get added to the database table externally, or existing ones are updated or deleted, at which point I need to propagate these changes to my data.frame.
So the question is how can I hook onto and respond to these database events in R? I don't want to have to keep querying the database every 5 secs just to make sure nothing has changed. Is there some callback mechanism at my disposal?
If you have access to the C code that is writing your SQL data, then you can implement a callback:
http://www.sqlite.org/c3ref/update_hook.html
and then in your callback function you could update the timestamp of a file if the table being modified is one your R code cares about. Then your R code checks the timestamp of that file, and if it's changed, only then does it need to query the SQLite database.
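A minimal sketch of that idea in C (the marker file path and table name are illustrative):
/* Register an update hook on the writer's connection and "touch" a marker
   file whenever a table of interest changes; the R side polls the file's
   modification time. */
#include <sqlite3.h>
#include <stdio.h>
#include <string.h>
static void on_change(void *arg, int op, const char *db_name,
                      const char *table, sqlite3_int64 rowid)
{
    (void)arg; (void)op; (void)db_name; (void)rowid;
    if (strcmp(table, "my_table") == 0) {
        FILE *f = fopen("/tmp/my_table.changed", "w");
        if (f) fclose(f);
    }
}
int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("mydata.sqlite", &db) != SQLITE_OK) return 1;
    sqlite3_update_hook(db, on_change, NULL);
    /* ... the writer's normal INSERT/UPDATE/DELETE work happens here ... */
    sqlite3_close(db);
    return 0;
}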
Now, I don't know whether you could add a callback to the SQLite connection held by R and expect it to fire when another SQLite connection/process changes the database table. I doubt it; I suspect the callbacks are only triggered if the connection they are registered with does the update, because otherwise all sorts of asynchronous things would have to happen, and there's no event handler.
Another idea is to use triggers to maintain a table of modification times. Define triggers on all the tables you care about so that they update a row in a "last modified" table. Then use the database file's modification time to check for any change at all, and only then query the "last modified" table to see which specific table has changed since the last check.
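For example, a sketch of such a trigger in SQLite (table names are illustrative; similar AFTER UPDATE and AFTER DELETE triggers would be added too):
CREATE TABLE IF NOT EXISTS last_modified (
    table_name  TEXT PRIMARY KEY,
    modified_at TEXT
);
CREATE TRIGGER IF NOT EXISTS my_table_touch AFTER INSERT ON my_table
BEGIN
    -- Record the latest change time for this table.
    INSERT OR REPLACE INTO last_modified(table_name, modified_at)
    VALUES ('my_table', datetime('now'));
END;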

Efficient way to load lists of objects from database to instantiate a single object

My situation
I have a C# object which contains some lists. One of these lists is, for example, a list of tags, i.e. a list of C# "SystemTag" objects. I want to instantiate this object in the most efficient way.
In my database structure, I have the following tables:
dbObject - the table which contains some basic information about my c# object
dbTags - a list of all available tags
dbTagConnections - a list which has 2 fields: TagID and ObjectID (to make sure an object can have several tags)
(I have several other similar types of data)
This is how I do it now:
1. Retrieve my object from the DB using an ID.
2. Send the DB object to an "object factory" pattern, which realises we have to get the tags (and the other lists), and then calls the DAL layer using the ID of our C# object.
3. The DAL layer retrieves the data from the DB.
4. This data is sent to a "TagFactory" pattern, which converts it to tags.
5. We are back at the object factory.
This is really inefficient and we have many calls to the database. This especially gives problems as I have 4+ types of lists.
What have I tried?
I am not really good at SQL, but I've tried the following query:
SELECT * FROM dbObject p
LEFT JOIN dbTagConnections c ON p.Id = c.PointId
LEFT JOIN dbTags t ON c.TagId = t.dbTagId
WHERE ...
However, this retrieves as many rows as there are tag connections - so I don't see joins as a good way to do this.
Other info...
Using .NET Framework 4.0
Using LINQ to SQL (BLL and DAL layer with Factory patterns in the BLL to convert from DAL objects)
...
So - how do I solve this as efficient as possible? :-) Thanks!
At first sight I don't see your current way of working as "inefficient" (with the information provided). I would replace the query:
SELECT * FROM dbObject p
LEFT JOIN dbTagConnections c ON p.Id = c.PointId
LEFT JOIN dbTags t ON c.TagId = t.dbTagId
WHERE ...
with two calls to the DAL's methods: first retrieve the object's main data (1), and then get only the data of the related tags (2), so that your factory can fill up the object's tags list:
(1)
SELECT * FROM dbObject WHERE Id = @objectId
(2)
SELECT t.* FROM dbTags t
INNER JOIN dbTagConnections c ON c.TagId = t.dbTagId
INNER JOIN dbObject p ON p.Id = c.PointId
WHERE p.Id = @objectId
If you have many objects and the amount of data is small (meaning you are not going to manage big volumes), then I would look for an ORM-based solution such as Entity Framework.
I (still) feel comfortable writing SQL queries in the DAOs to keep all the queries being sent to the DB server under control, but ultimately that is because our situation requires it. I don't see any inconvenience in having to query the database to recover, first, the object data (SELECT * FROM dbObject WHERE Id = @myId) and fill the object instance, and then query the DB again to recover all the satellite data that you may need (the tags, in your case).
You would have to be more specific about your scenario for us to provide more valuable recommendations. Hope this is useful to you anyway.
We used stored procedures that returned multiple resultsets in a similar situation in a previous project using Java, MS SQL Server, and plain JDBC.
The stored procedure takes the ID corresponding to the object to be retrieved and returns the row used to build the primary object, followed by one recordset for each one-to-many relationship with the primary object. This allowed us to build the object in its entirety in a single database interaction.
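A sketch of that pattern against the schema described above (the procedure name is illustrative):
CREATE PROCEDURE GetObjectGraph @objectId INT
AS
BEGIN
    -- Resultset 1: the primary object row
    SELECT * FROM dbObject WHERE Id = @objectId;
    -- Resultset 2: its tags (add one SELECT per one-to-many relationship)
    SELECT t.*
    FROM dbTags t
    INNER JOIN dbTagConnections c ON c.TagId = t.dbTagId
    WHERE c.ObjectId = @objectId;
END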
Have you thought about using Entity Framework? You would then interact with your database in the same way as you would interact with any other type of class in your application.
It's really simple to set up, and you create the relationships between your database tables in the entity designer; this will give you all the foreign keys you need to call related objects. If you already have all your keys set up in the database, the entity designer will use those instead. Creating all the objects is as simple as selecting 'Create model from database', and when you make changes to your database you simply right-click in the designer and choose 'Update model from database'.
The framework takes care of all the SQL for you, so you don't need to worry about that (in most cases).
A great starting place to get up and running with this would be here, and here
Once you have it all set up you can use LINQ to easily query the database.
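For instance, a hypothetical LINQ query against a generated model (the entity set and navigation property names are illustrative):
int objectId = 42; // example id
using (var ctx = new MyModelEntities())
{
    // Eager-load the tag connections and their tags in one round trip.
    var obj = ctx.dbObjects
                 .Include("dbTagConnections.dbTag")
                 .First(o => o.Id == objectId);
}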
You will find this a lot more efficient than going down the table adapter route (assuming that's what you're doing at the moment?).
Sorry if I missed something and you're already using this. :)
As far as I can guess, your database already exists and you are familiar enough with SQL.
You might want to use a micro-ORM, like PetaPoco.
To use it, you write classes that match the tables you have in the database (there are T4 generators to do this automatically with Visual Studio 2010). Then you can write wrappers to create richer business objects (you can use ValueInjecter to do it; it is the simplest I have ever used), or you can use them as they are.
PetaPoco handles insert/update operations and retrieves generated IDs automatically.
Because PetaPoco handles multiple relationships too, it seems to fit your requirements.
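For example, a minimal hypothetical PetaPoco call (the connection string name and the DbTag class are illustrative):
var db = new PetaPoco.Database("MyConnectionString");
var objectId = 42; // example id
// Fetch the tags for one object; @0 is PetaPoco's positional parameter syntax.
var tags = db.Fetch<DbTag>(
    @"SELECT t.* FROM dbTags t
      INNER JOIN dbTagConnections c ON c.TagId = t.dbTagId
      WHERE c.ObjectId = @0", objectId);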

MS Access CREATE PROCEDURE Or use Access Macro in .NET

I need to be able to run a query such as
SELECT * FROM atable WHERE MyFunc(afield) = "some text"
I've written MyFunc in a VB module, but the query results in "Undefined function 'MyFunc' in expression." when executed from .NET.
From what I've read so far, functions in Access VB modules aren't available in .NET due to security concerns. There isn't much information on the subject, but this avenue seems like a dead end.
The other possibility is through the CREATE PROCEDURE statement which also has precious little documentation: http://msdn.microsoft.com/en-us/library/bb177892%28v=office.12%29.aspx
The following code does work and creates a query in Access:
CREATE PROCEDURE test AS SELECT * FROM atable
However I need more than just a simple select statement - I need several lines of VB code.
While experimenting with the CREATE PROCEDURE statement, I executed the following code:
CREATE PROCEDURE test AS
Which produced the error "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'."
This seems to indicate that there's a SQL 'PROCEDURE' statement, so then I tried
CREATE PROCEDURE TEST AS PROCEDURE
Which resulted in "Syntax error in PROCEDURE clause."
I can't find any information on the SQL 'PROCEDURE' statement - maybe I'm just reading the error message incorrectly and there's no such beast. I've spent some time experimenting with the statement but I can't get any further.
In response to the suggestions to add a field to store the value, I'll expand on my requirements:
I have two scenarios where I need this functionality.
In the first scenario, I needed to let the user search on the soundex of a field, and since there's no soundex SQL function in Access, I added a field to store the soundex value for every field in every table where the user wants to find records that "sound like" an entered value. I update the soundex value whenever the parent field value changes. It's a fair bit of overhead, but I considered it necessary in this instance.
For the second scenario, I want to normalize the spacing of a space-concatenation of field values and optionally strip out user-defined characters. I can come very close to achieving the desired value with a combination of TRIM and REPLACE functions. The value would only differ if three or more spaces appeared between words in the value of one of the fields (an unlikely scenario). It's hard to justify the overhead of an extra field on every field in every table where this functionality is needed. Unless I get specific feedback from users about the issue of extra spaces, I'll stick with the TRIM & REPLACE value.
My application is database agnostic (or just not very religious... I support 7). I wrote a UDF for each of the other 6 databases that does the space normalization and character stripping much more efficiently than the built-in database functions. It really annoys me that I can write the UDF in Access as a VB macro and use that macro within Access but I can't use it from .NET.
I do need to be able to index on the value, so pulling the entire column(s) into .NET and then performing my calculation won't work.
I think you are running into the ceiling of what Access can do (and trying to go beyond it). Access really doesn't have the power to run the kind of complex T-SQL-style statements you are attempting. However, there are a couple of ways to accomplish what you are looking for.
First, if the results of MyFunc don't change often, you could create a function in a module that loops through each record in atable and runs your MyFunc against it. You could either store that data in the table itself (in a new column) or you could build an in-memory dataset that you use for whatever purposes you want.
The second way of doing this is to do the manipulation in .NET since it seems you have the ability to do so. Do the SELECT statement and pull out the data you want from Access (without trying to run MyFunc against it). Then run whatever logic you want against the data and either use it from there or put it back into the Access database.
Why don't you want to create an additional field in your atable, where atable.afieldX = MyFunc(atable.afield)? All you need is to run an UPDATE command once.
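For example (run from inside Access, where MyFunc is available; afieldX is a hypothetical new column):
UPDATE atable SET afieldX = MyFunc(afield);
The .NET query then becomes a plain comparison that can also be indexed:
SELECT * FROM atable WHERE afieldX = "some text"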
You should try to write a SQL Server function MyFunc. This way you will be able to run the same query in SQLserver and in Access.
A few useful links so you can get started:
MSDN article about user defined functions: http://msdn.microsoft.com/en-us/magazine/cc164062.aspx
SQLServer user defined functions: http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
SQLServer string functions: http://msdn.microsoft.com/en-us/library/ms181984.aspx
What version of JET (now called ACE) are you using?
I mean, it should come as no surprise that if you're going to use some Access VBA code, then you need the VBA library and a copy of MS Access loaded and running.
However, in Access 2010 we now have table triggers and stored procedures. These stored procedures do NOT require VBA and in fact run at the engine level. I have a table trigger and soundex routine here that shows how this works:
http://www.kallal.ca/searchw/WebSoundex.htm
The above means that if Access, or VB.NET, or even FoxPro via ODBC modifies a row, the table trigger code will fire, run, and save the soundex value in a column for you. This feature also works if you use the new web publishing feature in Access 2010. So, while the above article is written from the point of view of using Access web services (available in Office 365 and SharePoint), the above soundex table trigger will also work in a stand-alone Access and JET (ACE) only application.
