Qt: How to lock an SQLite database during multiple operations

I have a QML application that the user interacts with. A timer listens to a server for work orders and inserts the information into the application's SQLite database. The user also makes changes to the data (update, delete, etc.) in SQLite.
My question is: how do I prevent concurrent operations on an SQLite table? Only one operation at a time should take effect on SQLite (select, delete, insert, update, ...). Could a Mutex.lock structure be used for this? Or is there anything wrong with running multiple operations against SQLite?

The first thing you should do is read up on SQLite locking; the docs have a section about it: https://www.sqlite.org/lockingv3.html
To summarise: SQLite takes a lock for modifications such as an insert or update, but doesn't create a lock when reading. However, whilst a modification is in progress and holds a lock, reads won't be able to access the database.
I wouldn't worry too much about locking on reads; the state should be fine to share at that stage.
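If you do want to guarantee one-operation-at-a-time yourself (the mutex idea from your question), the pattern is a single lock guarding every database call. A minimal sketch, written with Python's sqlite3 module for brevity (the work_orders table is assumed; in Qt the equivalent would be a QMutexLocker around your QSqlQuery code):

import sqlite3
import threading

# One connection shared by the timer thread and the UI thread,
# guarded by a single lock so only one operation runs at a time.
db_lock = threading.Lock()
conn = sqlite3.connect("app.db", check_same_thread=False)

def insert_work_order(order_id, payload):
    with db_lock:   # serialize all database operations
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO work_orders (id, payload) VALUES (?, ?)",
                (order_id, payload),
            )

def fetch_orders():
    with db_lock:
        return conn.execute("SELECT id, payload FROM work_orders").fetchall()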

Related

Is there any good way to pass in bytecode directly to sqlite3?

I'm working on a tool that allows Python developers to write pythonic code to interact with a sqlite3 database, similar to sqlalchemy but without the "translation" phase. If I can generate a sqlite3 prepared statement, how can I directly pass it to the evaluation system?
As a rough example, here's how I envision a user interacting with my tool:
myTable = Table("field1", "field2", "field3")
myTable.insert("foo", "bar", "baz")
select = myTable.select("field1")

print(select)
# output: ["foo"]
There is no (public) API in SQLite3 that allows you to execute pre-built SQLite bytecode. The bytecode for an SQL statement can be viewed with the EXPLAIN command, but that is meant for debugging and learning purposes, not for what you're trying to do.
And for most purposes, you shouldn't need it. If you're worried about the time spent compiling a prepared statement, sqlite3_stmt objects can be kept for the lifetime of the sqlite3 database connection they were created with. Prepared statements that have been executed can be reset, allowing them to be executed again. So as long as the database connection exists, you can compile a statement once and use it as many times as you need.
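As a concrete illustration of that reuse, Python's sqlite3 module (which your tool would wrap) keeps a per-connection cache of compiled statements, controlled by the cached_statements argument to connect, so re-executing the same parameterized SQL string reuses the compiled statement instead of recompiling it. A minimal sketch:

import sqlite3

conn = sqlite3.connect(":memory:", cached_statements=128)
conn.execute("CREATE TABLE t (field1, field2, field3)")

insert_sql = "INSERT INTO t VALUES (?, ?, ?)"  # compiled once per connection
for row in [("foo", "bar", "baz"), ("qux", "quux", "corge")]:
    conn.execute(insert_sql, row)  # statement is reset and re-bound, not recompiled
conn.commit()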
But that's about it. There is no mechanism to persist a prepared statement beyond the lifespan of the sqlite3 connection. You can't extract the bytecode by any public API, and you can't use some bytecode you've obtained to reconstitute a prepared statement.
If you want persistence beyond the connection, you need to store the SQL statement text somewhere persistent and simply recompile the prepared statement when you reconnect to the database. That one recompilation (or several, depending on how many statements you store) shouldn't be much of a burden over the lifespan of your application.

Locking transactions (SQL Server + EF5)

I am trying to debug a performance issue in an ASP.NET application using .NET 4.5, EF5 (with a 2nd level cache and lazy loaded navigation properties) and SQL Server 2014. We are experiencing a number of lock waits in SQL Server. When I look at the locking transactions, they contain a very quick UPDATE and then a very large SELECT. The UPDATE is ostensibly a necessary one, but I am confused as to why the SELECT is being run in the same transaction (and why anything is being selected at all). The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement.
We use the repository pattern for getting data from the db, and DbContext.SaveChanges() for committing changes. I cannot figure out how EF produces a transaction containing both a write and a read, and searching Google hasn't turned up anything relevant.
We have a number of interfaces into the system, and a couple of console applications working on the database as well, but they all go through the same setup/versions of .NET and EF.
I figure that it must be through SaveChanges, since this is (AFAIK) the only time that things are written to the database.
Does anyone here have a hint as to how these locking transactions might be produced?
The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement.
The answer is in your question:
the SELECT is being run in the same transaction
An X (exclusive) lock is always held until the end of the transaction, i.e. until it commits or rolls back. So if a long select follows your quick update in the same transaction, everything the update locked in your table stays locked until the select finishes.
If your business rules permit, you can separate the update and the select into different transactions; you can add an appropriate index on the updated table so that only some rows are locked rather than the whole table; or you can optimize the select so it executes faster.
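The general pattern, sketched with Python's sqlite3 for brevity (table names are made up; in EF the equivalent is making sure the long query does not run inside the same transaction as the SaveChanges that performs the update):

import sqlite3

conn = sqlite3.connect("app.db")

# Quick write in its own short transaction; its lock is released at commit.
with conn:
    conn.execute("UPDATE orders SET status = ? WHERE id = ?", ("done", 42))

# The long read now runs outside that transaction and is not blocked by it.
rows = conn.execute(
    "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id"
).fetchall()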

Best practice for bulk update in DocumentDB

We have a scenario where we need to repopulate a collection with the latest data every hour, whenever a data file arrives in blob storage from external sources, and at the same time we do not want to impact live users while the collection is being updated.
So we have done the following:
Created two databases, with collection 1 in both.
Created another collection in a separate (configuration) database with Active and Passive properties, whose values are Database1 and Database2.
Whenever our web job sees a file in the blob, it checks this configuration database to identify which database is active and which is passive, processes the XML file, and updates the collection in the passive database, since that one is not used by the live feed. Once it is done, it swaps the configuration so the passive database becomes the active one.
Our service always checks which database is active and fetches the data from it to show to the user.
Since the web job has to delete the existing data and insert the new data, we wanted to know: is this the best design we could have come up with? Does deleting and inserting the data cost anything? Is there a better way to do bulk deletes and inserts than doing them sequentially, as we do now?
wanted to know: is this the best design we could have come up with?
As David Makogon said, with your solution you need to manage and pay for multiple databases. If possible, you could instead create the new documents in the same collection and control which set of documents is active in your program logic.
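If you go that route, a rough sketch of the program logic (all names here are hypothetical; load_config and save_config stand in for reading and writing a small configuration document):

# The import writes the fresh documents into the same collection tagged
# with a new generation value; readers filter on the active generation.
def publish_new_generation(load_config, save_config, new_generation):
    config = load_config()  # e.g. {"active_generation": 41}
    config["active_generation"] = new_generation
    save_config(config)

def select_active(docs, config):
    return [d for d in docs if d["generation"] == config["active_generation"]]

Once the new generation is published, the old generation's documents can be deleted at leisure without affecting live readers.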
Does deleting and inserting the data cost anything?
Each operation/request consumes request units, which are billed. For details on Request Units and DocumentDB pricing, see:
What is a Request Unit
DocumentDB pricing details
Is there a better way to do bulk deletes and inserts than doing them sequentially?
A stored procedure provides a way to group operations like inserts and submit them in bulk. You could create the stored procedure and then execute it from your WebJobs function.
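A sketch of what executing such a stored procedure could look like from Python with the azure-cosmos SDK (an assumption on my part: the sproc name bulkImport, the database/collection names, and the partition key are all hypothetical, and your WebJob would do the equivalent through the .NET SDK):

from azure.cosmos import CosmosClient

client = CosmosClient(url="https://<account>.documents.azure.com", credential="<key>")
container = client.get_database_client("Database2").get_container_client("collection1")

# Hand the whole batch of new documents to a server-side stored procedure
# so they are inserted in one round trip instead of one request per document.
docs = [{"id": "1", "pk": "batch-1", "field": "value"}]
result = container.scripts.execute_stored_procedure(
    sproc="bulkImport",       # hypothetical stored procedure name
    partition_key="batch-1",
    params=[docs],
)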

How to transfer data from SQL Server to Informix and vice versa

I want to transfer table data from SQL Server to Informix and vice versa.
The transfer should run on a schedule, and sometimes when the user performs a specific action.
I currently do this with delete and insert transactions, and over the web it takes a long time, between 15 and 30 minutes.
How can I do this operation more simply, taking performance into consideration?
Say I have:
a Vacation table in SQL Server, and I want to transfer all the updated data to the Vacation table in Informix,
and
a Permission table in Informix, and I want to transfer all the updated data to the Permission table in SQL Server.
DISCLAIMER: I am not an SQL Server DBA. However, I have been an Informix DBA for over ten years and can make some recommendations as to its performance.
Disclaimer aside, it sounds like you already have a functional application, but the performance is a show-stopper and that is where you are mainly looking for advice.
There are some technical pieces of information that would be helpful to know, but in their absence, I'm going to make the following assumptions about your environment and application. Please comment or edit your question if I am wrong on any of these.
Database server versions. From the tags, it appears you are using SQL Server 2012. However, I cannot determine the Informix server version, so I will assume you are running at least IDS 11.50.
How the data is being exchanged currently. Are you connecting directly from your .NET application to Informix? I would assume that is the case with SQL Server and will make the same assumption for your Informix connection as well.
Table structures. I assume you have proper indexing on the tables. On the Informix side, dbschema -d dbname -t tablename will give the basic schema.
If you haven't tried it, and as long as you don't have any compliance concerns about doing so, I would suggest exporting the data to a delimited file and loading from that. (Informix normally deals with pipe-delimited files, so you'll either need to change the delimiter on the SQL Server export side to a pipe |, or account for the comma on the Informix import side.) On the Informix end, this would be a
LOAD FROM 'source_file_from_sql_server' DELIMITER '|' INSERT INTO vacation (field1, field2, ..)
For reusability, I would recommend putting this in a stored procedure. Just wrap that load statement inside BEGIN WORK; and COMMIT WORK; to keep your transactional integrity. Michał Niklas suggested some ways to track changes (see his answer below). If there is any correlation between the transfer of data to the vacation table in Informix and the permission table back in SQL Server, I would propose another option: add a trigger to the vacation table so that all new values are also written to a staging table.
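For the delimiter adjustment mentioned above, a small sketch (file names assumed) that rewrites a comma-delimited SQL Server export as the pipe-delimited file the LOAD statement expects:

import csv

# Rewrite a comma-delimited export (vacation.csv, assumed name) as the
# pipe-delimited file (vacation.unl) that Informix's LOAD statement reads.
with open("vacation.csv", newline="") as src, \
     open("vacation.unl", "w", newline="") as dst:
    writer = csv.writer(dst, delimiter="|")
    for row in csv.reader(src):
        writer.writerow(row)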
With the import logic in a stored procedure, you can fire the import on demand:
EXECUTE PROCEDURE vacation_import();
You also mentioned the need to schedule the import, which can be accomplished with Informix's "dbcron" task scheduler. Using this feature, you can create a scheduled task that executes vacation_import() periodically. If you haven't used it before, the OpenAdmin Tool (OAT) will be helpful. You will also want to do some housekeeping with the CSV files; this can be addressed with the system() call, which you can make from stored procedures in Informix.
Some ideas:
Add a was_transferred column to the source tables, with a default value of 0 (you can use 0/1 instead of false/true).
From the source table, select only the rows with was_transferred = 0.
After transferring each selected row, update it, setting was_transferred to 1 (see the sketch after this list).
Make a syncro_info table with fields like date_start and date_stop. A record with date_stop IS NULL means a transfer is already in progress; this protects you against synchronizing the data twice.
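A sketch of that flow with generic DB-API connections (src and dst are assumed to be open connections to the source and destination databases; column names and the ? placeholder style are illustrative and depend on your driver):

def transfer_unsent_rows(src, dst):
    src_cur, dst_cur = src.cursor(), dst.cursor()
    src_cur.execute(
        "SELECT id, field1, field2 FROM vacation WHERE was_transferred = 0"
    )
    for row_id, field1, field2 in src_cur.fetchall():
        dst_cur.execute(
            "INSERT INTO vacation (id, field1, field2) VALUES (?, ?, ?)",
            (row_id, field1, field2),
        )
        # Mark the source row so the next run skips it.
        flag_cur = src.cursor()
        flag_cur.execute(
            "UPDATE vacation SET was_transferred = 1 WHERE id = ?", (row_id,)
        )
    dst.commit()
    src.commit()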

SQLite database schema version incrementing on disconnect/connect from SQLiteStudio

I use the sqlite database schema version.
PRAGMA schema_version;
It helps me control upgrades and prevents users from modifying the schema and then reporting a flood of irreproducible bugs.
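For reference, a minimal sketch of such a check (the expected version value is whatever your application recorded when it installed its schema):

import sqlite3

EXPECTED_SCHEMA_VERSION = 42  # assumed: recorded when the app's schema was installed

conn = sqlite3.connect("app.db")
(version,) = conn.execute("PRAGMA schema_version").fetchone()
if version != EXPECTED_SCHEMA_VERSION:
    # Schema was modified outside the application; warn or refuse to run.
    raise RuntimeError(f"schema_version is {version}, expected {EXPECTED_SCHEMA_VERSION}")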
However, I find the version increments far more often than I expect.
"It is incremented by SQLite whenever the database schema is modified (by creating or dropping a table or index)." http://www.sqlite.org/pragma.html#pragma_schema_version
In particular, it increments when I simply connect to and disconnect from SQLiteStudio, even though I do not change the schema in any way.
Is there any way to prevent this from happening (or at least to understand what is going on)?
