What are log files and why are they created during transactions in the Berkeley DB Core API (DB API)?

We are using the Berkeley DB Java Edition / Core API to read and write CDR files, and we are having a problem with log files.
When we write 9 lakh (900,000) records to the database, multiple log files are created with a huge total size, 1.08 GB. We want to know why multiple log files are created while using transactions. Is it due to every commit statement after writing data to the database, or is there some other reason?

This is normal. The log files contain ongoing transactions, as well as the information needed to recover the database (which means they are also suitable for backup and disaster recovery).
Read chapter 5 of the documentation carefully, as well as the section that explains the periodic maintenance you need to do on your database.
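To make that maintenance concrete, here is a minimal sketch using the C# bindings that ship with Berkeley DB core. The method names (Checkpoint, RemoveUnusedLogFiles) are my recollection of that API, so verify them against the docs for your release; the db_checkpoint and db_archive -d command-line utilities do the same job.
using BerkeleyDB;
public static class LogMaintenance
{
    public static void TrimLogs(DatabaseEnvironment env)
    {
        // A checkpoint flushes dirty pages and marks a point from which
        // normal recovery no longer needs the older log files.
        env.Checkpoint();
        // Remove log files no longer needed for normal recovery (what the
        // db_archive -d utility does). If you rely on the logs for catastrophic
        // recovery or incremental backups, copy them somewhere safe first.
        env.RemoveUnusedLogFiles();
    }
}
If this is never run (and automatic log removal isn't enabled), the log.* files simply accumulate, which is why the environment keeps growing.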

Related

What's the best way to archive SQLite records to a new file using EF Core

I have a .NET 6 application that writes logs out to a SQLite file each time a cycle is completed. I'm using EF Core.
The application sits on a Raspberry Pi with limited resources, so I want to keep the live database small, as the system slows down when it gets large. So I want to archive the logs and keep only the last 50 or so in the live database.
I am hoping to create a method that will create a new SQLite database file and then copy the oldest record over whenever a new log is created. I'd also want to limit how big the archive file is, maybe splitting out a new one once it reaches a certain size on disk.
I've had a look around and can't find any best practices or anything documented. Could someone give me a steer toward the best way to achieve this, or something similar?
I solved my issue by putting EF Core aside and instead using the sqlite-net-pcl NuGet package from SQLite-net.
This allowed me to separate the creation of my archive and also run additional commands not supported in EF Core, such as VACUUM.
I still use EF Core and LINQ to query the records that go into my archive, and then again to remove those records once the archive is created.
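In case it helps, here is roughly what that looks like with sqlite-net-pcl. This is only a sketch: the LogEntry class, its fields, and the 50-row cut-off are stand-ins for whatever your real schema uses.
using System;
using System.Linq;
using SQLite;   // sqlite-net-pcl

// Stand-in for the real log entity.
public class LogEntry
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string Message { get; set; }
}

public static class LogArchiver
{
    public static void ArchiveOldLogs(string livePath, string archivePath, int keep = 50)
    {
        var live = new SQLiteConnection(livePath);
        var archive = new SQLiteConnection(archivePath);
        archive.CreateTable<LogEntry>();

        // Everything except the newest `keep` rows moves to the archive file.
        var old = live.Table<LogEntry>()
                      .OrderByDescending(l => l.Timestamp)
                      .Skip(keep)
                      .ToList();

        archive.InsertAll(old);
        foreach (var entry in old)
            live.Delete<LogEntry>(entry.Id);

        // Reclaim the freed space in the live file; EF Core doesn't expose VACUUM.
        live.Execute("VACUUM");

        live.Close();
        archive.Close();
    }
}
Splitting the archive by size can then be as simple as checking new FileInfo(archivePath).Length and switching to a new file name once it passes your threshold.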

copying sqlite3 db while being read

I have a script that reads data from a sqlite3 database, and while this script was running I made a copy of the database with cp mydatabase mydatabase.bak. Will this affect either the script that was reading from the db or the copy of the db? I had a look at the SQLite documentation here [0], but I didn't put a lock on the db as per the instructions.
[0] http://www.sqlite.org/backup.html
Copying the file should be analogous to another application reading the database, so it shouldn't be a problem. Multiple applications can safely read the database file at the same time (per the SQLite FAQ).
As another point, consider that you can read from a database even if the database and its directory both lack write permissions. Since in that scenario there's no way for the reading application to be modifying the database file or creating a temp file that needs to be incorporated into it, there's no way for any of a number of simultaneously reading applications to affect what any of the others see.
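If you'd rather not rely on that, the online backup API described at [0] gives you a consistent copy even while the database is being written, and it is exposed in .NET as well. A small sketch with Microsoft.Data.Sqlite (the file names are just placeholders):
using Microsoft.Data.Sqlite;

class Backup
{
    static void Main()
    {
        // Copy mydatabase into mydatabase.bak via SQLite's online backup API,
        // which stays consistent even if another process is writing.
        using (var source = new SqliteConnection("Data Source=mydatabase"))
        using (var destination = new SqliteConnection("Data Source=mydatabase.bak"))
        {
            source.Open();
            destination.Open();
            source.BackupDatabase(destination);
        }
    }
}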

How log files are created in Berkeley DB Java Edition (DB base API)

We are using the Berkeley DB Java Edition DB base API. We have already read/written a CDR file of 9 lakh (900,000) rows, both with transactions and without transactions, implementing the secondary database concept. The issues we are getting are as follows:
With transactions: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
Without transactions: the size of the database environment is 588 MB, and only one log file of 10 MB is created. We want to know how this happens.
How are log files created, what does using or not using transactions in a DB environment mean, and what are these files __db.001, __db.002, __db.003, __db.004, __db.005 and the log files like log.0000000001...? Please reply soon.
It looks like this question was already answered here: What are log files and why are they created during transactions in the Berkeley DB Core API (DB API)?
From your description it actually looks like you're using Berkeley DB core, not Java Edition. __db.001 through __db.005 are the environment's shared region files. The environment files are described here. The log.* files are the transaction log files; they are described in the answer referenced above.
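To make the transactional vs. non-transactional distinction concrete, here is a minimal sketch of opening an environment with the C# bindings that ship with Berkeley DB core. The config property names are my recollection of that API and the path is a placeholder, so check them against your release's docs.
using BerkeleyDB;

// Opening the environment with logging and transactions enabled is what
// creates the __db.00* region files plus the log.* transaction log files.
var cfg = new DatabaseEnvironmentConfig {
    Create = true,
    UseMPool = true,
    UseLocking = true,
    UseLogging = true,
    UseTxns = true   // leave logging/transactions off for a non-transactional environment
};
DatabaseEnvironment env = DatabaseEnvironment.Open("/path/to/env", cfg);

// With transactions, every change is written to the log before the commit,
// so a bulk load of 900,000 rows generates many 10 MB log files.
Transaction txn = env.BeginTransaction();
// ... db.Put(key, data, txn) for each record ...
txn.Commit();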
These types of questions can often be more easily/quickly answered on the Berkeley DB forum on OTN.
Regards,
Dave

How to enable iCloud support for SQLite?

I want to provide iCloud support for my wrapper around SQLite. It is not using Core Data.
I wonder how to enable iCloud for it. The database content changes all the time (it is for invoicing). Also, if it is possible to have some kind of versioning, that would be great.
Is there any sample I can use to do this?
The short answer is no, you would need to use Core Data as you suspected. Apple has stated that sqlite is unsupported.
Edit: Check out the section on iCloud that's now in the iOS Application Programming Guide, under Using iCloud in Conjunction with Databases:
Using iCloud with a SQLite database is possible only if your app uses Core Data to manage that database. Accessing live database files in iCloud using the SQLite interfaces is not supported and will likely corrupt your database. However, you can create a Core Data store based on SQLite as long as you follow a few extra steps when setting up your Core Data structures. You can also continue to use other types of Core Data stores—that is, stores not based on SQLite—without any special modifications.
You can't just put the SQLite database in the iCloud container, because it might get corrupted. (As you modify an SQLite DB, temporary files are created and renamed, so if the sync process starts copying those files, you'll get a corrupt database.)
If you don't want to move to Core Data, you can do what Core Data does: store your database in your document folder, and store a transaction log in the iCloud container. Every time you change the database, you add those changes to a log file, so you can play them back and make equivalent changes on other devices.
This gets pretty complicated: aside from getting the log/replay logic right, you'll want to coalesce redundant changes and periodically collapse the log into a complete copy of the database.
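Stripped of those hard parts (tracking what each device has already applied, coalescing, collapsing), the core of the log/replay idea is just "apply locally, append to the shared log, replay what others appended". A sketch, shown here in C# with sqlite-net-pcl purely for illustration; every name is invented for the example.
using System.IO;
using SQLite;   // sqlite-net-pcl, standing in for the SQLite wrapper

public static class ChangeLog
{
    // Apply a change to the local database and append it to the log file
    // kept in the iCloud container (one SQL statement per line).
    public static void Record(SQLiteConnection db, string cloudLogPath, string sql)
    {
        db.Execute(sql);
        File.AppendAllLines(cloudLogPath, new[] { sql });
    }

    // Replay a log received from another device against the local database.
    // Real code must also remember which entries it has already applied.
    public static void Replay(SQLiteConnection db, string cloudLogPath)
    {
        foreach (var sql in File.ReadLines(cloudLogPath))
            db.Execute(sql);
    }
}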
You might have an easier time developing a solution if you can exploit knowledge of your application (Core Data has to solve the problem in the general case). For example, you could save invoices as separate files in the cloud container (text, property list, XML, JSON, whatever), writing them out as the database changes and only importing them when the system tells you they were created or changed.
In summary, your choice is either to migrate to Core Data or write a sync solution yourself. Which one is best depends on the particulars of your application.

How to import a CSV file (already uploaded to blob storage) in Azure (.NET)

I want to import a CSV file (already uploaded to blob storage) in Azure.
For example, I have uploaded test.csv to blob storage; now I just want to import that test.csv file in .NET (Azure), and after importing I will insert that data into an Azure database. I am using C#/.NET. Please suggest how I can achieve this. I want to follow the steps below:
Create a CSV file with all rows.
Upload it as a blob.
Parse it with a worker role and insert the data into the SQL Azure DB.
Thanks.
A bit more clarification around your question would be helpful. Are you trying to upload a file to Azure blob storage? Download it from there for your app to consume? What language(s) are you using?
There are plenty of examples of loading files into and pulling them from Azure blob storage using .NET, and at least a handful for doing it with Java or PHP.
If you can clarify what you're trying to do, I'd be happy to point you at the appropriate ones. :)
-- answer based on comment update --
The steps for retrieving the blob are fairly straightforward:
1) Create your Azure storage client using your Azure storage credentials.
Add the using clauses:
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
Get a client for accessing blob storage (create it from your storage account rather than passing the configuration setting name straight to CloudBlobClient):
CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("<nameofyourconfigsetting>");
CloudBlobClient tmpClient = account.CreateCloudBlobClient();
Get a reference to the blob you want to download:
CloudBlob myBlob = tmpClient.GetBlobReference("container/myblob.csv");
2) Read the blob and save it to a file:
myBlob.DownloadToFile("<path>/myblob.csv");
The save location can be the %temp% location, or if it's a large file you may want to allocate some local storage space and put it there. The other thing you want to keep in mind is that if you are doing this in a role instance, you'll need to make sure you have measures in place to prevent two instances from concurrently trying to process the same file. If the file is small enough, you can probably even keep it as a memory stream and process it that way. If this is the case, you can use the DownloadToStream method of the CloudBlob object.
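Putting the pieces together for the CSV case, a sketch along those lines (same old StorageClient API as above; the container name, blob name, and configuration setting are placeholders) might look like:
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class CsvImporter
{
    static void Import()
    {
        var account = CloudStorageAccount.FromConfigurationSetting("<nameofyourconfigsetting>");
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlob blob = client.GetBlobReference("container/test.csv");

        // Pull the blob into memory and parse it line by line.
        using (var stream = new MemoryStream())
        {
            blob.DownloadToStream(stream);
            stream.Position = 0;
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] fields = line.Split(',');
                    // ... insert the fields into your SQL Azure table here
                    //     (e.g. with a SqlCommand or SqlBulkCopy) ...
                }
            }
        }
    }
}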
For additional reading, I'd recommend checking out the MSDN library for the details on the StorageClient namespace and the CloudBlob class. Additionally, the Windows Azure Platform Training Kit has some good labs to help you get a better understanding of how Azure Storage works.
