Can I use a BDB (Berkeley DB) file created by the C implementation (Python bsddb) with Oracle Berkeley DB Java Edition? - compatibility

I have a Berkeley DB file (*.bdb) which was created by the C implementation (via the Python bsddb module). Is it possible to read this file with the pure Java implementation of Berkeley DB? I tried to read it using Berkeley DB Java Edition (JE), but could not: JE throws an exception saying that it could not detect the Berkeley database. Are Berkeley DB files not interoperable across different implementations? If so, why?
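For context, a file like this is typically produced with Python 2's bsddb module; a minimal sketch, with the filename and keys as illustrative placeholders:

```python
# Python 2.x - the bsddb module wraps the Berkeley DB C library,
# so the file it writes is in the C edition's native on-disk format.
import bsddb

db = bsddb.hashopen('example.bdb', 'c')  # 'c' = create the file if missing
db['key1'] = 'value1'
db.close()
```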

No.
According to the Berkeley DB Java Edition FAQ, Berkeley DB and Berkeley DB Java Edition are not compatible with one another because they use different on-disk file formats.

Note that there are three different products:
- Berkeley DB -- the C implementation
- Berkeley DB Java Edition
- Berkeley DB XML
See Wikipedia.
It is true that "Berkeley DB" and "Berkeley DB Java Edition" have different (i.e. incompatible) file formats. However, the "Berkeley DB" product does provide a Java API via JNI, so it is possible to access data files written by the C implementation from Java, just not with "Berkeley DB Java Edition".

I haven't researched the definitive answer, but I have had the same experience: a database created with Python's bsddb, and also accessible with the CLI utilities, is not detected at all by the Java Edition API. The reverse was also true.

Related

Oracle GoldenGate Classic installation location relative to the source DB

From reading the docs, which are not to the point IMHO, when wanting to extract from a source Oracle DB:
- do you always need to install OGG Classic on the same server as the source Oracle DB for the Extract?
- or can we move the archived log files to another machine with a script?
- or can the Extract work against an Oracle source DB that is on another server via tnsnames, LDAP, Oracle Names, etc.?
That is not clear to me from the docs.
Looking at this from a licensing-cost perspective on a big DB server. Sure, we can SharePlex to another machine.
All three scenarios are supported. Migrating log files from one machine to another is referred to as Downstream Capture and using GoldenGate on one machine to capture from the DBMS over the network is referred to as Remote Capture.
In addition, with Oracle GoldenGate 19.1 for Oracle, you can capture across operating systems. This means you can run GoldenGate on a Linux machine to capture data from your AIX DBMS environment.

How to test a Cassandra database using Robot Framework

I need to connect to a Cassandra database and run queries against it.
I want to know whether there is an existing database library for Cassandra in Robot Framework.
Short answer: no, there isn't one.
One of the active (and good) Cassandra drivers for Python is from a company called DataStax; here is its repo: https://github.com/datastax/python-driver. Keep in mind it has some peculiarities in getting installed and running on the various OSes.
But as it does not (regretfully) adhere to the Python Database API, you cannot just install it and use it straight away through RF's DatabaseLibrary.
You could/should create your own library wrapping the driver calls (which shouldn't be that hard...); a sketch follows.
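A minimal sketch of such a wrapper library, assuming the DataStax driver is installed (pip install cassandra-driver); the keyword names, hosts, and keyspace handling are illustrative, not a polished library:

```python
# CassandraLibrary.py - a minimal Robot Framework keyword library
# wrapping the DataStax python-driver (pip install cassandra-driver).
from cassandra.cluster import Cluster


class CassandraLibrary(object):
    """Exposes basic Cassandra operations as Robot Framework keywords."""

    def connect_to_cassandra(self, hosts='127.0.0.1', keyspace=None):
        """Opens a session against the given comma-separated contact points."""
        self._cluster = Cluster(hosts.split(','))
        self._session = self._cluster.connect(keyspace)

    def execute_cql(self, statement):
        """Runs a CQL statement and returns the result rows as a list."""
        return list(self._session.execute(statement))

    def disconnect_from_cassandra(self):
        """Closes all connections to the cluster."""
        self._cluster.shutdown()
```

In a suite you would then import it with Library CassandraLibrary.py and call keywords such as Connect To Cassandra and Execute Cql.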

Read Pervasive Database 9 without creating an ODBC DSN

I am writing an application in C# (.NET 4.0) which has to integrate with another, much older application. Part of the requirement is that my program must read data from three Btrieve files. I can assume that these Btrieve data files will already exist on the computers where my program is installed, and I can also assume that Pervasive PSQL V9 will also be installed and the relational and transactional service programs are running.
I have the associated DDF files, and I can install them as part of my application. Because of the way they were created, I have to put them in a different directory from the Btrieve data files (they have to be in a sub-directory of the directory where the data files are).
I didn't know anything about Pervasive or Btrieve when I started, but after a bit of experimentation I have got to the point where I can create a DSN using the 32-bit ODBC administration tool, and I can read from the data files using the ODBC ADO connector. All good so far.
My question is, is it possible to read from these files from my .NET program without having to create an ODBC DSN on the machine? In other words, is it possible to specify the directory where the *.DAT files are and the directory where the *.DDF files are in the ODBC connection string?
I'm not committed to using ODBC, I'm happy to use OLEDB or any other technology that allows me to reliably read from these files using .NET.
While a DSN-less connection allows you to connect without a DSN, you would still need a database name. Pervasive database names can be created on the fly using DTI or DTO. From C#, I would suggest DTO.
If you can't create a Database Name, you can use OLEDB. It supports using a path in the Data Source parameter of the connection string as documented in the Remote Connections section of the OLEDB documentation.
One more caveat, make sure to compile your .NET program as x86 and not AnyCPU. The Pervasive OLEDB provider is only 32 bit. If you install your app on a 64 bit Operating System compiled as AnyCPU, it will look for a 64 bit provider and fail.
You should search for "DSN-less connection". Instead of passing DSN=mydsn to the connect method (where mydsn is the DSN you set up), you pass DRIVER=xxx (where xxx is the name of the driver) and any other attributes needed to point it at the files. There are plenty of sites with lists of connection strings for different ODBC drivers, so one is bound to list Pervasive if you cannot locate the documentation for your ODBC driver. Another alternative is to look at your DSN in the registry, where you'll find the names of the attributes you need to specify. A sketch of the idea is below.
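As a concrete illustration of the DRIVER=... style, here is a sketch using Python's pyodbc for brevity (the same connection string can be used from .NET's System.Data.Odbc). The driver name, server, and database name are assumptions; verify them against the drivers listed in your 32-bit ODBC administrator and the Pervasive docs:

```python
# Sketch of a DSN-less ODBC connection to Pervasive PSQL.
# Driver name and attribute values below are assumptions - check the
# 32-bit ODBC administrator for the exact driver name on your machine.
import pyodbc

conn_str = (
    'Driver={Pervasive ODBC Client Interface};'  # assumed driver name
    'ServerName=localhost;'                      # machine running the PSQL engine
    'DBQ=MYDB;'                                  # named database (see the answer above)
)
conn = pyodbc.connect(conn_str)
for row in conn.cursor().execute('SELECT * FROM SomeTable'):  # illustrative table
    print(row)
conn.close()
```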

How to map one SQLite database to another?

I need to export the data from one SQLite database and import it into another SQLite database, but the two databases have different schemas.
Is there an open source tool that can help me with that job?
The only open source tool I know of is opendbcopy, which I'm using to migrate from one database server to another, and also for a similar kind of job to the one you want to do with SQLite (though I've done it with PostgreSQL).
opendbcopy is JDBC compliant and can connect to every database that has a JDBC driver, so you can give it a try; if the schemas are not the same, you can use its column mapping feature.
In addition, I also know of a good commercial alternative (which is easier to use): ESF Database Migration Toolkit.
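If a full migration tool is overkill, the same column-mapping idea can be scripted directly with Python's built-in sqlite3 module; a minimal sketch, where the file, table, and column names are illustrative placeholders:

```python
# Copy rows between two SQLite files whose schemas differ,
# remapping old columns onto the new table's columns.
import sqlite3

src = sqlite3.connect('old.db')  # illustrative source database
dst = sqlite3.connect('new.db')  # illustrative target database

rows = src.execute('SELECT name, addr FROM customers_v1')
dst.executemany(
    'INSERT INTO customers_v2 (full_name, address) VALUES (?, ?)',
    rows,
)

dst.commit()
src.close()
dst.close()
```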

How log files are created in the Berkeley DB Java Edition DB base API

We are using the Berkeley DB Java Edition DB base API. We have already read/written a CDR file of 9 lakh (900,000) rows, with transactions and without transactions, implementing the secondary database concept. The issues we are getting are as follows:
with transactions: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
without transactions: the size of the database environment is 588 MB, and here only one log file is created, which is 10 MB. So we want to know how this happens.
How are log files created, and what does it mean to use or not use transactions in a DB environment? And what are these DB files __db.001, __db.002, __db.003, __db.004, __db.005 and log files like log.0000000001...? Please reply soon.
It looks like this question was already answered here: what are log files and why they are created during transaction in berkeleydb core api (dbapi)?
From your description it actually looks like you're using Berkeley DB core, not Java Edition. __db.001 through __db.005 are the shared-region environment files; the environment files are described here. The log.* files are the transaction log files, which are described in the answer referenced above. A small sketch of what triggers their creation is below.
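To see the difference concretely, here is a hedged sketch using the Python bsddb3 binding to Berkeley DB core (directory and file names are illustrative); opening the environment with DB_INIT_LOG and DB_INIT_TXN is what makes the log.* files appear:

```python
# Sketch with the bsddb3 binding to Berkeley DB core (pip install bsddb3).
# Opening the environment with logging/transactions enabled is what creates
# the __db.00N region files and the log.0000000001-style transaction logs.
import os
from bsddb3 import db

home = '/tmp/bdb-env'  # illustrative environment directory
os.makedirs(home, exist_ok=True)

env = db.DBEnv()
env.open(
    home,
    db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
    | db.DB_INIT_LOG | db.DB_INIT_TXN,  # drop the last two flags for a
)                                       # non-transactional environment

txn = env.txn_begin()
d = db.DB(env)
d.open('cdr.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE, txn=txn)
d.put(b'key', b'value', txn=txn)  # the write goes to the transaction log first
txn.commit()
d.close()
env.close()
```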
These types of questions can often be answered more easily/quickly on the Berkeley DB forum on OTN.
Regards,
Dave
