How to use Cassandra with TDE (Transparent Data Encryption)

I'm trying to figure out how to use Cassandra with TDE (Transparent Data Encryption) and in which DataStax edition TDE is supported.
I've been going through DataStax documentation and from what I see, TDE is supported only in DataStax Enterprise Edition. Is this correct?
Also, is TDE applied at the table/column level and specified when creating new tables, rather than through some global configuration?
Just want to confirm my assumptions.
Thanks in advance

Your assumptions are correct.
Transparent Data Encryption is only supported in DataStax Enterprise (since version 3.2).
Transparent Data Encryption is specified when you create or alter a table:
ALTER TABLE users WITH compression = {
    'sstable_compression' : 'Encryptor',
    'cipher_algorithm' : 'AES/ECB/PKCS5Padding',
    'secret_key_strength' : 128,
    'chunk_length_kb' : 1
};
http://docs.datastax.com/en/datastax_enterprise/4.7/datastax_enterprise/sec/secTDEtblcrypt.html contains the latest documentation about Transparent Data Encryption in DSE 4.7
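For a new table, the same options go on the CREATE statement. A minimal sketch, reusing the Encryptor options from the ALTER above (the table and columns are hypothetical; check the DSE docs for your version's defaults):

CREATE TABLE users_enc (
    id uuid PRIMARY KEY,
    name text
) WITH compression = {
    'sstable_compression' : 'Encryptor',
    'cipher_algorithm' : 'AES/ECB/PKCS5Padding',
    'secret_key_strength' : 128,
    'chunk_length_kb' : 1
};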

Related

FireDAC SQLite Standard Encryption question

If I declare and create an SQLite database inside FDConnectionDefs.ini as follows:
[SQLITESAMPLE]
Database=sample.sdb
Password=masterkey
LockingMode=Normal
SharedCache=False
DriverID=SQLite
it should be encrypted with aes-256 as the standard setting, and FDSQLiteSecurity1.CheckEncryption does return aes-256.
Later, if I add the param Encrypt=aes-256 to that definition, my apps still work correctly.
But RAD Studio Data Explorer and FireDAC Explorer will only work with the setting
Encrypt=No (with aes-256 I get a corrupt-datafile message from these two apps).
If I define the Encrypt=aes-256 param from the beginning, all apps work correctly.
Is some other encryption mode used by default if I do not declare the encryption mode from the beginning?
The SQLite3 DB file is either encrypted, or not, from the beginning.
You will have to manually back up the file to convert it from one encryption state to another.
There is no "standard" free encryption on SQLite3.
Only a few variants:
FireDAC encryption
the closed-source SQLite Encryption Extension (SEE)
SQLiteCrypt (commercial)
SQLCipher
the wxSQLite3 variant
DISQLite3 (commercial, for Delphi)
SynSQLite3 (Delphi/FPC, open source)
and probably others... all incompatible!
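Given that, the safest pattern in FireDAC is to declare the cipher explicitly in the connection definition from the moment the database file is first created, so every tool sees the same encryption state. A minimal sketch, reusing the values from the question (the Encrypt value is an assumption based on the CheckEncryption result reported above):

[SQLITESAMPLE]
DriverID=SQLite
Database=sample.sdb
Password=masterkey
Encrypt=aes-256
LockingMode=Normal
SharedCache=False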

Kafka Connector for Oracle Database Source

I want to build a Kafka Connector in order to retrieve records from a database at near real time. My database is the Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 and the tables have millions of records. First of all, I would like to add the minimum load to my database using CDC. Secondly, I would like to retrieve records based on a LastUpdate field which has value after a certain date.
Searching the Confluent site, the only open-source connector I found was “Kafka Connect JDBC”. I think this connector has no real CDC mechanism, and that it isn't feasible to retrieve millions of records when the connector starts for the first time. The alternative solution I considered is Debezium, but there is no Debezium Oracle connector on the Confluent site, and I believe it is still in beta.
Which solution would you suggest? Is anything wrong with my assumptions about Kafka Connect JDBC or the Debezium connector? Is there any other solution?
For query-based CDC, which is less efficient, you can use the JDBC source connector.
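As a minimal sketch of that approach, here is a JDBC source connector configuration in timestamp mode, polling the LastUpdate column from the question. The connection details and table name are placeholders, and timestamp.initial (epoch milliseconds, supported in newer connector versions) is one way to start from a certain date instead of loading all history:

{
  "name": "oracle-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCL",
    "connection.user": "kafka_user",
    "connection.password": "secret",
    "mode": "timestamp",
    "timestamp.column.name": "LASTUPDATE",
    "timestamp.initial": 1546300800000,
    "table.whitelist": "MYSCHEMA.MYTABLE",
    "topic.prefix": "oracle-",
    "poll.interval.ms": "10000"
  }
}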
For log-based CDC I am aware of a couple of options; note that some of them require a license:
1) Attunity Replicate, which lets users build real-time data pipelines from source systems into Apache Kafka through a graphical interface, without any manual coding or scripting. I have been using Attunity Replicate for Oracle -> Kafka for a couple of years and have been very satisfied.
2) Oracle GoldenGate, which requires a license.
3) Oracle LogMiner, which does not require any license and is used by both Attunity and kafka-connect-oracle, a Kafka source connector that captures all row-based DML changes from an Oracle database and streams them to Kafka; its change data capture logic is based on Oracle LogMiner.
We have numerous customers using IBM's IIDR (InfoSphere Data Replication) product to replicate data from Oracle databases (as well as Z mainframe, iSeries, SQL Server, etc.) into Kafka.
Regardless of which source is used, data can be normalized into one of many formats in Kafka. An example of an included, selectable format is:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavrosinglerow.html
The solution is highly scalable and has been measured replicating changes at hundreds of thousands of rows per second.
We also have a proprietary ability to reconstitute data written in parallel to Kafka back into its original source order. So, despite data having been written to numerous partitions and topics, the original total order can be known. This functionality is known as the TCC (transactionally consistent consumer).
See the video and slides here...
https://kafka-summit.org/sessions/exactly-once-replication-database-kafka-cloud/

How to programmatically create a database in ADX using Java

I am using REST API (https://learn.microsoft.com/en-us/azure/kusto/api/rest/request) to interact with the database in ADX.
I want to create more databases in the same cluster. How should I do it using Java?
I am not using the Java SDK. I have relied on the REST APIs so far.
I think I cannot create a new database using that REST API, so I am looking for an alternative.
It would have been really helpful if there were a command like ".create table tablename", just for databases.
Clusters and databases can be managed using the "Control Plane", aka the ARM APIs. These APIs have libraries in different languages (as well as REST).
For instance, for the Java library use this link; for C#, use this link.
Example of how to create a database with the C# library (Java should be very similar):
var database = managementClient.Databases.CreateOrUpdate(
    resourceGroup, clusterName, databaseName,
    new Database(location, softDeletePeriod: softDeletePeriod, hotCachePeriod: hotCachePeriod));
Read more here
I think you'll need to use the Azure ARM REST API since the database is treated as a resource. From that point you can interact with it through the ADX APIs.
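Since the question sticks to plain REST, here is a rough Java sketch of that ARM call: an HTTP PUT on the database resource. Everything in angle brackets is a placeholder, the api-version and body schema are assumptions to verify against the Microsoft.Kusto ARM reference, and the bearer token must be acquired via Azure AD for https://management.azure.com:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateAdxDatabase {
    public static void main(String[] args) throws Exception {
        // Placeholders (assumptions), not real resources:
        String subscription = "<subscription-id>";
        String resourceGroup = "<resource-group>";
        String cluster = "<cluster-name>";
        String database = "<database-name>";
        // Bearer token for https://management.azure.com, acquired via Azure AD.
        String token = "<aad-bearer-token>";

        // ARM (control-plane) resource URL for an ADX database; the api-version
        // is an assumption -- check the Microsoft.Kusto ARM reference.
        String url = "https://management.azure.com/subscriptions/" + subscription
                + "/resourceGroups/" + resourceGroup
                + "/providers/Microsoft.Kusto/clusters/" + cluster
                + "/databases/" + database
                + "?api-version=2019-05-15";

        // Minimal body: location plus optional retention/cache settings.
        String body = "{ \"location\": \"West Europe\", \"properties\": {"
                + " \"softDeletePeriod\": \"P365D\", \"hotCachePeriod\": \"P31D\" } }";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}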

Decryption of Transparent Data Encryption (TDE) in an Oracle Database using weblogic datasource

I have implemented Transparent Data Encryption (TDE) in an Oracle database.
Can you please tell me how to decrypt the data when connecting through a WebLogic datasource?
Thanks in advance
Rahul, there is nothing to do ... your application will access the information through the database and will receive clear-text/decrypted data. The use case for Oracle TDE is that it protects data files from a database-bypass attack, where a SysAdmin with OS access could otherwise read data straight from clear-text data files.
Kind regards, Peter (PM at Oracle for TDE and Oracle Key Vault)
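To make "nothing to do" concrete, here is a minimal sketch of reading a TDE-protected column through a standard WebLogic datasource lookup, run inside a server-side component; the JNDI name, table, and column are hypothetical, and no decryption code appears because the database decrypts transparently:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ReadTdeColumn {
    public static void main(String[] args) throws Exception {
        // Hypothetical JNDI name of the WebLogic datasource; inside the
        // container no extra JNDI environment settings are needed.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyOracleDS");
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             // PAYMENTS.CARD_NUMBER is a hypothetical TDE-encrypted column; the
             // database decrypts it before the driver ever sees the bytes.
             ResultSet rs = st.executeQuery("SELECT card_number FROM payments")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // already clear text
            }
        }
    }
}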

Spring batch tables creation fails in MariaDB

I used this schema to create the Spring Batch tables in MariaDB: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-core/src/main/resources/org/springframework/batch/core/schema-mysql.sql
The BATCH_JOB_EXECUTION_PARAMS table fails with the error below:
Error: (conn=10719030) This table type requires a primary key
SQLState: 42000
ErrorCode: 1173
Add PRIMARY KEY (JOB_EXECUTION_ID, KEY_NAME) to BATCH_JOB_EXECUTION_PARAMS if that combination is unique.
BATCH_JOB_EXECUTION_SEQ also has no PK; the UNIQUE key could be promoted to be the PK. (Ditto for some other tables.) That particular table is rather weird -- it turns a 1-byte UNIQUE_KEY into an 8-byte id!?
BATCH_JOB_EXECUTION_PARAMS is a pretty awful variant of the classic EAV schema.
MySQL and MariaDB are different products, and it looks like they behave differently with regard to primary keys. You are using the MySQL DDL script against a MariaDB server, which is not officially supported by Spring Batch.
So either adapt the script accordingly (by adding the primary keys manually, as sketched below) and be aware that Spring Batch might not work as expected since it does not officially support MariaDB, or open a feature request in the project's JIRA to ask for MariaDB support.
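A minimal sketch of that adaptation, following the suggestions in the first answer; the key choices are assumptions that only hold if the column combinations are actually unique in your data, since the stock schema does not enforce them as primary keys:

-- Composite key suggested above; verify (JOB_EXECUTION_ID, KEY_NAME) is unique first.
ALTER TABLE BATCH_JOB_EXECUTION_PARAMS
    ADD PRIMARY KEY (JOB_EXECUTION_ID, KEY_NAME);

-- Promote the existing one-column UNIQUE key on the sequence table to a PK.
ALTER TABLE BATCH_JOB_EXECUTION_SEQ
    ADD PRIMARY KEY (UNIQUE_KEY);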
