Kafka Connector for Oracle Database Source - oracle11g

I want to build a Kafka connector in order to retrieve records from a database in near real time. My database is Oracle Database 11g Enterprise Edition Release 11.2.0.3.0, and the tables have millions of records. First of all, I would like to put the minimum possible load on my database, which is why I want to use CDC. Secondly, I would like to retrieve records based on a LastUpdate field whose value is after a certain date.
Searching the Confluent site, the only open-source connector I found was the “Kafka Connect JDBC” connector. I think this connector doesn’t have a CDC mechanism, and that it isn’t possible to retrieve millions of records when the connector starts for the first time. The alternative solution I thought of is Debezium, but there is no Debezium Oracle connector on the Confluent site, and I believe it is still in beta.
Which solution would you suggest? Is anything wrong with my assumptions about Kafka Connect JDBC or the Debezium connector? Is there any other solution?

For query-based CDC, which is less efficient, you can use the JDBC source connector.
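For example, a query-based setup keyed on the LastUpdate column from the question could be registered through the Kafka Connect REST API roughly like the sketch below (the hostnames, credentials, table name, and the assumption that Connect listens on localhost:8083 are all placeholders):

import requests  # assumes the Kafka Connect worker's REST API is reachable

# Rough sketch only: a JDBC source connector doing query-based CDC off a
# timestamp column. All connection details and names are placeholders.
connector = {
    "name": "oracle-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:oracle:thin:@//oracle-host:1521/ORCL",
        "connection.user": "kafka_user",
        "connection.password": "kafka_password",
        "table.whitelist": "MY_SCHEMA.MY_TABLE",
        "mode": "timestamp",                    # poll for rows with a newer timestamp
        "timestamp.column.name": "LASTUPDATE",  # the LastUpdate field from the question
        "topic.prefix": "oracle-",
        "poll.interval.ms": "10000"
    }
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())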
For log-based CDC, I am aware of a couple of options; however, some of them require a license:
1) Attunity Replicate, which allows users to use a graphical interface to create real-time data pipelines from producer systems into Apache Kafka, without having to do any manual coding or scripting. I have been using Attunity Replicate for Oracle -> Kafka for a couple of years and have been very satisfied.
2) Oracle GoldenGate, which requires a license.
3) Oracle LogMiner, which does not require any license and is used by both Attunity and kafka-connect-oracle, a Kafka source connector for capturing all row-based DML changes from an Oracle database and streaming these changes to Kafka. Its change data capture logic is based on the Oracle LogMiner solution.

We have numerous customers using IBM's IIDR (InfoSphere Data Replication) product to replicate data from Oracle databases (as well as IBM Z mainframes, IBM i, SQL Server, etc.) into Kafka.
Regardless of which source is used, data can be normalized into one of many formats in Kafka. An example of an included, selectable format is:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavrosinglerow.html
The solution is highly scalable and has been measured replicating changes at hundreds of thousands of rows per second.
We also have a proprietary ability to reconstitute data written in parallel to Kafka back into its original source order. So, despite data having been written to numerous partitions and topics, the original total order can be known. This functionality is known as the TCC (transactionally consistent consumer).
See the video and slides here...
https://kafka-summit.org/sessions/exactly-once-replication-database-kafka-cloud/

Related

Debezium MySQL schema_only snapshot mode semantics

We are using MariaDB with Debezium and dealing with upgrades. The process is to upgrade one DB host and then the next, etc. We're trying to minimize downtime and to avoid a snapshot of the data in the DB because it's quite large.
We could accept missing events/inconsistent snapshot of the data while we point the Debezium connector from the old DB to the new (upgraded) DB.
I'm seeking clarification of the language in the Debezium MySQL connector documentation, specifically this bit:
"schema_only - the connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started."
Does this mean that the connector will start, read the schema and then start producing data change events as they subsequently occur?
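For reference, a rough sketch of the kind of connector registration where this property would be set, via the Kafka Connect REST API (hostnames, credentials, and server names are placeholders, and exact property names can differ between Debezium versions):

import requests  # assumes the Kafka Connect worker's REST API is reachable

# Sketch only: a Debezium MySQL/MariaDB connector registered with
# snapshot.mode=schema_only. All connection details below are placeholders.
connector = {
    "name": "mariadb-debezium",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mariadb-host",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "debezium-password",
        "database.server.id": "184054",
        "database.server.name": "mydb",
        "database.include.list": "mydb",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.mydb",
        # Snapshot only the table structures, then stream changes from the
        # binlog position recorded at connector start.
        "snapshot.mode": "schema_only"
    }
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()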

How to access on premise Teradata from Azure Databricks

We need to connect to on-premises Teradata from Azure Databricks.
Is that possible at all?
If yes, please let me know how.
I was looking for this information as well and I recently was able to access our Teradata instance from Databricks. Here is how I was able to do it.
Step 1. Check your cloud connectivity.
%sh nc -vz 'jdbcHostname' 'jdbcPort'
- 'jdbcHostname' is your Teradata server.
- 'jdbcPort' is your Teradata server's listening port. By default, Teradata listens on TCP port 1025.
Also check out Databricks' best practices on connecting to another infrastructure.
Step 2. Install the Teradata JDBC driver.
The Teradata Downloads page provides JDBC drivers by version and archive type. You can also check the Teradata JDBC Driver Supported Platforms page to make sure you pick the right version of the driver.
Databricks offers multiple ways to install a JDBC library JAR for databases whose drivers are not available in Databricks. Please refer to the Databricks Libraries documentation to learn more and pick the option that is right for you.
Once installed, you should see it listed on the Cluster details page under the Libraries tab:
terajdbc4.jar  dbfs:/workspace/libs/terajdbc4.jar
Step 3. Connect to Teradata from Databricks.
You can define some variables to let us programmatically create these connections. Since my instance required LDAP, I added LOGMECH=LDAP to the URL. Without LOGMECH=LDAP, it returns a "username or password invalid" error message.
(Replace the placeholder values with the values from your environment.)
driver = "com.teradata.jdbc.TeraDriver"
url = "jdbc:teradata://Teradata_database_server/Database=Teradata_database_name,LOGMECH=LDAP"
table = "Teradata_schema.Teradata_tablename_or_viewname"
user = "your_username"
password = "your_password"
Now that the connection variables are specified, you can create a DataFrame. You can also explicitly set this to a particular schema if you have one already. Please refer to Spark SQL Guide for more information.
Now, let’s create a DataFrame in Python.
My_remote_table = spark.read.format("jdbc") \
  .option("driver", driver) \
  .option("url", url) \
  .option("dbtable", table) \
  .option("user", user) \
  .option("password", password) \
  .load()
Now that the DataFrame is created, it can be queried. For instance, you can select particular columns and display them within Databricks.
display(My_remote_table.select("EXAMPLE_COLUMN"))
Step 4. Create a temporary view or a permanent table.
My_remote_table.createOrReplaceTempView("YOUR_TEMP_VIEW_NAME")
or
My_remote_table.write.format("parquet").saveAsTable("MY_PERMANENT_TABLE_NAME")
Steps 3 and 4 can also be combined if the intention is simply to create a table in Databricks from Teradata, as shown in the sketch below. Check out the Databricks documentation SQL Databases Using JDBC for other options.
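For example, a combined read-and-save, using the same placeholder connection variables defined in Step 3, could look like this:

# Read from Teradata and persist the result directly as a Databricks table
# (driver, url, table, user, and password come from Step 3).
spark.read.format("jdbc") \
  .option("driver", driver) \
  .option("url", url) \
  .option("dbtable", table) \
  .option("user", user) \
  .option("password", password) \
  .load() \
  .write.format("parquet") \
  .saveAsTable("MY_PERMANENT_TABLE_NAME")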
Here is a link to the write-up I published on this topic.
Accessing Teradata from Databricks for Rapid Experimentation in Data Science and Analytics Projects
If you create a virtual network that can connect to your on-premises network, then you can deploy your Databricks instance into that VNet. See https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html.
I assume that there is a Spark connector for Teradata. I haven't used it myself, but I'm sure one exists.
You can't. If you run Azure Databricks, all the data needs to be stored in Azure. But you can call the data from Teradata using a REST API and then save the data in Azure.

Is it possible to combine two different database utilities, like Teradata's FastExport and Greenplum's gpload?

Usually, I use a JDBC connection with some ETL tool to move data from one database (e.g. Teradata) to another database (e.g. Greenplum).
However, both of these databases come with built-in utilities which can load/export huge amounts of data very fast, far faster than JDBC! But the downside, as far as I am aware, is that they can only do so to/from a file.
So, if I want to use them, I have to follow a process like:
Teradata ---(FastExport)---> File ---(gpload)---> Greenplum
I am wondering if it is possible to skip the file part and combine the two utilities:
Teradata ---(FastExport & gpload)---> Greenplum
That way I can transfer huge amounts of data very quickly!
Yes, you most certainly can. Greenplum supports all kinds of external tables. One solution is to use an External Table that executes a command. That command can be a Java program that connects to Teradata to get data and uses the FastExport option.
I wrote the tool "gplink" to do just this. It automates the creation of Greenplum External Tables for JDBC sources.
Github:
https://github.com/pivotalguru/gplink
Teradata connection example:
https://github.com/pivotalguru/gplink/blob/master/connections/teradata.properties
And my blog:
http://www.pivotalguru.com/?page_id=982
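As a rough illustration of the "external table that executes a command" approach described above, the executed command can be any program that writes rows to stdout. Here is a minimal Python sketch using the teradatasql driver (the hostname, credentials, and query are placeholders, and it uses a plain query rather than FastExport):

# td_to_stdout.py - stream rows from Teradata to stdout as CSV so a Greenplum
# external web table defined with EXECUTE can read them.
# Host, credentials, and the query below are placeholders.
import csv
import sys
import teradatasql  # Teradata SQL Driver for Python (assumed installed)

con = teradatasql.connect(host="teradata-host", user="td_user", password="td_password")
cur = con.cursor()
cur.execute("SELECT * FROM my_schema.my_table")

writer = csv.writer(sys.stdout)
rows = cur.fetchmany(10000)
while rows:
    writer.writerows(rows)
    rows = cur.fetchmany(10000)

cur.close()
con.close()

gplink automates this kind of wiring for JDBC sources, including generating the external table definitions, so you don't have to maintain such scripts by hand.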

How to validate data in Teradata from Oracle

My source data is in Oracle and my target data is in Teradata. Can you please provide an easy and quick way to validate the data? There are 900 tables. If possible, can you provide syntax too?
There is a product available known as the Teradata Gateway that works with Oracle and allows you to access Teradata in a "heterogeneous" manner. This may not be the most effective way to compare the data, though.
Ultimately, your requirements sound more process-driven; to do this effectively, the source data would need to be compared/validated as stage tables in the Teradata environment after your ETL/ELT process has completed.
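If a quick first pass across the 900 tables is enough, a hedged sketch of a row-count comparison driven from Python could look like the following (the cx_Oracle and teradatasql drivers, connection details, and table list are all assumptions; a real validation would also need checksums or column-level comparisons):

# Quick first-pass validation: compare row counts for each table in Oracle vs Teradata.
# Connection details and the table list are placeholders.
import cx_Oracle
import teradatasql

tables = ["SCHEMA1.TABLE_A", "SCHEMA1.TABLE_B"]  # extend to all 900 tables

ora = cx_Oracle.connect("ora_user", "ora_password", "oracle-host:1521/ORCL")
td = teradatasql.connect(host="teradata-host", user="td_user", password="td_password")
ora_cur = ora.cursor()
td_cur = td.cursor()

for table in tables:
    ora_cur.execute(f"SELECT COUNT(*) FROM {table}")
    td_cur.execute(f"SELECT COUNT(*) FROM {table}")
    ora_count = ora_cur.fetchone()[0]
    td_count = td_cur.fetchone()[0]
    status = "OK" if ora_count == td_count else "MISMATCH"
    print(f"{table}: oracle={ora_count} teradata={td_count} {status}")

ora.close()
td.close()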

Balance reads in a MongoDB replica set with rmongodb

I have a MongoDB replica set with one master and one slave. I am using rmongodb and I want to explicitly send a query to each machine using a parallelized for loop.
I successfully created a connection with all the hosts:
mongo <- mongo.create(host=c("mastermng01:27001", "slavemng01:27001"),
                      name="myRS",
                      username="user",
                      password="pass",
                      db="myDB")
ns_actual <- "myDB.MyCollection"
Then, I run a query like this:
cursor <- mongo.find(mongo, ns=ns_actual, query=list(var1="value"),
                     options=mongo.find.slave.ok)
So far, R knows the slave hosts and is allowed to query them. But when is it going to do so? Can I force R to balance the queries among the hosts?
Sorry, there is no solution so far. The underlying C connector does not support this functionality. There is a newer mongo C library available which supports it, but moving rmongodb to that library would take a lot of time, which is currently not available.
