Transfer huge data from MariaDB to SSMS when there is a data change in MariaDB

I tried using a scheduler for a Databricks notebook, but it causes unnecessary loading of data. The data in MariaDB changes at random times, not on a fixed schedule, so with a plain pipeline I cannot fire a trigger on a data change and transfer the data from one database to the other.
Please help me with any pipeline ideas, Azure Data Factory ideas, or Python code so that I can transfer tables whenever there are changes in MariaDB.

One way to trigger the pipeline is to use an event-based trigger.
To create an event-based trigger in Azure Data Factory, create a trigger and select Custom events as the Type.
Refer to https://www.mssqltips.com/sqlservertip/6063/create-event-based-trigger-in-azure-data-factory/
The second way is to use a Logic App. This is the best approach for your query.
Refer to this answer by Trent Tamura
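If neither an event-based trigger nor a Logic App fits, the "Python code" part of the question can also be covered with a small watermark-based poller: it checks MariaDB for rows changed since the last run and copies only those rows into SQL Server. This is only a sketch under assumptions not stated in the question (each table has an updated_at column; table names, column names, and connection strings are placeholders), using the pymysql and pyodbc drivers.

```python
# Minimal change-detection sketch. Assumes an updated_at timestamp column on
# the source table; all names and credentials below are placeholders.
import time
import pymysql   # pip install pymysql
import pyodbc    # pip install pyodbc

POLL_SECONDS = 300
last_seen = "1970-01-01 00:00:00"   # high-water mark of already-copied changes

def fetch_changed_rows(since):
    conn = pymysql.connect(host="mariadb-host", user="user",
                           password="secret", database="sourcedb")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, col1, col2, updated_at FROM source_table "
                "WHERE updated_at > %s ORDER BY updated_at", (since,))
            return cur.fetchall()
    finally:
        conn.close()

def upsert_into_sql_server(rows):
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql-host;"
        "DATABASE=targetdb;UID=user;PWD=secret")
    try:
        cur = conn.cursor()
        for r in rows:
            # naive delete + insert per key; a MERGE would be better for big loads
            cur.execute("DELETE FROM target_table WHERE id = ?", r[0])
            cur.execute("INSERT INTO target_table (id, col1, col2) VALUES (?, ?, ?)",
                        r[0], r[1], r[2])
        conn.commit()
    finally:
        conn.close()

while True:
    changed = fetch_changed_rows(last_seen)
    if changed:
        upsert_into_sql_server(changed)
        last_seen = max(str(r[3]) for r in changed)  # advance the watermark
    time.sleep(POLL_SECONDS)
```

Polling is not truly event-driven, but with a sensible interval it avoids reloading whole tables the way a fixed Databricks schedule does, because only rows past the watermark are moved.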

Related

Azure Synapse replicated to Cosmos DB?

We have an Azure data warehouse DB (Azure Synapse) that will need to be consumed by read-only users around the world, and we would like to replicate the needed objects from the data warehouse, potentially to a Cosmos DB. Is this possible, and if so, what are the available options (transactional, merge, etc.)?
Synapse is mainly about getting your data in for analysis. I don't think it has a direct export option of the kind you have described above.
However, what you can do is use Azure Stream Analytics, and then you should be able to integrate/stream whatever you want to any destination you need, such as an app or a database and so on.
More details here: https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics
I think you can also pull the data into Power BI, and perhaps set up some kind of automatic export from there.
More details here: https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-get-started-visualize-with-power-bi

Best practice for bulk update in DocumentDB

We have a scenario where we need to populate the collection every hour with the latest data whenever we receive a data file in blob storage from external sources, and at the same time we do not want to impact live users while updating the collection.
So, we have done the following:
Created 2 databases, with collection 1 in both databases.
Created another collection in a separate (configuration) database with Active and Passive properties, which hold Database1 and Database2 as their values.
Now, our web job runs every time it sees the file in blob storage, checks this configuration database to identify which database is active and which is passive, processes the XML file, and updates the collection in the passive database, as that one is not used by the live feed. Once it is done, it updates the configuration so the active and passive roles are swapped.
Our service always checks which database is active and which is passive and fetches the data accordingly to show to the user.
As we have to delete the data and insert the new data in the web job, we wanted to know: is this the best design we could have come up with? Does deleting and inserting the data cost anything? Is there a better way to do bulk deletes and inserts, as we are doing them sequentially now?
Is this the best design we have come up with?
As David Makogon said, with your solution you need to manage and pay for multiple databases. If possible, you could create the new documents in the same collection and control which documents are active in your program logic.
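As a rough illustration of that single-collection idea, the sketch below keeps a small configuration document that records which data version is live; the web job writes the fresh documents under the passive version tag and then flips the pointer. The container, document, and field names are all made up for the example, and it uses the azure-cosmos Python SDK rather than a .NET WebJob.

```python
# Hedged sketch: one collection holds both data versions plus a "config"
# document that says which version is live. All names are illustrative.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("items")

def load_new_version(docs):
    config = container.read_item(item="config", partition_key="config")
    new_version = "B" if config["active"] == "A" else "A"

    # write the fresh data under the currently passive version tag
    for doc in docs:
        doc["version"] = new_version
        container.upsert_item(doc)

    # flip the pointer so readers start using the new version
    config["active"] = new_version
    container.upsert_item(config)

def read_active_items():
    config = container.read_item(item="config", partition_key="config")
    return container.query_items(
        query="SELECT * FROM c WHERE c.version = @v",
        parameters=[{"name": "@v", "value": config["active"]}],
        enable_cross_partition_query=True)
```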
Does deleting and inserting the data cost anything?
Each operation/request consumes request units, which are charged. For details on Request Units and DocumentDB pricing, please refer to:
What is a Request Unit
DocumentDB pricing details
Is there a better way to do bulk deletes and inserts, as we are doing them sequentially now?
A stored procedure provides a way to group operations like inserts and submit them in bulk. You could create the stored procedure and then execute it from your WebJobs function.
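The WebJob itself is presumably .NET, but to keep the examples in one language, here is roughly what calling such a stored procedure looks like from the azure-cosmos Python SDK. The procedure name "spBulkInsert" is hypothetical; it is assumed to be registered on the collection and to loop over the passed array, creating each document server-side.

```python
# Hedged sketch of invoking a hypothetical bulk-insert stored procedure.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("items")

def bulk_insert(docs, partition_value):
    # a stored procedure executes inside a single partition, so every document
    # passed in one call must share the same partition key value
    container.scripts.execute_stored_procedure(
        sproc="spBulkInsert",
        partition_key=partition_value,
        params=[docs])
```

Batching the writes this way avoids one network round trip per document, which is the main cost of doing the deletes and inserts sequentially from the web job.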

How to validate data in Teradata from Oracle

My source data is in Oracle and the target data is in Teradata. Can you please provide an easy and quick way to validate the data? There are 900 tables. If possible, can you provide syntax too?
There is a product available known as the Teradata Gateway that works with Oracle and allows you to access Teradata in a "heterogeneous" manner. This may not be the most effective way to compare the data.
Ultimately, your requirements sound more process driven; to do this effectively, the source data would need to be compared/validated against stage tables in the Teradata environment after your ETL/ELT process has completed.
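For a quick first pass over 900 tables, one common approach is simply to compare row counts between source and target and flag the mismatches for deeper checking. The sketch below assumes the table names match on both sides; schemas, credentials, and the table list are placeholders, and it uses the oracledb and teradatasql drivers.

```python
# Rough row-count comparison between Oracle (source) and Teradata (target).
# Table names, schemas, and credentials are placeholders.
import oracledb      # pip install oracledb
import teradatasql   # pip install teradatasql

TABLES = ["CUSTOMERS", "ORDERS"]  # in practice, the full list of 900 tables

ora = oracledb.connect(user="src_user", password="secret", dsn="orahost/ORCLPDB1")
td = teradatasql.connect(host="tdhost", user="tgt_user", password="secret")

def row_count(conn, schema, table):
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {schema}.{table}")
        return cur.fetchone()[0]

for table in TABLES:
    src = row_count(ora, "SRC_SCHEMA", table)
    tgt = row_count(td, "TGT_DB", table)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: oracle={src} teradata={tgt} {status}")

ora.close()
td.close()
```

Row counts only catch missing or extra rows; validating the contents would still need per-column checksums or MINUS-style comparisons against the staging tables, as suggested above.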

How to make sure that whenever the application runs it first fetches data from the database and then from Coherence?

I am using a QueryHint to put a JPA entity into Coherence and then updating it, which in turn updates the value in both Coherence and the database.
However, when I run the application again, my first fetch occurs directly from Coherence; hence any update made directly on the database is not reflected in it.
How can I make sure that whenever the application runs it first fetches data from the database and then from Coherence?
If you need updates made directly to the database to automatically show up in Coherence, then look at the Oracle Coherence "Hot-Cache" feature: http://docs.oracle.com/middleware/1212/coherence/COHIG/golden_g.htm
If you are using JPA and you just want to keep the data from getting out of sync, look at TopLink Grid: http://docs.oracle.com/middleware/1212/coherence/COHIG/tlg_integrate.htm

Can I solve this using an Oracle DB listener?

I'll try to be clear: I am out of ideas on this problem, even though it sounds like a classic one.
My application is running on the WebLogic 10.3.3 application server, and for the database I am using Oracle Database 11g. My problem is that there is a table in the DB, let's say "user", and a column in this table, let's say "columnA". This table is updated by some module of the application.
What I want is: when the value of the column is "abc", I have to show an alert on a console (identified by IP). (The IP can be retrieved from the DB, as it is configured there; this IP belongs to a Linux system other than the machine where the Oracle database is installed.) Updates are made to my table continuously by the application module. Please tell me where I should start and what I should read; I cannot work out what the approach should be. Any help is much appreciated.
Can you provide me any beginner's link for the Oracle DB listener?
You probably want to look at setting up a trigger in the database:
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/triggers.htm
An alternative to a trigger would be to log update queries against the table (to a log file) and have a process monitor the log, sending out alerts when something happens.
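As a concrete variant of that second suggestion, the trigger could write a row into a small alert table whenever columnA becomes "abc", and a separate process outside the database could poll that table and push a message to the console machine. The sketch below is in Python, polls a hypothetical ALERT_LOG table with python-oracledb, and sends a plain TCP message to the configured IP; the table, columns, and port are made up for illustration.

```python
# Hedged sketch: poll a hypothetical ALERT_LOG table (filled by a trigger on
# "user" when columnA = 'abc') and forward each new alert to the console IP.
import socket
import time
import oracledb  # pip install oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/ORCL")
last_id = 0

def send_alert(ip, message):
    # simple TCP push; the console machine would need a listener on this port
    with socket.create_connection((ip, 9000), timeout=5) as sock:
        sock.sendall(message.encode("utf-8"))

while True:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT alert_id, console_ip, message FROM alert_log "
            "WHERE alert_id > :last ORDER BY alert_id", last=last_id)
        for alert_id, console_ip, message in cur:
            send_alert(console_ip, message)
            last_id = alert_id
    time.sleep(10)
```

Keeping the network call outside the trigger (rather than using UTL_TCP inside it) means a slow or unreachable console never blocks the application's updates to the table.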
