Good afternoon,
We use the ODBC connector to access our Teradata system (TTU 15), and we are looking for a simple way to select the schema directly in the ODBC connection when doing a "Get Data" from Power BI Desktop.
Why? Because the people using Power BI have no SQL knowledge, and it takes more than a minute to display the list of databases.
We know it's possible to do this in the SQL query itself (see the sketch below), but we are looking for a way to do it in the connection instead.
We've tried using DATABASE/DB/SCHEMA, but no luck: each time, Power BI Desktop displays the entire list of databases.
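For reference, the SQL-query workaround we already know about looks like this (schema and table names are placeholders); fully qualifying the name avoids browsing the database list, but requires the user to write SQL:

-- Qualify the schema directly in the query (names are placeholders):
SELECT *
FROM MySchema.MyTable;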
Any ideas or tricks?
Related
I want to build a Kafka connector to retrieve records from a database in near real time. My database is Oracle Database 11g Enterprise Edition Release 11.2.0.3.0, and the tables have millions of records. First of all, I would like to put the minimum load on my database by using CDC. Secondly, I would like to retrieve records based on a LastUpdate field whose value is after a certain date.
Searching the Confluent site, the only open source connector I found was the Kafka Connect JDBC connector. I think this connector doesn't have a CDC mechanism, and it isn't possible to retrieve millions of records when the connector starts for the first time. The alternative I thought of is Debezium, but there is no Debezium Oracle connector on the Confluent site, and I believe it is still in beta.
Which solution would you suggest? Is anything wrong with my assumptions about Kafka Connect JDBC or the Debezium connector? Is there any other solution?
For query-based CDC, which is less efficient, you can use the JDBC source connector.
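To make the trade-off concrete, here is a hedged sketch of the kind of incremental query the JDBC source connector effectively issues on each poll (the table name is a placeholder; the LastUpdate column comes from your question, and the bind value stands for whatever high-water mark the connector last saw):

-- Query-based CDC: re-poll for rows changed since the last observed value.
-- Rows deleted in the source are never seen, which is one reason this
-- approach is less efficient and less complete than log-based CDC.
SELECT *
FROM my_table
WHERE LastUpdate > :last_seen_update
ORDER BY LastUpdate;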
For log-based CDC I am aware of a couple of options; note, however, that some of them require a license:
1) Attunity Replicate, which lets users build real-time data pipelines from producer systems into Apache Kafka through a graphical interface, without any manual coding or scripting. I used Attunity Replicate for Oracle -> Kafka for a couple of years and was very satisfied.
2) Oracle GoldenGate, which requires a license.
3) Oracle LogMiner, which does not require any license and is used by both Attunity and kafka-connect-oracle, a Kafka source connector that captures all row-based DML changes from an Oracle database and streams them to Kafka; its change data capture logic is based on Oracle LogMiner (see the sketch below).
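For a feel of what LogMiner exposes, here is a minimal sketch of a LogMiner session run as a privileged user in SQL*Plus (the redo log path is a placeholder; real deployments usually mine archived logs continuously rather than one file at a time):

-- Register a redo log and start LogMiner using the online catalog:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/oradata/redo01.log', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Each captured DML change appears as a row, with its redo SQL reconstructed:
SELECT scn, operation, table_name, sql_redo
FROM V$LOGMNR_CONTENTS;

-- End the session when done:
EXECUTE DBMS_LOGMNR.END_LOGMNR;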
We have numerous customers using IBM's IIDR (InfoSphere Data Replication) product to replicate data from Oracle databases (as well as z mainframe, iSeries, SQL Server, etc.) into Kafka.
Regardless of which source is used, data can be normalized into one of many formats in Kafka. An example of an included, selectable format is:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavrosinglerow.html
The solution is highly scalable and has been measured to replicate changes at hundreds of thousands of rows per second.
We also have a proprietary ability to reconstitute data written in parallel to Kafka back into its original source order. So, despite data having been written to numerous partitions and topics, the original total order can be known. This functionality is known as the TCC (transactionally consistent consumer).
See the video and slides here...
https://kafka-summit.org/sessions/exactly-once-replication-database-kafka-cloud/
I am trying to resolve some connectivity issues between Business Objects and a Progress OpenEdge database.
I am trying to find a system table (or tables) that can tell me what is running on the OpenEdge database. I only have ODBC access to it.
Special bonus points if the running sql can be returned!
Thanks in advance....
It sounds like you want the "client statement cache".
This is available in 10.1C and higher. Once enabled for a session, it tracks database access statements (SQL queries for SQL connections, or a 4GL stack trace for 4GL connections) as they occur. It does not keep a history -- only the most recent statement is available.
I am a 4GL guy, so you will have to excuse my SQL ineptitude, but you can use SQL connections to fiddle with the system tables.
The _Connect VST is what you are looking for. For best results use the _Connect-Id key, which is "off by one" from the Usr# (Id fields on VST tables are indexed; no other fields are).
If you have access to the server, you can enable the client statement cache via PROMON. Select the "R&D" menu, then option 1, then option 18. Choose "1-Single" for SQL connections.
If you want to code it with SQL you need to muck about with the _Connect._Connect-CachingType and _Connect._Connect-CacheInfo[1] fields.
_Connect-CachingType = 1 will give you the most recent SQL statement (or 4GL statement if it is a 4GL connection).
_Connect-CacheInfo is an array; element 1 is the only element with anything in it for a SQL connection. (4GL connections may have a procedure stack trace.)
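Here is a rough sketch of that over a SQL-92 connection; my SQL ineptitude applies, so treat the PUB schema qualification, the quoting of the hyphenated names, and the array access as assumptions to verify against your OE version:

-- Enable single-statement caching for one connection (the user number is a placeholder):
UPDATE PUB."_Connect"
SET "_Connect-CachingType" = 1
WHERE "_Connect-Usr" = 42;

-- Read back the most recent statement (element 1 is the only populated
-- element for SQL connections):
SELECT "_Connect-Usr", "_Connect-CacheInfo"
FROM PUB."_Connect";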
OE Databases have what is termed a "Statement Cache."
There's a KB on the technology here, and a discussion on accessing the cache information via the database's VST tables here.
My source data is in Oracle and my target data is in Teradata. Can you please suggest an easy and quick way to validate the data? There are 900 tables. If possible, please provide syntax too.
There is a product available known as the Teradata Gateway that works with Oracle and allows you to access Teradata in a "heterogeneous" manner. This may not be the most effective way to compare the data.
Ultimately, your requirements sound more process-driven; to do this effectively, the source data should be compared/validated in stage tables on the Teradata environment after your ETL/ELT process has completed.
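If the Oracle data is landed in stage tables on Teradata first, a hedged per-table pattern looks like this (database and table names are placeholders; with 900 tables you would generate these statements rather than hand-write them):

-- Rows present in the staged Oracle copy but missing or different in the target:
SELECT * FROM stage_db.customer
MINUS
SELECT * FROM target_db.customer;

-- A cheaper first pass: compare row counts per table.
SELECT s.row_cnt AS stage_rows, t.row_cnt AS target_rows
FROM (SELECT COUNT(*) AS row_cnt FROM stage_db.customer) s
CROSS JOIN (SELECT COUNT(*) AS row_cnt FROM target_db.customer) t;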
How do I synchronize local database values to a server (hosted) database in SQL Server 2008 R2? For example: our clients each have a PC, and when they enter an entry it is inserted locally, but we also want to keep a backup on the hosting server (an IBM server). When a client has an internet connection, a single button click should transfer the data to our own server database as well.
Is there such an option? Please let me know.
Thank you in advance!
I suggest using Redgate's SQL Data Compare tool to synchronize data from one database to another.
You can also use a query such as the following to find the records that differ between the two databases and sync them:
USE Database1;
-- Insert rows that exist in Database2 but are missing from Database1:
INSERT INTO Schema1.Table1 (Columns)
SELECT t1.Columns
FROM Database2.Schema2.Table2 t1
LEFT JOIN Schema1.Table1 t2 ON t1.KeyColumn = t2.KeyColumn
WHERE t2.KeyColumn IS NULL;
RedGate will be costly for you. You can use an open source database compare-and-sync tool instead:
Open DBDiff
I have used it to compare my live databases with a local database, and I have not found any issues so far.
Hope it helps.
Open DBDiff is an open source database schema comparison tool for SQL Server 2005/2008.
It reports differences between two database schemas and provides a synchronization script to upgrade a database from one to the other.
I have an ASP.NET app that uses a SQL Server database. I now need to pull data from Sybase ASE into that SQL Server database for my app to consume, and I'm not having any success with my ideas.
Has anyone done this? Any ideas/suggestions/tips?
You can configure a linked server from SQL Server to Sybase. It should be fairly vanilla using the Sybase provider on the MS side.
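Once the linked server is set up, querying it is just four-part naming; for example (the linked server and object names here are placeholders):

-- Distributed query through the linked server:
SELECT *
FROM SYBASE_LINK.mydb.dbo.customer;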
Okay, I've finally (through lame trial and error) found out how to link my Sybase ASE (12.5) server to my SQL Server (2008), which allows the integration I want. Here's roughly how I did it:
Logged in to Sybase ASE OLE DB Configuration Manager (this is like the Sybase version of Windows' ODBC Data Sources) and added an OLE DB data source. I believe you must be an admin on the PC to do this.
In SQL Server 2008 Management Studio, went to Server Objects > Linked Servers. Right click and select "New Linked Server".
In the Linked Server Properties, I set the following properties:
General:
--Linked server: the name of your linked server as you want it to appear in your linked server list
--Provider: Select Sybase ASE OLE DB Provider from the dropdown list.
--Product name: The exact name of the OLE DB data source you just created in Sybase ASE OLE DB Configuration Manager.
--Data source: Same as Product name.
--Provider string: I left this blank
--Location: I left this blank
--Catalog: The default database (master or whatever) to log on to.
Security:
--You need to map a valid SQL Server logon to a valid Sybase logon. I did not use impersonation (which does a credentials pass-thru).
--I chose "Be made without using a security context" for my connection.
Server Options:
--All the defaults worked for me.
Throughout, the standard SQL Server help worked fairly well as a guide. Though that's not always the case, F1 was my friend here.
I can now do distributed queries, DTS or SSIS packages, and use SSRS. This takes a lot of the suck out of Sybase ASE.
Of course, the above can also be scripted using sp_addlinkedserver (see the sketch below), but the GUI is more comfortable for a lowly dev like me.
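For completeness, a hedged sketch of that scripted route; the names are placeholders, and the provider PROGID for the Sybase ASE OLE DB provider is an assumption on my part, so check what your installation actually registers:

-- Create the linked server (the T-SQL equivalent of the GUI steps above):
EXEC sp_addlinkedserver
    @server     = N'SYBASE_LINK',
    @srvproduct = N'MySybaseDataSource',    -- the exact OLE DB data source name
    @provider   = N'ASEOLEDB',              -- assumed PROGID of the Sybase provider
    @datasrc    = N'MySybaseDataSource';

-- Map logins without a pass-through security context:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'SYBASE_LINK',
    @useself     = 'FALSE',
    @locallogin  = NULL,                    -- applies to all local logins
    @rmtuser     = N'sybase_login',
    @rmtpassword = N'********';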
Use Management Studio or Enterprise Manager to import the data using the data import wizard. That should be it; just make sure you pick the right data provider in the wizard and you should be good to go.
If you want this to be a live feed, create a small Windows service to manage the exchange of information. It should be relatively simple to do, just a little bit of legwork on your end. If you are averse to that, there are plenty of off-the-shelf solutions that can do this for you.
The question is a little vague on specifics:
Is this a one-time conversion or part of a repeated process?
Is the source machine "reachable" from your destination machine (can you connect the two, or do you need to read in files)?
With most conversions there are two parts:
Physically getting data from the source into the destination.
Mapping data from the source to the destination tables.
It is hard to make any recommendations without more info. What would be fine for a one-time conversion would not work if you need to read in data all day, every day. Also, if the source database cannot be connected to and you have to pass files, the methods change.