WSO2 DSS - how to display all data of all tables - wso2-data-services-server

In WSO2 DSS, how can I extract all data from my SQL Server database without writing a SELECT for each table?

You have to write the SQL SELECT statements in order to expose your datasource as a service. However, to make the process easier you can use the generate feature, which generates the data service for you.
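For reference, a data service is just an XML (.dbs) descriptor in which each operation maps to a query. Below is a minimal sketch of what a generated service might look like; the datasource properties, table, and column names are hypothetical.

```xml
<data name="SampleDataService">
  <config id="default">
    <property name="driverClassName">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
    <property name="url">jdbc:sqlserver://localhost:1433;databaseName=MyDb</property>
    <property name="username">dss_user</property>
    <property name="password">dss_password</property>
  </config>
  <!-- One query per table; the generate feature creates these for you -->
  <query id="selectAllEmployees" useConfig="default">
    <sql>SELECT id, name FROM Employees</sql>
    <result element="Employees" rowName="Employee">
      <element name="id" column="id" xsdType="xs:integer"/>
      <element name="name" column="name" xsdType="xs:string"/>
    </result>
  </query>
  <operation name="getAllEmployees">
    <call-query href="selectAllEmployees"/>
  </operation>
</data>
```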

Related

Transfer huge data from MariaDB to SSMS when there is a data change in MariaDB

I tried using a scheduler for a Databricks notebook, but it keeps loading data unnecessarily. The data in MariaDB changes randomly, not on a fixed schedule, and with a plain pipeline I can't fire a trigger on a data change to transfer data from one database to the other.
Please help me with any pipeline ideas, Azure Data Factory ideas, or Python code so that I can transfer tables whenever there are changes in MariaDB.
One way to trigger the pipeline is to use an event-based trigger.
To create an event-based trigger in Azure Data Factory, create the trigger and select Custom events as its Type.
Refer - https://www.mssqltips.com/sqlservertip/6063/create-event-based-trigger-in-azure-data-factory/
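As a rough sketch, the resulting trigger definition would look something like this (the Event Grid topic scope, event type, and pipeline name are placeholders):

```json
{
  "name": "MariaDbChangeTrigger",
  "properties": {
    "type": "CustomEventsTrigger",
    "typeProperties": {
      "scope": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventGrid/topics/<topic>",
      "events": ["MariaDb.RowsChanged"],
      "subjectBeginsWith": "mariadb/"
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "CopyMariaDbToSqlServer",
          "type": "PipelineReference"
        }
      }
    ]
  }
}
```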
The second way is to use a Logic App. This is the best approach for your query.
Refer to this answer by Trent Tamura.
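Since the question also asks for Python ideas: something has to publish the custom event when MariaDB changes. Below is a minimal sketch, assuming an orders table with an updated_at column (both hypothetical) and the azure-eventgrid and mysql-connector-python packages; endpoints and credentials are placeholders.

```python
import mysql.connector
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Placeholders: fill in your own Event Grid topic endpoint, key, and DB credentials.
TOPIC_ENDPOINT = "https://<topic>.<region>-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<topic-key>"

def latest_change() -> str:
    """Return the most recent change marker from MariaDB."""
    conn = mysql.connector.connect(
        host="<mariadb-host>", user="<user>", password="<password>", database="<db>"
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT MAX(updated_at) FROM orders")  # hypothetical table/column
        return str(cur.fetchone()[0])
    finally:
        conn.close()

def publish_change_event(marker: str) -> None:
    """Publish the custom event that the ADF custom-events trigger listens for."""
    client = EventGridPublisherClient(TOPIC_ENDPOINT, AzureKeyCredential(TOPIC_KEY))
    client.send(EventGridEvent(
        subject="mariadb/orders",
        event_type="MariaDb.RowsChanged",
        data={"latest": marker},
        data_version="1.0",
    ))

if __name__ == "__main__":
    # Run this on a schedule (cron etc.); persist the last seen marker somewhere
    # durable and only publish when it moves forward.
    publish_change_event(latest_change())
```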

Connecting database from flow

How should I connect to the database from a flow to fetch the data, instead of querying the vault?
1. Instead of using custom query criteria, I am planning to use a query command (like a SQL command) to fetch the data from the database.
2. Is that possible?
3. If it is possible, how do I achieve it?
Use serviceHub.jdbcSession().prepareStatement() or serviceHub.withEntityManager { }.
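For the jdbcSession() route, here is a minimal Kotlin sketch; the table and column names are hypothetical and would come from your CorDapp's own schema.

```kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC

@StartableByRPC
class FetchRowsFlow(private val key: String) : FlowLogic<List<String>>() {
    @Suspendable
    override fun call(): List<String> {
        val results = mutableListOf<String>()
        // jdbcSession() hands the flow a JDBC Connection into the node's database
        serviceHub.jdbcSession().prepareStatement(
            "SELECT some_column FROM app_custom_table WHERE some_key = ?"
        ).use { ps ->
            ps.setString(1, key)
            val rs = ps.executeQuery()
            while (rs.next()) {
                results.add(rs.getString("some_column"))
            }
        }
        return results
    }
}
```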

Azure Cosmos DB - Gremlin API to clone existing collection into another collection

I have created a gremlin api database in Azure Cosmos DB and have data in one collection.
However, I want to know if there is a way to clone the data into another collection in another database.
I want to copy graph data from Dev environment to stage and prod environments.
You can use the existing tools for the Cosmos DB SQL API (earlier known as DocumentDB); Cosmos DB lets you query the graph via the SQL API as well.
Something like "select * from c" will fetch the JSON representation of how Cosmos DB stores your graph data.
The simplest approach would be to use the Cosmos DB migration tool:
Set the input source to Cosmos SQL API/DocumentDB, and use your dev endpoint with the query select * from c.
Set the output type to JSON and export your data.
Now use the downloaded JSON as the input source, set your prod graph DB as the output (choose DocumentDB/Cosmos SQL API as the output type), and run it.
This should push your dev graph data to prod.
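The same dev-to-prod copy can also be scripted against the SQL API, since the graph is stored as plain JSON documents. A minimal sketch with the azure-cosmos Python package; account endpoints, keys, and database/container names are placeholders.

```python
from azure.cosmos import CosmosClient

# Placeholders for your dev and prod accounts.
dev = CosmosClient("https://<dev-account>.documents.azure.com:443/", credential="<dev-key>")
prod = CosmosClient("https://<prod-account>.documents.azure.com:443/", credential="<prod-key>")

src = dev.get_database_client("<db>").get_container_client("<graph>")
dst = prod.get_database_client("<db>").get_container_client("<graph>")

# The raw JSON documents carry the full vertex/edge representation of the graph.
for doc in src.query_items(query="SELECT * FROM c", enable_cross_partition_query=True):
    # Drop system properties so the target account assigns its own.
    for sys_prop in ("_rid", "_self", "_etag", "_attachments", "_ts"):
        doc.pop(sys_prop, None)
    dst.upsert_item(doc)
```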
You can also use other Azure tools such as Data Factory, which works with DocumentDB.
I just used CosmicClone to clone a Cosmos DB graph database from one account to another: https://github.com/microsoft/CosmicClone. It cloned 500k records in 20 minutes. It looks like it would also work within a database to clone one collection.

SQL query using 2 different database engines

I need to use the results of an Oracle query in a SQL Server query within ASP.NET. Can I do this without exporting the Oracle query results to a temp file (e.g. CSV) and creating a new table in SQL Server from the CSV file? Thanks!
There are probably several different ways to do this, including connecting the SQL Server to the Oracle server using linked servers.
But if that's not possible for some reason, or if you need to use the Oracle result set as well as the SQL Server result set in your ASP.NET page, you can fill a DataSet with the Oracle query results and send a DataTable as a table-valued parameter to a stored procedure in SQL Server.
Look at this page on MSDN, explaining how to use table-valued parameters in SQL Server.
You can also look here for step-by-step instructions on how to create and use table-valued parameters in C#.
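A condensed C# sketch of that approach; the table type dbo.OracleRowType, the procedure dbo.MergeOracleRows, and the queried Oracle table are hypothetical, and the type and procedure must already exist on the SQL Server side.

```csharp
using System.Data;
using System.Data.SqlClient;
using Oracle.ManagedDataAccess.Client;

var oracleConnectionString = "<oracle-connection-string>";
var sqlServerConnectionString = "<sql-server-connection-string>";

// 1. Pull the Oracle result set into a DataTable.
var oracleRows = new DataTable();
using (var oraConn = new OracleConnection(oracleConnectionString))
using (var adapter = new OracleDataAdapter("SELECT id, name FROM some_table", oraConn))
{
    adapter.Fill(oracleRows);
}

// 2. Pass the DataTable to SQL Server as a table-valued parameter.
//    Server side (hypothetical names):
//    CREATE TYPE dbo.OracleRowType AS TABLE (Id INT, Name NVARCHAR(100));
//    CREATE PROCEDURE dbo.MergeOracleRows @Rows dbo.OracleRowType READONLY AS ...
using (var sqlConn = new SqlConnection(sqlServerConnectionString))
using (var cmd = new SqlCommand("dbo.MergeOracleRows", sqlConn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    var tvp = cmd.Parameters.AddWithValue("@Rows", oracleRows);
    tvp.SqlDbType = SqlDbType.Structured;   // marks the parameter as a TVP
    tvp.TypeName = "dbo.OracleRowType";
    sqlConn.Open();
    cmd.ExecuteNonQuery();
}
```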

specify default schema for a database in db2 client

Is there any way to specify a default schema for cataloged DBs in the DB2 client on AIX?
The problem is that when connecting to the DB, it takes the user ID as the default schema, and that's where it fails.
We have too many scripts that run transactions against the DB without specifying a schema in their DB2 SQL statements, so changing the scripts is not feasible at all.
Also, we can't create users to match the schema.
You can try issuing SET SCHEMA <your schema> before executing your queries.
NOTE: I am not sure this works (I don't have a DB2 database at hand at the moment, but it seems it should), and it may depend on your DB2 version.
You can create a stored procedure that just changes the current schema, and then set that SP as the connect procedure. You can test some conditions before making the schema change, for example whether the stored procedure is being executed directly from the AIX server by a given user.
You configure the database to run this SP each time a connection is established by modifying connect_proc:
http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.config.doc/doc/r0057371.html
http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.dbobj.doc/doc/c0057372.html
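A minimal sketch of such a connect procedure; MYSCHEMA, the procedure name, and the database name are hypothetical.

```sql
-- Procedure that forces the default schema for every new connection
-- (create it with a non-default statement terminator, e.g. db2 -td@ -f ...)
CREATE OR REPLACE PROCEDURE DB2INST1.CONNECT_PROC ()
  LANGUAGE SQL
BEGIN
  SET SCHEMA MYSCHEMA;
END@

-- Register it as the connect procedure (run from the command line):
-- db2 UPDATE DB CFG FOR mydb USING CONNECT_PROC DB2INST1.CONNECT_PROC
```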
Alternatively, you can create aliases in the new user's schema that point to the tables in the other schema. Refer to these links:
http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0000910.html
http://bytes.com/topic/db2/answers/181247-do-you-have-always-specify-schema-when-using-db2-clp
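For example (USER1 and APPSCHEMA are hypothetical), assuming the scripts connect as USER1 while the tables live under APPSCHEMA:

```sql
-- Unqualified references to CUSTOMERS made by USER1 will now resolve
-- through the alias to the real table.
CREATE ALIAS USER1.CUSTOMERS FOR APPSCHEMA.CUSTOMERS;
```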
