How can I connect InfluxDB to DataGrip?

I'm trying to connect my JetBrains DataGrip to InfluxDB.
But there is no InfluxDB entry in the list of data sources.
Has anyone used DataGrip with InfluxDB?

You can connect to it if a JDBC driver for InfluxDB exists (right now it looks like there is no such driver).
https://www.jetbrains.com/help/datagrip/2020.2/connecting-to-a-database.html#other
If you connect to a vendor that is not in the list of supported data sources, DataGrip uses JDBC metadata for database object retrieval (introspection) and the Generic SQL dialect. Introspection with JDBC metadata means that some specific database objects will not appear in the database tree view. With the Generic SQL dialect, you will have basic code completion like SELECT * FROM <table_name>, but code completion will not include objects that were not retrieved during introspection.
Other than that: https://youtrack.jetbrains.com/issue/DBE-5158 -- watch this ticket (star/vote/comment) to get notified of any progress.

Related

Dynamically generate view definition for ODBC Cosmos data source

There is a requirement where my third-party application needs to connect to Cosmos DB, so I set up an ODBC connection for Cosmos via the Cosmos ODBC driver. In my Cosmos account, containers are generated dynamically.
The problem is that in some cases I am not able to query some columns via ODBC, even though these are not nested properties in the document.
I also tried a workaround by going to Schema Editor > Schema Definition > View Definition > Add New View; this way I am able to get the data.
But as mentioned, my containers are generated dynamically, so how can I add view definitions to containers dynamically?

How to access on-premises Teradata from Azure Databricks

We need to connect to on-premises Teradata from Azure Databricks.
Is that possible at all?
If yes, please let me know how.
I was looking for this information as well and I recently was able to access our Teradata instance from Databricks. Here is how I was able to do it.
Step 1. Check your cloud connectivity.
%sh nc -vz 'jdbcHostname' 'jdbcPort'
- 'jdbcHostname' is your Teradata server hostname.
- 'jdbcPort' is your Teradata server listening port. By default, Teradata listens on TCP port 1025.
Also check out Databricks' best practices on connecting to other infrastructure.
Step 2. Install Teradata JDBC driver.
Teradata Downloads page provides JDBC drivers by version and archive type. You can also check the Teradata JDBC Driver Supported Platforms page to make sure you pick the right version of the driver.
Databricks offers multiple ways to install a JDBC library JAR for databases whose drivers are not available in Databricks. Please refer to the Databricks Libraries documentation to learn more and pick the one that is right for you.
Once installed, you should see it listed in the Cluster details page under the Libraries tab.
terajdbc4.jar dbfs:/workspace/libs/terajdbc4.jar
Step 3. Connect to Teradata from Databricks.
You can define some variables to programmatically create these connections. Since my instance required LDAP, I added LOGMECH=LDAP in the URL. Without LOGMECH=LDAP, it returns a "username or password invalid" error message.
(Replace the placeholder values below with the values from your environment.)
driver = "com.teradata.jdbc.TeraDriver"
url = "jdbc:teradata://Teradata_database_server/Database=Teradata_database_name,LOGMECH=LDAP"
table = "Teradata_schema.Teradata_tablename_or_viewname"
user = "your_username"
password = "your_password"
Now that the connection variables are specified, you can create a DataFrame. You can also explicitly set the DataFrame schema if you already have one. Please refer to the Spark SQL Guide for more information.
Now, let’s create a DataFrame in Python.
My_remote_table = spark.read.format("jdbc") \
  .option("driver", driver) \
  .option("url", url) \
  .option("dbtable", table) \
  .option("user", user) \
  .option("password", password) \
  .load()
Now that the DataFrame is created, it can be queried. For instance, you can select particular columns and display them within Databricks.
display(My_remote_table.select("EXAMPLE_COLUMN"))
Step 4. Create a temporary view or a permanent table.
My_remote_table.createOrReplaceTempView("YOUR_TEMP_VIEW_NAME")
or
My_remote_table.write.format("parquet").saveAsTable("MY_PERMANENT_TABLE_NAME")
Steps 3 and 4 can also be combined if the intention is simply to create a table in Databricks from Teradata; a sketch of that combined approach is shown below. Check out the Databricks documentation SQL Databases Using JDBC for other options.
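For reference, here is a minimal sketch of that combined read-and-persist step, reusing the connection variables from Step 3 (the target table name is just a placeholder, not a value from my environment):
# Read from Teradata over JDBC and persist the result as a managed table in one pass.
# Reuses driver, url, table, user, and password defined in Step 3.
(spark.read.format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("dbtable", table)
    .option("user", user)
    .option("password", password)
    .load()
    .write.format("parquet")
    .saveAsTable("MY_PERMANENT_TABLE_NAME"))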
Here is a link to the write-up I published on this topic.
Accessing Teradata from Databricks for Rapid Experimentation in Data Science and Analytics Projects
If you create a virtual network that can connect to on-premises resources, you can deploy your Databricks instance into that VNet. See https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html.
I assume there is a Spark connector for Teradata. I haven't used it myself, but I'm sure one exists.
You can't. If you run Azure Databricks, all the data needs to be stored in Azure. But you can call the data from Teradata using a REST API and then save the data in Azure.

Symfony - Log runnable native queries when the database is down

I'm working on a Symfony app that provides a REST web service (simple HTTP requests with JSON).
The service checks some rules and inserts a few rows into two MySQL tables (write only).
For optimization reasons, even though the Doctrine bundle is available, I use native MySQL queries (with bound parameters) to insert these rows.
My need is: if, for any reason, the database is not available, write the "runnable" queries to a log file.
The final purpose is that when the database is back, I want to be able to execute the file's content directly against the database.
Note that there is no unique constraint (the PK is a generated UUID) and no locks or transactions to handle (simple insert statements).
I wrote a custom SQLLogger, but when $connection->insert(...) is called, the connection fails before the logger is called.
So, my question is: is there a way to get the final query (with bound parameters) without a database connection?
Or should I rewrite the mechanism that binds parameters into the query and log it myself when the database is not available?
Best regards,
Julien
As the final query with parameters is built by the database, there is just no way to build the query in PHP and be guaranteed that it will be identical to what the database would execute.
The only way is to build the query without bound parameters, but this is clearly not good practice.
So I finally decided to store the whole JSON (API request body) in a file if the database is not available.
When the database is back, instead of replaying SQL queries, I can replay the original HTTP requests.
Hope this late self-answer helps someone.
Best regards.
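The store-and-replay idea is framework-agnostic; as a rough illustration only, here is a minimal sketch in Python (the log file name, endpoint URL, and one-JSON-body-per-line format are assumptions, not part of the original Symfony setup):
import json
import requests  # any HTTP client would do

# Each line of the fallback file is assumed to hold one JSON request body
# that was saved while the database was unavailable.
with open("failed_requests.log") as f:
    for line in f:
        body = json.loads(line)
        # Replay the original HTTP request against the same endpoint.
        response = requests.post("https://example.com/api/endpoint", json=body)
        response.raise_for_status()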

Loading Hive Data into an Oracle DB

I need to map a Hive table to an Oracle DB using ODI 11.1.1.6.0.
I created the physical and logical schemas for the two technologies (connection test is OK).
I have a physical and logical ODI agent that uses HTTP and port 20910 (connection test is OK).
I used the RKM for the reverse engineering of the two tables (the Hive table and the corresponding Oracle table with the same fields).
After that, I created a project with an interface to test the mapping.
I dragged and dropped the source Hive table and the target Oracle table.
After that, I dragged and dropped each field of the Hive table onto the corresponding Oracle field.
The dimension/type of each field in the two tables is the same.
I checked the flow of the interface, and it uses only IKM File-Hive to Oracle (OLH).
When I start the interface, the session starts but there is this error:
ODI-1226: Step Hive_to_Oracle_test fails after 1 attempt(s).
ODI-1240: Flow Hive_to_Oracle_test fails while performing a Integration operation. This flow loads target table TEST_TABLE.
Caused By: com.sunopsis.dwg.function.SnpsFunctionBaseException: ODI-30038: OS command returned 4.
at com.sunopsis.dwg.tools.OSCommand.actionExecute(OSCommand.java:294)
at com.sunopsis.dwg.function.SnpsFunctionBase.execute(SnpsFunctionBase.java:276)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execIntegratedFunction(SnpSessTaskSql.java:3437)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.executeOdiCommand(SnpSessTaskSql.java:1509)
at oracle.odi.runtime.agent.execution.cmd.OdiCommandExecutor.execute(OdiCommandExecutor.java:44)
at oracle.odi.runtime.agent.execution.cmd.OdiCommandExecutor.execute(OdiCommandExecutor.java:1)
at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:558)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:464)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
at java.lang.Thread.run(Thread.java:662)
hive_to_oracle_test is my interface, TEST_TABLE is my Oracle table.
Any idea?
I think you can use Sqoop; check https://ccp.cloudera.com/display/con/Quest+Data+Connectors. You can also look into OraOop; check http://archive.cloudera.com/cdh/3/adapters/oraoopuserguide.pdf for more details. Both of these can be used to transfer data from Hive to an Oracle database.

Connecting to DSN created by SQLite driver

How can I connect to a DSN created by the SQLite driver using SQL Anywhere APIs from C++ code?
I am using db_string_connect() to connect to Sybase Adaptive Server Anywhere. I want to use the same function to connect to the DSN created by the SQLite driver as well, but the db_string_connect() API returns SQLCODE -103 ("You supplied an invalid user ID or an incorrect password.").
I have this somewhat odd requirement because I want to abstract connections to different databases at the ODBC layer. The code to connect to Sybase is already written, and I want to minimize the changes to it. Hope I am making some sense.
Thanks.
You will not be able to use a function from the SQL Anywhere client library to connect directly to some other database. Typically, if you need to connect to and manipulate different types of database systems, you have to introduce a database layer that sits between the vendor-specific client libraries and your code. This could be something you write yourself, or you could use an existing one.
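As a rough illustration only (sketched in Python rather than C++, with hypothetical DSN names), going through the ODBC driver manager is one such layer: the calling code stays the same regardless of which driver the DSN points at. In C++ the equivalent would be the raw ODBC API (SQLDriverConnect and friends) instead of db_string_connect().
import pyodbc

# Both connections go through the ODBC driver manager, not a vendor-specific
# client library, so the code below is identical for either backend.
connections = [
    pyodbc.connect("DSN=MySqliteDsn"),              # hypothetical SQLite DSN
    pyodbc.connect("DSN=MyAsaDsn;UID=dba;PWD=sql")  # hypothetical SQL Anywhere DSN
]

for conn in connections:
    cursor = conn.cursor()
    cursor.execute("SELECT 1")  # same call regardless of the backing database
    print(cursor.fetchone())
    conn.close()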
