If my logged message (log schema) has nested structures, how can I use the Cassandra log appender provided by Kaa (via the Kaa log appender UI console) to map the nested field values to Cassandra columns?
The current implementation of the Cassandra log appender doesn't allow mapping nested fields. We have a ticket for improving the log appender and will implement this in the next Kaa version.
If you aren't able to wait until the next Kaa release, we suggest you attempt to implement such functionality yourself.
I'm trying to connect to a database in Azure Data Explorer from R using the AzureKusto library. Following this documentation https://github.com/Azure/AzureKusto, after calling the kusto_database_endpoint(...) function I need to open a browser page and enter the printed code manually. Is there a way to skip this manual step and do it automatically? Or are there alternatives for connecting to an ADX database?
Thanks for the help!
Co-creator of the package here. Thank you for the question. Yes, you can use the get_kusto_token function to obtain a token and then pass it to kusto_database_endpoint as the .query_token argument. get_kusto_token supports the following authentication flows:
"authorization_code"
"device_code"
"client_credentials"
"resource_owner"
For example, if you have an AAD application service principal that has access to the Azure Data Explorer cluster, you can use its ID and secret to authenticate:
# authenticate using client_credentials method: see ?AzureAuth::get_azure_token
token <- get_kusto_token("https://mycluster.kusto.windows.net",
                         tenant="mytenant",
                         authtype="client_credentials",
                         app="myappid",
                         password="myclientsecret")

kusto_database_endpoint(server = "mycluster.kusto.windows.net",
                        database = "mydb",
                        .query_token = token)
The help page ?AzureKusto::get_kusto_token provides more detailed information on this. Also, please note that the get_kusto_token function is a wrapper around AzureAuth::get_azure_token. The readme for the AzureAuth R package has more detailed examples of other methods of obtaining an Azure access token: https://github.com/Azure/AzureAuth
We need to connect to on-premises Teradata from Azure Databricks.
Is that possible at all?
If yes, please let me know how.
I was looking for this information as well and I recently was able to access our Teradata instance from Databricks. Here is how I was able to do it.
Step 1. Check your cloud connectivity.
%sh nc -vz 'jdbcHostname' 'jdbcPort'
- 'jdbcHostname' is your Teradata server hostname.
- 'jdbcPort' is the port your Teradata server listens on. By default, Teradata listens on TCP port 1025.
Also check out Databricks' best practices on connecting to other infrastructure.
Step 2. Install Teradata JDBC driver.
Teradata Downloads page provides JDBC drivers by version and archive type. You can also check the Teradata JDBC Driver Supported Platforms page to make sure you pick the right version of the driver.
Databricks offers multiple ways to install a JDBC library JAR for databases whose drivers are not available in Databricks. Please refer to the Databricks Libraries to learn more and pick the one that is right for you.
Once installed, you should see it listed in the Cluster details page under the Libraries tab.
Terajdbc4.jar dbfs:/workspace/libs/terajdbc4.jar
Step 3. Connect to Teradata from Databricks.
You can define some variables to let us programmatically create these connections. Since my instance required LDAP, I added LOGMECH=LDAP to the URL. Without LOGMECH=LDAP, it returned a "username or password invalid" error message.
(Replace the italicized text with the values from your environment.)
driver = "com.teradata.jdbc.TeraDriver"
url = "jdbc:teradata://Teradata_database_server/Database=Teradata_database_name,LOGMECH=LDAP"
table = "Teradata_schema.Teradata_tablename_or_viewname"
user = "your_username"
password = "your_password"
Now that the connection variables are specified, you can create a DataFrame. You can also explicitly set this to a particular schema if you have one already. Please refer to Spark SQL Guide for more information.
Now, let’s create a DataFrame in Python.
My_remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", table) \
    .option("user", user) \
    .option("password", password) \
    .load()
Now that the DataFrame is created, it can be queried. For instance, you can select particular columns and display them within Databricks.
display(My_remote_table.select("EXAMPLE_COLUMN"))
Step 4. Create a temporary view or a permanent table.
My_remote_table.createOrReplaceTempView("YOUR_TEMP_VIEW_NAME")
or
My_remote_table.write.format("parquet").saveAsTable("MY_PERMANENT_TABLE_NAME")
Steps 3 and 4 can also be combined if the intention is simply to create a table in Databricks from Teradata, as shown in the sketch below. Check out the Databricks documentation SQL Databases Using JDBC for other options.
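For example, here is a minimal sketch of the combined read-and-save, assuming the connection variables from Step 3 are defined and reusing the same placeholder table name as above:

# Read the Teradata table over JDBC and persist it as a managed parquet table in one chain.
# driver, url, table, user and password are the variables defined in Step 3;
# MY_PERMANENT_TABLE_NAME is just a placeholder.
(spark.read.format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("dbtable", table)
    .option("user", user)
    .option("password", password)
    .load()
    .write.format("parquet")
    .saveAsTable("MY_PERMANENT_TABLE_NAME"))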
Here is a link to the write-up I published on this topic.
Accessing Teradata from Databricks for Rapid Experimentation in Data Science and Analytics Projects
If you create a virtual network that can connect to on-premises resources, then you can deploy your Databricks instance into that VNet. See https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html.
I assume that there is a Spark connector for Teradata. I haven't used it myself, but I'm sure one exists.
You can't. If you run Azure Databricks, all the data needs to be stored in Azure. But you can retrieve the data from Teradata using a REST API and then save it in Azure.
Currently I'm using WCF as a service bus, but I want to switch to a more powerful one. I chose Rebus.
I'm somewhat new to Rebus and have a few questions:
1) My data is persisted in a DB table. I want the publisher to read all the persisted data every n seconds, publish it to subscribers, and then set a "sent" flag on the data in the DB.
Is there some timing mechanism for publishing?
Reading, publishing, and changing (flagging) the data must be done in a transaction scope. Is there a built-in solution for this in Rebus?
2) In the consumer, I want to save the published data in a table. Reading the message from the message queue and saving it to the DB (in my handler) must be done in a transaction scope. How does Rebus do this?
3) The message label for published messages is set to a random unique string. I want to set my own custom label for the created MSMQ message. Is there any solution?
1) You are on your own when it comes to querying database tables at regular intervals – there's no built-in mechanism in Rebus that does this.
I can recommend you take a look at System.Timers.Timer or something similar.
2) You can enable automatic transaction scopes in your Rebus handlers by using the Rebus.TransactionScopes package.
3) Out of the box, it is not possible to specify the label to be used on the MSMQ message. It will be set by Rebus to a string consisting of the message type and ID as indicated by this extension method.
I'm trying to set up a very simple BAM scenario within BizTalk Server 2013 R2 upon which to build, involving tracking just the time of all incoming messages processed by a port.
To this end I have:
- Within Excel, created an Activity Definition (called SimpleReceiveTest) containing a single Item called ReceiveTime of type milestone (date/time), and a View Definition (also called SimpleReceiveTest) containing just this Activity Definition and Item.
- Imported this BAM definition spreadsheet using bm.exe.
- Added view rights to SimpleReceiveTest, again using bm.exe.
- Launched the Tracking Profile Editor, imported the BAM Activity Definition, and mapped ActivityID = MessageID and ReceiveTime = PortStartTime by drag and drop from the Messaging Property Schema.
- Set the Port Mappings for MessageID and PortStartTime to relate to a test Receive Port, ReceivePort1, that I am using for testing. This is using a pass-through pipeline.
- Saved and applied the above Tracking Profile.
It is my understanding that for any messages received on port ReceivePort1 I should now get a tracking activity created. However this is not happening – there are no records in any of the BAM tables/views and no data is available within the BAM Portal.
I have tried restarting the hosts, and have verified that the TDDS_FailedTrackingData table is empty, there is nothing relevant in the event log, a Tracking host is running and the SQL Agent Jobs are running. I have also tried running these jobs manually.
Have I missed something, and am I correct in my expectation that this simple scenario should create tracked activities for any messages passing through the Receive Port? If so what can I try to further diagnose this?
Now fixed - it's actually a bug in vanilla BizTalk 2013R2 when using a standard pipeline that has been fixed in CU2.
FIX: BAM tracking doesn’t work when you use the XMLReceive or a custom pipeline in BizTalk Server
How can I improve DataPower monitoring? I want to improve our monitoring techniques: for example, I want to check that all objects (FSHs/MQ FSHs, SSL proxies, crypto profiles, etc.) are up and be notified by email or similar if any of them goes down, and I want to check the number of files in the file management on-disk folders. Basically, I want to validate the adapter after deployment (we use SoapUI to test adapter functionality, but I'm looking for something additional to improve or add validation). Please suggest any ideas that can be implemented as a process improvement on DataPower.
For example, you can get the status of all your domains using the SOMA call below. You can test this using SoapUI, or script it yourself (see the sketch after the stylesheet snippet). You can get the list of the various SOMA calls from the DataPower management WSDL (available in the DataPower store: directory).
<!-- get all the domains -->
<xsl:variable name="domainsList">
    <dp:url-open target="{$XML-MGMT-URL}" response="responsecode">
        <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
            <env:Body>
                <dp:request xmlns:dp="http://www.datapower.com/schemas/management">
                    <dp:get-status class="DomainStatus"/>
                </dp:request>
            </env:Body>
        </env:Envelope>
    </dp:url-open>
</xsl:variable>
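If you would rather run the same status check from a script outside DataPower (for instance on a schedule), a minimal Python sketch along these lines could work. The endpoint path, port 5550 and the credentials are assumptions based on a default XML Management Interface configuration; adjust them for your appliance:

import requests

# Same SOMA request as in the stylesheet above: ask for the status of all domains.
SOMA_BODY = """<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request xmlns:dp="http://www.datapower.com/schemas/management">
      <dp:get-status class="DomainStatus"/>
    </dp:request>
  </env:Body>
</env:Envelope>"""

# Assumed default XML Management Interface endpoint -- replace host, port and credentials.
XML_MGMT_URL = "https://datapower-host:5550/service/mgmt/current"

response = requests.post(
    XML_MGMT_URL,
    data=SOMA_BODY,
    auth=("admin-user", "admin-password"),  # placeholder credentials
    verify=False,  # management interfaces often use self-signed certificates
    timeout=30,
)
response.raise_for_status()
print(response.text)  # parse this XML to extract each domain's operational state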
Try using SOMA commands of the XML management interface to check the object status.
I am not sure if this is the best approach, but this is how I have implemented it. You can create a testing service in DataPower, with or without an interactive Java application, to perform all the SOAP tests you currently run through SoapUI. You can make SOMA/AMP calls to check the status of objects, ping external services, etc. You can schedule these tests to run at a regular interval or trigger them manually.
Depending on how you set it up, you can either generate an email with the status of each object/service you are testing (sketched below) or create an HTML dashboard that records the current status of everything.
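As a rough illustration of the email variant, assuming you have already collected each object's status into a dictionary (for example by parsing SOMA responses as above), something like this could send the notification. The SMTP host and addresses are placeholders:

import smtplib
from email.message import EmailMessage

def notify_if_down(statuses, smtp_host="smtp.example.com",
                   sender="datapower-monitor@example.com",
                   recipient="ops-team@example.com"):
    """Email a list of every monitored object that is not 'up'.

    statuses maps object names (FSH, SSL proxy, crypto profile, ...) to their
    current operational state, e.g. {"MyMQFSH": "up", "MySSLProxy": "down"}.
    """
    down = {name: state for name, state in statuses.items() if state != "up"}
    if not down:
        return  # everything is up, nothing to report

    msg = EmailMessage()
    msg["Subject"] = "DataPower objects down: " + ", ".join(sorted(down))
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("\n".join(f"{name}: {state}" for name, state in down.items()))

    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)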