WSO2 DSS 3.2.2: call two queries from one operation - wso2-data-services-server

I want to call two queries in one operation and get the combined result. In version 3.1.1 I used the configuration below, but in the new version it does not work. Any solution for this?
This is what I used earlier:
<call-query-group>
   <call-query href="OnBoardingCheckList_Query">
      <with-param name="partyid" query-param="partyid"/>
      <with-param name="loginName" query-param="loginName"/>
   </call-query>
   <call-query href="ManagemetPortal_query" requiredRoles="">
      <with-param name="loginName" query-param="loginName"/>
   </call-query>
</call-query-group>
Unfortunately, this is not working in WSO2 DSS 3.2.2.
Cheers!
Chathura

I am unfamiliar with older versions, but I think what you are looking to do is invoke two queries and return one result. A challenge is how you handle the situation where the results of each query have differing schemas.
If you are updating values in one or both of the queries, then you should look into 'boxcarring'.
If you are just reading from both queries, then read on...
I have previously handled this using WSO2 ESB along with WSO2 DSS.
Essentially, set up your two DSS operations.
Configure the ESB to invoke the DSS operations and aggregate the DSS responses into a single result.
Publish the ESB service, not the two DSS operations.
You can read more here: http://dakshithar.blogspot.ca/2014/05/entity-aggregation-with-wso2-esb-and_14.html
Also, if for development purposes you need to run both DSS and ESB on a single machine, you will need to set the Port Offset of one of the deployments so they can run side by side on the machine without a conflict. You can change the Port Offset of either DSS or ESB; it does not matter. I usually change the offset of the last one I installed.
To set the Port Offset
The Port Offset can be set in the repository/conf/carbon.xml file of the ESB or DSS binary distribution folder; set the value of the <Offset> element (under <Ports>) to 1.

Related

Kafka Connector for Oracle Database Source

I want to build a Kafka Connector in order to retrieve records from a database at near real time. My database is the Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 and the tables have millions of records. First of all, I would like to add the minimum load to my database using CDC. Secondly, I would like to retrieve records based on a LastUpdate field which has value after a certain date.
Searching the Confluent site, the only open-source connector that I found was the “Kafka Connect JDBC” connector. I think this connector doesn't have a CDC mechanism, and it isn't possible to retrieve millions of records when the connector starts for the first time. The alternative solution I thought of is Debezium, but there is no Debezium Oracle connector on the Confluent site and I believe it is at a beta version.
Which solution would you suggest? Is something wrong with my assumptions about Kafka Connect JDBC or the Debezium connector? Is there any other solution?
For query-based CDC, which is less efficient, you can use the JDBC source connector.
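As a minimal sketch of that route, the snippet below registers Confluent's JDBC source connector in timestamp mode against the LastUpdate column mentioned in the question; the Connect host, connection URL, credentials, table name and topic prefix are placeholders, and the connector plugin is assumed to be installed on the Connect worker.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterJdbcSourceConnector {
    public static void main(String[] args) throws Exception {
        // Connector config for query-based (timestamp) CDC: in "timestamp" mode the
        // connector repeatedly polls for rows whose LastUpdate value is newer than
        // the last offset it stored. All names below are placeholders.
        String config = """
                {
                  "name": "oracle-jdbc-source",
                  "config": {
                    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                    "connection.url": "jdbc:oracle:thin:@//db-host:1521/ORCL",
                    "connection.user": "kafka_reader",
                    "connection.password": "secret",
                    "mode": "timestamp",
                    "timestamp.column.name": "LastUpdate",
                    "table.whitelist": "MY_SCHEMA.MY_TABLE",
                    "topic.prefix": "oracle-",
                    "poll.interval.ms": "10000"
                  }
                }
                """;

        // Register the connector through the Kafka Connect REST API (default port 8083).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect-host:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
Keep in mind that in timestamp mode the connector still has to read all existing rows on its first run (the stored offset starts empty), so the initial load on a table with millions of records is not avoided by default.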
For log-based CDC I am aware of a couple of options; however, some of them require a license:
1) Attunity Replicate, which allows users to use a graphical interface to create real-time data pipelines from producer systems into Apache Kafka without having to do any manual coding or scripting. I have been using Attunity Replicate for Oracle -> Kafka for a couple of years and was very satisfied.
2) Oracle GoldenGate, which requires a license.
3) Oracle LogMiner, which does not require any license and is used by both Attunity and kafka-connect-oracle, a Kafka source connector for capturing all row-based DML changes from an Oracle database and streaming these changes to Kafka. Its change data capture logic is based on the Oracle LogMiner solution.
We have numerous customers using IBM's IIDR (InfoSphere Data Replication) product to replicate data from Oracle databases (as well as Z mainframe, iSeries, SQL Server, etc.) into Kafka.
Regardless of which of these sources is used, data can be normalized into one of many formats in Kafka. An example of an included, selectable format is:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavrosinglerow.html
The solution is highly scalable and has been measured to replicate changes at rates of hundreds of thousands of rows per second.
We also have a proprietary ability to reconstitute data written in parallel to Kafka back into its original source order. So, despite data having been written to numerous partitions and topics, the original total order can be known. This functionality is known as the TCC (transactionally consistent consumer).
See the video and slides here...
https://kafka-summit.org/sessions/exactly-once-replication-database-kafka-cloud/

topbeat to monitor specific java process

I am a newbie to Beats. I am using Topbeat to monitor system health.
Up to this point everything is fine.
Now I need to monitor the resource utilization of a java process, so I configured topbeat.yml as: procs: ["java"]
On my Linux box there are 4 Java processes running, but I am interested in only one of them. So:
Is there any way to monitor specific java process using regex?
Is there any way to differentiate the processes by name [not with pid]?
If you wish to view certain processes, you can use the sample Topbeat dashboards; in those dashboards there is one saved search for proc stats. From there, select proc.name from the available fields and filter further on the proc.name you are interested in.
Suggestion from the Elastic forum (https://discuss.elastic.co/t/topbeat-monitor-specific-java-process/65594/2): try Metricbeat and see if it helps.

How could Bosun fit my use case?

I need an alerting system where I can have my own metrics and thresholds to report anomalies (basically alerting on the basis of logs and data in a DB). I explored Bosun but I am not sure how to make it work. I have the following issues:
There are pre-defined items which are all system level, but I couldn't find a way to add new items, i.e. custom items.
How will Bosun ingest data other than through scollector? As I understand it, could I use Logstash as a data source and skip OpenTSDB entirely (I really don't like the HBase dependency)?
By items I think you mean metrics. Bosun learns about metrics and their tag relationships when you do one of the following:
Relay opentsdb data through Bosun (http://bosun.org/api#sending-data)
Get copies of metrics sent to the api/index route http://bosun.org/api#apiindex
There are also metadata routes, which tell bosun about the metric, such as counter/gauge, unit, and description.
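As a small illustration of the relay route, the sketch below pushes one OpenTSDB-style datapoint into Bosun; it assumes Bosun's default :8070 listener and its OpenTSDB-compatible /api/put relay described at the sending-data link above, and the metric name, tags and host are made up.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SendDatapointToBosun {
    public static void main(String[] args) throws Exception {
        // One OpenTSDB-style datapoint; Bosun indexes the metric and its tags while
        // relaying it, so the metric then shows up for use in alert expressions.
        String datapoint = """
                [
                  {
                    "metric": "myapp.orders.failed",
                    "timestamp": %d,
                    "value": 3,
                    "tags": { "host": "app01", "env": "prod" }
                  }
                ]
                """.formatted(System.currentTimeMillis() / 1000);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://bosun-host:8070/api/put"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(datapoint))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
    }
}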
The Logstash datasource will be deprecated in favor of an elastic datasource in the coming 0.5.0 release. The elastic datasource that replaces it is better, but requires ES 2+. To use those expressions, see the raw documentation (the bosun.org docs will be updated next release): https://raw.githubusercontent.com/bosun-monitor/bosun/master/docs/expressions.md. To add it, you would have something like the following in the config:
elasticHosts=http://ny-lselastic01.ds.stackexchange.com:9200,http://ny-lselastic02.ds.stackexchange.com:9200,http://ny-lselastic03.ds.stackexchange.com:9200
The functions to query various backends are only loaded into the expression library when the backend is configured.

Pool Multiple Messages with BizTalk 2006 SQL Adapter

I have a stored procedure that returns a simple table containing several records:
DECLARE @STEPS_TABLE AS TABLE (OrchestrationID uniqueidentifier, [Message] nvarchar(1000));
-- LOADING THE VALUES HERE
SELECT * FROM @STEPS_TABLE AS Step FOR XML AUTO, XMLDATA, ELEMENTS
I used the SQL Transport Schema Generation Wizard to create my schema and could configure the port correctly. If I use this schema in my orchestration, it works perfectly. BizTalk starts one instance of the orchestration every time @STEPS_TABLE has more than one record.
Reading the Microsoft technical documentation, they recommend getting several messages in one call and then using the XML pipeline to disassemble the multi-row BizTalk message into single-row BizTalk messages.
I haven't used the XML pipeline before, so I tried the provided steps but couldn't get it to work.
Could somebody provide me a link to a "how to" (I didn't find anything until now, after several hours of searching) or give me some hints on how to succeed?
Thanks in advance.
... Some hours later I figured it out myself. So if anybody comes across the same issue, here are some guidelines to make it work in your environment.
In the end I followed a different walkthrough from Microsoft and avoided the pipeline recommendation altogether. The documentation I found is called "Disassembling Result Sets Using the SQL Adapter" and does exactly what I was looking for. You can follow the whole walkthrough from Microsoft, but skip the creation of the send port and make a small adjustment to the receive port.
After following the technical document you will end up with two schemas; for the sake of this exercise I will call them message and envelope (the envelope contains several messages). In your orchestration, create a receive port that maps to the message schema. When you configure it as a SQL port and link it to your stored procedure (or SELECT statement), you only have to change the Document Root Element Name to the envelope root name; the XML Receive pipeline (provided by default in BizTalk 2006) will do the magic of disassembling the messages contained in the envelope and instantiating an orchestration for each message.
The Microsoft "Disassembling Result Sets Using the SQL Adapter" walkthrough can be found under:
http://msdn.microsoft.com/en-us/library/aa562098(v=bts.20).aspx
Mission accomplished :)

Oracle Coherence with WebLogic Server?

Hi, I am new to Oracle Coherence.
Question 1: My scenario is that I have to implement an Oracle Coherence replicated cache in my web application (with WebLogic Server). Coherence should be part of the WebLogic server, meaning that when I start the WebLogic server, Coherence should start as well (both should run in a single JVM). Please help me with how to do this.
Question 2: Do I need a database to maintain the records, or does Oracle Coherence itself maintain them in the file system? If so, how, and what will happen to the cached data when I shut down the server?
Q1:
I would describe it in a couple of steps:
Place coherence.jar on the classpath. Depending on the specific case, it can be the WLS classpath or the application's classpath. Unless you want to share a Coherence node between many applications, it is often a better idea to put it on the application's classpath. It also has other advantages, such as easier maintenance.
Prepare your own cache configuration for the replicated topology. You can skip this step if you want to use the default Coherence cache configuration coherence-cache-config.xml, which includes a replicated topology, but keep in mind that your cache name must then start with repl-, and this is in general not recommended for production. Otherwise, put the following in your custom-cache-config.xml file and add it to your application's classpath:
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>my-repl-cache</cache-name>
         <scheme-name>replicated</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <replicated-scheme>
         <scheme-name>replicated</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </replicated-scheme>
   </caching-schemes>
</cache-config>
Create a ContextListener for your application and place the following code into its contextInitialized method (a complete listener sketch follows after these steps):
// join existing cluster or form a new one
CacheFactory.ensureCluster();
Start your WLS with the following option:
-Dtangosol.coherence.cacheconfig=custom-cache-config.xml
Deploy and start your application (possibly on many servers)
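For completeness, a full listener might look roughly like the sketch below; it uses the standard javax.servlet API, the class and attribute names are just examples, and the listener still needs to be registered in your web.xml.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class CoherenceLifecycleListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Join an existing cluster or form a new one when the web application starts.
        CacheFactory.ensureCluster();

        // Optionally look up the replicated cache defined in custom-cache-config.xml
        // and expose it to the rest of the application.
        NamedCache cache = CacheFactory.getCache("my-repl-cache");
        sce.getServletContext().setAttribute("myReplCache", cache);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Leave the cluster cleanly when the application is undeployed or WLS stops.
        CacheFactory.shutdown();
    }
}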
Q2:
In general, Coherence is an in-memory solution and doesn't persist data by default. If you need to manage data in a persistent store, you can look into the CacheStore interface; this is described in the documentation.
Keep in mind that you often have more than one Coherence node in the cluster, so you will not lose your data when you shut down one of them, because the data is always stored in other JVM(s) as well. When you restart your node, it will join the cluster and your data will be there.
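For illustration only, a bare-bones JDBC-backed CacheStore might look like the sketch below; the person table, its columns and the DataSource wiring are hypothetical and not taken from any Coherence sample.
import com.tangosol.net.cache.CacheStore;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class JdbcPersonCacheStore implements CacheStore {

    private final DataSource dataSource; // hypothetical: wire this up yourself

    public JdbcPersonCacheStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Read-through: called by Coherence on a cache miss.
    @Override
    public Object load(Object key) {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT name FROM person WHERE id = ?")) {
            ps.setObject(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Write-through: called by Coherence when a cache entry is inserted or updated.
    @Override
    public void store(Object key, Object value) {
        try (Connection con = dataSource.getConnection();
             PreparedStatement update = con.prepareStatement("UPDATE person SET name = ? WHERE id = ?")) {
            update.setObject(1, value);
            update.setObject(2, key);
            if (update.executeUpdate() == 0) {
                try (PreparedStatement insert = con.prepareStatement("INSERT INTO person (id, name) VALUES (?, ?)")) {
                    insert.setObject(1, key);
                    insert.setObject(2, value);
                    insert.executeUpdate();
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void erase(Object key) {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("DELETE FROM person WHERE id = ?")) {
            ps.setObject(1, key);
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // The bulk variants simply delegate to the single-entry methods.
    @Override
    public Map loadAll(Collection keys) {
        Map result = new HashMap();
        for (Object key : keys) {
            Object value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    @Override
    public void storeAll(Map entries) {
        for (Object o : entries.entrySet()) {
            Map.Entry entry = (Map.Entry) o;
            store(entry.getKey(), entry.getValue());
        }
    }

    @Override
    public void eraseAll(Collection keys) {
        for (Object key : keys) {
            erase(key);
        }
    }
}
You would then plug such a store into the cache configuration through a cachestore-scheme element (typically inside a read-write-backing-map-scheme for a distributed cache), as described in the Coherence documentation.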
Starting with WebLogic 12.1.2, there is excellent Coherence integration via the "Coherence Containers" functionality of WebLogic, in addition to the ActiveCache feature of WebLogic. Here is a URL for the container feature: http://docs.oracle.com/middleware/1212/wls/WLCOH/deploy-wls-coherence.htm
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
