XI52 "any-to-any" transformations - ibm-datapower

I have a virtual XI52 gateway with firmware version IDG.7.5.1.0. I would like to test transforming from one type to another (JSON to XML or vice versa is fine), and also, if possible, interacting with the transformation via SOMA in the XML management interface. I've been looking around all day for a simple example of how to do a transformation and haven't found one.
Any examples of how to do this? Even a solid, understandable doc about how to set this up would be great.

Related

Translate API - different result from the web service

When using the Translation API, I get a different (and worse) translation than if I use translate.google.com.
I am working on a project for a client, and the client was dissatisfied with the translation and noticed the difference.
Do these two services use different engines? I read that the API now uses NMT mode, and that translate.google.com already uses the same engine.
Both set to translate from Norwegian to English.
Any more information that can clear this up?
Thanks!
The differences between results from translate.google.com and the Translation API are considered expected behavior; they can arise from maintenance tasks and the logic used by the internal processes. However, the engines used for each service seem to be private information.
Based on this, it is normal to get some variance when using the API. As a workaround, you can use the model parameter to specify which of the available models to use; see the official Specifying a model documentation for more detail on this alternative.
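For illustration, here is a minimal sketch of passing the model parameter with the Python client (google-cloud-translate); the Norwegian sample text is made up, and working API credentials are assumed:

```python
# pip install google-cloud-translate  (v2 client API)
from google.cloud import translate_v2 as translate

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a valid service account key.
client = translate.Client()

# model="nmt" requests the neural model; model="base" requests the older
# phrase-based model, which may explain some of the differences observed.
result = client.translate(
    "God morgen, hvordan har du det?",  # hypothetical Norwegian sample
    source_language="no",
    target_language="en",
    model="nmt",
)
print(result["translatedText"])
```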
Almost three years later, and the problem still remains!
So I was trying to translate a dataset with the Google Translate API, but it failed to translate some texts into the target language (in my case, Persian/Farsi). So I decided to check them to see if there was a pattern, and maybe translate them using the web version of Google Translate.
As I was doing so, I noticed that the web version actually could translate some of those untranslated texts, but not all. Trying to find a reason for this behaviour, I found that most of them were names, not sentences. But as we know, names can easily be written with the target language's characters as the translation. So why doesn't the API transliterate those names while the web version does? This screenshot may explain it:
[screenshot: verified translation badge]
As can be seen, some translations have a badge indicating that the translation has been verified, while some others don't.
So to recap: my guess is that the API is set to use only verified translations, whereas the web version allows even unverified translations, since you can edit or report them.

Is there any way to use Point Cloud Library (or similar) to get a skeleton from point cloud data like obj?

Hi everyone.
I've been stuck for some days searching for a way to get the skeleton of point cloud data (such as an OBJ file), but without using a Kinect. Is it possible?
I found the Point Cloud Library, which handles a lot of tasks related to point cloud data, and its documentation includes a body keypoints detector, but that also works with Kinect grabbers.
In my case, I have point cloud data like in the picture, which was generated by a different depth sensor scanner. Is it possible to find the keypoints in such data?
I would really appreciate any help. Thanks in advance.
Even if it's not explicitly mentioned in the tutorial you linked, a quick look at the code suggests that you can use different data sources (e.g. PCD files), so you're not stuck with live capture from a Kinect.
All the tutorial code really does is the following:
Set up the GPU for the people parts detection.
Pick the appropriate data source.
Load the tree files for the body part detector.
Run the PeopleDetector on a single frame captured from the live grabber stream/PCD file.

Use Julia to perform computations on a webpage

I was wondering if it is possible to use Julia to perform computations on a webpage in an automated way.
For example, suppose we have a 3x3 HTML form in which we input some numbers. These form a square matrix A, and we can find its eigenvalues in Julia quite easily. I would like to use Julia to do the computation and then return the results.
In my understanding (which is limited in this area), the process should be something like:
collect the data entered in the form
send the data to a machine which has Julia installed
run the Julia code with the given data and store the result
send the result back to the webpage and show it.
Do you think something like this is possible? (I've seen some examples using HttpServer, which allows computation via the browser, but I'm not sure it's the right thing to use.) If so, what do I need to look into? Do you have any examples of such web-based calculations?
If you are using or can use Node.js, you can use node-julia. It has some limitations, but should work fine for this.
Coincidentally, I was already mostly done with putting together an example that does this. A rough mockup is available here, which uses express to serve the pages and plotly to display results (among other node modules).
Another option would be to write the server itself in Julia using Mux.jl and skip server-side javascript entirely.
Yes, it can be done with HttpServer.jl
It's pretty simple: you write a small script that starts your HttpServer, which then listens on the designated port. Part of configuring the web server is defining handlers (functions) that are invoked when certain events take place in your app's life cycle (new request, error, etc.).
Here's a very simple official example:
https://github.com/JuliaWeb/HttpServer.jl/blob/master/examples/fibonacci.jl
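Along the same lines, here is a rough sketch of the eigenvalues flow from the question, assuming the Julia 0.6-era HttpServer.jl handler API from that example; the /eig URL scheme and the naive query parsing are my own:

```julia
using HttpServer

# A request like GET /eig?m=1,2,3,4,5,6,7,8,9 carries the 3x3 matrix row-major.
http = HttpHandler() do req::Request, res::Response
    if startswith(req.resource, "/eig?m=")
        # Naive query parsing -- real code should use a proper URI parser.
        vals = map(s -> parse(Float64, s), split(split(req.resource, "=")[2], ","))
        A = reshape(vals, 3, 3)'   # transpose: row-major input -> 3x3 matrix
        return Response(string(eigvals(A)))
    end
    Response(404)
end

server = Server(http)
run(server, 8000)
```

Pointing the HTML form at that endpoint would cover all four steps from the question.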
However, things can get complex fast:
you already need to handle two actions:
a. render your HTML page where you take the user input (by default)
b. render the response page as a consequence of receiving a POST request
you'll need to extract the data payload coming through the form. Data sent via GET is easy to reach; data sent via POST, not so much.
if you expose this to users, you need to set up some failsafe measures to respawn your server script; otherwise it might just crash and exit.
if you open your script to the world, you must make sure it's not vulnerable to attacks: you don't want to let an attacker execute arbitrary Julia code on your server or access your DB.
So for basic usage on a small case, yes, HttpServer.jl should be enough.
If, however, you expect a bigger project, you can give Genie a try (https://github.com/essenciary/Genie.jl). It's still a work in progress, but it handles most of the low-level work, allowing developers to focus on the app's specific logic rather than on the transport layer (Genie's author here, btw).
If you get stuck, there are GitHub issues and a Gitter channel.
Try Escher.jl.
This enables you to build up the web page in Julia.

Spark RDD lineage graph representation

I would like to know if there is a way to convert the information provided by
the Spark API function RDD.toDebugString() into a more structured format, so it can be used to automatically produce a graphical representation, for example with Graphviz.
It seems that there is some activity around this going on:
https://issues.apache.org/jira/browse/SPARK-1015
But I would like to get the info from toDebugString() into a structured format,
and later decide which graph format to use for the representation.
toDebugString() internally iterates through the recursive structure of an RDD, building a displayable string.
Instead of making toDebugString() return more structured output, read its inner implementation (which does rely on structured data) and modify it to save the data in whatever way suits you.
You don't have to wait for any issue on JIRA, just DIY :)
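Alternatively, if you would rather stay on the string output than patch Spark's internals, a brittle but workable shortcut is that in a linear lineage each line of toDebugString() names the parent of the RDD on the line above it. A minimal Python sketch (the parsing heuristic is my own, and the string format is not a stable API):

```python
import re

def debug_string_to_dot(debug_str: str) -> str:
    """Turn a *linear* rdd.toDebugString() lineage into Graphviz DOT.

    Heuristic: in a linear lineage, each line's RDD is the parent of the
    RDD on the line above it. Branched lineages (joins, unions) would need
    real indentation tracking, and the format may change between versions.
    """
    names = [
        # strip the leading " | ", "+-" markers and the "(n)" partition count
        re.sub(r"^[\s|+\-]*(\(\d+\)\s*)?", "", line).strip()
        for line in debug_str.splitlines()
        if line.strip()
    ]
    edges = "\n".join('  "%s" -> "%s";' % (p, c) for c, p in zip(names, names[1:]))
    return "digraph lineage {\n%s\n}" % edges

# With PySpark this would be used as, e.g.:
#   dot = debug_string_to_dot(rdd.toDebugString().decode("utf-8"))
#   open("lineage.dot", "w").write(dot)  # then: dot -Tpng lineage.dot -o out.png
```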
A more detailed and formatted visual representation can be seen in the Spark UI, which runs by default on port 4040.
Here is the screenshot showing all the details:

Is it possible to replace apache solr in broadleafcommerce with ElasticSearch?

I am trying to use BroadleafCommerce and customize it. While studying it, I found that it uses Apache Solr. However, I am already comfortable with
ElasticSearch, as I currently use ElasticSearch at my workplace. So I'm curious whether I can replace that customizable code of BroadleafCommerce with ElasticSearch. If it is possible, I would also like to know how long it would take and how difficult it would be.
Thanks in advance!
The product is open source, so you can have a look at the code yourself. Here is the package that would need to be made Solr-independent. As far as I can see, there are quite a few dependencies on Solr right now, but maybe you can give it a shot and contribute the result back. In the end, that's the power of open source.
I can't tell exactly how much work that would be, since I don't know the product and what it does with the data. The Solr schema would need to be translated into the corresponding Elasticsearch mapping; then the indexer would need to be converted to push data to Elasticsearch (alternatively, if technically doable, you could write a river that imports data into Elasticsearch from the framework itself). The last step is to convert the search code, together with the facets, highlighting, etc.
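To make that first step concrete, here is a rough sketch of what translating a Solr schema field into an Elasticsearch mapping could look like with the Python client; the index and field names are hypothetical, not Broadleaf's actual schema:

```python
# pip install elasticsearch
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A Solr field like
#   <field name="name" type="text_general" indexed="true" stored="true"/>
# roughly becomes an Elasticsearch "text" field; facet fields map to "keyword".
# Field names here are hypothetical, not Broadleaf's actual schema.
mapping = {
    "mappings": {
        "properties": {
            "name": {"type": "text"},
            "category": {"type": "keyword"},  # used for faceting/filtering
            "price": {"type": "double"},
        }
    }
}
es.indices.create(index="products", body=mapping)
```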
Maybe you (or the people behind the project) might want to have a look at Spring Data, which now has a community-driven spring-data-solr project, and an unofficial Elasticsearch implementation too.
