PDAL pipelines and getting the difference

I have 2 overlapping point clouds - A.las and B.las.
A is from 2015 and B is from 2016, both are from the same area.
I have PDAL (through OSGeo4W64), and I'm trying to create a new file containing all the points that differ; the output can be two files (A_diff and B_diff) or a single All_diff.
I've tried to use diff within PDAL and PCL, but I'm not sure how to write the syntax of the JSON pipeline file, and the www.pdal.io site is not great for beginners. Can anyone provide me with an example?
Here's the PCL info: http://docs.pointclouds.org/trunk/classpcl_1_1_segment_differences.html
Thank you for any help.

It is not possible to do this as a PDAL pipeline with the current suite of stages.
The problem is that all reader stages are subject to the same filter stages (not entirely true: there is a concept of branching pipelines, but it is not widely used). Regardless, there is no way to query one input cloud against another in the pipeline setup. The only workaround that comes immediately to mind would be to develop a custom filter that accepts, as one of its inputs, the filename of the cloud to query against; we do something similar when colorizing points from a raster. You'd have to run two pipelines (A against B, and B against A) and write the two partial diffs.
I think the easiest way forward is to create a new PDAL kernel that does exactly what you need.
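Since you linked pcl::SegmentDifferences, here is a minimal sketch of that route, assuming you first convert A.las and B.las to PCD (e.g. with pdal translate) so PCL can read them; the distance threshold is an arbitrary placeholder you would tune to your data:

    #include <pcl/point_types.h>
    #include <pcl/io/pcd_io.h>
    #include <pcl/segmentation/segment_differences.h>

    int main()
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloudA(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloudB(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::io::loadPCDFile("A.pcd", *cloudA);  // converted from A.las beforehand
        pcl::io::loadPCDFile("B.pcd", *cloudB);

        // Keep the points of A that have no neighbor in B within the tolerance.
        pcl::SegmentDifferences<pcl::PointXYZ> seg;
        seg.setInputCloud(cloudA);
        seg.setTargetCloud(cloudB);
        seg.setDistanceThreshold(1.0);  // squared distance tolerance, placeholder

        pcl::PointCloud<pcl::PointXYZ> diff;
        seg.segment(diff);
        pcl::io::savePCDFile("A_diff.pcd", diff);
        return 0;
    }

Run it once as above for A_diff, then swap input and target for B_diff, and merge the two outputs if you want a single All_diff.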

Related

Use BizTalk for converting XML to JSON format

We are working on a project that converts/transforms XML files from one format to another. The input and output files differ not only in element names; there are also calculations that involve a huge number of DB tables for mapping elements and looking up values. In addition, the element names are different on both sides, and there is a lot of conditional logic inside.
We have a C# project that does the whole logic for us, but it takes 2-3 minutes to convert a single file, which is why we want to use a ready-made tool instead.
My question is: does BizTalk support conversion of XML to JSON and vice versa, including business logic, lookup values (tables), different element mappings, etc.? Can I also run it as a service so that it handles the process in a loop, converting thousands of files every day?
Yes. BizTalk can do this. In particular, BizTalk 2013R2 has some enhanced support for JSON, and 2016 (coming out later this year) should see further improvements. BizTalk is pretty much made for this.
However, I'd caution you against doing this purely for speed. It's entirely possible that a BizTalk integration for this will take as long as or longer than your C# project (depending on what methods/patterns you used in the C# project). It's also possible it could go a lot faster. It really depends on a lot of factors (size of the file, connectivity to the database, complexity of rules/transformations).
What BizTalk will bring to bear is an easier mapping/transformation interface, a built-in rules engine, adapters and pipelines for connecting to your data sources/destinations, and baked-in reliability/throttling/resource allocation/multithreading.
One other thing to add - if you envision having many integration needs such as this, then BizTalk can provide a solid foundation for building an integration platform/ESB.
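As a point of comparison with the existing C# project: the purely structural XML-to-JSON step is a one-liner with Json.NET, which is also the library the BizTalk 2013R2 JSON encoder/decoder components build on, as far as I know. A minimal sketch with a made-up payload; the business logic, DB lookups and element mappings are the parts a BizTalk map and the rules engine would take over:

    using System;
    using System.Xml;
    using Newtonsoft.Json;

    class XmlToJsonDemo
    {
        static void Main()
        {
            var doc = new XmlDocument();
            doc.LoadXml("<order><id>42</id><total>9.99</total></order>");

            // Structural conversion only; rules, lookups and mappings
            // would still have to run before or after this step.
            string json = JsonConvert.SerializeXmlNode(doc, Formatting.Indented, true);
            Console.WriteLine(json);
        }
    }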

How to properly debug OCaml code?

How does an experienced OCaml developer debug their code?
What I am doing now is just using Printf.printf. It is troublesome because I have to comment the calls all out when I need clean output.
How can I better control this debugging process? Is there a special annotation to switch such logging on or off?
Thanks.
You can use Bolt for this purpose. It's a syntax extension.
By the way, OCaml has a real debugger (ocamldebug).
There is a feature of the OCaml debugger that you may not be aware of, which is not commonly available for stateful programming: time travel. See section 16.4.4 of the manual. Since the debugger saves the changes associated with each step during processing, you can move back and forth through those changes to see the values at any given step. Think of it as running the program once while logging all of the values at each step into a data store, then indexing into that data store by step number to see the values at that step.
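A minimal session to try it (program name, event counts and variable name are placeholders); compile with -g so the debugger gets the event information it needs:

    $ ocamlc -g -o prog prog.ml
    $ ocamldebug ./prog
    (ocd) run           # execute forward until the program ends or a breakpoint hits
    (ocd) backstep 10   # travel back ten events
    (ocd) goto 500      # jump directly to event number 500
    (ocd) print x       # inspect a variable as it was at that point in time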
You can also use ocp-ppx-debug, which adds a printf with the right location automatically instead of you adding them manually.
https://github.com/OCamlPro-Couderc/ocp-ppx-debug
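If you would rather stay with plain Printf and no extra tooling, here is a minimal sketch of a switchable logger; the OCAML_DEBUG environment variable name is my own invention:

    (* Debug output is enabled by setting OCAML_DEBUG=1 in the environment. *)
    let debug_enabled =
      try Sys.getenv "OCAML_DEBUG" = "1" with Not_found -> false

    (* Same interface as Printf.eprintf; Printf.ifprintf swallows the format
       arguments when debugging is off, so call sites can stay in place. *)
    let debug fmt =
      if debug_enabled then Printf.eprintf fmt
      else Printf.ifprintf stderr fmt

    let () =
      debug "intermediate value: %d\n" 42;  (* silent unless OCAML_DEBUG=1 *)
      print_endline "clean output"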

Association of any kind between QC projects?

I want to associate one QC project with another (e.g., manual testing and automation testing). I use QC 11.00
I would like to know what kind of association there can be between two QC projects (in the same domain), so I do not have to maintain two projects and then copy-paste what I need between them, e.g. common repositories.
I'm not sure that you can do this. A project in QC is supposed to be a self-contained entity, that is, there is no way (that I know of) that you can automatically move data between projects.
Sure, you can copy and paste data, as well as create a project with another one as base, but that is probably not what you want.
I would rather have manual testing and automation in the same project, which makes more sense I think. The point is that the project is supposed to identify the test object, rather than the test methodology - the latter can be done better in Test Plan where you specify a Test Type when you create your test.
This way, you will have all defects and test reports for your test object in the same project which will make it all the easier to track what is going on.
As a general rule, you would want to keep all project data for one project in that project, and you would want that project's data to be unique and separate from all other projects.
That being said, if you really wanted to do this (and were able to convince a QC subject matter expert that it was a good idea), then it should be a relatively simple matter to amend the workflow with additional code that interfaces with another project, as sketched below.
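For illustration only, a sketch of what such workflow code could look like against the OTA API; the server URL, credentials, domain and project names are placeholders, and hard-coding credentials in workflow code would of course be a bad idea in practice:

    ' Open a second connection to another project on the same QC server.
    Dim tdc
    Set tdc = CreateObject("TDApiOle80.TDConnection")
    tdc.InitConnectionEx "http://qcserver:8080/qcbin"
    tdc.Login "apiuser", "secret"
    tdc.Connect "DEFAULT", "OTHER_PROJECT"

    ' ... read or copy the shared entities you need here ...

    tdc.Disconnect
    tdc.Logout
    tdc.ReleaseConnection
    Set tdc = Nothing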

Is it possible to replace apache solr in broadleafcommerce with ElasticSearch?

I am trying to use Broadleaf Commerce and customize it. While studying it, I found that it uses Apache Solr. However, I am already handy with ElasticSearch, as I am currently using ElasticSearch at my workplace. So I'm curious whether I can replace that customizable code of Broadleaf Commerce with ElasticSearch. If it is possible, I also want to know how long it would take and what its difficulty level would be.
Thanks in advance!
The product is open source, so you can have a look at the code yourself. Here is the package that would need to be made Solr-independent. As far as I can see there are quite a few dependencies on Solr right now, but maybe you can give it a shot and contribute the result back. In the end, that's the power of open source.
I can't tell exactly how much work that would be, since I don't know the product and what it does with the data. The Solr schema would need to be translated to the corresponding elasticsearch mapping, then the indexer would need to be converted to push data to elasticsearch (alternatively, if technically doable, you could write a river that imports data into elasticsearch from the framework itself). The last step is to convert the search code, together with the facets, highlighting, etc.
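To give an idea of the schema translation step, a made-up single-field example: a Solr schema.xml declaration such as

    <field name="productName" type="text_general" indexed="true" stored="true"/>

would become something like the following elasticsearch mapping (pre-2.x syntax, to match the era of rivers):

    {
      "mappings": {
        "product": {
          "properties": {
            "productName": { "type": "string", "analyzer": "standard" }
          }
        }
      }
    }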
Maybe you (or the people behind the project) might want to have a look at Spring Data, which now has a community-driven spring-data-solr project and an unofficial elasticsearch implementation too.

Storing and downloading Data in iOS Applications

I am a bit new to iOS Development and I was wondering if someone could point me in the right direction regarding an application I am working on.
I am currently working on an application that will display product lists and categories. The list is updated on a weekly basis (once every week).
I am now trying to decide two things:
1- What's the best method of storing this data? I am looking for a way that will allow me to replace the data in the application once every week.
2- Is it going to be beneficial to use Core Data? Note that I only have Product Category, Product and Product Information entities.
Appreciate your support.
I would use Core Data, because I know Core Data and am used to working with it. But this is clearly very much like using a chainsaw to cut a slice of bread.
As I understand, you're not familiar with Core Data. Maybe it's not the right tool for the job considering the learning curve.
In your case I would simply use the JSON files as provided by the server (see the sketch below).
That said, if you're looking into Core Data anyway, any store will do: atomic, XML (OS X only) or SQLite. The first two load the whole data set into memory, and queries are done in memory as well. SQLite provides the benefits usually associated with databases, at slightly increased complexity. A chainsaw.
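A minimal sketch of the JSON-file route in Swift; the URL and the payload shape are invented for the example:

    import Foundation

    // Weekly refresh: download the product list, cache it, parse it.
    let url = URL(string: "https://example.com/products.json")!

    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else { return }

        // Keep the raw file around so the app still works offline
        // until the next weekly update replaces it.
        let cacheURL = FileManager.default
            .urls(for: .cachesDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("products.json")
        try? data.write(to: cacheURL)

        if let json = try? JSONSerialization.jsonObject(with: data),
           let products = json as? [[String: Any]] {
            print("loaded \(products.count) products")
        }
    }
    task.resume()

Replacing the data each week is then just overwriting products.json in the caches directory, with no migration step.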
I would use Core Data. If you haven't worked with Core Data before, learn it. It's a great framework.
