Is there any way to save local and global descriptors as txt instead of pcd in Point Cloud Library?

I have used this script to generate local and global descriptors from a point cloud. The script lets you save the descriptors in PCD format (line 105 of the script), but I'm having trouble loading the PCD files in Python to train a model (I opened a discussion about that here).
I'm thinking of an alternative. Does anyone know a way to save the descriptors as a txt file instead of pcd? Thanks!

@IBitMyBytes Thanks for the reply. I just saved the descriptors in the default PCD format in PCL and then read the file as plain text in Python, removing the header lines.
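For anyone else hitting this, a minimal sketch of that approach, assuming the descriptors were saved in ASCII PCD format (DATA ascii); the file name is a placeholder:

import numpy as np

def load_pcd_ascii(path):
    """Read an ASCII PCD file, skip the PCL header, and return the rows."""
    with open(path) as f:
        lines = f.readlines()
    # The header ends at the line starting with "DATA"; descriptor rows follow.
    start = next(i for i, line in enumerate(lines) if line.startswith("DATA")) + 1
    return np.loadtxt(lines[start:])

descriptors = load_pcd_ascii("descriptors.pcd")  # placeholder file name
print(descriptors.shape)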

Related

Robot Framework: How to chunk a binary file or read the file chunk by chunk

I have a variable holding binary data read from a file in Robot Framework:
${fileData}=    Get Binary File    ${CHUNK_GEOJSON_FILE_UPLOAD_PATH}
This keyword reads the entire file; there is no argument to limit the number of bytes read. What I actually need is to store only 1 MB in ${fileData}, or to split the whole file into separate 1 MB chunks, because I will upload the file chunk by chunk using PATCH requests from the tus protocol.
Any help will be appreciated.
Get Binary File reads the whole file in one go; it cannot split by size the way you intend.
I suggest writing a Python function and calling it as a keyword to do this.
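A minimal sketch of such a library, assuming a 1 MB chunk size; the file and function names are placeholders:

# ChunkLibrary.py -- load it in the suite with:  Library    ChunkLibrary.py
def read_file_in_chunks(path, chunk_size=1024 * 1024):
    """Return the file's contents as a list of byte strings,
    each at most chunk_size bytes (1 MB by default) long."""
    chunks = []
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            chunks.append(data)
    return chunks

The keyword can then be called from the test as
@{chunks}=    Read File In Chunks    ${CHUNK_GEOJSON_FILE_UPLOAD_PATH}
and each item of ${chunks} uploaded with its own PATCH request.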

How to convert .dat + .sps to .sav on command line

I get a lot of datasets that arrive as .dat files with syntax files for converting to SPSS (.sps). I'm an R user, so I need to convert the .dat file into a .sav that R can read.
In the past, I've used PSPP to do this manually. (I can't afford SPSS!) But I'd MUCH prefer a programmatic solution.
I thought pspp-convert would do the trick, but there's something I'm not understanding about how that works in terms of inputting the syntax file:
My files are:
data.dat
data.sps (which correctly points to data.dat)
I tried
pspp-convert data.sps data.sav
But get
`data.sps' is not a system or portable file.
Makes sense since the input is supposed to be a portable file. Am I trying to do something beyond the scope of this CLI?
Generally speaking, there MUST be some way to apply an SPS file to a DAT file to get a SAV file (or any other portable file) back, right?
From an SPSS Statistics point of view, a .dat file extension most often means the data is in a fixed ASCII text format. You would need the accompanying codebook to tell you what variables to read and in what formats. The SPSS Statistics command syntax file (.sps) does this for you. But this file is simply the list of SPSS Statistics commands used to read the ASCII data. It is not a data file itself.
Elsewhere you've referenced these files as "portable files". An SPSS Statistics portable file (.por) is a very special case of an ASCII file; structured to be read and written by SPSS Statistics. In any case, if your preferred tool takes an SPSS Statistics portable file (.por), these *.dat files likely aren't it.
Assuming these *.dat files are fixed ASCII text files, you'll need to work out how the information in them is stored and then use a suitable tool for reading ASCII text.
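For instance, a hedged sketch in Python, assuming the DATA LIST block of data.sps tells you the column positions; the field names and offsets below are placeholders to replace with the real ones:

import pandas as pd

# Placeholder layout -- copy the real names and byte positions
# from the DATA LIST section of data.sps.
colspecs = [(0, 8), (8, 10), (10, 16)]   # (start, end) offsets per field
names = ["id", "age", "income"]

df = pd.read_fwf("data.dat", colspecs=colspecs, names=names)
df.to_csv("data.csv", index=False)       # easy to read from R with read.csv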

Read and write a NetCDF file using R

How can I read and write the following file using R?
https://www.dropbox.com/s/vlnrlxjs7f977zz/3B42_daily.2012.11.23.7.nc
In other words, I would like to read the "3B42_daily.2012.11.23.7.nc" file and write with the same structure that it is written.
Best regards
The ncdf package has functions to do this. You should also read the other Q&As on this site tagged netcdf and r.
Basically, to read a NetCDF file:
library(ncdf)
a <- open.ncdf('your/path/to/your/file.nc')  # opens a connection to the file
v <- get.var.ncdf(a, names(a$var)[1])        # extract the first variable
close.ncdf(a)                                # close the connection when done
The function get.var.ncdf extracts the data, variable by variable; names(a$var) lists what the file contains.
The process to write one is described in this Q&A.
The idea is to create the dimensions first with dim.def.ncdf, then the variables with var.def.ncdf, and finally the file itself with create.ncdf.
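A minimal sketch of that write sequence, staying with the thread's ncdf package; the dimension and variable names, units, and grid below are made up, so take the real ones from the file you just read:

library(ncdf)
lon  <- dim.def.ncdf('longitude', 'degrees_east',  seq(-180, 179.75, by = 0.25))
lat  <- dim.def.ncdf('latitude',  'degrees_north', seq(-49.875, 49.875, by = 0.25))
prcp <- var.def.ncdf('precipitation', 'mm/day', list(lon, lat), missval = -9999)
nc <- create.ncdf('copy.nc', prcp)   # creates the file on disk
put.var.ncdf(nc, prcp, data_array)   # data_array: the values extracted earlier
close.ncdf(nc)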

Hadoop InputFormat for Excel

I need to create a MapReduce program which reads an Excel file from HDFS and does some analysis on it, then stores the output as an Excel file. I know that TextInputFormat is used to read a .txt file from HDFS, but which method or which InputFormat should I use?
Generally, Hadoop is overkill for this scenario, but there are some relevant solutions:
1. Parse the file externally and convert it to a Hadoop-compatible format (see the sketch after this list).
2. Read the complete file as a single record; see this answer.
3. Use two chained jobs: the first, as in option 2, reads the file in bulk and emits each record as input for the next job.
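A hedged sketch of the first option, assuming the workbook's first sheet holds plain tabular data and that pandas with openpyxl is available; file names are placeholders:

import pandas as pd

# Convert the spreadsheet to tab-separated text that TextInputFormat can read.
df = pd.read_excel("input.xlsx")   # reads the first sheet by default
df.to_csv("input.tsv", sep="\t", index=False, header=False)
# then upload it for the job, e.g.:  hdfs dfs -put input.tsv /user/you/input/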

Cutting A File into Chunks in Qt

Can anybody give me a hint or an initial idea of how I might cut a file into chunks in Qt? Is there a particular class for it? Java, for example, has built-in functions for splitting. Later on I want to calculate the SHA-256 hash of each chunk. Any ideas?
There is no built-in function for that.
Open the original file.
Open a file for the first chunk.
Read bytes from the original file.
Write bytes to the chunk file.
Repeat.
See the QFile documentation.
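A minimal sketch of that loop, assuming Qt 5 or later (for QCryptographicHash::Sha256); the input file name and the 1 MB chunk size are placeholders:

#include <QFile>
#include <QCryptographicHash>
#include <QDebug>

int main()
{
    QFile in("big.bin");                        // placeholder input file
    if (!in.open(QIODevice::ReadOnly))
        return 1;

    const qint64 chunkSize = 1024 * 1024;       // 1 MB per chunk
    for (int i = 0; !in.atEnd(); ++i) {
        QByteArray chunk = in.read(chunkSize);  // read up to chunkSize bytes
        QFile out(QString("chunk_%1").arg(i));  // one file per chunk
        if (!out.open(QIODevice::WriteOnly))
            return 1;
        out.write(chunk);
        // SHA-256 of this chunk
        QByteArray digest = QCryptographicHash::hash(chunk, QCryptographicHash::Sha256);
        qDebug() << out.fileName() << digest.toHex();
    }
    return 0;
}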
