Can I read a point cloud from a .e57 file without any library?

libE57 is okay, but I have to try to do this without any library. I found a presentation, and in it I found the following:
Binary Encoding
Blobs:
Opaque encoding
Images, user-defined data ...
"Opaque encoding" means that I can't read .e57 without libE57?
Is there some way to parse it?
I have parsers for .pcd, .pts, and .ptx. Can I convert a .e57 file to one of those formats?

No. You can only read .e57 data using libE57.
http://www.libe57.org/documentation.html
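For perspective: the E57 container layout itself is published (ASTM E2807), so the outer structure can be inspected by hand; it's the compressed binary point sections that are impractical to decode without the library. A minimal Python sketch of reading the fixed 48-byte file header, assuming the field layout and little-endian byte order of the published spec (the file name is illustrative):

import struct

# E57 files start with a fixed 48-byte header (layout per ASTM E2807;
# "scan.e57" is an illustrative file name).
with open("scan.e57", "rb") as f:
    header = f.read(48)

signature = header[:8]                             # should be b"ASTM-E57"
major, minor = struct.unpack("<II", header[8:16])  # format version
file_len, xml_offset, xml_len, page_size = struct.unpack("<QQQQ", header[16:48])

print(signature, major, minor)
print("XML section: physical offset", xml_offset, "logical length", xml_len)

# Caveat: the file is stored in pages (page_size bytes each) that end with a
# 4-byte checksum, so the XML's logical bytes are not contiguous on disk --
# you have to strip the per-page checksums when extracting it.

That gets you as far as the XML index; the point data it references is the "opaque" compressed part the presentation is talking about.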

Related

Web3.js and Ethers.js: help me

I'm learning web3, and while examining the source code of a page, I saw that the ABI file was written in a strange way. What is it? How can I decode it?
I assumed it was hex code and tried to convert it with http://ddecode.com/hexdecoder/, but it didn't work.
ABIs are JSON files. JSON stands for JavaScript Object Notation and is a commonly used data-transfer format. Here's an introduction to JSON: https://www.w3schools.com/whatis/whatis_json.asp
Here's a quick description of ABIs: https://ethereum.stackexchange.com/a/235/97038
EDIT: It appears that the file you're looking at is obfuscated: https://blog.jscrambler.com/javascript-obfuscation-the-definitive-guide
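For reference, an unobfuscated ABI is just a JSON array describing the contract's functions and events. Here's a minimal Python sketch using a made-up one-function ABI (the totalSupply entry below is illustrative, not taken from your page):

import json

# A hypothetical one-entry ABI, as it might appear in a page's source.
abi_text = '''
[
  {
    "name": "totalSupply",
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "uint256"}]
  }
]
'''

abi = json.loads(abi_text)
for entry in abi:
    print(entry["type"], entry.get("name", ""))

If json.loads() fails on what you extracted from the page, you're probably looking at obfuscated JavaScript that builds the ABI at runtime, not at the JSON itself.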

How to read a CSV file with unknown formatting and unknown encoding in R? (example file provided)

I have tried my best to read a CSV file in R but failed. I have provided a sample of the file at the following Google Drive link.
Data
I found that it is a tab-delimited file by opening it in a text editor. Excel reads the file without issues, but when I try to read it in R using the "readr" package or base R functions, it fails, and I'm not sure why. I have tried different encodings like UTF-8, UTF-16, and UTF-16LE. Could you please help me write the correct script to read this file? Currently, I am converting the file in Excel to comma-delimited form to read it in R, but I am sure there must be something I am doing wrong. Any help would be appreciated.
Thanks
Amal
PS: What I don't understand is how Excel reads the file without being given any parameters. Can we build the same logic in R to read any file?
This is a Windows-related encoding problem.
When I open your file in Notepad++ it tells me it is encoded as UCS-2 LE BOM. There is a trick to reading files with unusual encodings into R. In your case this seems to do the trick:
read.delim(file("temp.csv", encoding = "UCS-2LE"))
(adapted from R: can't read unicode text files even when specifying the encoding).
BTW "CSV" stands for "comma separated values". This file has tab-separated values, so you should give it either a .tsv or .txt suffix, not .csv, to avoid confusion.
As for your second question - could we build logic in R that guesses the encoding and delimiter and reads many types of file without us specifying them explicitly? Yes, that would certainly be possible. Whether it is desirable, I'm not sure.

How to read a BLOB with qt-type compression?

I have files (about 100k of them, to be specific) containing data from weather radars; each file is one radar image. It is a mosaic of data from several radars, forming a map of reflectivity over the whole country.
The files have the extension .cmax and I need to convert them to something more useful (e.g. an array of reflectivities) for further use.
I asked the data provider how to read these files. They responded:
The standard product format in our system (.cmax) is the internal format of the company that provides us with the software. It consists of an xml and binary part. It can be read by reading as a stream of bytes. Firstly, parse the initial bytes as xml, then treat the rest (BLOBs) as a binary data compressed with the "qt" method. You need to unpack them using a library that supports this compression mode. In general, you have to work a little, but it can be done in virtually any programming language.
The main issue is the binary part of the data. I have tried to decompress it with zlib (which is what comes up when googling "qt compression") and to read it as binary data in C++. Neither worked. It also doesn't seem reasonable to me to pull in Qt just to read that data.
The file begins with these lines:
<product version="5.44.5" datetime="2017-01-01T18:00:00" datatype="dBZ" type="cmax" name="CMAX" owner="">
<data time="18:00:00" date="2017-01-01">
Then, there are radars specifications and image details (active radars, min and max reflectivity etc). XML part ends with:
</product>
<!-- END XML -->
<BLOB blobid="0" size="79617" compression="qt">(here are lots of binary data)</BLOB>
I'm looking for a way (a tool?) to convert that binary data - for example, the library the provider mentioned.
Looking at the details, this is most likely the Leonardo (Selex/Gematronik) Rainbow5 format. zlib is the right lib for decompression, but there are some tricks to it. A Python reader is implemented in the wradlib library (https://github.com/wradlib). Maybe you can adapt that code. Disclaimer: I'm one of the wradlib devs.
Did you try simply using the qUncompress() function? https://doc.qt.io/qt-5/qbytearray.html#qUncompress
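A detail worth knowing here: qCompress() (whose output qUncompress() expects) produces a 4-byte big-endian header holding the expected uncompressed size, followed by an ordinary zlib stream. That prefix is likely why feeding the whole payload to zlib failed. A minimal Python sketch, assuming the payload starts right after the opening BLOB tag (the file name is illustrative, and some files put a newline after the tag, so the start offset may need a +1):

import re
import zlib

data = open("example.cmax", "rb").read()   # illustrative file name

# Locate the first BLOB; size="..." is the byte count of the compressed payload.
m = re.search(rb'<BLOB blobid="0" size="(\d+)" compression="qt">', data)
size = int(m.group(1))
start = m.end()                            # may need start + 1 if a newline follows the tag
payload = data[start:start + size]

# qCompress format: 4-byte big-endian expected uncompressed length,
# then a standard zlib stream -- skip the prefix before inflating.
expected_len = int.from_bytes(payload[:4], "big")
raw = zlib.decompress(payload[4:])
assert len(raw) == expected_len

raw then holds the flat pixel values; the image details in the XML part should give you the dimensions needed to reshape it into an array of reflectivities.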

Is there any way to use the letter Å in R?

Is there any way to deal with this letter in R: Å?
In some configurations I'm able to read this letter from SQL via RODBC, but I haven't found any way to save it to a CSV or TXT file; it always gets converted to a plain A or to Ĺ.
Also, how do I read this letter correctly from an Excel file?
I understand from your question that the letter displays properly inside R but you have problems writing it to files.
R's writing functions usually have an encoding parameter (for example, for write.csv and write.table it's called fileEncoding).
When you don't set it explicitly, the function will encode the file using your OS's (or R installation's) native encoding, which can sometimes cause problems with special characters. What exactly goes wrong and how to fix it depends heavily on your system setup - especially if you're also interacting with databases, as you describe.
But very often, an easy fix is writing files in UTF-8 encoding, i.e.
write.csv(your_df, your_path, fileEncoding='UTF-8')
as most external programs (such as Excel) are able to automatically detect and properly read UTF-8 encoded files.
Set the fileEncoding argument on write.table to fit your needs (e.g., if your text is encoded as UTF-8, try write.table(my_tab, file = "my_tab.txt", fileEncoding = "UTF-8")).

Is there any way to find out whether a PDF file is compressed or not?

We are using iText to compress PDFs, but the issue is that we want to compress the files that were compressed before being uploaded to our site; if the files are uploaded without compression, we would like to leave those as they are.
To do that, we need to identify whether a PDF is compressed or not. Is there any way to identify this using iText or some other tool?
I have tried to Google it but couldn't find an appropriate answer.
Kindly let me know if you have any ideas.
Thanks
There are several types of compression you can get in a PDF. Data for objects can be compressed and objects can be compressed into object streams.
I voted Mark's answer up because he's right: you won't get an answer if you're not more specific. I'll add my own answer with some extra information.
In PDF 1.0, a PDF file consisted of a mix of ASCII characters for the PDF syntax and binary code for objects such as images. A page stream would contain visible PDF operators and operands, for instance:
56.7 748.5 m
136.2 748.5 l
S
This code draws a line (the S operator strokes the path) from the coordinate (x = 56.7; y = 748.5), where the cursor was moved with the m operator, to the coordinate (x = 136.2; y = 748.5), the point added to the path with the l operator.
Starting with PDF 1.2, one could start using filters for such content streams (page content streams, form XObjects). In most cases, you'll discover a /Filter entry with value /FlateDecode in the stream dictionary. You'll hardly find any "modern" PDFs of which the contents aren't compressed.
Up until PDF 1.5, all indirect objects in a PDF document, as well as the cross-reference table, were stored as ASCII in the PDF file. Starting with PDF 1.5, specific types of objects can be stored in an object stream, and the cross-reference table can also be compressed into a stream. iText's PdfReader has an isNewXrefType() method to check whether this is the case. Maybe that's what you're looking for. Maybe you have PDFs that need to be read by software that isn't able to read PDFs of this type, but... you're not telling us.
Maybe we're completely misinterpreting the question. Maybe you want to know if you're receiving an actual PDF or a zip file with a PDF. Or maybe you want to really data-mine the different filters used inside the PDF. In short: your question isn't very clear, and I hope this answer explains why you should clarify.
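If a rough yes/no is enough, you can also look for compression markers in the raw bytes without any PDF library. A heuristic Python sketch (the file name is illustrative; note this only detects filter names that appear as plain text, and it says nothing about which streams actually use them):

data = open("input.pdf", "rb").read()      # illustrative file name

# /FlateDecode in a stream dictionary indicates at least one deflate-compressed
# stream; /ObjStm indicates PDF 1.5+ compressed object streams.
print("compressed streams:", b"/FlateDecode" in data)
print("object streams (PDF 1.5+):", b"/ObjStm" in data)

Given how the answer above describes modern PDFs, expect the first check to come back true for nearly every file you receive.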
