DeLorme XMap Hex/Binary to Lat/Long - hex

I have a data dump from a database used by XMap.
It looks like XMap stores the Lat/Lng as a Hex value. Can anyone help me decode these?
I was also able to use XMap to upload some of my own data and see how it converted that to hex; I am just unable to do a bulk export with the version of XMap that I have.
Long      Lat     Hex (lng)   Hex (lat)
-100.00   35.00   0000004E    0000806E
-101.00   35.00   0000804D    0000806E
-101.1    35.1    3333734D    3333736E
-101.2    35.2    6666664D    6666666E
Lat Lon Hex
35.21285737 -98.44795716 0x57A9C64E17C1646E
35.21305335 -98.44786274 0x6FACC64EABBA646E
35.94602108 -96.74434793 0x35B9A04FC8E8066E
34.89283431 -99.037117 0xC03F7B4E9BB78D6E
34.89300668 -99.03754044 0xE0317B4EF5B18D6E
34.41109633 -100.2820795 0xD2E4DB4D3261CB6E
33.97470069 -101.2196311 0x21E3634D023D036F
34.0079211 -101.1440331 0x53906D4D71FCFE6E
32.76227534 -104.2691193 0x808DDD4BC36D9E6F
32.77947819 -104.204128 0x22DFE54B0F3A9C6F
32.77947819 -104.204128 0x22DFE54B0F3A9C6F
32.64307308 -104.5322441 0x6DDFBB4BC8AFAD6F
32.64290345 -104.531814 0x85EDBB4B57B5AD6F
32.47907078 -104.5652282 0x9AA6B74BCFADC26F
32.47907078 -104.5652282 0x9AA6B74BCFADC26F
32.22682178 -101.3943434 0x28864D4D81F7E26F
32.07237184 -101.8558813 0x7B72124D85BCF66F
31.89574015 -102.4611448 0x35F9C44C63580D70
31.8808713 -102.4563805 0x5395C54C9C3F0F70
31.18487537 -101.1440152 0xE9906D4D01566870
31.28633738 -100.4128259 0x8528CB4D4C595B70
31.0369339 -100.5050048 0x015CBF4DC0457B70
30.83263898 -100.6261411 0x9CDAAF4D166C9570

So this exact problem just came up at work last night, and I spent a few hours decoding and converting the information.
It appears that the lat and long are stored as 32-bit little-endian chunks (read more about endianness on Wikipedia).
From your example 35.21285737 -98.44795716 0x57A9C64E17C1646E converts as follows:
Split to 32 bit sections --> 0x57A9C64E (lng) 0x17C1646E (lat)
Adjust for endianness
LAT: 17 C1 64 6E => Swap Bytes => 6E 64 C1 17 ==> 1852096791 (base 10)
LNG: 57 A9 C6 4E => Swap Bytes => 4E C6 A9 57 ==> 1321642327 (base 10)
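The byte swap above can be reproduced directly with Python's struct module, which reads each 4-byte chunk as a little-endian unsigned integer (a sketch for illustration, not XMap's own code):

```python
import struct

# Split 0x57A9C64E17C1646E into its two 32-bit halves and read each
# as a little-endian unsigned int ("<I" = little-endian uint32).
raw_lng = struct.unpack("<I", bytes.fromhex("57A9C64E"))[0]
raw_lat = struct.unpack("<I", bytes.fromhex("17C1646E"))[0]

print(raw_lng)  # 1321642327
print(raw_lat)  # 1852096791
```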
From that information I then used a linear regression to work out a conversion equation (with http://arachnoid.com/polysolve/; I originally tried Excel's regression tools, but they didn't provide nearly enough accuracy). It ended up working out much more nicely than I expected. However, it seems like there should be a sign bit within the data; I didn't figure out how to retrieve it, so there are two separate equations depending on whether the value is a lat or a long.
LAT = 256 - raw / 2^23
LNG = -256 + raw / 2^23
If we go ahead and run our test data through the equations we receive:
Lat: 35.212857365
Lng: -98.447957158
If I get more time in the future, I may try to figure out a more efficient method for converting (taking the sign bit into account), but for now this method worked well enough for me.
Now that this data is figured out, one could expand it to convert the raw data for other, more complex geometry types (such as lines). I haven't had a chance to work out all the details for the raw data used with lines. From what I did look at, it seems to include a header with some additional information, such as how many lat/long points are in the data. Other than that, it looked about as straightforward as the single points.
-- Edit --
Revisited this today, and after much digging I found a better conversion formula in the GPSBabel source code.
COORD = (0x80000000 - raw) / 0x800000
We can also do the inverse and convert from a COORD back to the raw data
RAW = 0x80000000 - (coord * 0x800000)
I also looked into the sign bit, and as far as I can tell sign bits are not stored in the data, so you have to be aware of that. I also have code that implements point, line, and polygon decoding in PHP if anybody needs it.
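Putting the pieces together, here is a minimal Python sketch of the whole decode (the function name and hemisphere flags are my own; since the sign is not stored, the caller supplies the hemisphere):

```python
import struct

SCALE = 0x800000     # 2**23 raw counts per degree
OFFSET = 0x80000000

def decode_xmap_point(hex_str, west=True, south=False):
    """Decode an 8-byte XMap point: the first 4 bytes are the longitude,
    the last 4 the latitude, each a little-endian unsigned int.
    COORD = (0x80000000 - raw) / 0x800000; sign is applied by the caller
    because no sign bit is stored in the data."""
    raw = bytes.fromhex(hex_str[2:] if hex_str.startswith("0x") else hex_str)
    raw_lng, raw_lat = struct.unpack("<II", raw)
    lng = (OFFSET - raw_lng) / SCALE
    lat = (OFFSET - raw_lat) / SCALE
    return (-lat if south else lat, -lng if west else lng)

lat, lng = decode_xmap_point("0x57A9C64E17C1646E")
print(lat, lng)  # 35.212857... -98.447957...
```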

Related

Intel XED: different function call addresses when decoding the same instruction [duplicate]

I need a helping hand in order to understand the following assembly instruction. It seems to me that I am calling an address at someUnknownValue += 20994A?
E8 32F6FFFF - call std::_Init_locks::operator=+20994A
Whatever you're using to obtain the disassembly is trying to be helpful, by giving the target of the call as an offset from some symbol that it knows about -- but given that the offset is so large, it's probably confused.
The actual target of the call can be calculated as follows:
E8 is a call with a relative offset.
In a 32-bit code segment, the offset is specified as a signed 32-bit value.
This value is in little-endian byte order.
The offset is measured from the address of the following instruction.
e.g.
<some address> E8 32 F6 FF FF call <somewhere>
<some address>+5 (next instruction)
The offset is 0xFFFFF632.
Interpreted as a signed 32-bit value, this is -0x9CE.
The call instruction is at <some address> and is 5 bytes long; the next instruction is at <some address> + 5.
So the target address of the call is <some address> + 5 - 0x9CE.
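Those steps are mechanical enough to script; a small Python sketch (the function name is mine):

```python
import struct

def call_target(insn_addr, insn_bytes):
    """Absolute target of a near relative CALL (opcode E8) in a 32-bit
    code segment: the displacement is a signed little-endian 32-bit
    value, measured from the end of the 5-byte instruction."""
    assert insn_bytes[0] == 0xE8 and len(insn_bytes) == 5
    disp = struct.unpack("<i", insn_bytes[1:])[0]  # signed, little-endian
    return (insn_addr + 5 + disp) & 0xFFFFFFFF

# E8 32 F6 FF FF at address 0x401000: displacement is -0x9CE
print(hex(call_target(0x401000, bytes([0xE8, 0x32, 0xF6, 0xFF, 0xFF]))))
# 0x400637
```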
If you are analyzing the PE file with a disassembler, the disassembler might have given you the wrong code. Many malware writers use insertion of E8 bytes as an anti-disassembly technique. You can verify this by checking whether the instructions above the E8 are jumps whose target lies after the E8.

QCryptographicHash - what is SHA3 here in reality?

I got such a piece of code:
void SHAPresenter::hashData(QString data)
{
    QCryptographicHash* newHash = new QCryptographicHash(QCryptographicHash::Sha3_224);
    newHash->addData(data.toUtf8());
    QByteArray hashResultByteArray = newHash->result();
    setHashedData(QString(hashResultByteArray.toHex()));
    delete newHash;
}
According to the Qt docs, QCryptographicHash::Sha3_224 should "generate an SHA3-224 hash sum. Introduced in Qt 5.1". I wanted to compare the result of that code against some other source to check whether I am feeding the data in correctly. I found this site: https://emn178.github.io/online-tools/sha3_224.html
So we have SHA3-224 in both cases. The problem is that the first generates this byte string from "test":
3be30a9ff64f34a5861116c5198987ad780165f8366e67aff4760b5e
And the second:
3797bf0afbbfca4a7bbba7602a2b552746876517a7f9b7ce2db0ae7b
Not similar at all. But there is also a site that does "Keccak-224":
https://emn178.github.io/online-tools/keccak_224.html
And here result is:
3be30a9ff64f34a5861116c5198987ad780165f8366e67aff4760b5e
I know that SHA3 is based on Keccak's functions - but what is the issue here? Which of these two implementations follows NIST FIPS 202 in proper manner and how do we know that?
I'm writing a Keccak library for Java at the moment, so I had the toys handy to test an initial suspicion.
First a brief summary. Keccak is a sponge function which can take a number of parameters (bitrate, capacity, domain suffix, and output length). SHA-3 is simply a subset of Keccak where these values have been chosen and standardised by NIST (in FIPS PUB 202).
In the case of SHA3-224, the parameters are as follows:
bitrate: 1152
capacity: 448
domain suffix: "01"
output length: 224 (hence the name SHA3-224)
The important thing to note is that the domain suffix is a bitstring which gets appended after the input message and before the padding. The domain suffix is an optional way to differentiate different applications of the Keccak function (such as SHA3, SHAKE, RawSHAKE, etc). All SHA3 functions use "01" as a domain suffix.
Based on the documentation, I get the impression that Keccak initially had no domain suffix concept, and the known-answer tests provided by the Keccak team require that no domain suffix is used.
So, to your problem. If we take the String "test" and convert it to a byte array using ASCII or UTF-8 encoding (because Keccak works on binary, so text must be converted into bytes or bits first, and it's therefore important to decide on which character encoding to use) then feed it to a true SHA3-224 hash function we'll get the following result (represented in hexadecimal, 16 bytes to a line for easy reading):
37 97 BF 0A FB BF CA 4A 7B BB A7 60 2A 2B 55 27
46 87 65 17 A7 F9 B7 CE 2D B0 AE 7B
SHA3-224 can be summarised as Keccak[1152, 448](M || "01", 224) where the M || "01" means "append 01 after the input message and before multi-rate padding".
However, without a domain suffix we get Keccak[1152, 448](M, 224) where the lonesome M means that no suffix bits are appended, and the multi-rate padding will begin immediately after the input message. If we feed your same input "test" message to this Keccak function which does not use a domain suffix then we get the following result (again in hex):
3B E3 0A 9F F6 4F 34 A5 86 11 16 C5 19 89 87 AD
78 01 65 F8 36 6E 67 AF F4 76 0B 5E
So this result indicates that the function is not SHA3-224.
Which all means that the difference in output you are seeing is explained entirely by the presence or absence of a domain suffix of "01" (which was my immediate suspicion on reading your question). Anything which claims to be SHA3 must use a "01" domain suffix, so be very wary of tools which behave differently. Check the documentation carefully to make sure that they don't require you to specify the desired domain suffix when creating/using the object or function, but anything which claims to be SHA3 really should not make it possible to forget the suffix bits.
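A convenient cross-check is Python's hashlib, whose sha3_224 (available since Python 3.6) implements the FIPS 202 variant with the "01" domain suffix:

```python
import hashlib

# True SHA3-224 of the UTF-8 bytes of "test"; matches the FIPS 202
# result quoted above, not the suffix-less Keccak-224 value.
digest = hashlib.sha3_224("test".encode("utf-8")).hexdigest()
print(digest)  # 3797bf0afbbfca4a7bbba7602a2b552746876517a7f9b7ce2db0ae7b
```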
This is a bug in Qt; it was reported and fixed in Qt 5.9.

How to write an array of floats as Image in julia by using Images.jl

I need to read an image, manipulate it a bit, and then save it as an image again. To do that, I found the excellent Images.jl package in Julia. I was able to read the image, convert it to a floating-point array, and then manipulate it (cropping the image and changing some values). However, I could not find a way to store it as a .jpg file again. Here is the process I use to manipulate the data. For the code below, assume I have a dog.jpg file in the same directory.
using Images, Colors

averageImage = zeros(1,1,3)
averageImage[1,1,:] = [123.68 116.779 103.779]

function data(img, averageImage)
    a0 = load(img)
    new_size = ntuple(i->div(size(a0,i)*224, minimum(size(a0))), 2)
    a1 = Images.imresize(a0, new_size)
    i1 = div(size(a1,1)-224, 2)
    j1 = div(size(a1,2)-224, 2)
    b1 = a1[i1+1:i1+224, j1+1:j1+224]
    c1 = separate(b1)
    d1 = convert(Array{Float32}, c1)
    e1 = reshape(d1[:,:,1:3], (224,224,3,1))
    f1 = 255 * e1 .- averageImage
    g1 = permutedims(f1, [2,1,3,4])
    g1 = g1[:,:,:]  # here the type of g1 is Array{Float64,3}
end
A = data("dog.jpg",averageImage)
Here, I was able to get A. Now I need to save that A array as an image. To do that, I try the following:
save("modified_dog.jpg",A)
I got the following error:
ERROR: ArgumentError: FixedPointNumbers.UFixed{UInt8,8} is an 8-bit
type representing 256 values from 0.0 to 1.0; cannot represent -79.68
Unfortunately, I do not know how to do that conversion.
Can anyone help me save the mentioned A array? Thanks in advance.
I haven't looked at the bulk of your function, but at the end you could try:
result = convert(Image, map(ScaleMinMax(Float64, 0.0, 256.0), g1))
save("/tmp/test.png", result)
which might convert it.
The documentation has a mysterious section entitled MapInfo (not the GIS system) which sheds a flickering light on the subject...
The NRRD format is a reasonable choice for floating-point images, though be aware that it doesn't have widespread support in external 2d graphics programs (it's more widely used for 3d images). If you just use a file name like "test.nrrd" it should just work.

How to read un-formatted data file saved via a VAX FORTRAN code with "map" and "union"

Hi guys, I am trying to read a scientific data file written by a VAX FORTRAN code. The data were stored as records of a structure, of which the file and record descriptions are given below. I googled and found that FORTRAN 77 might read the file, but FORTRAN is not my usual language. Can someone tell me how to read the data into FORTRAN or C/IDL/etc. variables? For example, if N units of the structure are stored in the file "pxm.mos", how can I read the data into my variables?
Thanks a lot!
Here are the descriptions.
c FILE name is "pxm.mos"
c FILE AND RECORD STRUCTURE
c The files were created with form='unformatted', organization='sequential',
c access='sequential', recordtype='fixed', recordsize=512.
c The following VAX FORTRAN code specifies the record structure:
      structure /PXMstruc/
        union
          map
            integer*4 buffer(512)
          end map
          map
            integer*4 mod16
            integer*4 mod60
            integer*4 line
            integer*4 sct
            integer*4 mfdsc
            integer*4 spare(3)
            real*4 datamin
            real*4 datamax
            real*4 data(0:427)
          end map
        end union
      end structure

      record /PXMstruc/ in
This isn't hard. You can think of structure like a C struct, with unions. Each record is 2048 bytes (512 "longwords" in VAX terms) and consists of five 32-bit ints, an array of 3 ints for padding, two 32-bit floats and then an array of 428 floats. Given that the file is fixed length, there's no metadata to worry about. The union with "buffer" can be ignored.
I would be more concerned about how the file made its way to your computer, assuming it originated on a VMS system. You'll want to verify that the file size is an exact multiple of 2048 bytes. Most likely it transferred just fine, so declare a struct with the right layout and read it in, record by record.
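As a sketch of that layout in Python (field names are taken from the second map; the named fields cover only 438 of the 512 longwords, so the tail of each record is unused; note also that raw VAX F_floating real*4 values are not IEEE 754, so the floats may need an extra conversion step if the file was transferred as raw bytes):

```python
import struct

RECORD_SIZE = 2048  # 512 longwords of 4 bytes each

def parse_record(rec):
    """Parse one fixed-length record following the second map of the
    union. Assumes little-endian integers (as on the VAX) and floats
    already in IEEE form; raw VAX F_floating would need converting."""
    assert len(rec) == RECORD_SIZE
    mod16, mod60, line, sct, mfdsc = struct.unpack_from("<5i", rec, 0)
    # spare(3) occupies bytes 20..31 and is skipped
    datamin, datamax = struct.unpack_from("<2f", rec, 32)
    data = struct.unpack_from("<428f", rec, 40)  # data(0:427)
    return {"mod16": mod16, "mod60": mod60, "line": line, "sct": sct,
            "mfdsc": mfdsc, "datamin": datamin, "datamax": datamax,
            "data": data}
```

Reading "pxm.mos" is then just a loop of f.read(RECORD_SIZE) calls until EOF, checking that each read returns a full record.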

Retrieve video file duration (time) using R

I’m writing some code to delete video files that I don’t need. The videos are CCTV footage recorded 24/7, but the recording software saves the files in roughly one-hour chunks, and the durations not being exact is the problem. I’m only interested in keeping videos from a particular part of the day (which varies), and because the duration of each video is not exact, this causes me problems.
The video file name has a date and time stamp but only for the start so if I could find the duration everything becomes simple algebra.
So my question is simple is it possible to get the duration (time) of video files using R?
Just a couple of other useful pieces of information: the videos are from several cameras, and each camera has a different recording frame rate, so using file.info to return the file size and derive the length of the video is not an option. Also, the video files are in .avi format.
Cheers
Patrao
As far as I know, there are no ready-made packages that handle video files in R (the way MATLAB does). This isn't a pure R solution, but it gets the job done: I installed the CLI interface to MediaInfo and called it from R using system.
wolf <- system("q:/mi_cli/mediainfo.exe Krofel_video2volk2.AVI", intern = TRUE)
wolf # output by MediaInfo
[1] "General"
[2] "Complete name : Krofel_video2volk2.AVI"
[3] "Format : AVI"
[4] "Format/Info : Audio Video Interleave"
[5] "File size : 10.7 MiB"
[6] "Duration : 11s 188ms"
[7] "Overall bit rate : 8 016 Kbps"
...
[37] "Channel count : 1 channel"
[38] "Sampling rate : 8 000 Hz"
[39] "Bit depth : 16 bits"
[40] "Stream size : 174 KiB (2%)"
[41] "Alignment : Aligned on interleaves"
[42] "Interleave, duration : 63 ms (1.00 video frame)"
# Find where Duration is (general) and extract it.
find.duration <- grepl("Duration", wolf)
wolf[find.duration][1]# 1 = General, 2 = Video, 3 = Audio
[1] "Duration : 11s 188ms"
Have fun parsing the time.
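The parsing itself is just a regular expression over the "Duration" line; a sketch in Python notation (the function name is mine; the same pattern works with R's regmatches/sub):

```python
import re

def mediainfo_duration_seconds(line):
    """Parse a MediaInfo duration line such as
    'Duration : 11s 188ms' into seconds. Only handles the
    h/min/s/ms units seen in this style of MediaInfo output."""
    units = {"h": 3600.0, "min": 60.0, "s": 1.0, "ms": 0.001}
    total = 0.0
    # 'ms' must be tried before 's' in the alternation
    for value, unit in re.findall(r"(\d+)\s*(h|min|ms|s)", line):
        total += int(value) * units[unit]
    return total
```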
This might be a bit low level, but if you're up to the task of parsing binary data, look up a copy of the AVI spec and figure out how to get both the number of video frames and the frame rate.
If you look at one of the AVI files using a hex editor, you will see a series of LIST chunks at the beginning. A little farther into this chunk will be a vids chunk. Immediately following vids should be a human-readable video four-character code (FourCC) specifying the video codec, probably something like mjpg (MJPEG) or avc1 (H.264) for a camera. 20 bytes after that will be 4 bytes stored in little endian notation which indicate the frame rate. Skip another 4 bytes and then the next 4 bytes will be another little endian number which indicate the total number of video frames.
I'm looking at a sample AVI file right now where the numbers are: frame rate = 24 and # of frames = 0x37EB = 14315. This works out to 9m56s, which is accurate for this file.
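As a rough sketch of that search in Python (this scans for the 'vids' marker rather than properly walking the RIFF chunk tree, and assumes a scale of 1 so the rate field is frames per second; a robust reader should parse the chunk structure):

```python
import struct

def avi_duration_seconds(buf):
    """Locate the 'vids' stream header in AVI bytes and compute the
    duration from the rate and frame-count fields at the offsets
    described above."""
    i = buf.find(b"vids")
    if i < 0:
        raise ValueError("no video stream header found")
    # a 4-byte codec FourCC follows 'vids'; the rate is 20 bytes later
    rate = struct.unpack_from("<I", buf, i + 24)[0]
    # skip 4 more bytes, then the total number of video frames
    frames = struct.unpack_from("<I", buf, i + 32)[0]
    return frames / rate

# Synthetic header using the sample numbers: 24 fps, 0x37EB = 14315 frames
hdr = (b"vids" + b"mjpg" + b"\x00" * 16 + struct.pack("<I", 24)
       + b"\x00" * 4 + struct.pack("<I", 14315))
print(avi_duration_seconds(hdr))  # about 596 seconds, i.e. 9m56s
```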
