I was using sox to convert a 2-channel, 48000 Hz, 24-bit WAV file (new.wav) to a mono WAV file (post.wav).
Here are the related commands and outputs:
[Farmer#Ubuntu recording]$ soxi new.wav
Input File : 'new.wav'
Channels : 2
Sample Rate : 48000
Precision : 24-bit
Duration : 00:00:01.52 = 72901 samples ~ 113.908 CDDA sectors
File Size : 447k
Bit Rate : 2.35M
Sample Encoding: 24-bit Signed Integer PCM
[Farmer#Ubuntu recording]$ sox new.wav -c 1 post.wav
[Farmer#Ubuntu recording]$ soxi post.wav
Input File : 'post.wav'
Channels : 1
Sample Rate : 48000
Precision : 24-bit
Duration : 00:00:01.52 = 72901 samples ~ 113.908 CDDA sectors
File Size : 219k
Bit Rate : 1.15M
Sample Encoding: 24-bit Signed Integer PCM
It looks fine. But let us check the header of post.wav:
[Farmer#Ubuntu recording]$ xxd post.wav | head -10
00000000: 5249 4646 9856 0300 5741 5645 666d 7420 RIFF.V..WAVEfmt
00000010: 2800 0000 feff 0100 80bb 0000 8032 0200 (............2..
00000020: 0300 1800 1600 1800 0400 0000 0100 0000 ................
00000030: 0000 1000 8000 00aa 0038 9b71 6661 6374 .........8.qfact
00000040: 0400 0000 c51c 0100 6461 7461 4f56 0300 ........dataOV..
At first glance, this looks like the standard WAV file header structure.
The first line is no problem.
The second line's "2800 0000" shows the size of the "fmt " sub-chunk: 0x00000028 (as this is little-endian) = 40 bytes. But there are actually 52 bytes between the end of this size field and the "data" tag.
The third line shows "ExtraParamSize" (cbSize) as 0x0016 = 22 bytes. But counting from that "1600" field to the fifth line's "0100" (the last bytes before "data") gives 36 bytes. The preceding 16 bytes are the standard fmt fields.
So what are those extra 36 bytes?
OK, I found the answer.
Look at the second line: the audio format field is "feff", i.e. 0xFFFE, so this is not the standard PCM wave format but the extensible format (WAVE_FORMAT_EXTENSIBLE).
For a detailed introduction to the WAV header, refer to this link. The article is well written; thanks to the author.
So, since this is a non-PCM-format WAV, the "fmt " chunk occupying 40 bytes is no problem, and it is followed by a "fact" chunk and then the "data" chunk. Everything makes sense.
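To make the chunk layout concrete, here is a minimal sketch (in Go) that walks the RIFF sub-chunks and prints each chunk's ID and size, plus the format tag from "fmt ". It assumes the file from this post; pad bytes after odd-sized chunks are ignored for brevity.

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("post.wav")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// RIFF header: "RIFF", overall size, "WAVE".
	hdr := make([]byte, 12)
	if _, err := io.ReadFull(f, hdr); err != nil {
		panic(err)
	}
	fmt.Printf("%s / %s\n", hdr[0:4], hdr[8:12])

	// Each sub-chunk: 4-byte ID, 4-byte little-endian size, then the body.
	for {
		ch := make([]byte, 8)
		if _, err := io.ReadFull(f, ch); err != nil {
			break // end of file
		}
		size := binary.LittleEndian.Uint32(ch[4:8])
		fmt.Printf("chunk %q: %d bytes\n", ch[0:4], size)

		body := make([]byte, size)
		if _, err := io.ReadFull(f, body); err != nil {
			break
		}
		if string(ch[0:4]) == "fmt " {
			// wFormatTag: 0x0001 = PCM, 0xFFFE = WAVE_FORMAT_EXTENSIBLE
			fmt.Printf("  wFormatTag = 0x%04X\n", binary.LittleEndian.Uint16(body[0:2]))
		}
	}
}

On post.wav this should report a 40-byte "fmt " chunk with wFormatTag 0xFFFE, followed by the "fact" and "data" chunks.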
I'm trying to understand the algorithm used for compression value = 1 with the Epson ESCP2 print command, "ESC-i". I have a hex dump of a raw print file which looks, in part, like the hexdump below (note little-endian format issues).
000006a 1b ( U 05 00 08 08 08 40 0b
units; (page1=08), (vt1=08), (hz1=08), (base2=40 0b=0xb40=2880)
...
00000c0 691b 0112 6802 0101 de00
esc i 12 01 02 68 01 01 00
print color1, compress1, bits1, bytes2, lines2, data...
color1 = 0x12 = 18 = light cyan
compress1 = 1
bits1 (bits/pixel) = 0x2 = 2
bytes2 is ??? = 0x0168 = 360
lines2 is # lines to print = 0x0001 = 1
00000c9 de 1200 9a05 6959
00000d0 5999 a565 5999 6566 5996 9695 655a fd56
00000e0 1f66 9a59 6656 6566 5996 9665 9659 6666
00000f0 6559 9999 9565 6695 9965 a665 6666 6969
0000100 5566 95fe 9919 6596 5996 5696 9666 665a
0000110 5956 6669 0456 1044 0041 4110 0040 8140
0000120 9000 0d00
1b0c 1b40 5228 0008 5200 4d45
FF esc # esc ( R 00 REMOTE1
The difficulty I'm having is how to decode the data, starting at 00000c9, given 2 bits/pixel and the count of 360. It's my understanding that this is some form of TIFF or RLE encoding, but I can't decode it in a way that makes sense. The output was produced by the Gutenprint plugin for GIMP.
Any help would be much appreciated.
The byte count is not a count of the bytes in the input stream; it is the count of bytes after the stream has been expanded to its uncompressed form, so the expanded output should total 360 bytes. Each input byte is interpreted as a signed counter: if it is non-negative, the next (value + 1) bytes are copied as-is; if it is negative, the single byte that follows is repeated (|value| + 1) times. The 0x0D at the end is a terminating carriage return for the line as a whole.
The input stream is treated only as a string of whole bytes, despite the fact that the individual pixel/nozzle controls are only 2 bits each. So it is not really possible to use a repeat count for something like a 3-nozzle sequence; a repeat count must always cover a full 4-nozzle byte.
The above example then specifies:
0xde00 => repeat 0x00 35 times
0x12 => use the next 19 bytes as is
0xfd66 => repeat 0x66 4 times
0x1f => use the next 32 bytes as is
etc.
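Here is a minimal sketch of that decoder in Go. The input below is a short illustrative stand-in, not the real 360-byte line: a zero run, a literal run, and a repeated 0x66, as in the worked example above.

package main

import "fmt"

// decodeRLE expands the counter-prefixed stream described above:
// a non-negative counter n means "copy the next n+1 bytes verbatim";
// a negative counter n means "repeat the single next byte |n|+1 times".
func decodeRLE(in []byte, expanded int) []byte {
	out := make([]byte, 0, expanded)
	i := 0
	for i < len(in) && len(out) < expanded {
		n := int(int8(in[i])) // the counter byte is signed
		i++
		if n >= 0 {
			out = append(out, in[i:i+n+1]...) // literal run
			i += n + 1
		} else {
			for j := 0; j < -n+1; j++ { // repeat run
				out = append(out, in[i])
			}
			i++
		}
	}
	return out
}

func main() {
	// 0xde => repeat 0x00 35 times; 0x03 => copy 4 literal bytes;
	// 0xfd => repeat 0x66 4 times. Expanded length: 35 + 4 + 4 = 43.
	in := []byte{0xde, 0x00, 0x03, 0x59, 0x99, 0x65, 0x96, 0xfd, 0x66}
	out := decodeRLE(in, 43)
	fmt.Printf("%d bytes: % x\n", len(out), out)
}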
Let's take this table with characters and HEX encodings in Unicode and UTF-8.
Does anyone know how it is possible to convert UTF-8 hex to Unicode code point using only math operations?
E.g. let's take the first row. Given 227, 129, 130, how do we get 12354?
Is there any simple way to do it by using only math operations?
Unicode code point | UTF-8                      | Char
------------------ | -------------------------- | ----
30 42 (12354)      | e3 (227) 81 (129) 82 (130) | あ
30 44 (12356)      | e3 (227) 81 (129) 84 (132) | い
30 46 (12358)      | e3 (227) 81 (129) 86 (134) | う
* Source: https://www.utf8-chartable.de/unicode-utf8-table.pl?start=12288&unicodeinhtml=hex
This video is the perfect source (watch from 6:15), but here is its summary and a code sample in Go. With letters I mark the bits taken from the UTF-8 bytes; once you understand the logic, it's easy to apply the bitwise operators:
1-byte (ASCII), e.g. 'E':
    UTF-8 pattern:  0xxx xxxx
    UTF-8 bytes:    0100 0101 (0x45)
    Code point:     0100 0101 (U+0045)
    No conversion needed; the UTF-8 byte and the code point are the same value.

2-byte, e.g. 'Ê':
    UTF-8 pattern:  110x xxxx  10yy yyyy
    UTF-8 bytes:    1100 0011  1000 1010 (0xC3 0x8A)
    Code point:     xxxxx yyyyyy = 0000 1100 1010 (U+00CA)
    Take the low 5 bits of the 1st byte and the low 6 bits of the 2nd byte.

3-byte, e.g. 'あ':
    UTF-8 pattern:  1110 xxxx  10yy yyyy  10zz zzzz
    UTF-8 bytes:    1110 0011  1000 0001  1000 0010 (0xE3 0x81 0x82)
    Code point:     xxxx yyyyyy zzzzzz = 0011 0000 0100 0010 (U+3042)
    Take the low 4 bits of the 1st byte and the low 6 bits of the 2nd and 3rd bytes.

4-byte, e.g. '𐄟':
    UTF-8 pattern:  1111 0xxx  10yy yyyy  10zz zzzz  10ww wwww
    UTF-8 bytes:    1111 0000  1001 0000  1000 0100  1001 1111 (0xF0 0x90 0x84 0x9F)
    Code point:     xxx yyyyyy zzzzzz wwwwww = 0 0001 0000 0001 0001 1111 (U+1011F)
    Take the low 3 bits of the 1st byte and the low 6 bits of the 2nd, 3rd and 4th bytes.
2-byte UTF-8
func get(byte1, byte2 byte) rune {
	int1 := uint16(byte1&0b_0001_1111) << 6 // low 5 bits of the 1st byte
	int2 := uint16(byte2 & 0b_0011_1111)    // low 6 bits of the 2nd byte
	return rune(int1 + int2)
}
3-byte UTF-8
func get(byte1, byte2, byte3 byte) rune {
	int1 := uint16(byte1&0b_0000_1111) << 12 // low 4 bits of the 1st byte
	int2 := uint16(byte2&0b_0011_1111) << 6  // low 6 bits of the 2nd byte
	int3 := uint16(byte3 & 0b_0011_1111)     // low 6 bits of the 3rd byte
	return rune(int1 + int2 + int3)
}
4-byte UTF-8
func get(byte1, byte2, byte3, byte4 byte) rune {
	int1 := uint32(byte1&0b_0000_0111) << 18 // low 3 bits of the 1st byte
	int2 := uint32(byte2&0b_0011_1111) << 12 // low 6 bits of the 2nd byte
	int3 := uint32(byte3&0b_0011_1111) << 6  // low 6 bits of the 3rd byte
	int4 := uint32(byte4 & 0b_0011_1111)     // low 6 bits of the 4th byte
	return rune(int1 + int2 + int3 + int4)
}
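To answer the question's concrete example with plain arithmetic only: for the bytes 227, 129, 130, compute (227 mod 16) * 4096 + (129 mod 64) * 64 + (130 mod 64) = 3 * 4096 + 1 * 64 + 2 = 12354, i.e. U+3042. The mod operations strip the fixed UTF-8 marker bits (the 1110 and 10 prefixes), and the multiplications by 4096 and 64 are the shifts by 12 and 6 bits.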
I'm trying to read into R a test file encoded in Code page 437. Here is the file, and here is its hex-dump:
00000000: 0b0c 0e0f 1011 1213 1415 1617 1819 1a1b ................
00000010: 1c1d 1e1f 2021 2223 2425 2627 2829 2a2b .... !"#$%&'()*+
00000020: 2c2d 2e2f 3031 3233 3435 3637 3839 3a3b ,-./0123456789:;
00000030: 3c3d 3e3f 4041 4243 4445 4647 4849 4a4b <=>?#ABCDEFGHIJK
00000040: 4c4d 4e4f 5051 5253 5455 5657 5859 5a5b LMNOPQRSTUVWXYZ[
00000050: 5c5d 5e5f 6061 6263 6465 6667 6869 6a6b \]^_`abcdefghijk
00000060: 6c6d 6e6f 7071 7273 7475 7677 7879 7a7b lmnopqrstuvwxyz{
00000070: 7c7d 7e7f ffad 9b9c 9da6 aeaa f8f1 fde6 |}~.............
00000080: faa7 afac aba8 8e8f 9280 90a5 999a e185 ................
00000090: a083 8486 9187 8a82 8889 8da1 8c8b a495 ................
000000a0: a293 94f6 97a3 9681 989f e2e9 e4e8 eae0 ................
000000b0: ebee e3e5 e7ed fc9e f9fb ecef f7f0 f3f2 ................
000000c0: a9f4 f5c4 b3da bfc0 d9c3 b4c2 c1c5 cdba ................
000000d0: d5d6 c9b8 b7bb d4d3 c8be bdbc c6c7 ccb5 ................
000000e0: b6b9 d1d2 cbcf d0ca d8d7 cedf dcdb ddde ................
000000f0: b0b1 b2fe 0a .....
The file contains 245 characters (including the final newline), but R only reads 242 of them:
> test_text <- readLines(file('437__characters.txt', encoding='437'))
Warning message:
In readLines(file("437__characters.txt", :
incomplete final line found on '437__characters.txt'
> test_text
[1] "\v\f\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037 !\"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\177 ¡¢£¥ª«¬°±²µ·º»¼½¿ÄÅÆÇÉÑÖÜßàáâäåæçèéêëìíîïñòóôö÷ùúûüÿƒΓΘΣΦΩαδεπστφⁿ₧∙√∞∩≈≡≤≥⌐⌠⌡─│┌┐└┘├┤┬┴┼═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬▀▄█▌▐░▒"
> nchar(test_text)
[1] 242
You'll note that R doesn't read the final characters "▓■\n".
My best guess is that this is something to do with how R determines the length of text files, because of the following:
Even though the file is terminated with a newline (0x0a), R gives an 'incomplete final line found' warning
Adding seven or more characters to the end of the file makes it read correctly
Similarly, the file is read correctly if you remove three characters from anywhere in the file
The same issue seems to occur with reading files encoded in other DOS code pages
This question might be related: R: read.table stops when meeting specific utf-16 characters.
It appears to be something wrong with readLines(), but it could very well be an issue with the text-mode file connection, with something amiss happening in the encoding = part. Anyway, here's a workaround: load the file as binary, then convert. And stay away from bad-voodoo 1980s code pages.
Using readLines()
This does not capture the last \n, since that delimits the unit of text input by readLines().
test_text2 <- readLines(file("~/Downloads/437__characters.txt", raw = TRUE))
test_text3 <- stringi::stri_conv(test_text2, "IBM437", "UTF-8")
stringi::stri_length(test_text3)
## [1] 244
test_text3
## [1] "\v\f\016\017\020\021\022\023\024\025\026\027\030\031\034\033\177\035\036\037 !\"#$%&'()*+,-./
## 0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\032 ¡¢£¥ª«¬°±²μ·º
## »¼½¿ÄÅÆÇÉÑÖÜßàáâäåæçèéêëìíîïñòóôö÷ùúûüÿƒΓΘΣΦΩαδεπστφⁿ₧∙√∞∩≈≡≤≥⌐⌠⌡─│┌┐└┘├┤┬┴┼═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥
## ╦╧╨╩╪╫╬▀▄█▌▐░▒▓■"
Using readBin()
Captures everything including the \n.
test_text_bin <- readBin(file("~/Downloads/437__characters.txt", "rb"),
n = 245, what = "raw")
test_text_bin_UTF8 <- stringi::stri_conv(test_text_bin, "IBM437", "UTF-8")
stringi::stri_length(test_text_bin_UTF8)
## [1] 245
test_text_bin_UTF8
## [1] "\v\f\016\017\020\021\022\023\024\025\026\027\030\031\034\033\177\035\036\037 !\"#$%&'()*+,-./
## 0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\032 ¡¢£¥ª«¬°±²μ·º
## »¼½¿ÄÅÆÇÉÑÖÜßàáâäåæçèéêëìíîïñòóôö÷ùúûüÿƒΓΘΣΦΩαδεπστφⁿ₧∙√∞∩≈≡≤≥⌐⌠⌡─│┌┐└┘├┤┬┴┼═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥
## ╦╧╨╩╪╫╬▀▄█▌▐░▒▓■\n"
Write a program to swap odd and even bits in an integer.
For example, bit 0 and bit 1 are swapped, bit 2 and bit 3 are swapped, and so on.
The solution uses 0xaaaaaaaa and 0x55555555.
Can I know what 0xaaaaaaaa and 0x55555555 mean as binary numbers?
Each group of four bits constitutes one hex digit, thus:
0000 0 1000 8
0001 1 1001 9
0010 2 1010 A
0011 3 1011 B
0100 4 1100 C
0101 5 1101 D
0110 6 1110 E
0111 7 1111 F
So, for example, 0x1234 would be 0001 0010 0011 0100 in binary.
For your specific examples:
0xaaaaaaaa = 1010 1010 ... 1010
0x55555555 = 0101 0101 ... 0101
The reason why a solution might use those two values is that, if you AND a value with 0xaaaaaaaa, you keep only the odd-numbered bits (bit 1, bit 3, and so on, counting the least significant bit as bit 0), which you can then shift right to move them into the even positions.
Similarly, if you AND a value with 0x55555555, you keep only the even-numbered bits, which you can then shift left to move them into the odd positions.
Then you can just OR those two values together and the bits have been swapped.
For example, let's start with the 16-bit value abcdefghijklmnop (each letter being a bit and with a zero bit being . to make it more readable):
abcdefghijklmnop abcdefghijklmnop
AND 1.1.1.1.1.1.1.1. AND .1.1.1.1.1.1.1.1
= a.c.e.g.i.k.m.o. = .b.d.f.h.j.l.n.p
>>1 = .a.c.e.g.i.k.m.o <<1 = b.d.f.h.j.l.n.p.
\___________ ___________/
\ /
.a.c.e.g.i.k.m.o
OR b.d.f.h.j.l.n.p.
= badcfehgjilknmpo
So each group of two bits has been swapped around. In C, that would be something like:
val = ((val & 0xAAAAAAAA) >> 1) | ((val & 0x55555555) << 1);
but, if this is classwork of some description, I'd suggest you work it out yourself by doing individual operations.
For an in-depth explanation of the bitwise operators that allow you to do this, see this excellent answer here.
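If it helps to see it run, here is the same expression wrapped in a minimal runnable sketch (Go; the sample value is arbitrary):

package main

import "fmt"

// swapOddEven swaps each even-numbered bit with the odd-numbered bit
// above it, exactly as the C expression above does.
func swapOddEven(v uint32) uint32 {
	return (v&0xAAAAAAAA)>>1 | (v&0x55555555)<<1
}

func main() {
	v := uint32(0b1011_0100)
	fmt.Printf("%08b -> %08b\n", v, swapOddEven(v))
	// prints: 10110100 -> 01111000
}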
I'm trying to discover devices, from a coordinator, in my network.
So I sent an ND command to the coordinator and I'm correctly receiving response from other Xbee.
The next step will be to store the information I've received in a web application, in order to send commands and data.
However, I'm still missing some parts of the frame response. So far I've mapped the frame like this:
1 7E start frame
===== =================== MESSAGE LENGTH
2-3 0x00 0x19 -> 25
===== =================== PACKET TYPE
4 88 -> response to a remote AT command
5 02 frame ID
===== =================== AT COMMAND
6-7 0x4E 0x44 "ND"
8 00 status byte (00 -> OK)
===== =================== MY - Remote Address
9-10 0x17 0x85
===== =================== SH - SERIAL NUMBER HIGH
11-14 0x00 0x13 0xA2 0x00
===== =================== SL - SERIAL NUMBER LOW
15-18 0x40 0xB4 0x50 0x23
===== =================== SIGNAL
19 20
= ======== NI - Node Identifier
20 00
21 FF
22 FE
23 01
24 00
25 C1
26 05
27 10
28 1E
===== ===== CHECKSUM (over the 25 bytes after MESSAGE LENGTH)
29 19
So, where can I find the address of the device in this response?
My guess is the NI part of the message, but I haven't found any example/information on how the data is organised.
Could someone point me in the right direction?
As someone told me on the digi.com forum:
NI<CR> (Variable length)
PARENT_NETWORK ADDRESS (2 Bytes)<CR>
DEVICE_TYPE (1 Byte: 0=Coord, 1=Router, 2=End Device)
STATUS (1 Byte: Reserved)
PROFILE_ID (2 Bytes)
MANUFACTURER_ID (2 Bytes)
So, looking at my frame response:
00 --- Node Identifier variable, (here 1 byte = 00 because no value is set up).
FFFE --- parent network address (2 bytes)
01 --- device type
00 --- status
C105 --- profile id
101E --- manufacturer id
This, AFAIK, means that no information about the device's address is given in this last part of the frame. The only address information is SH and SL.
The 16-bit network address is what you've labeled "MY" (0x1785), and the 64-bit MAC address is the combination of SH/SL (00 13 A2 00 40 B4 50 23).
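For what it's worth, here is a minimal Go sketch that pulls those fields out of the command data above, following the layout quoted from the digi.com forum (the 0x20 byte that the question maps as SIGNAL is omitted, matching the decoding above):

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	// Command data from the sample frame: MY, SH, SL, NI (null-terminated
	// string), parent network address, device type, status, profile ID,
	// manufacturer ID.
	p := []byte{
		0x17, 0x85, // MY - 16-bit network address
		0x00, 0x13, 0xA2, 0x00, // SH
		0x40, 0xB4, 0x50, 0x23, // SL
		0x00,       // NI: empty string, terminator only
		0xFF, 0xFE, // parent network address
		0x01,       // device type (1 = router)
		0x00,       // status (reserved)
		0xC1, 0x05, // profile ID
		0x10, 0x1E, // manufacturer ID
	}

	my := binary.BigEndian.Uint16(p[0:2])
	sh := binary.BigEndian.Uint32(p[2:6])
	sl := binary.BigEndian.Uint32(p[6:10])

	// NI is variable length, so scan for its null terminator.
	niEnd := 10 + bytes.IndexByte(p[10:], 0x00)
	ni := string(p[10:niEnd])
	rest := p[niEnd+1:]

	fmt.Printf("16-bit address (MY): %04X\n", my)
	fmt.Printf("64-bit address (SH+SL): %08X %08X\n", sh, sl)
	fmt.Printf("NI: %q  parent: %04X  type: %d  profile: %04X  mfr: %04X\n",
		ni, binary.BigEndian.Uint16(rest[0:2]), rest[2],
		binary.BigEndian.Uint16(rest[4:6]), binary.BigEndian.Uint16(rest[6:8]))
}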