I have written the following code to decrypt a file:
// key and iv are set up earlier (not shown)
data, err := ioutil.ReadFile("file.encrypted")
if err != nil {
	log.Fatal(err)
}
block, err := aes.NewCipher(key)
if err != nil {
	log.Fatal(err)
}
mode := cipher.NewCBCDecrypter(block, iv)
mode.CryptBlocks(data, data) // decrypt in place
err = ioutil.WriteFile("file.decrypted", data, 0644)
if err != nil {
	log.Fatal(err)
}
I have also decrypted the file using OpenSSL:
openssl aes-128-cbc -d -in file.encrypted -out file.decrypted -iv $IV -K $KEY
The output file from the Go program is 8 bytes larger than the output file from OpenSSL.
Tail of hexdump from file generated by OpenSSL:
ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
ff ff ff ff ff ff ff ff |........|
Tail of hexdump from file generated by Go program:
ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
ff ff ff ff ff ff ff ff 08 08 08 08 08 08 08 08 |................|
Why is 08 08 08 08 08 08 08 08 appended to the file output from the Go program?
EDIT:
As BJ Black explains, the reason for the extra bytes in the output from my Go program is PKCS padding.
The file is encrypted with AES in CBC mode, so the plaintext input must be a multiple of the block size; padding is added to fulfill this requirement. AES has a block size of 16 bytes, so the total number of padding bytes is always between 1 and 16. Each padding byte has a value equal to the total number of padding bytes, which in my case is 0x08.
So, to find out the amount of padding added to the file, one just has to read the last byte of the decrypted data and convert it to an int:
paddingBytes := int(data[len(data)-1])
The WriteFile function can then be modified like this:
err = ioutil.WriteFile("file.decrypted", data[:len(data)-paddingBytes], 0644)
Now the output from my Go program is identical to the output from OpenSSL.
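A more defensive version of that unpad step would validate every padding byte instead of trusting only the last one. A minimal sketch (the helper name is my own, and it uses the standard errors package):

// stripPKCS7 removes CBC padding after checking that every padding byte
// carries the expected value, so corrupted input or a wrong key fails loudly
// instead of silently truncating the output.
func stripPKCS7(data []byte, blockSize int) ([]byte, error) {
	if len(data) == 0 || len(data)%blockSize != 0 {
		return nil, errors.New("data is not a multiple of the block size")
	}
	n := int(data[len(data)-1])
	if n == 0 || n > blockSize {
		return nil, errors.New("invalid padding length")
	}
	for _, b := range data[len(data)-n:] {
		if int(b) != n {
			return nil, errors.New("invalid padding byte")
		}
	}
	return data[:len(data)-n], nil
}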
What you're seeing is PKCS padding, which OpenSSL is removing for you and Go isn't by default. See the relevant Reddit post here.
Basically, follow the example and you're good to go.
I am working on an FPGA Ethernet project. My problem is the following: a UDP/IP packet sent from the FPGA is captured by Wireshark, which gives me the following warning: "BAD UDP LENGTH 26 > IP PAYLOAD LENGTH Len=18 (Malformed Packet)".
(Screenshot of the BAD UDP LENGTH warning.)
Actually, I am trying to send the following packet:
55555555555555D598EECB9B6EF400123456789008004500002EB3FE0000801108BDA9FE1504A9FE1503FA00FA00001A4563 000102030405060708090A0B0C0D0E0F1011 06A07518
You can see that the last byte of data is 11, and that byte is not displayed by Wireshark. It is obviously transmitted, as I have seen it on the oscilloscope. I have tried a number of different Ethernet packet generators (PacketETH, EthernetUDP, C#) and all of them generate the same packets, so I think the problem lies neither in the packet nor in the packet generators. I have also captured packets with different network monitoring software, Omnipeek, which gave me the same result as Wireshark: the last byte is not displayed.
If the last byte were displayed, I think Wireshark wouldn't give me that error.
Does anybody know how to solve this problem?
Here are some additional details:
1) I am using a Digilent Anvyl FPGA, which has a LAN8720A-CP-TR transceiver. I have written the code in VHDL and also run a simulation in ISim, which gave me a correct result; concretely, all bits are sent successively with the relevant values. Besides, I have checked the simulation result in reality by probing the LAN8720A-CP-TR transceiver's transmit pins with a Digilent Electronics Explorer.
2) For Ethernet packet generation I am using a simple program, EthernetUDP, which you can download from fpga4fun.com. (Screenshot of a packet generated by that program.)
If I copy the frame data from the image, I get this:
0000 98 ee cb 9b 6e f4 00 12 34 56 78 90 08 00 45 00
0010 00 2e b3 fe 00 00 80 11 08 bd a9 fe 15 04 a9 fe
0020 15 03 fa 00 fa 00 00 1a 45 63 00 01 02 03 04 05
0030 06 07 08 09 0a 0b 0c 0d 0e 0f 10 11
0040
And if I save that in a file called packet.txt, run text2pcap packet.txt packet.pcap, and then load the resulting capture file back into Wireshark, I get a completely valid packet, including the trailing 0x11 byte, and "bytes on wire" is indicated as 60 bytes instead of 59.
So somehow in your setup, that last byte didn't get handed off to Wireshark; it simply wasn't captured, which is why it's not displayed. Why it wasn't captured is an open question. It may well have been transmitted, as you say, since you can see it on the oscilloscope, but something about it or your capture hardware isn't correct.
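If it helps, the length fields can be cross-checked against the full 60-byte frame with a throwaway sketch (Go only because it is handy here; the offsets assume the Ethernet/IPv4/UDP layout shown above):

package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

func main() {
	// The complete 60-byte frame from the dump above, including the trailing 0x11
	frame, err := hex.DecodeString(
		"98eecb9b6ef40012345678900800" + // Ethernet: dst MAC, src MAC, EtherType 0x0800
			"4500002eb3fe0000801108bda9fe1504a9fe1503" + // IPv4 header, total length 0x002e
			"fa00fa00001a4563" + // UDP header, length field 0x001a
			"000102030405060708090a0b0c0d0e0f1011") // 18 payload bytes, ending in 0x11
	if err != nil {
		panic(err)
	}

	ipTotalLen := binary.BigEndian.Uint16(frame[16:18]) // IPv4 total length field
	udpLen := binary.BigEndian.Uint16(frame[38:40])     // UDP length field

	fmt.Println("frame bytes:    ", len(frame)) // 60
	fmt.Println("IP total length:", ipTotalLen) // 46 = 60 - 14 (Ethernet header)
	fmt.Println("UDP length:     ", udpLen)     // 26 = 46 - 20 (IP header) = 8 + 18
}

With all 60 bytes present the three lengths agree, which is consistent with the capture (not the transmission) dropping the final byte.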
I'm using a microcontroller (PIC18F26J50) to interface with a G.Skill 4GB microSD card.
SD Card initialization is successful and I go from receiving 0x01 (Idle) R1 tokens to 0x00 (Ready) R1 tokens.
Reading a data block works: I am able to read the location of partition 1 and read the first sector of that partition.
However, when attempting to write a block, I never see a response token. Upon dumping the raw blocks on the card, I see that the data did indeed get written, but it is not aligned properly... the best way to explain is with an actual picture.
This should be filled with 0x01, 0x02, 0x03, and so on up to 0xFF before repeating.
This card works absolutely fine in Windows, and I'm able to read and write data to it properly there.
Investigating, I find that the response I get is 0xCA; if you right-shift that you get 0xE5, a proper response token. The data itself is misaligned one to the left. Additionally, it appears that the two dummy bytes and the token were also written. Correcting for the shift, you get:
FF FF FE 00 01 02 03 04 05 06 07 08 09 0A 0B 0C
So I removed the code to write the 2 dummy bytes and the 0xFE token, and holy s*#$, the card starts writing data IMMEDIATELY after the command, which I believe violates the spec! Can anyone confirm whether this is intended behavior for SDHC cards? Or is this card just running a really s*#$ty SD controller? (I suspect the latter, because I have a 16GB card which is working fine.)
I just had the exact same problem, which turned out not to be a problem with the SD card, but with my SPI interfacing.
The particular chip I was using (Freescale KL03) will retain the current received byte in the data buffer until you read it, even after you have started sending the next one. I was out of sync, so that each time I was writing an SPI byte, waiting for transmission and then reading from the buffer, I was actually getting the previous result and not the current one. As a consequence there was a single byte lag in every SPI transaction.
Thus, my scope revealed that I was exchanging the following with the card:
MOSI: 58 00 00 00 01 00 00 00 00 7E nn nn nn ...
MISO: FF FF FF FF FF FF FF 00 FF FF FF FF FF ...
which resulted in the misalignment you encountered. It should have been like this:
MOSI: 58 00 00 00 01 00 00 00 7E nn nn nn nn ...
MISO: FF FF FF FF FF FF FF 00 FF FF FF FF FF ...
In summary, ensure that you are sending the 7E immediately after the 00 response to your write command.
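For anyone hitting the same symptom, here is a rough sketch of the write-block handshake, assuming a hypothetical full-duplex helper spiTransfer (your SPI driver would supply the real one); the 0xFE start-of-data token is the one the question describes. The point is the pairing: consume the byte received for the byte you just sent before sending the next one, so the exchange never drifts out of sync.

// spiTransfer clocks out one byte and returns the byte received on MISO
// during that same transfer. (Hypothetical helper, hardware-specific.)
var spiTransfer func(tx byte) byte

func writeBlock(addr uint32, block [512]byte) bool {
	// CMD24 (WRITE_BLOCK) = 0x40|24 = 0x58, 4-byte block address, dummy CRC
	for _, b := range []byte{0x58, byte(addr >> 24), byte(addr >> 16), byte(addr >> 8), byte(addr), 0xFF} {
		spiTransfer(b)
	}

	// Poll for the R1 response; 0x00 means the command was accepted
	r1 := byte(0xFF)
	for i := 0; i < 8 && r1 == 0xFF; i++ {
		r1 = spiTransfer(0xFF)
	}
	if r1 != 0x00 {
		return false
	}

	spiTransfer(0xFF) // gap byte many implementations insert before the token
	spiTransfer(0xFE) // start-of-data token
	for _, b := range block {
		spiTransfer(b)
	}
	spiTransfer(0xFF) // two CRC bytes (ignored by default in SPI mode)
	spiTransfer(0xFF)

	// Data response token: low five bits 0b00101 (0x05) means "data accepted"
	if spiTransfer(0xFF)&0x1F != 0x05 {
		return false
	}

	// The card holds MISO low while it is busy programming the block
	for spiTransfer(0xFF) == 0x00 {
	}
	return true
}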
We are doing two-slices-per-frame encoding with our codec, and we get a good H.264 file output that plays fine in VLC.
But when we RTP-packetize that encoded data and stream it to VLC, it shows artifacts. If we use one slice per frame, our packetization is OK and the stream also looks good in VLC.
We are using FU-A fragmentation, and my encoded file configuration is:
resolution: 640x480
framerate: 30fps
bitrate: 800 Kbps
Our encoder is configured to use High Profile, CBR, IDR every 10 frames.
Our encoder output bitstream looks like:
00 00 00 01 67 [DATA] 00 00 00 01 68 [DATA] 00 00 00 01 65 [DATA] 00 00 00 01 65 [DATA] 00 00 00 01 41 [DATA]
So here we have two successive slice NALUs (0x65).
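As an aside, a minimal sketch of splitting such an Annex-B stream on its 4-byte start codes (simplified, not the actual packetizer code), so each NALU can be handled on its own:

// splitAnnexB cuts a byte stream like the one above on 00 00 00 01 start
// codes and returns the NALUs (0x67 SPS, 0x68 PPS, 0x65 IDR slices, ...).
func splitAnnexB(stream []byte) [][]byte {
	var nalus [][]byte
	start := -1
	for i := 0; i+4 <= len(stream); i++ {
		if stream[i] == 0 && stream[i+1] == 0 && stream[i+2] == 0 && stream[i+3] == 1 {
			if start >= 0 {
				nalus = append(nalus, stream[start:i])
			}
			start = i + 4
			i += 3 // skip over the start code
		}
	}
	if start >= 0 {
		nalus = append(nalus, stream[start:])
	}
	return nalus
}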
In our RTP pcap everything looks good (FU-A fragmentation, marker bit, etc.), but VLC and ffplay both show a similar type of artifact, as if the upper half of the frame were stretched vertically.
My pcap file link:
http://www.filedropper.com/rtp
So I reduced the test case to a small, low bitrate (50 Kbps) QCIF stream with no fragmentation and I am still seeing the same problem.
My pcap file link:
http://www.filedropper.com/rtpqcif
Can any expert please look at the pcap file and see what might be causing VLC such trouble playing the stream?
Thank you,
Harshal Patel
Although it's a very old question: when aggregating multiple NALUs with the same timestamp (as in your example), the packetizer should use STAP-A mode, not FU-A. FU-A is meant for a single NALU that would not fit in one RTP packet.
Solve that packetizer issue and everything will work.
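For reference, a minimal sketch of STAP-A aggregation per RFC 6184, assuming the slice NALUs already have their start codes stripped (all names here are made up, not taken from the asker's packetizer):

// buildSTAPA packs several NALUs of the same access unit into one RTP payload.
// STAP-A NAL header: F=0, NRI = highest NRI of the aggregated NALUs, type=24.
func buildSTAPA(nalus [][]byte) []byte {
	var nri byte
	for _, n := range nalus {
		if v := n[0] & 0x60; v > nri {
			nri = v
		}
	}
	payload := []byte{nri | 24}

	// Each aggregation unit: 16-bit big-endian NALU size, then the NALU itself
	for _, n := range nalus {
		payload = append(payload, byte(len(n)>>8), byte(len(n)))
		payload = append(payload, n...)
	}
	return payload
}

The resulting slice becomes the payload of a single RTP packet; both slices share one RTP timestamp, and the marker bit is set on the packet that ends the access unit. FU-A is then only needed when one NALU by itself exceeds the MTU.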
I'm trying to understand how the /d notation affects the opcode.
Example:
FF /6 PUSH r/m16 M Valid Valid Push r/m16.
How is its meaning expressed?
Can anyone give me an example of the difference?
Thanks!
There are actually many instructions using FF as opcode:
INC rm16 FF /0
INC rm32 FF /0
INC rm64 FF /0
DEC rm16 FF /1
DEC rm32 FF /1
DEC rm64 FF /1
CALL rm16 FF /2
CALL rm32 FF /2
CALL rm64 FF /2
CALL FAR mem16:16 FF /3
CALL FAR mem16:32 FF /3
JMP rm16 FF /4
JMP rm32 FF /4
JMP rm64 FF /4
JMP FAR mem16:16 FF /5
JMP FAR mem16:32 FF /5
PUSH rm16 FF /6
PUSH rm32 FF /6
PUSH rm64 FF /6
As you can see, the /d part is a 3-bit field held in the byte following the opcode (the so-called ModR/M byte), which helps discriminate between the instructions that share the opcode.
From the Intel reference documentation:
Many instructions that refer to an operand in memory have an addressing-form specifier byte (called the ModR/M byte) following the primary opcode. The ModR/M byte contains three fields of information:
• The mod field combines with the r/m field to form 32 possible values: eight registers and 24 addressing modes.
• The reg/opcode field specifies either a register number or three more bits of opcode information. The purpose of the reg/opcode field is specified in the primary opcode.
• The r/m field can specify a register as an operand or it can be combined with the mod field to encode an addressing mode. Sometimes, certain combinations of the mod field and the r/m field is used to express opcode information for some instructions.
So the /d value is actually taken from the reg/opcode field. When the CPU decodes the primary opcode, it knows that it should read an additional byte following it, and it reads that field in order to complete the instruction.
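For example, here is a tiny sketch of pulling the /digit out of a ModR/M byte for two encodings that share the FF opcode: FF C0 (mod=11, reg=000, r/m=000, i.e. INC EAX) and FF F0 (mod=11, reg=110, r/m=000, i.e. PUSH EAX/RAX depending on mode):

package main

import "fmt"

func main() {
	modrm := byte(0xF0)        // the byte following the FF opcode
	mod := modrm >> 6          // 0b11: register-direct operand, no memory access
	reg := (modrm >> 3) & 0x07 // the "/digit": 0b110 = /6 selects PUSH within FF
	rm := modrm & 0x07         // 0b000 = (E/R)AX
	fmt.Printf("mod=%02b reg=/%d rm=%03b\n", mod, reg, rm) // mod=11 reg=/6 rm=000
}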
I am setting up a QUdpSocket broadcaster. When I view the output in Wireshark, it says my packets are malformed. Inspecting the packets, it appears they are not emitted with an Ethernet trailer. Do I need to emit this myself, or do you see another issue? My code below is slightly condensed. Note that if connected via a crossover cable, my receiving device (a microcontroller) does see and respond to the packet (as seen in Wireshark). I want to make sure I don't have malformed frames so I can use this on a switched network which allows UDP traffic.
Thanks
const quint16 s_packetHeader = 0x5A5A;
const quint16 s_sendReadBackRegisters = 0x0203;
m_udpSocketWriter = new QUdpSocket(this);
QByteArray datagram;
QDataStream ds(&datagram, QIODevice::WriteOnly);
ds.setVersion(QDataStream::Qt_4_8);
ds << s_packetHeader << s_sendReadBackRegisters; // QDataStream writes big-endian by default, hence 5A 5A 02 03 on the wire
m_udpSocketWriter->writeDatagram(datagram.data(), datagram.size(), QHostAddress::Broadcast, 5000);
and the output from Wireshark:
"1243","886.645245000","172.27.1.117","255.255.255.255","UDP","46","Source port: 58411 Destination port: 5000[Malformed Packet]"
0000 ff ff ff ff ff ff d4 3d 7e 31 e0 27 08 00 45 00 .......=~1.'..E.
0010 00 20 38 6b 00 00 80 11 54 d2 ac 1b 01 75 ff ff . 8k....T....u..
0020 ff ff e4 2b 13 88 00 0c fe 34 5a 5a 02 03 ...+.....4ZZ..
Note that the last four bytes correspond to the data I sent, 5A 5A 02 03.
According to a Google image search, the packet is missing the trailer bits... although, since I am no network expert, that is totally a guess.
Windows 7 x64, VS2010, QT 4.8-latest x64
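For reference, the arithmetic behind that guess, as a throwaway sketch: the 60-byte figure is the Ethernet minimum frame length excluding the 4-byte FCS, and the padding up to that minimum is normally added by the NIC rather than by the application.

package main

import "fmt"

func main() {
	// Sizes taken from the Wireshark dump above
	const ethHeader = 14 // dst MAC + src MAC + EtherType
	const ipHeader = 20  // IPv4 header without options (first byte 0x45)
	const udpHeader = 8
	const payload = 4 // 5A 5A 02 03

	const frame = ethHeader + ipHeader + udpHeader + payload
	const minFrame = 60 // Ethernet minimum frame length, excluding the FCS

	fmt.Println("frame as captured:", frame)                 // 46, the length Wireshark reports
	fmt.Println("padding to reach minimum:", minFrame-frame) // 14 bytes of trailer/padding
}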