I'm trying to encrypt some data in Go, but its length is hardly ever a multiple of cipher.BlockSize.
Is there a "built-in" way to add padding or should I be using a function to add it manually?
This is my solution now:
// encrypt() encrypts the message, but sometimes the
// message isn't the proper length, so we add padding.
func encrypt(msg []byte, key []byte) []byte {
    cipher, err := aes.NewCipher(key)
    if err != nil {
        log.Fatal(err)
    }
    if len(msg) < cipher.BlockSize() {
        var endLength = cipher.BlockSize() - len(msg)
        ending := make([]byte, endLength, endLength)
        msg = append(msg[:], ending[:]...)
        cipher.Encrypt(msg, msg)
    } else {
        var endLength = len(msg) % cipher.BlockSize()
        ending := make([]byte, endLength, endLength)
        msg = append(msg[:], ending[:]...)
        cipher.Encrypt(msg, msg)
    }
    return msg
}
Looking at package cipher, it appears you have to add the padding yourself; see PKCS#7 padding.
Essentially, you append the required number of padding bytes, and the value of each padding byte is the number of bytes added.
Note that you need to add padding consistently: if the data to be encrypted is already an exact multiple of the block size, an entire block of padding must be added, because there is no way to tell from the data alone whether padding is present. It is a common mistake to try to out-smart this. Consider: if the last byte is 0x00, is that padding or data?
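A minimal sketch of that scheme (needs the bytes and errors packages; pkcs7Pad and pkcs7Unpad are illustrative names, not standard-library functions):
// pkcs7Pad appends 1..blockSize bytes, each holding the number of bytes added.
func pkcs7Pad(data []byte, blockSize int) []byte {
    padLen := blockSize - len(data)%blockSize // never 0: a full block is added if already aligned
    return append(data, bytes.Repeat([]byte{byte(padLen)}, padLen)...)
}
// pkcs7Unpad strips the padding added by pkcs7Pad.
func pkcs7Unpad(data []byte, blockSize int) ([]byte, error) {
    if len(data) == 0 || len(data)%blockSize != 0 {
        return nil, errors.New("invalid padded length")
    }
    padLen := int(data[len(data)-1])
    if padLen == 0 || padLen > blockSize {
        return nil, errors.New("invalid padding")
    }
    return data[:len(data)-padLen], nil
}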
Here's my solution:
// padOrTrim returns (size) bytes from input (bb)
// Short bb gets zeros prefixed, Long bb gets left/MSB bits trimmed
func padOrTrim(bb []byte, size int) []byte {
    l := len(bb)
    if l == size {
        return bb
    }
    if l > size {
        return bb[l-size:]
    }
    tmp := make([]byte, size)
    copy(tmp[size-l:], bb)
    return tmp
}
I need to read responses from user provided URLs
I don't want them to overload my server with links to huge files.
I want to read N bytes max and return an error if there are more bytes to read.
I can read N bytes, but how do I detect that the file is incomplete (covering the corner case where the remote file is exactly N bytes long)?
In addition to Peter's answer, there is a ready solution in the net/http package: http.MaxBytesReader():
func MaxBytesReader(w ResponseWriter, r io.ReadCloser, n int64) io.ReadCloser
MaxBytesReader is similar to io.LimitReader but is intended for limiting the size of incoming request bodies. In contrast to io.LimitReader, MaxBytesReader's result is a ReadCloser, returns a non-EOF error for a Read beyond the limit, and closes the underlying reader when its Close method is called.
Originally it was "designed" for limiting the size of incoming request bodies, but it can be used to limit incoming response bodies as well. For that, simply pass nil for the ResponseWriter parameter.
Example using it:
{
    body := ioutil.NopCloser(bytes.NewBuffer([]byte{0, 1, 2, 3, 4}))
    r := http.MaxBytesReader(nil, body, 4)
    buf, err := ioutil.ReadAll(r)
    fmt.Println("When body is large:", buf, err)
}
{
    body := ioutil.NopCloser(bytes.NewBuffer([]byte{0, 1, 2, 3, 4}))
    r := http.MaxBytesReader(nil, body, 5)
    buf, err := ioutil.ReadAll(r)
    fmt.Println("When body is exact (OK):", buf, err)
}
{
    body := ioutil.NopCloser(bytes.NewBuffer([]byte{0, 1, 2, 3, 4}))
    r := http.MaxBytesReader(nil, body, 6)
    buf, err := ioutil.ReadAll(r)
    fmt.Println("When body is small (OK):", buf, err)
}
Output (try it on the Go Playground):
When body is large: [0 1 2 3] http: request body too large
When body is exact (OK): [0 1 2 3 4] <nil>
When body is small (OK): [0 1 2 3 4] <nil>
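To wrap a real response body the same way, a hedged sketch (the URL and the 1MB limit are just placeholders):
package main

import (
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    resp, err := http.Get("https://example.com/file") // placeholder URL
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Pass nil for the ResponseWriter, as above, since this is a client-side response.
    r := http.MaxBytesReader(nil, resp.Body, 1<<20)
    buf, err := ioutil.ReadAll(r)
    if err != nil {
        log.Fatal(err) // includes the "too large" error if the body exceeds the limit
    }
    log.Printf("read %d bytes", len(buf))
}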
Simply try to read your maximum acceptable size plus 1 byte. For an acceptable size of 1MB:
var res *http.Response
b := make([]byte, 1<<20+1)
n, err := io.ReadFull(res.Body, b)
switch err {
case nil:
    log.Fatal("Response larger than 1MB")
case io.ErrUnexpectedEOF:
    // That's okay; the response is exactly 1MB or smaller.
    b = b[:n]
default:
    log.Fatal(err)
}
You can also do the same thing with an io.LimitedReader:
var res *http.Response
r := &io.LimitedReader{
    R: res.Body,
    N: 1<<20 + 1,
}
// handle response body somehow
io.Copy(ioutil.Discard, r)
if r.N == 0 {
    log.Fatal("Response larger than 1MB")
}
Note that both methods limit the uncompressed size. Significantly fewer bytes may traverse the network if the response is compressed. You need to be clear about whether you want to limit network or memory usage and adjust the limit accordingly, possibly on a case-by-case basis.
You can also check the Content-Length field in the response header to get the total file size.
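A minimal sketch of that check; note that Content-Length may be missing or untrustworthy, so it should only complement the read limit above, not replace it:
resp, err := http.Get(url) // url: the user-provided URL
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()

// ContentLength is -1 when the server did not send a Content-Length header.
if resp.ContentLength > 1<<20 {
    log.Fatal("Response larger than 1MB")
}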
I am trying to understand why making the buffer size of a channel larger causes my code to behave unexpectedly. If the buffer is smaller than my input (100 ints), the output is as expected: 7 goroutines each read a subset of the input and send output on another channel, which prints it. If the buffer is the same size as or larger than the input, I get no output and no error. Am I closing a channel at the wrong time? Do I have the wrong expectation about how buffers work? Or something else?
package main

import (
    "fmt"
    "sync"
)

var wg1, wg2 sync.WaitGroup

func main() {
    share := make(chan int, 10)
    out := make(chan string)
    go printChan(out)
    for j := 1; j <= 7; j++ {
        go readInt(share, out, j)
    }
    for i := 1; i <= 100; i++ {
        share <- i
    }
    close(share)
    wg1.Wait()
    close(out)
    wg2.Wait()
}

func readInt(in chan int, out chan string, id int) {
    wg1.Add(1)
    for n := range in {
        out <- fmt.Sprintf("goroutine:%d was sent %d", id, n)
    }
    wg1.Done()
}

func printChan(out chan string) {
    wg2.Add(1)
    for l := range out {
        fmt.Println(l)
    }
    wg2.Done()
}
To run this:
Small buffer, expected output. http://play.golang.org/p/4r7rTGypPO
Big buffer, no output. http://play.golang.org/p/S-BDsw7Ctu
This has nothing directly to do with the size of the buffer. Adding the buffer exposes a bug in where you're calling waitGroup.Add(1).
You have to add to the WaitGroup before you dispatch the goroutine, otherwise you may end up calling Wait() before the waitGroup.Add(1) executes.
http://play.golang.org/p/YaDhc6n8_B
The reason it works in the first case and not the second is that the sends block once the small buffer fills, which ensures the goroutines have run at least as far as their Add calls. In the second example, the for loop fills up the channel, closes it, and calls Wait before anything else can happen.
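A sketch of that fix applied to the program above (not necessarily identical to the linked playground): each Add moves into main, before the goroutine it accounts for is started; using defer for Done is a tidy-up, not part of the fix.
package main

import (
    "fmt"
    "sync"
)

var wg1, wg2 sync.WaitGroup

func main() {
    share := make(chan int, 100) // with Add before go, the buffer size no longer affects correctness
    out := make(chan string)

    wg2.Add(1) // register the printer before starting it
    go printChan(out)

    for j := 1; j <= 7; j++ {
        wg1.Add(1) // register each reader before starting it
        go readInt(share, out, j)
    }

    for i := 1; i <= 100; i++ {
        share <- i
    }
    close(share)
    wg1.Wait() // all readers done, nothing will send on out anymore
    close(out)
    wg2.Wait() // printer has drained out
}

func readInt(in chan int, out chan string, id int) {
    defer wg1.Done()
    for n := range in {
        out <- fmt.Sprintf("goroutine:%d was sent %d", id, n)
    }
}

func printChan(out chan string) {
    defer wg2.Done()
    for l := range out {
        fmt.Println(l)
    }
}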
I have a problem with decryption when I try to decrypt the same byte slice again.
Example of code for clarification:
package main

import (
    "crypto/cipher"
    "crypto/des"
    "fmt"
)

const (
    // tripleKey is TripleDES key string (3x8 bytes)
    tripleKey = "12345678asdfghjkzxcvbnmq"
)

var (
    encrypter cipher.BlockMode
    decrypter cipher.BlockMode
)

func init() {
    // tripleDESChiper is chiper block based on tripleKey used for encryption/decryption
    tripleDESChiper, err := des.NewTripleDESCipher([]byte(tripleKey))
    if err != nil {
        panic(err)
    }
    // iv is Initialization Vector used for encrypter/decrypter creation
    ciphertext := []byte("0123456789qwerty")
    iv := ciphertext[:des.BlockSize]
    // create encrypter and decrypter
    encrypter = cipher.NewCBCEncrypter(tripleDESChiper, iv)
    decrypter = cipher.NewCBCDecrypter(tripleDESChiper, iv)
}

func main() {
    message := "12345678qwertyuia12345678zxcvbnm,12345678poiuytr"
    data := []byte(message)
    hash := encrypt(data)
    decoded1 := decrypt(hash)
    decoded2 := decrypt(hash)
    decoded3 := decrypt(hash)
    decoded4 := decrypt(hash)
    fmt.Printf("encrypted data : %x\n", data)
    fmt.Printf("1 try of decryption result : %x\n", decoded1)
    fmt.Printf("2 try of decryption result : %x\n", decoded2)
    fmt.Printf("3 try of decryption result : %x\n", decoded3)
    fmt.Printf("4 try of decryption result : %x\n", decoded4)
}

func encrypt(msg []byte) []byte {
    encrypted := make([]byte, len(msg))
    encrypter.CryptBlocks(encrypted, msg)
    return encrypted
}

func decrypt(hash []byte) []byte {
    decrypted := make([]byte, len(hash))
    decrypter.CryptBlocks(decrypted, hash)
    return decrypted
}
This code is also available and runnable
on the playground.
It gives the following result:
encrypted data : 313233343536373871776572747975696131323334353637387a786376626e6d2c3132333435363738706f6975797472
1 try of decryption result : 313233343536373871776572747975696131323334353637387a786376626e6d2c3132333435363738706f6975797472
2 try of decryption result : 5e66fa74456402c271776572747975696131323334353637387a786376626e6d2c3132333435363738706f6975797472
3 try of decryption result : 5e66fa74456402c271776572747975696131323334353637387a786376626e6d2c3132333435363738706f6975797472
4 try of decryption result : 5e66fa74456402c271776572747975696131323334353637387a786376626e6d2c3132333435363738706f6975797472
As you can see, the first decryption works and returns a valid result, but all subsequent tries return a wrong result: the first 16 bytes are not the same as in the source byte slice.
Can somebody explain what I am doing wrong?
Short version: don't reuse the decrypter object.
Longer version: You're using a cipher in CBC mode: when encrypting the data, the plaintext for block N is XOR-ed with the ciphertext for block N-1 (or the IV, on the first block). On decryption this is done in reverse.
This means that when you try to reuse your decrypter object you don't get the correct results, because the state isn't correct: it is decrypting the blocks as if they were subsequent blocks in your message. A peculiarity of CBC is that an incorrect IV only affects the first decrypted block.
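A minimal sketch of that advice applied to the code above, assuming tripleDESChiper and iv are kept as package-level variables rather than locals of init():
func decrypt(hash []byte) []byte {
    // A fresh BlockMode per message resets the CBC chaining state to the IV.
    dec := cipher.NewCBCDecrypter(tripleDESChiper, iv)
    decrypted := make([]byte, len(hash))
    dec.CryptBlocks(decrypted, hash)
    return decrypted
}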
Trying to emulate an algorithm in Go that is basically AES ECB Mode encryption.
Here's what I have so far
func Decrypt(data []byte) []byte {
    cipher, err := aes.NewCipher([]byte(KEY))
    if err == nil {
        cipher.Decrypt(data, PKCS5Pad(data))
        return data
    }
    return nil
}
I also have a PKCS5Padding algorithm, which is tested and working, which pads the data first. I can't find any information on how to switch the encryption mode in the Go AES package (it's definitely not in the docs).
I have this code in another language, which is how I know this algorithm isn't working quite correctly.
EDIT: Here is the method as I have interpreted it from the issue page
func AESECB(ciphertext []byte) []byte {
    cipher, _ := aes.NewCipher([]byte(KEY))
    fmt.Println("AESing the data")
    bs := 16
    if len(ciphertext)%bs != 0 {
        panic("Need a multiple of the blocksize")
    }
    plaintext := make([]byte, len(ciphertext))
    for len(plaintext) > 0 {
        cipher.Decrypt(plaintext, ciphertext)
        plaintext = plaintext[bs:]
        ciphertext = ciphertext[bs:]
    }
    return plaintext
}
This is actually not returning any data; maybe I screwed something up when changing it from encrypting to decrypting.
Electronic codebook ("ECB") is a very straightforward mode of operation. The data to be encrypted is divided into byte blocks, all having the same size. For each block, a cipher is applied, in this case AES, generating the encrypted block.
The code snippet below decrypts AES-128 data in ECB (note that the block size is 16 bytes):
package main

import (
    "crypto/aes"
)

func DecryptAes128Ecb(data, key []byte) []byte {
    cipher, _ := aes.NewCipher([]byte(key))
    decrypted := make([]byte, len(data))
    size := 16
    for bs, be := 0, size; bs < len(data); bs, be = bs+size, be+size {
        cipher.Decrypt(decrypted[bs:be], data[bs:be])
    }
    return decrypted
}
As mentioned by #OneOfOne, ECB is insecure and very easy to detect, as repeated blocks will always encrypt to the same encrypted blocks. This Crypto SE answer gives a very good explanation why.
Why? We left ECB out intentionally: it's insecure, and if needed it's
trivial to implement.
https://github.com/golang/go/issues/5597
I used your code so I feel the need to show you how I fixed it.
I am doing the cryptopals challenges for this problem in Go.
I'll walk you through the mistake since the code is mostly correct.
for len(plaintext) > 0 {
    cipher.Decrypt(plaintext, ciphertext)
    plaintext = plaintext[bs:]
    ciphertext = ciphertext[bs:]
}
The loop does decrypt the data, but the function never returns it: plaintext is re-sliced on every iteration, so by the end it has zero length and return plaintext gives back an empty slice.
i := 0
plaintext := make([]byte, len(ciphertext))
finalplaintext := make([]byte, len(ciphertext))
for len(ciphertext) > 0 {
    cipher.Decrypt(plaintext, ciphertext)
    ciphertext = ciphertext[bs:]
    decryptedBlock := plaintext[:bs]
    for index, element := range decryptedBlock {
        finalplaintext[(i*bs)+index] = element
    }
    i++
    plaintext = plaintext[bs:]
}
return finalplaintext[:len(finalplaintext)-5]
What this new improvement does is store the decrypted data into a new []byte called finalplaintext. If you return that you get the data.
It's important to do it this way since the Decrypt function only works one block size at a time.
I return a slice because I suspect it's padded. I am new to cryptography and Go so anyone feel free to correct/revise this.
Ideally you want to implement the crypto/cipher#BlockMode interface. Since an official one doesn't exist, I used crypto/cipher#NewCBCEncrypter as a starting point:
package ecb

import "crypto/cipher"

type ecbEncrypter struct{ cipher.Block }

func newECBEncrypter(b cipher.Block) cipher.BlockMode {
    return ecbEncrypter{b}
}

func (x ecbEncrypter) BlockSize() int {
    return x.Block.BlockSize()
}

func (x ecbEncrypter) CryptBlocks(dst, src []byte) {
    size := x.BlockSize()
    if len(src)%size != 0 {
        panic("crypto/cipher: input not full blocks")
    }
    if len(dst) < len(src) {
        panic("crypto/cipher: output smaller than input")
    }
    for len(src) > 0 {
        x.Encrypt(dst, src)
        src, dst = src[size:], dst[size:]
    }
}
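For decryption, which is what the question needs, a matching type can follow exactly the same pattern; this is a sketch along the lines of the encrypter above, not code from the answer:
type ecbDecrypter struct{ cipher.Block }

func newECBDecrypter(b cipher.Block) cipher.BlockMode {
    return ecbDecrypter{b}
}

func (x ecbDecrypter) BlockSize() int {
    return x.Block.BlockSize()
}

func (x ecbDecrypter) CryptBlocks(dst, src []byte) {
    size := x.BlockSize()
    if len(src)%size != 0 {
        panic("crypto/cipher: input not full blocks")
    }
    if len(dst) < len(src) {
        panic("crypto/cipher: output smaller than input")
    }
    // Each block is decrypted independently; that independence is what makes ECB weak.
    for len(src) > 0 {
        x.Decrypt(dst, src)
        src, dst = src[size:], dst[size:]
    }
}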
I was confused by a couple of things.
First, I needed an AES-256 version of the above algorithm. It turns out aes.BlockSize (which is 16) doesn't change when the given key has length 32, so simply passing a 32-byte key is enough to get AES-256.
Second, the decrypted value still contains padding, and the padding value changes depending on the length of the encrypted string. E.g. when there are 5 padding characters, the padding character itself will be 5.
Here is my function which returns a string:
func DecryptAes256Ecb(hexString string, key string) string {
    data, _ := hex.DecodeString(hexString)
    cipher, _ := aes.NewCipher([]byte(key))
    decrypted := make([]byte, len(data))
    size := 16
    for bs, be := 0, size; bs < len(data); bs, be = bs+size, be+size {
        cipher.Decrypt(decrypted[bs:be], data[bs:be])
    }
    // remove the padding. The last character in the byte array is the number of padding chars
    paddingSize := int(decrypted[len(decrypted)-1])
    return string(decrypted[0 : len(decrypted)-paddingSize])
}
I am using a Vector type to store arrays of bytes (variable sizes)
store := vector.New(200);
...
rbuf := make([]byte, size);
...
store.Push(rbuf);
That all works well, but when I try to retrieve the values, the compiler tells me I need to use type assertions. So I add those in, and try
for i := 0; i < store.Len(); i++ {
    el := store.At(i).([]byte);
    ...
But when I run this it bails out with:
interface is nil, not []uint8
throw: interface conversion
Any idea how I can 'cast'/convert from the empty Element interface that Vector uses to store its data to the actual []byte array that I then want to use subsequently?
Update (Go1): The vector package was removed on 2011-10-18.
This works fine for me. Have you initialised the first 200 elements of your vector? If you didn't they will probably be nil, which would be the source of your error.
package main

import vector "container/vector"
import "fmt"

func main() {
    vec := vector.New(0);
    buf := make([]byte, 10);
    vec.Push(buf);
    for i := 0; i < vec.Len(); i++ {
        el := vec.At(i).([]byte);
        fmt.Print(el, "\n");
    }
}
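Since container/vector is gone (see the Go1 note above), the modern equivalent is just a slice of byte slices; a sketch for current Go, not part of the original answer:
package main

import "fmt"

func main() {
    store := make([][]byte, 0, 200) // capacity 200, length 0: avoids the nil elements that vector.New(200) created

    buf := make([]byte, 10)
    store = append(store, buf)

    for _, el := range store {
        fmt.Println(el) // el is already []byte, no type assertion needed
    }
}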