Labstack Echo v4 has recently been updated to include a ModifyResponse hook (à la Go's httputil.ReverseProxy).
Most of the examples of using this seem to leverage ioutil.ReadAll().
For example:
func UpdateResponse(r *http.Response) error {
    b, _ := ioutil.ReadAll(r.Body)
    buf := bytes.NewBufferString("Monkey")
    buf.Write(b)
    r.Body = ioutil.NopCloser(buf)
    r.Header["Content-Length"] = []string{fmt.Sprint(buf.Len())}
    return nil
}
What I am looking to do is avoid waiting for the entire response (as ReadAll does) and instead monitor the stream for certain content (e.g. class='blue'), replacing it with different text (e.g. class='blue-green').
How can this be done with streams, efficiently and with as few allocations as possible?
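One possible streaming approach, as a minimal sketch: wrap the body with io.Pipe and rewrite matches as they flow through, holding back len(old)-1 bytes between reads so matches that span chunk boundaries are still caught. The token literals and buffer size are illustrative assumptions, and this needs bytes and io in addition to net/http:

func UpdateResponse(r *http.Response) error {
    old := []byte("class='blue'")
    repl := []byte("class='blue-green'")
    src := r.Body
    pr, pw := io.Pipe()
    go func() {
        defer src.Close()
        keep := len(old) - 1 // bytes that could start a match split across reads
        buf := make([]byte, 0, 32*1024)
        chunk := make([]byte, 32*1024)
        for {
            n, rerr := src.Read(chunk)
            buf = append(buf, chunk[:n]...)
            // Rewrite every complete match currently buffered.
            for {
                i := bytes.Index(buf, old)
                if i < 0 {
                    break
                }
                pw.Write(buf[:i])
                pw.Write(repl)
                buf = append(buf[:0], buf[i+len(old):]...)
            }
            // Emit all but the last keep bytes; those may begin a match
            // that continues in the next chunk.
            if len(buf) > keep {
                pw.Write(buf[:len(buf)-keep])
                buf = append(buf[:0], buf[len(buf)-keep:]...)
            }
            if rerr != nil {
                pw.Write(buf) // flush the held-back tail
                if rerr == io.EOF {
                    pw.Close()
                } else {
                    pw.CloseWithError(rerr)
                }
                return
            }
        }
    }()
    r.Body = pr
    // The body length changes as text is rewritten, so drop Content-Length
    // and let the server fall back to chunked encoding.
    r.Header.Del("Content-Length")
    return nil
}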
I am working on a recommendation engine with Apache PredictionIO. In front of the event server I have a Go API that listens for events from the customer and from an importer. In one particular case, when the customer uses the importer, I collect the imported identities and send them as JSON from the importer API to the Go API. For example, if a user imports a CSV containing 45000 rows, I send those 45000 identities to the Go API as {"barcodes":[...]}. The PredictionIO event server expects data in a particular shape.
type ItemEvent struct {
    Event      string              `json:"event"`
    EntityType string              `json:"entityType"`
    EntityId   string              `json:"entityId"`
    Properties map[string][]string `json:"properties"`
    EventTime  time.Time           `json:"eventTime"`
}

type ItemBulkEvent struct {
    Event     string    `json:"event"`
    Barcodes  []string  `json:"barcodes"`
    EventTime time.Time `json:"eventTime"`
}
ItemEvent is the final shape of the data I send to the event server from the Go API. ItemBulkEvent is the data I receive from the importer API.
func HandleItemBulkEvent(w http.ResponseWriter, r *http.Request) {
    var itemBulk model.ItemBulkEvent
    err := decode(r, &itemBulk)
    if err != nil {
        log.Fatalln("handleitembulkevent -> ", err)
        util.RespondWithError(w, 400, err.Error())
    } else {
        var item model.ItemEvent
        item.EventTime = itemBulk.EventTime
        item.EntityType = "item"
        item.Event = itemBulk.Event
        itemList := make([]model.ItemEvent, 0, 50)
        for index, barcode := range itemBulk.Barcodes {
            item.EntityId = barcode
            if index > 0 && (index%49) == 0 {
                itemList = append(itemList, item)
                go sendBulkItemToEventServer(w, r, itemList)
                itemList = itemList[:0]
            } else if index == len(itemBulk.Barcodes)-1 {
                itemList = append(itemList, item)
                itemList = itemList[:((len(itemBulk.Barcodes) - 1) % 49)]
                go sendBulkItemToEventServer(w, r, itemList) // line 116
                itemList = itemList[:0]
            } else {
                itemList = append(itemList, item)
            }
        }
        util.RespondWithJSON(w, 200, "OK")
    }
}
HandleItemBulkEvent is the handler function for bulk updates. At this point I should mention PredictionIO's batch uploads: via the REST API, the event server accepts 50 events per request. So I created a list with a capacity of 50 and a single item; I reuse the same item, changing only its identity (the barcode) on every turn, and append it to the list. At every 50th item I call a function that sends the list to the event server, and after that I clear the list, and so on.
func sendBulkItemToEventServer(w http.ResponseWriter, r *http.Request, itemList []model.ItemEvent) {
    jsonedItem, err := json.Marshal(itemList)
    if err != nil {
        log.Fatalln("err marshalling -> ", err.Error())
    }
    // todo: change url to event server url
    resp, err2 := http.Post(fmt.Sprintf("http://localhost:7070/batch/events.json?accessKey=%s",
        r.Header.Get("Authorization")),
        "application/json",
        bytes.NewBuffer(jsonedItem))
    if err2 != nil {
        log.Fatalln("err http -> ", err.Error()) // line 141
    }
    defer resp.Body.Close()
}
sendBulkItemToEventServer marshals the incoming item list and makes a POST request to PredictionIO's event server. With around 5000 items this works well, but with 45000 items the application crashes with the error below.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xc05938]
goroutine 620 [running]:
api-test/service.sendBulkItemToEventServer(0x1187860, 0xc00028e0e0, 0xc00029c200, 0xc00011c000, 0x31, 0x32)
/home/kadirakinkorkunc/Desktop/playground/recommendation-engine/pio_api/service/CollectorService.go:141 +0x468
created by api-test/service.HandleItemBulkEvent
/home/kadirakinkorkunc/Desktop/playground/recommendation-engine/pio_api/service/CollectorService.go:116 +0x681
Debugger finished with exit code 0
Any idea how I can solve this problem?
edit: as Burak Serdar mentioned in the answers, I fixed the err/err2 confusion and the data race problem by marshalling before sending. Now it gives me the real error (resp, err2), I guess.
2020/08/03 15:11:55 err http -> Post "http://localhost:7070/batch/events.json?accessKey=FJbGODbGzxD-CoLTdwTN2vwsuEEBJEZc4efrSUc6ekV1qUYAWPu5anDTyMGDoNq1": read tcp 127.0.0.1:54476->127.0.0.1:7070: read: connection reset by peer
Any idea on this?
There are several errors in your program. The runtime error occurs because you check whether err2 is non-nil, but then print err, not err2. err is nil, hence the nil pointer dereference.
This also means err2 is not nil, so you should look at what that error actually is.
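That is, the check should report err2 (a minimal sketch of the corrected branch):

if err2 != nil {
    log.Println("err http -> ", err2) // err2, not err
    return
}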
You mentioned you are sending messages in batches of 50, but that implementation is wrong. You add elements to itemList, start a goroutine with that itemList, then truncate it and start filling it again. That is a data race: your goroutines will see itemList instances that are still being modified by the handler. Instead of truncating, simply create a new itemList each time you submit one to a goroutine, so each goroutine has its own copy.
If you want to keep using the same slice, you can marshal the slice, and then pass the JSON message to the goroutine instead of the slice.
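A minimal sketch of the fresh-slice approach (the batch size of 50 and the trailing flush follow the question's description):

itemList := make([]model.ItemEvent, 0, 50)
for _, barcode := range itemBulk.Barcodes {
    item.EntityId = barcode
    itemList = append(itemList, item)
    if len(itemList) == 50 {
        go sendBulkItemToEventServer(w, r, itemList)
        itemList = make([]model.ItemEvent, 0, 50) // new slice: nothing shared with the goroutine
    }
}
if len(itemList) > 0 { // flush the final partial batch
    go sendBulkItemToEventServer(w, r, itemList)
}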
The error you are getting is the one sent by the server you are making the request to. Check this out to understand more about that error.
Most likely the following for loop
for index, barcode := range itemBulk.Barcodes{
has too many iterations, and because you are using a separate goroutine for each request, all the requests happen concurrently. That either overloads the server or makes it deliberately close connections.
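One common way to bound the concurrency is a buffered-channel semaphore; a sketch, where batches and the limit of 8 are assumptions:

sem := make(chan struct{}, 8) // at most 8 POSTs in flight
var wg sync.WaitGroup
for _, batch := range batches {
    wg.Add(1)
    sem <- struct{}{} // acquire a slot
    go func(b []model.ItemEvent) {
        defer wg.Done()
        defer func() { <-sem }() // release the slot
        sendBulkItemToEventServer(w, r, b)
    }(batch)
}
wg.Wait()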
I have some data in the form of a map, and I'm converting it to []byte and signing it. When verifying, it returns true even when the data used for verifying and the data used for signing are different.
Here is what I did:
func main() {
    n, _ := ioutil.ReadFile("privatekey")
    private_key, _ := x509.ParseECPrivateKey(n)
    public_key := private_key.PublicKey

    data := map[string]string{
        "data1": "somestring",
        "data2": "12312",
        "data3": "34fs4",
    }
    json_data, _ := json.Marshal(data)

    data_2 := map[string]string{
        "data1": "somestring",
        "data2": "13312",
        "data4": "fh34",
    }
    json_data_2, _ := json.Marshal(data_2)

    r, s, _ := ecdsa.Sign(rand.Reader, private_key, json_data)
    verifystatus := ecdsa.Verify(&public_key, json_data_2, r, s)
    fmt.Println(verifystatus)
}
It prints true. I tried changing the data, and it seems that if json_data and json_data_2 have their first 32 bytes in common, then Verify returns true.
Is there some limit on the length of the byte slice I can pass to ecdsa.Verify()? If so, how can I use it for larger data?
Go's ecdsa.Sign and ecdsa.Verify functions expect to be given the output of a cryptographic hash function, rather than the message itself. So you are correct that only the first 32 bytes are being examined in this case.
To resolve the problem, first hash the messages using a cryptographic hash function such as SHA-2.
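A minimal sketch of that fix, reusing the question's variables and hashing with SHA-256 (import crypto/sha256):

hash1 := sha256.Sum256(json_data)   // 32-byte digest of the signed message
hash2 := sha256.Sum256(json_data_2) // digest of the different message

r, s, _ := ecdsa.Sign(rand.Reader, private_key, hash1[:])
fmt.Println(ecdsa.Verify(&public_key, hash1[:], r, s)) // true: same digest
fmt.Println(ecdsa.Verify(&public_key, hash2[:], r, s)) // false: digests differ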
I have a very strange problem, and I'm either really blind or this is some kind of a bug. I have the following http.Handler:
func ServeHTTP(w http.ResponseWriter, r *http.Request) {
    err := r.ParseForm()
    if err != nil {
        log.Println("Error while parsing form data")
        return
    }
    log.Println("Printing r.PostForm:")
    for key, values := range r.PostForm { // range over map
        for _, value := range values { // range over []string
            log.Println(key, value)
        }
    }
    b, _ := ioutil.ReadAll(r.Body)
    s := string(b)
    log.Println("Printing body: ", s)
}
Now, when sending a PUT request to the URL bound to this handler with the following form data:
Name=someName
Version=1.0.0
PLanguage=java
GitRepo=someRepo
This is ALWAYS the output:
Printing r.PostForm:
Printing body: Name=someName&Version=1.0.0&PLanguage=java&GitRepo=someRepo
I've been trying to find the cause for about two hours now, and I just have no idea what is wrong here. There is no error parsing the form data, but the r.PostForm map is always empty (I also tried r.Form, with the same result). For debugging I added the part that prints the body, just to make sure there actually is some data in there, and there is. I would really appreciate any help. Thanks in advance!
You need to set the 'Content-Type' header.
If no header is set, "application/octet-stream" is used, according to RFC 2616.
Long story short, that is a binary format, so your body will not be parsed into the form.
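A minimal sketch of the client side (the URL is a placeholder):

form := url.Values{}
form.Set("Name", "someName")
form.Set("Version", "1.0.0")
form.Set("PLanguage", "java")
form.Set("GitRepo", "someRepo")

req, err := http.NewRequest(http.MethodPut, "http://localhost:8080/endpoint",
    strings.NewReader(form.Encode()))
if err != nil {
    log.Fatal(err)
}
// Without this header the server assumes application/octet-stream and
// ParseForm leaves r.PostForm empty.
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

resp, err := http.DefaultClient.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()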
Fairly new to Go. Essentially, in the actual code I'm writing, I plan to read from a file that contains environment variables, e.g. API_KEY=XYZ, which means I can keep them out of version control. The solution below 'works', but I feel like there is probably a better way of doing it.
The end goal is to be able to access the elements from the file like so:
m["API_KEY"], and that would print XYZ. This may even already exist and I'm reinventing the wheel. I saw that Go has environment variables, but that didn't seem to be what I was after specifically.
So any help is appreciated.
Playground
Code:
package main

import (
    "fmt"
    "strings"
)

var m = make(map[string]string)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    arr := strings.Split(text, "\n")
    for _, value := range arr {
        tmp := strings.Split(value, "=")
        m[strings.TrimSpace(tmp[0])] = strings.TrimSpace(tmp[1])
    }
    fmt.Println(m)
}
First, I would recommend reading this related question: How to handle configuration in Go
Next, I would really consider storing your configuration in another format, because what you propose isn't a standard. It's close to Java's property file format (.properties), but even property files may contain Unicode sequences, and since your code doesn't handle Unicode sequences at all, it is not a valid .properties parser.
Instead, I would recommend JSON: you can easily parse it with Go or with any other language, there are many tools to edit JSON texts, and it is still human-friendly.
Going with the JSON format, decoding it into a map is just one function call: json.Unmarshal(). It could look like this:
text := `{"Var1":"Value1", "Var2":"Value2", "Var3":"Value3"}`

var m map[string]string
if err := json.Unmarshal([]byte(text), &m); err != nil {
    fmt.Println("Invalid config file:", err)
    return
}
fmt.Println(m)
Output (try it on the Go Playground):
map[Var1:Value1 Var2:Value2 Var3:Value3]
The json package handles formatting and escaping for you, so you don't have to worry about any of that, and it detects and reports errors for you. JSON is also more flexible: your config may contain numbers, texts, arrays, etc. All of this comes for "free" just because you chose the JSON format.
Another popular format for configuration is YAML, but the Go standard library does not include a YAML parser. See the Go implementation at github.com/go-yaml/yaml.
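For completeness, a minimal sketch with that package (assuming the gopkg.in/yaml.v2 import path):

text := `Var1: Value1
Var2: Value2
Var3: Value3`

var m map[string]string
if err := yaml.Unmarshal([]byte(text), &m); err != nil {
    fmt.Println("Invalid config file:", err)
    return
}
fmt.Println(m) // map[Var1:Value1 Var2:Value2 Var3:Value3]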
If you don't want to change your format, then I would just use the code you posted, because it does exactly what you want: it processes the input line by line and parses a name=value pair from each line, in a clear and obvious way. Using a CSV (or any other) reader for this purpose is bad because readers hide what's under the hood; they intentionally and rightfully hide format-specific details and transformations. A CSV reader is a CSV reader first: even if you change the comma symbol, it will interpret certain escape sequences and may give you different data than what you see in a plain text editor. That is unintended behavior from your point of view, but hey, your input is not in CSV format and yet you asked a reader to interpret it as CSV!
One improvement I would add to your solution is the use of bufio.Scanner. It reads input line by line and handles different styles of newline sequences. It could look like this:
text := `Var1=Value1
Var2=Value2
Var3=Value3`

scanner := bufio.NewScanner(strings.NewReader(text))
m := map[string]string{}
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    fmt.Println("Error encountered:", err)
}
fmt.Println(m)
Output is the same. Try it on the Go Playground.
Using bufio.Scanner has another advantage: bufio.NewScanner() accepts an io.Reader, the general interface for "all things that are a source of bytes". This means that if your config is stored in a file, you don't even have to read the whole config into memory. Just open the file, e.g. with os.Open(), which returns an *os.File that also implements io.Reader, and pass that value directly to bufio.NewScanner(); the bufio.Scanner will then read from the file rather than from an in-memory buffer as in the example above.
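A minimal sketch of that file-based variant (the "config.env" filename is a placeholder):

f, err := os.Open("config.env")
if err != nil {
    log.Fatal(err)
}
defer f.Close()

m := map[string]string{}
scanner := bufio.NewScanner(f) // streams the file line by line; no full read into memory
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    log.Fatal(err)
}
fmt.Println(m["API_KEY"])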
1- You may read it all with just one function call, r.ReadAll(), using csv.NewReader from encoding/csv with:
r.Comma = '='
r.TrimLeadingSpace = true
The result is [][]string and input order is preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true

    all, err := r.ReadAll()
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}
output:
[[Var1 Value1] [Var2 Value2] [Var3 Value3]]
2- You may fine-tune ReadAll() to convert the output to a map, but then the order is not preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true

    all, err := ReadAll(r)
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}

func ReadAll(r *csv.Reader) (map[string]string, error) {
    m := make(map[string]string)
    for {
        tmp, err := r.Read()
        if err == io.EOF {
            return m, nil
        }
        if err != nil {
            return nil, err
        }
        m[tmp[0]] = tmp[1]
    }
}
output:
map[Var2:Value2 Var3:Value3 Var1:Value1]
I'm new to Go and I'm learning how to work with goroutines.
I have a function that downloads images:
func imageDownloader(uri string, filename string) {
    fmt.Println("starting download for ", uri)
    outFile, err := os.Create(filename)
    defer outFile.Close()
    if err != nil {
        os.Exit(1)
    }
    client := &http.Client{}
    req, err := http.NewRequest("GET", uri, nil)
    resp, err := client.Do(req)
    defer resp.Body.Close()
    if err != nil {
        panic(err)
    }
    header := resp.ContentLength
    bar := pb.New(int(header))
    rd := bar.NewProxyReader(resp.Body)
    // and copy from reader
    io.Copy(outFile, rd)
}
When I call it by itself as part of another function, it downloads images completely and there is no truncated data.
However, when I try to modify it to make it a goroutine, images are often truncated or zero length files.
func imageDownloader(uri string, filename string, wg *sync.WaitGroup) {
    ...
    io.Copy(outFile, rd)
    wg.Done()
}

func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    go imageDownloader(url, file, &wg)
    wg.Wait()
}
Am I using WaitGroups incorrectly? What could cause this and how can I fix it?
Update:
Solved it. I had placed the wg.Add() call outside of a loop. :(
While I'm not sure exactly what's causing your issue, here are two options for getting it back into working order.
First, following the example of how to use wait groups from the sync library, try calling defer wg.Done() at the beginning of your function to ensure that even if the goroutine ends unexpectedly, the wait group is properly decremented.
Second, io.Copy returns an error that you're not checking. That's not great practice anyway, but in your particular case it prevents you from seeing whether there is indeed an error in the copying routine. Check it and deal with it appropriately. It also returns the number of bytes written, which might help you as well.
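A minimal sketch combining both suggestions (the logging is my own choice):

func imageDownloader(uri string, filename string, wg *sync.WaitGroup) {
    defer wg.Done() // decrements the counter even if we return early

    // ... create the file and issue the request as before ...

    n, err := io.Copy(outFile, rd)
    if err != nil {
        log.Printf("download %s failed after %d bytes: %v", uri, n, err)
        return
    }
    log.Printf("wrote %d bytes to %s", n, filename)
}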
Your example doesn't have anything obviously wrong with its use of WaitGroups. As long as you call wg.Add() with the same number as the number of goroutines you launch, or increment it by 1 every time you start a new goroutine, it should be correct.
However, you call os.Exit and panic on certain error conditions in the goroutine, so if you have more than one of these running, a failure in any one of them will terminate all of them, regardless of the use of WaitGroups. If it's failing without a panic message, I would take a look at the os.Exit(1) line.
It would also be good practice in Go to use defer wg.Done() at the start of your function, so that even if an error occurs, the goroutine still decrements its counter. That way your main thread won't hang on completion if one of the goroutines returns an error.
One change I would make in your example is to leverage defer when calling Done. This defer wg.Done() should be the first statement in your function.
I like WaitGroup's simplicity. However, I do not like that we need to pass a reference to it into the goroutine, because that means the concurrency logic gets mixed with your business logic.
So I came up with this generic function to solve this problem for me:
// Parallelize parallelizes the function calls
func Parallelize(functions ...func()) {
    var waitGroup sync.WaitGroup
    waitGroup.Add(len(functions))
    defer waitGroup.Wait()

    for _, function := range functions {
        go func(copy func()) {
            defer waitGroup.Done()
            copy()
        }(function)
    }
}
So your example could be solved this way:
func imageDownloader(uri string, filename string) {
    ...
    io.Copy(outFile, rd)
}

func main() {
    functions := []func(){}
    list := make([]Object, 5)
    for _, object := range list {
        object := object // capture a per-iteration copy for the closure
        functions = append(functions, func() {
            imageDownloader(object.uri, object.filename)
        })
    }
    Parallelize(functions...)
    fmt.Println("Done")
}
If you would like to use it, you can find it here https://github.com/shomali11/util