How to add URL query parameters to HTTP GET request? - http

I am trying to add a query parameter to an HTTP GET request, but somehow the methods pointed out on SO (e.g. here) don't work.
I have the following piece of code:
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "/callback", nil)
        if err != nil {
            log.Fatal(err)
        }

        req.URL.Query().Add("code", "0xdead 0xbeef")
        req.URL.Query().Set("code", "0xdead 0xbeef")
        // this doesn't help
        //req.URL.RawQuery = req.URL.Query().Encode()

        fmt.Printf("URL %+v\n", req.URL)
        fmt.Printf("RawQuery %+v\n", req.URL.RawQuery)
        fmt.Printf("Query %+v\n", req.URL.Query())
    }
which prints:
URL /callback
RawQuery
Query map[]
Any suggestions on how to achieve this?
Playground example: https://play.golang.org/p/SYN4yNbCmo

Check the docs for req.URL.Query():
Query parses RawQuery and returns the corresponding values.
Since it "parses RawQuery and returns" the values what you get is just a copy of the URL query values, not a "live reference", so modifying that copy does nothing to the original query. In order to modify the original query you must assign to the original RawQuery.
    q := req.URL.Query()           // Get a copy of the query values.
    q.Add("code", "0xdead 0xbeef") // Add a new value to the set.
    req.URL.RawQuery = q.Encode()  // Encode and assign back to the original query.

    // URL /callback?code=0xdead+0xbeef
    // RawQuery code=0xdead+0xbeef
    // Query map[code:[0xdead 0xbeef]]
Note that your original attempt didn't work because it simply parses the query values, encodes them, and assigns them right back to the URL, without changing anything in between:
    req.URL.RawQuery = req.URL.Query().Encode()
    // This is basically a noop!
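Putting it together, the full program with just that fix applied looks like this:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "/callback", nil)
        if err != nil {
            log.Fatal(err)
        }

        q := req.URL.Query()
        q.Add("code", "0xdead 0xbeef")
        req.URL.RawQuery = q.Encode() // assign the encoded copy back

        fmt.Printf("URL %+v\n", req.URL)               // URL /callback?code=0xdead+0xbeef
        fmt.Printf("RawQuery %+v\n", req.URL.RawQuery) // RawQuery code=0xdead+0xbeef
        fmt.Printf("Query %+v\n", req.URL.Query())     // Query map[code:[0xdead 0xbeef]]
    }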

You can directly build the query params using url.Values
    func main() {
        req, err := http.NewRequest("GET", "/callback", nil)
        req.URL.RawQuery = url.Values{ // url.Values comes from the net/url package
            "code": {"0xdead 0xbeef"},
        }.Encode()
        ...
    }
Notice the extra braces because each key can have multiple values.
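To see why the braces matter, here's what a key with two values encodes to (a tiny illustration, assuming fmt and net/url are imported):

    // "code" has two values; Encode emits the key twice.
    fmt.Println(url.Values{
        "code": {"0xdead", "0xbeef"},
    }.Encode())
    // Output: code=0xdead&code=0xbeef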

Related

SQLite row returned via shell but not in Go

I have a SQLite query which returns expected results in the shell. However, when I run the same query in my Go program, no values are scanned.
Here is my query:
sqlite> select html, text from messages where id="17128ab240e7526e";
|Hey there
In this case, html is NULL and text has the string "Hey there". The table has other columns and indexes.
Here is my equivalent Go code:
    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        filename := "emails.db"
        conn, err := sql.Open("sqlite3", filename)
        if err != nil {
            log.Fatal(err)
        }
        row, err := conn.Query("select html, text from messages where id = ?", "17128ab240e7526e")
        defer row.Close()
        if err != nil {
            log.Fatal(err)
        }
        hasRow := row.Next()
        log.Println("Has row:", hasRow)
        var html, text string
        row.Scan(&html, &text)
        log.Println("HTML:", html)
        log.Println("TEXT:", text)
    }
The output is:
$ go run main.go
2020/07/05 21:10:14 Has row: true
2020/07/05 21:10:14 HTML:
2020/07/05 21:10:14 TEXT:
Interestingly, this only happens when the column html is null. If html is not null, then the data is returned as expected, regardless of whether or not the value of the text column is null.
What might explain this behavior?
Based on the comments I modified the program to use COALESCE and it works fine.
Key point: you cannot scan NULL directly into a string. (The error that row.Scan returns in that case was being discarded in the original code, which is why it failed silently.) You can get around this by using the COALESCE function in the query:
    row, err := conn.Query("select coalesce(html, 'is-null'), text from messages where id = ?", "17128ab240e7526e")
    if err != nil {
        log.Fatal(err)
    }
    defer row.Close()
Output:
arun#debian:stackoverflow$ go run main.go
2020/07/06 10:08:08 Has row: true
2020/07/06 10:08:08 HTML: is-null
2020/07/06 10:08:08 TEXT: Hey there
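An alternative that avoids changing the SQL is to scan the nullable column into a sql.NullString (part of database/sql). A minimal sketch of the relevant portion of the program above:

    var html sql.NullString // tolerates NULL, unlike a plain string
    var text string
    // Note: the original code discarded this error, which hid the problem.
    if err := row.Scan(&html, &text); err != nil {
        log.Fatal(err)
    }
    if html.Valid {
        log.Println("HTML:", html.String)
    } else {
        log.Println("HTML is NULL")
    }
    log.Println("TEXT:", text)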

r.PostForm and r.Form always empty

I have a very strange problem, and I'm either really blind or this is some kind of bug. I have the following http.Handler:
    func ServeHTTP(w http.ResponseWriter, r *http.Request) {
        err := r.ParseForm()
        if err != nil {
            log.Println("Error while parsing form data")
            return
        }
        log.Println("Printing r.PostForm:")
        for key, values := range r.PostForm { // range over map
            for _, value := range values { // range over []string
                log.Println(key, value)
            }
        }
        b, _ := ioutil.ReadAll(r.Body)
        s := string(b)
        log.Println("Printing body: ", s)
    }
Now, when sending a PUT request to the URL bound to this handler with the following form data:
Name=someName
Version=1.0.0
PLanguage=java
GitRepo=someRepo
This is ALWAYS the output:
Printing r.PostForm:
Printing body: Name=someName&Version=1.0.0&PLanguage=java&GitRepo=someRepo
I've been trying to find the cause for about two hours now and I just have no idea what the heck is wrong here. There is no error parsing the form data, but the r.PostForm map is always empty (I also tried r.Form, with the same result). For debugging I added the part that prints the body, just to make sure there actually is some data in there, and there is. I would really appreciate any help here. Thanks in advance!
You need to set the Content-Type header, in this case to application/x-www-form-urlencoded.
If no header is set, "application/octet-stream" is assumed according to RFC 2616.
Long story short, that is a binary format, so your body will not be parsed into the form.
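For illustration, here is a minimal client-side sketch that sets the header so ParseForm can populate r.PostForm (the target URL is a placeholder):

    package main

    import (
        "net/http"
        "net/url"
        "strings"
    )

    func main() {
        form := url.Values{
            "Name":      {"someName"},
            "Version":   {"1.0.0"},
            "PLanguage": {"java"},
            "GitRepo":   {"someRepo"},
        }
        // "http://localhost:8080/" stands in for the URL bound to the handler.
        req, err := http.NewRequest("PUT", "http://localhost:8080/", strings.NewReader(form.Encode()))
        if err != nil {
            panic(err)
        }
        // Without this header the body is treated as application/octet-stream
        // and ParseForm leaves r.PostForm empty.
        req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }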

Nested values in url.Values

I'm working on an API client and I need to be able to send a nested JSON structure with a client.PostForm request. The issue I'm encountering is this:
    reqBody := url.Values{
        "method": {"server-method"},
        "arguments": {
            "download-dir": {"/path/to/downloads/dir"},
            "filename":     {variableWithURL},
            "paused":       {"false"},
        },
    }
When I try to go build this, I get the following errors:
./transmission.go:17: syntax error: unexpected :, expecting }
./transmission.go:24: non-declaration statement outside function body
./transmission.go:26: non-declaration statement outside function body
./transmission.go:27: non-declaration statement outside function body
./transmission.go:29: non-declaration statement outside function body
./transmission.go:38: non-declaration statement outside function body
./transmission.go:39: syntax error: unexpected }
I'm wondering what the correct way to create a nested set of values is in this scenario. Thanks in advance!
I was able to figure this out on my own! The answer is to struct all-the-things!
    type Command struct {
        Method    string    `json:"method,omitempty"`
        Arguments Arguments `json:"arguments,omitempty"`
    }

    type Arguments struct {
        DownloadDir string `json:"download-dir,omitempty"`
        Filename    string `json:"filename,omitempty"`
        Paused      bool   `json:"paused,omitempty"`
    }
Then, when creating your request:

    jsonBody, err := json.Marshal(reqBody) // reqBody is a Command
    if err != nil {
        return false
    }
    req, err := http.NewRequest("POST", c.Url, strings.NewReader(string(jsonBody)))
Hope this helps!
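For completeness, here's a sketch of actually sending that marshalled body (the Client type with a Url field is hypothetical, standing in for whatever c is above; the Content-Type value is the standard one for JSON; assumes bytes, encoding/json, and net/http are imported):

    type Client struct {
        Url string // hypothetical; matches c.Url in the snippet above
    }

    func (c *Client) sendCommand(cmd Command) error {
        jsonBody, err := json.Marshal(cmd)
        if err != nil {
            return err
        }
        req, err := http.NewRequest("POST", c.Url, bytes.NewReader(jsonBody))
        if err != nil {
            return err
        }
        // Tell the server the body is JSON, not form data.
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        return nil
    }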
You are not using url.Values properly. According to the source code (url package, url.go):
    // Values maps a string key to a list of values.
    // It is typically used for query parameters and form values.
    // Unlike in the http.Header map, the keys in a Values map
    // are case-sensitive.
    type Values map[string][]string
But arguments does not comply with that definition, because its value is a nested object rather than a list of strings.
I used NewRequest as Connor mentions in his answer but using a struct and then marshalling it seems an unnecessary step to me.
I passed my nested json string straight to strings.NewReader:
    import (
        "net/http"
        "strings"
    )

    reqBody := strings.NewReader(`{
        "method": "server-method",
        "arguments": {
            "download-dir": "/path/to/downloads/dir",
            "paused": false
        }
    }`)

    req, err := http.NewRequest("POST", "https://httpbin.org/post", reqBody)
Hope it helps those who are stuck with Go's http PostForm, which only accepts url.Values as an argument, while url.Values cannot express nested JSON.

downloading files with goroutines?

I'm new to Go and I'm learning how to work with goroutines.
I have a function that downloads images:
    func imageDownloader(uri string, filename string) {
        fmt.Println("starting download for ", uri)
        outFile, err := os.Create(filename)
        defer outFile.Close()
        if err != nil {
            os.Exit(1)
        }
        client := &http.Client{}
        req, err := http.NewRequest("GET", uri, nil)
        resp, err := client.Do(req)
        defer resp.Body.Close()
        if err != nil {
            panic(err)
        }
        header := resp.ContentLength
        bar := pb.New(int(header))
        rd := bar.NewProxyReader(resp.Body)
        // and copy from reader
        io.Copy(outFile, rd)
    }
When I call it by itself as part of another function, it downloads images completely and there is no truncated data.
However, when I try to modify it to make it a goroutine, images are often truncated or zero length files.
    func imageDownloader(uri string, filename string, wg *sync.WaitGroup) {
        ...
        io.Copy(outFile, rd)
        wg.Done()
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(1)
        go imageDownloader(url, file, &wg)
        wg.Wait()
    }
Am I using WaitGroups incorrectly? What could cause this and how can I fix it?
Update:
Solved it. I had placed the wg.Add() call outside of a loop. :(
While I'm not sure exactly what's causing your issue, here are two options for how to get it back into working order.
First, following the example of how to use waitgroups from the sync library, try calling defer wg.Done() at the beginning of your function to ensure that even if the goroutine ends unexpectedly, the waitgroup is still properly decremented.
Second, io.Copy returns an error that you're not checking. That's not great practice anyway, but in your particular case it's preventing you from seeing whether there is indeed an error in the copying routine. Check it and deal with it appropriately. It also returns the number of bytes written, which might help you as well.
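Concretely, a sketch combining both suggestions (the progress-bar proxy reader from the original is omitted for brevity; errors are logged rather than calling os.Exit or panic so one failed download doesn't kill the rest; assumes io, log, net/http, os, and sync are imported):

    func imageDownloader(uri, filename string, wg *sync.WaitGroup) {
        defer wg.Done() // decrement even if we bail out early

        outFile, err := os.Create(filename)
        if err != nil {
            log.Println("create:", err)
            return
        }
        defer outFile.Close()

        resp, err := http.Get(uri)
        if err != nil {
            log.Println("get:", err)
            return
        }
        defer resp.Body.Close()

        // io.Copy reports both the byte count and any error mid-copy;
        // ignoring the error is what hid the truncation.
        n, err := io.Copy(outFile, resp.Body)
        if err != nil {
            log.Println("copy:", err)
            return
        }
        log.Printf("wrote %d bytes to %s", n, filename)
    }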
Your example doesn't have anything obviously wrong with its use of WaitGroups. As long as you are calling wg.Add() with the same number as the number of goroutines you launch, or incrementing it by 1 every time you start a new goroutine, that should be correct.
However, you call os.Exit and panic for certain error conditions in the goroutine, so if you have more than one of these running, a failure in any one of them will terminate all of them, regardless of the use of WaitGroups. If it's failing without a panic message, I would take a look at the os.Exit(1) line.
It would also be good practice in Go to use defer wg.Done() at the start of your function, so that even if an error occurs, the goroutine still decrements its counter. That way your main thread won't hang on completion if one of the goroutines returns an error.
One change I would make in your example is to leverage defer when you are done: I think defer wg.Done() should be the first statement in your function.
I like WaitGroup's simplicity. However, I do not like that we need to pass a reference to it into the goroutine, because that mixes the concurrency logic with the business logic.
So I came up with this generic function to solve this problem for me:
    // Parallelize parallelizes the function calls
    func Parallelize(functions ...func()) {
        var waitGroup sync.WaitGroup
        waitGroup.Add(len(functions))
        defer waitGroup.Wait()

        for _, function := range functions {
            go func(copy func()) {
                defer waitGroup.Done()
                copy()
            }(function)
        }
    }
So your example could be solved this way:
    func imageDownloader(uri string, filename string) {
        ...
        io.Copy(outFile, rd)
    }

    func main() {
        functions := []func(){}
        list := make([]Object, 5)
        for _, object := range list {
            object := object // capture the loop variable for the closure
            functions = append(functions, func() {
                imageDownloader(object.uri, object.filename)
            })
        }
        Parallelize(functions...)
        fmt.Println("Done")
    }
If you would like to use it, you can find it here https://github.com/shomali11/util

Using reflection with structs to build generic handler function

I have some trouble building a function that can dynamically use parametrized structs. For that reason my code has 20+ functions that are similar except for basically one type that gets used. Most of my experience is with Java, where I'd just develop basic generic functions, or use plain Object as a parameter to the function (and reflection from that point on). I need something similar, using Go.
I have several types like:
    // The List structs are mostly needed for json marshalling
    type OrangeList struct {
        Oranges []Orange
    }

    type BananaList struct {
        Bananas []Banana
    }

    type Orange struct {
        Orange_id string
        Field_1   int
        // The fields are different for different types, I am simplifying the code example
    }

    type Banana struct {
        Banana_id string
        Field_1   int
        // The fields are different for different types, I am simplifying the code example
    }
Then I have function, basically for each list type:
    // In the end there are 20+ of these, the only difference is basically in two types!
    // This is very un-DRY!
    func buildOranges(rows *sqlx.Rows) ([]byte, error) {
        oranges := OrangeList{} // This type changes
        for rows.Next() {
            orange := Orange{} // This type changes
            err := rows.StructScan(&orange) // This can handle each case already, could also use reflect myself too
            checkError(err, "rows.Scan")
            oranges.Oranges = append(oranges.Oranges, orange)
        }
        checkError(rows.Err(), "rows.Err")
        jsontext, err := json.Marshal(oranges)
        return jsontext, err
    }
Yes, I could change the sql library to use more intelligent ORM or framework, but that's besides the point. I want to learn on how to build generic function that can handle similar function for all my different types.
I got this far, but it still doesn't work properly (target isn't the expected struct, I think):
    func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
        tgtValueOf := reflect.ValueOf(tgt)
        tgtType := tgtValueOf.Type()
        targets := reflect.SliceOf(tgtValueOf.Type())
        for rows.Next() {
            target := reflect.New(tgtType)
            err := rows.StructScan(&target) // At this stage target still isn't a 1:1 similar struct so the StructScan fails... It's some perverted "Value" object instead. Meh.
            // Removed appending to the list because the solutions for that would be similar
            checkError(err, "rows.Scan")
        }
        checkError(rows.Err(), "rows.Err")
        jsontext, err := json.Marshal(targets)
        return jsontext, err
    }
So umm, I would need to give the list type, and the vanilla type as parameters, then build one of each, and the rest of my logic would be probably fixable quite easily.
Turns out there's an sqlx.StructScan(rows, &destSlice) function that will do your inner loop for you, given a slice of the appropriate type. The sqlx docs refer to caching results of reflection operations, so it may have some additional optimizations compared to writing your own.
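For the oranges case above, that shrinks the builder to something like this sketch:

    func buildOranges(rows *sqlx.Rows) ([]byte, error) {
        oranges := OrangeList{}
        // One call replaces the whole rows.Next()/StructScan loop.
        if err := sqlx.StructScan(rows, &oranges.Oranges); err != nil {
            return nil, err
        }
        return json.Marshal(oranges)
    }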
Sounds like the immediate question you're actually asking is "how do I get something out of my reflect.Value that rows.StructScan will accept?" And the direct answer is target.Interface(); it returns an interface{} wrapping an *Orange that you can pass directly to StructScan (no additional & operation needed). Then, I think targets = reflect.Append(targets, reflect.Indirect(target)) will turn your target into a reflect.Value representing an Orange and append it to the slice (for that to work, targets must be a slice value made with reflect.MakeSlice, not the reflect.Type that reflect.SliceOf returns). targets.Interface() should get you an interface{} representing an []Orange that json.Marshal understands. I say all these 'should's and 'I think's because I haven't tried that route.
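Spelled out, that route looks something like this sketch (untested, as noted above; it marshals a bare slice rather than the wrapper list types):

    func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
        tgtType := reflect.TypeOf(tgt) // e.g. Orange
        // A real slice value this time, not just a slice type.
        targets := reflect.MakeSlice(reflect.SliceOf(tgtType), 0, 0)
        for rows.Next() {
            target := reflect.New(tgtType) // a reflect.Value holding an *Orange
            // target.Interface() hands StructScan the *Orange it expects.
            if err := rows.StructScan(target.Interface()); err != nil {
                return nil, err
            }
            // Dereference *Orange to Orange and append it to the slice.
            targets = reflect.Append(targets, reflect.Indirect(target))
        }
        if err := rows.Err(); err != nil {
            return nil, err
        }
        // targets.Interface() is an interface{} holding an []Orange.
        return json.Marshal(targets.Interface())
    }

It would be called as, e.g., buildWhatever(rows, Orange{}).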
Reflection, in general, is verbose and slow. Sometimes it's the best or only way to get something done, but it's often worth looking for a way to get your task done without it when you can.
So, if it works in your app, you can also convert Rows straight to JSON, without going through intermediate structs. Here's a sample program (requires sqlite3 of course) that turns sql.Rows into map[string]string and then into JSON. (Note it doesn't try to handle NULL, represent numbers as JSON numbers, or generally handle anything that won't fit in a map[string]string.)
    package main

    import (
        _ "code.google.com/p/go-sqlite/go1/sqlite3"
        "database/sql"
        "encoding/json"
        "os"
    )

    func main() {
        db, err := sql.Open("sqlite3", "foo")
        if err != nil {
            panic(err)
        }
        tryQuery := func(query string, args ...interface{}) *sql.Rows {
            rows, err := db.Query(query, args...)
            if err != nil {
                panic(err)
            }
            return rows
        }
        tryQuery("drop table if exists t")
        tryQuery("create table t(i integer, j integer)")
        tryQuery("insert into t values(?, ?)", 1, 2)
        tryQuery("insert into t values(?, ?)", 3, 1)

        // now query and serialize
        rows := tryQuery("select * from t")
        names, err := rows.Columns()
        if err != nil {
            panic(err)
        }
        // vals stores the values from one row
        vals := make([]interface{}, 0, len(names))
        for range names {
            vals = append(vals, new(string))
        }
        // rowMaps stores all rows
        rowMaps := make([]map[string]string, 0)
        for rows.Next() {
            if err := rows.Scan(vals...); err != nil {
                panic(err)
            }
            // now make value list into name=>value map
            currRow := make(map[string]string)
            for i, name := range names {
                currRow[name] = *(vals[i].(*string))
            }
            // accumulating rowMaps is the easy way out
            rowMaps = append(rowMaps, currRow)
        }
        json, err := json.Marshal(rowMaps)
        if err != nil {
            panic(err)
        }
        os.Stdout.Write(json)
    }
In theory, you could build this to do fewer allocations by reusing the same row map each time and using a json.Encoder to append each row's JSON to a buffer. You could go a step further and not use a row map at all, just the lists of names and values. I should say I haven't compared the speed against a reflect-based approach, though I know reflect is slow enough that it might be worth comparing them if you can put up with either strategy.
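For what it's worth, here's a sketch of that streaming idea; for simplicity it emits newline-delimited JSON objects rather than a single array (it would replace the accumulation loop in the program above):

    // Reuse one map and emit each row as it's read.
    enc := json.NewEncoder(os.Stdout)
    currRow := make(map[string]string, len(names))
    for rows.Next() {
        if err := rows.Scan(vals...); err != nil {
            panic(err)
        }
        for i, name := range names {
            currRow[name] = *(vals[i].(*string))
        }
        if err := enc.Encode(currRow); err != nil { // one JSON object per line
            panic(err)
        }
    }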
