Failing to construct an HTTP GET request in Go

I'm able to get an HTTP GET request to work like so:
resp, err := http.Get("https://services.nvd.nist.gov/rest/json/cves/1.0/?modStartDate=2021-10-29T12%3A00%3A00%3A000%20UTC-00%3A00&modEndDate=2021-10-30T00%3A00%3A00%3A000%20UTC-00%3A00&resultsPerPage=5000")
I wanted an easier way to construct the query parameters, so I created this:
req, err := http.NewRequest("GET", "https://services.nvd.nist.gov/rest/json/cves/1.0/", nil)
if err != nil {
    fmt.Printf("Error: %v\n", err)
    os.Exit(1)
}
q := req.URL.Query()
q.Set("modStartDate", "2021-10-29T12:00:00:000 UTC-00:00")
q.Set("modEndDate", "2021-10-30T00:00:000 UTC-00:00")
q.Set("resultsPerPage", "5000")
req.URL.RawQuery = q.Encode()
client := http.Client{}
resp, err := client.Do(req)
The response status is a 404. It's not clear to me what I'm missing. What is the first GET request doing that I'm missing in the second one?
For reference, the API I'm working with:
https://nvd.nist.gov/developers/vulnerabilities

As @JimB noted, comparing your original raw query with your generated query shows the formatting issue:
origURL := "https://services.nvd.nist.gov/rest/json/cves/1.0/?modStartDate=2021-10-29T12%3A00%3A00%3A000%20UTC-00%3A00&modEndDate=2021-10-30T00%3A00%3A00%3A000%20UTC-00%3A00&resultsPerPage=5000"
u, _ := url.Parse(origURL)
q, _ := url.ParseQuery(u.RawQuery)
q2 := url.Values{}
q2.Set("modStartDate", "2021-10-29T12:00:00:000 UTC-00:00")
q2.Set("modEndDate", "2021-10-30T00:00:000 UTC-00:00")
q2.Set("resultsPerPage", "5000")
fmt.Println(q) // map[modEndDate:[2021-10-30T00:00:00:000 UTC-00:00] modStartDate:[2021-10-29T12:00:00:000 UTC-00:00] resultsPerPage:[5000]]
fmt.Println(q2) // map[modEndDate:[2021-10-30T00:00:000 UTC-00:00] modStartDate:[2021-10-29T12:00:00:000 UTC-00:00] resultsPerPage:[5000]]
https://play.golang.org/p/36RNIb7Micu
So add the missing :00 to the modEndDate time format (modStartDate was already correct):
q.Set("modStartDate", "2021-10-29T12:00:00:000 UTC-00:00")
q.Set("modEndDate", "2021-10-30T00:00:00:000 UTC-00:00")

Related

Downloading content with range requests corrupts the file

I have set up a basic project on Github: https://github.com/kounelios13/range-download.
Essentially this project tries to download a file using HTTP range requests, assemble the pieces, and save the result back to disk. I am trying to follow this article (leaving out the goroutines for the time being). When I download the file using range requests, the final size after all the request data is combined is bigger than the original, and the final file is corrupted.
Here is the code responsible for downloading the file
type Manager struct {
    limit int
}

func NewManager(limit int) *Manager {
    return &Manager{
        limit: limit,
    }
}

func (m *Manager) DownloadBody(url string) ([]byte, error) {
    // First we need to determine the filesize
    body := make([]byte, 0)
    response, err := http.Head(url) // We perform a Head request to get header information
    if response.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("received code %d", response.StatusCode)
    }
    if err != nil {
        return nil, err
    }
    maxConnections := m.limit // Number of maximum concurrent co routines
    bodySize, _ := strconv.Atoi(response.Header.Get("Content-Length"))
    bufferSize := bodySize / maxConnections
    diff := bodySize % maxConnections
    read := 0
    for i := 0; i < maxConnections; i++ {
        min := bufferSize * i
        max := bufferSize * (i + 1)
        if i == maxConnections-1 {
            max += diff // Check to see if we have any leftover data to retrieve for the last request
        }
        req, _ := http.NewRequest("GET", url, nil)
        req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", min, max))
        res, e := http.DefaultClient.Do(req)
        if e != nil {
            return body, e
        }
        log.Printf("Index:%d . Range:bytes=%d-%d", i, min, max)
        data, e := ioutil.ReadAll(res.Body)
        res.Body.Close()
        if e != nil {
            return body, e
        }
        log.Println("Data for request: ", len(data))
        read = read + len(data)
        body = append(body, data...)
    }
    log.Println("File size:", bodySize, "Downloaded size:", len(body), " Actual read:", read)
    return body, nil
}
I also noticed that the higher I set the limit, the bigger the difference between the original content length and the combined size of all the response bodies.
Here is my main.go
func main() {
    imgUrl := "https://media.wired.com/photos/5a593a7ff11e325008172bc2/16:9/w_2400,h_1350,c_limit/pulsar-831502910.jpg"
    maxConnections := 4
    manager := lib.NewManager(maxConnections)
    data, e := manager.DownloadBody(imgUrl)
    if e != nil {
        log.Fatalln(e)
    }
    ioutil.WriteFile("foo.jpg", data, 0777)
}
Note: for the time being I am not interested in making the code concurrent.
Any ideas what I could be missing?
Note: I have confirmed that the server returns a 206 Partial Content using the curl command below:
curl -I https://media.wired.com/photos/5a593a7ff11e325008172bc2/16:9/w_2400,h_1350,c_limit/pulsar-831502910.jpg
Thanks to @mh-cbon I managed to write a simple test that helped me find the solution. Here is the fixed code:
for i := 0; i < maxConnections; i++ {
    min := bufferSize * i
    if i != 0 {
        min++
    }
    max := bufferSize * (i + 1)
    if i == maxConnections-1 {
        max += diff // Check to see if we have any leftover data to retrieve for the last request
    }
    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", min, max))
    res, e := http.DefaultClient.Do(req)
    if e != nil {
        return body, e
    }
    log.Printf("Index:%d . Range:bytes=%d-%d", i, min, max)
    data, e := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if e != nil {
        return body, e
    }
    log.Println("Data for request: ", len(data))
    read = read + len(data)
    body = append(body, data...)
}
The problem was that I didn't have a correct min value to begin with. So let's say I have the following ranges to download:
0-100
101-200
My code would download bytes 0-100 and then again 100-200 instead of 101-200, so the byte at each chunk boundary was fetched twice.
So I made sure on every iteration (except the first one) to increment min by 1 so as not to overlap with the previous range.
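For reference, HTTP byte ranges are inclusive on both ends, which is why two requests that end and start on the same offset fetch one byte twice. Here is a minimal sketch (illustrative names, not the project's actual code) that computes non-overlapping inclusive ranges up front:

// computeRanges splits size bytes into n inclusive [start, end] pairs
// suitable for "bytes=start-end" Range headers, with no overlap.
func computeRanges(size, n int) [][2]int {
    chunk := size / n
    ranges := make([][2]int, 0, n)
    for i := 0; i < n; i++ {
        start := i * chunk
        end := start + chunk - 1 // inclusive upper bound
        if i == n-1 {
            end = size - 1 // last chunk absorbs the remainder
        }
        ranges = append(ranges, [2]int{start, end})
    }
    return ranges
}

For size 15 and n 4 this yields [0,2] [3,5] [6,8] [9,14], which covers all 15 bytes exactly once.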
Here is a simple test I managed to put together from the docs provided in the comments:
func TestManager_DownloadBody(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
        http.ServeContent(writer, request, "hey", time.Now(), bytes.NewReader([]byte(`hello world!!!!`)))
    }))
    defer ts.Close()
    m := NewManager(4)
    data, err := m.DownloadBody(ts.URL)
    if err != nil {
        t.Errorf("%s", err)
    }
    if string(data) != "hello world!!!!" {
        t.Errorf("Expected hello world!!!! . received : [%s]", data)
    }
}
Sure, there are more tests to be written, but it is a good start.

Sending files over http without actually creating any file

I need to send a POST request to some API which accepts only a file as multipart/form-data, but I have the data as a []byte. What I can do is write this []byte to a temporary file and then send that file. After some googling I found this code to upload a file:
fileDir, _ := os.Getwd()
fileName := "upload-file.txt"
filePath := path.Join(fileDir, fileName)
file, _ := os.Open(filePath)
defer file.Close()
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, _ := writer.CreateFormFile("file", filepath.Base(file.Name()))
io.Copy(part, file)
writer.Close()
r, _ := http.NewRequest("POST", "http://example.com", body)
r.Header.Add("Content-Type", writer.FormDataContentType())
client := &http.Client{}
client.Do(r)
After some more googling I learned this. It seems that for sending a file we only need the file name and its content (and maybe its size). All of that I can provide without creating a temporary file, writing to it, and then reading it back.
Is it possible to do so? Can I send []bytes as a file somehow? A working example is much appreciated.
Write the []byte directly to the part. Use this code to write the slice content to the part and post the form:
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, _ := writer.CreateFormFile("file", "insert-name-here")
part.Write(content) // <-- content is the []byte
writer.Close()
r, _ := http.NewRequest("POST", "http://example.com", body)
r.Header.Add("Content-Type", writer.FormDataContentType())
resp, err := http.DefaultClient.Do(r) // Do returns (*http.Response, error)
if err != nil {
    // handle error
    return
}
defer resp.Body.Close()
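As a follow-up, if the []byte is large and you want to avoid buffering the whole multipart body in memory, here is a hedged sketch using io.Pipe to stream it instead (content, the field name, and the URL are placeholders as above):

pr, pw := io.Pipe()
writer := multipart.NewWriter(pw)
go func() {
    // Build the multipart body in a goroutine; the request reads from pr as we write to pw.
    part, err := writer.CreateFormFile("file", "insert-name-here")
    if err != nil {
        pw.CloseWithError(err)
        return
    }
    if _, err := part.Write(content); err != nil {
        pw.CloseWithError(err)
        return
    }
    pw.CloseWithError(writer.Close()) // finish the multipart body and signal EOF
}()
r, _ := http.NewRequest("POST", "http://example.com", pr)
r.Header.Add("Content-Type", writer.FormDataContentType())
resp, err := http.DefaultClient.Do(r)
if err != nil {
    // handle error
    return
}
defer resp.Body.Close()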

How to add URL query parameters to HTTP GET request?

I am trying to add a query parameter to an HTTP GET request, but somehow the methods pointed out on SO (e.g. here) don't work.
I have the following piece of code:
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET", "/callback", nil)
    req.URL.Query().Add("code", "0xdead 0xbeef")
    req.URL.Query().Set("code", "0xdead 0xbeef")
    // this doesn't help
    //req.URL.RawQuery = req.URL.Query().Encode()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("URL %+v\n", req.URL)
    fmt.Printf("RawQuery %+v\n", req.URL.RawQuery)
    fmt.Printf("Query %+v\n", req.URL.Query())
}
which prints:
URL /callback
RawQuery
Query map[]
Any suggestions on how to achieve this?
Playground example: https://play.golang.org/p/SYN4yNbCmo
Check the docs for req.URL.Query():
Query parses RawQuery and returns the corresponding values.
Since it "parses RawQuery and returns" the values what you get is just a copy of the URL query values, not a "live reference", so modifying that copy does nothing to the original query. In order to modify the original query you must assign to the original RawQuery.
q := req.URL.Query() // Get a copy of the query values.
q.Add("code", "0xdead 0xbeef") // Add a new value to the set.
req.URL.RawQuery = q.Encode() // Encode and assign back to the original query.
// URL /callback?code=0xdead+0xbeef
// RawQuery code=0xdead+0xbeef
// Query map[code:[0xdead 0xbeef]]
Note that your original attempt to do so didn't work because it simply parses the query values, encodes them, and assigns them right back to the URL:
req.URL.RawQuery = req.URL.Query().Encode()
// This is basically a noop!
You can directly build the query params using url.Values:
func main() {
    req, err := http.NewRequest("GET", "/callback", nil)
    req.URL.RawQuery = url.Values{
        "code": {"0xdead 0xbeef"},
    }.Encode()
    ...
}
Notice the extra braces because each key can have multiple values.
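For instance, a key with multiple values encodes as a repeated parameter (a quick illustration, assuming the same imports):

fmt.Println(url.Values{
    "code": {"a", "b"},
}.Encode()) // prints: code=a&code=b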

GetDesignDocuments from golang SDK

I would like to retrieve all Design Documents of a given bucket, so I prepared this short code:
err := cbSrc.Connect()
if err != nil {
    log.Println(err.Error())
    os.Exit(2)
}

bm := cbSrc.Bucket.Manager(username, password)
dds, err := bm.GetDesignDocuments()
if err != nil {
    log.Println(err.Error())
    os.Exit(3)
}

log.Printf("%#v\n", dds)
for ind := range dds {
    fmt.Println(dds[ind].Name)
}
and I always receive a slice of pointers with the correct length, but the address in every pointer is the same:
[]*gocb.DesignDocument{(*gocb.DesignDocument)(0xc82011eb50), (*gocb.DesignDocument)(0xc82011eb50), (*gocb.DesignDocument)(0xc82011eb50)}
So basically, I receive the third design document three times, and the for range statement gives me the same value three times.
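Identical addresses like this are the classic symptom of taking the address of a (pre-Go 1.22) loop variable when building a slice of pointers, so every element ends up pointing at the same reused variable. Here is a minimal sketch reproducing the pattern; it is an assumption about the likely cause, not the gocb SDK's actual code:

package main

import "fmt"

type DesignDocument struct{ Name string }

func main() {
    docs := []DesignDocument{{"a"}, {"b"}, {"c"}}

    buggy := []*DesignDocument{}
    for _, d := range docs {
        buggy = append(buggy, &d) // &d is the same address every iteration (before Go 1.22)
    }
    fmt.Println(buggy[0].Name, buggy[1].Name, buggy[2].Name) // c c c

    fixed := []*DesignDocument{}
    for i := range docs {
        fixed = append(fixed, &docs[i]) // take the address of the slice element instead
    }
    fmt.Println(fixed[0].Name, fixed[1].Name, fixed[2].Name) // a b c
}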

Golang - what's the correct order to check error and defer an operation? [duplicate]

This question already has an answer here:
Do we need to close the response object if an error occurs while calling http.Get(url)?
(1 answer)
Closed 6 years ago.
I'm new to Go. If I'm doing an HTTP GET request like this:
resp, err := http.Get("https://www.google.com")
Now I need to both check whether err is nil and defer resp.Body.Close(). What's the correct order for these two operations?
You need to check the error right after the call to Get. If Get fails, resp will be nil, so resp.Body would trigger a runtime nil pointer dereference. Deferring the Close only after the error check avoids that.
resp, err := http.Get("https://www.google.com")
if err != nil {
    // process error
    return err
}
defer resp.Body.Close()
