GetDesignDocuments from golang SDK - pointers

I would like to retrieve all Design Documents of a given bucket, so I prepared this short piece of code:
err := cbSrc.Connect()
if err != nil {
    log.Println(err.Error())
    os.Exit(2)
}
bm := cbSrc.Bucket.Manager(username, password)
dds, err := bm.GetDesignDocuments()
if err != nil {
    log.Println(err.Error())
    os.Exit(3)
}
log.Printf("%#v\n", dds)
for ind := range dds {
    fmt.Println(dds[ind].Name)
}
and I always receive a slice of pointers with the correct length, but the address of every pointer is the same:
[]*gocb.DesignDocument{(*gocb.DesignDocument)(0xc82011eb50), (*gocb.DesignDocument)(0xc82011eb50), (*gocb.DesignDocument)(0xc82011eb50)}
So basically, I receive the third design document three times, and the for range statement gives me the same value three times.
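I can't tell from the snippet whether this is what GetDesignDocuments does internally, but a slice of identical pointers like this is the classic result of appending the address of a single loop variable. A minimal sketch (plain Go, no Couchbase involved, type and names made up) that reproduces the same output:
package main

import "fmt"

type DesignDocument struct {
    Name string
}

func main() {
    docs := []DesignDocument{{"a"}, {"b"}, {"c"}}
    ptrs := []*DesignDocument{}
    for _, d := range docs { // before Go 1.22, d is one variable reused on every iteration
        ptrs = append(ptrs, &d) // so every element stores the same address
    }
    fmt.Printf("%#v\n", ptrs) // three identical pointers, all showing the last document, "c"
}
If that is what happens inside the SDK version in use, copying the value before taking its address (tmp := d; ptrs = append(ptrs, &tmp)) or upgrading the SDK would be the usual fixes.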

Related

Failing to construct an HTTP GET request in Go

I'm able to get an HTTP GET request to work like so:
resp, err := http.Get("https://services.nvd.nist.gov/rest/json/cves/1.0/?modStartDate=2021-10-29T12%3A00%3A00%3A000%20UTC-00%3A00&modEndDate=2021-10-30T00%3A00%3A00%3A000%20UTC-00%3A00&resultsPerPage=5000")
I wanted to have an easier way to construct the query parameters so I created this:
req, err := http.NewRequest("GET", "https://services.nvd.nist.gov/rest/json/cves/1.0/", nil)
if err != nil {
    fmt.Printf("Error: %v\n", err)
    os.Exit(1)
}
q := req.URL.Query()
q.Set("modStartDate", "2021-10-29T12:00:00:000 UTC-00:00")
q.Set("modEndDate", "2021-10-30T00:00:000 UTC-00:00")
q.Set("resultsPerPage", "5000")
req.URL.RawQuery = q.Encode()
client := http.Client{}
resp, err := client.Do(req)
The response status is a 404. It's not clear to me what I'm missing. What is the first GET request doing that I'm missing in the second one?
For reference, the API I'm working with:
https://nvd.nist.gov/developers/vulnerabilities
As @JimB noted, comparing your original raw query with your generated query shows the formatting issue:
origURL := "https://services.nvd.nist.gov/rest/json/cves/1.0/?modStartDate=2021-10-29T12%3A00%3A00%3A000%20UTC-00%3A00&modEndDate=2021-10-30T00%3A00%3A00%3A000%20UTC-00%3A00&resultsPerPage=5000"
u, _ := url.Parse(origURL)
q, _ := url.ParseQuery(u.RawQuery)
q2 := url.Values{}
q2.Set("modStartDate", "2021-10-29T12:00:00:000 UTC-00:00")
q2.Set("modEndDate", "2021-10-30T00:00:000 UTC-00:00")
q2.Set("resultsPerPage", "5000")
fmt.Println(q) // map[modEndDate:[2021-10-30T00:00:00:000 UTC-00:00] modStartDate:[2021-10-29T12:00:00:000 UTC-00:00] resultsPerPage:[5000]]
fmt.Println(q2) // map[modEndDate:[2021-10-30T00:00:000 UTC-00:00] modStartDate:[2021-10-29T12:00:00:000 UTC-00:00] resultsPerPage:[5000]]
https://play.golang.org/p/36RNIb7Micu
So the two queries differ only in modEndDate; add the missing :00 group to it:
q.Set("modEndDate", "2021-10-30T00:00:00:000 UTC-00:00")

Downloading content with range request corrupts

I have set up a basic project on Github: https://github.com/kounelios13/range-download.
Essentially this project tries to download a file using HTTP Range requests, assemble the pieces, and save the result back to disk. I am trying to follow this article (without the goroutines for the time being). When I download the file using range requests, the final size after all the request bodies are combined is bigger than the original size, and the final file is corrupted.
Here is the code responsible for downloading the file
type Manager struct {
    limit int
}

func NewManager(limit int) *Manager {
    return &Manager{
        limit: limit,
    }
}

func (m *Manager) DownloadBody(url string) ([]byte, error) {
    // First we need to determine the filesize
    body := make([]byte, 0)
    response, err := http.Head(url) // We perform a Head request to get header information
    if response.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("received code %d", response.StatusCode)
    }
    if err != nil {
        return nil, err
    }
    maxConnections := m.limit // Number of maximum concurrent co routines
    bodySize, _ := strconv.Atoi(response.Header.Get("Content-Length"))
    bufferSize := bodySize / maxConnections
    diff := bodySize % maxConnections
    read := 0
    for i := 0; i < maxConnections; i++ {
        min := bufferSize * i
        max := bufferSize * (i + 1)
        if i == maxConnections-1 {
            max += diff // Check to see if we have any leftover data to retrieve for the last request
        }
        req, _ := http.NewRequest("GET", url, nil)
        req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", min, max))
        res, e := http.DefaultClient.Do(req)
        if e != nil {
            return body, e
        }
        log.Printf("Index:%d . Range:bytes=%d-%d", i, min, max)
        data, e := ioutil.ReadAll(res.Body)
        res.Body.Close()
        if e != nil {
            return body, e
        }
        log.Println("Data for request: ", len(data))
        read = read + len(data)
        body = append(body, data...)
    }
    log.Println("File size:", bodySize, "Downloaded size:", len(body), " Actual read:", read)
    return body, nil
}
I also noticed that the larger the limit I set, the larger the difference between the original content length and the combined size of all the request bodies.
Here is my main.go
func main() {
    imgUrl := "https://media.wired.com/photos/5a593a7ff11e325008172bc2/16:9/w_2400,h_1350,c_limit/pulsar-831502910.jpg"
    maxConnections := 4
    manager := lib.NewManager(maxConnections)
    data, e := manager.DownloadBody(imgUrl)
    if e != nil {
        log.Fatalln(e)
    }
    ioutil.WriteFile("foo.jpg", data, 0777)
}
Note: for the time being I am not interested in making the code concurrent.
Any ideas what I could be missing?
Note: I have confirmed that server returns a 206 partial content using the curl command below:
curl -I https://media.wired.com/photos/5a593a7ff11e325008172bc2/16:9/w_2400,h_1350,c_limit/pulsar-831502910.jpg
Thanks to @mh-cbon I managed to write a simple test that helped me find the solution. Here is the fixed code:
for i := 0; i < maxConnections; i++ {
    min := bufferSize * i
    if i != 0 {
        min++
    }
    max := bufferSize * (i + 1)
    if i == maxConnections-1 {
        max += diff // Check to see if we have any leftover data to retrieve for the last request
    }
    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", min, max))
    res, e := http.DefaultClient.Do(req)
    if e != nil {
        return body, e
    }
    log.Printf("Index:%d . Range:bytes=%d-%d", i, min, max)
    data, e := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if e != nil {
        return body, e
    }
    log.Println("Data for request: ", len(data))
    read = read + len(data)
    body = append(body, data...)
}
The problem was that I didn't have a correct min value to begin with. So let's say I have the following ranges to download:
0-100
101-200
My code would download bytes 0-100 and then 100-200 again, instead of 101-200.
So I made sure that on every iteration (except the first one) I increment min by 1, so it does not overlap with the previous range.
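For reference, another common way to avoid the overlap, since Range headers are inclusive on both ends, is to subtract one from each chunk's end instead of bumping the start. A small sketch of just the range arithmetic:
package main

import "fmt"

// printRanges prints non-overlapping, inclusive byte ranges for a body of
// bodySize bytes split across n requests, e.g. bytes=0-249, bytes=250-499, ...
func printRanges(bodySize, n int) {
    chunk := bodySize / n
    for i := 0; i < n; i++ {
        min := chunk * i
        max := chunk*(i+1) - 1 // Range headers are inclusive on both ends
        if i == n-1 {
            max = bodySize - 1 // the last chunk absorbs any remainder
        }
        fmt.Printf("bytes=%d-%d\n", min, max)
    }
}

func main() {
    printRanges(1000, 4) // bytes=0-249, bytes=250-499, bytes=500-749, bytes=750-999
}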
Here is a simple test I put together from the docs linked in the comments:
func TestManager_DownloadBody(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
        http.ServeContent(writer, request, "hey", time.Now(), bytes.NewReader([]byte(`hello world!!!!`)))
    }))
    defer ts.Close()
    m := NewManager(4)
    data, err := m.DownloadBody(ts.URL)
    if err != nil {
        t.Errorf("%s", err)
    }
    if string(data) != "hello world!!!!" {
        t.Errorf("Expected hello world!!!! . received : [%s]", data)
    }
}
Sure, there are more tests to be written, but it is a good start.

Flexible date/time parsing in Go (Adding default values in parsing)

Further to this question, I want to parse a date/time passed on the command line to a Go program. At the moment, I use the flag package to populate a string variable ts and then the following code:
if ts == "" {
    config.Until = time.Now()
} else {
    const layout = "2006-01-02T15:04:05"
    if config.Until, err = time.Parse(layout, ts); err != nil {
        log.Errorf("Could not parse %s as a time string: %s. Using current date/time instead.", ts, err.Error())
        config.Until = time.Now()
    }
}
This works OK, provided the user passes exactly the right format - e.g. 2019-05-20T09:07:33 or some such.
However, what I would like, if possible, is the flexibility to pass e.g. 2019-05-20T09:07 or 2019-05-20T09 or maybe even 2019-05-20 and have the hours, minutes and seconds default to 0 where appropriate.
Is there a sane¹ way to do this?
¹ not requiring me to essentially write my own parser
UPDATE
I've kind of got a solution to this; although it's not particularly elegant, it does appear to work for most of the cases I'm likely to encounter.
package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02T15:04:05"
    var l string
    var input string
    for _, input = range []string{"2019-05-30", "2019-05-30T16", "2019-05-30T16:04", "2019-05-30T16:04:34",
        "This won't work", "This is extravagantly long and won't work either"} {
        if len(input) < len(layout) {
            l = layout[:len(input)]
        } else {
            l = layout
        }
        if d, err := time.Parse(l, input); err != nil {
            fmt.Printf("Error %s\n", err.Error())
        } else {
            fmt.Printf("Layout %-20s gives time %v\n", l, d)
        }
    }
}
Just try each format, until one works. If none work, return an error.
var formats = []string{"2006-01-02T15:04:05", "2006-01-02", ...}

func parseTime(input string) (time.Time, error) {
    for _, format := range formats {
        t, err := time.Parse(format, input)
        if err == nil {
            return t, nil
        }
    }
    return time.Time{}, errors.New("Unrecognized time format")
}
I think this library is what you are looking for: https://github.com/araddon/dateparse
It parses many date strings without knowing the format in advance; it uses a scanner to read bytes and a state machine to find the format.
t, err := dateparse.ParseAny("3/1/2014")
In the specific scenario you describe, you could check the length of the input date stamp and pad its end with zeros so it matches your layout. So basically you append as much of the string "T00:00:00" (counting from the end) to the input as the input is shorter than the layout format string.
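A minimal sketch of that padding idea, assuming the input is always a prefix of the full layout (parseFlexible is just an illustrative name):
package main

import (
    "fmt"
    "time"
)

const layout = "2006-01-02T15:04:05"

// parseFlexible pads a partial timestamp such as "2019-05-20T09" with the
// tail of a zeroed time ("T00:00:00") so it always matches the full layout.
func parseFlexible(input string) (time.Time, error) {
    const zeroTail = "T00:00:00"
    if len(input) < len(layout) {
        missing := len(layout) - len(input)
        input += zeroTail[len(zeroTail)-missing:]
    }
    return time.Parse(layout, input)
}

func main() {
    for _, s := range []string{"2019-05-20", "2019-05-20T09", "2019-05-20T09:07"} {
        t, err := parseFlexible(s)
        fmt.Println(t, err) // hours, minutes and seconds default to 0
    }
}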

How to solve concurrency access of Golang map?

Now I have a map with only one write/delete goroutine and many read goroutines. There are several options for concurrent map access, such as RWMutex, sync.Map, concurrent-map, sync/atomic and sync.Value. What's the best choice for me?
RWMutex's read lock is a little redundant
sync.Map and concurrent-map focus on many write goroutines
Your question is a little vague - so I'll break it down.
What form of concurrent access should I use for a map?
The choice depends on the performance you require from the map. I would opt for a simple Mutex (or RWMutex) based approach.
Granted, you can get better performance from a concurrent map: a single sync.Mutex guards all of a map's buckets, whereas in a concurrent map each bucket has its own sync.Mutex.
Again - it all depends on the scale of your program and the performance you require.
How would I use a mutex for concurrent access?
To ensure the map is used correctly, you can wrap it in a struct:
type Store struct {
    Data map[T]T
}
This is a more object-oriented solution, but it allows us to make sure any reads/writes are performed safely. As well as this, we can easily store other information that may be useful for debugging or security, such as the author of a change.
Now, we would implement this with a set of methods like so:
// T stands in here for whatever concrete key/value type your store uses.

var mux sync.Mutex

// New initialises a Store type with an empty map
func New() *Store {
    return &Store{
        Data: map[T]T{},
    }
}

// Insert adds a new key i to the store and places the value of x at this location
// If there is an error, this is returned - if not, this is nil
func (s *Store) Insert(i, x T) error {
    mux.Lock()
    defer mux.Unlock()
    _, ok := s.Data[i]
    if ok {
        return fmt.Errorf("index %s already exists; use update", i)
    }
    s.Data[i] = x
    return nil
}

// Update changes the value found at key i to x
// If there is an error, this is returned - if not, this is nil
func (s *Store) Update(i, x T) error {
    mux.Lock()
    defer mux.Unlock()
    _, ok := s.Data[i]
    if !ok {
        return fmt.Errorf("value at index %s does not exist; use insert", i)
    }
    s.Data[i] = x
    return nil
}

// Fetch returns the value found at index i in the store
// If there is an error, this is returned - if not, this is nil
func (s *Store) Fetch(i T) (T, error) {
    mux.Lock()
    defer mux.Unlock()
    v, ok := s.Data[i]
    if !ok {
        return "", fmt.Errorf("no value for key %s exists", i)
    }
    return v, nil
}

// Delete removes the index i from store
// If there is an error, this is returned - if not, this is nil
func (s *Store) Delete(i T) (T, error) {
    mux.Lock()
    defer mux.Unlock()
    v, ok := s.Data[i]
    if !ok {
        return "", fmt.Errorf("index %s already empty", i)
    }
    delete(s.Data, i)
    return v, nil
}
In my solution, I've used a simple sync.Mutex - but you can simply change this code to accommodate RWMutex.
I recommend you take a look at How to use RWMutex in Golang?.
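As a sketch of what that change looks like on the read path (same Store type as above, swapping the package-level mutex for a sync.RWMutex so readers don't block each other):
var rwmux sync.RWMutex

// Fetch takes the shared read lock, so many Fetch calls can run at once;
// Insert, Update and Delete would keep using rwmux.Lock()/Unlock().
func (s *Store) Fetch(i T) (T, error) {
    rwmux.RLock()
    defer rwmux.RUnlock()
    v, ok := s.Data[i]
    if !ok {
        return "", fmt.Errorf("no value for key %s exists", i)
    }
    return v, nil
}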

Using reflection with structs to build generic handler function

I have some trouble building a function that can dynamically work with parametrized structs. Because of that, my code has 20+ functions that are nearly identical, differing basically in just one type. Most of my experience is with Java, where I'd just write basic generic functions, or use a plain Object parameter (and reflection from that point on). I need something similar in Go.
I have several types like:
// The List structs are mostly needed for json marshalling
type OrangeList struct {
    Oranges []Orange
}

type BananaList struct {
    Bananas []Banana
}

type Orange struct {
    Orange_id string
    Field_1   int
    // The fields are different for different types, I am simplifying the code example
}

type Banana struct {
    Banana_id string
    Field_1   int
    // The fields are different for different types, I am simplifying the code example
}
Then I have function, basically for each list type:
// In the end there are 20+ of these, the only difference is basically in two types!
// This is very un-DRY!
func buildOranges(rows *sqlx.Rows) ([]byte, error) {
    oranges := OrangeList{} // This type changes
    for rows.Next() {
        orange := Orange{}              // This type changes
        err := rows.StructScan(&orange) // This can handle each case already, could also use reflect myself too
        checkError(err, "rows.Scan")
        oranges.Oranges = append(oranges.Oranges, orange)
    }
    checkError(rows.Err(), "rows.Err")
    jsontext, err := json.Marshal(oranges)
    return jsontext, err
}
Yes, I could switch to a more intelligent ORM or SQL framework, but that's beside the point. I want to learn how to build a generic function that can handle this pattern for all my different types.
I got this far, but it still doesn't work properly (target isn't the expected struct, I think):
func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    tgtValueOf := reflect.ValueOf(tgt)
    tgtType := tgtValueOf.Type()
    targets := reflect.SliceOf(tgtValueOf.Type())
    for rows.Next() {
        target := reflect.New(tgtType)
        err := rows.StructScan(&target) // At this stage target still isn't a 1:1 similar struct, so the StructScan fails... It's some perverted "Value" object instead. Meh.
        // Removed appending to the list because the solutions for that would be similar
        checkError(err, "rows.Scan")
    }
    checkError(rows.Err(), "rows.Err")
    jsontext, err := json.Marshal(targets)
    return jsontext, err
}
So umm, I would need to give the list type and the vanilla type as parameters, then build one of each, and the rest of my logic would probably be fixable quite easily.
Turns out there's an sqlx.StructScan(rows, &destSlice) function that will do your inner loop, given a slice of the appropriate type. The sqlx docs mention caching the results of reflection operations, so it may have some additional optimizations compared to writing your own.
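For the orange case that would collapse the whole loop to something like this (a sketch; StructScan here is the package-level sqlx.StructScan, and the types are the ones from the question):
func buildOranges(rows *sqlx.Rows) ([]byte, error) {
    var oranges []Orange
    if err := sqlx.StructScan(rows, &oranges); err != nil { // fills the slice, one struct per row
        return nil, err
    }
    return json.Marshal(OrangeList{Oranges: oranges})
}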
Sounds like the immediate question you're actually asking is "how do I get something out of my reflect.Value that rows.StructScan will accept?" And the direct answer is target.Interface(); it should return an interface{} representing an *Orange you can pass directly to StructScan (no additional & operation needed). Then, I think targets = reflect.Append(targets, reflect.Indirect(target)) will turn your target into a reflect.Value representing an Orange and append it to the slice (note that targets needs to be a slice value created with reflect.MakeSlice; reflect.SliceOf, as in your code, only gives you the slice type). targets.Interface() should get you an interface{} representing an []Orange that json.Marshal understands. I say all these 'should's and 'I think's because I haven't tried that route.
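Stitched together, a sketch of that reflective version might look like the following (untested; it assumes tgt is passed as a pointer to the element type, e.g. buildWhatever(rows, &Orange{}), and it marshals the plain slice rather than the *List wrapper types; it needs the reflect, encoding/json and sqlx imports):
func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    elemType := reflect.TypeOf(tgt).Elem()                        // e.g. Orange
    targets := reflect.MakeSlice(reflect.SliceOf(elemType), 0, 0) // e.g. an empty []Orange
    for rows.Next() {
        target := reflect.New(elemType) // a reflect.Value holding a new *Orange
        // target.Interface() hands StructScan the plain *Orange it expects
        if err := rows.StructScan(target.Interface()); err != nil {
            return nil, err
        }
        targets = reflect.Append(targets, reflect.Indirect(target))
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    // targets.Interface() is e.g. an []Orange, which json.Marshal understands
    return json.Marshal(targets.Interface())
}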
Reflection, in general, is verbose and slow. Sometimes it's the best or only way to get something done, but it's often worth looking for a way to get your task done without it when you can.
So, if it works in your app, you can also convert Rows straight to JSON, without going through intermediate structs. Here's a sample program (requires sqlite3 of course) that turns sql.Rows into map[string]string and then into JSON. (Note it doesn't try to handle NULL, represent numbers as JSON numbers, or generally handle anything that won't fit in a map[string]string.)
package main

import (
    _ "code.google.com/p/go-sqlite/go1/sqlite3"
    "database/sql"
    "encoding/json"
    "os"
)

func main() {
    db, err := sql.Open("sqlite3", "foo")
    if err != nil {
        panic(err)
    }
    tryQuery := func(query string, args ...interface{}) *sql.Rows {
        rows, err := db.Query(query, args...)
        if err != nil {
            panic(err)
        }
        return rows
    }
    tryQuery("drop table if exists t")
    tryQuery("create table t(i integer, j integer)")
    tryQuery("insert into t values(?, ?)", 1, 2)
    tryQuery("insert into t values(?, ?)", 3, 1)

    // now query and serialize
    rows := tryQuery("select * from t")
    names, err := rows.Columns()
    if err != nil {
        panic(err)
    }

    // vals stores the values from one row
    vals := make([]interface{}, 0, len(names))
    for _, _ = range names {
        vals = append(vals, new(string))
    }

    // rowMaps stores all rows
    rowMaps := make([]map[string]string, 0)
    for rows.Next() {
        rows.Scan(vals...)
        // now make value list into name=>value map
        currRow := make(map[string]string)
        for i, name := range names {
            currRow[name] = *(vals[i].(*string))
        }
        // accumulating rowMaps is the easy way out
        rowMaps = append(rowMaps, currRow)
    }

    json, err := json.Marshal(rowMaps)
    if err != nil {
        panic(err)
    }
    os.Stdout.Write(json)
}
In theory, you could build this to do fewer allocations by reusing the same row map each time and using a json.Encoder to append each row's JSON to a buffer, instead of accumulating rowMaps. You could go a step further and not use a row map at all, just the lists of names and values. I should say I haven't compared the speed against a reflect-based approach, though I know reflect is slow enough that it might be worth comparing them if you can put up with either strategy.
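A sketch of that streaming idea, reusing the names, vals and rows from the program above (note that json.Encoder emits one JSON object per row followed by a newline, rather than a single JSON array):
enc := json.NewEncoder(os.Stdout)
currRow := make(map[string]string, len(names)) // one map, reused for every row
for rows.Next() {
    rows.Scan(vals...)
    for i, name := range names {
        currRow[name] = *(vals[i].(*string))
    }
    if err := enc.Encode(currRow); err != nil { // writes this row's JSON immediately
        panic(err)
    }
}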

Resources