Flexible date/time parsing in Go (Adding default values in parsing)

Further to this question, I want to parse a date/time passed on the command line to a Go program. At the moment, I use the flag package to populate a string variable ts and then the following code:
if ts == "" {
config.Until = time.Now()
} else {
const layout = "2006-01-02T15:04:05"
if config.Until, err = time.Parse(layout, ts); err != nil {
log.Errorf("Could not parse %s as a time string: %s. Using current date/time instead.", ts, err.Error())
config.Until = time.Now()
}
}
This works OK, provided the user passes exactly the right format - e.g. 2019-05-20T09:07:33 or some such.
However, what I would like, if possible, is the flexibility to pass e.g. 2019-05-20T09:07 or 2019-05-20T09 or maybe even 2019-05-20 and have the hours, minutes and seconds default to 0 where appropriate.
Is there a sane¹ way to do this?
¹ Not requiring me to essentially write my own parser.
UPDATE
I've kind of got a solution to this; although it's not particularly elegant, it does appear to work for most of the cases I am likely to encounter.
package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02T15:04:05"
    var l string
    var input string
    for _, input = range []string{"2019-05-30", "2019-05-30T16", "2019-05-30T16:04", "2019-05-30T16:04:34",
        "This won't work", "This is extravagantly long and won't work either"} {
        if len(input) < len(layout) {
            l = layout[:len(input)]
        } else {
            l = layout
        }
        if d, err := time.Parse(l, input); err != nil {
            fmt.Printf("Error %s\n", err.Error())
        } else {
            fmt.Printf("Layout %-20s gives time %v\n", l, d)
        }
    }
}

Just try each format until one works. If none of them works, return an error.
var formats = []string{"2006-01-02T15:04:05", "2006-01-02", ...}

func parseTime(input string) (time.Time, error) {
    for _, format := range formats {
        t, err := time.Parse(format, input)
        if err == nil {
            return t, nil
        }
    }
    return time.Time{}, errors.New("unrecognized time format")
}

I think this library is what you are looking for: https://github.com/araddon/dateparse
It parses many date strings without knowing the format in advance, using a scanner to read bytes and a state machine to find the format.
t, err := dateparse.ParseAny("3/1/2014")

In the specific scenario you describe, you could check the length of the input datestamp string and append enough zero-valued filler to the end of it to match your layout. So basically you would append as much of the string "T00:00:00" (counting from the end) to the input as it is missing in length compared to the layout format string.
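A minimal sketch of that idea (my own illustration; it extends the "T00:00:00" suffix to a full zero-value filler string so that month and day would default to 1 as well):

const layout = "2006-01-02T15:04:05"
const filler = "0000-01-01T00:00:00" // character-aligned with layout

func parsePadded(input string) (time.Time, error) {
    if len(input) < len(layout) {
        input += filler[len(input):] // pad the missing tail, e.g. "T00:00:00" or ":00"
    }
    return time.Parse(layout, input)
}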

Related

Get a value from map stored in struct

I'm having trouble getting a value from a map stored in a struct. Please look at the following piece of code (some lines skipped):
type Settings struct {
    ...
    LcInfoData *[]LcInfodb
    LcInfoLog  *MapLcInfoLL
}

type MapLcInfoLL map[string]LcInfoLL

type LcInfoLL struct {
    EnableLog  string
    FileLogPtr *os.File
}

...

func updLogInfo(cnf *Settings) (err error) {
    for _, t := range *cnf.LcInfoData {
        fpPtr, err := logInit(t.FilepLog)
        if err != nil {
            exitMsg(1, err.Error())
        }
        lcMapVal := LcInfoLL{EnableLog: t.EnableLog, FileLogPtr: fpPtr}
        lcMap[t.LocationID] = lcMapVal
    }
    cnf.uLcInfoLog(&lcMap) // at the end
    ...
}
In the end I have a filled structure for use in another function (it's a global settings object). But I can't access the elements inside the map stored in the structure. I mean something like this:
v := *cnf.LcInfoLog["index"]
log.Println("ABOUT LOCATION: ", v.FileLogPtr)
Can you help me?
Thank you!
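For what it's worth: LcInfoLog is a pointer to a map, and in Go indexing binds tighter than the * dereference, so the pointer has to be dereferenced in parentheses before indexing. A minimal sketch of the likely fix:

v := (*cnf.LcInfoLog)["index"] // dereference the map pointer first, then index
log.Println("ABOUT LOCATION: ", v.FileLogPtr)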

Is there a better way to parse this Map?

I'm fairly new to Go. In the actual code I'm writing, I plan to read from a file which will contain environment variables, i.e. API_KEY=XYZ, which means I can keep them out of version control. The solution below 'works', but I feel like there is probably a better way of doing it.
The end goal is to be able to access the elements from the file like so: m["API_KEY"], and that would print XYZ. This may even already exist and I'm re-inventing the wheel; I saw Go has environment variables, but they didn't seem to be what I was after specifically.
So any help is appreciated.
Playground
Code:
package main

import (
    "fmt"
    "strings"
)

var m = make(map[string]string)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`
    arr := strings.Split(text, "\n")
    for _, value := range arr {
        tmp := strings.Split(value, "=")
        m[strings.TrimSpace(tmp[0])] = strings.TrimSpace(tmp[1])
    }
    fmt.Println(m)
}
First, I would recommend reading this related question: How to handle configuration in Go
Next, I would really consider storing your configuration in another format, because what you propose isn't a standard. It's close to Java's property file format (.properties), but even property files may contain Unicode sequences, and thus your code is not a valid .properties parser as it doesn't handle Unicode sequences at all.
Instead I would recommend using JSON, so you can easily parse it with Go or with any other language, there are many tools to edit JSON texts, and it is still human-friendly.
Going with the JSON format, decoding it into a map is just one function call: json.Unmarshal(). It could look like this:
text := `{"Var1":"Value1", "Var2":"Value2", "Var3":"Value3"}`
var m map[string]string
if err := json.Unmarshal([]byte(text), &m); err != nil {
fmt.Println("Invalid config file:", err)
return
}
fmt.Println(m)
Output (try it on the Go Playground):
map[Var1:Value1 Var2:Value2 Var3:Value3]
The json package will handle formatting and escaping for you, so you don't have to worry about any of those. It will also detect and report errors for you. Also, JSON is more flexible: your config may contain numbers, texts, arrays, etc. All of those come for "free" just because you chose the JSON format.
Another popular format for configuration is YAML, but the Go standard library does not include a YAML parser. See the Go implementation at github.com/go-yaml/yaml.
If you don't want to change your format, then I would just use the code you posted, because it does exactly what you want it to do: process input line by line, and parse a name = value pair from each line. And it does it in a clear and obvious way. Using a CSV or any other reader for this purpose is bad, because readers hide what's under the hood (they intentionally and rightfully hide format-specific details and transformations). A CSV reader is a CSV reader first: even if you change the tabulator / comma symbol, it will interpret certain escape sequences and might give you different data than what you see in a plain text editor. This is unintended behavior from your point of view, but hey, your input is not in CSV format, and yet you asked a reader to interpret it as CSV!
One improvement I would add to your solution is the use of bufio.Scanner. It can be used to read an input line-by-line, and it handles different styles of newline sequences. It could look like this:
text := `Var1=Value1
Var2=Value2
Var3=Value3`
scanner := bufio.NewScanner(strings.NewReader(text))
m := map[string]string{}
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    fmt.Println("Error encountered:", err)
}
fmt.Println(m)
Output is the same. Try it on the Go Playground.
Using bufio.Scanner has another advantage: bufio.NewScanner() accepts an io.Reader, the general interface for "all things being a source of bytes". This means that if your config is stored in a file, you don't even have to read the whole config into memory: you can just open the file, e.g. with os.Open(), which returns an *os.File that also implements io.Reader, and pass that *os.File value directly to bufio.NewScanner() (so the bufio.Scanner will read from the file and not from an in-memory buffer like in the example above).
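For example, a minimal sketch of that file-backed variant (config.txt is a hypothetical file name):

f, err := os.Open("config.txt")
if err != nil {
    log.Fatal(err)
}
defer f.Close()

m := map[string]string{}
scanner := bufio.NewScanner(f) // *os.File implements io.Reader
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    log.Fatal(err)
}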
1. You may read everything with just one function call, r.ReadAll(), using csv.NewReader from encoding/csv with:
r.Comma = '='
r.TrimLeadingSpace = true
The result is a [][]string, and input order is preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`
    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true
    all, err := r.ReadAll()
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}
Output:
[[Var1 Value1] [Var2 Value2] [Var3 Value3]]
2. You may write your own fine-tuned ReadAll() to convert the output to a map, but then the order is not preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`
    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true
    all, err := ReadAll(r)
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}

func ReadAll(r *csv.Reader) (map[string]string, error) {
    m := make(map[string]string)
    for {
        tmp, err := r.Read()
        if err == io.EOF {
            return m, nil
        }
        if err != nil {
            return nil, err
        }
        m[tmp[0]] = tmp[1]
    }
}
Output:
map[Var2:Value2 Var3:Value3 Var1:Value1]

Pointer problems

TL;DR: Somehow, I am appending a pointer to a list instead of the object itself within a for loop over objects, so at the end the entire slice is composed of the same object multiple times. I just don't know how to fix that.
The Long Way
I am still having a super hard time trying to figure out pointers in Go. I posted a question yesterday and got some help, but now I am stuck on a slightly different issue in the same piece of code.
I am working with the gocql and cqlr Go packages to try and build a small object mapper for my Cassandra queries. Essentially the problem I am having is that I am appending what appears to be a pointer to an object, not a new instance of the object, to the slice. How do I fix that? I have tried adding & and * in front of value, but that doesn't seem to work. The bind function needs an & according to their docs.
Code
type Query struct {
    query       string
    values      interface{}
    attempts    int
    maxAttempts int
    structType  reflect.Type
}

func (query Query) RetryingQuery() (results []interface{}) {
    var q *gocql.Query
    if query.values != nil {
        q = c.Session.Query(query.query, query.values)
    } else {
        q = c.Session.Query(query.query)
    }
    bindQuery := cqlr.BindQuery(q)
    value := reflect.New(query.structType).Interface()
    for bindQuery.Scan(value) {
        fmt.Println(value)
        results = append(results, value)
    }
    return
}
The docs ask for a var value Type declaration; then in the bind you would pass &value. I quoted the docs below.
var t Tweet
var s []Tweet
for b.Scan(&t) {
    // Application specific code goes here
    s = append(s, t)
}
The issue is that I cannot directly write var value query.structType to define its type and then pass a reference to that to bindQuery.Scan().
What is printed
&{result1 x86_64 24 3.2.0-74-generic Linux}
&{result2 x86_64 24 3.19.0-25-generic Linux}
&{result3 x86_64 4 3.13.0-48-generic Linux}
&{result4 x86_64 2 3.13.0-62-generic Linux}
&{result5 x86_64 4 3.13.0-48-generic Linux}
What is in the slice
Spoiler: it is result5 repeated over and over. I understand that I am just appending a pointer to the same object to the list, and that every loop iteration the object is changed, which changes all the results in the slice to that new object. I just don't know how to fix it.
[{"hostname":"result5","machine":"x86_64","num_cpus":4,"release":"3.13.0-48-generic","sysname":"Linux"},{"hostname":"result5","machine":"x86_64","num_cpus":4,"release":"3.13.0-48-generic","sysname":"Linux"},{"hostname":"result5","machine":"x86_64","num_cpus":4,"release":"3.13.0-48-generic","sysname":"Linux"},{"hostname":"result5","machine":"x86_64","num_cpus":4,"release":"3.13.0-48-generic","sysname":"Linux"},{"hostname":"result5","machine":"x86_64","num_cpus":4,"release":"3.13.0-48-generic","sysname":"Linux"}]
Well I can at least tell you what you're doing. bindQuery takes a pointer. It changes the value stored at the address.
What you're essentially doing is this:
package main

import "fmt"

func main() {
    var q int
    myInts := make([]*int, 0, 5)
    for i := 0; i < 5; i++ {
        q = i
        fmt.Printf("%d ", q)
        myInts = append(myInts, &q)
    }
    fmt.Printf("\n")
    for _, value := range myInts {
        fmt.Printf("%d ", *value)
    }
    fmt.Printf("\n")
    fmt.Println(myInts)
}
Which, as you can probably guess, gives you this:
0 1 2 3 4
4 4 4 4 4
[0x104382e0 0x104382e0 0x104382e0 0x104382e0 0x104382e0]
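The usual fix for this demonstration, for comparison, is to declare the variable inside the loop so each iteration gets its own storage and each appended pointer is distinct:

for i := 0; i < 5; i++ {
    q := i // a new variable every iteration, so &q differs each time
    fmt.Printf("%d ", q)
    myInts = append(myInts, &q)
}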
Things get a little more confusing with reflect. You can get your type as an interface, but that is it (unless you want to play with unsafe). An interface, in simple terms, contains a pointer to the original type underneath (and some other stuff). So in your function you are passing a pointer (and some other stuff), and then you're appending the pointer. It might be nice just to get concrete and type-switch your interface. I assume you know what types it could be, in which case you'd have to have something along these lines:
package main

import (
    "fmt"
    "reflect"
)

type foo struct {
    fooval string
}

type bar struct {
    barval string
}

func main() {
    f1 := foo{"hi"}
    f2 := &foo{"hi"}
    b1 := bar{"bye"}
    b2 := &bar{"bye"}
    doSomething(f1)
    doSomething(f2)
    doSomething(b1)
    doSomething(b2)
}

func doSomething(i interface{}) {
    n := reflect.TypeOf(i)
    // get a new one
    newn := reflect.New(n).Interface()
    // find out what we got and handle each case
    switch t := newn.(type) {
    case **foo:
        *t = &foo{"hi!"}
        fmt.Printf("It was a **foo, here is the address %p and here is the value %v\n", *t, **t)
    case **bar:
        *t = &bar{"bye :("}
        fmt.Printf("It was a **bar, here is the address %p and here is the value %v\n", *t, **t)
    case *foo:
        t = &foo{"hey!"}
        fmt.Printf("It was a *foo, here is the address %p and here is the value %v\n", t, *t)
    case *bar:
        t = &bar{"ahh!"}
        fmt.Printf("It was a *bar, here is the address %p and here is the value %v\n", t, *t)
    default:
        panic("AHHHH")
    }
}
You could also just keep calling value = reflect.New(query.structType).Interface() inside the loop, which will give you a new instance every time: reassign value after every append. The last time through the loop would allocate one extra instance, though.
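Applied to the question's loop, that last suggestion would look roughly like this (a sketch against the code above, untested):

value := reflect.New(query.structType).Interface()
for bindQuery.Scan(value) {
    results = append(results, value)
    // allocate a fresh instance so the next Scan doesn't overwrite
    // the value we just appended
    value = reflect.New(query.structType).Interface()
}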

How to discover all package types at runtime?

As far as I'm aware (see here, and here) there is no type discovery mechanism in the reflect package, which expects that you already have an instance of the type or value you want to inspect.
Is there any other way to discover all exported types (especially the structs) in a running go package?
Here's what I wish I had (but it doesn't exist):
import "time"
import "fmt"
func main() {
var types []reflect.Type
types = reflect.DiscoverTypes(time)
fmt.Println(types)
}
The end goal is to be able to discover all the structs of a package that meet certain criteria, then be able to instantiate new instances of those structs.
BTW, a registration function that identifies the types is not a valid approach for my use case.
Whether you think it's a good idea or not, here's why I want this capability (because I know you're going to ask):
I've written a code generation utility that loads go source files and builds an AST to scan for types that embed a specified type. The output of the utility is a set of go test functions based on the discovered types. I invoke this utility using go generate to create the test functions then run go test to execute the generated test functions. Every time the tests change (or a new type is added) I must re-run go generate before re-running go test. This is why a registration function is not a valid option. I'd like to avoid the go generate step but that would require my utility to become a library that is imported by the running package. The library code would need to somehow scan the running namespace during init() for types that embed the expected library type.
In Go 1.5, you can use the new go/types and go/importer packages to inspect binary and source packages. For example:
package main

import (
    "fmt"
    "go/importer"
)

func main() {
    pkg, err := importer.Default().Import("time")
    if err != nil {
        fmt.Printf("error: %s\n", err.Error())
        return
    }
    for _, declName := range pkg.Scope().Names() {
        fmt.Println(declName)
    }
}
You can use the package go/build to extract all the packages installed. Or you can configure the Lookup importer to inspect binaries outside the environment.
Before 1.5, the only non-hacky way is to use the go/ast package to parse the source code.
(see bottom for 2019 update)
Warning: untested and hacky. Can break whenever a new version of Go is released.
It is possible to get all types the runtime knows of by hacking around Go's runtime a little. Include a small assembly file in your own package, containing:
TEXT yourpackage·typelinks(SB), NOSPLIT, $0-0
    JMP reflect·typelinks(SB)
In yourpackage, declare the function prototype (without body):
func typelinks() []*typeDefDummy
Alongside a type definition:
type typeDefDummy struct {
    _      uintptr    // padding
    _      uint64     // padding
    _      [3]uintptr // padding
    StrPtr *string
}
Then just call typelinks, iterate over the slice and read each StrPtr for the name. Seek those starting with yourpackage. Note that if there are two packages called yourpackage in different paths, this method won't work unambiguously.
can I somehow hook into the reflect package to instantiate new instances of those names?
Yeah, assuming d is a value of type *typeDefDummy (note the asterisk, very important):
t := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&d)))
Now t is a reflect.Type value which you can use to instantiate reflect.Values.
Edit: I tested and executed this code successfully and have uploaded it as a gist.
Adjust package names and include paths as necessary.
Update 2019
A lot has changed since I originally posted this answer. Here's a short description of how the same can be done with Go 1.11 in 2019.
$GOPATH/src/tl/tl.go
package tl

import (
    "unsafe"
)

func Typelinks() (sections []unsafe.Pointer, offset [][]int32) {
    return typelinks()
}

func typelinks() (sections []unsafe.Pointer, offset [][]int32)

func Add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer {
    return add(p, x, whySafe)
}

func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer
$GOPATH/src/tl/tl.s
TEXT tl·typelinks(SB), $0-0
    JMP reflect·typelinks(SB)

TEXT tl·add(SB), $0-0
    JMP reflect·add(SB)
main.go
package main

import (
    "fmt"
    "reflect"
    "tl"
    "unsafe"
)

func main() {
    sections, offsets := tl.Typelinks()
    for i, base := range sections {
        for _, offset := range offsets[i] {
            typeAddr := tl.Add(base, uintptr(offset), "")
            typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
            fmt.Println(typ)
        }
    }
}
Happy hacking!
Update 2022 with Go 1.18
With Go 1.18 the accepted answer doesn't work anymore, but I was able to adapt it to use go:linkname. Using this directive and the unsafe package, these internal functions can now be accessed without any extra assembly code.
package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

//go:linkname typelinks reflect.typelinks
func typelinks() (sections []unsafe.Pointer, offset [][]int32)

//go:linkname add reflect.add
func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer

func main() {
    sections, offsets := typelinks()
    for i, base := range sections {
        for _, offset := range offsets[i] {
            typeAddr := add(base, uintptr(offset), "")
            typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
            fmt.Println(typ)
        }
    }
}
Unfortunately, I don't think this is possible. Packages are not "actionable" in Go; you can't "call a function" on them. You can't call a function on a type either, but you can call reflect.TypeOf on an instance of the type and get a reflect.Type, which is a runtime abstraction of a type. There just isn't such a mechanism for packages; there is no reflect.Package.
With that said, you could file an issue about the absence of (and practicality of adding) reflect.PackageOf etc.
Thanks @thwd and @icio; following your direction, it still worked on Go 1.13.6 today.
Following your way, the tl.s will be:
TEXT ·typelinks(SB), $0-0
    JMP reflect·typelinks(SB)
Yes: no package name, and no "add" function in it.
Then, following @icio's way, change the "add" function to:
func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer {
    return unsafe.Pointer(uintptr(p) + x)
}
Then it all works. :)
Version for Go 1.16 (tested with go1.16.7 linux/amd64).
This can only generate code and strings; you have to paste the generated code somewhere and then compile it again.
It works even if only sources are available.
import (
    "fmt"
    "go/ast"
    "unicode"

    "golang.org/x/tools/go/packages"
)

func printTypes() {
    config := &packages.Config{
        Mode: packages.NeedSyntax,
    }
    pkgs, _ := packages.Load(config, "package_name")
    pkg := pkgs[0]
    for _, s := range pkg.Syntax {
        for n, o := range s.Scope.Objects {
            if o.Kind == ast.Typ {
                // check if the type is exported (only needed for non-local types)
                if unicode.IsUpper([]rune(n)[0]) {
                    // note that reflect.ValueOf(*new(%s)) won't work with interfaces
                    fmt.Printf("ProcessType(new(package_name.%s)),\n", n)
                }
            }
        }
    }
}
Full example of a possible use case: https://pastebin.com/ut0zNEAc (doesn't work in online REPLs, but works locally).
After Go 1.11, DWARF debugging symbols include runtime type information via the DW_AT_go_runtime_type attribute, so you can get the runtime type by using that address. You can see more in the gort project.
package main

import (
    "debug/dwarf"
    "fmt"
    "log"
    "os"
    "reflect"
    "runtime"
    "unsafe"

    "github.com/go-delve/delve/pkg/dwarf/godwarf"
    "github.com/go-delve/delve/pkg/proc"
)

func main() {
    path, err := os.Executable()
    if err != nil {
        log.Fatalln(err)
    }
    bi := proc.NewBinaryInfo(runtime.GOOS, runtime.GOARCH)
    err = bi.LoadBinaryInfo(path, 0, nil)
    if err != nil {
        log.Fatalln(err)
    }
    mds, err := loadModuleData(bi, new(localMemory))
    if err != nil {
        log.Fatalln(err)
    }
    types, err := bi.Types()
    if err != nil {
        log.Fatalln(err)
    }
    for _, name := range types {
        dwarfType, err := findType(bi, name)
        if err != nil {
            continue
        }
        typeAddr, err := dwarfToRuntimeType(bi, mds, dwarfType, name)
        if err != nil {
            continue
        }
        typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
        log.Printf("load type name:%s type:%s\n", name, typ)
    }
}

// delve counterpart to runtime.moduledata
type moduleData struct {
    text, etext   uint64
    types, etypes uint64
    typemapVar    *proc.Variable
}

//go:linkname findType github.com/go-delve/delve/pkg/proc.(*BinaryInfo).findType
func findType(bi *proc.BinaryInfo, name string) (godwarf.Type, error)

//go:linkname loadModuleData github.com/go-delve/delve/pkg/proc.loadModuleData
func loadModuleData(bi *proc.BinaryInfo, mem proc.MemoryReadWriter) ([]moduleData, error)

//go:linkname imageToModuleData github.com/go-delve/delve/pkg/proc.(*BinaryInfo).imageToModuleData
func imageToModuleData(bi *proc.BinaryInfo, image *proc.Image, mds []moduleData) *moduleData

type localMemory int

func (mem *localMemory) ReadMemory(data []byte, addr uint64) (int, error) {
    buf := *(*[]byte)(unsafe.Pointer(&reflect.SliceHeader{Data: uintptr(addr), Len: len(data), Cap: len(data)}))
    copy(data, buf)
    return len(data), nil
}

func (mem *localMemory) WriteMemory(addr uint64, data []byte) (int, error) {
    return 0, fmt.Errorf("not support")
}

func dwarfToRuntimeType(bi *proc.BinaryInfo, mds []moduleData, typ godwarf.Type, name string) (typeAddr uint64, err error) {
    if typ.Common().Index >= len(bi.Images) {
        return 0, fmt.Errorf("could not find image for type %s", name)
    }
    img := bi.Images[typ.Common().Index]
    rdr := img.DwarfReader()
    rdr.Seek(typ.Common().Offset)
    e, err := rdr.Next()
    if err != nil {
        return 0, fmt.Errorf("could not find dwarf entry for type:%s err:%s", name, err)
    }
    entryName, ok := e.Val(dwarf.AttrName).(string)
    if !ok || entryName != name {
        return 0, fmt.Errorf("could not find name for type:%s entry:%s", name, entryName)
    }
    off, ok := e.Val(godwarf.AttrGoRuntimeType).(uint64)
    if !ok || off == 0 {
        return 0, fmt.Errorf("could not find runtime type for type:%s", name)
    }
    md := imageToModuleData(bi, img, mds)
    if md == nil {
        return 0, fmt.Errorf("could not find module data for type %s", name)
    }
    typeAddr = md.types + off
    if typeAddr < md.types || typeAddr >= md.etypes {
        return off, nil
    }
    return typeAddr, nil
}
No, there is not.
If you want to 'know' your types, you'll have to register them.
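For illustration, such a registration approach typically amounts to a package-level map keyed by type name (a generic sketch of the pattern, not code from the answers above):

var registry = map[string]reflect.Type{}

// Register records a type so it can be instantiated later by name.
func Register(v interface{}) {
    t := reflect.TypeOf(v)
    registry[t.String()] = t
}

// New returns a pointer to a fresh zero value of a registered type.
func New(name string) (interface{}, bool) {
    t, ok := registry[name]
    if !ok {
        return nil, false
    }
    return reflect.New(t).Interface(), true
}

So Register(time.Time{}) at startup would let you call New("time.Time") later.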

Using reflection with structs to build generic handler function

I am having trouble building a function that can dynamically use parametrized structs. For that reason my code has 20+ functions that are similar except, basically, for one type that gets used. Most of my experience is with Java, where I'd just write basic generic functions, or use a plain Object as the function parameter (and reflection from that point on). I would need something similar, using Go.
I have several types like:
// The List structs are mostly needed for JSON marshalling
type OrangeList struct {
    Oranges []Orange
}

type BananaList struct {
    Bananas []Banana
}

type Orange struct {
    Orange_id string
    Field_1   int
    // The fields are different for different types; I am simplifying the code example
}

type Banana struct {
    Banana_id string
    Field_1   int
    // The fields are different for different types; I am simplifying the code example
}
Then I have a function, basically one for each list type:
// In the end there are 20+ of these; the only difference is basically in two types!
// This is very un-DRY!
func buildOranges(rows *sqlx.Rows) ([]byte, error) {
    oranges := OrangeList{} // This type changes
    for rows.Next() {
        orange := Orange{} // This type changes
        err := rows.StructScan(&orange) // This can handle each case already, could also use reflect myself too
        checkError(err, "rows.Scan")
        oranges.Oranges = append(oranges.Oranges, orange)
    }
    checkError(rows.Err(), "rows.Err")
    jsontext, err := json.Marshal(oranges)
    return jsontext, err
}
Yes, I could change the SQL library to use a more intelligent ORM or framework, but that's beside the point. I want to learn how to build a generic function that can do the same job for all my different types.
I got this far, but it still doesn't work properly (target isn't the expected struct, I think):
func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    tgtValueOf := reflect.ValueOf(tgt)
    tgtType := tgtValueOf.Type()
    targets := reflect.SliceOf(tgtValueOf.Type())
    for rows.Next() {
        target := reflect.New(tgtType)
        err := rows.StructScan(&target) // At this stage target still isn't a 1:1 similar struct, so the StructScan fails... It's some perverted "Value" object instead. Meh.
        // Removed appending to the list because the solutions for that would be similar
        checkError(err, "rows.Scan")
    }
    checkError(rows.Err(), "rows.Err")
    jsontext, err := json.Marshal(targets)
    return jsontext, err
}
So, umm, I would need to give the list type and the vanilla type as parameters, then build one of each, and the rest of my logic would probably be fixable quite easily.
Turns out there's an sqlx.StructScan(rows, &destSlice) function that will do your inner loop, given a slice of the appropriate type. The sqlx docs refer to caching the results of reflection operations, so it may have some additional optimizations compared to writing your own.
Sounds like the immediate question you're actually asking is "how do I get something out of my reflect.Value that rows.StructScan will accept?" And the direct answer is target.Interface(); it should return an interface{} representing an *Orange you can pass directly to StructScan (no additional & operation needed). Then, I think targets = reflect.Append(targets, reflect.Indirect(target)) will turn your target into a reflect.Value representing an Orange and append it to the slice. targets.Interface() should get you an interface{} representing an []Orange that json.Marshal understands. I say all these 'should's and 'I think's because I haven't tried that route.
Reflection, in general, is verbose and slow. Sometimes it's the best or only way to get something done, but it's often worth looking for a way to get your task done without it when you can.
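Putting the route described above together, a sketch of the generic builder might look like this (untested, per the 'should's above; it assumes tgt is passed as a plain struct value such as Orange{}, and it marshals a bare JSON array rather than the question's wrapper structs):

func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    tgtType := reflect.TypeOf(tgt)
    targets := reflect.MakeSlice(reflect.SliceOf(tgtType), 0, 0)
    for rows.Next() {
        target := reflect.New(tgtType) // a reflect.Value holding a *T
        if err := rows.StructScan(target.Interface()); err != nil {
            return nil, err
        }
        targets = reflect.Append(targets, reflect.Indirect(target))
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return json.Marshal(targets.Interface())
}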
So, if it works in your app, you can also convert Rows straight to JSON, without going through intermediate structs. Here's a sample program (requires sqlite3 of course) that turns sql.Rows into map[string]string and then into JSON. (Note it doesn't try to handle NULL, represent numbers as JSON numbers, or generally handle anything that won't fit in a map[string]string.)
package main

import (
    _ "code.google.com/p/go-sqlite/go1/sqlite3"
    "database/sql"
    "encoding/json"
    "os"
)

func main() {
    db, err := sql.Open("sqlite3", "foo")
    if err != nil {
        panic(err)
    }
    tryQuery := func(query string, args ...interface{}) *sql.Rows {
        rows, err := db.Query(query, args...)
        if err != nil {
            panic(err)
        }
        return rows
    }
    tryQuery("drop table if exists t")
    tryQuery("create table t(i integer, j integer)")
    tryQuery("insert into t values(?, ?)", 1, 2)
    tryQuery("insert into t values(?, ?)", 3, 1)

    // now query and serialize
    rows := tryQuery("select * from t")
    names, err := rows.Columns()
    if err != nil {
        panic(err)
    }
    // vals stores the values from one row
    vals := make([]interface{}, 0, len(names))
    for range names {
        vals = append(vals, new(string))
    }
    // rowMaps stores all rows
    rowMaps := make([]map[string]string, 0)
    for rows.Next() {
        rows.Scan(vals...)
        // now make value list into name=>value map
        currRow := make(map[string]string)
        for i, name := range names {
            currRow[name] = *(vals[i].(*string))
        }
        // accumulating rowMaps is the easy way out
        rowMaps = append(rowMaps, currRow)
    }
    json, err := json.Marshal(rowMaps)
    if err != nil {
        panic(err)
    }
    os.Stdout.Write(json)
}
In theory, you could build this to do fewer allocations by reusing the same rowMap each time and using a json.Encoder to append each row's JSON to a buffer. You could go a step further and not use a rowMap at all, just the lists of names and values. I should say I haven't compared the speed against a reflect-based approach, though I know reflect is slow enough that it might be worth comparing them if you can put up with either strategy.
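For instance, that leaner variant might look like this sketch (note it reuses one map and emits one JSON object per line, JSON Lines style, rather than a single array):

enc := json.NewEncoder(os.Stdout)
currRow := make(map[string]string, len(names))
for rows.Next() {
    rows.Scan(vals...)
    for i, name := range names {
        currRow[name] = *(vals[i].(*string))
    }
    // Encode serializes immediately, so reusing currRow across rows is safe
    // and avoids a per-row map allocation
    if err := enc.Encode(currRow); err != nil {
        panic(err)
    }
}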
