How can I make this reflect-based Go code simpler?

I am encoding a rather complex structure using a complicated protocol that mixes ASN.1, a variant of XDR, and other encodings.
I based the implementation on an XDR encoder available on GitHub. The code is reflection based and it works, but I don't like how I implemented the target type switch:
st := ve.Type().String()
switch st {
case "time.Time":
I think the following approach might be better, but I could not get it to work properly:
switch ve.(type) {
case time.Time:
The reason it does not work is that ve is a reflect.Value, not a value of the target type.
The following function provides full context of the code:
func (enc *encoderState) encode(v reflect.Value) {
ve := enc.indirect(v)
st := ve.Type().String()
switch st {
case "time.Time":
log.Println("Handling time.Time")
t, ok := ve.Interface().(time.Time)
if !ok {
enc.err = errors.New("Failed to type assert to time.Time")
return
}
enc.encodeTime(t)
return
case "[]uint8":
log.Println("Handling []uint8")
enc.writeOctetString(ve.Bytes())
return
default:
log.Printf("Handling type: %v by kind: %v\n", st, ve.Kind())
}
// Handle native Go types.
switch ve.Kind() {
case reflect.Uint8: // , reflect.Int8
enc.writeUint8(uint8(ve.Uint()))
return
case reflect.Uint16: // , reflect.Int16
enc.writeUint16(uint16(ve.Uint()))
return
case reflect.Bool:
enc.writeBool(ve.Bool())
return
case reflect.Struct:
enc.encodeStruct(ve)
return
case reflect.Interface:
enc.encodeInterface(ve)
return
}
// The only unhandled types left are unsupported. At the time of this
// writing the only remaining unsupported types that exist are
// reflect.Uintptr and reflect.UnsafePointer.
enc.err = fmt.Errorf("unsupported Go type '%s'", ve.Kind().String())
}
If you know of a better way to switch on both type and kind, please let me know.
Thank you
UPDATE
After reading the solution I adjusted it to a variation that works:
vi := ve.Interface()
switch st := vi.(type) {
case time.Time:
enc.encodeTime(st)
return
case []uint8:
enc.writeOctetString(st)
return
default:
log.Printf("Handling type: %v by kind: %v\n", st, ve.Kind())
}

Use a type switch on the underlying value:
switch v := ve.Interface().(type) {
case time.Time:
log.Println("Handling time.Time")
enc.encodeTime(v)
return
case []byte:
log.Println("Handling []uint8")
enc.writeOctetString(v)
return
case byte:
enc.writeUint8(v)
return
// ... and more types here
default:
log.Printf("Handling type: %v by kind: %v\n", ve.Type(), ve.Kind())
}
playground example
You can also switch on the reflect.Type instead of the string:
switch ve.Type() {
case reflect.TypeOf(time.Time{}):
log.Println("Handling time.Time")
...
case reflect.TypeOf([]byte{}):
log.Println("Handling []uint8")
...
case reflect.TypeOf(uint8(0)):
...
}
playground example
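If the encode path is hot, the reflect.Type values can also be computed once in package-level variables instead of on every call. A rough sketch of that variation (the variable names are mine, not from the original code):
var (
    timeType  = reflect.TypeOf(time.Time{})
    bytesType = reflect.TypeOf([]byte(nil))
)
// inside encode, after ve := enc.indirect(v):
switch ve.Type() {
case timeType:
    enc.encodeTime(ve.Interface().(time.Time))
    return
case bytesType:
    enc.writeOctetString(ve.Bytes())
    return
}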

Related

How to define a function that receives a pointer as argument and returns a pointer to one of the argument's children?

I'm having my first contact with pointers now that I'm learning Go.
But this is getting a little tricky and I'm starting to question whether I'm doing it right or wrong.
The title is my best guess to try and explain what I'm trying to do in a foreign language, so if it's unclear, I can try to explain in a different way.
This is a simplified example of the code: https://play.golang.org/p/eultYp7Cq12
func hasCity(element string, state *State) (bool, *City) {
for _, city := range (*state).Cities {
if (city.Name == element) {
return true, &city
}
}
return false, nil
}
As you can see, the output is:
true &{Campinas}
[{SP [{São Paulo} {Barueri}]}]
But what I'm actually trying to get is:
true &{Campinas}
[{SP [{São Paulo} {Campinas}]}]
So, what am I doing wrong here?
The function returns the address of the loop variable city, which holds a copy of the slice element. Change the code to return the address of the slice element itself:
func hasCity(element string, state *State) (bool, *City) {
for i, city := range state.Cities {
if city.Name == element {
return true, &state.Cities[i]
}
}
return false, nil
}
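For completeness, here is a small self-contained sketch of the fixed function in use; the State and City definitions are assumed to match the ones in the linked playground example:
package main

import "fmt"

type City struct{ Name string }

type State struct {
    Name   string
    Cities []City
}

func hasCity(element string, state *State) (bool, *City) {
    for i, city := range state.Cities {
        if city.Name == element {
            return true, &state.Cities[i]
        }
    }
    return false, nil
}

func main() {
    states := []State{{Name: "SP", Cities: []City{{"São Paulo"}, {"Barueri"}}}}
    if ok, c := hasCity("Barueri", &states[0]); ok {
        // c points into states[0].Cities, so this change is visible through the slice.
        c.Name = "Campinas"
    }
    fmt.Println(states) // [{SP [{São Paulo} {Campinas}]}]
}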

Get a value from map stored in struct

I am having trouble getting a value from a map stored in a struct. Please look at the next part of the code (some lines are skipped):
type Settings struct {
...
LcInfoData *[]LcInfodb
LcInfoLog *MapLcInfoLL
}
type MapLcInfoLL map[string]LcInfoLL
type LcInfoLL struct {
EnableLog string
FileLogPtr *os.File
}
...
func updLogInfo(cnf *Settings) error {
for _, t := range *cnf.LcInfoData {
fpPtr, err := logInit(t.FilepLog)
if err != nil {
exitMsg(1, err.Error())
}
lcMapVal := LcInfoLL{EnableLog: t.EnableLog, FileLogPtr: fpPtr}
lcMap[t.LocationID] = lcMapVal
}
cnf.uLcInfoLog(&lcMap) // at the end
...
}
At the end I get a filled structure to use in another function (it holds global settings). But I can't access the elements inside the map that is stored in the structure. I mean something like this:
v := *cnf.LcInfoLog["index"]
log.Println("ABOUT LOCATION: ", v.FileLogPtr)
Can you help me?
Thank you!
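In case it helps: the index expression binds tighter than the dereference, so with a *MapLcInfoLL field the map pointer has to be dereferenced explicitly before indexing. A reduced, runnable sketch using only the types shown above:
package main

import (
    "log"
    "os"
)

type LcInfoLL struct {
    EnableLog  string
    FileLogPtr *os.File
}

type MapLcInfoLL map[string]LcInfoLL

type Settings struct {
    LcInfoLog *MapLcInfoLL
}

func main() {
    m := MapLcInfoLL{"index": {EnableLog: "yes", FileLogPtr: os.Stdout}}
    cnf := &Settings{LcInfoLog: &m}

    // *cnf.LcInfoLog["index"] does not compile: it parses as *(cnf.LcInfoLog["index"]),
    // and a pointer to a map cannot be indexed. Dereference the map pointer first:
    v := (*cnf.LcInfoLog)["index"]
    log.Println("ABOUT LOCATION: ", v.FileLogPtr)
}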

Concise nil checks for struct field pointers?

Say I have this struct:
type Foo struct {
Bar *string `json:"bar"`
Baz *int64 `json:"baz,omitempty"`
Qux *string `json:"qux"`
Quux string `json:"quux"`
}
After unmarshalling the json, I check for nil like so:
switch {
case f.Bar == nil:
return errors.New("Missing 'bar'")
case f.Baz == nil:
v := int64(42)
f.Baz = &v
case f.Qux == nil:
return errors.New("Missing 'qux'")
}
(or through a series of if statements, etc...)
I understand that I can put all the nil comparisons in one comma separated case, but each nil check will have differing returns.
My question: is there a less verbose way of doing the nil checks?
A question back to you: how much less verbose do you want to get? You want to do different things on different conditions (different fields being nil). Your code contains these different things and the different conditions. Beyond that, the only "redundant" parts of your code are the switch and case keywords. Do you want to leave those out? The rest is not redundant; it is required.
Also note that in Go, cases do not fall through even without a break (unlike in some other languages). So in your example, if f.Baz is nil you will set it to 42 and f.Qux will not be checked (so no error will be returned), but if f.Baz is non-nil and f.Qux is nil, an error will be returned. I know it's just an example, but it's something to keep in mind: handle the errors first if you use a switch, or use if statements, and then an error will be detected and returned regardless of the order of the field checks.
Your code with switch is clean and efficient. If you want to make it less verbose, readability (and performance) will suffer.
You may use a helper function which checks if a pointer value is nil:
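// n reports whether i holds a nil pointer.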
func n(i interface{}) bool {
v := reflect.ValueOf(i)
return v.Kind() == reflect.Ptr && v.IsNil()
}
And using it:
func check(f *Foo) error {
switch {
case n(f.Bar):
return errors.New("Missing 'bar'")
case n(f.Qux):
return errors.New("Missing 'qux'")
case n(f.Baz):
x := int64(42)
f.Baz = &x
}
return nil
}
Or using if statements:
func check2(f *Foo) error {
if n(f.Bar) {
return errors.New("Missing 'bar'")
}
if n(f.Qux) {
return errors.New("Missing 'qux'")
}
if n(f.Baz) {
x := int64(42)
f.Baz = &x
}
return nil
}
Try these on the Go Playground.
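On Go 1.18 or newer, the same helper could also be written without reflection as a generic function; check and check2 compile unchanged with it (a sketch, keeping the name n):
func n[T any](p *T) bool {
    return p == nil
}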

How to force passing parameter as a pointer in Go?

I am implementing an application layer network protocol which uses JSON in Go.
func ReadMessage(conn net.Conn, returnMessage interface{}) bool {
messageBytes := // read from conn
err := json.Unmarshal(messageBytes, &returnMessage)
if err != nil {
return false
}
return true
}
The function takes a struct as its second parameter where the message is unmarshalled. The function can be called like this:
msg := MessageType1{}
ok := ReadMessage(conn, &msg)
Or without the ampersand (&)
msg := MessageType1{}
ok := ReadMessage(conn, msg)
which will compile, but not do what it should: the struct is passed as a copy, not as a reference, so the original msg will remain empty. So I'd like to force passing the struct by reference and catch this error at compile time.
Changing the parameter type to *interface{} will not compile:
cannot use &msg (type *MessageType1) as type *interface {} in function argument:
*interface {} is pointer to interface, not interface
Is there some Go style way of doing this correctly?
There is no way to do this in the function declaration.
You can use reflection though and panic at runtime when the argument is not a pointer.
However maybe you should consider changing the design of your code. The concrete type of the argument should not matter. It either implements the interface you need or not.
Demo: http://play.golang.org/p/7Dw0EkFzbx
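The demo is not reproduced here, but such a runtime check could look roughly like this (a sketch only; it decodes straight from conn to keep it short, and assumes "encoding/json" and "reflect" are imported):
func ReadMessage(conn net.Conn, returnMessage interface{}) bool {
    // Fail fast at runtime if the caller forgot the &.
    if reflect.ValueOf(returnMessage).Kind() != reflect.Ptr {
        panic("ReadMessage: returnMessage must be a pointer")
    }
    if err := json.NewDecoder(conn).Decode(returnMessage); err != nil {
        return false
    }
    return true
}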
Since Go 1.18 you can do this using generics:
func test[T any](dst *T) {
//Do something with dst
}
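Applied to ReadMessage from the question, that could look something like the following sketch (again decoding straight from conn for brevity):
func ReadMessage[T any](conn net.Conn, returnMessage *T) bool {
    return json.NewDecoder(conn).Decode(returnMessage) == nil
}

// ok := ReadMessage(conn, &msg) // compiles
// ok := ReadMessage(conn, msg)  // compile error: MessageType1 does not match *T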
You can't enforce this through an interface parameter either, as *T always has the method set of T, so both T and *T implement the interface.
From the spec:
The method set of any other type T consists of all methods with receiver type T. The method set of the corresponding pointer type *T is the set of all methods with receiver *T or T (that is, it also contains the method set of T).
What you can do instead is to use the language's ability to return multiple values in your function, as Volker already stated:
func ReadMessage(conn net.Conn) (interface{}, bool) {
var returnMessage interface{}
messageBytes := // read from conn
err := json.Unmarshal(messageBytes, &returnMessage)
if err != nil {
return nil, false
}
return returnMessage, true
}
You should also consider not returning type interface{} but some meaningful type.

Using reflection with structs to build generic handler function

I have some trouble building a function that can dynamically use parametrized structs. For that reason my code has 20+ functions that are similar except, basically, for one type that gets used. Most of my experience is with Java, where I'd just write basic generic functions, or use a plain Object parameter (and reflection from that point on). I need something similar in Go.
I have several types like:
// The List structs are mostly needed for json marshalling
type OrangeList struct {
Oranges []Orange
}
type BananaList struct {
Bananas []Banana
}
type Orange struct {
Orange_id string
Field_1 int
// The fields are different for different types, I am simplifying the code example
}
type Banana struct {
Banana_id string
Field_1 int
// The fields are different for different types, I am simplifying the code example
}
Then I have function, basically for each list type:
// In the end there are 20+ of these, the only difference is basically in two types!
// This is very un-DRY!
func buildOranges(rows *sqlx.Rows) ([]byte, error) {
oranges := OrangeList{} // This type changes
for rows.Next() {
orange := Orange{} // This type changes
err := rows.StructScan(&orange) // This can handle each case already, could also use reflect myself too
checkError(err, "rows.Scan")
oranges.Oranges = append(oranges.Oranges,orange)
}
checkError(rows.Err(), "rows.Err")
jsontext, err := json.Marshal(oranges)
return jsontext, err
}
Yes, I could change the SQL library to use a more intelligent ORM or framework, but that's beside the point. I want to learn how to build a generic function that can handle a similar task for all my different types.
I got this far, but it still doesn't work properly (target isn't the expected struct, I think):
func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
tgtValueOf := reflect.ValueOf(tgt)
tgtType := tgtValueOf.Type()
targets := reflect.SliceOf(tgtValueOf.Type())
for rows.Next() {
target := reflect.New(tgtType)
err := rows.StructScan(&target) // At this stage target still isn't a 1:1 similar struct, so the StructScan fails... It's some "Value" wrapper object instead. Meh.
// Removed appending to the list because the solutions for that would be similar
checkError(err, "rows.Scan")
}
checkError(rows.Err(), "rows.Err")
jsontext, err := json.Marshal(targets)
return jsontext, err
}
So, I would need to give the list type and the vanilla type as parameters, then build one of each, and the rest of my logic would probably be fixable quite easily.
Turns out there's an sqlx.StructScan(rows, &destSlice) function that will do your inner loop, given a slice of the appropriate type. The sqlx docs refer to caching the results of reflection operations, so it may have some additional optimizations compared to writing your own.
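That collapses each build function into something like the following sketch (the wrapper list structs from the question are left out, and destSlice would be, for example, &[]Orange{}):
func buildList(rows *sqlx.Rows, destSlice interface{}) ([]byte, error) {
    // sqlx.StructScan scans every remaining row into the slice pointed to by destSlice.
    if err := sqlx.StructScan(rows, destSlice); err != nil {
        return nil, err
    }
    return json.Marshal(destSlice)
}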
Sounds like the immediate question you're actually asking is "how do I get something out of my reflect.Value that rows.StructScan will accept?" And the direct answer is target.Interface(); it should return an interface{} representing an *Orange that you can pass directly to StructScan (no additional & operation needed). Then, I think targets = reflect.Append(targets, target.Elem()) will turn your target into a reflect.Value representing an Orange and append it to the slice. targets.Interface() should get you an interface{} representing an []Orange that json.Marshal understands. I say all these 'should's and 'I think's because I haven't tried that route.
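Put together, that route could look roughly like this (also untried; tgt is a zero value such as Orange{}, and the JSON wrapper struct from the question is omitted):
func buildWhatever(rows *sqlx.Rows, tgt interface{}) ([]byte, error) {
    tgtType := reflect.TypeOf(tgt)                               // e.g. Orange
    targets := reflect.MakeSlice(reflect.SliceOf(tgtType), 0, 0) // []Orange as a reflect.Value
    for rows.Next() {
        target := reflect.New(tgtType) // *Orange as a reflect.Value
        if err := rows.StructScan(target.Interface()); err != nil {
            return nil, err
        }
        targets = reflect.Append(targets, target.Elem())
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return json.Marshal(targets.Interface()) // marshals the []Orange
}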
Reflection, in general, is verbose and slow. Sometimes it's the best or only way to get something done, but it's often worth looking for a way to get your task done without it when you can.
So, if it works in your app, you can also convert Rows straight to JSON, without going through intermediate structs. Here's a sample program (requires sqlite3 of course) that turns sql.Rows into map[string]string and then into JSON. (Note it doesn't try to handle NULL, represent numbers as JSON numbers, or generally handle anything that won't fit in a map[string]string.)
package main
import (
_ "code.google.com/p/go-sqlite/go1/sqlite3"
"database/sql"
"encoding/json"
"os"
)
func main() {
db, err := sql.Open("sqlite3", "foo")
if err != nil {
panic(err)
}
tryQuery := func(query string, args ...interface{}) *sql.Rows {
rows, err := db.Query(query, args...)
if err != nil {
panic(err)
}
return rows
}
tryQuery("drop table if exists t")
tryQuery("create table t(i integer, j integer)")
tryQuery("insert into t values(?, ?)", 1, 2)
tryQuery("insert into t values(?, ?)", 3, 1)
// now query and serialize
rows := tryQuery("select * from t")
names, err := rows.Columns()
if err != nil {
panic(err)
}
// vals stores the values from one row
vals := make([]interface{}, 0, len(names))
for range names {
vals = append(vals, new(string))
}
// rowMaps stores all rows
rowMaps := make([]map[string]string, 0)
for rows.Next() {
if err := rows.Scan(vals...); err != nil {
panic(err)
}
// now make value list into name=>value map
currRow := make(map[string]string)
for i, name := range names {
currRow[name] = *(vals[i].(*string))
}
// accumulating rowMaps is the easy way out
rowMaps = append(rowMaps, currRow)
}
json, err := json.Marshal(rowMaps)
if err != nil {
panic(err)
}
os.Stdout.Write(json)
}
In theory, you could build this to do fewer allocations by not accumulating rowMaps, instead reusing a single row map and using a json.Encoder to append each row's JSON to the output as you go. You could go a step further and not use a row map at all, just the lists of names and values. I should say I haven't compared the speed against a reflect-based approach, though I know reflect is slow enough that it might be worth comparing them if you can put up with either strategy.
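That streaming variant could look roughly like this, replacing the rowMaps accumulation in the program above (note it emits one JSON object per row rather than a single array):
enc := json.NewEncoder(os.Stdout)
currRow := make(map[string]string, len(names))
for rows.Next() {
    if err := rows.Scan(vals...); err != nil {
        panic(err)
    }
    for i, name := range names {
        currRow[name] = *(vals[i].(*string))
    }
    // Encode writes the map as one JSON object followed by a newline.
    if err := enc.Encode(currRow); err != nil {
        panic(err)
    }
}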
