Convert a MySQL datetime string to time.Time format

I just can't manage to parse an SQL datetime (MySQL) value into a time.Time value. I can't find a layout that fits the SQL datetime format, and I don't really understand how these layouts work.
I imagine I'm not the first to struggle with this, but I can't really find out how to make this work.
Input:
2015-12-23 00:00:00
Desired output:
1450825200
Code
// time.SomeSqlDateTimeLayout doesn't exist; this is the layout I'm looking for
t, err := time.Parse(time.SomeSqlDateTimeLayout, "2015-12-23 00:00:00")
timestamp := t.Unix()

You can create your own time layout for parsing if a suitable one does not exist in the standard library.
package main

import (
    "fmt"
    "time"
)

func main() {
    layout := "2006-01-02 15:04:05"
    str := "2015-12-23 00:00:00"

    t, err := time.Parse(layout, str)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(t.Unix())
}
Output
1450828800
I don't know where the official documentation for the time format is, but you can find the reference layouts in src/time/format.go, from line 64.
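Note that time.Parse interprets the input as UTC, which is why the output above is 1450828800 rather than the question's desired 1450825200 (a UTC+1 timestamp). If the MySQL value is stored in a specific time zone, here is a minimal sketch using time.ParseInLocation (Europe/Berlin is only an example zone; substitute whatever zone your server uses):
package main

import (
    "fmt"
    "time"
)

func main() {
    layout := "2006-01-02 15:04:05"
    // "Europe/Berlin" is only an example zone (UTC+1 in December)
    loc, err := time.LoadLocation("Europe/Berlin")
    if err != nil {
        panic(err)
    }
    t, err := time.ParseInLocation(layout, "2015-12-23 00:00:00", loc)
    if err != nil {
        panic(err)
    }
    fmt.Println(t.Unix()) // 1450825200
}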

Indeed, I'm not aware of any ISO 8601 parsing support in Go's standard library.
Let's use RFC 3339, which is the closest:
package main

import (
    "fmt"
    "strings"
    "time"
)

func main() {
    // convert the ISO 8601 style input into RFC 3339 format
    rfc3339t := strings.Replace("2015-12-23 00:00:00", " ", "T", 1) + "Z"

    // parse the RFC 3339 datetime
    t, err := time.Parse(time.RFC3339, rfc3339t)
    if err != nil {
        panic(err)
    }

    // convert to Unix time in milliseconds
    ut := t.UnixNano() / int64(time.Millisecond)
    fmt.Println(ut)
}
Output
1450828800000
Playground: http://play.golang.org/p/HxZCpxmjvg
Hope this helps!

Related

Flexible date/time parsing in Go (Adding default values in parsing)

Further to this question, I want to parse a date/time passed on the command line to a Go program. At the moment, I use the flag package to populate a string variable ts and then the following code:
if ts == "" {
    config.Until = time.Now()
} else {
    const layout = "2006-01-02T15:04:05"
    if config.Until, err = time.Parse(layout, ts); err != nil {
        log.Errorf("Could not parse %s as a time string: %s. Using current date/time instead.", ts, err.Error())
        config.Until = time.Now()
    }
}
This works OK, provided the user passes exactly the right format - e.g. 2019-05-20T09:07:33 or some such.
However, what I would like, if possible, is the flexibility to pass e.g. 2019-05-20T09:07 or 2019-05-20T09 or maybe even 2019-05-20 and have the hours, minutes and seconds default to 0 where appropriate.
Is there a sane¹ way to do this?
¹ not requiring me to essentially write my own parser
UPDATE
I've kind of got a solution to this. Although it's not particularly elegant, it does appear to work for most of the cases I'm likely to encounter.
package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02T15:04:05"
    var l string
    var input string

    for _, input = range []string{"2019-05-30", "2019-05-30T16", "2019-05-30T16:04", "2019-05-30T16:04:34",
        "This won't work", "This is extravagantly long and won't work either"} {
        if len(input) < len(layout) {
            l = layout[:len(input)]
        } else {
            l = layout
        }
        if d, err := time.Parse(l, input); err != nil {
            fmt.Printf("Error %s\n", err.Error())
        } else {
            fmt.Printf("Layout %-20s gives time %v\n", l, d)
        }
    }
}
Just try each format, until one works. If none work, return an error.
var formats = []string{"2006-01-02T15:04:05", "2006-01-02", ...}

func parseTime(input string) (time.Time, error) {
    for _, format := range formats {
        t, err := time.Parse(format, input)
        if err == nil {
            return t, nil
        }
    }
    return time.Time{}, errors.New("Unrecognized time format")
}
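For example, a quick usage sketch (assuming the formats slice above; error handling via log.Fatal for brevity):
t, err := parseTime("2019-05-30")
if err != nil {
    log.Fatal(err)
}
fmt.Println(t) // 2019-05-30 00:00:00 +0000 UTC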
I think this library is what you are looking for: https://github.com/araddon/dateparse
It parses many date strings without knowing the format in advance, using a scanner to read bytes and a state machine to find the format.
t, err := dateparse.ParseAny("3/1/2014")
In the specific scenario you describe, you could check the length of the input datestamp and pad the end of it with zeros to match your layout. In other words, append as much of the string "T00:00:00" (counting from the end) as the input is missing in length compared to the layout string, as sketched below.
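A minimal sketch of that padding idea (the normalize helper is hypothetical, not part of any library):
const layout = "2006-01-02T15:04:05"
const tail = "T00:00:00"

// normalize appends the missing tail of "T00:00:00" so that short
// inputs such as "2019-05-20" or "2019-05-20T09" match the full layout.
func normalize(input string) string {
    if missing := len(layout) - len(input); missing > 0 && missing <= len(tail) {
        input += tail[len(tail)-missing:]
    }
    return input
}
time.Parse(layout, normalize(ts)) then handles all the shorter variants with hours, minutes and seconds defaulting to zero.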

Is there a better way to parse this Map?

I'm fairly new to Go. In the actual code I'm writing, I plan to read from a file containing environment variables, e.g. API_KEY=XYZ, so that I can keep them out of version control. The solution below 'works', but I feel like there is probably a better way of doing it.
The end goal is to be able to access the elements from the file like so:
m["API_KEY"] would give XYZ. This may already exist and I'm reinventing the wheel; I saw Go has environment variables, but they didn't seem to be what I was after specifically.
So any help is appreciated.
Playground
Code:
package main

import (
    "fmt"
    "strings"
)

var m = make(map[string]string)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    arr := strings.Split(text, "\n")
    for _, value := range arr {
        tmp := strings.Split(value, "=")
        m[strings.TrimSpace(tmp[0])] = strings.TrimSpace(tmp[1])
    }
    fmt.Println(m)
}
First, I would recommend reading this related question: How to handle configuration in Go
Next, I would really consider storing your configuration in another format, because what you propose isn't a standard. It's close to Java's property file format (.properties), but even property files may contain Unicode escape sequences, so your code is not a valid .properties parser as it doesn't handle those sequences at all.
Instead I would recommend using JSON, so you can easily parse it with Go or with any other language; there are many tools to edit JSON texts, and it is still human-friendly.
Going with the JSON format, decoding it into a map is just one function call: json.Unmarshal(). It could look like this:
text := `{"Var1":"Value1", "Var2":"Value2", "Var3":"Value3"}`

var m map[string]string
if err := json.Unmarshal([]byte(text), &m); err != nil {
    fmt.Println("Invalid config file:", err)
    return
}
fmt.Println(m)
Output (try it on the Go Playground):
map[Var1:Value1 Var2:Value2 Var3:Value3]
The json package will handle formatting and escaping for you, so you don't have to worry about any of that, and it will also detect and report errors for you. JSON is also more flexible: your config may contain numbers, texts, arrays, etc. All of those come for "free" just because you chose the JSON format.
Another popular format for configuration is YAML, but the Go standard library does not include a YAML parser; see the Go implementation at github.com/go-yaml/yaml.
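For completeness, a minimal sketch with that package (this assumes the gopkg.in/yaml.v3 import path and a YAML-formatted config; adjust to your setup):
package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

func main() {
    text := `
Var1: Value1
Var2: Value2
Var3: Value3
`
    var m map[string]string
    if err := yaml.Unmarshal([]byte(text), &m); err != nil {
        fmt.Println("Invalid config file:", err)
        return
    }
    fmt.Println(m)
}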
If you don't want to change your format, then I would just use the code you posted, because it does exactly what you want: it processes the input line by line and parses a name=value pair from each line, and it does so in a clear and obvious way. Using a CSV (or any other) reader for this purpose is bad because such readers hide what's under the hood (they intentionally and rightfully hide format-specific details and transformations). A CSV reader is a CSV reader first: even if you change the comma/tabulator symbol, it will interpret certain escape sequences and may give you different data than what you see in a plain text editor. That is unintended behavior from your point of view, but hey, your input is not in CSV format and yet you asked a reader to interpret it as CSV!
One improvement I would add to your solution is the use of bufio.Scanner. It can be used to read an input line-by-line, and it handles different styles of newline sequences. It could look like this:
text := `Var1=Value1
Var2=Value2
Var3=Value3`

scanner := bufio.NewScanner(strings.NewReader(text))
m := map[string]string{}
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    fmt.Println("Error encountered:", err)
}
fmt.Println(m)
Output is the same. Try it on the Go Playground.
Using bufio.Scanner has another advantage: bufio.NewScanner() accepts an io.Reader, the general interface for "all things that are a source of bytes". This means that if your config is stored in a file, you don't even have to read the whole config into memory: you can just open the file, e.g. with os.Open(), which returns an *os.File that also implements io.Reader, and pass that value directly to bufio.NewScanner() (so the bufio.Scanner will read from the file rather than from an in-memory buffer as in the example above), as sketched below.
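A minimal sketch of that file-based variant (the config.env file name is just an example; error handling via log.Fatal for brevity). strings.SplitN is used here so that values containing an '=' are kept intact:
f, err := os.Open("config.env") // example file name
if err != nil {
    log.Fatal(err)
}
defer f.Close()

m := map[string]string{}
scanner := bufio.NewScanner(f)
for scanner.Scan() {
    parts := strings.SplitN(scanner.Text(), "=", 2)
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    log.Fatal(err)
}
fmt.Println(m)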
1- You may read everything with just one function call, r.ReadAll(), using csv.NewReader from encoding/csv with:
r.Comma = '='
r.TrimLeadingSpace = true
The result is [][]string and the input order is preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true

    all, err := r.ReadAll()
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}
output:
[[Var1 Value1] [Var2 Value2] [Var3 Value3]]
2- You may write your own ReadAll() on top of csv.Reader to convert the output to a map, but then the order is not preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true

    all, err := ReadAll(r)
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}

// ReadAll reads every record from r into a map of name to value.
func ReadAll(r *csv.Reader) (map[string]string, error) {
    m := make(map[string]string)
    for {
        tmp, err := r.Read()
        if err == io.EOF {
            return m, nil
        }
        if err != nil {
            return nil, err
        }
        m[tmp[0]] = tmp[1]
    }
}
output:
map[Var2:Value2 Var3:Value3 Var1:Value1]

How to discover all package types at runtime?

As far as I'm aware (see here, and here) there is no type discovery mechanism in the reflect package, which expects that you already have an instance of the type or value you want to inspect.
Is there any other way to discover all exported types (especially the structs) in a running go package?
Here's what I wish I had (but it doesn't exist):
import "time"
import "fmt"
func main() {
var types []reflect.Type
types = reflect.DiscoverTypes(time)
fmt.Println(types)
}
The end goal is to be able to discover all the structs of a package that meet certain criteria, then be able to instantiate new instances of those structs.
BTW, a registration function that identifies the types is not a valid approach for my use case.
Whether you think it's a good idea or not, here's why I want this capability (because I know you're going to ask):
I've written a code generation utility that loads go source files and builds an AST to scan for types that embed a specified type. The output of the utility is a set of go test functions based on the discovered types. I invoke this utility using go generate to create the test functions then run go test to execute the generated test functions. Every time the tests change (or a new type is added) I must re-run go generate before re-running go test. This is why a registration function is not a valid option. I'd like to avoid the go generate step but that would require my utility to become a library that is imported by the running package. The library code would need to somehow scan the running namespace during init() for types that embed the expected library type.
In Go 1.5, you can use the new go/types and go/importer packages to inspect binary and source packages. For example:
package main

import (
    "fmt"
    "go/importer"
)

func main() {
    pkg, err := importer.Default().Import("time")
    if err != nil {
        fmt.Printf("error: %s\n", err.Error())
        return
    }
    for _, declName := range pkg.Scope().Names() {
        fmt.Println(declName)
    }
}
You can use the go/build package to discover the packages that are installed, or you can configure the Lookup importer to inspect binaries outside the environment; a small sketch of the go/build side follows.
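A minimal sketch of go/build usage (this loads one package's metadata such as its directory and source files; listing everything installed would mean walking build.Default.SrcDirs()):
pkg, err := build.Import("time", "", 0)
if err != nil {
    log.Fatal(err)
}
fmt.Println(pkg.Dir, pkg.GoFiles)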
Before 1.5, the only non-hacky way is to use the go/ast package to parse the source code.
(see bottom for 2019 update)
Warning: untested and hacky. Can break whenever a new version of Go is released.
It is possible to get all types the runtime knows of by hacking around Go's runtime a little. Include a small assembly file in your own package, containing:
TEXT yourpackage·typelinks(SB), NOSPLIT, $0-0
JMP reflect·typelinks(SB)
In yourpackage, declare the function prototype (without body):
func typelinks() []*typeDefDummy
Alongside a type definition:
type typeDefDummy struct {
    _      uintptr    // padding
    _      uint64     // padding
    _      [3]uintptr // padding
    StrPtr *string
}
Then just call typelinks, iterate over the slice and read each StrPtr for the name. Seek those starting with yourpackage. Note that if there are two packages called yourpackage in different paths, this method won't work unambiguously.
can I somehow hook into the reflect package to instantiate new instances of those names?
Yeah, assuming d is a value of type *typeDefDummy (note the asterisk, very important):
t := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&d)))
Now t is a reflect.Type value which you can use to instantiate reflect.Values.
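From there, instantiation is ordinary reflection; a minimal sketch (assuming t was obtained as above):
v := reflect.New(t)              // reflect.Value holding a *T
instance := v.Elem().Interface() // zero value of T as an interface{}
fmt.Println(instance)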
Edit: I tested and executed this code successfully and have uploaded it as a gist.
Adjust package names and include paths as necessary.
Update 2019
A lot has changed since I originally posted this answer. Here's a short description of how the same can be done with Go 1.11 in 2019.
$GOPATH/src/tl/tl.go
package tl

import (
    "unsafe"
)

func Typelinks() (sections []unsafe.Pointer, offset [][]int32) {
    return typelinks()
}

func typelinks() (sections []unsafe.Pointer, offset [][]int32)

func Add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer {
    return add(p, x, whySafe)
}

func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer
$GOPATH/src/tl/tl.s
TEXT tl·typelinks(SB), $0-0
JMP reflect·typelinks(SB)
TEXT tl·add(SB), $0-0
JMP reflect·add(SB)
main.go
package main

import (
    "fmt"
    "reflect"
    "tl"
    "unsafe"
)

func main() {
    sections, offsets := tl.Typelinks()
    for i, base := range sections {
        for _, offset := range offsets[i] {
            typeAddr := tl.Add(base, uintptr(offset), "")
            typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
            fmt.Println(typ)
        }
    }
}
Happy hacking!
Update 2022 with Go 1.18
With Go 1.18 the accepted answer doesn't work anymore, but I could adapt it to use go:linkname. Using this directive and the unsafe package these internal functions can now be accessed without any extra assembly code.
package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

//go:linkname typelinks reflect.typelinks
func typelinks() (sections []unsafe.Pointer, offset [][]int32)

//go:linkname add reflect.add
func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer

func main() {
    sections, offsets := typelinks()
    for i, base := range sections {
        for _, offset := range offsets[i] {
            typeAddr := add(base, uintptr(offset), "")
            typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
            fmt.Println(typ)
        }
    }
}
Unfortunately, I don't think this is possible. Packages are not "actionable" in Go: you can't "call a function" on them. You can't call a function on a type either, but you can call reflect.TypeOf on an instance of the type and get a reflect.Type, which is a runtime abstraction of a type. There just isn't such a mechanism for packages; there is no reflect.Package.
With that said, you could file an issue about the absence of (and practicality of adding) reflect.PackageOf etc.
Thanks @thwd and @icio; following your direction, it still works on Go 1.13.6 today.
Following your approach, the tl.s becomes:
TEXT ·typelinks(SB), $0-0
JMP reflect·typelinks(SB)
Yes: no package name, and no "add" function in it.
Then, following @icio's approach, change the "add" function to:
func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer {
    return unsafe.Pointer(uintptr(p) + x)
}
Then everything works. :)
Version for Go 1.16 (tested with go1.16.7 linux/amd64).
This approach can only generate code and strings: you have to paste the generated code somewhere and then compile it again.
It works if only sources are available.
import (
    "fmt"
    "go/ast"
    "reflect"
    "time"
    "unicode"

    "golang.org/x/tools/go/packages"
)

func printTypes() {
    config := &packages.Config{
        Mode: packages.NeedSyntax,
    }
    pkgs, _ := packages.Load(config, "package_name")
    pkg := pkgs[0]
    for _, s := range pkg.Syntax {
        for n, o := range s.Scope.Objects {
            if o.Kind == ast.Typ {
                // check if the type is exported (only needed for non-local types)
                if unicode.IsUpper([]rune(n)[0]) {
                    // note that reflect.ValueOf(*new(%s)) won't work with interfaces
                    fmt.Printf("ProcessType(new(package_name.%s)),\n", n)
                }
            }
        }
    }
}
full example of possible use case: https://pastebin.com/ut0zNEAc (doesn't work in online repls, but works locally)
Since Go 1.11, the DWARF debugging symbols include runtime type information, so you can obtain the runtime type via the address stored in the DW_AT_go_runtime_type attribute.
See the gort project for more on this approach.
package main

import (
    "debug/dwarf"
    "fmt"
    "log"
    "os"
    "reflect"
    "runtime"
    "unsafe"

    "github.com/go-delve/delve/pkg/dwarf/godwarf"
    "github.com/go-delve/delve/pkg/proc"
)

func main() {
    path, err := os.Executable()
    if err != nil {
        log.Fatalln(err)
    }
    bi := proc.NewBinaryInfo(runtime.GOOS, runtime.GOARCH)
    err = bi.LoadBinaryInfo(path, 0, nil)
    if err != nil {
        log.Fatalln(err)
    }
    mds, err := loadModuleData(bi, new(localMemory))
    if err != nil {
        log.Fatalln(err)
    }
    types, err := bi.Types()
    if err != nil {
        log.Fatalln(err)
    }
    for _, name := range types {
        dwarfType, err := findType(bi, name)
        if err != nil {
            continue
        }
        typeAddr, err := dwarfToRuntimeType(bi, mds, dwarfType, name)
        if err != nil {
            continue
        }
        typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
        log.Printf("load type name:%s type:%s\n", name, typ)
    }
}

// delve counterpart to runtime.moduledata
type moduleData struct {
    text, etext   uint64
    types, etypes uint64
    typemapVar    *proc.Variable
}

//go:linkname findType github.com/go-delve/delve/pkg/proc.(*BinaryInfo).findType
func findType(bi *proc.BinaryInfo, name string) (godwarf.Type, error)

//go:linkname loadModuleData github.com/go-delve/delve/pkg/proc.loadModuleData
func loadModuleData(bi *proc.BinaryInfo, mem proc.MemoryReadWriter) ([]moduleData, error)

//go:linkname imageToModuleData github.com/go-delve/delve/pkg/proc.(*BinaryInfo).imageToModuleData
func imageToModuleData(bi *proc.BinaryInfo, image *proc.Image, mds []moduleData) *moduleData

type localMemory int

func (mem *localMemory) ReadMemory(data []byte, addr uint64) (int, error) {
    buf := *(*[]byte)(unsafe.Pointer(&reflect.SliceHeader{Data: uintptr(addr), Len: len(data), Cap: len(data)}))
    copy(data, buf)
    return len(data), nil
}

func (mem *localMemory) WriteMemory(addr uint64, data []byte) (int, error) {
    return 0, fmt.Errorf("not support")
}

func dwarfToRuntimeType(bi *proc.BinaryInfo, mds []moduleData, typ godwarf.Type, name string) (typeAddr uint64, err error) {
    if typ.Common().Index >= len(bi.Images) {
        return 0, fmt.Errorf("could not find image for type %s", name)
    }
    img := bi.Images[typ.Common().Index]
    rdr := img.DwarfReader()
    rdr.Seek(typ.Common().Offset)
    e, err := rdr.Next()
    if err != nil {
        return 0, fmt.Errorf("could not find dwarf entry for type:%s err:%s", name, err)
    }
    entryName, ok := e.Val(dwarf.AttrName).(string)
    if !ok || entryName != name {
        return 0, fmt.Errorf("could not find name for type:%s entry:%s", name, entryName)
    }
    off, ok := e.Val(godwarf.AttrGoRuntimeType).(uint64)
    if !ok || off == 0 {
        return 0, fmt.Errorf("could not find runtime type for type:%s", name)
    }
    md := imageToModuleData(bi, img, mds)
    if md == nil {
        return 0, fmt.Errorf("could not find module data for type %s", name)
    }
    typeAddr = md.types + off
    if typeAddr < md.types || typeAddr >= md.etypes {
        return off, nil
    }
    return typeAddr, nil
}
No, there is not.
If you want to 'know' your types, you'll have to register them, for example with a small registry like the sketch below.
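A minimal sketch of such a registry (the package, names and helpers here are hypothetical, not an existing library):
package registry

import "reflect"

var types = map[string]reflect.Type{}

// Register records a type by its name, e.g. Register(MyStruct{}).
func Register(v interface{}) {
    t := reflect.TypeOf(v)
    types[t.Name()] = t
}

// New returns a pointer to a new zero value of the named type, or nil if unknown.
func New(name string) interface{} {
    t, ok := types[name]
    if !ok {
        return nil
    }
    return reflect.New(t).Interface()
}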

Convert formatted time to UTC milliseconds

How to convert time in format
2009-01-01T01:02:01.111+02:00
to UTC in milliseconds?
Is there already a package for this conversion? I looked at https://golang.org/src/time/format.go but couldn't find the same format to convert.
Use time.Parse.
Demo: http://play.golang.org/p/ouiDtIVjQI
package main

import (
    "fmt"
    "time"
)

func main() {
    t, e := time.Parse(`2006-01-02T15:04:05.000-07:00`, `2009-01-01T01:02:01.111+02:00`)
    if e != nil {
        panic(e)
    }
    fmt.Println(t.UTC().UnixNano() / 1000000)
}
Use the format string 2006-01-02T15:04:05.000-07:00 for the reference date.
The format is pretty standard ISO 8601, so you can use the time.RFC3339 layout, e.g.
t, e := time.Parse(time.RFC3339, "2009-01-01T01:02:01.111+02:00")
playground example
...and proceed with .UnixNano() as in thwd's answer. More predefined layouts can be found in src/time/format.go.
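Put together, a minimal sketch of that combination:
package main

import (
    "fmt"
    "time"
)

func main() {
    t, err := time.Parse(time.RFC3339, "2009-01-01T01:02:01.111+02:00")
    if err != nil {
        panic(err)
    }
    fmt.Println(t.UTC().UnixNano() / int64(time.Millisecond)) // 1230764521111
}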

Obtaining a Unix Timestamp in Go Language (current time in seconds since epoch)

I have some code written in Go which I am trying to update to work with the latest weekly builds. (It was last built under r60). Everything is now working except for the following bit:
if t, _, err := os.Time(); err == nil {
    port[5] = int32(t)
}
Any advice on how to update this to work with the current Go implementation?
import "time"
...
port[5] = int32(time.Now().Unix()) // int32 conversion, matching the original port slice
If you want it as a string, just convert it via strconv:
package main

import (
    "fmt"
    "strconv"
    "time"
)

func main() {
    timestamp := strconv.FormatInt(time.Now().UTC().UnixNano(), 10)
    fmt.Println(timestamp) // prints: 1436773875771421417
}
Another tip: time.Now().UnixNano() (godoc) will give you nanoseconds since the epoch. It's not strictly Unix time, but it gives you sub-second precision using the same epoch, which can be handy.
Edit: Changed to match current golang api
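For example, to derive millisecond precision from it (a small sketch; on Go 1.17+ time.Now().UnixMilli() does the same):
ms := time.Now().UnixNano() / int64(time.Millisecond)
fmt.Println(ms) // e.g. 1436773875771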
Building on the idea from another answer here, to get a human-readable interpretation, you can use:
package main

import (
    "fmt"
    "time"
)

func main() {
    timestamp := time.Unix(time.Now().Unix(), 0)
    fmt.Printf("%v", timestamp) // prints: 2009-11-10 23:00:00 +0000 UTC
}
Try it in The Go Playground.
