How to update sqlite using go without other libraries - sqlite

Hi, I've been struggling all day to find a way to update secretQuestion and secretAnswer in my user database in SQLite using Go. What I have in my actual file is:
r.ParseForm()
id := r.URL.Query().Get("id")
secretQuestion := r.Form.Get("question")
secretAnswer, _ := bcrypt.GenerateFromPassword([]byte(r.Form.Get("answer")), 14)
//
database.Db, err = sql.Open("sqlite3", "./database/database.db")
if err != nil {
    panic(err)
}
//
result, _ := database.Db.Prepare("UPDATE users SET secretQuestion = ?, secretAnswer = ? WHERE id = ?")
result.Exec(secretQuestion, secretAnswer, id)
I haven't found a single approach that works, and I've tried a good amount. Attempts like this one compile and don't give an error (I checked by recovering the err), but after opening my database secretQuestion and secretAnswer are still nil. Note that the values I pass in are not nil; I already checked that.
Thanks in advance for the help! I'm not used to using forums, so feel free to tell me if I need to add anything.

This works for me:
package main

import (
    "database/sql"

    _ "github.com/mattn/go-sqlite3"
)

func main() {
    d, e := sql.Open("sqlite3", "file.db")
    if e != nil {
        panic(e)
    }
    defer d.Close()
    d.Exec("UPDATE artist_t SET check_s = ? WHERE artist_n = ?", "2021-05-20", 42)
}
https://github.com/mattn/go-sqlite3
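Applied to the table from the question, a sketch with errors checked at every step (it assumes the same secretQuestion, secretAnswer and id variables from your handler and the mattn/go-sqlite3 driver) could look like this:
db, err := sql.Open("sqlite3", "./database/database.db")
if err != nil {
    panic(err)
}
defer db.Close()

res, err := db.Exec(
    "UPDATE users SET secretQuestion = ?, secretAnswer = ? WHERE id = ?",
    secretQuestion, secretAnswer, id,
)
if err != nil {
    panic(err) // surfaces problems such as a wrong path or a locked database
}
// RowsAffected reveals whether the WHERE clause matched anything at all;
// an UPDATE that matches zero rows "succeeds" without changing the table.
if n, _ := res.RowsAffected(); n == 0 {
    fmt.Println("no row with that id was updated")
}
Checking RowsAffected is often the quickest way to spot that the id coming from r.URL.Query().Get("id") doesn't match any row, which leaves the columns untouched even though no error is returned.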

Related

Downloading content with range request corrupts

I have set up a basic project on Github: https://github.com/kounelios13/range-download.
Essentially this project tries to download a file using HTTP Range requests, assemble it, and save it back to disk. I am trying to follow this article (without the goroutines for the time being). When I try to download the file using range requests, the final size, after all the requests' data are combined, is bigger than the original size, and the final file is corrupted.
Here is the code responsible for downloading the file
type Manager struct {
    limit int
}

func NewManager(limit int) *Manager {
    return &Manager{
        limit: limit,
    }
}

func (m *Manager) DownloadBody(url string) ([]byte, error) {
    // First we need to determine the filesize
    body := make([]byte, 0)
    response, err := http.Head(url) // We perform a Head request to get header information
    if response.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("received code %d", response.StatusCode)
    }
    if err != nil {
        return nil, err
    }
    maxConnections := m.limit // Number of maximum concurrent co routines
    bodySize, _ := strconv.Atoi(response.Header.Get("Content-Length"))
    bufferSize := (bodySize) / (maxConnections)
    diff := bodySize % maxConnections
    read := 0
    for i := 0; i < maxConnections; i++ {
        min := bufferSize * i
        max := bufferSize * (i + 1)
        if i == maxConnections-1 {
            max += diff // Check to see if we have any leftover data to retrieve for the last request
        }
        req, _ := http.NewRequest("GET", url, nil)
        req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", min, max))
        res, e := http.DefaultClient.Do(req)
        if e != nil {
            return body, e
        }
        log.Printf("Index:%d . Range:bytes=%d-%d", i, min, max)
        data, e := ioutil.ReadAll(res.Body)
        res.Body.Close()
        if e != nil {
            return body, e
        }
        log.Println("Data for request: ", len(data))
        read = read + len(data)
        body = append(body, data...)
    }
    log.Println("File size:", bodySize, "Downloaded size:", len(body), " Actual read:", read)
    return body, nil
}
I also noticed that the bigger the limit I set, the bigger the difference between the original file's Content-Length and the combined size of all the request bodies.
Here is my main.go
func main() {
    imgUrl := "https://media.wired.com/photos/5a593a7ff11e325008172bc2/16:9/w_2400,h_1350,c_limit/pulsar-831502910.jpg"
    maxConnections := 4
    manager := lib.NewManager(maxConnections)
    data, e := manager.DownloadBody(imgUrl)
    if e != nil {
        log.Fatalln(e)
    }
    ioutil.WriteFile("foo.jpg", data, 0777)
}
Note: for the time being I am not interested in making the code concurrent.
Any ideas what I could be missing?
Note: I have confirmed that the server returns a 206 Partial Content response using the curl command below:
curl -I https://media.wired.com/photos/5a593a7ff11e325008172bc2/16:9/w_2400,h_1350,c_limit/pulsar-831502910.jpg
Thanks to @mh-cbon I managed to write a simple test that helped me find the solution. Here is the fixed code:
for i := 0; i < maxConnections; i++ {
    min := bufferSize * i
    if i != 0 {
        min++
    }
    max := bufferSize * (i + 1)
    if i == maxConnections-1 {
        max += diff // Check to see if we have any leftover data to retrieve for the last request
    }
    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", min, max))
    res, e := http.DefaultClient.Do(req)
    if e != nil {
        return body, e
    }
    log.Printf("Index:%d . Range:bytes=%d-%d", i, min, max)
    data, e := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if e != nil {
        return body, e
    }
    log.Println("Data for request: ", len(data))
    read = read + len(data)
    body = append(body, data...)
}
The problem was that I didn't have a correct min value to begin with. So let's say I have the following ranges to download:
0-100
101 - 200
My code would download bytes from 0-100 and then again from 100-200 instead of 101-200
So I made sure on every iteration (except the first one) to increment min by 1 so as not to overlap with the previous range.
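For reference, here is a small helper that is not part of the original code (the name computeRanges is mine); it isolates the same range arithmetic so it can be unit-tested on its own:
// computeRanges mirrors the fixed loop above: it returns the inclusive
// byte range for each request, bumping min by one after the first range
// so consecutive ranges never overlap.
func computeRanges(bodySize, maxConnections int) [][2]int {
    bufferSize := bodySize / maxConnections
    diff := bodySize % maxConnections
    ranges := make([][2]int, 0, maxConnections)
    for i := 0; i < maxConnections; i++ {
        min := bufferSize * i
        if i != 0 {
            min++
        }
        max := bufferSize * (i + 1)
        if i == maxConnections-1 {
            max += diff
        }
        ranges = append(ranges, [2]int{min, max})
    }
    return ranges
}
For example, computeRanges(200, 2) yields [[0 100] [101 200]], matching the ranges listed above.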
Here is a simple test I managed to put together from the docs provided in the comments:
func TestManager_DownloadBody(t *testing.T) {
    ts := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
        http.ServeContent(writer, request, "hey", time.Now(), bytes.NewReader([]byte(`hello world!!!!`)))
    }))
    defer ts.Close()

    m := NewManager(4)
    data, err := m.DownloadBody(ts.URL)
    if err != nil {
        t.Errorf("%s", err)
    }
    if string(data) != "hello world!!!!" {
        t.Errorf("Expected hello world!!!! . received : [%s]", data)
    }
}
Sure, there are more tests to be written, but it is a good start.
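As a possible next step (not part of the original answer), the same httptest server can be reused in a table-driven test that varies the connection limit, since the bug only showed up once ranges started to overlap:
func TestManager_DownloadBody_Limits(t *testing.T) {
    content := []byte("the quick brown fox jumps over the lazy dog")
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        http.ServeContent(w, r, "content.txt", time.Now(), bytes.NewReader(content))
    }))
    defer ts.Close()

    // Whatever the limit, the reassembled body must equal the served content.
    for limit := 1; limit <= 6; limit++ {
        m := NewManager(limit)
        data, err := m.DownloadBody(ts.URL)
        if err != nil {
            t.Fatalf("limit %d: %s", limit, err)
        }
        if string(data) != string(content) {
            t.Errorf("limit %d: expected %q, received %q", limit, content, data)
        }
    }
}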

Flexible date/time parsing in Go (Adding default values in parsing)

Further to this question, I want to parse a date/time passed on the command line to a Go program. At the moment, I use the flag package to populate a string variable ts and then the following code:
if ts == "" {
    config.Until = time.Now()
} else {
    const layout = "2006-01-02T15:04:05"
    if config.Until, err = time.Parse(layout, ts); err != nil {
        log.Errorf("Could not parse %s as a time string: %s. Using current date/time instead.", ts, err.Error())
        config.Until = time.Now()
    }
}
This works OK, provided the user passes exactly the right format - e.g. 2019-05-20T09:07:33 or some such.
However, what I would like, if possible, is the flexibility to pass e.g. 2019-05-20T09:07 or 2019-05-20T09 or maybe even 2019-05-20 and have the hours, minutes and seconds default to 0 where appropriate.
Is there a sane¹ way to do this?
¹ not requiring me to essentially write my own parser
UPDATE
I've kind of got a solution to this, although it's not particularly elegant, it does appear to work for most of the cases I am likely to encounter.
package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02T15:04:05"
    var l string
    var input string
    for _, input = range []string{"2019-05-30", "2019-05-30T16", "2019-05-30T16:04", "2019-05-30T16:04:34",
        "This won't work", "This is extravagantly long and won't work either"} {
        if len(input) < len(layout) {
            l = layout[:len(input)]
        } else {
            l = layout
        }
        if d, err := time.Parse(l, input); err != nil {
            fmt.Printf("Error %s\n", err.Error())
        } else {
            fmt.Printf("Layout %-20s gives time %v\n", l, d)
        }
    }
}
Just try each format, until one works. If none work, return an error.
var formats = []string{"2006-01-02T15:04:05", "2006-01-02", ...}

func parseTime(input string) (time.Time, error) {
    for _, format := range formats {
        t, err := time.Parse(format, input)
        if err == nil {
            return t, nil
        }
    }
    return time.Time{}, errors.New("Unrecognized time format")
}
I think this library is what you are looking for https://github.com/araddon/dateparse
Parse many date strings without knowing the format in advance. It uses a scanner to read bytes and a state machine to find the format.
t, err := dateparse.ParseAny("3/1/2014")
In the specific scenario you describe, you could check the length of the input datestamp string and pad the end of it with zeros so that it corresponds to your layout. So basically you could append as much of the string "T00:00:00" (counting from the end) to the input as is missing in length compared to the layout format string.
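A minimal sketch of that idea (the helper name padTimestamp is mine, not from the question):
// padTimestamp fills in whatever the input is missing, taken from the end of
// "T00:00:00", so hours, minutes and seconds default to zero before parsing.
func padTimestamp(input string) (time.Time, error) {
    const layout = "2006-01-02T15:04:05"
    const suffix = "T00:00:00"
    if missing := len(layout) - len(input); missing > 0 && missing <= len(suffix) {
        input += suffix[len(suffix)-missing:]
    }
    return time.Parse(layout, input)
}
With that, "2019-05-20T09" becomes "2019-05-20T09:00:00" before parsing, and a full timestamp passes through unchanged.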

Is there a better way to parse this Map?

I'm fairly new to Go. In the actual code I'm writing I plan to read from a file containing environment variables, e.g. API_KEY=XYZ, which means I can keep them out of version control. The solution below 'works', but I feel like there is probably a better way of doing it.
The end goal is to be able to access the elements from the file like so: m["API_KEY"], and that would give XYZ. This may even already exist and I'm re-inventing the wheel; I saw Go has environment variables, but that didn't seem to be what I was after specifically.
So any help is appreciated.
Playground
Code:
package main

import (
    "fmt"
    "strings"
)

var m = make(map[string]string)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    arr := strings.Split(text, "\n")
    for _, value := range arr {
        tmp := strings.Split(value, "=")
        m[strings.TrimSpace(tmp[0])] = strings.TrimSpace(tmp[1])
    }
    fmt.Println(m)
}
First, I would recommend to read this related question: How to handle configuration in Go
Next, I would really consider storing your configuration in another format, because what you propose isn't a standard. It's close to Java's property file format (.properties), but even property files may contain Unicode escape sequences, and thus your code is not a valid .properties parser, as it doesn't handle Unicode sequences at all.
Instead I would recommend using JSON, which you can easily parse with Go or any other language; there are many tools for editing JSON texts, and it is still human-friendly.
Going with the JSON format, decoding it into a map is just one function call: json.Unmarshal(). It could look like this:
text := `{"Var1":"Value1", "Var2":"Value2", "Var3":"Value3"}`

var m map[string]string
if err := json.Unmarshal([]byte(text), &m); err != nil {
    fmt.Println("Invalid config file:", err)
    return
}
fmt.Println(m)
Output (try it on the Go Playground):
map[Var1:Value1 Var2:Value2 Var3:Value3]
The json package will handle formatting and escaping for you, so you don't have to worry about any of that. It will also detect and report errors for you. JSON is also more flexible: your config may contain numbers, texts, arrays, etc. All of that comes for "free" just because you chose the JSON format.
Another popular format for configuration is YAML, but the Go standard library does not include a YAML parser. See Go implementation github.com/go-yaml/yaml.
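For completeness, a minimal sketch of the YAML variant, assuming the gopkg.in/yaml.v3 module (not part of the standard library):
package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

func main() {
    text := `
Var1: Value1
Var2: Value2
Var3: Value3
`
    var m map[string]string
    if err := yaml.Unmarshal([]byte(text), &m); err != nil {
        fmt.Println("Invalid config file:", err)
        return
    }
    fmt.Println(m) // map[Var1:Value1 Var2:Value2 Var3:Value3]
}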
If you don't want to change your format, then I would just use the code you posted, because it does exactly what you want: process the input line by line and parse a name = value pair from each line, and it does so in a clear and obvious way. Using a CSV (or any other) reader for this purpose is a bad fit, because such readers hide what's under the hood (they intentionally and rightfully hide format-specific details and transformations). A CSV reader is a CSV reader first; even if you change the comma symbol, it will interpret certain escape sequences and may give you different data than what you see in a plain text editor. This is unintended behavior from your point of view, but hey, your input is not in CSV format, and yet you asked a reader to interpret it as CSV!
One improvement I would add to your solution is the use of bufio.Scanner. It can be used to read an input line-by-line, and it handles different styles of newline sequences. It could look like this:
text := `Var1=Value1
Var2=Value2
Var3=Value3`

scanner := bufio.NewScanner(strings.NewReader(text))
m := map[string]string{}
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    fmt.Println("Error encountered:", err)
}
fmt.Println(m)
Output is the same. Try it on the Go Playground.
Using bufio.Scanner has another advantage: bufio.NewScanner() accepts an io.Reader, the general interface for "all things being a source of bytes". This means if your config is stored in a file, you don't even have to read all the config into the memory, you can just open the file e.g. with os.Open() which returns a value of *os.File which also implements io.Reader, so you may directly pass the *os.File value to bufio.NewScanner() (and so the bufio.Scanner will read from the file and not from an in-memory buffer like in the example above).
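To illustrate that last point, here is a sketch of the file-backed variant ("config.txt" is a placeholder path):
f, err := os.Open("config.txt")
if err != nil {
    fmt.Println("Cannot open config:", err)
    return
}
defer f.Close()

m := map[string]string{}
scanner := bufio.NewScanner(f) // *os.File satisfies io.Reader
for scanner.Scan() {
    parts := strings.Split(scanner.Text(), "=")
    if len(parts) == 2 {
        m[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
    }
}
if err := scanner.Err(); err != nil {
    fmt.Println("Error encountered:", err)
}
fmt.Println(m)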
1. You may read it all with just one function call, r.ReadAll(), using csv.NewReader from encoding/csv, configured with:
r.Comma = '='
r.TrimLeadingSpace = true
The result is [][]string and the input order is preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true

    all, err := r.ReadAll()
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}
output:
[[Var1 Value1] [Var2 Value2] [Var3 Value3]]
2. You may replace ReadAll() with your own version that converts the output into a map, but then the order is not preserved. Try it on The Go Playground:
package main

import (
    "encoding/csv"
    "fmt"
    "io"
    "strings"
)

func main() {
    text := `Var1=Value1
Var2=Value2
Var3=Value3`

    r := csv.NewReader(strings.NewReader(text))
    r.Comma = '='
    r.TrimLeadingSpace = true

    all, err := ReadAll(r)
    if err != nil {
        panic(err)
    }
    fmt.Println(all)
}

func ReadAll(r *csv.Reader) (map[string]string, error) {
    m := make(map[string]string)
    for {
        tmp, err := r.Read()
        if err == io.EOF {
            return m, nil
        }
        if err != nil {
            return nil, err
        }
        m[tmp[0]] = tmp[1]
    }
}
output:
map[Var2:Value2 Var3:Value3 Var1:Value1]

How to discover all package types at runtime?

As far as I'm aware (see here, and here) there is no type discovery mechanism in the reflect package, which expects that you already have an instance of the type or value you want to inspect.
Is there any other way to discover all exported types (especially the structs) in a running go package?
Here's what I wish I had (but it doesn't exist):
import "time"
import "fmt"

func main() {
    var types []reflect.Type
    types = reflect.DiscoverTypes(time)
    fmt.Println(types)
}
The end goal is to be able to discover all the structs of a package that meet certain criteria, then be able to instantiate new instances of those structs.
BTW, a registration function that identifies the types is not a valid approach for my use case.
Whether you think it's a good idea or not, here's why I want this capability (because I know you're going to ask):
I've written a code generation utility that loads go source files and builds an AST to scan for types that embed a specified type. The output of the utility is a set of go test functions based on the discovered types. I invoke this utility using go generate to create the test functions then run go test to execute the generated test functions. Every time the tests change (or a new type is added) I must re-run go generate before re-running go test. This is why a registration function is not a valid option. I'd like to avoid the go generate step but that would require my utility to become a library that is imported by the running package. The library code would need to somehow scan the running namespace during init() for types that embed the expected library type.
In Go 1.5, you can use the new package types and importer to inspect binary and source packages. For example:
package main

import (
    "fmt"
    "go/importer"
)

func main() {
    pkg, err := importer.Default().Import("time")
    if err != nil {
        fmt.Printf("error: %s\n", err.Error())
        return
    }
    for _, declName := range pkg.Scope().Names() {
        fmt.Println(declName)
    }
}
You can use the package go/build to extract all the packages installed. Or you can configure the Lookup importer to inspect binaries outside the environment.
Before 1.5, the only non-hacky way is to use the go/ast package to parse the source code.
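As an illustration of that route, here is a sketch that parses a single source file with go/parser and lists its exported type declarations ("example.go" is a placeholder):
package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
)

func main() {
    fset := token.NewFileSet()
    // "example.go" stands in for a source file of the package you want to inspect.
    f, err := parser.ParseFile(fset, "example.go", nil, 0)
    if err != nil {
        fmt.Println(err)
        return
    }
    // Walk the AST and report every exported type declaration.
    ast.Inspect(f, func(n ast.Node) bool {
        if ts, ok := n.(*ast.TypeSpec); ok && ts.Name.IsExported() {
            fmt.Println(ts.Name.Name)
        }
        return true
    })
}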
(see bottom for 2019 update)
Warning: untested and hacky. Can break whenever a new version of Go is released.
It is possible to get all types the runtime knows of by hacking around Go's runtime a little. Include a small assembly file in your own package, containing:
TEXT yourpackage·typelinks(SB), NOSPLIT, $0-0
    JMP reflect·typelinks(SB)
In yourpackage, declare the function prototype (without body):
func typelinks() []*typeDefDummy
Alongside a type definition:
type typeDefDummy struct {
    _      uintptr    // padding
    _      uint64     // padding
    _      [3]uintptr // padding
    StrPtr *string
}
Then just call typelinks, iterate over the slice and read each StrPtr for the name. Seek those starting with yourpackage. Note that if there are two packages called yourpackage in different paths, this method won't work unambiguously.
can I somehow hook into the reflect package to instantiate new instances of those names?
Yeah, assuming d is a value of type *typeDefDummy (note the asterisk, very important):
t := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&d)))
Now t is a reflect.Type value which you can use to instantiate reflect.Values.
Edit: I tested and executed this code successfully and have uploaded it as a gist.
Adjust package names and include paths as necessary.
Update 2019
A lot has changed since I originally posted this answer. Here's a short description of how the same can be done with Go 1.11 in 2019.
$GOPATH/src/tl/tl.go
package tl

import (
    "unsafe"
)

func Typelinks() (sections []unsafe.Pointer, offset [][]int32) {
    return typelinks()
}

func typelinks() (sections []unsafe.Pointer, offset [][]int32)

func Add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer {
    return add(p, x, whySafe)
}

func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer
$GOPATH/src/tl/tl.s
TEXT tl·typelinks(SB), $0-0
    JMP reflect·typelinks(SB)
TEXT tl·add(SB), $0-0
    JMP reflect·add(SB)
main.go
package main

import (
    "fmt"
    "reflect"
    "tl"
    "unsafe"
)

func main() {
    sections, offsets := tl.Typelinks()
    for i, base := range sections {
        for _, offset := range offsets[i] {
            typeAddr := tl.Add(base, uintptr(offset), "")
            typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
            fmt.Println(typ)
        }
    }
}
Happy hacking!
Update 2022 with Go 1.18
With Go 1.18 the accepted answer doesn't work anymore, but I could adapt it to use go:linkname. Using this directive and the unsafe package these internal functions can now be accessed without any extra assembly code.
package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

//go:linkname typelinks reflect.typelinks
func typelinks() (sections []unsafe.Pointer, offset [][]int32)

//go:linkname add reflect.add
func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer

func main() {
    sections, offsets := typelinks()
    for i, base := range sections {
        for _, offset := range offsets[i] {
            typeAddr := add(base, uintptr(offset), "")
            typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
            fmt.Println(typ)
        }
    }
}
Unfortunately, I don't think this is possible. Packages are not "actionable" in Go: you can't "call a function" on a package. You can't call a function on a type either, but you can call reflect.TypeOf on an instance of the type and get a reflect.Type, which is a runtime abstraction of the type. There just isn't such a mechanism for packages; there is no reflect.Package.
With that said, you could file an issue about the absence of (and practicality of adding) reflect.PackageOf etc.
Thanks @thwd and @icio; following your direction it still works on 1.13.6 today.
Following your approach, the tl.s becomes:
TEXT ·typelinks(SB), $0-0
    JMP reflect·typelinks(SB)
Yes, there is no package name and no "add" function in it.
Then, following @icio's approach, change the "add" function to:
func add(p unsafe.Pointer, x uintptr, whySafe string) unsafe.Pointer {
    return unsafe.Pointer(uintptr(p) + x)
}
Then everything works. :)
Version for Go 1.16 (tested with go version go1.16.7 linux/amd64).
This can only generate code and strings; you have to paste the generated code somewhere and then compile it again.
It works when only the sources are available.
import (
    "fmt"
    "go/ast"
    "golang.org/x/tools/go/packages"
    "reflect"
    "time"
    "unicode"
)

func printTypes() {
    config := &packages.Config{
        Mode: packages.NeedSyntax,
    }
    pkgs, _ := packages.Load(config, "package_name")
    pkg := pkgs[0]
    for _, s := range pkg.Syntax {
        for n, o := range s.Scope.Objects {
            if o.Kind == ast.Typ {
                // check if type is exported (only needed for non-local types)
                if unicode.IsUpper([]rune(n)[0]) {
                    // note that reflect.ValueOf(*new(%s)) won't work with interfaces
                    fmt.Printf("ProcessType(new(package_name.%s)),\n", n)
                }
            }
        }
    }
}
full example of possible use case: https://pastebin.com/ut0zNEAc (doesn't work in online repls, but works locally)
Since Go 1.11, the DWARF debugging symbols include runtime type information (the DW_AT_go_runtime_type attribute), so you can obtain the runtime type through that address. See gort for more on this.
package main

import (
    "debug/dwarf"
    "fmt"
    "log"
    "os"
    "reflect"
    "runtime"
    "unsafe"

    "github.com/go-delve/delve/pkg/dwarf/godwarf"
    "github.com/go-delve/delve/pkg/proc"
)

func main() {
    path, err := os.Executable()
    if err != nil {
        log.Fatalln(err)
    }
    bi := proc.NewBinaryInfo(runtime.GOOS, runtime.GOARCH)
    err = bi.LoadBinaryInfo(path, 0, nil)
    if err != nil {
        log.Fatalln(err)
    }
    mds, err := loadModuleData(bi, new(localMemory))
    if err != nil {
        log.Fatalln(err)
    }
    types, err := bi.Types()
    if err != nil {
        log.Fatalln(err)
    }
    for _, name := range types {
        dwarfType, err := findType(bi, name)
        if err != nil {
            continue
        }
        typeAddr, err := dwarfToRuntimeType(bi, mds, dwarfType, name)
        if err != nil {
            continue
        }
        typ := reflect.TypeOf(*(*interface{})(unsafe.Pointer(&typeAddr)))
        log.Printf("load type name:%s type:%s\n", name, typ)
    }
}

// delve counterpart to runtime.moduledata
type moduleData struct {
    text, etext   uint64
    types, etypes uint64
    typemapVar    *proc.Variable
}

//go:linkname findType github.com/go-delve/delve/pkg/proc.(*BinaryInfo).findType
func findType(bi *proc.BinaryInfo, name string) (godwarf.Type, error)

//go:linkname loadModuleData github.com/go-delve/delve/pkg/proc.loadModuleData
func loadModuleData(bi *proc.BinaryInfo, mem proc.MemoryReadWriter) ([]moduleData, error)

//go:linkname imageToModuleData github.com/go-delve/delve/pkg/proc.(*BinaryInfo).imageToModuleData
func imageToModuleData(bi *proc.BinaryInfo, image *proc.Image, mds []moduleData) *moduleData

type localMemory int

func (mem *localMemory) ReadMemory(data []byte, addr uint64) (int, error) {
    buf := *(*[]byte)(unsafe.Pointer(&reflect.SliceHeader{Data: uintptr(addr), Len: len(data), Cap: len(data)}))
    copy(data, buf)
    return len(data), nil
}

func (mem *localMemory) WriteMemory(addr uint64, data []byte) (int, error) {
    return 0, fmt.Errorf("not support")
}

func dwarfToRuntimeType(bi *proc.BinaryInfo, mds []moduleData, typ godwarf.Type, name string) (typeAddr uint64, err error) {
    if typ.Common().Index >= len(bi.Images) {
        return 0, fmt.Errorf("could not find image for type %s", name)
    }
    img := bi.Images[typ.Common().Index]
    rdr := img.DwarfReader()
    rdr.Seek(typ.Common().Offset)
    e, err := rdr.Next()
    if err != nil {
        return 0, fmt.Errorf("could not find dwarf entry for type:%s err:%s", name, err)
    }
    entryName, ok := e.Val(dwarf.AttrName).(string)
    if !ok || entryName != name {
        return 0, fmt.Errorf("could not find name for type:%s entry:%s", name, entryName)
    }
    off, ok := e.Val(godwarf.AttrGoRuntimeType).(uint64)
    if !ok || off == 0 {
        return 0, fmt.Errorf("could not find runtime type for type:%s", name)
    }
    md := imageToModuleData(bi, img, mds)
    if md == nil {
        return 0, fmt.Errorf("could not find module data for type %s", name)
    }
    typeAddr = md.types + off
    if typeAddr < md.types || typeAddr >= md.etypes {
        return off, nil
    }
    return typeAddr, nil
}
No, there is not.
If you want to 'know' your types, you'll have to register them.
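A minimal sketch of such a registration approach (all names here are illustrative, not from the question):
package registry

import "reflect"

var types = map[string]reflect.Type{}

// Register records the dynamic type of v under its bare type name.
func Register(v interface{}) {
    t := reflect.TypeOf(v)
    types[t.Name()] = t
}

// New returns a pointer to a fresh zero value of the named type,
// or nil if the name was never registered.
func New(name string) interface{} {
    t, ok := types[name]
    if !ok {
        return nil
    }
    return reflect.New(t).Interface()
}
Each package that wants its types discoverable calls Register(MyType{}) in an init() function, and callers can later do v := registry.New("MyType").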

Get file inode in Go

How can I get a file inode in Go?
I already can print it like this:
file := "/tmp/system.log"
fileinfo, _ := os.Stat(file)
fmt.Println(fileinfo.Sys())
fmt.Println(fileinfo)
Looking at the Go implementation it is clearly a matter of using some stat call, but I still did not manage to find the structure definition behind Sys() for a Unix system.
How can I get the inode value directly?
Which file(s) in the source code define the structure returned by Sys()?
You can use a type assertion to get the underlying syscall.Stat_t from the fileinfo like this
package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    file := "/etc/passwd"
    fileinfo, _ := os.Stat(file)
    fmt.Printf("fileinfo.Sys() = %#v\n", fileinfo.Sys())
    fmt.Printf("fileinfo = %#v\n", fileinfo)

    stat, ok := fileinfo.Sys().(*syscall.Stat_t)
    if !ok {
        fmt.Printf("Not a syscall.Stat_t")
        return
    }
    fmt.Printf("stat = %#v\n", stat)
    fmt.Printf("stat.Ino = %#v\n", stat.Ino)
}
You can do the following:
file := "/tmp/system.log"
var stat syscall.Stat_t
if err := syscall.Stat(file, &stat); err != nil {
    panic(err)
}
fmt.Println(stat.Ino)
Where stat.Ino is the inode you are looking for.
Package syscall is now deprecated. See https://pkg.go.dev/golang.org/x/sys instead.
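A sketch of the same lookup via golang.org/x/sys/unix (Unix-only):
package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    var stat unix.Stat_t
    if err := unix.Stat("/tmp/system.log", &stat); err != nil {
        panic(err)
    }
    fmt.Println(stat.Ino) // the inode number
}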
