I'm trying to send an int64 over a TCP connection in Go, but my receiver gets a different number than the one I sent. What is the proper way to accomplish this?
//Buffer on both client and server
buffer := make([]byte, 1024)
//Sender
fileInfo, error := os.Stat(fileName)
if error != nil {
    fmt.Println("Error opening file")
}
var fSize int = int(fileInfo.Size())
connection.Write([]byte(string(fSize)))
//Receiver
connection.Read(buffer)
fileSize := new(big.Int).SetBytes(bytes.Trim(buffer, "\x00")).Int64()
if err != nil {
    fmt.Println("not a valid filesize")
    fileSize = 0
}
Using binary.Write / binary.Read:
//sender
err := binary.Write(connection, binary.LittleEndian, fileInfo.Size())
if err != nil {
    fmt.Println("err:", err)
}

//receiver
var size int64
err := binary.Read(connection, binary.LittleEndian, &size)
if err != nil {
    fmt.Println("err:", err)
}
[]byte(string(fSize)) doesn't do what you think it does: it treats the number as a Unicode code point, it doesn't return the string representation of it.
If you want the string representation of a number, use strconv.Itoa; if you want the binary representation, then use:
num := make([]byte, 8) // or 4 for int32 or 2 for int16
binary.LittleEndian.PutUint64(num, 1<<64-1)
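For completeness, a minimal sketch of the matching decode on the receiving side, assuming the 8-byte little-endian encoding above and the connection from the question:
num := make([]byte, 8)
if _, err := io.ReadFull(connection, num); err != nil {
    // handle error
}
size := int64(binary.LittleEndian.Uint64(num)) // the value that was sent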
Use binary.BigEndian or binary.LittleEndian to encode the integer:
var size int64

// Send
var buf [8]byte
binary.BigEndian.PutUint64(buf[:], uint64(size))
_, err := w.Write(buf[:])

// Receive
var buf [8]byte
_, err := io.ReadFull(r, buf[:])
if err != nil {
    // handle error
}
size = int64(binary.BigEndian.Uint64(buf[:]))
You can also use binary.Read and binary.Write. Your application code will be a little shorter, at the cost of type switches and other goo inside these functions.
A couple of points about the code in the question. The conversion
string(fSize)
returns the UTF-8 representation of the rune fSize. It does not return a decimal or binary encoding of the value. Use the strconv package to convert a numeric value to a decimal representation. Use the above-mentioned binary package to convert to a binary representation.
The sequence
connection.Read(buffer)
buffer = bytes.Trim(buffer, "\x00")
trims away real data if the data happens to include 0 bytes at the beginning or end. Read returns the number of bytes read. Use that length to slice the buffer:
n, err := connection.Read(buffer)
buffer = buffer[:n]
You can't use string() to get the decimal representation of an int; you need to use the strconv package:
connection.Write([]byte(strconv.FormatInt(fileInfo.Size(), 10)))
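A minimal sketch of the matching receive for this decimal-string approach, assuming the 1024-byte buffer from the question and that nothing else is written on the connection before the size:
n, err := connection.Read(buffer)
if err != nil {
    // handle error
}
fileSize, err := strconv.ParseInt(string(buffer[:n]), 10, 64)
if err != nil {
    // handle error
}
Note that a fixed-width binary encoding (as in the answers above) avoids having to delimit the decimal string from any data that follows it.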
Related
I couldn't find anything helpful online on this one.
I am writing a REST API, and I want to log the size of the request body in bytes for metrics. The Go net/http API does not provide that directly. http.Request does have a ContentLength field, but it can be zero or the client might send false data.
Is there a way to get that at the middleware level? The brute-force method would be to read the full body and check the size, but if I do that in the middleware, the handler will not have access to the body because it would already have been read and closed.
Why do you want a middleware here?
The simple way is b, err := io.Copy(anyWriterOrMultiwriter, r.Body); b is the total content length of the request when err == nil, and the body ends up wherever you copied it, so you can still use it as you want. If you only need the count, b, err := io.Copy(ioutil.Discard, r.Body) works as well.
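A minimal sketch of that idea inside a handler, assuming the count is all you need (io.Copy consumes the body, so this only fits when the handler does not need to read it again afterwards):
func handler(w http.ResponseWriter, r *http.Request) {
    // Count the body bytes by copying them into a sink.
    n, err := io.Copy(ioutil.Discard, r.Body)
    if err != nil {
        http.Error(w, "error reading body", http.StatusBadRequest)
        return
    }
    log.Printf("request body: %d bytes", n)
}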
You could write a custom ReadCloser that proxies an existing one and counts bytes as it goes. Something like:
type LengthReader struct {
    Source io.ReadCloser
    Length int
}

func (r *LengthReader) Read(b []byte) (int, error) {
    n, err := r.Source.Read(b)
    r.Length += n
    return n, err
}

func (r *LengthReader) Close() error {
    var buf [32]byte
    var n int
    var err error
    for err == nil {
        n, err = r.Source.Read(buf[:])
        r.Length += n
    }
    closeerr := r.Source.Close()
    if err != nil && err != io.EOF {
        return err
    }
    return closeerr
}
This will count bytes as you read them from the stream, and when closed it will consume and count all remaining unread bytes first. After you're finished with the stream, you can then access the length.
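A minimal sketch of wiring this into middleware; the withBodyLength name and the log line are illustrative, not part of the original answer:
func withBodyLength(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        lr := &LengthReader{Source: r.Body}
        r.Body = lr // the handler reads the body through the counter
        next.ServeHTTP(w, r)
        lr.Close() // consume and count anything the handler left unread
        log.Printf("request body: %d bytes", lr.Length)
    })
}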
Option 1
Use a TeeReader; this is scalable. TeeReader returns a Reader that writes everything it reads from the body into a second destination, which you can then use to calculate the size.
maxmem := 4096

var buf bytes.Buffer
// comment this line out if you want to disable gathering metrics
resp.Body = ioutil.NopCloser(io.TeeReader(resp.Body, &buf))

readsize := func(r io.Reader) int {
    b := make([]byte, maxmem)
    var size int
    for {
        read, err := r.Read(b)
        size += read
        if err != nil { // io.EOF once buf is drained
            break
        }
    }
    return size
}

// after the body has been consumed, buf holds a copy of it:
log.Printf("Size is %d", readsize(&buf))
Option 2: the unscalable way (original answer)
You can just read the body, calculate the size, then unmarshal into struct, so that it becomes:
b, _ := ioutil.ReadAll(r.Body) // the error is ignored here; check it in your app
size := len(b)
if err := json.Unmarshal(b, &input); err != nil {
    s.BadReq(w, errors.New("error reading body"))
    return
}
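If the measurement has to happen in middleware rather than in the handler, the body can be read, measured, and then put back so the handler still sees it. A sketch, assuming the usual next http.Handler wiring:
b, err := ioutil.ReadAll(r.Body)
if err != nil {
    http.Error(w, "error reading body", http.StatusBadRequest)
    return
}
log.Printf("request body: %d bytes", len(b))
r.Body = ioutil.NopCloser(bytes.NewReader(b)) // restore the body for the handler
next.ServeHTTP(w, r)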
How do you copy the Item struct and all pointers to a new struct?
type Item struct {
    A []*ASet `json:"a,omitempty"`
    B []*BSet `json:"b,omitempty"`
    C []*CSet `json:"c,omitempty"`
}

type ASet struct {
    UID   string   `json:"uid,omitempty"`
    Items []*ItemA `json:"member,omitempty"`
}

type ItemA struct {
    UID     string     `json:"uid,omitempty"`
    Portset []*PortSet `json:"portset,omitempty"`
}

type PortSet struct {
    UID   string  `json:"uid,omitempty"`
    Ports []*Port `json:"member,omitempty"`
}

type Port struct {
    UID  string `json:"uid,omitempty"`
    Port int    `json:"port,omitempty"`
}
I don't want the new struct to reference the old struct.
What you want is essentially a deep copy which is not supported by the standard library.
Your choices:
Do the copy "manually", i.e. create a new struct and copy the fields, where pointers and slices/maps/channels/etc. must be duplicated manually, in a recursive manner (a sketch of this manual approach appears at the end of this answer).
This is easiest done by assigning your struct to another variable, which copies all fields, so you essentially only need to take care of pointers/maps/slices etc. (but recursively).
Use an external library, e.g. github.com/mohae/deepcopy, github.com/ulule/deepcopier or github.com/mitchellh/copystructure
Marshal your struct to some format (e.g. JSON), then unmarshal into another variable.
The last option could look like this:
var i1 Item

data, err := json.Marshal(i1)
if err != nil {
    panic(err)
}

var i2 Item
if err := json.Unmarshal(data, &i2); err != nil {
    panic(err)
}
// i2 holds a deep copy of i1
Note that marshaling and unmarshaling isn't particularly efficient, but it is easy and compact. Also note that this might not handle recursive data structures well; it might even hang or panic (e.g. if a field points back to the containing struct), but handling recursive structures is a problem for all of these solutions. Also note that this won't clone unexported fields.
The good thing about this marshaling / unmarshaling is that you can easily create a helper function to deep-copy "any" values:
func deepCopy(v interface{}) (interface{}, error) {
    data, err := json.Marshal(v)
    if err != nil {
        return nil, err
    }
    vptr := reflect.New(reflect.TypeOf(v))
    err = json.Unmarshal(data, vptr.Interface())
    if err != nil {
        return nil, err
    }
    return vptr.Elem().Interface(), err
}
Testing it:
p1 := image.Point{X: 1, Y: 2}
fmt.Printf("p1 %T %+v\n", p1, p1)

p2, err := deepCopy(p1)
if err != nil {
    panic(err)
}

p1.X = 11
fmt.Printf("p1 %T %+v\n", p1, p1)
fmt.Printf("p2 %T %+v\n", p2, p2)
Output (try it on the Go Playground):
p1 image.Point (1,2)
p1 image.Point (11,2)
p2 image.Point (1,2)
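For reference, the manual approach (the first option above) could look roughly like this for two of the types from the question; the helper names are illustrative, and the other levels follow the same pattern:
func copyPort(p *Port) *Port {
    if p == nil {
        return nil
    }
    cp := *p // UID and Port are plain values, so a field-by-field copy is enough here
    return &cp
}

func copyPortSet(ps *PortSet) *PortSet {
    if ps == nil {
        return nil
    }
    cp := &PortSet{UID: ps.UID}
    cp.Ports = make([]*Port, len(ps.Ports))
    for i, p := range ps.Ports {
        cp.Ports[i] = copyPort(p) // duplicate each pointed-to value
    }
    return cp
}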
I'm trying to build a small website, and I use html/template to create dynamic pages. One thing on the pages is a list of URLs, and inside those URLs I sometimes need percent-encoding for special characters like ô (%C3%B4).
When I try to pass these variables into a page using html/template, I get the following as a result: %!c(MISSING)3%!b(MISSING)4. I have no clue what is wrong here.
type Search_list struct {
    Search_name  string
    Search_url   string
    Search_price float64
}

func generateSearchPage(language int, q string) (string, error) {
    /* omitted: fetching data from Elasticsearch */
    sl := []Search_list{}
    var urle *url.URL

    // looping through ES results and putting them in a custom list
    for _, res := range data.Hits.Hits {
        // Encode URL
        var err error
        urle, err = url.Parse(res.Source.URL)
        if err != nil {
            continue
            // TODO: add log
        }

        // I've tried already the following:
        fmt.Println(res.Source.URL)                    // ô
        fmt.Println(url.QueryUnescape(res.Source.URL)) // ô
        fmt.Println(urle.String())                     // %C3%B4

        u, _ := url.QueryUnescape(res.Source.URL)
        sl = append(sl, Search_list{res.Source.Name, u, res.Source.Price})
    }

    var buffer bytes.Buffer
    t := template.New("Index template")
    t, err = t.Parse(page_layout[language][PageTypeSearch])
    if err != nil {
        panic(err)
    }

    err = t.Execute(&buffer, Search_data{
        Title:        translations[language]["homepage"],
        Page_title:   WebSiteName,
        Listed_items: sl,
    })
    if err != nil {
        panic(err)
    }

    return buffer.String(), nil // %!c(MISSING)3%!b(MISSING)4
}
@Moshe Revah Thanks for the help; in the meantime I found the error.
Later in the code I send my generated page to the HTTP client with
fmt.Fprintf(w, page) // here was the error, because of the % symbols in the page
I just changed it to
fmt.Fprint(w, page)
and it works perfectly.
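For reference, a minimal sketch of why the original call mangled the output: Fprintf treats its second argument as a format string, so the % sequences inside the page are interpreted as verbs with no arguments; Fprint (or Fprintf with an explicit %s) writes the page untouched. The sample string here is illustrative:
package main

import (
    "fmt"
    "os"
)

func main() {
    page := "price: 42%c3%b4" // generated page text that happens to contain '%' sequences
    fmt.Fprintf(os.Stdout, page) // %c and %b are read as verbs: "price: 42%!c(MISSING)3%!b(MISSING)4"
    fmt.Fprintln(os.Stdout)
    fmt.Fprint(os.Stdout, page) // printed verbatim
    fmt.Fprintln(os.Stdout)
    fmt.Fprintf(os.Stdout, "%s\n", page) // also safe: page is passed as data, not as the format string
}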
My code:
func getSourceUrl(url string) (string, error) {
    resp, err := http.Get(url)
    if err != nil {
        fmt.Println("Error getSourceUrl: ")
        return "", err
    }
    defer resp.Body.Close()
    body := resp.Body
    // time = 0
    sourcePage, err := ioutil.ReadAll(body)
    // time > 5 minutes
    return string(sourcePage), err
}
I have a link to a website whose source is around 100,000 lines. Using ioutil.ReadAll takes a very long time (more than 5 minutes for one link). Is there a way to get the website source faster? Thank you!
@Minato try this code and play with the M throttling parameter. If you get too many errors, reduce M.
package main
import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "net/http"
    "runtime"
    "time"
)
// Token is an empty struct for signalling
type Token struct{}
// N files to get
var N = 301 // at the source 00000 - 00300
// M max go routines
var M = runtime.NumCPU() * 16
// Throttle to max M go routines
var Throttle = make(chan Token, M)
// DoneStatus is used to signal the end of one transfer
type DoneStatus struct {
    length   int
    sequence string
    duration float64
    err      error
}
// ExitOK is simple exit counter
var ExitOK = make(chan DoneStatus)
// TotalBytes read
var TotalBytes = 0
// TotalErrors captured
var TotalErrors = 0
// URLTempl is the template for URL construction
var URLTempl = "https://virusshare.com/hashes/VirusShare_%05d.md5"
func close(c io.Closer) {
    err := c.Close()
    if err != nil {
        log.Fatal(err)
    }
}
func main() {
    log.Printf("start main. M=%d\n", M)
    startTime := time.Now()

    for i := 0; i < N; i++ {
        go func(idx int) {
            // slow ramp-up: fire getData after idx seconds
            time.Sleep(time.Duration(idx) * time.Second)
            url := fmt.Sprintf(URLTempl, idx)
            _, _ = getData(url) // errors captured as data
        }(i)
    }

    // Count N byte-count signals
    for i := 0; i < N; i++ {
        status := <-ExitOK
        TotalBytes += status.length
        if status.err != nil {
            TotalErrors++
            log.Printf("[%d] : %v\n", i, status.err)
            continue
        }
        log.Printf("[%d] file %s, %.1f MByte, %.1f min, %.1f KByte/sec\n",
            i, status.sequence,
            float64(status.length)/(1024*1024),
            status.duration/60,
            float64(status.length)/(1024)/status.duration)
    }

    // totals
    duration := time.Since(startTime).Seconds()
    log.Printf("Totals: %.1f MByte, %.1f min, %.1f KByte/sec\n",
        float64(TotalBytes)/(1024*1024),
        duration/60,
        float64(TotalBytes)/(1024)/duration)

    // using Fatal to verify only one goroutine is running at the end
    log.Fatalf("TotalErrors: %d\n", TotalErrors)
}
func getData(url string) (data []byte, err error) {
    var startTime time.Time

    defer func() {
        // release token
        <-Throttle
        // signal end of go routine, with some status info
        ExitOK <- DoneStatus{
            len(data),
            url[41:46],
            time.Since(startTime).Seconds(),
            err,
        }
    }()

    // acquire one of M tokens
    Throttle <- Token{}
    log.Printf("Started file: %s\n", url[41:46])
    startTime = time.Now()

    resp, err := http.Get(url)
    if err != nil {
        return
    }
    defer close(resp.Body)

    data, err = ioutil.ReadAll(resp.Body)
    if err != nil {
        return
    }
    return
}
Per-transfer variation is about 10-40 KByte/sec, and as a final total for all 301 files I get 928 MByte in 11.1 min at 1425 KByte/sec. I believe you should be able to get similar results.
Outside the scope of the question, but maybe useful: also give http://www.dslreports.com/speedtest/ a try; go to settings, select a bunch of US servers for testing, and set the duration to 60 sec. This will tell you what your actual effective total rate to the US is.
Good luck!
You could read the response a section at a time, something like:
responseSection := make([]byte, 128)
body.Read(responseSection)
return string(responseSection), err
which reads 128 bytes at a time (you would need to loop and accumulate the sections to get the whole body). However, I would suggest first confirming that the download speed is not what is causing the slow load.
The 5 minutes is probably network time.
That said, you generally would not want to buffer enormous objects in memory.
resp.Body is a Reader.
So you could use io.Copy to copy its contents into a file.
Converting sourcePage into a string is a bad idea as it forces another allocation.
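A minimal sketch of that approach; the function and file names are illustrative:
func saveSource(url, path string) (int64, error) {
    resp, err := http.Get(url)
    if err != nil {
        return 0, err
    }
    defer resp.Body.Close()

    f, err := os.Create(path)
    if err != nil {
        return 0, err
    }
    defer f.Close()

    // Stream the body straight to disk instead of buffering it all in memory.
    return io.Copy(f, resp.Body)
}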
I'm reading a JSON file that contains Unix Epoch dates, but they are strings in the JSON. In Go, can I convert a string in the form "1490846400" into a Go time.Time?
There is no such function in the time package, but it's easy to write:
func stringToTime(s string) (time.Time, error) {
    sec, err := strconv.ParseInt(s, 10, 64)
    if err != nil {
        return time.Time{}, err
    }
    return time.Unix(sec, 0), nil
}
Playground: https://play.golang.org/p/2h0Vd7plgk.
There's nothing wrong or incorrect about the answer provided by @Ainar-G, but likely a better way to do this is with a custom JSON unmarshaler:
type EpochTime time.Time
func (et *EpochTime) UnmarshalJSON(data []byte) error {
    t := strings.Trim(string(data), `"`) // remove quote marks from around the JSON string
    sec, err := strconv.ParseInt(t, 10, 64)
    if err != nil {
        return err
    }
    epochTime := time.Unix(sec, 0)
    *et = EpochTime(epochTime)
    return nil
}
Then in your struct, replace time.Time with EpochTime:
type SomeDocument struct {
    Timestamp EpochTime `json:"time"`
    // other fields
}