Go API with sqlite3 can't DELETE tuple from db

Hello fellow developers.
I am trying to learn Go while building a simple web API that uses sqlite3. I got stuck at the point where I am unable to delete rows from my table by sending a DELETE request from Postman. I am using the code below to delete a row. I have already verified that I have access to the db, and I can also delete rows using the sqlite3 command-line tool. I do not understand what is wrong!
func deleteArticle(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    params := mux.Vars(r) // get any params
    db := connectToDB(dbName)
    defer db.Close()
    _, err := db.Query("DELETE FROM article WHERE id=" + params["id"])
    if err != nil {
        fmt.Fprintf(w, "article couldn't be found in db")
    }
}
Here is the navigation part:
myRouter.HandleFunc("/articles/{id}", deleteArticle).Methods("DELETE")
No matter what I do, I cannot delete an article from the db using Postman.
Thanks bunches.

Thanks to @mkopriva's comments I have learned that:
1.
It is very important that you do not use Query or QueryRow for SQL statements that do not return any rows; for those cases use the Exec method. When you use Query you always have to assign the result to a non-blank identifier, i.e. anything but _, and then invoke the Close method on it once you're done with the result. If you do not do that, your application will leak db connections and will very soon start crashing.
2.
When you want to pass user input (including record IDs) to your queries, you must always use the parameter-reference syntax supported by the SQL dialect and/or driver you are using, and pass the input separately. That means you should never do
Exec("DELETE FROM article WHERE id=" + params["id"])
and instead should always do
Exec("DELETE FROM article WHERE id = ?", params["id"])
If you keep using plain string concatenation instead, your app will be vulnerable to SQL injection attacks.
Based on this information I have changed my code to:
func deleteArticle(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    params := mux.Vars(r) // get any params
    db := connectToDB(dbName)
    defer db.Close()
    fmt.Printf("%q\n", params["id"])
    statement, err := db.Prepare("DELETE FROM article WHERE id = ?")
    if err != nil {
        fmt.Fprintf(w, "article couldn't be found in db")
        return
    }
    defer statement.Close()
    if _, err := statement.Exec(params["id"]); err != nil {
        fmt.Fprintf(w, "article couldn't be deleted from db")
    }
}
This solved my problem. So thank you, @mkopriva.


Why do I receive "too many document writes sent to bulkwriter" when using the 'serverTimestamp' tag?

I wanted to test the batch use case and I'm a bit confused.
Initially I used the Batch() method from the Firestore Go SDK. When I use this method I don't receive any error.
I saw this message in my IDE when using Batch():
Deprecated: The WriteBatch API has been replaced with the transaction and the bulk writer API. For atomic transaction operations, use Transaction. For bulk read and write operations, use BulkWriter.
After reading the message I switched to BulkWriter, but when I use it I receive an error:
too many document writes sent to bulkwriter
When I remove the serverTimestamp there is no error.
The code:
// Storer persists tickets in Firestore.
type Storer struct {
    client *firestore.Client
}

// createTicket contains the data needed to create a Ticket in Firestore.
type createTicket struct {
    Title       string    `firestore:"title"`
    Price       float64   `firestore:"price"`
    DateCreated time.Time `firestore:"dateCreated,serverTimestamp"`
}

func (s *Storer) CreateTicket(ctx context.Context, ticket tixer.Ticket) error {
    bulk := s.client.BulkWriter(ctx)
    ticketRef := s.client.Collection("tickets").Doc(ticket.ID.String())
    _, err := bulk.Create(ticketRef, createTicket{
        Title: ticket.Title,
        Price: ticket.Price,
    })
    if err != nil {
        return err
    }
    bulk.Flush()
    return nil
}

Golang: Integration tests hitting the same db instance even upon db.Close()

I have a test suite that uses an in-memory sqlite instance to run db queries. After adding some tests, I suddenly started to get tons of "UNIQUE constraint failed..." errors in every line that tries to perform an insertion. That makes it seem like all my tests are connecting to, writing to, and reading from the same db instance. Here's how the test db instance is produced
const DBProvider = "sqlite3"

// DBConnection is the connection string to use for testing
const DBConnection = "file::memory:?cache=shared"

// NewMigratedDB returns a new connection to a migrated database
func NewMigratedDB(provider string, connection string, models ...interface{}) (*gorm.DB, error) {
    db, err := gorm.Open(provider, connection)
    if err != nil {
        return nil, err
    }
    db = db.AutoMigrate(models...)
    if db.Error != nil {
        return nil, db.Error
    }
    return db, nil
}
And here's how it's used in tests -
db, err := test.NewMigratedDB(test.DBProvider, test.DBConnection, models...)
if err != nil {
    t.Fatal(err)
}
defer db.Close()
// read/write anything
How can I make each call to NewMigratedDB produce a different instance of SQLite that only listens to queries from the unit test that instantiated it?
First of all, it would be helpful to see a full example of how you're using NewMigratedDB in a test. Here are my suggestions based on what I can see (and what I can't).
Setup a Named In-memory Database
Each of your unit tests appears to be using the same in-memory copy of the SQLite database; if your tests run in parallel, they almost certainly share the same database. Use the following syntax to create separate named instances of the in-memory database, replacing test1 with a unique name for each test that runs in parallel (see the SQLite documentation for more info).
If two or more distinct but shareable in-memory databases are needed in a single process, then the mode=memory query parameter can be used with a URI filename to create a named in-memory database
const DBConnection = "file:test1?mode=memory&cache=shared"
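To make the name unique per test, one option is to derive it from the test's name. The uniqueDSN helper below is hypothetical (not part of gorm or the question's code), shown as a minimal sketch:

```go
package main

import "fmt"

// uniqueDSN builds a named in-memory SQLite DSN so each test
// gets its own shared-cache database instance.
func uniqueDSN(testName string) string {
	return fmt.Sprintf("file:%s?mode=memory&cache=shared", testName)
}

func main() {
	// In a real test you would pass t.Name() instead of a literal.
	fmt.Println(uniqueDSN("TestSomething_test1"))
	// → file:TestSomething_test1?mode=memory&cache=shared
}
```

Because the name comes from t.Name(), even parallel subtests get distinct databases.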
Open and Close Between Tests
You also need to make sure you're initializing a fresh database per-test, not once at the top of your testing function. For example:
func TestSomething(t *testing.T) {
    // Don't initialize it here

    t.Run("test1", func(t *testing.T) {
        db, err := test.NewMigratedDB(test.DBProvider, test.DBConnection, models...)
        defer db.Close()
        // Do something
    })

    t.Run("test2", func(t *testing.T) {
        db, err := test.NewMigratedDB(test.DBProvider, test.DBConnection, models...)
        defer db.Close()
        // Do something
    })
}
If you open and close the database before and after each test, and no tests that connect to SQLite run concurrently, then closing the connection should clear the SQLite database: a fresh database will be re-created on the next call to gorm.Open, and you don't need named in-memory databases. There are a lot of "ifs" there, so pick your poison.

How to create an OpenTelemetry span from ctx in gRPC server stub

My Go gRPC server is instrumented with the Google Cloud Trace span exporter:
import texporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/trace"
...
gcp, err := texporter.NewExporter()
...
tracer := trace.NewTracerProvider(
    trace.WithSyncer(gcp),
    trace.WithSampler(trace.AlwaysSample()),
)
otel.SetTracerProvider(tracer)
and otelgrpc interceptors registered on the gRPC server:
unaryInterceptors := grpc_middleware.WithUnaryServerChain(
    otelgrpc.UnaryServerInterceptor(),
)
streamInterceptors := grpc_middleware.WithStreamServerChain(
    otelgrpc.StreamServerInterceptor(),
)
Now I'm trying to create a trace span inside the RPC implementation so I can have child spans for the method e.g.:
func (s *srv) Test(ctx context.Context, req *pb.Request) (*pb.TestResponse, error) {
    // create a span here
    span1 := [??????].Start()
    doWork1()
    span1.End()

    span2 := [??????].Start()
    doWork2()
    span2.End()
    ...
}
However, it is wildly unclear from the OpenTelemetry docs how one does that.
The closest I've gotten is otel.GetTracerProvider().Tracer("some string here???"), which provides a Start(ctx) (ctx, Span). But it is not clear to me what string to provide here (my exporter doesn't have a URL like the docs indicate), and this seems quite inconvenient.
I'm thinking there's something like an otelgrpc.SpanFromCtx(ctx) method somewhere that pulls the tracer and creates a span from the RPC ctx, and that I'm just not finding it. Sadly the docs are quite lacking on OT+gRPC+Go.
Since you are using github.com/GoogleCloudPlatform/opentelemetry-operations-go, you can follow that project's own example for creating spans. The important part is passing the RPC's ctx into tracer.Start: the otelgrpc interceptor has already recorded the server span in that context, so any span you start from it becomes a child of the RPC's span.
You can create a new span like this:
// Create a custom span.
tracer := otel.GetTracerProvider().Tracer("example.com/trace")
err = func(ctx context.Context) error {
    ctx, span := tracer.Start(ctx, "foo")
    defer span.End()
    // Do some work.
    return nil
}(ctx)
For more detail, see the OpenTelemetry Google Cloud Trace Exporter documentation.

Parse a URL with # in Go

So I'm receiving a request to my server that looks a little something like this
http://localhost:8080/#access_token=tokenhere&scope=scopeshere
and I can't seem to find a way to parse the token from the url.
If the # were a ?, I could just parse it as a standard query param.
I tried to just getting everything after the / and even the full URL, but with no luck.
Any help is greatly appreciated.
edit:
So I've solved the issue now, and the short answer is that you can't really do it in Go. So I made a simple package that does it on the browser side and then sends the token back to the server.
Check it out if you're trying to do local twitch API stuff in GO:
https://github.com/SimplySerenity/twitchOAuth
The anchor part is not even (generally) sent by the client to the server; browsers, for example, don't send it.
To parse URLs, use the Go net/url package: https://golang.org/pkg/net/url/
Note: you should use the Authorization header to send auth tokens.
Example code with extracted data from your example url:
package main

import (
    "fmt"
    "net"
    "net/url"
)

func main() {
    // Your url with hash
    s := "http://localhost:8080/#access_token=tokenhere&scope=scopeshere"

    // Parse the URL and ensure there are no errors.
    u, err := url.Parse(s)
    if err != nil {
        panic(err)
    }

    // ---> here is where you will get the url hash #
    fmt.Println(u.Fragment)

    fragments, _ := url.ParseQuery(u.Fragment)
    fmt.Println("Fragments:", fragments)
    if fragments["access_token"] != nil {
        fmt.Println("Access token:", fragments["access_token"][0])
    } else {
        fmt.Println("Access token not found")
    }

    // ---> Other data from the URL:
    fmt.Println("\n\nOther data:")

    // Accessing the scheme is straightforward.
    fmt.Println("Scheme:", u.Scheme)

    // The `Host` contains both the hostname and the port,
    // if present. Use `SplitHostPort` to extract them.
    fmt.Println("Host:", u.Host)
    host, port, _ := net.SplitHostPort(u.Host)
    fmt.Println("Host without port:", host)
    fmt.Println("Port:", port)

    // To get query params in a string of `k=v` format,
    // use `RawQuery`. You can also parse query params
    // into a map. The parsed query param maps are from
    // strings to slices of strings, so index into `[0]`
    // if you only want the first value.
    fmt.Println("Raw query:", u.RawQuery)
    m, _ := url.ParseQuery(u.RawQuery)
    fmt.Println(m)
}

// part of this code was taken from: https://gobyexample.com/url-parsing

Check errors when calling http.ResponseWriter.Write()

Say I have this http handler:
func SomeHandler(w http.ResponseWriter, r *http.Request) {
    data := GetSomeData()
    _, err := w.Write(data)
}
Should I check the error returned by w.Write? Examples I've seen just ignore it and do nothing. Also, functions like http.Error() do not return an error to be handled.
It's up to you. My advice is that unless the documentation of some method / function explicitly states that it never returns a non-nil error (such as bytes.Buffer.Write()), always check the error and the least you can do is log it, so if an error occurs, it will leave some mark which you can investigate should it become a problem later.
This is also true for writing to http.ResponseWriter.
You might think ResponseWriter.Write() may only return errors if sending the data fails (e.g. connection closed), but that is not true. The concrete type that implements http.ResponseWriter is the unexported http.response type, and if you check the unexported response.write() method, you'll see it might return a non-nil error for a bunch of other reasons.
Reasons why ResponseWriter.Write() may return a non-nil error:
If the connection was hijacked (see http.Hijacker): http.ErrHijacked
If content length was specified, and you attempt to write more than that: http.ErrContentLength
If the HTTP method and / or HTTP status does not allow a response body at all, and you attempt to write more than 0 bytes: http.ErrBodyNotAllowed
If writing data to the actual connection fails.
Even if you can't do anything with the error, logging it may be of great help debugging the error later on. E.g. you (or someone else in the handler chain) hijacked the connection, and you attempt to write to it later; you get an error (http.ErrHijacked), logging it will reveal the cause immediately.
Tip for "easy" logging errors
If you can't do anything with the occasional error and it's not a "showstopper", you may create and use a simple function that does the check and logging, something like this:
func logerr(n int, err error) {
    if err != nil {
        log.Printf("Write failed: %v", err)
    }
}
Using it:
logerr(w.Write(data))
Tip for "auto-logging" errors
If you don't even want to use the logerr() function all the time, you may create a wrapper for http.ResponseWriter which does this "automatically":
type LogWriter struct {
    http.ResponseWriter
}

func (w LogWriter) Write(p []byte) (n int, err error) {
    n, err = w.ResponseWriter.Write(p)
    if err != nil {
        log.Printf("Write failed: %v", err)
    }
    return
}
Using it:
func SomeHandler(w http.ResponseWriter, r *http.Request) {
    w = LogWriter{w}
    w.Write([]byte("hi"))
}
Using LogWriter as a wrapper around http.ResponseWriter, should writes to the original http.ResponseWriter fail, it will be logged automatically.
This also has the great benefit of not expecting a logger function to be called, so you can pass a value of your LogWriter "down" the chain, and everyone who attempts to write to it will be monitored and logged, they don't have to worry or even know about this.
But care must be taken when passing LogWriter down the chain, as there's also a downside: a value of LogWriter will not implement other interfaces the original http.ResponseWriter might also implement, e.g. http.Hijacker or http.Pusher.
Here's an example on the Go Playground that shows this in action, and also shows that LogWriter will not implement other interfaces; and also shows a way (using 2 "nested" type assertions) how to still get out what we want from LogWriter (an http.Pusher in the example).
I want to add to @icza's solution. You don't need to create a logging structure; you can use a simple function:
func logWrite(write func([]byte) (int, error), body []byte) {
    _, err := write(body)
    if err != nil {
        log.Printf("Write failed: %v", err)
    }
}
Take a look at this approach, based on @icza's code: https://play.golang.org/p/PAetVixCgv4
