Golang: Integration tests hitting the same db instance even upon db.Close() - sqlite

I have a test suite that uses an in-memory SQLite instance to run db queries. After adding some tests, I suddenly started to get tons of "UNIQUE constraint failed..." errors on every line that performs an insertion. That makes it seem like all my tests are connecting to, writing to, and reading from the same db instance. Here's how the test db instance is produced:
const DBProvider = "sqlite3"

// DBConnection is the connection string to use for testing
const DBConnection = "file::memory:?cache=shared"

// NewMigratedDB returns a new connection to a migrated database
func NewMigratedDB(provider string, connection string, models ...interface{}) (*gorm.DB, error) {
    db, err := gorm.Open(provider, connection)
    if err != nil {
        return nil, err
    }
    db = db.AutoMigrate(models...)
    if db.Error != nil {
        return nil, db.Error
    }
    return db, nil
}
And here's how it's used in tests -
db, err := test.NewMigratedDB(test.DBProvider, test.DBConnection, models...)
if err != nil {
    t.Fatal(err)
}
defer db.Close()
// read/write anything
How can I make each call to NewMigratedDB produce a separate SQLite instance that only sees queries from the unit test that created it?

First of all, it would be helpful to see a full example of how you're using NewMigratedDB in a test. Here are my suggestions based on what I can see (and what I can't).
Set Up a Named In-memory Database
Each of your unit tests appears to be using the same in-memory copy of the SQLite database; if your tests run in parallel, they almost certainly share it. Use the following syntax to create separate named in-memory databases, replacing test1 with a unique name for each test that runs in parallel (see the SQLite documentation for more info):
If two or more distinct but shareable in-memory databases are needed in a single process, then the mode=memory query parameter can be used with a URI filename to create a named in-memory database
const DBConnection = "file:test1?mode=memory&cache=shared"
Open and Close Between Tests
You also need to make sure you're initializing a fresh database per test, not once at the top of your test function. For example:
func TestSomething(t *testing.T) {
    // Don't initialize the database here.
    t.Run("test1", func(t *testing.T) {
        db, err := test.NewMigratedDB(test.DBProvider, test.DBConnection, models...)
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()
        // Do something
    })
    t.Run("test2", func(t *testing.T) {
        db, err := test.NewMigratedDB(test.DBProvider, test.DBConnection, models...)
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()
        // Do something
    })
}
If you open and close the database before and after each test, and no concurrent tests connect to SQLite at the same time, then closing the connection clears the in-memory database and you don't need named in-memory databases: a fresh one is created on the next call to gorm.Open. There are a lot of "ifs" there, so pick your poison.

Related

Why I receive "too many documents writes sent to bulkwriter" when using the 'serverTimestamp' tag?

I wanted to test the batch use case and I'm a bit confused.
Initially I used the Batch() method from the Firestore Go SDK. When I use that method I don't receive any error.
I saw this message in my IDE when I was using Batch():
Deprecated: The WriteBatch API has been replaced with the transaction and the bulk writer API. For atomic transaction operations, use Transaction. For bulk read and write operations, use BulkWriter.
After reading that message I switched to BulkWriter, but when I use it I receive an error:
too many document writes sent to bulkwriter
When I remove the serverTimestamp tag there is no error.
The code:
// Storer persists tickets in Firestore.
type Storer struct {
    client *firestore.Client
}

// createTicket contains the data needed to create a Ticket in Firestore.
type createTicket struct {
    Title       string    `firestore:"title"`
    Price       float64   `firestore:"price"`
    DateCreated time.Time `firestore:"dateCreated,serverTimestamp"`
}

func (s *Storer) CreateTicket(ctx context.Context, ticket tixer.Ticket) error {
    bulk := s.client.BulkWriter(ctx)
    ticketRef := s.client.Collection("tickets").Doc(ticket.ID.String())
    _, err := bulk.Create(ticketRef, createTicket{
        Title: ticket.Title,
        Price: ticket.Price,
    })
    if err != nil {
        return err
    }
    bulk.Flush()
    return nil
}

Reliable thread-safe map

I was making a WaitForResponse function for my Discord bot, and it works, but the user can still use commands even when the bot is expecting a response. I combated this by using a map keyed by the user and channel IDs, but I was then hit with the dreaded fatal error: concurrent map read and write. So I tried using a sync.Map, but it wouldn't always work when I spammed the command: I could sometimes still run commands while the bot was expecting a response. Is there any way I can ensure that values are added to and removed from the map when and as they should be?
For these scenarios, a sync.Mutex can be used to make the map thread-safe: acquire the lock around every piece of code that reads or writes the map so that only one goroutine touches it at a time.
var (
    mu      sync.Mutex
    yourMap = map[string]string{}
)

func readMap(key string) string {
    mu.Lock()
    defer mu.Unlock()
    return yourMap[key]
}

func updateMap(key, value string) {
    mu.Lock()
    defer mu.Unlock()
    yourMap[key] = value
}
The Mutex ensures that ONLY ONE goroutine is allowed into the locked code, which means that for your case only one operation, either a read or a write, can be performed at a time.
For better efficiency, consider using sync.RWMutex, which lets any number of readers hold the lock at once while writes remain exclusive. From the GoDoc:
A RWMutex is a reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer. The zero value for a RWMutex is an unlocked mutex.
var (
    mu      sync.RWMutex
    yourMap = map[string]string{}
)

func readMap(key string) string {
    mu.RLock()
    defer mu.RUnlock()
    return yourMap[key]
}

func updateMap(key, value string) {
    mu.Lock()
    defer mu.Unlock()
    yourMap[key] = value
}

GO API with SQLITE3 can't DELETE tuple from db

Hello fellow developers.
I am trying to learn Go while building a simple web API using sqlite3. I got stuck at some point where I am unable to delete rows from my table by sending a DELETE request from Postman. I am trying to use the code below to delete a row. I have already verified that I have access to the db, and I can also delete rows using the sqlite3 command-line tool. I do not understand what is wrong!
func deleteArticle(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    params := mux.Vars(r) // get any params
    db := connectToDB(dbName)
    defer db.Close()
    _, err := db.Query("DELETE FROM article WHERE id=" + params["id"])
    if err != nil {
        fmt.Fprintf(w, "article couldn't be found in db")
    }
}
Here is the navigation part:
myRouter.HandleFunc("/articles/{id}", deleteArticle).Methods("DELETE")
No matter what I do, I cannot delete an article from the db using Postman.
Thanks bunches.
Thanks to #mkopriva's comments I have learned that:
1.
It is very important that you do not use Query nor QueryRow for SQL queries that do not return any rows; for those cases use the Exec method. When you use Query you always have to assign the result to a non-blank identifier, i.e. anything but _, and then invoke the Close method on it once you're done with the result. If you do not do that, your application will leak db connections and will very soon start crashing.
2.
When you want to pass user input (including record ids) to your queries you have to utilize, at all times, the parameter-reference syntax supported by the SQL dialect and/or driver you are using, and then pass the input separately. That means you should never do
Exec("DELETE FROM article WHERE id=" + params["id"]),
instead you should always do
Exec("DELETE FROM article WHERE id = ?", params["id"])
If you do not do it the proper way and instead keep using plain string concatenation, your app will be vulnerable to SQL injection attacks.
Based on this information I have changed my code to:
func deleteArticle(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    params := mux.Vars(r) // get any params
    db := connectToDB(dbName)
    defer db.Close()
    fmt.Printf("%q\n", params["id"])
    statement, err := db.Prepare("DELETE FROM article WHERE id = ?")
    if err != nil {
        fmt.Fprintf(w, "article couldn't be found in db")
        return
    }
    defer statement.Close()
    if _, err := statement.Exec(params["id"]); err != nil {
        fmt.Fprintf(w, "article couldn't be deleted")
    }
}
This solved my problem, so thank you #mkopriva.
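As a side note, for a one-off statement like this the Prepare step isn't strictly necessary; here is a sketch of a shorter form that still uses the ? placeholder (the http.Error responses and the RowsAffected check are my own additions for illustration, building on the handler above):

func deleteArticle(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    params := mux.Vars(r)
    db := connectToDB(dbName)
    defer db.Close()

    // Exec is the right call for statements that return no rows, and the
    // ? placeholder keeps the id out of the SQL string itself.
    res, err := db.Exec("DELETE FROM article WHERE id = ?", params["id"])
    if err != nil {
        http.Error(w, "could not delete article", http.StatusInternalServerError)
        return
    }
    if n, err := res.RowsAffected(); err == nil && n == 0 {
        http.Error(w, "article not found", http.StatusNotFound)
    }
}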

How can I increase firebase database read time and prevent timeout from Nginx

Context: I am trying to build a chat service. In the chat service I have many (say 50,000+) chat rooms.
I have 20 admins who can each access some specific chat rooms (say around 5,000 of them). I want a feature where I can add a new admin, get the chat room list based on a query (say the query returns 5,000 chat rooms), and add that new admin to all of those chat rooms through a single endpoint. I am using Golang and Firebase.
// GetAdminUser takes a userID and returns a user.
func GetAdminUser(userID int) (user *User, err error) {
    // It will query the database
    // and then return the user.
    return user, nil
}
The problem is that when I pass the patients list and try to read each patient's topics from the Firebase database, it takes around 20 minutes, so the request times out in Nginx.
Is there any way I can improve the Firebase read time using Go concurrency, or any other way to speed up reading from Firebase and adding the new admin, without causing a timeout error?
func AddNewAdminToPatientTopics(ctx context.Context, user User, patients []User) error {
    for _, patient := range patients {
        // Load the topics this patient is subscribed to.
        oldTopics := firebase.database.NewRef(fmt.Sprintf("USER_TOPICS/%d", patient.ID))
        var topics map[string]interface{}
        if err := oldTopics.Get(ctx, &topics); err != nil {
            return err
        }
        for topicID, t := range topics {
            // Copy the topic under the new admin's USER_TOPICS entry.
            newUserTopics := firebase.database.NewRef(fmt.Sprintf("USER_TOPICS/%d/%s", user.ID, topicID))
            if err := newUserTopics.Set(ctx, t); err != nil {
                return err
            }
            // Add this new admin as a participant in this topic.
            topic := firebase.database.NewRef(fmt.Sprintf("TOPICS/%s/Participants/%d", topicID, user.ID))
            participant := &Participant{
                UserID:             strconv.Itoa(user.ID),
                LastTimeSeenOnline: time.Now().Unix(),
                .......
            }
            if err := topic.Set(ctx, participant); err != nil {
                return err
            }
        }
    }
    return nil
}
func AddManager(w http.ResponseWriter, r *http.Request) {
    // Don't worry about errors, I handle them gracefully.

    // Get the admin user.
    user, err := GetAdminUser(UserID)

    // Get the patients list.
    // Say, in this case we have 5000+ patients.
    patients, err := GetPatients(user.CustomerID)

    // Join this user to all chat rooms that the first admin has.
    err = AddNewAdminToTopics(context.Background(), *user, patients)
}
Routers:
http.HandleFunc("chat/managers/new/add", Post).Then(clinic.AddManager))
Don't perform huge read or write operations on your real database please. While the server is processing that huge read/write, it is not servicing requests from your clients.
For this type of processing, I'd highly recommend setting up automatic backups in the Firebase console, and then performing the operation on the raw data from that backup. That way you can optimize how you process the data, to not be dependent on the response time of the Firebase Database servers.
Of course you'll still depend on the Firebase servers for the write operations. I'd consider doing that in reasonably sized multi-location updates, where you ensure that each multi-location update takes no more than a few seconds at most.
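To illustrate the multi-location-update idea, here is a rough sketch (assumptions: the firebase.google.com/go/db client, the TOPICS layout and Participant type from the question; addAdminInChunks and chunkSize are made-up names). It collects path-to-value pairs into a map and writes each batch with a single Update call, so each round trip covers many topics instead of one:

// addAdminInChunks adds the new admin as a participant of every topic in
// topicIDs, writing the paths in batches of multi-location updates instead
// of one Set call per topic. Keep chunkSize small enough that each update
// finishes within a few seconds.
func addAdminInChunks(ctx context.Context, client *db.Client, adminID int, topicIDs []string, chunkSize int) error {
    updates := make(map[string]interface{})
    flush := func() error {
        if len(updates) == 0 {
            return nil
        }
        // One round trip writes every path currently in the map.
        if err := client.NewRef("/").Update(ctx, updates); err != nil {
            return err
        }
        updates = make(map[string]interface{})
        return nil
    }
    for _, topicID := range topicIDs {
        // The corresponding USER_TOPICS/<adminID>/<topicID> entries could be
        // added to the same map so they go out in the same batch.
        updates[fmt.Sprintf("TOPICS/%s/Participants/%d", topicID, adminID)] = &Participant{
            UserID:             strconv.Itoa(adminID),
            LastTimeSeenOnline: time.Now().Unix(),
            // other Participant fields omitted, as in the question
        }
        if len(updates) >= chunkSize {
            if err := flush(); err != nil {
                return err
            }
        }
    }
    return flush()
}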

FIRDatabaseReference observe gets empty updates while another reference is running a transaction

We're using the Firebase DB together with RxSwift and are running into problems with transactions. I don't think they're related to the combination with RxSwift, but that's our context.
I'm observing data in the Firebase DB for any value changes:
let child = dbReference.child(uniqueId)
let dbObserverHandle = child.observe(.value, with: { snapshot -> () in
    guard snapshot.exists() else {
        log.error("empty snapshot - child not found in database")
        observer.onError(FirebaseDatabaseConsumerError(type: .notFound))
        return
    }
    //more checks
    ...
    //read the data into our object
    ...
    //finally send the object as Rx event
    observer.onNext(parsedObject)
}, withCancel: { _ in
    log.error("could not read from database")
    observer.onError(FirebaseDatabaseConsumerError(type: .databaseFailure))
})
No problems with this alone: data is read and observed without any problems, and changes in the data are propagated as they should be.
Problems occur as soon as another part of the application modifies the observed data with a transaction:
dbReference.runTransactionBlock({ (currentData: FIRMutableData) -> FIRTransactionResult in
    log.debug("begin transaction to modify the observed data")
    guard var ourData = currentData.value as? [String : AnyObject] else {
        //seems to be nil data because data is not available yet, retry as stated in the transaction example https://firebase.google.com/docs/database/ios/read-and-write
        return TransactionResult.success(withValue: currentData)
    }
    ...
    //read and modify data during the transaction
    ...
    log.debug("complete transaction")
    return FIRTransactionResult.success(withValue: currentData)
}) { error, committed, _ in
    if committed {
        log.debug("transaction commited")
        observer(.completed)
    } else {
        let error = error ?? FirebaseDatabaseConsumerError(type: .databaseFailure)
        log.error("transaction failed - \(error)")
        observer(.error(error))
    }
}
The transaction receives nil data on the first try, which is something you should be able to handle. We just call
return TransactionResult.success(withValue: currentData)
in that case.
But this is propagated to the observer described above: the observer runs into the "empty snapshot - child not found in database" case because it receives an empty snapshot.
The transaction is then run again, updates the data and commits successfully, and the observer receives another update with the updated data, so everything is fine again.
My questions:
Is there any better way to handle the nil data during the transaction than writing it back to the database with FIRTransactionResult.success?
That seems to be the only way to complete this transaction run and trigger a re-run with fresh data, but maybe I'm missing something.
Why are we receiving the empty currentData at all? The data is obviously there, because it's being observed.
Transactions seem unusable with this behavior if they trigger a 'temporary delete' for all observers of the data.
Update
We gave up and restructured the data to remove the need for transactions. With a different data structure we were able to update the dataset concurrently without risking data corruption.
