Symfony2: adding tests for saving an API feed to the database

I'm working on my first major Symfony2 project.
I have updated an API that's no longer being maintained by the original author: https://github.com/DizzyBHigh/FantasyDataAPI-v2
The updated API contains all the necessary unit and integration tests for the different API calls, including mocks of the data feeds that come from the API.
I've now written a Symfony2 bundle, FP_DataBundle, that uses this API via console commands and saves the data from the feeds to the database.
My question is about testing:
Can I use the same mocks that are in my FantasyDataAPI library to test that the correct data is being saved to the database?
I'm thinking that I need the tests to execute the console commands, fetch the data from the database, and then go through the mocks and check that the data in the DB matches.
Can I create a database that just holds the mock data and then test against that DB? How can I do that?
Or is my thinking askew and I need to do it another way? The feeds contain a lot of fields in JSON format, and duplicating all of these in my bundle again seems like overkill.

Related

Can an alternative migration framework be used with Hasura?

Is it possible to use a different migration framework for your relational database with Hasura?
I see that Hasura has the ability to manage migrations, as noted in the documentation here.
We are using Liquibase as the migration framework for all of our other projects and want to use Hasura while keeping our existing migration framework (Liquibase).
In the setup documentation already linked above, there's a prompt that asks if you want to initialize the project with metadata and migrations. Is it as simple as saying no here?:
? Initialize project with metadata & migrations from https://docs-demo.hasura.app ? Yes
Can this be done, or do you have to use the Hasura migrations if you want to use Hasura?
Yes, you can manage your database migrations however you want to; you are under no obligation to use Hasura's migration system. Hasura's migrations are just a collection of .sql files that can be applied/revoked sequentially.
What is critically important is that you keep Hasura's metadata in sync with the database state.
For example, if you're tracking a database column in Hasura, and you use a SQL client to drop that column in your DB, Hasura's metadata (which describes the tables, columns, etc. that are exposed through the API) will be inconsistent with the database state. The proper way to manage a task like that is to either (1) use the Hasura console UI, (2) use the Hasura metadata HTTP API, or (3) manually edit and apply metadata with the Hasura CLI.
The task of keeping Hasura metadata in sync with DB state becomes non-trivial very quickly as you start to make use of features like "actions" and "events". You should run through some real-life migration scenarios with your current setup to get a sense of the challenges.
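As a rough illustration of the metadata HTTP API route, here is a minimal sketch in TypeScript (assuming a Hasura v2 instance reachable at HASURA_URL, an admin secret in HASURA_ADMIN_SECRET, and the get_inconsistent_metadata request type; verify the exact request/response shape against the docs for your Hasura version). It could run as a CI step after a Liquibase deploy to catch metadata drift:

// check-hasura-metadata.ts (Node 18+, using the built-in fetch)
const HASURA_URL = process.env.HASURA_URL ?? "http://localhost:8080";
const ADMIN_SECRET = process.env.HASURA_ADMIN_SECRET ?? "";

async function callMetadataApi(type: string, args: unknown = {}) {
  const res = await fetch(`${HASURA_URL}/v1/metadata`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": ADMIN_SECRET,
    },
    body: JSON.stringify({ type, args }),
  });
  if (!res.ok) throw new Error(`Hasura metadata API error: ${res.status}`);
  return res.json();
}

async function main() {
  // Ask Hasura which metadata objects no longer match the underlying database.
  const result = await callMetadataApi("get_inconsistent_metadata");
  if (result.is_consistent === false) {
    console.error("Metadata drift detected:", result.inconsistent_objects);
    process.exit(1);
  }
  console.log("Hasura metadata is consistent with the database.");
}

main().catch((err) => { console.error(err); process.exit(1); });

If drift is detected, the fix is then applied through one of the three routes above (console UI, metadata API, or the Hasura CLI).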

Dynamically pointing to a Firebase database

I would like to build a simple framework on Flutter + Firebase, but unfortunately I have a big problem.
Building the framework only makes sense if the application downloaded from the store can dynamically point to any database (each customer has a different database, but there is only one application in the store).
Unfortunately, from what I understand, the connection data for Firebase must be in the google-services.json file in the source code.
Is there any way to point the application to a Firebase database dynamically?
google-services.json is not required to initialize Firebase. You can take control of initialization by calling FirebaseApp.initializeApp() on your own with the values you specify. You can also use the Play services documentation to help.
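The question is about Flutter, but the underlying idea is the same across SDKs: supply the connection values at runtime instead of relying on a bundled config file. Here is a hedged sketch using the Firebase JS SDK (all config values and the app name are placeholders); in FlutterFire the equivalent is Firebase.initializeApp with a FirebaseOptions object:

import { initializeApp } from "firebase/app";
import { getDatabase, ref, get } from "firebase/database";

// Hypothetical per-customer configuration fetched at runtime (e.g. from your own
// backend after login) instead of being baked into google-services.json.
const customerConfig = {
  apiKey: "<api-key>",
  appId: "<app-id>",
  projectId: "<project-id>",
  databaseURL: "https://<customer-db>.firebaseio.com",
};

// Initialize a named Firebase app with values you supply yourself.
const app = initializeApp(customerConfig, "customer-app");
const db = getDatabase(app);

// Read something from the customer's database to verify the connection.
get(ref(db, "settings")).then((snapshot) => console.log(snapshot.val()));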

Recommended way to maintain stored procedures, user defined functions, indexes, etc. in source control and for CI/CD

For our stored procedures, we were using an approach that was working rather well during CD, which was making use of the JavaScript v2 SDK to call container.storedProcedures.upsert. Upsert has now been removed from the API in v3, as it's not supported on partitioned collections (which are the only ones you'll be able to create from now on).
I assumed that the v3 SDK would have a way to at least delete and re-create these objects, but from what I can see it only allows creation: https://learn.microsoft.com/en-us/javascript/api/%40azure/cosmos/storedprocedures?view=azure-node-latest
We followed a similar approach for keeping the index definitions up to date, and this is the main reason we now need to migrate to the v3 SDK, as updating some kinds of indexes otherwise fails through v2.
Given that what we want (if possible) is to be able to maintain all of these objects in source control and automatically deploy them during CD, what would be the recommended way to do this?
(Meanwhile I'm exploring using these PowerShell commands for it: https://github.com/PlagueHO/CosmosDB but attempting to create a UDF through them caused a very bizarre outcome in which the Azure Portal stopped showing me any UDFs on the collection until I removed the one I had created using New-CosmosDbUserDefinedFunction.)
There are a few options today and your choices will get better here over the next couple of months.
Cosmos DB now has support for creating stored procedures, triggers and UDFs using ARM templates. The second sample on the Cosmos DB ARM Template Samples page shows this. The PowerShell tool you are using is not officially supported, so you'll need to file an issue there for any questions. We will be releasing PowerShell cmdlets to create stored procedures, triggers and UDFs, but there is no ETA to share at this time.
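For the SDK route specifically, the v3 JavaScript SDK exposes individual stored procedures through container.scripts, with replace/delete alongside create, so a CD step can replace an existing definition and fall back to creating it on first deployment. A minimal sketch, assuming made-up database/container/sproc names and connection values from environment variables (UDFs and triggers have analogous userDefinedFunction(s)/trigger(s) accessors):

import { CosmosClient } from "@azure/cosmos";
import { readFileSync } from "fs";

// Connection values are assumed to come from the CD environment.
const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT!,
  key: process.env.COSMOS_KEY!,
});

async function deploySproc(dbId: string, containerId: string, sprocId: string, file: string) {
  const container = client.database(dbId).container(containerId);
  const definition = { id: sprocId, body: readFileSync(file, "utf8") };
  try {
    // Replace the definition if the stored procedure already exists...
    await container.scripts.storedProcedure(sprocId).replace(definition);
  } catch (err: any) {
    if (err?.code !== 404) throw err;
    // ...otherwise create it on the first deployment.
    await container.scripts.storedProcedures.create(definition);
  }
}

deploySproc("mydb", "mycontainer", "bulkUpsert", "./sprocs/bulkUpsert.js")
  .catch((err) => { console.error(err); process.exit(1); });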

Firestore schema migration between projects

I have a Firebase project which basically has two environments: Staging and Production. The way I organized them is by creating separate Firebase projects. Each of my projects uses Firebase Cloud Functions and Firestore. In addition, each project is associated with a specific Git branch. Both branches are integrated into a CI/CD pipeline in Google Cloud Build.
So, in order to make it absolutely clear, I will share a simple diagram:
As you can see, I have the source code for the cloud functions under source control and there's nothing to worry about there. The issue comes in when I have the following situation:
A Firestore schema change is present on Staging
Cloud function (on Staging) is adjusted to the new schema.
Merge staging branch into production.
Due to the old Firestore schema on production, the new functions there won't work as expected.
In order to work around it, I need to manually go to the production Firestore instance and adjust the schema there (with the risk of messing up production data).
In the perfect case, that operation would be automated, and existing project data would be adjusted to the new schema dynamically after the merge.
Is that possible somehow? Something like migrations in .NET Core.
Cloud Firestore is schema-less - documents have no enforced schema. Code is able to write whatever fields it wants at any time that it wants. (For web and mobile clients, this is gated by security rules, but for backend code, there are no restrictions.) As such, there is no such thing as a formal migration in Cloud Firestore.
The "schema" of your documents is effectively defined by your code that reads and writes those documents. This means that migrating a data to a new format means that you're going to have to write code to perform the required changes. There is really no easy way around this. All you can really do is design your updates so that they are not disruptive to existing code when it comes time to move them to another environment. This means your code should be resilient to breaking changes, or simply do not perform breaking changes until after all code has been updated to deal with those changes.
You have to use Google Cloud to download an archive of the Firestore data. Run a migration script yourself on the archive, and then upload the archive to restore your Firestore database.
https://cloud.google.com/firestore/docs/manage-data/export-import
Google Cloud gives you a lot of command line access for managing your Firestore service.
// manage indexes
gcloud firestore indexes
// export all data to a bucket
gcloud firestore export gs://[BUCKET_NAME]
// import data from a bucket
gcloud firestore import gs://[BUCKET_NAME]/[filename]
// manage admin "functions" currently running (i.e. kill long processes)
gcloud firestore operations
To download/upload your export archive from Google Cloud buckets:
// list files in a bucket
gsutil ls gs://[BUCKET_NAME]
// download
gsutil cp gs://[BUCKET_NAME]/[filename] .
// upload
gsutil cp [filename] gs://[BUCKET_NAME]/[filename]
Once you set up Google Cloud to be accessible from your build scripts, it's possible to automate data migration scripts that download, transform and then upload data.
It's recommended to maintain a "migrations" document in your Firestore so you can track which revision of the migration needs to be done.
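A minimal sketch of such a ledger with the Node Admin SDK (the "migrations" collection name and the migration id are assumptions):

import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// One document per migration in a "migrations" collection, so the build script
// can skip revisions that have already been applied to this project.
async function runOnce(id: string, migrate: () => Promise<void>) {
  const ref = db.collection("migrations").doc(id);
  if ((await ref.get()).exists) {
    console.log(`${id} already applied, skipping`);
    return;
  }
  await migrate();
  await ref.set({ appliedAt: admin.firestore.FieldValue.serverTimestamp() });
}

runOnce("2021-06-add-displayName", async () => {
  // ...call out to gcloud/gsutil here, or run an in-place transform
}).catch((err) => { console.error(err); process.exit(1); });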
To avoid heavy migration tasks, try adding a "version" property to documents, and then use the converter callbacks on the query builder to mutate data to the latest schema on the client side. While this won't help you handle changes to Firestore rules or functions, it's often easier for tiny changes that are mostly cosmetic.
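A sketch of that converter approach with the web client SDK, assuming a hypothetical user document where an old fullName field was replaced by displayName and a version property marks the schema revision:

import { getFirestore, collection, getDocs } from "firebase/firestore";
import type { FirestoreDataConverter, QueryDocumentSnapshot } from "firebase/firestore";

// Latest client-side shape of a user document (hypothetical fields).
// Assumes the Firebase app has already been initialized elsewhere.
interface User {
  displayName: string;
  version: number;
}

// Upgrade old documents (version < 2) as they are read, without touching
// the data stored in Firestore itself.
const userConverter: FirestoreDataConverter<User> = {
  toFirestore: (user) => ({ ...user, version: 2 }),
  fromFirestore: (snap: QueryDocumentSnapshot) => {
    const data = snap.data();
    if ((data.version ?? 1) < 2) {
      return { displayName: data.fullName ?? "", version: 2 }; // old schema used "fullName"
    }
    return { displayName: data.displayName, version: data.version };
  },
};

const users = collection(getFirestore(), "users").withConverter(userConverter);
getDocs(users).then((snap) => snap.forEach((d) => console.log(d.data().displayName)));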

Best common practice for data insert/update scripts in Flyway

scenario: I have two databases.
The first database is a blank database used for testing. I essentially run flyway:migrate to build the database with the complete schema and run my integration tests against that blank database. Any data that the integration tests need is inserted before the tests are run. Finally, the database is torn down using flyway:clean to make sure the next build that comes through has a clean DB to work with.
The second database has data in it.
Problem: The build fails in the integration phase because I have migration scripts that depend on data which database 1 doesn't have. Basically, I'm inserting data based on certain data already existing in the DB.
Is the best practice for Flyway to only have DDL-type migration scripts and no data insert/update scripts?
Consider adding your reference data behind an IF statement in an afterMigrate callback:
http://flywaydb.org/documentation/callbacks.html
In the best case, you add the data as a migration and change it in the future via further migrations, including in production. Things can be more complicated if that data can be changed in real environments by other means; in that case, I would personally prefer to have a (shared) test fixture insert the sample data.
