I have a Unity3D app backed by a Firebase/Firestore database.
Firebase provides a Local Emulator Suite for testing, but it is not supported on Unity.
It seems the only way to test logic which uses Firestore is to either:
1. Use a real, test-only database and clear it in each SetUp, or
2. Mock out all usages of Firestore.
(2) seems error-prone and like I'm reinventing the wheel. (1) seems both dangerous and slow. Is there another widely-supported option I'm missing?
The emulator suite is now supported. See https://github.com/firebase/quickstart-unity/issues/719#issuecomment-938759588
To use the emulator, run
// Point the SDK at the local emulator; do this before Firestore is first used.
FirebaseFirestore firestore = FirebaseFirestore.DefaultInstance;
firestore.Settings.Host = "localhost:8080";
// The emulator does not serve TLS, so disable SSL.
firestore.Settings.SslEnabled = false;
Make sure you run this code in each assembly that touches Firestore. Many tutorials have the tests run in a separate assembly from the rest of your code; if your architecture is similar, you will need the above code in each of those assemblies. Specifically, be sure to call it both in your test assembly and in your game code before either accesses FirebaseFirestore.DefaultInstance, since the settings are applied per assembly.
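Since the host/SSL settings have to be applied in more than one place, it can help to centralize them. A minimal sketch, assuming a shared helper class (the name FirestoreTestConfig is made up) that both the game assembly and the test assembly can reference:

```csharp
// Hypothetical shared helper; call it from both the game and test assemblies.
public static class FirestoreTestConfig
{
    // Returns a Firestore instance pointed at the local emulator.
    // Must run before the instance is used for any reads or writes.
    public static FirebaseFirestore GetEmulatorInstance()
    {
        FirebaseFirestore firestore = FirebaseFirestore.DefaultInstance;
        firestore.Settings.Host = "localhost:8080";
        firestore.Settings.SslEnabled = false;
        return firestore;
    }
}
```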
I am new to PACT and trying to use pact-net for contract testing for a .net microservice. I understand the concept of consumer test which generates a pact file.
There is the concept of a provider state middleware which is responsible for making sure that the provider's state matches the Given() condition in the generated pact.
I am a bit confused about the following and how to achieve it:
The provider tests are run against the actual service. So we start the provider service before tests are run. My provider service interacts with a database to store and retrieve records. PACT also mentions that all the dependencies of a service should be stubbed.
So we run the actual provider api that is running against the actual db?
If we are running the api against the actual db, how do we inject the data into the db? Should we be using the provider api's own endpoints to add the Given() data?
If the above is not the correct approach then what is?
All the basic blog articles I have come across do not explain this and usually have examples with no provider states or states that are just some text files on the file system.
Help appreciated.
I'm going to add to Matt's comment, you have three options:
Do your provider test with a connected environment, but you will have to do some cleanup manually afterwards and make sure your data is always available in your db and/or the external APIs are always up and running. Simple to write, but can be very hard to maintain.
You mock your API calls but call the real database.
You mock all your external dependencies: the API and the DB calls.
For 2) or 3) you will have to add test routes and inject the provider state middleware in your provider test fixture. Then you can configure provider states to be called to generate in-memory data (solution 3) or to run some data initialization (solution 2).
You can find an example here: https://github.com/pact-foundation/pact-net/tree/master/Samples/EventApi/Provider.Api.Web.Tests
The provider tests are run against the actual service
Do you mean against a live environment, or against the actual service running locally alongside the unit test? (The former is not recommended, because of (2) above.)
This is one of the exceptions to that rule. You can choose to use a real DB or an in-memory one - whatever is most convenient. It's common to use docker and tools like that for testing.
In your case, I'd have a specific test-only set of routes that respond to the provider state handler endpoints, that also have access to the repository code and can manipulate state of the system.
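To make that concrete, the test-only provider-state route could be sketched as an ASP.NET Core controller. The route, the state strings, and INewsItemRepository are assumptions for illustration; during verification, pact-net POSTs the state name from the consumer's Given() clause to this endpoint:

```csharp
// Shape of the body pact-net posts during verification: { "state": "..." }
public class ProviderState
{
    public string State { get; set; }
}

// Test-only endpoint, registered only when the provider runs under verification.
[ApiController]
[Route("provider-states")]
public class ProviderStatesController : ControllerBase
{
    private readonly INewsItemRepository _repository; // hypothetical repository

    public ProviderStatesController(INewsItemRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public IActionResult Post([FromBody] ProviderState providerState)
    {
        // Seed the system so it matches the consumer's Given() condition.
        if (providerState.State == "a news item with id 1 exists")
        {
            _repository.Save(new NewsItem { Id = 1, Title = "Test item" });
        }
        return Ok();
    }
}
```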
I'm currently doing server side development for firebase. I'd like to test that my sendMessage function is working by targeting a test token.
I tried using testToken as a value, but arbitrary values will in fact return an error:
Error: The registration token is not a valid FCM registration token
I've searched all over and found no way to test my code without using a phone connected to the service. Am I just missing something?
Does firebase provide any form of test tokens for doing server side development?
If you are working with Android, and don't have a device to test with, you should be able to test against an emulator instance. But you will definitely need to provide FCM a real token that's associated with some device, real or virtual.
This is a very good question, but it does raise some further questions.
You want a test token; that is basically mocking, which is fine for a unit test. But for integration tests you need a real token, so you can exercise both the success case and the error case. Then, in case of error, you can retry or write an entry to the database.
Maybe a better approach for your test as described is to mock the sendMessage method or the class containing it. If you follow this path, though, remember that mocking is a double-edged sword: you should not mock the dependencies themselves, because those are already tested, but rather the wrapper components that use those dependencies. Of course, testing requires finesse, so mocking this one thing isn't that bad.
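A sketch of that wrapper idea, assuming the Firebase Admin SDK for .NET (the IPushSender interface and all names here are made up): unit tests mock IPushSender, while only integration tests with a real device or emulator token ever exercise FcmPushSender.

```csharp
using System.Threading.Tasks;
using FirebaseAdmin.Messaging;

// Thin wrapper around the FCM call; this is the seam you mock in unit tests.
public interface IPushSender
{
    Task SendAsync(string token, string title, string body);
}

public class FcmPushSender : IPushSender
{
    public async Task SendAsync(string token, string title, string body)
    {
        var message = new Message
        {
            Token = token,
            Notification = new Notification { Title = title, Body = body },
        };
        // Requires a real registration token; arbitrary strings are rejected by FCM.
        await FirebaseMessaging.DefaultInstance.SendAsync(message);
    }
}
```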
We have two micro-services: Provider and Consumer, both are built independently. Consumer micro-service makes a mistake in how it consumes Provider service (for whatever reason) and as a result, incorrect pact is published to the Pact Broker.
Consumer service build is successful (and can go all the way to release!), but next Provider service build will fail for the wrong reason. So we end up with the broken Provider service build and a broken release of Consumer.
What is the best practice to guard against situations like this?
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change, there is no need to run the Provider build, albeit there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions you could use one or more of the below strategies.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also checked the master pact files into source control, your CI build could conditionally act - if the contract has changed, you must wait for a green provider build, if not you can safely deploy!
Store in separate repository
If you really want the provider to maintain control, you could store contracts in an intermediate repository or file location managed by the provider. I'd recommend this is a last resort as it negates much of the collaboration pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using web hooks on the Pact Broker. You could trigger a build on the Provider as soon as a new contract is submitted to the server.
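Concretely, a webhook is created by POSTing a definition to the Broker's webhook resource; something along these lines, where the CI URL is a placeholder:

```json
{
  "events": [
    { "name": "contract_content_changed" }
  ],
  "request": {
    "method": "POST",
    "url": "https://ci.example.org/job/provider-pact-verification/build",
    "headers": { "Content-Type": "application/json" }
  }
}
```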
You could envisage this step working with options 1 and 2.
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
You're spot on; that is one of the current gaps in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That said, in the meantime this isn't solving your current problem, so I'm going to suggest a potential workaround in your process. Instead of running the tests for the consumer, having them pass, and then releasing it straight away, you could have the tests run on the consumer, then wait for the provider test to come back green before releasing the consumer/provider together. Another way would be to version your provider/consumer interactions (API versioning) so that you can release the consumer beforehand, but it isn't "turned on" until the correct version of the provider is released.
None of these solutions are great and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon to fix the developer experience with pact broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts how they want. But in the end producer's build will suffer due to incorrect contracts generated by consumers.
Is there any way to have contracts defined by the producer? As I see it, the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract it is recommended to have contracts defined in the producer's sources (e.g. in the same git repo as the producer source code) or in a separate scm repo that can be managed by producer and consumer together.
I'm trying to refactor an existing project into PureMVC. This is an Adobe AIR desktop app taking advantage of the SQLite library included with AIR and building upon it with a few other libraries:
Paul Robertson's excellent async SQLRunner
promise-as3 implementation of asynchronous promises
websql-js documentation for good measure
I made my current implementation of the database similar to websql-js's promise based SQL access layer and it works pretty well, however I am struggling to see how it can work in PureMVC.
Currently, I have my VOs that will be paired with DAOs (data access objects) for database access. Where I'm stuck is how to track the dbFile and sqlRunner instances across the entire program. The DAOs will need to know about the sqlRunner, or at the very least, the dbFile. Should the sqlRunner be treated as singleton-esque? Or created for every database query?
Finally, how do I expose the dbFile or sqlRunner to the DAOs? In my head right now I see keeping these in a DatabaseProxy that would be exposed to other proxies, and instantiate DAOs when needed. What about a DAO factory pattern?
I'm very new to PureMVC but I really like the structure and separation of roles. Please don't hesitate to tell me if this implementation simply will not work.
Typically in PureMVC you would use a Proxy to fetch remote data and populate the VOs used by your View, so in that respect your proposed architecture sounds fine.
DAOs are not a pattern I've ever seen used in conjunction with PureMVC (which is not to say that nobody does or should). However, if I was setting out to write a crud application in PureMVC, I would probably think in terms of a Proxy (or proxies) to read information from the database, and Commands to write it back.
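For what it's worth, such a Proxy might be shaped like this. This is sketched against PureMVC's class names in C# for brevity (the question's project is ActionScript, but the structure carries over); SQLRunner and NewsItemDao stand in for the asker's runner and DAOs:

```csharp
// Sketch: a single Proxy owns the runner; DAOs are created on demand but
// all share it (singleton-esque runner, throwaway DAOs).
public class DatabaseProxy : Proxy
{
    public new const string NAME = "DatabaseProxy";

    private readonly SQLRunner _sqlRunner;

    public DatabaseProxy(SQLRunner sqlRunner) : base(NAME)
    {
        _sqlRunner = sqlRunner;
    }

    // Other proxies retrieve this proxy by NAME and ask it for DAOs
    // instead of tracking dbFile/sqlRunner themselves.
    public NewsItemDao GetNewsItemDao()
    {
        return new NewsItemDao(_sqlRunner);
    }
}
```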
I've just encountered an error in my app that probably could have been caught with some integration tests, so I think its high time I wrote some!
My question relates to the setup of these tests, and at what layer of the code you run the tests against.
Setup
Considering I should have lots of integration tests, I don't want to be creating and dropping a test database for every test; this would be epically slow (even if it's an SQLite in-memory one). My thoughts are:
Have a test db which sits next to my dev db
Before testing, run some reset script which will set up my schema correctly and insert any necessary data (not test case specific)
Simply use this test db as if its the real db.
However, it seems very wasteful that I have to run my Fluent NHib configuration in every [Setup]. Is this just tough? What are my options here?
My session is currently wrapped in a UoW pattern, with creation and destruction performed on begin_request and end_request (MVC web application) respectively. Should I modify this to play well with the tests to solve this problem?
Testing
When it comes to actually writing some tests, how should I do it?
Should I test from the highest level possible (my MVC controller actions) or from the lowest (repositories).
If I test at the lowest level, I'll have to manually hard-code all the data. This will make my tests brittle to changes in the code, and also not representative of what will really happen at runtime. If I test at the highest level, I have to run all my IoC container setup so that dependencies get injected and the whole thing functions (again, repeating this in every [SetUp]?)
Meh! I'm lost, someone point me in the right direction!
Thanks
In regards to creating the session factory, I create a class called _AssemblyCommon in my test project and expose the session factory as a static from there. A one-time setup method in a class marked with the [SetUpFixture] (NUnit) attribute configures the session factory.
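A sketch of that arrangement with NUnit and Fluent NHibernate (the in-memory SQLite configuration and the NewsItem mapping are placeholders; older NUnit versions spell the setup attribute differently):

```csharp
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NUnit.Framework;

// Runs once per test assembly and exposes the session factory as a static.
[SetUpFixture]
public class _AssemblyCommon
{
    public static ISessionFactory SessionFactory { get; private set; }

    [OneTimeSetUp]
    public void Configure()
    {
        SessionFactory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory()) // placeholder database
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<NewsItem>()) // placeholder entity
            .BuildSessionFactory();
    }
}
```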
In general integration tests should cover CRUD operations in the repositories. To do this, I have one test method per object (or aggregate root) and do an insert, retrieve, update, and delete all within that method. I also test any cascade deletes that I've defined. Keeping these operations in a single method doesn't leave any traces behind in the database. I do have some integration tests that leave behind test data but that hasn't been a problem.
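The single-method round trip described above might look like this (the entity, the repository, and the CreateSession() helper are placeholders):

```csharp
[Test]
public void NewsItem_crud_round_trip()
{
    var repository = new NHibernateGenericRepository<NewsItem>(CreateSession());

    // Insert
    var item = new NewsItem { Title = "before" };
    repository.Save(item);

    // Retrieve
    var loaded = repository.GetById(item.Id);
    Assert.That(loaded.Title, Is.EqualTo("before"));

    // Update
    loaded.Title = "after";
    repository.Save(loaded);
    Assert.That(repository.GetById(item.Id).Title, Is.EqualTo("after"));

    // Delete - the method finishes with no trace left in the database
    repository.Delete(loaded);
    Assert.That(repository.GetById(item.Id), Is.Null);
}
```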
Your higher level operations should be unit tested and mock (I use Moq) the repositories if possible.
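For those higher-level unit tests, stubbing a repository with Moq looks roughly like this (the IRepository and NewsController types are assumptions):

```csharp
[Test]
public void Controller_returns_item_from_stubbed_repository()
{
    // Stub the repository so no database is involved.
    var repository = new Mock<IRepository<NewsItem>>();
    repository.Setup(r => r.GetById(1))
              .Returns(new NewsItem { Id = 1, Title = "stubbed" });

    var controller = new NewsController(repository.Object);

    var item = controller.Get(1);

    Assert.That(item.Title, Is.EqualTo("stubbed"));
    repository.Verify(r => r.GetById(1), Times.Once());
}
```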
In my current MVC app I've found that it's enough to test the interaction between the repositories and the database. More than anything, this is to iron out any wrinkles in the NHibernate mapping. Everything (ok I'm exaggerating when I say everything) above that layer is unit tested in isolation. I did have some integration tests from the controllers all the way down the stack to the database and these used an IoC container (StructureMap) to build the controllers and the dependencies, but I found these tests were really not adding anything and they were quite an overhead to maintain, so I've removed them from the 'integration' tests for now - I may find a reason to put them in but so far I haven't.
Anyway, the test process I use works like this:
The build process for the data access layer test assembly creates the test database via a FluentNHibernate configuration ExposeSchema() call. The build process then runs some NHibernate repository level code to populate reference tables in the database.
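That schema-creation call presumably wraps NHibernate's SchemaExport via Fluent NHibernate's ExposeConfiguration hook; a sketch, with a placeholder connection string and entity:

```csharp
// Build the test database schema from the mappings before any tests run.
var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString(@"Server=.;Database=MyAppTests;Integrated Security=SSPI")) // placeholder
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<NewsItem>()) // placeholder entity
    .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(false, true))
    .BuildSessionFactory();
```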
Each integration test that runs is then wrapped in a System.Transactions.TransactionScope using() statement, and Complete() is never called on the TransactionScope. Each test can therefore run in isolation, and the results can be set up and verified within the using() scope by an ISession without altering the state of any other test data. e.g.
using (new TransactionScope())
{
    // This ISession is used to arrange dependencies and verify results directly.
    var session = NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting();

    // The repository under test deliberately uses a different ISession.
    var testRepository = new NHibernateGenericRepository<NewsItem>(
        NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting());

    // repository test code goes here

    // validate the results here directly with the ISession
}
// at this point the transaction is rolled back and we haven't changed the test data
This means that I don't have to modify the UnitOfWork implementation I'm using - the transaction is rolled back at a higher level.