Skip Shopify Statistics/analytics Collection while running tests

I'm starting to code some in-browser automated tests for our Shopify store, and I noticed that I've inadvertently caused a massive traffic spike to our store during the time I was developing.
Is there a way to make a browser visit not count in Shopify analytics, like a "nostats" query param or something? I may eventually end up with dozens of tests running maybe a dozen times a day, and that'll make a significant difference to our analytics.
Right now I'm testing against a previewed theme deployed with themekit, so I'm not testing against the live theme.
I could create a dev store and copy over all our products/collections/etc, but I'd really rather test as close to the live store as possible. If that's stupid (or if there's a really easy way to make my dev store mirror my live store), let me know.

There's no way to disable Shopify analytics or stop collecting data in the manner you're describing. So you would definitely need to use a development store to run your tests.
There are a number of apps available for store data syncing/migration. That's an easy option, but it might be quite expensive, depending on your resources.
You can also create your own solution to sync the entities you need for testing. That's not as easy, but it's worthwhile if you want to apply this process to multiple Shopify projects. A rough sketch of that approach is below.
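For the DIY route, here's a minimal sketch in TypeScript (Node 18+, using the built-in fetch) that copies products from the live store into a dev store via the Shopify Admin REST API. The store handles, the token environment variables, and the API version are assumptions; a real sync would also need pagination and rate-limit handling.

```typescript
// Minimal sketch: copy products from the live store into a development store
// via the Shopify Admin REST API. Store handles, token env vars, and API
// version are assumptions -- adjust to your setup.
const API_VERSION = "2023-10";

async function fetchProducts(shop: string, token: string): Promise<any[]> {
  // GET .../products.json returns { products: [...] }
  const res = await fetch(
    `https://${shop}.myshopify.com/admin/api/${API_VERSION}/products.json?limit=250`,
    { headers: { "X-Shopify-Access-Token": token } }
  );
  if (!res.ok) throw new Error(`Fetch from ${shop} failed: ${res.status}`);
  return (await res.json()).products;
}

async function createProduct(shop: string, token: string, product: any): Promise<void> {
  // Strip IDs so the dev store assigns its own.
  const { id, admin_graphql_api_id, ...rest } = product;
  rest.variants = (rest.variants ?? []).map(({ id, product_id, ...v }: any) => v);
  const res = await fetch(
    `https://${shop}.myshopify.com/admin/api/${API_VERSION}/products.json`,
    {
      method: "POST",
      headers: {
        "X-Shopify-Access-Token": token,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ product: rest }),
    }
  );
  if (!res.ok) throw new Error(`Create on ${shop} failed: ${res.status}`);
}

async function main() {
  const products = await fetchProducts("my-live-store", process.env.LIVE_TOKEN!);
  for (const p of products) {
    await createProduct("my-dev-store", process.env.DEV_TOKEN!, p);
  }
  console.log(`Copied ${products.length} products.`);
}

main().catch(console.error);
```

The same pattern extends to collections, pages, and so on; the win is that you can re-run it whenever the live catalogue drifts from the dev store.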

Related

Use different data for production and develop Firebase sites

I have a CI/CD pipeline with Google Cloud Build triggers that deploy my code to different sites depending on which branch I push to. The develop site is a live test: the final check before I merge to master, which triggers a deploy of master to the production site.
Currently, both sites use the same Firebase Firestore database, and any document changed on the develop site will also be changed on the production site.
What I want to avoid is creating another firebase project to push the develop code to with a different database, because that means I need a separate set of credentials and would copy the same functions over to the new project every time I change them. That's not maintainable and is a lot of work.
What I would like is some way for the develop site to only have access to part of the firestore database, and the production site to have access to another part.
How do people do this? Is it even possible? Is there a better way? One alternative I can think of is using authentication and creating separate accounts for testing with different access permissions, but this seems like a workaround and not the ideal solution.
What you're trying to do sounds like a lot more hassle than using multiple projects, which is the documented and strongly preferred solution. Putting everything in one project is a huge anti-pattern in Firebase and Google Cloud, and it will cause you more problems in the long run, in addition to increasing the risk of catastrophic failure if you manage to misconfigure something in that one project.
It's perfectly maintainable to have multiple projects like this, if you apply some scripting to automate the work. This is very common, and I strongly suggest thinking through how this would work for you.
Your CI/CD pipeline could definitely check out your updates from source control and deploy them to whatever other project environments you have set up. It's very common to manage different credentials and configurations for use in CI/CD; one way it might look is sketched below.
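As an illustration, here's a minimal deploy-step sketch in TypeScript: it maps the branch being built to a Firebase project and shells out to the Firebase CLI (`firebase deploy --project <id>` is a standard flag). The project IDs and the BRANCH_NAME variable are assumptions; Cloud Build can pass the branch name in as a substitution.

```typescript
// Minimal CI deploy sketch: pick a Firebase project based on the branch being
// built, then shell out to the Firebase CLI. The project IDs and the
// BRANCH_NAME env var are assumptions about your setup.
import { execSync } from "node:child_process";

const projectByBranch: Record<string, string> = {
  master: "my-app-prod",   // hypothetical production project ID
  develop: "my-app-dev",   // hypothetical development project ID
};

const branch = process.env.BRANCH_NAME;
const project = branch ? projectByBranch[branch] : undefined;
if (!project) throw new Error(`No Firebase project mapped for branch "${branch}"`);

// --project targets a specific project, so the same functions/hosting code is
// deployed to whichever environment matches the branch.
execSync(`firebase deploy --project ${project} --non-interactive`, {
  stdio: "inherit",
});
```

Because the same code is deployed everywhere, the functions stay in sync across projects automatically; only the target project changes per branch.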

Accounts-base package on Meteor

I have to deploy my app now, and I was wondering to myself whether it's a good idea to bring apps to production with the accounts-base package. Wouldn't it be better to manage this with our own methods?
I think trying to manage it all yourself is a surefire way to run into problems in the future. The concept of accounts seems simple, but the implementation never is, especially when you start sprinkling in all the extra complexities (registration, resetting passwords, validation emails, 3rd party authentication, role based authorization, complete integration with your app, etc.).
At the end of the day, you don't gain much by spending days (or more) implementing and testing that stuff (hoping that you got everything right) when you could be spending that time building the next big app. I'd much rather use tried and true packages that have been verified by thousands of users across thousands of apps than try to roll my own.
It depends on your purposes and preferences. Since accounts-base seems to fit well in my projects, I recommend getting the most out of it rather than wasting your time reinventing the wheel.
However, within the scope of Meteor, if you want to make changes to the package, you can clone it into your packages directory and modify it your way.
The solution is to create as many fields as I want, then read them in onCreateUser and save them to my database; see the sketch after the link below.
More info: http://docs.meteor.com/api/accounts-multi.html#AccountsServer-onCreateUser
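For reference, a minimal sketch of that approach. The extra field names here (plan, referral) are hypothetical; the options object is whatever you pass to Accounts.createUser:

```typescript
// Minimal Accounts.onCreateUser sketch: copy custom fields from the options
// passed to Accounts.createUser onto the user document before insertion.
import { Accounts } from "meteor/accounts-base";

Accounts.onCreateUser((options: any, user: any) => {
  // Keep the default profile behaviour, since defining this hook replaces it.
  if (options.profile) {
    user.profile = options.profile;
  }
  // Persist any extra fields sent from the registration form (hypothetical names).
  user.plan = options.plan ?? "free";
  user.referral = options.referral ?? null;
  // Whatever this function returns is what gets saved to the users collection.
  return user;
});
```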

Selective Continuous Integration with Git

My Django project's team is looking to have the designer's CSS in a central place, preferably on the production server (so that there's one "truth" to the current design, a model he claims that he's worked with in the past). Assuming that this is even a good practice, it would mean setting up Git to deploy the CSS in a Continuous Integration (CI) manner to production.
However, I would want to restrict Git somehow for the designer so that he doesn't accidentally update any files other than CSS or HTML. Python and Django files would be updated by developers, who would be deploying in a more traditional manner: working in their own branches and only having a human build manager merging everything into master when tested and ready.
Part of the reason that we want the designer to be able to deploy the CSS to a server is to avoid setting up the Django site locally on his laptop (he's not so technical outside of CSS, HTML, and Git).
Is this setup even a good idea? If not, what's the proper alternative?
Assuming that we set up a CI config off of the master branch, and allow the CSS to be pushed to master, can I even restrict the designer's ability to modify and check in non-CSS/HTML files? If so, how?
Is this setup even a good idea? If not, what's the proper alternative?
I have some reservations. It sounds like your designer is going to be the only person pushing changes to production without any gates: no code review, no tests, etc. Continuous integration is great, but a sane process includes safeties that prevent bad deploys. Since the rest of the team is following a different process, you'll end up managing two different pipelines. That's a waste of effort, and inevitably one of them (probably the designer's) falls apart due to lack of attention.
The alternative is to put everyone on the same process. Teach the designer how to run the application locally, or build a harness that makes it easier. Unless your site is entirely static, how can they even see what their changes look like without that? Maybe it's more work to train them up, but it's an excellent opportunity for personal growth.
Assuming that we set up a CI config off of the master branch, and allow the CSS to be pushed to master, can I even restrict the designer's ability to modify and check in non-CSS/HTML files? If so, how?
If you go this route, you can use Git hooks to restrict what the designer is allowed to commit. You can either put a pre-commit hook on their client or, if you control the server, a pre-receive hook that runs for only the designer's user. Either one can look at the committed files and block the commit/push if any are not CSS or HTML. There's a pre-commit framework called Overcommit that might be helpful to you. If you're using a code review tool, most have places you can hook in a bot to leave a comment or block the merge when they've modified a file they shouldn't have.
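For instance, a client-side hook might look like the following sketch (written for Node/TypeScript; compile it to JS, save it as .git/hooks/pre-commit on the designer's machine, and make it executable; the allowed-extensions list is an assumption):

```typescript
#!/usr/bin/env node
// Minimal pre-commit hook sketch: block the commit if any staged file is not
// CSS or HTML. The allowed list is an assumption -- adjust as needed.
import { execSync } from "node:child_process";

const allowed = [".css", ".html", ".htm"];

// List files staged for this commit (added, copied, or modified).
const staged = execSync("git diff --cached --name-only --diff-filter=ACM")
  .toString()
  .split("\n")
  .filter(Boolean);

const blocked = staged.filter(
  (file) => !allowed.some((ext) => file.toLowerCase().endsWith(ext))
);

if (blocked.length > 0) {
  console.error("Commit blocked; only CSS/HTML files may be changed here:");
  blocked.forEach((file) => console.error(`  ${file}`));
  process.exit(1); // a non-zero exit aborts the commit
}
```

The same filename check works verbatim in a server-side pre-receive hook, which is harder for the designer to bypass than a client-side hook.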
Another option here is to trust your coworker. Presumably they were hired because they're effective and useful, so you can save a lot of effort building up restrictions if instead everyone's clear on what they're supposed to be doing and generally doesn't screw it up.

Deploying changes on a live Drupal site

I really like Drupal somehow, but what disturbs me most is that I can't figure out a clear way of handling deployment. Drupal stores a lot of stuff inside the database (views, CCK, workflow, triggers, etc.) that needs to be updated.
I've seen some modules that could be used for this task (e.g. Features), and I'm not sure if they are sufficient. Yet they are only for Drupal 6, and I currently have to work on a Drupal 5 site where upgrading is not yet an option.
Any ideas?
This is a weakness. Drupal doesn't have the developer tools built in that make development and deployment easy the way Rails does (for example). One problem is that Drupal isn't natively aware of its environment. Secondly, there are too many different methods and modules that require special care. It can get very confusing. But things are getting better with drush and drush make.
I'm assuming here that you have a development environment on your local machine and a live or staging server you upload to.
The first thing you have to do is work out how to get your database fixture and your code back and forth between your server and your development environment very quickly. You need to make this procedure as painless as possible so you can keep different versions of your site in sync without much effort. This will mean you should have to manage less change every time you deploy. Hopefully...
Moving the database around isn't too hard. You could use phpMyAdmin or mysqldump, but the Backup and Migrate module is my favorite tool. If you'd rather script the dump-and-load step yourself, it might look like the sketch below.
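Here's a rough sketch of scripting that step (the host, database names, and SSH target are assumptions, and it expects MySQL credentials to come from ~/.my.cnf; drush sql-sync does much the same thing):

```typescript
// Rough sketch: pull the live Drupal database down to a local copy by running
// mysqldump on the server over SSH, then loading the dump locally.
// Host and database names are assumptions; credentials are expected to come
// from ~/.my.cnf on each machine.
import { execSync } from "node:child_process";

const remote = "deploy@live.example.com"; // hypothetical SSH target
const liveDb = "drupal_live";
const localDb = "drupal_local";

// Dump the remote database into a local file (execSync runs in a shell, so
// the redirection works as it would on the command line).
execSync(`ssh ${remote} "mysqldump --single-transaction ${liveDb}" > live.sql`);

// Recreate the local database and load the dump into it.
execSync(`mysql -e "DROP DATABASE IF EXISTS ${localDb}; CREATE DATABASE ${localDb}"`);
execSync(`mysql ${localDb} < live.sql`);
console.log(`Refreshed ${localDb} from ${remote}:${liveDb}`);
```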
Uploading code from your local repository to your site can be done in a few ways. If you use a version control system like Git, you can commit on your local machine and check out again on the staging server. There are also special deployment tools like Capistrano you should take a look at (if you know this stuff already, it may benefit others to read). If you're using FTP, you should probably try something different.
If you're working with a site that is still in production, you can afford to make small incremental changes to your local site, then repeat them on the live site and download the new version of the database when your changes are in place. This does mean you handle the database twice, but it can be a safe way of doing things. It keeps both your databases closer to each other and minimises risk.
You can also export Views backups to your server, either in your code or by importing them into your live site. There is a hack to get around deploying CCK changes here: http://www.tinpixel.com/node/53 . It works OK but cannot truly manage changes like rollbacks. (Respect to the guy who wrote that.)
You can also use hook_update_N to capture changes and then run update.php to apply them. I worked on a D5 site with dozens of developers and this was the only way to keep things moving forward. This may be a good option if your site is live or if you need all database schema changes captured in a version control system (so you can roll back).
Also: take a look at drush and drush make. These tools can be of great benefit. I can't remember how much support there is for D5.
One final method of dealing with this is not to use CCK or Views (and to rely on hook updates instead). But this is really only suitable for enterprise sites where you have big developer resources. This may seem like a strange suggestion, but it can negate this whole problem completely.
Sorry I could not give you a clear answer; one does not exist yet. You'll end up finding your own rhythm once you get into it. Just keep backups of your database so you can roll back to them easily enough.

Should we have separate database instance for each developer?

What is the best way to develop a database-based application? We can take one of two approaches.
One common database for all the developers.
A separate database for each developer.
What are the pros and cons of each? And which one is the better way?
Edit: More than one developer is supposed to update the database, and we already have SQL Server Express 2005 on each developer's machine.
Edit: Most of us are suggesting a common database. However, suppose one of the devs has modified the code and the database schema. He has not committed the code changes, but the schema changes have gone to the common database. Won't that possibly break the other developers' code?
Both -
I like a single database that changes are tested on before going live, or going to a 'formal' test environment. This is your developers' sanity check; it stays up to date with the live system, and it makes sure they always consider each other's changes. The rule should be that changes don't go on here if they might break something else.
A database per developer is great (even essential) when more than one developer is making updates. It allows them all the development flexibility they want without breaking things for other developers.
The key is to have a process for moving database changes from development through to your live system, and stick to your process.
Shared database
Simpler
Fewer cases of "It works on my machine".
Forces integration
Issues are found quickly (fail fast)
Individual databases
Never affects other developers, but this is also a bad thing when it comes to continuous integration
We use a shared development database and it works out nicely. Our schema rarely changes in a way that makes it backwards incompatible, but occasionally a design change will occur before we go live, and we simply ask the other developers to update.
We do have separate development application (web) servers, but they share the same database. Our developers do have the option to use their own database, as they know how to set this up, and will do that on occasion, but only temporarily. The norm, for us, is to share the database.
Thought I'd throw this out there, but why not let every developer host their own instance of SQL Server Developer on their desktops and then have a shared server for each of the other environments (development, QA, and prod)? I think even the basic MSDN that comes with Visual Studio Pro (if you opt for it) includes a license for SQL Server Developer.
The developer can work on their desktop without impacting the others and then you can have them move the code to the next shared environment as you see fit (at will, with daily/weekly builds, etc.).
EDIT:
I should add that the desktop instance allows developers to do things that the DBAs often restrict on shared environments. This includes database creation, backup/restore, Profiler, etc. These things are not essential, but they allow the developer to become so much more productive while reducing the demands they make against your DBAs.
The shared environment is completely necessary for testing - I would not recommend going from desktop to production. But you can add so much by allowing the developers to have 100% control over a given database environment (including isolation from others) with a relatively minor cost.
Depends on your development, testing and maintenance cycles. Also on the size and location of the development team (and of course organization). If you support several versions of the database you might need even more environments.
In real world I found the following approach rather satisfying:
single central database/application for testing purposes, gets all the changes by various developers periodically merged into it
local copies for development (so you are free to drop and reload the whole database)
upgrade scripts are maintained for any changes to the schema, auxiliary data, and sample data sets (a minimal sketch of such a runner follows this list)
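As a minimal sketch of such an upgrade-script runner (using SQLite via the better-sqlite3 package purely for illustration; the migrations directory layout and version-table name are assumptions):

```typescript
// Minimal sketch of an upgrade-script runner: apply numbered .sql files in
// order and record the applied versions in a schema_version table. SQLite is
// used only for illustration; the same pattern works against any database.
import Database from "better-sqlite3";
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const db = new Database("dev.sqlite");
db.exec("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)");

// Current version defaults to 0 on a fresh database.
const row = db.prepare("SELECT MAX(version) AS v FROM schema_version").get() as { v: number | null };
const current = row.v ?? 0;

// Migration files named like 001_add_orders.sql, 002_add_index.sql, ...
const migrations = readdirSync("migrations")
  .filter((f) => f.endsWith(".sql"))
  .sort();

for (const file of migrations) {
  const version = parseInt(file.split("_")[0], 10);
  if (version <= current) continue; // already applied
  db.exec(readFileSync(join("migrations", file), "utf8"));
  db.prepare("INSERT INTO schema_version (version) VALUES (?)").run(version);
  console.log(`Applied ${file}`);
}
```

Because the scripts live in source control alongside the code, dropping and rebuilding a local copy is always just a matter of re-running the runner.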
Here are some further points:
If two developers (two teams) are working on changes that can affect each other then they should complete their tasks independently and then integrate/merge and test. For this it is much better to have separate development environments (unless they have to work together in which case I consider them to be a part of the same team; still they can work on their own copies of the database and share it if necessary)
If they work on the changes that do not influence each other they could work on the main server. Or on their own local copies of the database.
So, developing on the local copy has all the benefits with no risk in a general case (when you support multiple versions of the system and maintain upgrade scripts anyway).
Still it is great if you can share test cases so ability to dump/restore the database easily and quickly is a big plus.
EDIT:
All of the above assume that having a copy on the local machine of the whole system for testing purposes is feasible (size, performance, licenses, etc).
I would opt for solution #1: One common database for all the developers.
Pros
Less expensive for the infrastructure;
Only one dump is required when it's time to refresh the development database;
Everyone develops with the same data, so it closely represents the production environment;
Cons
If one developer performs a bad operation, it could impact a large number of other developers.
As for solution #2: One independent database for each of the developers:
Pros
This could be useful for new features developments, when development requires isolation;
Cons
More expensive for the company (infrastructure, licences...);
Multiplication of problems caused by overly isolated development environments (works in the developer's environment, but isn't integrated);
Multiplication of dumps of the same production copy made by the DBAs.
Considering the above, I would recommend, depending on your company size:
One database for development;
One database for testing the integration;
One database for acceptance tests;
One for new feature development that will perhaps require integration tests.
If your company doesn't require integration tests, then go with acceptance tests; this step is crucial before going to production.
One per developer plus a continuous integration and build server to run unit and integration tests. That gives you the best of both worlds.
Having all developers modify a single dev database quickly becomes less productive once the amount of database change reaches a certain level, because it forces a developer to deploy changes to the shared database before they are ready to check in, which means other parts of the code line may break unnecessarily.
Simple answer:
Have one development database, and if the developers want their own, they can just run their own instance on their own machines. Just be sure to test/publish on the shared.
We do both:
We use code generation where I'm at and our database is generated as well. So we have an instance on each developer's box where the database is generated. Then we use the scripts that are generated to apply the changes to a central test database. If that goes well we apply the changes to the production database during a release.
What's nice with this approach is that when our "source of truth" is checked in to source control, all the database changes are automatically distributed to the other developers when they rebase and regenerate. It works well for us.
The best way is a single database on the Test/QA server and one database (probably on the developer's local computer) for each developer (so 10 developers work with 10 + 1 databases).
This is the same approach as for general development: each developer has their own copy of the source code on their local machine.
Also, the multiple-database approach simplifies keeping the database schema in version control systems. We are keeping database creation scripts in SVN.
We are using the approach described here:
http://www.sqlaccessories.com/Howto/Version_Control.aspx
You might also want to look at Refactoring Databases. Aside from discussing database changes, the authors include discussion of going from development to production in a way that reduces risk.
Why on earth would you want a separate database for each developer?
Have one common database for all; that way, the table structure is consistent and the SQL statements are as well.
The biggest problems with developers having their own databases are:
First, it is unlikely to be the size of the real production database (if you take all the databases we need to work with here, they would take up several hundred gigabytes of space; I don't have that available on my machine). This causes bad code to be written that will never work on a large database for performance reasons. SQL code should never be written against a data set significantly smaller than the one on prod.
Second, developers who use their own database create problems when they spend a long time developing something and then find out only after they merge with a real database that it affects something else. You find this stuff much faster when you share the environment, so in the end there is less wasted development time.
Third, developers working on related things need to know about the changes you are making, since those will affect their changes.
When you know you are going to affect others, I think you tend to be more careful about what you do, which is a plus in my book.
Now, the shared database server should have what we call a scratch database: a place where people can create and test table changes. If they are doing something that might need to drop and recreate a table (which should be a rare case!), they can test the process first by copying the table to the scratch database and running their process there, then make the change on the real database when they are sure it works. We also often copy a backup of a table to the scratch database before testing a particular change, so we can easily recreate the old data if it goes bad.
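To illustrate that backup step, here's a minimal sketch using the mssql npm package against SQL Server (the server, database, and table names are assumptions):

```typescript
// Minimal sketch: copy a table into the scratch database before testing a
// risky change, so the old data can be recreated if things go bad.
// Server, database, and table names are assumptions.
import sql from "mssql";

async function backupToScratch(table: string): Promise<void> {
  const pool = await sql.connect({
    server: "dev-sql-server",        // hypothetical shared dev server
    database: "app_dev",
    user: "dev_user",
    password: process.env.DB_PASSWORD!,
    options: { trustServerCertificate: true },
  });

  // SELECT ... INTO creates a new table (schema + data) in the scratch database.
  await pool
    .request()
    .query(`SELECT * INTO scratch.dbo.${table}_backup FROM app_dev.dbo.${table}`);
  console.log(`Backed up ${table} to scratch.dbo.${table}_backup`);
  await pool.close();
}

backupToScratch("orders").catch(console.error);
```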
I see no advantages at all to using individual databases.
