I would like to introduce Apache Airflow to a medium-sized team (1-10 people).
How can people organize their work so they can develop, test, review, and deploy without stepping on each other's toes?
I tried using Docker images for testing, such as puckel's docker-airflow image. People can write their DAGs on their own machines, submit them for review, and then we deploy to production. The disadvantage is that the Docker image doesn't have access to all the services the production environment has (e.g. AWS API credentials), so tests are limited to relatively simple cases or require users to do a lot of configuration themselves. The environment also differs slightly in configuration (e.g. the dependencies have to be added by the user).
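For illustration, the kind of self-contained test that works without production credentials is a DAG integrity check; a minimal pytest sketch (the dags/ path is an assumption about the repo layout):

    # test_dag_integrity.py -- checks that every DAG in the repo imports cleanly.
    # Needs only apache-airflow installed; no AWS credentials or live services.
    from airflow.models import DagBag

    def test_dags_import_without_errors():
        # "dags/" is an assumption about where the team keeps its DAG files
        dag_bag = DagBag(dag_folder="dags/", include_examples=False)
        assert not dag_bag.import_errors, f"DAG import errors: {dag_bag.import_errors}"

Anything beyond this (tests that actually talk to AWS, for example) runs into exactly the configuration gap described above.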
Very often in enterprise applications, something doesn't work as expected and you need to debug and create a fix.
Obviously you can't test in production, as you might have to save something in order to debug it, and you don't want to be responsible for accidentally sending a $1M transaction!
With traditional applications, this is done by copying the database from production to a dev environment (perhaps redacting sensitive data) and reproducing and debugging the problem there.
In Corda you have multiple nodes involved; the nodes have specific keys, and the network has a truststore hierarchy.
What is the process to replicate the production structure and copy all the data from production to development in order to debug?
I think it depends on how complicated your setup is.
The easiest way to do this rigorously is within a MockNetwork during unit testing (this is the most common setup; example here: https://github.com/corda/samples-kotlin/blob/master/Advanced/obligation-cordapp/workflows/src/test/kotlin/net/corda/samples/obligation/flow/IOUSettleFlowTests.kt).
Something I like to do a lot is to set IntelliJ breakpoints in the flows / unit tests to be sure something works the way I expect.
Another option, depending on your use case, is the Corda Testnet: https://docs.corda.net/docs/corda-os/4.7/corda-testnet-intro.html
Another way to do this is to write a script that performs all the transactions you want while the nodes run locally on your machine, using the Corda shell on each local node and feeding the transactions in directly that way.
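As a rough sketch of that scripted approach: a Corda node exposes its shell over SSH when that is enabled in node.conf, so the script can be as simple as feeding shell commands to each local node. The ports, credentials, and the IOUFlow invocation below are hypothetical placeholders for whatever your nodes use:

    # feed_transactions.py -- drives flows through the Corda shell over SSH.
    # Assumes the shell's SSH port is enabled in each node's node.conf; the
    # ports, credentials, and flow below are hypothetical placeholders.
    import paramiko

    NODES = [("localhost", 2222), ("localhost", 2223)]  # one entry per local node

    def run_shell_command(host: str, port: int, command: str) -> str:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, port=port, username="user1", password="test")
        _stdin, stdout, _stderr = client.exec_command(command)
        output = stdout.read().decode()
        client.close()
        return output

    for host, port in NODES:
        # "flow start" is the shell command for kicking off a flow on a node
        print(run_shell_command(
            host, port,
            'flow start IOUFlow iouValue: 50, otherParty: "O=PartyB,L=New York,C=US"'))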
Copying data from production onto a local network is going to be hard, because you can't fake all of the transaction/state history without a lot of really painful editing of the tables on each node.
Context:
We have a WordPress/WooCommerce site with a wide range of custom-tailored features that solve a specific marketing need.
We want to have variants of this same site (around 80% of the code would be the same) for different domains hosted on completely different servers.
Question:
What would be the best approach to instantiate and maintain the clone sites?
Additional Details
We don't track WordPress core files in git.
We track only the plugins vital to the site in git; the rest are ignored.
The differences between sites would be mainly branding, but still both content- and code-related.
The idea is to be able to set up a new "clone" site quickly and still be able to migrate new features to it in the future.
Deployment Specs
We use Laravel Forge to provision AWS servers.
We use a Bash script to install all dependencies, download WP core, and restore a sample DB.
We use composer for dependency management.
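For illustration, a hypothetical composer.json along these lines, pulling the shared plugins from wpackagist.org so each clone can pin its own versions (all names and versions below are placeholders, not our actual file):

    {
        "repositories": [
            { "type": "composer", "url": "https://wpackagist.org" }
        ],
        "require": {
            "composer/installers": "^2.0",
            "wpackagist-plugin/woocommerce": "^8.0"
        },
        "extra": {
            "installer-paths": {
                "wp-content/plugins/{$name}/": ["type:wordpress-plugin"]
            }
        }
    }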
I am in charge of implementing QA processes and test automation for a project using a microservices architecture.
The project has one public API that makes some data available, so I will automate API tests. The tests will live in one repository. This part is clear to me; I did this before on other, monolithic projects, with one repo for API tests and possibly another repo for Selenium tests.
But here the whole product consists of many microservices that communicate via RESTful APIs and/or RabbitMQ queues. How would I go about automating tests for each of these individual services? Would the tests for each service live in a separate repo? Note: the services are written in Java or PHP; I will automate the tests with Python. It seems to me that I will end up with a lot of repos for tests/stubs/mocks.
What suggestions or good resources can the community offer? :)
- Keep unit and contract tests with the microservice implementation.
- Component tests make sense in the context of composite microservices, so keep them together.
- Keep the integration and E2E tests in a separate repo, grouped by use case.
For this kind of testing I like to use Pact. (I know you said Python, but I couldn't find anything similar in that space, so I hope you (or other people searching) will find this excellent Ruby gem useful.)
For testing from the outside in, you can just use the proxy component; hope this at least gives you some ideas.
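If you do stay in Python for the outside-in tests against the public API, they can be very plain; a minimal sketch with pytest and requests (the base URL and endpoint are hypothetical):

    # test_public_api.py -- outside-in test against a deployed service.
    # BASE_URL and the /items endpoint are placeholders for the real API.
    import requests

    BASE_URL = "https://api.example.com"

    def test_items_endpoint_returns_data():
        response = requests.get(f"{BASE_URL}/items", timeout=10)
        assert response.status_code == 200
        body = response.json()
        assert isinstance(body, list) and body, "expected a non-empty list of items"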
Give each microservice its own code repository, and add one for the cross-service end-to-end tests.
Inside a microservice's repository, keep everything that relates to that service, from code to tests to documentation and the pipeline:
root/
    app/
        source-code/
        unit-tests/ (also: integration-tests, component-tests)
    acceptance-tests/
    contract-tests/
Keep everything that your build step uses in one folder (here: app), probably with sub-folders to distinguish source code from unit tests, integration tests, and component tests.
Put tests that run in later stages of the delivery pipeline, such as acceptance tests and contract tests, in folders of their own. This keeps them visually separate. It also simplifies creating separate build/test steps for them, for example by including their own pom.xml files when using Maven.
If a developer changes a feature, they will need to change the tests at the same time to ensure the two fit together. Keeping code and tests in the same repository keeps the two in sync in a natural way.
meteor deploy myapp.meteor.com
When I run this command, my Meteor app is uploaded to Meteor's cloud server.
Is there any solution or repository for making my own Meteor cloud server?
meteor deploy mycloud.server.com myapp.mydomain.com
I know I can use my own domain with this command:
meteor deploy myapp.mydomain.com
But I want to run my own cloud service like Meteor does.
I know about https://github.com/arunoda/meteor-up, but that is a single-service solution.
It is not for one or more (clustered) servers running many services.
If there is no existing solution for this, I'll build one.
Galaxy is still not released for now; it should do exactly what you are looking for, i.e. let you deploy on your own server.
An alternative might be modulus.io, but it is still not the easy deployment we would like.
The simplest option I have found yet is still meteor-up. You can use it for deploying to several servers too. The point is that meteor-up expects a running Ubuntu (or Debian) machine, and you deploy to those machines. You still need to set up an oplog for MongoDB and a high-availability proxy (with sticky sessions) to forward to the right virtual machines...
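For reference, a classic meteor-up deployment is driven by a single mup.json; listing several hosts under servers deploys the app to each of them. A sketch with placeholder values:

    {
        "servers": [
            { "host": "10.0.0.1", "username": "ubuntu", "pem": "~/.ssh/id_rsa" },
            { "host": "10.0.0.2", "username": "ubuntu", "pem": "~/.ssh/id_rsa" }
        ],
        "setupMongo": false,
        "setupNode": true,
        "appName": "myapp",
        "app": "/path/to/app",
        "env": {
            "ROOT_URL": "http://myapp.mydomain.com",
            "MONGO_URL": "mongodb://your-replica-set/myapp"
        },
        "deployCheckWaitTime": 15
    }

setupMongo is false here because, as noted above, you want an external oplog-enabled replica set rather than the single bundled MongoDB.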
If only performance matters, you can build microservices and integrate them through the service discovery provided by meteorhacks:cluster. While this will help load-balance your app, it does not (yet?) provide a way to route the client according to the domain name (meaning you still need a reverse proxy to reach the right service from a given domain). Also, this package does not provide any way to deploy your app; it is just a convenient way to help manage and scale your services.
If you need a reliable solution right now for dockerizing Meteor, deploying it on clusters, and managing them, I would strongly advise looking at https://bulletproofmeteor.com. It is a very good source for building reliable Meteor apps with high availability. Note that not all chapters are free, but there is a whole chapter covering "Deploying Meteor Apps into a Kubernetes Cluster", which goes step by step through the process of setting up your server(s) to run your Meteor app in a PaaS way.
In Azure Websites we have a staging feature: we can deploy to a staging site, test it, fill all caches, and then swap production with staging.
Now how could I do this on a normal Windows server with IIS?
Possible Solution
One strategy I was thinking about is having a script that copies content from one folder to another.
But there can be file locks. Also, since this is not transactional, the websites are in a kind of invalid state for some time.
First problem:
I have an external load balancer, but it is externally hosted and unfortunately currently not able to handle this scenario.
Second problem:
As I want my scripts to always deploy to staging, I want a fixed name in IIS for the staging site to use in the build-server scripts. So I would also have to rename the sites.
Third problem:
The sites are synced between multiple servers for load balancing. If I rebuilt the bindings on a site (to keep the staging server consistent), I could get timing issues, because not all servers would point to the same folder at the same time.
Are there any extensions / best practices on how to do that?
You have multiple servers, so you are running a distributed system. It is impossible in principle to have an atomic release of the latest code version. Even if you made the load balancer atomically direct traffic to the new sites, some old requests would still be in flight. You need to be able to run both code versions at the same time for a short period. This capability is a requirement for your application. It is also handy to be able to roll back bad versions.
Given that requirement you can implement it like this:
Create a staging site in IIS.
Warm it up.
Swap the bindings and site names on all servers. This does not need to be atomic because, as explained above, it cannot be.
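As a sketch of that swap step, here is a small Python wrapper around the stock appcmd.exe that you would run on each server in turn (the site names and host names are hypothetical):

    # swap_bindings.py -- swaps host bindings between two IIS sites via appcmd.
    # "Production"/"Staging" and the host names are hypothetical; run this on
    # each server in turn (the swap cannot be atomic across servers anyway).
    import subprocess

    APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

    def set_bindings(site: str, bindings: str) -> None:
        # e.g. bindings = "http/*:80:www.example.com"
        subprocess.run(
            [APPCMD, "set", "site", f"/site.name:{site}", f"/bindings:{bindings}"],
            check=True)

    def swap(prod_binding: str, stage_binding: str) -> None:
        # Park Production on a temporary host name first so the two running
        # sites never hold the same binding, then complete the swap.
        set_bindings("Production", "http/*:80:parking.invalid")
        set_bindings("Staging", prod_binding)
        set_bindings("Production", stage_binding)

    if __name__ == "__main__":
        swap("http/*:80:www.example.com", "http/*:80:staging.example.com")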
As explained via Skype, you might like to have a look at "reverse proxy IIS"; the following article looks very promising:
http://weblogs.asp.net/owscott/creating-a-reverse-proxy-with-url-rewrite-for-iis
This way you can set up a public-facing "frontend" website that can easily be switched between two (or more) private/protected sites, even if they reside on the same machine. Furthermore, this would also allow you to have two public-facing URLs that are simply swapped depending on your requirements and deployment.
Just an idea... I haven't tested it in this scenario, but I'm running a reverse proxy on Apache publicly and serving a private IIS website through a VPN as the content.
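For illustration, the kind of rule the article builds, which goes in the frontend site's web.config and requires ARR with proxy mode enabled (the internal host name is a placeholder):

    <system.webServer>
        <rewrite>
            <rules>
                <rule name="ReverseProxy" stopProcessing="true">
                    <match url="(.*)" />
                    <action type="Rewrite" url="http://staging.internal:8080/{R:1}" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>

Switching production to staging then means editing only the target URL in this one rule instead of touching the sites' own bindings.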