I've written a .NET Core REST API which migrates/updates the database (using Entity Framework Core) in Startup.cs. Currently, only one instance is running in the production environment, but it seems to be recommended to run two instances in production.
What happens while executing the cf push command? Are both instances stopped automatically or do I need to execute cf stop?
In addition, how do I prevent both instances from updating the database?
I've read about the CF_INSTANCE_INDEX environment variable. Is it OK to only start the database migration when CF_INSTANCE_INDEX is 0? Or does Cloud Foundry provide the following mechanism: start the first instance and, once it is up and running, start the second instance?
What happens while executing the cf push command? Are both instances stopped automatically or do I need to execute cf stop?
Yes, your app will stop. The new code will stage (i.e. buildpacks run) and produce a droplet. Then the system will bring up all the requested instances using the new droplet.
In addition, how do I prevent both instances from updating the database? I've read about the CF_INSTANCE_INDEX environment variable. Is it OK to only start the database migration when CF_INSTANCE_INDEX is 0?
You can certainly do it that way. The instance number is guaranteed to be unique and the zeroth instance will always exist, so if you limit to the zeroth instance then it's guaranteed to only run once.
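As an illustration of that gating logic (in your case the equivalent check would live in Startup.cs; the wrapper script name and app DLL below are placeholders):
#!/bin/bash
# Only the zeroth instance runs the migration; every instance then starts the app.
if [ "${CF_INSTANCE_INDEX:-0}" = "0" ]; then
  ./run-migrations.sh   # placeholder for however you invoke the EF Core migration
fi
exec dotnet MyApi.dll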
Another option is to run your migration as a task (i.e. cf run-task). This runs in its own container, so it would only run once regardless of the number of instances you have. This SO post has some tips about running a migration as a task.
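With cf CLI v7+ syntax that looks something like this (the app name, task name, and migration command are placeholders):
# Run the migration once, in its own task container, after pushing the new code.
cf run-task my-api --command "./run-migrations.sh" --name migrate-db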
Or does Cloud Foundry provide the following mechanism: start the first instance and, once it is up and running, start the second instance?
It does, it's the --strategy=rolling flag for cf push.
See https://docs.cloudfoundry.org/devguide/deploy-apps/rolling-deploy.html
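For example (the app name is a placeholder):
# New instances are started and become healthy before the old ones are stopped.
cf push my-api --strategy rolling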
I'm not sure that this feature would work for ensuring your migration runs only once. According to the docs (See "How it works" section at the link above), your new and old containers could overlap for a short period of time. If that's the case, running the migration could potentially break the old instances. It'll be a short period of time, just until they get replaced with new instances, but maybe something to consider.
We have most, but not all, of our build artifacts in a Git repo (Bitbucket).
Our current build looks like this and takes 30+ minutes to build and deploy to Firebase; we would like to reduce the build time.
We are not using Google Cloud Build at the moment, but before heading down that path, I want to find out if that would even be fruitful.
We have all of the code cloned from the Git repo (Bitbucket) to a GCE VM.
Then 1 TB of static data (artifacts needed for the deploy) is copied into a directory under the Git repo area.
We do not want to check that 1 TB of data into the Git repo: it comes from a third party, it is rarely updated, and it would be far too heavy a directory to pull into developers' environments in their IDEs; there is no point in doing so.
We launch a build script (bash) on the GCE VM to build the code and deploy to Firebase; it takes about 30 minutes.
We want the builds to go faster, possibly by using Cloud Build.
With this:
a git repo
external files that need to remain in a stateful container, not copied over each time, due to the time it would take
how do we create a stateful container that only requires a Git update (git pull origin master) and then fires off a build/deploy to Firebase?
We want to avoid the traffic (and the resulting bill) that external build services would generate if the 1 TB of data, which stays the same every single time, had to be sent along with each Firebase deploy.
Cloud Run containers are not stateful. GCE VMs are stateful, but that requires keeping them up 24x7x365 so that any developer anywhere can run a build; a build may take only 30 minutes out of any given day and we don't know when that will be, so leaving a VM up 24x7x365 is mostly wasteful.
We want to avoid building a stateless container where the code is checked out fresh every time (a git pull origin master would do) and the 1 TB of artifacts has to be copied into the container on every run, which takes time.
We just want to do:
git pull origin master
Fire off the build as the next step in the script
Spin down the container and have it save its state for the next build, minimizing time on every run by keeping the artifacts from the previous 'git pull origin master' and preserving the 1 TB of files we copied to the container.
The ideal situation would be to have a container that is stateful, that spins down when not in use, and "spins up", or is made active for use when we need to do a build.
It would retain the previous git update (git pull origin master), and would retain all artifacts outside the git repo that we copy over. We also need shell access to the container (ssh, scp) etc.
A stateful 'Cloud Run' option would be ideal, but I don't know of such a thing (stateful containers on GCP that we can run and only be billed for runtime/compute time).
One solution is to use a VM for this. Add a startup script. In it:
git pull origin master
Fire off the build as the next step in the script
Then add a line that stops the compute instance itself (its name and zone are read from the metadata server; the zone endpoint returns a full projects/.../zones/... path, hence the cut):
gcloud compute instances stop $(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/name) --zone=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d/ -f4)
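Putting that together, a minimal startup-script sketch could look like this; the repository path and build script name are assumptions for illustration:
#!/bin/bash
# The repo is already cloned on the persistent disk; the 1 TB of static data sits alongside it.
cd /opt/build/my-repo                # placeholder path
git pull origin master               # refresh only the code, not the static data
./build-and-deploy-firebase.sh       # placeholder for your existing ~30 minute build/deploy script
# Stop this VM once the build is done so you only pay for build time.
gcloud compute instances stop "$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/name)" \
  --zone="$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d/ -f4)"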
By the way, each time you start your VM, it will run the startup script and shut down automatically. You keep your persistent disk, and thus your 1 TB of data, and you pay little because of the automatic stop.
If you have to wait for an external build, there are 2 solutions:
Either set a sleep timer in the startup script and shut down after it expires,
Or customize this tutorial: at the end of your build, publish a message to Pub/Sub, which triggers a function that stops your instance.
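For example, the last step of the build could publish to a topic that the stop-the-VM function subscribes to (the topic name here is just an assumption):
# Signal that the build has finished; a subscribing function can then stop the VM.
gcloud pubsub topics publish build-finished --message="build complete"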
EDITED to reply to comments
Here again, 2 solutions:
You can create a custom role with only the permissions required. You can see all the Compute Engine roles and permissions here. If you provide access to the console, I recommend list (to view the VMs), start, and stop. Otherwise, only start and/or stop if you write a script.
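As a sketch (the project ID and role ID are placeholders; keep only the permissions you actually need), such a custom role could be created with:
gcloud iam roles create buildVmOperator --project=my-project \
  --title="Build VM operator" \
  --permissions=compute.instances.list,compute.instances.start,compute.instances.stop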
You can create a private Cloud Function or Cloud Run service. Assign a service account as its identity, with enough of a role to start the VM (a predefined role may grant more permissions than required, which is not good practice; prefer least privilege with a custom role), and grant the cloudfunctions.invoker or run.invoker role to the user (depending on whether you use Cloud Functions or Cloud Run). This allows them to call this private endpoint and start the VM without any rights on the VM itself, only the right to perform an HTTP call.
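For example (the service/function name, region, and user are placeholders), granting only the right to call that private endpoint would look roughly like this:
# Cloud Run variant: allow a user to invoke a private "start-build-vm" service.
gcloud run services add-iam-policy-binding start-build-vm --region=us-central1 \
  --member="user:dev@example.com" --role="roles/run.invoker"
# Cloud Functions variant: the same idea for a private function.
gcloud functions add-iam-policy-binding start-build-vm \
  --member="user:dev@example.com" --role="roles/cloudfunctions.invoker"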
I am setting up Integration Broker messaging from HCM 9.2 to FSCM 9.2 using the PERSON_BASIC_FULLSYNC service operation (the delivered process) to sync data from HCM to FSCM. I have activated the service operation, handler, queue, and routing on both sides; however, when I run the Full Data Publish process, it runs to No Success with the following error:
Fetching array element 0: index is not in range 1 to 3.
(180,252) EOL_PUBLISH.PUBDTL.GBL.default.190 0-01-01.Step05.OnExecute PCPC:16088 Statement:266
I had initially run this process and it ran to Success; however, it did not publish any new data to PS_PERSONAL_DATA in FSCM, so I updated the service operation version in HCM from 'INTERNAL' to 'VERSION_1', as the corresponding service operation in FSCM only had the 'VERSION_1' version available. But after I changed the versions so they match and ran the process, it goes to No Success.
If I set the version of the service operation in HCM back to 'INTERNAL' and run the process, then it is successful, but no data gets published to PS_PERSONAL_DATA. Any thoughts on what I should look at?
Sounds like a service operation routing problem. Confirm the routing directions and ensure that any aliases that are set don't cause issues. The service operations on each side need to be the same.
One of my GitHub repositories is a resource for my pipeline. I have 3 parallel jobs in my Concourse pipeline which get triggered when there is any check-in to the GitHub repository. The other jobs in the pipeline run in sequence. I am having the issues below:
1) I want the pipeline to complete a full execution before a new run starts. I am using the pool resource to make sure one execution completes before a new run is triggered. Is there a better way to solve this?
2) If there are multiple check-ins while the pipeline is in progress, is there a way to execute the pipeline only on the last check-in? For example, the 1st instance of the pipeline is running, and by the time its execution completes there are 6 check-ins in the repository. Can the pipeline pick only the 6th version of the repo and skip the runs for the previous five check-ins?
Using the lock pool resource is almost the perfect option, but as you have rightly caught, there will be a trigger for each Git commit and jobs will start to queue.
It sounds like you want this pipeline to be serialised. Have you considered serial_groups? http://concourse-ci.org/single-page.html#job-serial-groups
After moving most of our BizTalk applications from a BizTalk 2009 to a BizTalk 2010 environment, we began the work of removing old applications and unused hosts. In this process we ended up with a zombie host instance.
This has resulted in the bts_CleanupDeadProcesses job starting to fail with the error “Executed as user: RH\sqladmin. Could not find stored procedure 'dbo.int_ProcessCleanup_ProcessLabusHost'. [SQLSTATE 42000] (Error 2812). The step failed.”
After looking at the CleanupDeadProcesses process, I found the zombie host instance in the BTMsgBox.ProcessHeartBeats table, with dtNextHeartbeatTime set to the time when the host was removed.
(I'm assuming that the Host Instance Processes don't exist in your services any longer, and that the SQL Agent job fails)
From looking at the source of the [dbo].[bts_CleanupDeadProcesses] job, it loops through the dbo.ProcessHeartbeats table with a cursor (btsProcessCurse, lol) looking for 'dead' heartbeats.
Each process instance has its own cleanup sproc, int_ProcessCleanup_[HostName], and a sproc for the heartbeat watchdog to call, viz. bts_ProcessHeartbeat_[HostName] (although FWIW the sproc calls it @ApplicationName), filtered by WHERE (s.dtNextHeartbeatTime < @dtCurrentTime).
It is thus tempting to just delete the record for your deleted / zombie host (or, if you aren't that brave, to simply update the dtNextHeartbeatTime on the heartbeat record for your dead host instance to sometime next century). Either way, the SQL Agent job should skip the dead instances.
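A minimal sketch of the 'update the heartbeat' option, assuming the standard BizTalkMsgBoxDb database name and the table/column names above (the server name is a placeholder; verify the affected rows with the SELECT, and take a backup, before running the UPDATE):
# Inspect the stale heartbeat row(s) first, then push the zombie's next heartbeat far into the future.
sqlcmd -S MYSQLSERVER -d BizTalkMsgBoxDb -Q "SELECT * FROM dbo.ProcessHeartbeats WHERE dtNextHeartbeatTime < GETDATE()"
sqlcmd -S MYSQLSERVER -d BizTalkMsgBoxDb -Q "UPDATE dbo.ProcessHeartbeats SET dtNextHeartbeatTime = '2100-01-01' WHERE dtNextHeartbeatTime < GETDATE()"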
An alternative could be to try and re-create the Host and Instances with the same name through the Admin Console, just to delete them (properly) again. This might however cause additional problems as BizTalk won't be able to create the 2 SPROCs above because of the undeleted objects.
Obviously, though, I wouldn't do this on your prod environment until you've confirmed it works with a trial run first.
It looks like someone else got stuck with a similar situation here
And there is also a good dive into the details of how the heartbeat mechanism works by XiaoDong Zhu here
Have you tried BTSTerminator? That works for one-off cleanups.
http://www.microsoft.com/en-us/download/details.aspx?id=2846
I have a question about how the build queue is configured in CC.NET.
I believe we have an issue: when trying to “force” build a scheduled project, the server tries to run several builds at the same time, and all of them fail except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we “force” start at about the same time, all build requests are placed into a build queue and executed one after another in the order they were placed, with no extra requests generated.
A “Build Failed” email is sent but the build was actually successful.
In short, the erroneous email is likely due to an error in the build server's scheduler/queue: it tries to run 2 builds instead of one when asked for a “forced” build, and as a result the first one is successful and the second one fails.
How can we correct/resolve this issue?
Thanks
Nilesh
To specify your projects' queue, you need to set the queue property like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example Q1) for all your projects, then you will have a single shared queue.
As for queuePriority, the projects not yet started in the queue are ordered by queuePriority; low queuePriority projects start first.
It's all described in the CC.NET documentation, which is currently offline due to a problem at SourceForge.