I am performing a database restore as part of our TFS 2010 Team Build. Since a number of databases are being restored, I am using a batch file which is invoked via the InvokeProcess activity.
I have a number of issues that I am uncertain about:
1. Does TFS wait for all the commands in the batch file to complete, or does it move to the next activity as soon as it kicks off the InvokeProcess?
2. Is there a way to have the build process wait for successful completion of the batch command?
I am using it as follows:
The FileName property of InvokeProcess is set to "c:\windows\system32\cmd.exe".
The Arguments property is set to the full path of my batch file.
Yes, the InvokeProcess activity will wait for the external command to finish.
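One detail worth double-checking: cmd.exe only runs a script and then returns its exit code when it is given the /c switch, and that exit code is what InvokeProcess exposes through its Result property. A rough sketch of the activity settings (the variable name ExitCode and the batch file path are placeholders, not taken from your build definition):

    FileName:  c:\windows\system32\cmd.exe
    Arguments: /c "c:\builds\RestoreDatabases.bat"
    Result:    ExitCode   (an Int32 workflow variable)

Following the InvokeProcess with an If activity on ExitCode <> 0 (throwing an exception or using WriteBuildError) lets the build stop or flag a failure when a restore goes wrong. Inside the batch file, make sure failures propagate a non-zero code, for example by ending with exit /b %ERRORLEVEL%.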
One of my GitHub repositories is a resource for my pipeline. I have 3 parallel jobs in my Concourse pipeline which get triggered when there is any check-in to the GitHub repository. The other jobs in the pipeline run in sequence. I am having the issues below:
1) I want the pipeline to complete a full run before a new run starts. I am using a pool resource to make sure one execution completes before the next one is triggered. Is there a better way to do this?
2) If there are multiple check-ins while the pipeline is in progress, is there a way to run the pipeline only on the last check-in? For example, the 1st instance of the pipeline is running and, by the time it completes, there have been 6 check-ins in the repository. Can the pipeline pick up only the 6th version of the repo and discard the runs for the previous five check-ins?
Using the lock/pool resource is almost the perfect option, but as you have rightly caught, there will be a trigger for each git commit and jobs will start to queue.
It sounds like you want this pipeline to be serialised. Have you considered serial_groups? http://concourse-ci.org/single-page.html#job-serial-groups
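A minimal sketch of the syntax (job, resource and group names below are placeholders); note that jobs sharing a serial group never run concurrently, so this trades the parallelism of your three jobs for the guarantee that runs do not overlap:

    jobs:
    - name: job-one
      serial_groups: [repo-lock]
      plan:
      - get: app-repo
        trigger: true
      - task: run-job-one
        file: app-repo/ci/job-one.yml
    - name: job-two
      serial_groups: [repo-lock]
      plan:
      - get: app-repo
        trigger: true
      - task: run-job-two
        file: app-repo/ci/job-two.yml

With every job of the pipeline in the same serial group, one run effectively drains before the next begins, without needing the pool resource.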
When I changed the start time of a coordinator job in job.properties in Oozie, the job did not pick up the changed time; instead it keeps running at the old scheduled time.
Old job.properties:
startMinute=08
startTime=${startDate}T${startHour}:${startMinute}Z
New job.properties:
startMinute=07
startTime=${startDate}T${startHour}:${startMinute}Z
The job is not running at the changed time (the 7th minute); it is still running at the 8th minute of every hour.
Can you please let me know how I can make the job pick up the updated properties (the changed timing) without restarting or killing the job?
You can't really change the timing of the coordinator via any method provided by Oozie (v3.3.2). When you submit a job, the properties are stored in the database, whereas the actual workflow is in HDFS.
Every time the coordinator runs, the workflow must be present at the path specified in the properties at submission time, but the properties file itself is not needed. What I mean to imply is that the properties file does not come into the picture after the job has been submitted.
One hack is to update the time directly in the database with an SQL query, but I am not sure about the implications of that; the property might become inconsistent across the database.
You have to kill the job and resubmit a new one.
Note: Oozie does provide a way to change the concurrency, endtime and pausetime, as specified in the official docs.
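Those changes go through the Oozie CLI rather than job.properties; roughly like this (the coordinator job id, server URL and values here are placeholders):

    oozie job -oozie http://oozie-host:11000/oozie \
        -change 0000001-140319155013123-oozie-oozi-C \
        -value endtime=2014-12-31T00:00Z\;concurrency=2\;pausetime=2014-10-05T05:00Z

The start time is not one of the changeable attributes, which is why killing and resubmitting the coordinator is the clean option.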
I am working on a project which batches 834 records into a file.
I set up the batching trigger so that when the record count reaches a certain number, a batch file is released. But I also want to release a batch even when the record count has not been reached (for example, every night, release all queued records as a final file).
I know this can be done by clicking the Override button in the Batch Configuration window, but it needs to happen automatically.
So, basically, my question is: what does BizTalk do when I click the Override button? Does BizTalk provide any way to let me do that programmatically?
I must say I have not tried sending a control message to a batch that is set to release per record count; if you know this works, please let me know.
You're almost there, and completing the process isn't that difficult.
Leave the Batch Configuration at the record count as it is.
Then set up a process where an External Release trigger is sent at the appropriate time. A Windows Scheduled Task is a viable option: it can copy a file to a File Receive Location.
This article describes how to create the trigger message: http://msdn.microsoft.com/en-us/library/bb246108.aspx
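As a concrete (hypothetical) sketch of the scheduling side, a task like the one below could copy a pre-built trigger/control message file into the folder behind a File Receive Location every night at 23:00. The task name, file name and paths are made up, and the content of OverrideTrigger.xml is what the linked article walks you through building:

    schtasks /Create /TN "ReleaseEdiBatch" /SC DAILY /ST 23:00 ^
        /TR "cmd /c copy C:\BizTalk\Triggers\OverrideTrigger.xml C:\BizTalk\Receive\BatchControl\"

Once the control message is received, the batch it targets is released without waiting for the record count, which is essentially what pressing the Override button does interactively.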
I'm running batch Java on an IBM mainframe under JZOS. The job creates 0-6 ".txt" outputs depending upon what it finds in the database. Then I need to convert those files from Unix to MVS (EBCDIC), and I'm using the OCOPY command running under IKJEFT01. However, when a particular output is not created, I get a JCL error and the job ends. I'd like to check for the presence or absence of each file name and set a condition code to control whether the IKJEFT01 steps are executed, but I don't know what to use that will access the Unix file pathnames.
I have resolved this issue by writing a COBOL program to check the converted MVS files and set return codes to control the execution of subsequent JCL steps. The completed job is now undergoing user acceptance testing. Perhaps it sounds like a kludge, but it does work and I'm happy to share this solution.
The simplest way to do this in JCL is to use BPXBATCH as follows:
//EXIST EXEC PGM=BPXBATCH,
// PARM='pgm /bin/cat /full/path/to/USS/file.txt'
//*
// IF (EXIST.RC = 0) THEN
//* do whatever you need to
// ENDIF
If the file exists, the step ends with CC 0 and the IF succeeds. If the file does not exist, you get a non-zero CC (256, I believe), and the IF fails.
Since there is no //STDOUT DD statement, there's no output written to JES.
The only drawback is that it is another job step, and if you have a lot of procs (like a compile/assemble job), you can run into the 255 step limit.
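Applied to the original question, the same pattern can simply be repeated once per possible output, with each conversion wrapped in its own IF/ENDIF. A sketch with made-up step names and paths (the IKJEFT01/OCOPY details are left out):

    //CHK1     EXEC PGM=BPXBATCH,
    // PARM='pgm /bin/cat /u/myuser/output1.txt'
    //*
    // IF (CHK1.RC = 0) THEN
    //*   existing IKJEFT01/OCOPY step for output1.txt goes here
    // ENDIF
    //*
    //* repeat CHK2 ... CHK6 for the other possible outputs

Six checks plus six conversions still leaves plenty of headroom under the step limit, but it is worth counting if the job already has many steps.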
I have a question on how the build queue is configured in CC.NET.
I believe we have an issue: when trying to “force” build a scheduled project, the server tries to run several builds at the same time and fails most of them, except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we “force” start at about the same time, all build requests are placed into a build queue and executed one after another in the order they were placed, with no extra requests generated.
A “Build Failed” email is sent even though the build was actually successful.
In short, the erroneous email is likely due to the build server’s scheduler/queue trying to run two builds instead of one when asked for a “forced” build; as a result the first one succeeds and the second one fails.
How can I correct/resolve this issue?
To specify your projects' queue you need to set the queue attribute, like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example Q1) for all your projects, then you will have a single shared queue.
As for queuePriority, the projects in the queue (not yet started) are ordered by queuePriority; projects with a low queuePriority start first.
It's all described in the CC.NET documentation, which is currently offline due to a problem at SourceForge.
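Putting the two attributes together, a minimal sketch of ccnet.config (project and queue names are only examples) looks something like this:

    <cruisecontrol>
      <project name="MyFirstProject" queue="Q1" queuePriority="1">
        <!-- triggers, sourcecontrol, tasks ... -->
      </project>
      <project name="MySecondProject" queue="Q1" queuePriority="2">
        <!-- triggers, sourcecontrol, tasks ... -->
      </project>
    </cruisecontrol>

With every project on Q1, a forced build that arrives while another build is running is queued and starts only after the running build finishes, rather than executing in parallel.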