Add time & increment 1 sec for every file name in Control-M

I am trying to append the current time to the file name. The one catch is that the FTP job I am using transfers multiple files, and all the files have the same name except for the time that we append to it. Could anyone tell me how I can add one second/minute to the %%TIME param so that I can pass it to my file name?

I am assuming that you're using the AFT/MFT module. If so, you can use [T] to get the timestamp and then [C#] to get a counter number.
[N]__[T]_[C3].[E] -> this will give you something like MyFileName__235703_001.txt
If, however, you want to stick with using %%TIME you can create your own local variable based on %%TIME and then use the "%%PLUS 1" numeric expression to create a modified variable.
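For example, a minimal AutoEdit sketch (the variable names here are illustrative):
%%FILETIME = %%TIME %%PLUS 1
%%DEST_FILE = MyFileName__%%FILETIME..txt
One caveat: %%PLUS does plain numeric arithmetic on the HHMMSS value, so it will not carry across second/minute boundaries (e.g. 235959 %%PLUS 1 gives 235960, not 000000). And if I remember the AutoEdit rules right, the double dot before the extension is needed because the first dot only terminates the variable name.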

Related

PeopleCode to load from CSV file and split 1 field into multiple columns

I am not familiar with Application Engine or PeopleCode but inherited this project when someone left. Seems simple but I'm not sure how to approach it.
I have to load a CSV file that has 5 fields. The last field has multiple values separated by a comma and it is qualified with quotes.
file example:
ID , YEAR, VALUE1 , VALUE2, CODE
87778, 2022, processed, none , 100,40
93332, 2022, processed, none , 60
76633, 2022, error , none , 55,35,9
I have created a File Layout definition and set the qualifier and I can load the file into a staging table but now I want to split the last column (CODE) into individual codes.
I have created 2 PeopleTools Record definitions with a parent/child relationship:
parent Record definition with ID,YEAR,VALUE1,VALUE2, and
child Record definition with ID,YEAR,CODE
I have found that I can use the PeopleCode split function to break the CODE column out into an array containing each value in an element. I'm not sure what the best way to structure the program is though.
Is the staging table necessary?
Or can I use the split function as I read the CSV file in and update the parent/child tables?
Or do I need to keep the staging table, read out the fields for the parent record and move them to the permanent table, and then do the same for the child after using the split function and looping through the array?
Just looking for some guidance so my first AE project is not a mess.
IMO, there are always multiple ways to achieve the same thing (especially in AE); we choose one based on our requirements and efficiency.
Regarding the staging table: in your case you can skip it, unless you expect to load a huge data set every time or want to do parallel processing. In other words, keep a staging table if loading takes a long time and you don't want to risk it failing due to other errors.
You can even achieve this whole thing in one PeopleCode action without a staging table.
Alternatively:
Load the data into the staging table and commit.
Loop through the data from the staging table in AE (holding the current row in the state record).
Do the transformation as required in a PeopleCode action (see the sketch after this list).
Insert the data into the necessary tables.
Update a status field in the staging table; this may come in handy for any analysis or issue in production.
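A minimal PeopleCode sketch of that transformation step (the record and field names are illustrative, not your actual definitions, and &stgRec is assumed to hold the current staging row):
Local array of string &codes;
Local Record &child;
Local integer &i;

/* CODE arrives as one comma-separated string, e.g. "100,40" */
&codes = Split(&stgRec.CODE.Value, ",");
For &i = 1 To &codes.Len
   &child = CreateRecord(Record.CHILD_TBL);
   &child.ID.Value = &stgRec.ID.Value;
   &child.YEAR.Value = &stgRec.YEAR.Value;
   &child.CODE.Value = &codes[&i];
   &child.Insert();
End-For;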

Export unique JSON values from two files

I am trying to extract unique values between two JSON files. I see many jq posts on how to filter unique values within the same file, but not compare two.
Both of my files are in the same format:
{
"time":"2021-10-01T04:00:38.161Z",
"Number":2,
"signature":"e03756fa67a30d52837d3743d4d87e9a810c5e2ddf11061a976c386a742fa"
}
{
"time":"2021-10-01T04:01:38.164Z",
"Number":2,
"signature":"3b4d746ac2da2543047d8cc981db2464d4993065993449b321fc15d7f0aa6"
}
I would like to create a 3rd file which contains only unique values. If I must choose a single value to declare as unique, then I would select 'signature.'
Choose a field to compare on (e.g. .signature) and filter by it using unique_by on the combined array obtained with the --slurp (or -s) option:
jq -s 'unique_by(.signature)[]' file*.txt
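To get the result into a third file, name the inputs explicitly and redirect (the file names here are illustrative):
jq -s 'unique_by(.signature)[]' file1.txt file2.txt > file3.txt
Note that unique_by keeps one object per distinct .signature across both inputs; if you instead want only the objects whose signature appears in exactly one file, that's a different filter.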
I'm not sure I totally understand what you are trying to explain here, but if you are trying to extract/export values from your files with a command, then you need to specify which files to include and where you want to write that text.
You can extract data from any file. For example, if you were using SQLite with a wrapper that exposes a fetch method:
db.fetch(`data_specified_here`)
Note: this would fetch the data from the database (or, for you, the db file); then you would either log or print out the data.
Since you have keys like "time" and "Number", you need to specify which string or input (e.g. "time":"2021-10-01", and so on) you want to take out of your file.
If this didn't help, please re-ask your question with a little more detail and I can help more. I just gave a general rundown of how to fetch something from a DB, or in your case, JSON.

MS Project: How to set daily actual work for a task using a JavaScript Add-In?

I want to synchronize actual-work data from my company's web-based application with MS Project, and I am currently developing an Add-In with JavaScript in order to achieve this.
The red circle in my screenshot shows the data that I want to set programmatically. However, I have no idea how to achieve this.
I understand that I can get task GUIDs and then set task fields using the task GUID and the field ID. This way I can save the cumulative actual work, but not per day like in my screenshot.
The API docs on the MS Office website are rather hard to read and navigate. Any help would be appreciated!
Let's first separate the language from the operation.
Operationally, based on your circle, you want to set work for a task to happen on individual days? This is done using TimeScaleData; see https://learn.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa206255(v=office.11). When I did something similar (in VBA), I had to (1) get an array of time scale values, then (2) walk/iterate through that array and set work on those days:
Dim timeScaleValsArry As TimeScaleValues
Dim a As Long
Set timeScaleValsArry = myTask.Assignments(1).TimeScaleData(startDay, endDay, pjAssignmentTimescaledWork, pjTimescaleDays)
For a = 1 To timeScaleValsArry.Count
    timeScaleValsArry(a).Value = hoursToWorkThatDay
Next a
Breaking down the elements above:
myTask is the task (of type task) I want to manipulate.
Assignments is an array representing each resource assigned to the task; for my purposes, I only ever had 1 resource assigned, hence the index of (1).
TimeScaleData is the function that returns the array starting on startDay (whatever you want that to be) and ending on endDay; pjAssignmentTimescaledWork tells the function which data we want to work with (work here, but there are alternates), and pjTimescaleDays is the granularity you want to work with (you can go down to minutes, or up to years).
Then the returned array timeScaleValsArry is walked, and inside the loop the daily assignment for each value is manipulated. You'd need to customize this part to meet your needs; alternatively, you don't even need to loop if you always had three days: just hard code the array indices.
As far as language goes, clearly this is doable in VBA. Doing this in C# as a VSTO add-in has very similar syntax. I'd presume JavaScript (what are you using, Script Lab?) would also have similar syntax.

How to get a range of columns as collection?

EDIT: For context, I am trying to import a certain range of columns in an Excel sheet into a Blue Prism object as a collection.
So I've got a worksheet with columns from A to AM. When I get the sheet as a collection, blank columns named "Column1" to "Column10" (the first time) and "Column1" to "Column19" (the second time; note it's 19 cols this time) mysteriously appear in the collection. No data is in these columns - no whitespace, nothing.
In order to prevent anything of the sort from messing up the collection cols, I'm looking for a way to get a range of columns as a collection, e.g. A - AM. The number of rows is undetermined, so the get range as collection action is not suitable. Thanks in advance!
I never really liked the default object's action for getting a range as a collection because of that. You can create a new action in the Excel VBO object (do be careful with that, since re-importing the default object will basically erase the action; I usually rename the object to 'MS Excel VBO Customized' or something along those lines).
The way I would do it is as follows:
Open the 'MS Excel VBO' object and duplicate the page 'Get Worksheet Range as Collection', naming the copy 'Get Worksheet Range as Collection New' (or anything you deem suitable).
Edit the code stage: give it a new name (because code stages cannot have the same name in the same object) and change the inputs and code to match (I'm calling the new range input 'Address' here, but feel free to name it something else as long as you are consistent throughout).
Edit the start stage: delete the previous data items for Start Cell and End Cell and create one for Address.
Publish and save the object. You can then use it from the object or process you are working on and use a range such as A:AM.
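For reference, the core of the change inside the duplicated code stage is roughly the following (a sketch in the VB.NET style that Blue Prism code stages use; GetWorksheet is the helper already present in the default object, the input names are illustrative, and the rest of the copied page's logic stays as it was):
Dim ws As Object = GetWorksheet(Handle, Workbook, Worksheet)
' Take the whole-column range, e.g. "A:AM", instead of a Start Cell / End Cell pair
Dim rng As Object = ws.Range(Address)
rng.Select()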

create control-M job on the fly

Is it possible to dynamically create Control-M jobs?
Here's what I want to do:
I want to create two jobs. First one I call a discovery job, the second one I call a template job.
The discovery job runs against some database and comes back with an array of parameters. I then want to start the template job for each element in the returned array passing in that element as a parameter. So if the discovery job returned [a1,a2,a3] I want to start the template job 3 times, first one with parameter a1, second with parameter a2 and third one with parameter a3.
Only when each of the template jobs finishes successfully should the discovery job show as completed successfully. If one of the template job instances fails, I should be able to manually retry that one instance, and when it succeeds the discovery job should become successful.
Is this possible? And if so, how should it be done?
Between the various components of Control-M this is possible.
The originating job will have an On/Do tab - this can perform subsequent actions based on the output of the first job. This can be set up in various ways, but it basically works on the principle of "do x if y happens". The 'y' can be the job status (OK or not), the exit code (0 or not), or a text string in the standard output (e.g. "system wants you to run 3 more jobs"). The 'x' can be a whole list of things too - demand in a job, add a specific condition, set variables.
You should check out the Auto Edit variables (I think they've changed the name of these in the latest versions) but these are your user defined variables (use the ctmvar utility to define/alter these). The variables can be defined for a specific job only or across your whole system.
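For instance, a rough sketch of setting one from the command line (the variable name and value are illustrative, and the exact name format differs between job-level and global variables - check the ctmvar reference for your version):
ctmvar -action set -var "%%\\PARM1" -varexpr "a1"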
If you don't get the degree of control you want then the next step would be to use the ctmcreate utility - this allows full on-the-fly job definition.
You can do it, and the way I found that worked was to loop through a create script which plugs in your variable name from your look-up. You can do the same for the job name by using a counter to generate names such as adhoc0001, adhoc0002, etc. What I have done is to create the n ad-hoc jobs required by the query, order them into a new group, and then, once the group is complete, send the downstream conditions on. If one fails you can re-run it as normal. I use ctmcreate -input_file, which works a treat.
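A rough sketch of that loop in shell (the discovery output file and the helper that writes each definition file are illustrative; only ctmcreate itself is a real utility here):
i=0
while read -r val; do
  i=$((i+1))
  jobname=$(printf 'adhoc%04d' "$i")
  # write_job_def is a hypothetical helper that emits a job-definition
  # file for this instance, embedding $val as a variable
  write_job_def "$jobname" "$val" > "$jobname.def"
  ctmcreate -input_file "$jobname.def"
done < discovery_output.txt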
