Autosys File Watcher - looking for create date

I want to set up a File Watcher job to monitor a file shared in an Active Directory environment. The filename is always the same and does not contain the date/time, and the file stays in its location until replaced, as others might use the file.
How can I create a File Watcher job to look for a file less than 24 hours old?
AutoSys Automation AE - Release: 11.4.6.20180302-b425

There is no easy way of doing that. If you know when the file is created, I would suggest having the file watcher start a little after the creation time, or deleting the file after it has been processed.
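If a bit of scripting is acceptable, another workaround is to gate the downstream processing on a freshness check: a small command job that exits zero only when the file's modification time is within the last 24 hours (a portable creation time isn't generally available, and for a file that is replaced in place, mtime is usually what you want anyway). A minimal sketch in C, with a placeholder path, assuming the agent host provides POSIX stat():

/* freshcheck.c - hypothetical helper: exit 0 if the file was
 * modified within the last 24 hours, 1 if stale, 2 on error.
 * The path is a placeholder for the watched file. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
    const char *path = "/shared/incoming/report.dat"; /* placeholder */
    struct stat st;

    if (stat(path, &st) != 0) {
        perror("stat");
        return 2;                        /* missing or unreadable */
    }
    double age = difftime(time(NULL), st.st_mtime);
    return (age >= 0 && age < 24.0 * 60 * 60) ? 0 : 1;
}

Chained after the file watcher with a success condition, this means downstream jobs only fire when the file is actually fresh.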

Related

WinSCP command line for uploading file from folder named with current date

Our bank just changed the way in which we upload and download files to them. Previously we could log in to a secured website, choose a directory, and upload/download manually. Everything now has to be done through SFTP, using FileZilla or a similar program.
I want to automate SFTP upload process by using WinSCP.
I realize I will need to use the put command to upload. The file I want to upload is generated every day and the file name is exactly the same, but the folder it is uploaded from changes. The directory structure is as follows:
C:\Finance\FY 2021\YYYYMMDD\file.txt
My question is: what would the upload command look like to upload this file on a daily basis? The upload always takes place the same day, so the folder name will always be the current date in the above format.
Can these commands be contained within and run from a batch file, rather than creating a batch file that merely points to a separate script text file? Thanks for your help!
A follow-up question for handling the FY YYYY part:
Use WinSCP to upload from a folder with a fiscal year in its name to an SFTP server
WinSCP has %TIMESTAMP% syntax which you can use to refer to the folder with today's timestamp in its name.
And yes, you can specify WinSCP commands directly in the batch file using the /command parameter:
winscp.com /ini=nul /command ^
"open sftp://username:password#ftp.example.com/ -hostkey=""...""" ^
"put ""C:\Finance\FY 2021\%%TIMESTAMP#yyyymmdd%%\file.txt"" ""/remote/path/""" ^
"exit"

Automating - Appending two text files to create one Excel file daily

I have two files that come in daily to a shared drive. When they are posted, they come in with the current date as part of the file name, for example dataset1_12517.txt and dataset2_12517.txt; the next day they will post as dataset1_12617.txt, and so on. They are pipe-delimited files, if that matters.
I am trying to automate a daily merge of these two files into a single Excel file that will be overwritten with each merge (the file name remains the same), so my Tableau dashboard can read the output without needing a new connection daily. The tricky part is that the file names change daily, but they follow a specific naming convention.
I have access to R Studio. I have not started writing code yet, so I am looking for a place to start, or for a better solution.
On a Windows machine, use the copy or xcopy commands. There are several variations on how to do it. The gist of it, though, is that if you supply the right switches, the source file will be appended to the destination file.
I like using xcopy for this. Supply the source files and then the destination file name.
This becomes a batch file and you can run it as a scheduled task or on demand.
This is roughly what it would look like. You may need to check the online docs to choose the right parameter switches.
xcopy C:\SRC\source_2010235.txt newfile.txt /s
As you play with it, you may even try using a wildcard approach.
xcopy C:\SRC\*.txt newfile.txt /s
See Getting XCOPY to concatenate (append) for more details.
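If a compiled helper turns out to be easier to schedule than the copy/xcopy switches, the same append idea can be sketched in a few lines of C. The input names below are the examples from the question; a real job would build them from the current date, and the merged output is still a pipe-delimited text file (which Tableau can read directly) rather than a true Excel workbook:

/* merge_daily.c - hypothetical sketch: append the two daily
 * extracts into one output file, overwritten on every run. */
#include <stdio.h>

static int append(FILE *out, const char *path)
{
    FILE *in = fopen(path, "rb");
    if (!in) { perror(path); return 1; }
    char buf[8192];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    return 0;
}

int main(void)
{
    FILE *out = fopen("merged.txt", "wb");      /* placeholder output name */
    if (!out) { perror("merged.txt"); return 1; }
    int rc = append(out, "dataset1_12517.txt"); /* example names from */
    rc |= append(out, "dataset2_12517.txt");    /* the question       */
    fclose(out);
    return rc;
}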

Rsync Time Machine Style Backup issues

I bought an external USB3 drive to back up a WD MyCloud NAS (it plugs directly into a USB3 port on the NAS), and started searching for an rsync script to simulate a Time Machine style backup.
I found one that I like, but it's not working the way I expected it to.
I'm therefore hoping you could shed some light on the matter and suggest what could/should be done to, first of all, make it work, and second, get a result similar to a Time Machine style snapshot backup.
Where I found the script I started with:
https://bipedu.wordpress.com/2014/02/25/easy-backup-system-with-rsync-like-time-machine/
He breaks down the process like this:
So here I make first a "date" variable that will be used in the name of the backup folder, to easily know when that backup/snapshot was made.
Then use rsync with some parameters (see man rsync for more details):
-a = archive mode (recursive copy, preserving permissions, times, owners, and links)
-P = to give progress info (optional)
--delete = to delete files from the backup in case they are removed from the source
--log-file = to save the log into a file (optional)
--exclude = to exclude some folders/files from the backup. These are relative to the source path! Do not use an absolute path here!
--link-dest = link to the latest backup snapshot
/mnt/HD/HD_a2/ = source path
/mnt/USB/USB2_c2/MyCloud/Backups/back-$date = destination folder; it will contain all the content from the source.
Then by using rm I remove the old link to the old backup (the "current" link) and replace it with a new soft link to the newly created snapshot.
So now whenever I click on "current" I go in fact to the latest backup. And because the date is different every time I make a backup, the old snapshots are kept, so I have one snapshot per day.
Here is my script version based on his outline.
#!/bin/bash
date=$(date "+%Y%m%d-%H-%M")
rsync -aP --delete \
    --log-file=/tmp/log_backup.log \
    --exclude="lost+found" \
    --exclude="Anti-Virus Essentials" \
    --exclude=Nas_Prog \
    --exclude=SmartWare \
    --exclude=plex_conf \
    --exclude=Backup \
    --exclude=TimeMachineBackup \
    --exclude=groupings.db \
    --link-dest=/mnt/USB/USB2_c2/MyCloud/Backups/Current \
    /mnt/HD/HD_a2/ /mnt/USB/USB2_c2/MyCloud/Backups/back-$date
rm -f /mnt/USB/USB2_c2/MyCloud/Backups/Current
ln -s /mnt/USB/USB2_c2/MyCloud/Backups/back-$date /mnt/USB/USB2_c2/MyCloud/Backups/Current
So if I am understanding his idea, the first initial backup lives in /mnt/USB/USB2_c2/MyCloud/Backups/Current.
Then on subsequent backups, the script creates a new directory in /mnt/USB/USB2_c2/MyCloud/Backups/Current/ named 'back-2015-12-20T09-19' or whatever date the backup took place.
This is where I am getting a bit lost on what's actually happening.
It writes a time-stamped folder to the /Backups/Current/ directory, and ALSO to the /Backups/ directory, so I now have two versions of those time-stamped folders in two different directories.
I'm confused as to where the most complete set of recent backup files actually resides now.
What I THOUGHT was going to happen is that the script would run, and for any file that wasn't changed, it would create a link from the 'Current' folder to the time-stamped folder.
I'm sure I have something wrong here, and I'm hoping someone can point out the error and/or suggest a better method.

Usage of mmap and reloading changes to the file

I'm using mmap to load a big file with READ-ONLY access.
It's expected that a cron job overwrites this file once daily with updated content.
My question is: how would my executable re-mmap the updated file to get at the updated content?
Do I need to call mmap again? How would my executable know at what time the file was updated?
What are the usual recommended approaches, and what options are available, with their tradeoffs?
If the cron job just opens the file and overwrites the data in it, the new data should be immediately reflected in your mapped memory. If the cron job creates a new file, writes the data there, and then calls rename() to move the new file on top of the old, you need to close the old file and reopen to get the new data. This is often done to avoid data corruption in case of a power failure while rewriting the file.
As for how you get notified, there are several possibilities. The easiest might be to have the cron job just send a signal (e.g. SIGUSR1) to your process. You can then react to the signal and do your work. Otherwise, you could use inotify (on Linux) to monitor your file for writes.
Another option is to periodically poll the file's mtime to detect changes. Personally, I'd avoid that route though, as it seems rather hacky and inelegant.
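To make the rename() case concrete, here is a minimal sketch in C of a reader that remaps when it receives SIGUSR1, as suggested above. The file path is a placeholder and the work loop is elided:

/* remap_reader.c - hypothetical sketch: remap a read-only file
 * when the updater signals that it has rename()d a new version in. */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static volatile sig_atomic_t reload = 0;

static void on_usr1(int sig) { (void)sig; reload = 1; }

static void *map_file(const char *path, size_t *len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }
    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); exit(1); }
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                           /* the mapping survives the close */
    if (p == MAP_FAILED) { perror("mmap"); exit(1); }
    *len = (size_t)st.st_size;
    return p;
}

int main(void)
{
    const char *path = "/data/big.file";        /* placeholder */
    struct sigaction sa = { .sa_handler = on_usr1 };
    sigaction(SIGUSR1, &sa, NULL);

    size_t len;
    void *base = map_file(path, &len);
    for (;;) {
        if (reload) {                    /* updater finished a rename() */
            munmap(base, len);           /* drop the old inode's pages  */
            base = map_file(path, &len); /* open() now sees the new one */
            reload = 0;
        }
        /* ... read-only work against base/len goes here ... */
        sleep(1);
    }
}

Because open() resolves the path again, the remap picks up the new inode left behind by rename(); the old mapping would otherwise keep showing the replaced file's contents.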

Unix invoke script when file is moved

I have tons of files dumped into a few different folders. I've tried organizing them several times; unfortunately, there is no organizational structure that consistently makes sense for all of them.
I finally decided to write myself an application that I can add tags to files with, so the organization can be customized to how things are actually structured.
I want to prevent orphaned data: if I move or rename a file, my tag application should be told about it so it can update the name in the database. I don't want it tagging files that no longer exist, or to have to re-add tags for files that used to exist.
Is there a way I can write a callback that will hook into the mv command so that if I rename or move my files, they will invoke the script, which will notify my app, which can update its database?
My app is written in Ruby, but I am willing to play with C if necessary.
If you use Linux you can use inotify (manpage) to monitor directories for file events. It seems there is a Ruby interface for inotify. (A bare C sketch is shown below, after the event list.)
From Wikipedia:
Some of the events that can be monitored for are:
IN_ACCESS - read of the file
IN_MODIFY - last modification
IN_ATTRIB - attributes of file change
IN_OPEN and IN_CLOSE - open or close of file
IN_MOVED_FROM and IN_MOVED_TO - when the file is moved or renamed
IN_DELETE - a file/directory deleted
IN_CREATE - a file in a watched directory is created
IN_DELETE_SELF - file monitored is deleted
This does not work on Windows (and, I think, not on other Unices besides Linux either), as inotify does not exist there.
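Since the asker is willing to drop down to C if necessary, this is roughly what the raw inotify loop looks like on Linux. The watched directory is a placeholder, and a real version would also pair moves across directories by matching the cookie field of IN_MOVED_FROM/IN_MOVED_TO events:

/* watch_moves.c - hypothetical sketch: report move/rename and
 * delete events in one directory (Linux only). */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    if (inotify_add_watch(fd, "/home/me/files",  /* placeholder path */
                          IN_MOVED_FROM | IN_MOVED_TO | IN_DELETE) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0) break;
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->mask & IN_MOVED_FROM) printf("moved away: %s\n", ev->name);
            if (ev->mask & IN_MOVED_TO)   printf("moved here: %s\n", ev->name);
            if (ev->mask & IN_DELETE)     printf("deleted:    %s\n", ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}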
Can you control the PATH of your users? Place a script or executable earlier in the PATH than the standard mv command. Have it do what you require and then call the standard mv to perform the move.
Alternatively, add an alias in each user's profile and have the alias call your replacement mv command.
Or rename the existing mv command and place a replacement in the same directory: call it mv and have it call your newly renamed mv command after doing what you want (a sketch of this variant follows).
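A sketch of that last variant in C: the wrapper notifies the tagging app, then hands off to the renamed original. The helper path and /bin/mv.real are assumptions for illustration, and this naive version only reports the last two arguments, ignoring mv's options and multi-source forms:

/* mv_wrapper.c - hypothetical replacement for mv. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc >= 3) {                     /* at least: mv SRC DST */
        pid_t pid = fork();
        if (pid == 0) {
            /* placeholder helper that updates the tag database */
            execl("/usr/local/bin/notify-tagger", "notify-tagger",
                  argv[argc - 2], argv[argc - 1], (char *)NULL);
            _exit(127);                  /* helper missing: move anyway */
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);       /* let the notifier finish first */
    }
    execv("/bin/mv.real", argv);         /* the renamed original (assumption) */
    perror("execv");
    return 1;
}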
