How can you get a second copy of a running log file without deleting it? - unix

I am trying to troubleshoot an issue in an application running on a flavor of UNIX.
The default logging level puts out a reasonable number of messages and does not affect performance.
But when there is an issue I need to change the logging level to verbose, which produces thousands of lines per second and affects performance.
Deleting the trace log file would crash the application.
Being able to change the logging level back as quickly as possible helps avoid a production performance hit; the code is running in production, so a performance hit is not acceptable.
How can one create a second instance of the log for just the second or two in which the problem is reproduced?
This would save having to copy the whole large file and then edit out the log entries that are not relevant to the problem at hand.
I have answered my own question because I have found this tip to be very useful at times and hope it helps others.

The steps below show how to quickly get a small section of the log to a separate file.
1) Navigate to the directory with the log file that you want to clear.
cd /logs
2) At the command prompt, enter the following line (include the ">"):
> trace.log
This truncates trace.log to zero length without invalidating the application's open file handle, so the application keeps writing to the same file.
3) Now quickly reproduce the error.
4) Quickly go back to the command line and copy the trace.log file to a new file.
cp trace.log snippet_of_trace.log
5) Now you have a much smaller log to analyze.
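For repeatability, the whole capture can be wrapped in a small script. This is only a sketch: it assumes the live log is /logs/trace.log, that a few seconds is enough to reproduce the problem, and that the snippet file name below suits you.
#!/bin/sh
# Capture a short window of a running log without deleting the file.
LOG=/logs/trace.log                # assumed location of the live log
SNIPPET=/logs/snippet_of_trace.log # assumed name for the captured snippet
WINDOW=5                           # seconds allowed for reproducing the problem

> "$LOG"                           # truncate in place; the writer keeps its open handle
echo "Log truncated - reproduce the problem now..."
sleep "$WINDOW"
cp "$LOG" "$SNIPPET"               # snapshot only the lines written during the window
echo "Captured $(wc -l < "$SNIPPET") lines to $SNIPPET"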

Related

Rsync - How to display only changed files

When my colleague and I upload a PHP web project to production, we use rsync for the file transfer with these arguments:
rsync -rltz --progress --stats --delete --perms --chmod=u=rwX,g=rwX,o=rX
When this runs, we see a long list of files that were changed.
Running this twice in a row shows only the files that changed between the two transfers.
However, when my colleague runs the same command after I have, he sees a very long list of all files being "changed" (even though the contents are identical), and the transfer is extremely fast.
If he uploads again, there will again be only minimal output.
So it seems to me that we get the correct output, only showing changes, but if someone else uploads from another computer, rsync regards everything as changed.
I believe this may have something to do with the file permissions or times, but would like to know how to best solve this.
The idea is that we only see the changes, regardless who does the upload and in what order.
Such a huge file list is quite scary to see in a large project, and it means we have no idea what actually changed.
PS: We both deploy using the same user@server as the target.
The -t in your command tells rsync to copy file timestamps, so if they don't match you'll see those files get updated. If you think the timestamps on your two machines should already match, then the problem is something else.
The easiest way to ensure that the timestamps match would be to rsync them down from the server before making your edits.
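As a sketch of both ideas (the source and destination paths below are made up): a dry run with --itemize-changes shows which attribute rsync thinks differs for each file, and pulling the deployed tree down first aligns your local timestamps with the server's.
# Dry run with itemized output: in each line the letters show what differs
# (t = times, p = permissions, s = size, c = checksum). Adding -in to your
# existing command works the same way; paths here are placeholders.
rsync -rltzin --delete --perms --chmod=u=rwX,g=rwX,o=rX ./ user@server:/var/www/project/

# Pull the deployed tree (and its timestamps) down before editing, so the
# next upload only reports real content changes.
rsync -rltz user@server:/var/www/project/ ./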
Incidentally, having two people use rsync to update a production server seems error prone and fragile. You should consider putting your files in Git and pushing them to the server that way (you'd need a server-side hook to update the working copy for the web server to use).
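A rough illustration of that suggestion, with made-up paths and an assumed branch name of main: a bare repository on the server plus a post-receive hook that checks the pushed branch out into the web root.
# On the server (one-time setup; paths are placeholders):
git init --bare /srv/project.git

# Contents of /srv/project.git/hooks/post-receive (make it executable):
#!/bin/sh
GIT_WORK_TREE=/var/www/project git checkout -f main

# On each developer's machine: deploy by pushing.
git remote add production user@server:/srv/project.git
git push production main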

MTM - Error saving the test run :There is not enough space on the disk

When I start running an automated test case on the test agent through Test Manager, the run log shows the "There is not enough space on the disk" error. The test agent has enough free space on the disk. What could be the issue? Please help.
Actually, the test controller runs out of space because of the files stored in %localappdata%\VSEQT\QTController\TestRunStorage, which never get deleted. To resolve the issue, remove all files from the TestRunStorage folder or turn off job spooling in the QTController.exe.config file. Please refer to the following reference links for more details.
Reference Links:
http://www.anujchaudhary.com/2011/06/tfs-2010-test-controller-disk-runs-out.html
https://blogs.msdn.microsoft.com/shivash/2011/01/20/using-visual-studio-test-controller-with-mtm-and-disk-out-of-space-issues/

Autosys job in Windows fails to copy all files but doesn't fail

We have a box scheduled in Autosys. When the box gets triggered at the scheduled time, not all of the PDFs generated by one of the steps get copied, yet the job does not fail either. When we put the box on HOLD and run it step by step, all outputs get copied.
A good troubleshooting step would be to add a short sleep/delay between the step that generates the files and the downstream jobs.
A better way might be to use a file trigger or file watcher that only lets the downstream steps proceed once the files are all there (you can trigger on the number of files or whatever stat is appropriate).
If your copy step is a simple copy command without any validation (like copy abc_file_*.pdf), then it won't have any trouble copying whatever files it sees, even if that is fewer than you intend; a sketch of that kind of validation follows.
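A minimal sketch of such a check, assuming the copy step can run a POSIX shell script (on Windows the same logic could be expressed in a batch or PowerShell step); the directories, file pattern, and expected count are placeholders:
#!/bin/sh
# Fail the job (non-zero exit) if the expected PDFs are not all present yet,
# so Autosys can alert or retry instead of silently copying a partial set.
SRC_DIR=/data/reports/out        # placeholder: where the PDFs are generated
DEST_DIR=/data/reports/archive   # placeholder: copy target
EXPECTED=12                      # placeholder: number of PDFs the step should produce

count=$(ls "$SRC_DIR"/abc_file_*.pdf 2>/dev/null | wc -l)
if [ "$count" -lt "$EXPECTED" ]; then
    echo "Only $count of $EXPECTED PDFs present; failing the job." >&2
    exit 1
fi
cp "$SRC_DIR"/abc_file_*.pdf "$DEST_DIR"/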

Acting on changes to a log file in R as they happen

Suppose a log file is being written to disk with one extra line appended to it every so often (by a process I have no control over).
I would like to know a clean way to have an R program "watch" the log file, and process a new line when it is written to the log file.
Any advice would be much appreciated.
You can use file.info to get the modification time of a file; just check every so often and take action if the modification time changes. Keeping track of how many lines have already been read will enable you to use scan or read.table to read only the new lines.
You could also delete or move the log file after it is read by your program. The external program will then create a new log file, I assume. Using file.exists you can check if the file has been recreated, and read it when needed. You then add the new data to the already existing data.
I would move the log file to an archive subfolder and read the logfiles as they are created.
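A minimal R sketch of the polling approach described above, assuming the log sits at /logs/app.log and each line can be handled independently (the path, interval, and process_line function are placeholders):
watch_log <- function(path = "/logs/app.log", interval = 1, process_line = print) {
  lines_read <- 0
  last_mtime <- file.info(path)$mtime
  repeat {
    Sys.sleep(interval)
    mtime <- file.info(path)$mtime
    if (!is.na(mtime) && !identical(mtime, last_mtime)) {
      last_mtime <- mtime
      # Read only the lines appended since the last check.
      new_lines <- scan(path, what = character(), sep = "\n",
                        skip = lines_read, blank.lines.skip = FALSE,
                        quiet = TRUE)
      lines_read <- lines_read + length(new_lines)
      lapply(new_lines, process_line)
    }
  }
}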

Strategy for handling user input as files

I'm creating a script to process files provided to us by our users. Everything happens within the same UNIX system (running on Solaris 10).
Right now our design is this:
User places file into upload directory
Script placed on cron to run every 10 minutes.
Script looks for files in the upload directory, processes them, and deletes them immediately afterward
For historical/legacy reasons, #1 can't change. Also, deleting the file after processing is a requirement.
My primary concern is concurrency. It is very likely that a situation will arise where the analysis script runs while an input file is still being written. In that case, data will be lost, and this is (obviously) unacceptable.
Since we have no control over the user's chosen means of placing the input file, we cannot require them to obtain a file lock. As I understand it, file locks on UNIX are advisory only, so a user would have to choose to honor them.
I am looking for advice on best practices for handling this problem. Thanks
Obviously all the best solutions involve the client providing some kind of trigger indicating that it has finished uploading. That could be a second file, an atomic move of the file to a processing directory after writing it to a stage directory, or a REST web service. I will assume you have no control over your clients and are unable or unwilling to change anything about them.
In that case, you still have a few options:
You can use a pretty simple heuristic: check the file size, wait 5 seconds, check the file size again. If it didn't change, it's probably good to go (see the sketch after this list).
If you have super-user privileges, you can use lsof to determine if anyone has this file open for writing.
If you have access to the thing that handles upload (HTTP, FTP, a setuid script that copies files?) you can put triggers in there of course.
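A minimal sketch of that size-stability heuristic, assuming the cron job runs a POSIX shell script and the upload directory is /uploads (the directory, the wait time, and the process_file command are placeholders):
#!/bin/sh
# Only process files whose size has been stable for a few seconds, to avoid
# picking up uploads that are still being written.
UPLOAD_DIR=/uploads    # placeholder
WAIT=5                 # seconds between size checks

for f in "$UPLOAD_DIR"/*; do
    [ -f "$f" ] || continue
    size1=$(wc -c < "$f")
    sleep "$WAIT"
    size2=$(wc -c < "$f")
    if [ "$size1" = "$size2" ]; then
        # Stable: process and then delete, as the requirements dictate.
        # process_file is a placeholder for your analysis step.
        # (With root privileges, lsof "$f" could additionally confirm no writer still has it open.)
        process_file "$f" && rm -f "$f"
    fi
    # Otherwise skip it; the next cron run will pick it up.
done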
