I know you can use touch to create a new empty file.
I just learned that touch can be used to update the access and modification time of a file. I don't quite understand in what situations, and why, you would need to update the access and modification times of a file, i.e. what the usefulness of this particular function is.
Thanks!
Some utilities depend on the timestamps of files.
For example, make uses timestamps to decide whether it needs to do something (usually build): it compares the timestamps of the source files against those of the outputs (executables, object files, ...).
By touching a source file and then running make, you can force a rebuild.
In addition, touch has a -d option that can fake the modification time.
If you know what you're doing, you can use this to avoid long build times due to unnecessary recompilation.
For example, after adding a declaration to a common header file
that does not change any existing API, you can fake the header's real modification time
and bypass the Makefile's dependencies.
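As a rough illustration of what that looks like programmatically (not how touch itself is implemented), here is a Python sketch of the same idea using os.utime; the file name and date are hypothetical, and unlike touch this assumes the file already exists:

import os
from datetime import datetime

path = "common.h"  # hypothetical header file

# Plain touch: set access and modification times to "now".
os.utime(path)

# touch -d "2001-01-01 12:00": fake an older modification time, so that
# make considers targets built after this date to be up to date.
fake = datetime(2001, 1, 1, 12, 0).timestamp()
os.utime(path, times=(fake, fake))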
I've written an Extension that, among many other things, renames files based on the Types they contain.
This works fine for files in the directory-tree under the csproj-file -- I find the ProjectItem entry for the file and change its name.
For 'linked' files (those not in the directory-tree) I can rename the file (via File.Move()) but haven't found a way to programmatically modify the csproj-file (after the rename the csproj-file has to be modified manually).
If this is something that can be done I'd appreciate a pointer to the docs showing how to implement the functionality.
The easiest solution for me was to modify the csproj-file.
Open the csproj file, read the whole file, close it.
Verify that the file I want to rename (e.g. xxx.cs) occurs in only one directory
(if it occurs in multiple directories, the change has to be made manually).
Make the change.
Open the file, write the whole contents back, close it.
For an SDK project the change is applied immediately.
For a non-SDK project the change is applied after responding to the prompt that the csproj-file has been modified.
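As a rough illustration of those steps in Python (the file names are hypothetical, and the plain-text substitution is deliberately naive; a real implementation might parse the csproj as XML instead):

old_name, new_name = "OldType.cs", "NewType.cs"  # hypothetical names

with open("MyProject.csproj") as f:        # open, read whole file, close
    text = f.read()

# Verify the file occurs in exactly one place; otherwise change it manually.
if text.count(old_name) != 1:
    raise SystemExit(old_name + " does not occur exactly once; fix manually")

text = text.replace(old_name, new_name)    # make the change

with open("MyProject.csproj", "w") as f:   # open, write whole file, close
    f.write(text)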
I've written a small function that downloads a file from my S3 data repository only if the size of the local version of the file is different, to save bandwidth and time.
I would like to improve it to download if and only if the last update datetime is different. I can make the check using HEAD (from the httr package) to get the datetime for the remote file and file.info for the local one.
But (as expected) when I download a fresh copy of the file, it gets the current system date as its creation/last update time. I need a way to set the datetime of the fresh local copy to the one from the server, accounting for potential issues due to different time zones.
file.info doesn't seem to be able to write file properties.
Any idea how I can do that?
I don't think you can, and even if you could, that approach seems a bit unreliable to me (you mentioned time zones, for example). Instead, I would suggest you rely on a file's md5sum (a unique representation of its contents) to tell when it has changed:
library(tools)
if (md5sum(remote) != md5sum(local)) file.copy(remote, local)
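If the remote file isn't reachable as a local path, a variant of the same idea is to compare the local MD5 against the checksum the server reports. A Python sketch, under the assumptions that the URL is hypothetical and that the object was a single-part S3 upload (only then is its ETag a plain MD5 of the contents):

import hashlib
import urllib.request

url = "https://my-bucket.s3.amazonaws.com/data.csv"  # hypothetical URL
local = "data.csv"

# HEAD request: fetch only the headers, including the ETag checksum.
req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req) as resp:
    etag = resp.headers["ETag"].strip('"')

# MD5 of the local copy.
with open(local, "rb") as f:
    local_md5 = hashlib.md5(f.read()).hexdigest()

if local_md5 != etag:
    urllib.request.urlretrieve(url, local)  # fetch a fresh copy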
My team uses Gerrit for reviewing changes, and sometimes we have to push .patch files. These patches can reach over ~1000 lines, which is obviously not good for reviewing. It is very inconvenient to review them as a diff between the patch files themselves. In some cases it would be better to review the diff between the original file and the original file with the patch applied (not the diff between the .patch files). Even if the original file isn't under version control, the committer could attach it with the patch set, right?
Unfortunately, after lengthy googling, I didn't find anything...
Is there any way to show the diff between two patch files (not patch sets) in this way, or something similar?
I think the best you can do is to find a three-way merge tool to compare
the original source;
the source with the old patch; and
the source with the new patch.
To use such a tool, I think you'd need to use the command lines the Gerrit interface provides to fetch the changes locally, then apply the patches, then use the merge tool for the review.
I don't think you'll find a way to get a "three-way merge" view of the code in the Gerrit UI itself.
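To sketch that workflow (every name here is hypothetical: the source file, the patch file names, and kdiff3 standing in for whatever merge tool you prefer), something along these lines could produce the three versions and launch the comparison:

import shutil
import subprocess

def apply_patch(original, patch_file, out):
    # Copy the pristine source, then apply the patch to the copy.
    shutil.copy(original, out)
    subprocess.run(["patch", out, patch_file], check=True)

apply_patch("original.c", "old.patch", "with_old_patch.c")
apply_patch("original.c", "new.patch", "with_new_patch.c")

# Open all three versions in a three-way merge tool for review.
subprocess.run(["kdiff3", "original.c", "with_old_patch.c", "with_new_patch.c"])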
I have an unusual environment in a project where I have many files that are each independent standalone scripts. All of the code required by the script must be in the one file and I can't reference outside files with includes etc.
There is a common function in all of these files that does authorization; it is the last function in each file. If this function changes at all (as it does now and then) it has to be changed in all the files, and there are plenty of them.
Initially I was thinking of keeping the authorization function in a separate file and running a batch process that produced the final files by combining the auth file with each of the others. However, this is extremely cumbersome when debugging because the auth function needs to be in the main file for this purpose. So I'd always be testing and debugging in the folder with the combined file and then have to copy changes back to the uncombined files.
Can anyone think of a way to solve this problem? i.e. maintain an identical fragment of code in multiple files.
I'm not sure what you mean by "the auth function needs to be in the main file for this purpose", but a typical Unix solution might be to use make(1) and cpp(1) here.
Not sure what environment/editor you're using, but one thing you can do is use prebuild events. Create a start tag/end tag which defines the import region, and then in the prebuild event copy the common code between the tags and then compile...
//$start-tag-common-auth
..... code here .....
//$end-tag-common-auth
In your prebuild event, just find those tags, replace the region between them with the imported code, and then finish compiling.
VS supports pre- and post-build events which can call external processes (like batch files or scripts), but these do not directly interact with the environment.
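For what it's worth, a minimal sketch of such a prebuild step in Python, assuming the tag markers above and hypothetical file locations (common_auth.txt holding the canonical auth code, scripts/*.js as the targets):

import glob
import re

START = "//$start-tag-common-auth"
END = "//$end-tag-common-auth"

# Read the canonical auth code (hypothetical file name).
with open("common_auth.txt") as f:
    auth_code = f.read().strip()

# Match everything between the markers, inclusive, across lines.
pattern = re.compile(re.escape(START) + r".*?" + re.escape(END), re.DOTALL)
replacement = START + "\n" + auth_code + "\n" + END

for path in glob.glob("scripts/*.js"):  # hypothetical script location
    with open(path) as f:
        text = f.read()
    # A lambda avoids backslashes in the code being treated as group refs.
    with open(path, "w") as f:
        f.write(pattern.sub(lambda _: replacement, text))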
Instead of keeping the authentication code in a separate file, designate one of your existing scripts as the primary or master script. Use this one to edit/debug/work on the authentication code. Then add a build/batch process like you are talking about that copies the authentication code from the master script into all of the other scripts.
That way you can still debug and work with the master script at any time, you don't have to worry about one more file, and your build/deploy process keeps everything in sync.
You can use a technique like @Priyank Bolia suggested to make it easy to find/replace the required bit of code.
An ugly way I can think of:
have the original code in all the files, and surround it with markers like:
///To be replaced automatically by the build process to the latest code
String str = "my code copy that can be old";
///Marker end.
This code block can be replaced automatically by the build process, from one common code file.
I am uploading several files to an Alfresco repository via WebDAV. The batch process works fine, but after the upload, all dates in the repository are changed to the current date.
How can I make it keep and show the original file dates (creation and modification)?
Thanks.
You can leverage metadata extractors. Their main purpose is to extract metadata from binary files during upload. There are lots of built-in metadata extractors; just look for implementers of the interface org.alfresco.repo.content.metadata.MetadataExtracter. There are different extractors that can extract the creation date and set it as cm:created on the Alfresco node.
You can enable metadata extraction by applying it as a rule on a space; look for the action named Extract Common Metadata in the actions drop-down box when creating the rule.
I don't believe it's possible without the importing code explicitly turning off the default behaviour of the "cm:auditable" policy, and I suspect the WebDAV driver doesn't do this (since it has no way of knowing whether that's appropriate or not - there are cases where forcing the creation and modification dates to today is the correct thing to do).
This behaviour is discussed in some detail here - it might be worth evaluating whether the bulk filesystem import tool is a more appropriate way to import the content into Alfresco, particularly since it can preserve the creation and modification dates if you tell it to (i.e. by specifying the values of those properties).