What is the Unix test -f command equivalent in PowerShell?

I'm converting Unix shell scripts into PowerShell scripts.
I want to know the PowerShell equivalent of the Unix test -f command.
If anybody knows, please answer.

test -f FILE exits with a success exit code if "FILE exists and is a regular file". For PowerShell, you probably want to use Test-Path -Type Leaf FILE. We need the -Type Leaf to make sure that Test-Path doesn't return $true for directories.
test -f and Test-Path -Type Leaf aren't going to be 100% identical. The fine differences between them may or may not matter, so I'd audit the script just to be sure. For example, test -f some_symlink is not true, but Test-Path -Type Leaf some_symlink is true. (Well, it was when I tested with an NTFS symlink.)
NB: test may be a built-in command in whichever shell you are using. I assume it has the semantics quoted above from the man page for test.
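As a minimal sketch of the translation (the paths here are hypothetical):

# sh / bash: true only for an existing regular file
if test -f "/tmp/example.txt"; then
    echo "regular file"
fi

# PowerShell: -PathType (the full name of the -Type parameter used above) with Leaf excludes directories
if (Test-Path -LiteralPath 'C:\tmp\example.txt' -PathType Leaf) {
    Write-Output 'regular file'
}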

Related

Correct Wildcard Notation for UNIX systems?

I am currently trying to remove a number of files from my root directory. There are about 110 files with almost the exact same file name.
The file names appear as wp-cron.php?doing_wp_cron=1.93, where 93 is any integer from 1 to 110.
However, when I try to run sudo rm /root/wp-cron.php?doing_wp_cron=1.* it actually tries to find a file with the literal asterisk * in the filename, leaving me with a file-not-found error.
What is the correct notation for removing a series of files using wildcard notation?
NOTE: I have already tried quoting the file path with both single quotes ' and double quotes ". This did not help.
Any thoughts on the matter?
Take a look at the permissions on the /root directory with ls -ld /root; typically a non-root user will not have r-x permissions, which means they cannot read the directory listing.
In your command sudo rm /root/wp-cron.php?doing_wp_cron=1.* the filename expansion happens in the shell running as your non-root user. It fails to expand to the individual filenames because you do not have permission to read /root.
The shell then execs sudo\0rm\0/root/wp-cron.php?doing_wp_cron=1.*\0. (Three separate, explicit arguments).
sudo, after satisfying its conditions, execs rm\0/root/wp-cron.php?doing_wp_cron=1.*\0.
rm runs and attempts to unlink the literal path /root/wp-cron.php?doing_wp_cron=1.*, failing as you've seen.
The solution to removing depends on your sudo permissions. If permitted, you may run a bash sub-process to do the file-name expansion as root:
sudo bash -c 'rm /root/wp-cron.php?doing_wp_cron=1.*'
If not permitted, do the sudo rm with explicit filenames.
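To verify that the expansion now happens as root before running the destructive command, you can substitute echo for rm as a dry run:
sudo bash -c 'echo /root/wp-cron.php?doing_wp_cron=1.*'
If that prints the individual filenames rather than the literal pattern, the rm variant will work too.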
Brandon,
I agree with @arkascha. That glob should match, so something is amiss here. Do you get the appropriate list of files if you use a different binary, say ls? Try this:
ls /root/wp-cron.php?doing_wp_cron=1.*
If that returns the full list of files, then you know there's something funny with your environment regarding rm. This could be an alias as suggested.
If you cannot determine what is different or wrong with your environment, you could loop over the glob and remove each file as a workaround:
for file in /root/wp-cron.php?doing_wp_cron=1.*
do
    rm "$file"
done

What does '-d' do in Unix?

I have this code; it was set as a condition for a step.
What does the -d in the code mean?
if [ -d $FTBASEDIR/$1/$2 ]; then
ftcmd="lcd $FTBASEDIR"
ftcmd2="cd $FTROOTDIR"
ftcmd3="put $1"
fi
It means: is the following argument a directory?
From the bash manpage (under CONDITIONAL EXPRESSIONS):
-d file
True if file exists and is a directory.
There's a whole host of these, letting you discover regular files, character-special files, whether files are writable, and so on.
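A few of the common operators, as a quick illustration (the path is just an example):
f=/etc/hosts
[ -e "$f" ] && echo "exists"
[ -f "$f" ] && echo "regular file"
[ -d "$f" ] && echo "directory"
[ -r "$f" ] && echo "readable"
[ -w "$f" ] && echo "writable"
[ -x "$f" ] && echo "executable"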
From http://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions:
-d file
True if file exists and is a directory.
[ is actually a command name, (nearly) equivalent to the test command.
Both [ and test are implemented both as built-in commands in many shells and as actual executables, typically in /usr/bin ([ is usually a symbolic link to test). (In the old days, they were just executables; building them into the shell lets tests run faster.)
The documentation for bash or for the test command explains that -d tests whether the following argument is a directory.
(I wrote "nearly" above; the difference is that [ requires a matching ] argument, while test does not.)

Replacing DOS line endings (dos2unix)

I am using the following command to convert DOS line endings to Unix line endings. Every time I execute it, it gets stuck at the command prompt. What is wrong with the command below?
for i in `find . -type f \( -name "*.c" -o -name "*.h" \)`; do sed -i 's/\r//' $i ; done
In Ubuntu, dos2unix and unix2dos are implemented as fromdos and todos respectively. They are available in the package tofrodos.
I suggest using
find . -type f \( -name "*.c" -o -name "*.h" \) -print0 | xargs -0 fromdos
I suggest confirming that your find command and for loop work properly.
You can do this by simply using an echo statement to print each file's name.
Depending on your platform (and how many .c and .h files you have) you might need to use xargs instead of directly manipulating the output from find. It's hard to say, because you still haven't told us which platform you're on.
Also, depending on your platform, different versions of sed work differently with the -i option.
Sometimes you MUST specify a suffix for the backup file; sometimes you don't have to.
All of the above are reasons that I suggest testing your command piece by piece.
You should read the man pages for each command you're trying to use on the system on which you're trying to use it.
Regarding the sed portion of your command, you should test that on a single file to make sure it works.
You can use the following sed command to fix your newlines:
sed 's/^M$//' input.txt > output.txt
You can type the ^M by typing Ctrl-V Ctrl-M.
Like I said before, the -i option works differently on different platforms.
If you have trouble getting that to work, you could have sed output a new file and then overwrite the original file afterwards.
This would be very simple to do inside your for loop.
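A minimal sketch of that temp-file approach (assuming GNU sed, which understands \r; with other seds use the literal ^M form shown above; the .tmp suffix is arbitrary):
find . -type f \( -name '*.c' -o -name '*.h' \) | while IFS= read -r f
do
    sed 's/\r$//' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
Unlike the original for loop, the while read form also survives filenames containing spaces.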

How to process directory first, then files and directories under it?

On my Linux system, I've got into a situation where there are no write/execute permissions on directories on a mounted drive. As a result, I can't get into a directory before I open up its permissions. This happens every time I mount that drive. The mounting is done by a tool under the hood, so I doubt I could modify the mount parameters to address this problem.
As a workaround, I am using this find command to modify permissions on directories. I use it repetitively, since it gets one more level of directories on each run.
find . -type d -print0 | xargs -0 -n 1 chmod a+wrx
I am sure there is a better way to do this. I wonder if there is a find option that processes a directory first and then its contents - the opposite of the -depth/-d option.
Any tips?
Try:
chmod +wrx /path/to/mounted/drive/*
Another possibility is to investigate the mount options available for that particular filesystem type (I'm guessing FAT/VFAT here, but it might be something else). Some filesystems have mount options for overriding the default permissions in some form or other. That would also avoid having to change all the permissions, which might have some effect when you put that filesystem back to wherever its original source is (is this a memory card from a camera, or a USB stick, or ...?)
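For example, a hedged sketch for VFAT (the device and mount point are placeholders; check mount(8) for the options that actually apply to your filesystem):
# uid/gid make the files appear owned by you; dmask/fmask set directory/file permissions
sudo mount -t vfat -o uid=$(id -u),gid=$(id -g),dmask=0022,fmask=0133 /dev/sdb1 /mnt/drive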
Thanks to StarNamer at unix.stackexchange.com, here's something that worked great:
Try:
find . -type d -exec chmod a+rwx {} ';'
This will cause find to execute the chmod before it tries to read the directory rather than trying to generate a list and feed it to xargs.
From: https://unix.stackexchange.com/q/45907/22323

Problem redirecting output of find to a file

I am trying to put the result of a find command into a text file on a Unix bash shell.
Using:
find ~/* -name "*.txt" -print > list_of_txt_files.list
However, list_of_txt_files.list stays empty, and I have to kill the find to get the command prompt back. I do have many .txt files in my home directory.
Alternatively, how do I save the result of a find command to a text file from the command line? I thought that this should work.
The first thing I would do is use single quotes (some shells will expand the wildcards, though I don't think bash does, at least by default). Also, the first argument to find should be a directory, not a list of files:
find ~ -name '*.txt' -print > list_of_txt_files.list
Beyond that, it may just be taking a long time, though I can't imagine anyone having that many text files (you say you have a lot, but it would have to be pretty massive to slow down find). Try it first without the redirection and see what it outputs:
find ~ -name '*.txt' -print
You can redirect output to a file and console together by using tee.
find ~ -name '*.txt' -print | tee result.log
This will redirect the output to the console and to a file, so you don't have to guess whether the command is actually executing.
Here is what worked for me
find . -name '*.zip' -exec echo {} \; > zips.out
