Robocopy Delete Directory Not Functioning

I am testing a script to delete unwanted directories with Robocopy:
@echo off
set maxdepth=3
set source=.
robocopy %source% %source% /s /purge /lev:%maxdepth% /xd unwantedDirectory
However, the directories remain untouched.
I have tried enabling delayed expansion (with the % changed to !), replacing the /s option with /e to include empty directories, and replacing /purge with /mir, but the directories still do not get deleted.
The terminal printout shows that unwantedDirectory does not appear anywhere in the directory tree:
Source : G:\
Dest : G:\
Files : *.*
Exc Dirs : unwantedDirectory
Options : *.* /S /DCOPY:DA /COPY:DAT /PURGE /LEV:3 /R:1000000 /W:30
3 G:\
0 G:\$RECYCLE.BIN\
0 G:\$RECYCLE.BIN\S-1-5-21-1628594993-1838746295-2145544969-1000\
3 G:\$RECYCLE.BIN\S-1-5-21-3144879857-1687253292-3729363637-1001\
0 G:\$RECYCLE.BIN\S-1-5-21-3144879857-1687253292-3729363637-1005\
0 G:\Backups_01\
1 G:\Backups_01\2022-02-15 1409\
1 G:\Backups_01\2022-07-17 1725\
1 G:\Backups_01\2022-10-22 1335\
1 G:\Backups_01\2022-11-20 1922\
1 G:\Backups_01\2022-12-18 1812\
0 G:\Backups_01\example\
29 G:\Backups_02\
0 G:\Recovery\
Total Copied Skipped Mismatch FAILED Extras
Dirs : 17 0 14 0 3 0
Files : 40 0 40 0 0 0
Bytes : 3.562 g 0 3.562 g 0 0 0
Times : 0:00:00 0:00:00 0:00:00 0:00:00

Related

Most efficient way to subset a file by a list of text patterns to match

I have a large, tab-delimited file (technically a VCF of genetic variants), call it file.vcf, with millions of lines that look something like this:
locus1 1 15 0 0/0,21,2,2,;0
locus1 2 17 0 0/0,21,2,1,;0
locus2 1 10 0 0/1,21,2,2,;0
locus3 1 2 0 0/1,21,2,1,;0
...
locus123929 1 3 0 1/0,22,2,1,;0
locus123929 2 4 0 1/2,1,1,3,;0
I'd like to subset this original file to include all lines from loci in another file (search-file.txt). For example, if search-file.txt were:
locus1
locus3
locus123929
Then the final file would be:
locus1 1 15 0 0/0,21,2,2,;0
locus1 2 17 0 0/0,21,2,1,;0
locus3 1 2 0 0/1,21,2,1,;0
locus123929 1 3 0 1/0,22,2,1,;0
locus123929 2 4 0 1/2,1,1,3,;0
What is the most efficient way to subset this large a file using either bash or R? (Note: reading the entire file into memory, as in R, is very slow and often crashes the system.)
I'd use awk:
awk -F'\t' '
NR == FNR { a[$0]; next }
$1 in a
' search-file.txt file.vcf > filtered_file
A pure bash read loop would be too slow for this job.
Note: Make sure the file search-file.txt doesn't have DOS line endings.
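As a quick check, the question's sample data can be recreated and filtered with this approach (file names as in the question; the data lines are the question's examples written out with literal tabs):

```shell
# Recreate the question's sample inputs (tab-separated fields in file.vcf).
printf 'locus1\nlocus3\nlocus123929\n' > search-file.txt
printf 'locus1\t1\t15\t0\t0/0,21,2,2,;0\n'      >  file.vcf
printf 'locus1\t2\t17\t0\t0/0,21,2,1,;0\n'      >> file.vcf
printf 'locus2\t1\t10\t0\t0/1,21,2,2,;0\n'      >> file.vcf
printf 'locus3\t1\t2\t0\t0/1,21,2,1,;0\n'       >> file.vcf
printf 'locus123929\t1\t3\t0\t1/0,22,2,1,;0\n'  >> file.vcf
printf 'locus123929\t2\t4\t0\t1/2,1,1,3,;0\n'   >> file.vcf
# First pass (NR == FNR) loads the loci into array a; second pass prints
# only lines of file.vcf whose first field is a known locus.
awk -F'\t' 'NR == FNR { a[$0]; next } $1 in a' search-file.txt file.vcf > filtered_file
```

This keeps the five matching lines (two for locus1, one for locus3, two for locus123929) and drops locus2, preserving the original order.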
Alternatively,
LC_ALL=C sort search-file.txt file.vcf |
awk '
NF == 1 { loc = $1; next }
$1 == loc
' > filtered_file
but this version may disturb the original order of lines.

Replace a value if this value is present in a txt file

Good morning everyone. I have a data.ped file made up of thousands of columns and hundreds of lines. The first 6 columns and the first 4 lines of the file look like this:
186 A_Han-4.DG 0 0 1 1
187 A_Mbuti-5.DG 0 0 1 1
188 A_Karitiana-4.DG 0 0 1 1
191 A_French-4.DG 0 0 1 1
And I have a ids.txt file that looks like this:
186 Ignore_Han(discovery).DG
187 Ignore_Mbuti(discovery).DG
188 Ignore_Karitiana(discovery).DG
189 Ignore_Yoruba(discovery).DG
190 Ignore_Sardinian(discovery).DG
191 Ignore_French(discovery).DG
192 Dinka.DG
193 Dai.DG
What I need is to replace (in Unix) the value in the first column of the data.ped file with the value in the second column of ids.txt taken from the line whose first column matches that value. For example, I want to replace the value "186" in the first column of data.ped with the value "Ignore_Han(discovery).DG" from the second column of ids.txt, because the first column of that line of ids.txt contains "186". So the output.ped file must look like this:
Ignore_Han(discovery).DG A_Han-4.DG 0 0 1 1
Ignore_Mbuti(discovery).DG A_Mbuti-5.DG 0 0 1 1
Ignore_Karitiana(discovery).DG A_Karitiana-4.DG 0 0 1 1
Ignore_French(discovery).DG A_French-4.DG 0 0 1 1
The values of the first column of the data.ped file are a subset of the values present in the first column of the ids.txt file, so there is always a match.
Edit:
I've tried with this:
awk 'NR==FNR{a[$1]=$2; next} $1 in a{$1=a[$1]; print}' ids.txt data.ped
but when I check the result with:
cut -f 1-6 -d " " output.ped
I get this strange output:
A_Han-4.DG 0 0 1 1y).DG
A_Mbuti-5.DG 0 0 1 1y).DG
A_Karitiana-4.DG 0 0 1 1y).DG
A_French-4.DG 0 0 1 1y).DG
while if I use this command:
cut -f 1-6 -d " " output.ped | less
I get this:
Ignore_Han(discovery).DG^M A_Han-4.DG 0 0 1 1
Ignore_Mbuti(discovery).DG^M A_Mbuti-5.DG 0 0 1 1
Ignore_Karitiana(discovery).DG^M A_Karitiana-4.DG 0 0 1 1
Ignore_French(discovery).DG^M A_French-4.DG 0 0 1 1
and I can't figure out why there is that ^M in every line.
This awk command builds the 186 -> Ignore_Han(discovery).DG mapping from ids.txt, then prints every line of data.ped, replacing the first field whenever a mapping exists:
awk 'NR==FNR{a[$1]=$2; next} $1 in a{$1=a[$1]} 1' ids.txt data.ped
output:
Ignore_Han(discovery).DG A_Han-4.DG 0 0 1 1
Ignore_Mbuti(discovery).DG A_Mbuti-5.DG 0 0 1 1
Ignore_Karitiana(discovery).DG A_Karitiana-4.DG 0 0 1 1
Ignore_French(discovery).DG A_French-4.DG 0 0 1 1
This is a classic awk task, with various modifications according to your requirements. Here we replace the first field of data.ped only if its value is found in ids.txt; otherwise the line is printed unchanged. If you would like to remove lines that don't match instead:
awk 'NR==FNR{a[$1]=$2; next} $1 in a{$1=a[$1]; print}' ids.txt data.ped
There is no need for the input files to be sorted and the order of the second file is preserved.
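Applied to the sample data from the question (recreated here so the snippet is self-contained):

```shell
# Recreate the sample inputs from the question.
cat > ids.txt <<'EOF'
186 Ignore_Han(discovery).DG
187 Ignore_Mbuti(discovery).DG
188 Ignore_Karitiana(discovery).DG
189 Ignore_Yoruba(discovery).DG
190 Ignore_Sardinian(discovery).DG
191 Ignore_French(discovery).DG
EOF
cat > data.ped <<'EOF'
186 A_Han-4.DG 0 0 1 1
187 A_Mbuti-5.DG 0 0 1 1
188 A_Karitiana-4.DG 0 0 1 1
191 A_French-4.DG 0 0 1 1
EOF
# First pass stores id -> label; second pass rewrites column 1 of data.ped.
awk 'NR==FNR{a[$1]=$2; next} $1 in a{$1=a[$1]} 1' ids.txt data.ped > output.ped
```

All four lines come out with the numeric key replaced by the label, in the original order.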
UPDATE:
If you have Ctrl-M (carriage return) characters in your inputs, remove them first with
tr -d '\r' < file > file.tmp && mv file.tmp file
for any file you use. (Note that tr -d '^M' with a literal caret and M would delete every ^ and M character, not carriage returns.) In general, I suggest running dos2unix on any text file that could contain \r characters, which usually come from DOS/Windows editing.
Alternatively, use the join command to join the two files (note that join requires both inputs to be sorted on the join field):
join ids.txt data.ped > temp
Then use the cut command to remove the first column:
cut -d " " -f 2- temp > output.ped
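On the question's sample data (recreated below, and already sorted on the first field, as join requires), the two steps produce the desired output:

```shell
# Recreated sample inputs; both files happen to be sorted on column 1 already.
cat > ids.txt <<'EOF'
186 Ignore_Han(discovery).DG
187 Ignore_Mbuti(discovery).DG
188 Ignore_Karitiana(discovery).DG
191 Ignore_French(discovery).DG
EOF
cat > data.ped <<'EOF'
186 A_Han-4.DG 0 0 1 1
187 A_Mbuti-5.DG 0 0 1 1
188 A_Karitiana-4.DG 0 0 1 1
191 A_French-4.DG 0 0 1 1
EOF
# join matches on the first field of both files and prepends the shared key;
# cut then drops that numeric key, leaving the label first.
join ids.txt data.ped | cut -d ' ' -f 2- > output.ped
```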

Defining a workflow for importing the RNA-seq count data

I am getting started with R and I have read some basics and syntax to get going. Now I am using miodin to define a project and a case-control study design.
now i using miodin to define a project and a case-control study design.
library(miodin)
mp <- MiodinProject(name = "MyProject", author = "Myself", path ="." )
mshow(mp)
I have a file named "randseq" on my computer's hard disk which looks like this:
ID LineA_1 LineA_2 LineA_3 LineA_4 LineA_5 LineB_1 LineB_2 LineB_3 LineB_4 LineB_5 LineB_6 LineB_7 LineB_8 LineB_9
ENSG00000000003 23 1 0 0 0 1 0 0 0 0 0 3 3 0
ENSG00000000005 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ENSG00000000419 0 0 0 0 0 0 0 0 4 0 0 0 0 0
Now I want to define a workflow for importing the RNA-seq count data from that file (which is in a folder named analysis_with_r) using the study design, execute the workflow, and export the dataset to the project folder. Below is my code:
mw <- MiodinWorkflow(name = 'MyProject')
mw <- mw + downloadRepositoryData(
  name = 'RNA downloader',
  accession = 'randseq',
  repository = '/Users/aarf/Desktop/analysis_with_r/randseq.txt',
  path = 'data',
  type = 'processed'
)
mw <- insert(mw,mp)
mshow(mw)
mw <- execute(mw)
saveDataFile(mp)
export(mp, 'dataset', 'randseq')
After running this code I get this error:
[INFO] Module terminated with the following error [ERROR] Unknown
repository/Users/aarf/Desktop/analysis_with_r/randseq.txt
[INFO] 1 modules were not executed [STATUS] Execution finished
Can anybody tell me what I am doing wrong here?

UFF58 File reader using R Program

I have an input UFF file with n channels. I want to read the UFF file and split the values by individual channel, then store the result for each channel in a separate file. Each channel always starts with '-1', '58', etc., and ends with '-1'.
Example channel_01 from the input UFF file:
-1
58
filename
22-Mar-2016 10:16:53
164
MnBrgFr-AC225R/N;50.9683995923 mV/m/s2
0 0 0 0 channel_01 0 0 NONE 0 0
2 1048576 1 0.00000E+00 8.19669930804e-06 0.00000E+00
17 0 0 0 Time s
1 0 0 0 MnBrgFr-AC225R/N m/s2
0 0 0 0 NONE NONE
0 0 0 0 NONE NONE
392.665124452 392.659048025 392.658404832 392.661676933 392.665882251 392.671989083
392.67634175 392.673743248 392.672398388 392.669360175 392.665533757 392.66088639
392.660390546 392.660975268 392.663400693 392.662668621 392.661209156 392.65498538
392.649463269 392.649580214 392.649259786 392.658580248 392.664715147 392.667051694
-1
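Since every dataset block both opens and closes with a line containing only -1, the split can be sketched at the shell level (a sketch rather than an R solution; the input below is a shortened stand-in for a real UFF file, and all file names are assumptions):

```shell
# Shortened stand-in for a UFF file with two dataset blocks.
cat > input.uff <<'EOF'
-1
58
header and data for channel_01
-1
-1
58
header and data for channel_02
-1
EOF
awk '
  # A line containing only -1 either opens or closes a block.
  /^[[:space:]]*-1[[:space:]]*$/ {
    if (inblock) { print > out; close(out); inblock = 0 }  # closing -1: finish block
    else { inblock = 1; n++; out = sprintf("channel_%02d.txt", n); print > out }
    next
  }
  inblock { print > out }  # body lines go to the current channel file
' input.uff
```

This writes channel_01.txt, channel_02.txt, and so on, each holding one complete block including its -1 delimiters.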

X11. How to know the full size of a window (with the size of its decorations)

I would like to retrieve the complete size of any window in X11 in order to automatically resize it.
So far I have used wmctrl, but the size seems to be incomplete.
For instance:
$ wmctrl -lG
0x00e0000f -1 0 0 1920 1200 tclogin1 KDE Desktop
0x010000ee -1 0 1160 1920 40 tclogin1 kicker
0x01200008 0 4 28 1920 1127 tclogin1 ...p7zip_9.13/bin - Shell No. 8 - Konsole
The kicker height is 40 and the screen resolution is 1920x1200, so if I wanted to resize my Konsole to fill the whole screen except the kicker, its size should be 1920x1160 (1200-40).
But when I do that, the Konsole overlaps the kicker.
So I assume this means that the window decorations are not taken into account here.
How can I know the size of the decorations that I would have to add to the window size given by wmctrl?
Thanks
$ cat allborders.sh
# assumptions:
#   window ids are at least 5 digits long
#   we don't need to bother with windows that have no name
#   "first argument" from the pipe is east (could be west)
#
WINDOW_IDS=$(xwininfo -int -root -tree |
  grep -v '[0-9]* (has no name)' |     # skip unnamed windows
  grep -Eo '[0-9]{5,}')                # keep the decimal window ids
for win in $WINDOW_IDS; do
  xprop -id "$win" |
    grep -Ee '^(_NET_FRAME_EXTENTS|WM_CLASS)' |  # only the two properties we need
    sed 's/.*= //' |                             # strip everything up to the value
    sed -e :a -e '$!N;s/\n/ /;ta' |              # join both lines into one
    grep '^[0-9]' |                              # skip windows without frame extents
    while read -r line; do
      set -- $line                               # word-split: extents, then class strings
      E=${1%,}; W=${2%,}; N=${3%,}; S=${4%,}     # strip trailing commas
      NAME=${5%,}
      CLASS=${6%,}
      echo "$CLASS $NAME $N $E $S $W"
    done
done
$ ./allborders.sh
"URxvt" "urxvt" 1 1 1 1
"XTerm" "aterm" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"Firefox" "Navigator" 18 1 3 1
"Gmpc" "gmpc" 18 1 3 1
"XTerm" "aterm" 0 0 0 0
"XTerm" "one" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"XTerm" "one" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"XTerm" "aterm" 0 0 0 0
"FbPager" "fbpager" 0 0 0 0
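Putting the pieces together: the full (outer) size of a window is the client geometry wmctrl reports plus the frame extents on each side. A small sketch using the Konsole geometry from the question (the extents values here are hypothetical examples, not taken from the output above):

```shell
# wmctrl -lG reported the Konsole client area as 1920x1127.
width=1920; height=1127
# Hypothetical _NET_FRAME_EXTENTS: left, right, top, bottom.
left=4; right=4; top=28; bottom=4
# Outer size = client size plus the decoration borders on each side.
outer_w=$((width + left + right))
outer_h=$((height + top + bottom))
echo "${outer_w}x${outer_h}"   # 1928x1159
```

When asking the window manager for a target size, subtract these extents from the desired outer size to get the client size to request.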
