I have lots (millions) of small log files in S3 whose names (date/time) identify them, i.e. servername-yyyy-mm-dd-HH-MM, e.g.
s3://my_bucket/uk4039-2015-05-07-18-15.csv
s3://my_bucket/uk4039-2015-05-07-18-16.csv
s3://my_bucket/uk4039-2015-05-07-18-17.csv
s3://my_bucket/uk4039-2015-05-07-18-18.csv
...
s3://my_bucket/uk4339-2015-05-07-19-23.csv
s3://my_bucket/uk4339-2015-05-07-19-24.csv
...
etc
From EC2, using the AWS CLI, I would like to simultaneously download all files for 2015 whose minute equals 16, for servers uk4339 and uk4338 only.
Is there a clever way to do this?
Also, if this is a terrible file structure in S3 for querying data, I would be extremely grateful for any advice on how to set it up better.
I can put a relevant aws s3 cp ... command into a loop in a shell/bash script to sequentially download the relevant files, but I was wondering if there is something more efficient.
As an added bonus I would like to row-bind the results together into one CSV too.
A quick mock CSV file can be generated in R using this line of code:
R> write.csv(data.frame(cbind(a1=rnorm(100),b1=rnorm(100),c1=rnorm(100))),file='uk4339-2015-05-07-19-24.csv',row.names=FALSE)
The csv that is created is uk4339-2015-05-07-19-24.csv. FYI, I will be importing the combined data into R at the end.
As you didn't answer my questions, nor indicate what OS you use, it is somewhat hard to make any concrete suggestions, so I will briefly suggest you use GNU Parallel to parallelise your S3 fetch requests to get around the latency.
Suppose you somehow generate a list of all the S3 files you want and put the resulting list in a file called GrabMe.txt like this
s3://my_bucket/uk4039-2015-05-07-18-15.csv
s3://my_bucket/uk4039-2015-05-07-18-16.csv
s3://my_bucket/uk4039-2015-05-07-18-17.csv
s3://my_bucket/uk4039-2015-05-07-18-18.csv
Then you can get them in parallel, say 32 at a time, like this:
parallel -j 32 echo aws s3 cp {} . < GrabMe.txt
or if you prefer reading left-to-right
cat GrabMe.txt | parallel -j 32 echo aws s3 cp {} .
You can obviously alter the number of parallel requests from 32 to any other number. At the moment, it just echoes the command it would run, but you can remove the word echo when you see how it works.
There is a good tutorial here, and Ole Tange (the author of GNU Parallel) is on SO, so we are in good company.
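For completeness, here is a rough sketch of how the list could be generated and the results combined, assuming the flat bucket layout shown in the question; downloads/, combined.csv and the grep pattern (servers uk4338/uk4339, year 2015, minute 16) are just illustrations to adapt:
# 1) Build GrabMe.txt from a bucket listing (the key name is the 4th column of `aws s3 ls`)
aws s3 ls s3://my_bucket/ | awk '{print $4}' | grep -E '^uk433[89]-2015-.*-16\.csv$' | sed 's|^|s3://my_bucket/|' > GrabMe.txt
# 2) Fetch in parallel (put `echo` in front first to dry-run, as above)
mkdir -p downloads
parallel -j 32 aws s3 cp {} downloads/ < GrabMe.txt
# 3) Row-bind into one CSV, keeping the header line from the first file only
awk 'FNR==1 && NR!=1 {next} {print}' downloads/*.csv > combined.csv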
I am trying to construct and submit an array job based on R on my university's HPC.
I'm used to submitting array jobs based on Matlab, and I have some doubts about how to translate the overall procedure to R. Let me report a very simple Matlab example and then my questions.
The code is based on 3 files:
"main" which does some preliminary operations.
"subf" which should be run by each task and uses some matrices created by "main".
a bash file which I qsub in the terminal.
1. main:
clear
%% Do all the operations that are common across tasks
% Here, as an example, I create
% 1) a matrix A that I will sum to the output of each task
% 2) a matrix grid; each task will use some rows of the matrix grid
m=1000;
A=rand(m,m);
grid=rand(m,m);
%% Tasks
tasks=10; %number of tasks
jobs=round(size(grid,1)/tasks); %I split the number of rows of the matrix grid among the tasks
2. subf:
%% Set task ID
idtemp=str2double(getenv('SGE_TASK_ID'));
%% Select local grid
if idtemp<tasks
grid_local= grid(jobs*(idtemp-1)+1: idtemp*jobs,:);
else
grid_local= grid(jobs*(idtemp-1)+1: end,:); %for the last task, we should take all the rows of grid that have been left
end
sg_local=size(grid_local,1);
%% Do the task
output=zeros(sg_local,1);
for g=1:sg_local
output(g,:)=sum(sum(A+repmat(grid_local(g,:),m,1)));
end
%% Save output by keeping track of task ID
filename = sprintf('output.%d.mat', ID);
save(filename,'output')
3. bash
#$ -S /bin/bash
#$ -l h_vmem=6G
#$ -l tmem=6G
#$ -l h_rt=480:0:0
#$ -cwd
#$ -j y
#Run 10 tasks where each task has a different $SGE_TASK_ID ranging from 1 to 10
#$ -t 1-10
#$ -N Example
date
hostname
#Output the Task ID
echo "Task ID is $SGE_TASK_ID"
export PATH=/xx/xx/matlab/bin:$PATH
matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID = $SGE_TASK_ID; subf; exit"
These are my questions:
Suppose I'm able to translate "main" and "subf" into R language. Should I be extra-careful about anything in particular concerning the parallelisation? For example, do I have to declare some parallel environment, such as parLapply or dopar?
In the "main" file I should also install some R packages. Can I do them locally in my folder directly at the beginning of the "main" file, or should I contact the HPC administrator to install them globally?
I could not find any example of a bash file for R in the instructions given by my university. Therefore, I have doubts about how to re-adapt the above bash file. I suppose that the only lines to change are:
export PATH=/xx/xx/matlab/bin:$PATH
matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID = $SGE_TASK_ID; subf; exit"
Could you give some hints on how I should change them?
The parallelization is handled by the HPC scheduler (each array task runs as its own R process), so I think the answer is "no": you don't need parLapply, dopar, or any other parallel machinery for this.
It depends on how they allow/enable R. On an HPC cluster that I use (not your school's), the individual nodes do not have direct internet access, so installing packages requires special care; this might be the exception, I don't know.
Recommendation: if there is a shared filesystem that both you and all of the nodes can access, then create an R "library" there that contains the installed packages you need, then use .libPaths(...) in your R scripts here to add that to the search path for packages. The only gotcha to this might be if there are non-R shared library (e.g., .dll, .so, .a) requirements. For this, either "docker" or "ask admins".
If you don't have a shared filesystem, then you might ask the cluster admins if they use/prefer docker images (you might provide an image or a DOCKERFILE to create one) or if they have preferred mechanisms for enabling various packages.
I do not recommend asking them to install the packages, for two reasons: First, think about them needing to do this with every person who has a job to run, for any number of programming languages, and then realize that they may have no idea how to do it for that language. Second, package versions are very important, and you asking them to install a package may install either a too-new package or overwrite an older version that somebody else is relying on. (See packrat and renv for discussions on reproducible environments.)
Bottom line, the use of a path you control (and using .libPaths) enables you to have complete control over package versions. If you have not been bitten by unintended consequences of newer-versioned packages, just wait ... congratulations, you've been lucky.
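A minimal sketch of that setup, assuming your home directory (or any other path you substitute) lives on the shared filesystem and that data.table is just a stand-in package name:
# One-off setup: create a personal library on a shared path and install into it
mkdir -p "$HOME/Rlibs"
export R_LIBS_USER="$HOME/Rlibs"      # R prepends this to .libPaths() when the directory exists
Rscript -e 'install.packages("data.table", repos = "https://cloud.r-project.org")'
# In each job, export R_LIBS_USER again (or call .libPaths("~/Rlibs") in the script)
Rscript -e 'print(.libPaths())'       # the shared library should now appear first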
I suggest adding source("main.R") to the beginning of subf.R, which would make your bash file perhaps as simple as
export PATH=/usr/local/R-4.x.x/bin:$PATH
Rscript /path/to/subf.R
(Noting that you'll need to reference Sys.getenv("SGE_TASK_ID") somewhere in subf.R.)
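Putting those pieces together, a sketch of the full submission script, mirroring the Matlab version above (the R install path and script location are the same placeholders used in this answer):
#$ -S /bin/bash
#$ -l h_vmem=6G
#$ -l tmem=6G
#$ -l h_rt=480:0:0
#$ -cwd
#$ -j y
#Run 10 tasks where each task has a different $SGE_TASK_ID ranging from 1 to 10
#$ -t 1-10
#$ -N Example
date
hostname
echo "Task ID is $SGE_TASK_ID"
#Point PATH at the cluster's R installation (placeholder path)
export PATH=/usr/local/R-4.x.x/bin:$PATH
#Optionally point R at a personal package library on the shared filesystem
export R_LIBS_USER=$HOME/Rlibs
#subf.R should source("main.R") and read Sys.getenv("SGE_TASK_ID") itself
Rscript /path/to/subf.R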
I was trying to store multiple diff outputs in one .patch file to keep versioning in one file instead of running multiple
diff -u f1 f2 > f1.patch
commands. Preferably I'd keep running
diff -u[other params?] f1 f2 >> f1.patch
to have one file containing all changes, which would allow me to later run patch on it and reconstruct f1 as of any given moment.
Unfortunately, patch fails with a file generated in this manner. It only seems to apply the first patch from the file and then quits with an error.
My question: is that possible with diff and patch? And if so, how?
Thank you in advance.
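For reference, a hedged sketch of one way such an incremental patch series is usually kept applicable: each appended diff is taken against the previous version (not the original), both sides share the same relative name, and patch is run without a file operand so each concatenated diff is treated as a separate patch. All file and directory names below are made up.
# Scratch layout so each diff compares a/f1 (previous version) with b/f1 (current)
mkdir -p a b
cp f1 a/f1 && cp f1 b/f1
# ... edit b/f1 ...
diff -u a/f1 b/f1 >> series.patch
cp b/f1 a/f1        # the new version becomes the baseline for the next diff
# ... edit b/f1 again ...
diff -u a/f1 b/f1 >> series.patch
# Rebuild the latest version: run in the directory holding the original f1,
# with no file operand, so the diffs in the series are applied one after another
patch -p1 < series.patch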
I have a series of sequential directories to gather files from on a linux server that I am logging into remotely and processing from an R terminal.
/r18_060, /r18_061, ... /r18_118, /r18_119
Each directory is for the day of year the data was logged on, and it contains a series of files with standard prefix such as "fl.060.gz"
I have to supply a function that contains multiple system() commands with a Linux glob for the day. I want to divide the year into 60-day intervals to make the QA/QC more manageable. Since I'm crossing from 099 to 100 in the glob, I have to use brace expansion to match the correct sequence of days.
ls -d /root_directory/r18_{0[6-9]?,1[0-1]?}
ls -d /root_directory/r18_{060..119}
All of these work fine when I manually input these globs into my bash shell, but I get an error when the system() function provides a similar command through R.
day_glob <- "{060..119}"
system(paste("zcat /root_directory/r18_", day_glob, "/fl.???.gz > tmpfile", sep = ""))
>gzip: cannot access '/root_directory/r18_{060..119}': No such file or directory
I know that this could be an error in the shell that the system() function operates in, but when I query that it gives the correct environment and user name
system("env | grep ^SHELL=")
>SHELL=/bin/bash
system("echo $USER")
>tgw
Does anyone know why this fails when it is passed through R's system() command? What can I do to get around this problem without removing the system call altogether? There are many scripts that rely on these functions, and re-writing the entire family of R scripts would be time prohibitive.
Previously I had been using 50-day intervals, which avoided this problem, but I thought this should be an easy change and would save one iteration of my QA/QC scripts per year. I'm new to the Linux OS, so I figured I might just be missing something obvious.
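One likely explanation for the symptom: on Unix, R's system() passes the command to /bin/sh, and brace expansion is a bash feature that a plain POSIX sh (dash on many Linux distributions) does not perform; note that $SHELL only reports your login shell, not the shell system() actually uses. A quick way to see the difference, and a possible workaround, using the question's placeholder paths:
# Where /bin/sh is dash (or another strict POSIX shell), the braces pass through literally:
sh -c 'echo zcat /root_directory/r18_{060..119}/fl.???.gz'
# bash performs the brace expansion:
bash -c 'echo zcat /root_directory/r18_{060..119}/fl.???.gz'
# Workaround from R: wrap the command in bash -c, e.g.
#   system(paste0("bash -c 'zcat /root_directory/r18_", day_glob, "/fl.???.gz > tmpfile'"))
# or build the directory list in R itself (sprintf("r18_%03d", 60:119)) and skip brace expansion.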
I am using premake5 version 0.06 to generate a vs2012 project that contains 3000+ files in a directory tree that goes about 2 levels deep.
The project contains 6 configurations and 3 platforms.
It takes approximately 2 minutes to bake the configurations and then about 10 seconds to process the action and write out the solution and project files.
I am wondering if this is the expected time for this number of files or whether I can optimise my premake scripts to improve the bake times?
I make use of a number of overrides, and I include my files using wildcards:
files {
path.join(includeDir,"**.h"),
path.join(includeDir,"**.inl"),
path.join(srcDir,"**.h"),
path.join(srcDir,"**.inl"),
path.join(srcDir,"**.c"),
path.join(srcDir,"**.cpp"),
}
Is it better to put all options under one filter?
For convenience of setup, I have options set up by different functions, so I effectively list the same filter multiple times for different options, e.g.
setupOption1 = function(args)
filters( "platforms:win" )
--set up option1
end
setupOption2 = function(args)
filters( "platforms:win" )
--set up option2
end
--with the project
project("myProject")
--global setup
language "C++"
kind "WindowedApp"
--individual options
setupOption1(args)
setupOption2(args)
That does sound a little long, but as this is still an alpha build, performance isn't being monitored as closely right now. There is an open pull request to reduce memory usage that might help.
In general, fewer filters should help, but I would be surprised if it made a dramatic difference (unless you really have a lot).
I found that using ** wildcards in a files filter slows the build right down.
filter {"files:**_win.cpp", "platforms:not win"}
flags "ExcludeFromBuild"
filter {"files:**_xone.cpp", "platforms:not xone"}
flags "ExcludeFromBuild"
filter {"files:**_ps4.cpp", "platforms:not ps4" }
flags "ExcludeFromBuild"
If I comment out these filters, the configuration now takes about 30 seconds to build.
I have a very large file (~10 GB) that can be compressed to < 1 GB using gzip. I'm interested in using sort FILE | uniq -c | sort to see how often a single line is repeated; however, the 10 GB file is too large to sort and my computer runs out of memory.
Is there a way to compress the file while preserving newlines (or an entirely different method all together) that would reduce the file to a small enough size to sort, yet still leave the file in a condition that's sortable?
Or is there any other method of finding out / counting how many times each line is repeated inside a large file (a ~10 GB CSV-like file)?
Thanks for any help!
Are you sure you're running out of memory (RAM) with your sort?
My experience debugging sort problems leads me to believe that you have probably run out of disk space for sort to create its temporary files. Also recall that the disk space used for sorting is usually in /tmp or /var/tmp.
So check out your available disk space with:
df -g
(some systems don't support -g; try -m (megabytes) or -k (kilobytes))
If you have an undersized /tmp partition, do you have another partition with 10-20GB free? If yes, then tell your sort to use that dir with
sort -T /alt/dir
Note that for sort version
sort (GNU coreutils) 5.97
The help says
-T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp;
multiple options specify multiple directories
I'm not sure if this means you can combine a bunch of -T /dir1 -T /dir2 options to reach your 10GB*sortFactor of space or not. My experience was that it only used the last dir in the list, so try to use one dir that is big enough.
Also, note that you can go to whatever dir you are using for sort and watch the activity of the temporary files used for sorting.
I hope this helps.
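On the compression angle of the original question: GNU sort can compress its own temporary files and you can cap its in-memory buffer, which together often make a 10 GB input sortable on a modest machine. A sketch, assuming a reasonably recent GNU coreutils (check sort --version for flag support):
# -T: put temporary runs on a partition with plenty of free space
# -S: limit the RAM sort buffer (sort spills to disk beyond this)
# --compress-program: compress the temporary runs with gzip
sort -T /alt/dir -S 1G --compress-program=gzip FILE | uniq -c | sort -rn > line_counts.txt
# If the input itself is kept gzipped, stream it in instead of storing 10 GB uncompressed:
zcat FILE.gz | sort -T /alt/dir -S 1G --compress-program=gzip | uniq -c | sort -rn > line_counts.txt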
As you appear to be a new user here on S.O., allow me to welcome you and remind you of four things we do:
1) Read the FAQs.
2) Please accept the answer that best solves your problem, if any, by pressing the checkmark sign. This gives the respondent with the best answer 15 points of reputation. It is not subtracted (as some people seem to think) from your reputation points ;-)
3) When you see good Q&A, vote them up by using the gray triangles, as the credibility of the system is based on the reputation that users gain by sharing their knowledge.
4) As you receive help, try to give it too, by answering questions in your area of expertise.
There are some possible solutions:
1 - Use any text-processing language (perl, awk) to extract each line, save the line number and a hash of that line, and then compare the hashes.
2 - Can/want to remove the duplicate lines, leaving just one occurrence per file? You could use a script (command) like:
awk '!x[$0]++' oldfile > newfile
3 - Why not split the file according to some criterion? Supposing all your lines begin with letters:
- break your original_file into smaller files, one per starting letter: grep "^a" original_file > a_file
- sort each small file: a_file, b_file, and so on
- verify the duplicates, count them, do whatever you want.
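A shell sketch of option 3, assuming every line starts with a lowercase letter (anything else would need its own bucket) and that each per-letter chunk fits in memory; file names are placeholders:
# Split by first character, count duplicates within each chunk,
# then merge the per-chunk counts and show the most repeated lines first
: > counts.txt
for c in {a..z}; do
    grep "^$c" original_file | sort | uniq -c >> counts.txt
done
sort -rn counts.txt | head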