In a directory I have files with the same structure that are updated twice a month. I need to read the three latest files by month. For example, for October I have to read the three latest monthly files: if there is one made in October, then I also need to read September's and August's files. If the October file is not there yet, then I need September's, August's and July's.
Also, if there are two editions in one month, I need to read only the latest one.
I have listed files in my directory using list.files and it gives me this output:
list.files(path="./input/focus_economics/", full.names = TRUE)
[1] "./input/focus_economics/focuseconomics 2022-05-04.csv"
[2] "./input/focus_economics/focuseconomics 2022-06-15.csv"
[3] "./input/focus_economics/focuseconomics 2022-08-08.csv"
[4] "./input/focus_economics/focuseconomics 2022-09-08.csv"
[5] "./input/focus_economics/focuseconomics 2022-09-16.csv"
So, as it is October now, from these files I would need to read "focuseconomics 2022-09-16.csv" and "focuseconomics 2022-08-08.csv".
Is there a simple way to do this?
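One possible approach (just a sketch, assuming the filenames always follow the "focuseconomics YYYY-MM-DD.csv" pattern shown above): parse the date out of each name, keep only the latest edition within each month, and then read whichever of the three consecutive months ending at the most recent available month are actually there.
files  <- list.files(path = "./input/focus_economics/", pattern = "\\.csv$", full.names = TRUE)
# extract the date from each filename and derive its month
dates  <- as.Date(sub(".*focuseconomics (\\d{4}-\\d{2}-\\d{2})\\.csv$", "\\1", files))
months <- format(dates, "%Y-%m")
# within each month keep only the latest edition (named vector: month -> file path)
latest_per_month <- sapply(split(seq_along(files), months),
                           function(idx) files[idx][which.max(dates[idx])])
# the three consecutive months ending at the most recent month that has a file
newest        <- as.Date(paste0(max(months), "-01"))
wanted_months <- format(seq(newest, by = "-1 month", length.out = 3), "%Y-%m")
# read the files for whichever of those months exist
wanted    <- latest_per_month[intersect(wanted_months, names(latest_per_month))]
data_list <- lapply(wanted, read.csv)
With the files listed above, max(months) is "2022-09", so this would read the 2022-09-16 and 2022-08-08 files (there is no July file, so it is simply skipped).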
I have over 7,000 .wav files in one folder which need to be split up into groups of 12 and placed into separate smaller folders.
The files correspond to 1-minute recordings taken every 5 minutes, so every 12 files corresponds to 1 hour.
The files are stored on my PC in the working directory: "E:/Audiomoth Files/Winter/Rural/Emma/"
Examples of the file names are as follows:
20210111_000000.wav
20210111_000500.wav
20210111_001000.wav
20210111_001500.wav
20210111_002000.wav
20210111_002500.wav
20210111_003000.wav
20210111_003500.wav
20210111_004000.wav
20210111_004500.wav
20210111_005000.wav
20210111_005500.wav
which would be one hour, then
20210111_010000.wav
20210111_010500.wav
20210111_011000.wav
and so on.
I need the files split into groups of 12 and then I need a new folder to be created in: "E:/Audiomoth Files/Winter/Rural/Emma/Organised Files"
With the new folders named 'Hour 1', 'Hour 2' and so on.
What is the exact code I need to do this?
As is probably very obvious I'm a complete beginner with R so if the answer could be spelt out in layman's terms that would be brilliant.
Thank you in advance
Something like this?
I intentionally used copy instead of move in order to prevent data from being lost. I edited the answer so the files keep their old names. In order to give them new names, replace name in the last line with "Part_", file, ".wav", for example, so that each of the twelve copies gets a distinct name (see the one-line variant after the code below).
# get a list of the paths to all the files
old_files <- list.files("E:/Audiomoth Files/Winter/Rural/Emma/", pattern = "\\.wav$", full.names = TRUE)
# create the new parent directory
dir.create("E:/Audiomoth Files/Winter/Rural/Emma/Organised Files")
# start a loop, one iteration per group of 12 files (ceiling() covers a final, partial group)
for(i in 1:ceiling(length(old_files)/12)){
  # create a directory for the hour
  directory <- paste("E:/Audiomoth Files/Winter/Rural/Emma/Organised Files", "/Hour_", i, sep = "")
  dir.create(directory)
  # select the files to copy: the i-th block of 12,
  # i.e. files (i*12 - 11) to (i*12), capped at the total number of files
  filesToCopy <- old_files[(i*12 - 11):min(i*12, length(old_files))]
  # for those files run another loop:
  for(file in seq_along(filesToCopy)){
    # get the name of the file
    name <- basename(filesToCopy[file])
    # copy the file into the hour directory
    file.copy(filesToCopy[file], paste(directory, "/", name, sep = ""))
  }
}
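If you do want the copies renamed as suggested above, the last file.copy() line could look like this instead (using the inner loop index file so that each of the twelve copies in an hour folder gets a distinct name):
# hypothetical renaming variant: copies become "Part_1.wav" ... "Part_12.wav" inside each hour folder
file.copy(filesToCopy[file], paste(directory, "/Part_", file, ".wav", sep = ""))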
When you're not entirely sure, I'd recommend copying the files instead of moving them directly (which is what this script does). You can delete the originals manually later on, after you have checked that everything worked and all the data is where it should be. Otherwise data can be lost through even small errors, which we do not want to happen.
I have downloaded some data from the following site as a zip file and extracted it onto my computer. Now, I'm having trouble trying to open the included json data files.
Running the following code:
install.packages("rjson")
library("rjson")
comp <- fromJSON("statsbomb/data/competitions")
gave this error:
Error in fromJSON("statsbomb/data/competitions") : unexpected character 's'
Also, is there a way to load all files at once instead of writing individual statements each time?
Here is what I did (Unix system).
Clone the GitHub repo (note its location):
git clone https://github.com/statsbomb/open-data.git
Set the working directory (the directory to which you cloned the repo or extracted the zip file):
setwd("path to directory where you cloned the repo")
Read the data:
jsonlite::fromJSON("competitions.json")
With rjson: rjson::fromJSON(file="competitions.json")
To load all the files at once, move all the .json files to a single directory and use lapply/assign to assign the resulting objects to your environment.
Result (single file):
  competition_id season_id             country_name
1             37         4                  England
2             43         3            International
3             49         3 United States of America
4             72        30            International
         competition_name season_name              match_updated
1 FA Women's Super League   2018/2019    2019-06-05T22:43:14.514
2          FIFA World Cup        2018 2019-05-14T08:23:15.306297
3                    NWSL        2018 2019-05-17T00:35:34.979298
4       Women's World Cup        2019 2019-06-21T16:45:45.211614
             match_available
1    2019-06-05T22:43:14.514
2 2019-05-14T08:23:15.306297
3 2019-05-14T08:02:00.567719
4 2019-06-21T16:45:45.211614
The function fromJSON takes a JSON string as its first argument unless you specify that you are giving it a file (fromJSON(file = "competitions.json")).
The error you mention comes from the function trying to parse 'statsbomb/data/competitions' as a JSON string rather than as a file name. In JSON, however, everything is enclosed in brackets and strings are inside quotation marks, so the s from "statsbomb" is not a valid first character.
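To see the difference, compare parsing a literal JSON string with reading a file (the path below is just the question's path with the .json extension added; adjust it to wherever you extracted the data):
rjson::fromJSON('{"competition_id": 37}')                    # first argument is treated as JSON text
rjson::fromJSON(file = "statsbomb/data/competitions.json")   # file = ... makes it read and parse the file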
To read all the json files you could do:
lapply(dir("open-data-master/", pattern = "\\.json$", recursive = TRUE), function(x) {
  assign(gsub("/", "_", x), fromJSON(file = paste0("open-data-master/", x)), envir = .GlobalEnv)
})
However, this will take a long time to complete! You should probably break the job up a little, e.g. split the list of files obtained with dir into chunks of 50 before running the lapply call.
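A rough sketch of that chunking idea, assuming the repository was cloned or extracted into "open-data-master/":
json_files <- dir("open-data-master/", pattern = "\\.json$", recursive = TRUE)
# batches of 50 file paths
chunks <- split(json_files, ceiling(seq_along(json_files) / 50))
for (chunk in chunks) {
  # each batch can be loaded (and checked) on its own before moving on to the next
  for (x in chunk) {
    assign(gsub("/", "_", x),
           rjson::fromJSON(file = paste0("open-data-master/", x)),
           envir = .GlobalEnv)
  }
}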
I have filenames in a directory like:
ACCT_GA12345_2015-01-10.xml
ACCT_GA12345_2015-01-09.xml
ACCT_GDC789g_2015-01-09.xml
ACCT_GDC567g_2015-01-09.xml
ACCT_GDC567g_2015-01-08.xml
ACCT_GCC7894_2015-01-01.xml
ACCT_GCC7894_2015-01-02.xml
ACCT_GAC7884_2015-02-01.xml
ACCT_GAC7884_2015-01-01.xml
I want to keep only the latest file for each account in the folder. The latest file can be determined from the file name alone (NOT the file's date stamp). For example, ACCT 12345 has files from 1/10 and 1/09; I need to delete the 1/09 file and keep only the 1/10 file. For ACCT 789g there is only one file, so I keep that file. For ACCT 567g the latest file is 1/09, so I have to remove 1/08 and keep 1/09. So the key for the latest file is the ACCT plus the maximum date for that ACCT.
I would need the final list of files as:
ACCT_GA12345_2015-01-10.xml
ACCT_GDC789g_2015-01-09.xml
ACCT_GDC567g_2015-01-09.xml
ACCT_GCC7894_2015-01-02.xml
ACCT_GAC7884_2015-02-01.xml
Can someone help me with this command in Unix? Any help is appreciated.
I'd do something like this... To test, start with the ls command; when the listing shows what you want to delete, switch to rm.
ls ACCT_{GDC,GA1}*-{09,10}.xml
This will list any GDC or GA1 files that end in 09 or 10. You can play with combinations and different values until the listing shows exactly the set of files you want deleted. Once it does, just change ls to rm and you should be golden.
With some more info I could help you out further. To test this out I did:
touch ACCT_{GDC,GA1}_{01..10}_{05..10}.xml
This will make 120 different dummy files with different combinations. Make a directory, run this command, and get your hands dirty; that is the best way to learn the Linux CLI. Also, 65% of the commands you need you will learn, understand, use, and then never use again... so learn how to teach yourself with man pages and set up a spot to play around in.
I'm on a Mac, and I format my files with "gnatpp -rnb *.ads *.adb", but it refuses to format some files. When I create a new file it works, and after a while, as the file grows/ages (I couldn't find a pattern), it stops working. It spits out the same file as the input, even when I try with -pipe.
What's happening? Is there a limit that I could be reaching somewhere? I can't find anything relevant in the help and I get no relevant error message. The files are all less than 200 lines long.
There is no error message:
maquette:src nraynaud$ gnatpp -w -v -rnb -M150 *.ads
gnatpp 4.9.0
Copyright 2003-2014, AdaCore.
TMPDIR = "/private/var/folders/rb/4gj9qf4j4_765mm9dlfmd0k00000gn/T"
[13] button.ads
[12] driver.ads
[11] last_chance_handler.ads
[10] leds.ads
[9] motor.ads
[8] registers.ads
[7] stm32f4-busses.ads
[6] stm32f4-genericperipheral.ads
[5] stm32f4-gpio.ads
[4] stm32f4-reset_clock_control.ads
[3] stm32f4-sysconfig_control.ads
[2] stm32f4-tim.ads
[1] stm32f4.ads
gnatpp is complaining because your file does not compile; a compilable file is required for gnatpp to work.
From the GNAT User's Guide:
To produce a reformatted file, gnatpp generates and uses the ASIS tree for the input source and thus requires the input to be syntactically and semantically legal. If this condition is not met, gnatpp will terminate with an error message; no output file will be generated.
I am trying to use the Windows Task Scheduler to run an R script every two hours. The script gathers some tweets through the Twitter API, runs a sentiment analysis, produces two graphs, and saves them in a directory. The problem is that when the script runs again, it replaces the files with that name that already exist in the directory.
As an example, when I used the pdf("file") function, it ran fine the first time because no file with that name already existed in the directory. The problem is that I want the R script to run every other hour, so I need some solution that creates a new file in the directory instead of replacing the existing one, just like what happens when a file is downloaded multiple times in Google Chrome.
I'd just time-stamp the file name.
> library(lubridate)  # now() comes from the lubridate package
> filename = paste("output-",now(),sep="")
> filename
[1] "output-2014-08-21 16:02:45"
Use any of the standard date formatting functions to customise to taste - maybe you don't want spaces and colons in your file names:
> filename = paste("output-",format(Sys.time(), "%a-%b-%d-%H-%M-%S-%Y"),sep="")
> filename
[1] "output-Thu-Aug-21-16-03-30-2014"
If you want the behaviour of adding a number to the file name, then something like this:
serialNext = function(prefix){
  if(!file.exists(prefix)){return(prefix)}
  i = 1
  repeat {
    f = paste(prefix, i, sep = ".")
    if(!file.exists(f)){return(f)}
    i = i + 1
  }
}
Usage. First, "foo" doesn't exist, so it returns "foo":
> serialNext("foo")
[1] "foo"
Write a file called "foo":
> cat("fnord",file="foo")
Now it returns "foo.1":
> serialNext("foo")
[1] "foo.1"
Create that, then it returns "foo.2" and so on...
> cat("fnord",file="foo.1")
> serialNext("foo")
[1] "foo.2"
This kind of thing can break if more than one process might be writing a new file, though: if both processes check at the same time, there is a window of opportunity in which neither sees "foo.2" and both think they can create it. The same thing can happen with timestamps if two processes try to write new files at the same time.
Both these issues can be resolved by generating a random UUID and pasting that onto the filename; otherwise you need something that is atomic at the operating-system level.
But for a twice-hourly job I reckon a timestamp down to minutes is probably enough.
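For example, with the uuid package (an extra dependency, not something the timestamp approach above needs):
library(uuid)  # assumed to be installed; UUIDgenerate() returns a random UUID string
filename <- paste0("output-", format(Sys.time(), "%Y%m%d-%H%M%S"), "-", UUIDgenerate())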
See ?files for file manipulation functions. You can check whether a file exists with file.exists, and then either rename the existing file or create a different name for the new one.
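A minimal sketch of that second option, with "output.pdf" as a placeholder name: move any existing file out of the way before the script writes a fresh one.
if (file.exists("output.pdf")) {
  # keep the old copy under a time-stamped name instead of overwriting it
  file.rename("output.pdf", paste0("output-", format(Sys.time(), "%Y%m%d%H%M%S"), ".pdf"))
}
# now the script can safely write a new "output.pdf"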