Find method - multiple searches running simultaneously - cefsharp

How do I run multiple searches simultaneously on a webpage using the Find method? For instance, I want to find both the words "Tammy" and "Family" on the same webpage. The documentation says you can use the "identifier" parameter, but I've tried the following and it only highlights the last Find...
browser.Find(1, "Tammy", True, False, False)
browser.Find(2, "Family", True, False, False)
Any help on this would be greatly appreciated.

Related

Using two templates with custom name in exams2pdf

I am using exams2pdf() to generate two PDF files as:
exams2pdf(file = "ICvar.Rmd",
          name = "icvar",
          engine = "knitr",
          verbose = FALSE,
          texdir = "tmptex",
          template = c("exam", "solution"))
But I get this error:
Error in base::file(out_tex[j], open = "w+", encoding = encoding) : invalid 'description' argument
Any ideas why?
Also, is it possible to use custom templates in exams2nops() like template = c("exam", "solution") to produce two PDF files, the first with the questions and the second with the solutions? I read the vignette but could not find any information on this, and adding template to the options in exams2nops() does nothing.
The problem is that you only provide a single name = "icvar" but actually need two distinct names, one for template = "exam" and one for template = "solution". Hence, the lack of a second name leads to the somewhat cryptic error message. A simple solution is to provide a vector of two names, say name = c("icex", "icsol").
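For example, the call from the question could then look like this (just a sketch using the two illustrative names from above):
exams2pdf(file = "ICvar.Rmd",
          name = c("icex", "icsol"),
          engine = "knitr",
          verbose = FALSE,
          texdir = "tmptex",
          template = c("exam", "solution"))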
Additionally, I have just committed a fix to the development version on R-Forge that points this out more clearly in ?exams2pdf, throws an intelligible warning, and provides a workaround. If you use your code above, name is changed to name = c("icvar_exam", "icvar_solution") automatically.
As for exams2nops(): Internally this essentially sets up a standardized template via make_nops_template() and then calls exams2pdf(). No additional templates can be provided. The reason for this is that all the convenient options in the NOPS template (e.g., adding an intro, selecting the language, switching to twocolumn layout, etc.) would only work for the NOPS template but not the other templates provided. Therefore, if you want to produce a solution sheet you have to use another call to exams2pdf() (or exams2html() or exams2pandoc()) after setting the same random seeds as for exams2nops().
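A rough sketch of that two-step workflow (the seed, n, directory, and name are only placeholders):
set.seed(403)
exams2nops("ICvar.Rmd", n = 10, dir = "nops_out")        # NOPS exam with the questions
set.seed(403)                                            # reset to the same seed
exams2pdf("ICvar.Rmd", n = 10, template = "solution",
          name = "icsol", dir = "nops_out")              # matching solution sheets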

Navigate to a list of directories without using loop in r

I want to navigate to specific directories and process the files found within the folders. I have many model ('models') and experiment ('exps') combinations, and for each combination I want to navigate to its specific directory and process the data held within.
I can do this using:
models <- c("model1", "model2", "model3")
exps <- c("exp21", "exp42", "exp54")
for (g in 1:length(models)) {
  gfilepath = file.path("C:/Users/Documents/data/models", models[g], "Netcdfs")
  for (r in 1:length(exps)) {
    rfilepath = file.path(gfilepath, exps[r])
    list.files(rfilepath)
    # Do my processing here
  }
}
Can I do this without using a loop? I was looking at using apply with just one vector of strings to incorporate into the file path:
test <- apply(c("model1", "model2", "model3"), function(x) {
  filename = file.path("C:/Users/Documents/data/models", x)
})
but I got:
Error in match.fun(FUN) : argument "FUN" is missing, with no default
I thought the 'file.path' would be the function?
Is it possible to navigate to the specific folders another way (besides the for loop)? I think maybe using a recursive option in a function may help, but I'm not sure.
The help file for list.files, which can be accessed using ?list.files, contains the argument you require, namely recursive = TRUE:
list.files(all.files = TRUE, recursive = TRUE)
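If the goal is also to avoid the explicit nested loop, one option is to build every model/experiment path up front and then iterate with lapply (a sketch assuming the same directory layout as in the question):
models <- c("model1", "model2", "model3")
exps <- c("exp21", "exp42", "exp54")
# all model/experiment combinations in one data frame
combos <- expand.grid(model = models, exp = exps, stringsAsFactors = FALSE)
# file.path() is vectorised, so this builds every directory path at once
paths <- file.path("C:/Users/Documents/data/models", combos$model, "Netcdfs", combos$exp)
# list the files in each directory; replace list.files with the actual processing
res <- lapply(paths, list.files)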

R: extracting files using dir() function with pattern being identified in a matrix

I am currently trying to create a function that will extract file paths located in a specific directory. These files are extracted if the pattern identified in the dir() function is present within a specific filepath. Each pattern that I am looking for is stored within a matrix. There are multiple files associated with each pattern. I am currently testing this function on 2 file names, but will be adding on several more once it is up and running.
I am running into two issues within this approach:
I don't believe the pattern being pulled from the matrix is identified as a string in the dir() function (pattern = data_sets[i,2]) - there are no quotes around it.
I receive the error: "replacement has length zero" (this may be related to the first issue above, but I am unsure).
I am wondering if someone can help me turn the pattern column within my matrix into a string that will be recognized as a pattern within the dir() function in my code, or alternatively identify an easier way to get the end result.
Here's the code I am currently using:
library(data.table)

data_sets <- c("data1", "data2")
extract_most_recent_files_to_source <- function(data_sets) {
  data_sets <- as.data.table(data_sets)
  data_sets[, pattern := paste0("\"^[", data_sets, "]\"")]
  data_sets[, pattern := gsub("\\\\", "", pattern)]
  data_sets <- as.matrix(data_sets)
  df_mod <- do.call("rbind", lapply(1:nrow(data_sets), function(i) {
    files <- dir("filepath", pattern = as.character(data_sets[i, 2]), full.names = TRUE, ignore.case = TRUE)
    files <- as.data.table(files)
    return(files)
  }))
}
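In case it helps, one simpler route is to skip the matrix and quoting steps entirely and pass each element of the character vector straight to dir() as the pattern (a sketch; "filepath" stands in for the real directory):
library(data.table)

data_sets <- c("data1", "data2")
# dir() takes the pattern as a plain character string, so no extra quotes are needed
df_mod <- rbindlist(lapply(data_sets, function(p) {
  data.table(files = dir("filepath", pattern = p, full.names = TRUE, ignore.case = TRUE))
}))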

'NULL' and 'NA' issue when scraping websites with ContentScraper in R?

I have a very long list of websites that I'd like to scrape for their title, description, and keywords.
I'm using ContentScraper from the Rcrawler package, and I know it's working, but there are certain URLs that it can't handle, and it just generates the error message below. Is there any way to skip that particular URL instead of stopping the entire execution?
Error: 'NULL' does not exist in current working directory
I've looked at this, but I don't think it has any answer to it. Here is the code I'm using. Any advice is greatly appreciated.
Web_Info <- ContentScraper(Url = Websites_List,
                           XpathPatterns = c('/html/head/title',
                                             '//meta[@name="description"]/@content',
                                             '//meta[@name="keywords"]/@content'),
                           PatternsName = c("Title", "Description", "Keywords"),
                           asDataFrame = TRUE)
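One way to skip the failing URLs is to call ContentScraper() one URL at a time inside tryCatch(), so an error on one site returns NA instead of stopping everything (a sketch using the same XPath patterns as above):
Web_Info <- lapply(Websites_List, function(u) {
  tryCatch(
    ContentScraper(Url = u,
                   XpathPatterns = c('/html/head/title',
                                     '//meta[@name="description"]/@content',
                                     '//meta[@name="keywords"]/@content'),
                   PatternsName = c("Title", "Description", "Keywords"),
                   asDataFrame = TRUE),
    error = function(e) NA  # skip URLs that fail instead of aborting the whole run
  )
})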

R how to get information about listed files in the excel

I've been digging through the questions raised here but failed to find an answer.
The problem I'm facing is the following:
I need to get the list of files which were uploaded to the SharePoint list (around 3300 items); this I was able to do.
The list needs to be extracted to Excel; I also managed to do this.
Each item of that list has to contain the last modification date; this is where I struggle to get results.
I know that for this I need to use a combination of the list.files and file.info functions, but I'm struggling to get the results.
The following code doesn't work:
setwd("X:/")
ls<- file.info(list.files(path = "link",
pattern = NULL, all.files = TRUE, full.names = TRUE, recursive = TRUE, ignore.case = FALSE, include.dirs = FALSE, no.. = FALSE))
Has anyone done something similar and could help me understand what the logic should be here?
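A rough sketch of that logic (the path and the writexl export are only placeholder choices here; any Excel writer would do):
library(writexl)  # assumed here just for the Excel export

files <- list.files(path = "X:/link", full.names = TRUE, recursive = TRUE)
info <- file.info(files)                        # mtime holds the last modification date
out <- data.frame(file = files, last_modified = info$mtime)
write_xlsx(out, "file_list.xlsx")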
