I'm trying to automate adding my signature to my photos using the magick package - but I'm having problems saving them.
Here's my code:
file_list <- dir(".", pattern = "*.png")
mySig <- image_read("~/Desktop/sig1.png")
mySig <- mySig %>% image_resize("300x300")
newPhotos <-
for (i in 1:length(file_list)) {
  img <- image_read(file_list)
  photoSig <- img %>%
    image_composite(mySig, gravity = "southeast", offset = "+20+20")
  newName <- as.character(paste0("New_", file_list))
  image_write(photoSig, newName, format = "png")
}
And I get the following error regardless of whether the code is run inside or outside of a loop:
Error in file(con, "wb") : invalid 'description' argument
On top of this - even when I don't automate the name and just have the output called "test" or something - it only saves one of the photos and not all of them. What am I doing wrong? Thanks!
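A likely culprit is that the whole file_list vector is passed to image_read() and to paste0() inside the loop, so newName ends up being a character vector of length greater than one, which file() rejects with the "invalid 'description' argument" error. A minimal sketch of the loop with the vectors indexed by i, assuming the PNGs sit in the working directory:

library(magick)

file_list <- dir(".", pattern = "\\.png$")
mySig <- image_read("~/Desktop/sig1.png") %>% image_resize("300x300")

for (i in seq_along(file_list)) {
  img <- image_read(file_list[i])  # read one file per iteration
  photoSig <- img %>%
    image_composite(mySig, gravity = "southeast", offset = "+20+20")
  image_write(photoSig, paste0("New_", file_list[i]), format = "png")  # one output name per file
}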
#Crop Vietnam
lst <- raster(destPath, varname = 'tbb_14')
lst_crop <- crop(lst, ext_VN)
lst_crop <- lst_crop - 273.15
#Write output raster
outFilename <- paste(substr(j,1, nchar(j)-15),'_tbb_14.tif', sep='')
writeRaster(lst_crop, paste(extractedFolder + '/tbb_14/', outFilename, sep=""))
How should I fix this problem? I've tried changing the path, granting the user permissions, and searching the Internet for a solution, but nothing has worked.
You do this
lst <- raster(destPath, varname = 'tbb_14')
That fails, suggesting that destPath does not exist. You can see for yourself with
file.exists(destPath)
Once you have the correct filename, you can also replace "raster" with "terra", like this
library(terra)
x <- rast(destPath, 'tbb_14')
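Once rast() can read the file, the rest of the workflow from the question carries over to terra directly. A sketch, assuming ext_VN, j, and extractedFolder are defined as in your script (note that '+' does not concatenate strings in R, which is another problem with the original writeRaster() call):

library(terra)

x <- rast(destPath, 'tbb_14')
x_crop <- crop(x, ext_VN) - 273.15  # crop to Vietnam and convert Kelvin to Celsius

# build the output path with paste0()/file.path() rather than '+'
outFilename <- paste0(substr(j, 1, nchar(j) - 15), '_tbb_14.tif')
writeRaster(x_crop, file.path(extractedFolder, 'tbb_14', outFilename))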
I would like to unzip and read in a shape file from the web in R without relying on rgdal. I found the read.shp function of the fastshp package that can apparently accomplish this without rgdal installed in the environment, however, I'm having trouble implementing.
I would like a function that can unzip and then read in the shape file akin to what's found in this SO post but for the read.shp function. I tried the following but to no avail:
dlshape = function(shploc, format) {
  temp = tempfile()
  download.file(shploc, temp)
  unzip(temp)
  shp.data <- sapply(".", function(f) {
    f <- file.path(temp, f)
    return(read.shp(".", format))
  })
}
shp_object<-dlshape('https://www2.census.gov/geo/tiger/TIGER2017/COUNTY/tl_2017_us_county.zip', 'polygon')
Error in read.shp(".", format) : unused argument (format)
I also tried the following:
dlshape = function(shploc) {
  temp = tempfile()
  download.file(shploc, temp)
  unzip(temp)
  shp.data <- sapply(".", function(f) {
    f <- file.path(temp, f)
    return(read.shp("."))
  })
}
shp_object<-dlshape('https://www2.census.gov/geo/tiger/TIGER2017/COUNTY/tl_2017_us_county.zip')
Error in file(shp.name, "rb") : cannot open the connection
In addition: Warning messages:
1: In file(shp.name, "rb") : 'raw = FALSE' but '.' is not a regular file
2: In file(shp.name, "rb") :
I suspect it has to do with the fact that in the function read.shp() I'm feeding it the folder name and not the .shp name (for readOGR that works but not for read.shp). Any assistance is much appreciated.
You can use unzip() from utils and read_sf() from sf to unzip and then load your shapefile. Here is a working example:
# Create temp files
temp <- tempfile()
temp2 <- tempfile()
# Download the zip file and save to 'temp'
URL <- "https://www2.census.gov/geo/tiger/TIGER2017/COUNTY/tl_2017_us_county.zip"
download.file(URL, temp)
# Unzip the contents of the temp and save unzipped content in 'temp2'
unzip(zipfile = temp, exdir = temp2)
# Read the shapefile. Alternatively make an assignment, such as f<-sf::read_sf(your_SHP_file)
sf::read_sf(temp2)
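If you would rather keep the dlshape() helper from the question, the same steps can be wrapped up: download, unzip, then point read_sf() at the extracted .shp file rather than at the folder. A sketch:

library(sf)

dlshape <- function(shploc) {
  temp  <- tempfile(fileext = ".zip")
  temp2 <- tempfile()
  download.file(shploc, temp, mode = "wb")
  unzip(temp, exdir = temp2)
  # locate the extracted .shp file and read it
  shpfile <- list.files(temp2, pattern = "\\.shp$", full.names = TRUE)
  read_sf(shpfile)
}

shp_object <- dlshape("https://www2.census.gov/geo/tiger/TIGER2017/COUNTY/tl_2017_us_county.zip")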
I am trying to extract some info from a number of links.
I am applying the following function:
walk(filinginfohref, function(x) {
  download.file(x, destfile = paste0("D:/deleteme/", x), quiet = FALSE)
})
However it returns the following error:
Error in download.file(x, destfile = paste0("D:/deleteme/", x), quiet = FALSE) :
cannot open destfile 'D:/deleteme/https://www.sec.gov/Archives/edgar/data/1750/000104746918004978/0001047469-18-004978-index.htm', reason 'Invalid argument'
Which I assume is because I cannot store the link as the destination file.
I need to somehow preserve the link from which each file is downloaded.
How can I overcome this issue?
Data
filinginfohref <- c("https://www.sec.gov/Archives/edgar/data/1750/000104746918004978/0001047469-18-004978-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746917004528/0001047469-17-004528-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746916014299/0001047469-16-014299-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746915006136/0001047469-15-006136-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746914006243/0001047469-14-006243-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746913007797/0001047469-13-007797-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746912007300/0001047469-12-007300-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746911006302/0001047469-11-006302-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746910006500/0001047469-10-006500-index.htm",
"https://www.sec.gov/Archives/edgar/data/1750/000104746909006783/0001047469-09-006783-index.htm"
)
Each link has / characters that get interpreted as folder separators, so the path being built does not exist.
Please replace destfile = paste0("D:/deleteme/",x) with destfile = paste0("D:/deleteme/", gsub("/", "_", x, fixed = TRUE))
This way the / characters become _, which avoids the trouble.
There is probably a way to keep the links intact.
As you have figured out, Windows doesn't allow you to save files whose names contain those special characters. Add a function that removes the common part of the file name and gets rid of the "/" characters.
library(purrr)
htmName <- function(x) {
  x <- gsub("https://www.sec.gov/Archives/edgar/data/", "", x)
  x <- gsub("/", "_", x)
  return(x)
}
walk(filinginfohref, function(x) {
  download.file(x, destfile = paste0("output/", htmName(x)), quiet = FALSE)
})
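If the trailing file name alone identifies each filing well enough, basename() is another option (a sketch, not part of the original answer):

library(purrr)

walk(filinginfohref, function(x) {
  # basename() keeps only the part after the last '/',
  # e.g. "0001047469-18-004978-index.htm", which is a valid Windows file name
  download.file(x, destfile = paste0("output/", basename(x)), quiet = FALSE)
})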
I see it is super-easy to grab a PDF file, save it, and fetch all the text from the file.
library(pdftools)
download.file("http://www2.sas.com/proceedings/sugi30/085-30.pdf", "sample.pdf", mode = "wb")
txt <- pdf_text("sample.pdf")
I am wondering how to loop through an array of PDF files, based on links, download each, and scrape the text from each. I want to go to the following link.
http://www2.sas.com/proceedings/sugi30/toc.html#dp
Then I want to download each file from 'Paper 085-30:' to 'Paper 095-30:'. Finally, I want to scrape the text out of each file. How can I do that?
I would think it would be something like this, but I suspect the paste function is not set up correctly.
library(pdftools)
for(i in values){'085-30',' 086-30','087-30','088-30','089-30'
paste(download.file("http://www2.sas.com/proceedings/sugi30/"i".pdf", i".pdf", mode = "wb")sep = "", collapse = NULL)
}
You can get a list of pdfs using rvest.
library(rvest)
x <- read_html("http://www2.sas.com/proceedings/sugi30/toc.html#dp")
href <- x %>% html_nodes("a") %>% html_attr("href")
# char vector of links, use regular expression to fetch only papers
links <- href[grepl("^http://www2.sas.com/proceedings/sugi30/\\d{3}.*\\.pdf$", href)]
I've added some error handling, and don't forget to put the R session to sleep so you don't flood the server. If a download is unsuccessful, the link is stored in a variable that you can inspect after the loop has finished, so you can adapt your code or just download those files manually.
# write failed links to this variable
unsuccessful <- c()
for (link in links) {
  out <- tryCatch(download.file(url = link, destfile = basename(link), mode = "wb"),
                  error = function(e) e, warning = function(w) w)
  if (class(out) %in% c("simpleError", "simpleWarning")) {
    message(sprintf("Unable to download %s", link))
    unsuccessful <- c(unsuccessful, link)
  }
  sleep <- abs(rnorm(1, mean = 10, sd = 10))
  message(sprintf("Sleeping for %f seconds", sleep))
  Sys.sleep(sleep) # don't flood the server, sleep for a while
}
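To cover the last step of the question (scraping the text out of each file), a minimal sketch with pdftools, assuming the PDFs were saved under basename(link) as in the loop above:

library(pdftools)

pdf_files <- basename(links)  # local file names created by the download loop
# one character vector per PDF, with one element per page
txts <- setNames(lapply(pdf_files, pdf_text), pdf_files)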
I have a simple for loop to write the past 100 tweets of a few usernames to .csv files:
library(twitteR)
mclist <- read.table('usernames.txt')
for (mc in mclist)
{
  tweets <- userTimeline(mc, n = 100)
  df <- do.call("rbind", lapply(tweets, as.data.frame))
  write.csv(df, file=paste("Desktop/", mc, ".csv", sep = ""), row.names = F)
}
I mostly followed what I've read on StackOverflow but I continue to get this error message:
Error in file(file, ifelse(append, "a", "w")) :
invalid 'description' argument
In addition: Warning message:
In if (file == "") file <- stdout() else if (is.character(file)) { :
the condition has length > 1 and only the first element will be used
Where did I go wrong?
I just cleaned up the code a bit, and everything started working.
Step 1: Let's set the working directory and load the 'twitteR' package.
library(twitteR)
setwd("C:/Users/Dinre/Desktop") # Replace with your desired directory
Step 2: First, we need to load a list of user names from a flat text file. I'm assuming that each line in the text file has one username, like so:
[contents of usernames.txt]
edclef
notch
dkanaga
Let's load it using the 'scan' function to read each line into an array:
mclist <- scan("usernames.txt", what="", sep="\n")
Step 3: We'll loop through the usernames, just like you did before, but we're not going to refer to the directory, since we're going to use the same directory for output as for input. The original code had a syntax error in attempting to refer to the desktop directory, and we're just going to sidestep that.
for (mc in mclist){
  tweets <- userTimeline(mc, n = 100)
  df <- do.call("rbind", lapply(tweets, as.data.frame))
  write.csv(df, file=paste(mc, ".csv", sep = ""), row.names = F)
}
I end up with three files on the desktop, and all the data seems to be correct.
edclef.csv
notch.csv
dkanaga.csv
Update: If you really want to refer to different directories within your code, use the '.' character to refer to the current working directory. For instance, if your working directory is your Windows user profile, you would refer to the 'Desktop' folder like so:
setwd("C:/Users/Dinre")
...
write.csv(df, file=paste("./Desktop/". mc, ".csv", sep = ""), row.names = F)
There's a convenience function in the package, twListToDF(), which handles the conversion of the list of tweets to a data.frame.
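For example, the conversion step inside the loop could become (a sketch; twListToDF() gives roughly the same data.frame as the rbind/lapply idiom):

tweets <- userTimeline(mc, n = 100)
df <- twListToDF(tweets)  # convert the list of status objects to a data.frame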
Since your mclist is a data.frame, you can replace your for loop with apply:
apply(mclist, 1, function(mc){
  tweets <- userTimeline(mc, n = 100)
  df <- do.call("rbind", lapply(tweets, as.data.frame))
  write.csv(df, file=paste("Desktop/", mc, ".csv", sep = ""), ##!! Change Desktop to
            ## something like Desktop/tweets/
            row.names = F)
})
PS: The userTimeline function will only work if the requested user has a public timeline, or if you have previously registered an OAuth object using registerTwitterOAuth.
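Newer versions of twitteR authenticate with setup_twitter_oauth() instead. A minimal sketch, assuming you already have credentials for a Twitter app (the placeholder strings below are not real keys):

library(twitteR)

# replace the placeholders with your own app credentials
setup_twitter_oauth(consumer_key    = "YOUR_CONSUMER_KEY",
                    consumer_secret = "YOUR_CONSUMER_SECRET",
                    access_token    = "YOUR_ACCESS_TOKEN",
                    access_secret   = "YOUR_ACCESS_SECRET")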