I would like a routine that systematically extracts and saves the frames from webcam footage to a local directory on my personal computer.
Specifically, I am trying to save frames from the webcam at the Old Faithful geyser in Yellowstone National Park (https://www.nps.gov/yell/customcf/geyser_webcam_updated.htm).
Ideally, I would like to:
1. be able to control the rate at which frames are downloaded (e.g. take one frame every minute)
2. use FFmpeg or R
3. save the actual frame and not a snapshot of the webpage
Despite point 3 above, I've tried simply taking a screenshot in R using the package webshot:
library(webshot)

for (i in 1:2) {
  webshot("https://www.nps.gov/yell/customcf/geyser_webcam_updated.htm",
          file = paste0(i, ".png"), delay = 60)
}
However, the two images produced by this code are identical, despite the 60-second delay passed to webshot(), and both show the player's play button rather than the live stream. This method also seems a bit of a hack, as it saves a snapshot of the website and not the frames themselves.
I am certainly open to using more appropriate command-line tools (I am just unsure of what they are). Any help is greatly appreciated!
The source code of the URL shows, under the video tag:
<source type="application/x-mpegurl" src="//56cf3370d8dd3.streamlock.net:1935/nps/faithful.stream/playlist.m3u8">
The src identifies an HLS playlist. You can then run ffmpeg periodically to grab a single-frame image, like this:
ffmpeg -i https://56cf3370d8dd3.streamlock.net:1935/nps/faithful.stream/playlist.m3u8 -vframes 1 out.png
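If you want the one-frame-per-minute rate from your first point, a small R wrapper around that command is enough. A minimal sketch, assuming ffmpeg is on your PATH and the playlist URL stays valid (the frame count and output file names are my own choices):

# Grab one frame per minute from the HLS stream via ffmpeg
stream <- "https://56cf3370d8dd3.streamlock.net:1935/nps/faithful.stream/playlist.m3u8"

for (i in 1:10) {                      # 10 frames, i.e. 10 minutes of capture
  out <- sprintf("frame_%03d.png", i)  # frame_001.png, frame_002.png, ...
  system(paste("ffmpeg -y -i", shQuote(stream), "-vframes 1", out))
  Sys.sleep(60)                        # wait a minute before the next grab
}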
I am trying to merge/concatenate multiple videos with sound, sequentially, into one video using only R (I don't want to work with ffmpeg on the command line, as the rest of the project doesn't require it and I would rather not bring it in at this stage).
My code looks like the following:
dir<-"C:/Users/Admin/Documents/r_programs/"
videos<-c(
paste0(dir,"video_1.mp4"),
paste0(dir,"video_2.mp4"),
paste0(dir,"video_3.mp4")
)
#encoding
av_encode_video(
videos,
output=paste0(dir,"output.mp4"),
framerate=30,
vfilter="null",
codec="libx264rgb",
audio=videos,
verbose=TRUE
)
It almost works: the output file is an mp4 containing the three videos sequentially, one after the other, but the audio present is only from the first of the three videos, after which it cuts off.
It doesn't really matter what the videos are; I have recreated this issue with the videos I was using and even with three randomly downloaded 1080p 30 fps videos from YouTube.
Any help is appreciated & thank you in advance.
The experienced behavior (only one audio source) is exactly how it is designed to behave. In the C source code, you can see that encode_video only takes the first audio entry and ignores the rest. Overall, audio is poorly supported by ropensci/av at the moment, as its primary focus is turning R plots into videos. Perhaps you can file a feature request issue on GitHub.
Meanwhile, why not just use the base system() function to call FFmpeg from R? Assuming the videos have identical formats, using the concat demuxer with stream copy (-c copy) will likely speed up your process significantly. The av library does not support this feature as far as I can tell. (If the formats differ, you need the concat filter instead.)
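Here is a minimal sketch of that approach from R, reusing the file paths from the question; it assumes ffmpeg is on your PATH and that the three files share codecs and parameters (the inputs.txt name is my own choice):

# Concat demuxer + stream copy: fast and lossless, no re-encoding
dir <- "C:/Users/Admin/Documents/r_programs/"
videos <- paste0(dir, c("video_1.mp4", "video_2.mp4", "video_3.mp4"))

# The concat demuxer reads a text file with one "file '<path>'" line per input
list_file <- paste0(dir, "inputs.txt")
writeLines(sprintf("file '%s'", videos), list_file)

system(paste(
  "ffmpeg -f concat -safe 0 -i", shQuote(list_file),
  "-c copy", shQuote(paste0(dir, "output.mp4"))
))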
I have been making plots for some time now, and they are precisely the way I like them, on screen. The data is coming in from sensors related to solar power collection and storage.
Plotted on screen they look great, so I do a screen-region capture to save them.
So now I would like to automate the saving process.
Here is what I have done so far:
I set up a cron job so the plots are generated right at midnight, capturing the whole day and saving it as a .png file.
Then it moves the "today.dat" data file to an archive named by date.
This part is all working as designed.
EXCEPT that, using .png, the images do not look the same.
I really thought png would be the best option, but it turns out that the font used for the x-axis ticks (HH:MM) is too thick and the labels run together. It looks like a crayon-drawn version of my plot designs.
Can someone please give me some guidance on how best to programmatically generate the plots for saving, so they look the way I designed them?
As pointed out in the comments above, the best way is probably to use a different terminal for output to an image file (a minimal example follows the two options below), and simply accept that the generated images are not identical to what you see on your screen when using the x11 terminal. However, if you really need an exact copy, there are (at least) two options:
You could automate the process of taking a screenshot. You can even do this from within gnuplot, where it may come in handy that the GPVAL_TERM_WINDOWID variable contains the X window ID of the current plot window. You can use it to take a screenshot of the window after you have made the plot:
system(sprintf("xwd -id 0x%x | convert xwd:- screenshot.png", GPVAL_TERM_WINDOWID))
Here I included a call to ImageMagick's convert to turn the xwd file format into png.
Another option is to use the xlib terminal, which saves the sequence of commands that the gnuplot_x11 helper application turns into the window you see on the screen. For example,
set term push; set term xlib; set output "file.xlib"; replot; set output; set term pop
will create the file file.xlib that has all the information of the last plot. To later view this plot, use
gnuplot_x11 -noevents -persist < file.xlib
where you might have to specify the path to gnuplot_x11.
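For completeness, a minimal sketch of the first suggestion above, rendering straight to an image-file terminal; the terminal, size, and font values here are guesses you would tune until the output matches your x11 look closely enough:

# render directly to PNG instead of screen-capturing the x11 window
set terminal pngcairo size 1024,768 font "Helvetica,10"
set output "plot.png"
replot
set output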
Similar to what #user8153 suggested for x11, you can use import, which, like convert, is an ImageMagick tool:
system("import -window ".GPVAL_TERM_WINDOWID." screenshot.png")
Also convenient is a shortcut that copies the image to the clipboard so you can paste it elsewhere with Ctrl+v:
bind Ctrl-c 'system("import -window ".GPVAL_TERM_WINDOWID." png:- | xclip -sel clip -t image/png")'
See also Show graph on display and save it to file simultaneously in gnuplot.
I'm trying to source multiple R scripts with a short delay between each one. The 15 R scripts to be 'sourced' all collect data from the GA API, transform/clean/analyze the data, and finally push the results into their own worksheets within a single Google Sheet. So I'd like to set a wait of one minute between each script to make sure I'm not overloading the Google Sheet file.
How can I turn the code (below) into a mini-function where there is a wait time between each source() command?
source("/code/processed/script1.R")
source("/code/processed/script1.R")
source("/code/processed/script1.R")
...
source("/code/processed/script15.R")
Thanks in advance for your help! :)
PS - For context, please note I have my working directory organized in the following hierarchy:
|-project
  |-code
    |-processed
    |-raw
  |-data
    |-processed
    |-raw
As suggested in my comment, I would use Sys.sleep(), either by manually adding it between every source() call:
source(...)
Sys.sleep(60)
source(...)
Or by storing all the scripts in a vector and looping over them.
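A sketch of that loop, reusing the paths from the question (Sys.sleep() takes seconds):

scripts <- sprintf("/code/processed/script%d.R", 1:15)
for (s in scripts) {
  source(s)
  Sys.sleep(60)  # wait one minute before sourcing the next script
}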
I'm putting the finishing touches on a project and have a bit of a dilemma. Once all the data is gathered and the statistics are calculated, the results are printed to the screen. However, in the program, the user is given the option of saving all the output to a file. I'd like to print the data to both the terminal and the file with the same formatting.
I considered doing a fork(), but this is all one process and the data output happens just before program termination. If I fork, then the child process would start executing from the beginning, and implementing this successfully would mean a not-so-minor rewrite of 500+ LOC.
I covered roughly this exact topic last semester, but I left my Unix programming book at home, and none of the examples I've found fit my needs.
Consider piping your output through the tee command, which writes to both stdout and a file.
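For example (the program and file names here are placeholders), assuming your program writes its report to stdout:

./myprogram | tee results.txt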
I have a script that makes barplots and opens a new window when 6 barplots have been written to the screen, and it keeps opening new graphics devices whenever necessary.
Depending on the input, this leaves me with a potentially large number of open windows (graphics devices), which I would like to write to a single PDF file.
Given my Perl background, I decided to iterate over the different graphics devices, printing them out one by one. I would like to keep appending to a single PDF file, but I do not know how to do this, or whether it is even possible. I would like to avoid looping in R. :)
The code I use:
for (i in 1:length(dev.list())) {
  dev.set(which = dev.list()[i])
  dev.copy2pdf(device = quartz, file = "/Users/Tim/Desktop/R/Filename.pdf")
}
However, this is not working, as it overwrites the file each time. Is there an append option in R, like there is in Perl, that would allow me to keep adding pages to the existing PDF file?
Or is there a way to capture the information in a graphics window in an object, keep adding new graphics devices to that object, and finally print the whole thing to a file?
Other possible solutions I thought about:
writing different pdf files, combining them after creation (perhaps even possible in R, with the right libraries installed?)
copying the information in all different windows to one big graphic device and then print this to a pdf file.
Quick comments:
use the onefile=TRUE argument, which gets passed through to pdf(); see the help pages for dev.copy2pdf and pdf
as a general rule, you may find it easier to open the devices directly; again see help(pdf)
So, in sum, add onefile=TRUE to your call and you should be fine, but consider using pdf() directly.
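A minimal sketch of that direct route (the data and plot calls here are placeholders; for pdf(), onefile = TRUE is the default, so every new high-level plot starts a new page in the same file):

# open one multi-page PDF device up front; pages accumulate until dev.off()
pdf("/Users/Tim/Desktop/R/Filename.pdf", onefile = TRUE)
for (vals in split(mtcars$mpg, mtcars$cyl)) {
  barplot(vals)  # placeholder plot: each barplot gets its own page
}
dev.off()  # the PDF is only complete once the device is closed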
To further elaborate on the possibility of appending to a pdf: although multiple graphs can easily be put into one file, it turns out that it is impossible, or at least not simple, to really append to a pdf once it has been finished by dev.off() - see here.
I generate many separate pages and then join them with something like system('pdfjam pages.pdf -o output.pdf').