I am attempting to randomly loop through ~1400 mp4 files in a Qualtrics survey, so that each respondent sees one of these files. I understand how to do this for images, using Loop and Merge and:
<img src="https://survey.qualtrics.com/ControlPanel/Graphic.php?IM=${lm://Field/1}">
However, Field/1 refers to the Qualtrics IDs in the image library, and the mp4s are in the files library. I do not know where to find those IDs.
It is a bit of a pain, but it works: click one of the files and choose View. The URL at the top will have the file's ID at the very end. Then go back to your files list and open the browser's inspect tool. Search the page source for that ID; the first match sits inside a large block of text that contains the IDs of all your files.
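Once those IDs are in your Loop and Merge field, the files-library counterpart of the Graphic.php URL appears to be File.php with an F= parameter (verify this against the URL you saw when you clicked View); the embed would then look something like:

    <!-- hedged sketch: confirm the File.php?F= pattern against your own View URL -->
    <video width="640" controls>
      <source src="https://survey.qualtrics.com/ControlPanel/File.php?F=${lm://Field/1}" type="video/mp4">
    </video>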
I am trying to scrape some tables from here: https://www.adfg.alaska.gov/index.cfm?adfg=commercialbyareapws.harvestbydistrict However, the tables are not visible until I manually select a district from the page's dropdown. I need to download a few tables each for districts 221, 222, 223, 226, 227, and 228.
I need to download these tables using R and save them as data.frames or as Excel files.
Any suggestions on how to go about this?
UPDATE: [screenshot of the browser's inspect Network tab, showing the request fired when a district is selected]
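The Network tab points at the usual approach: replay the request the dropdown fires and parse the returned HTML table. A rough sketch in R with httr and rvest; the "district" query parameter below is a guess, so copy the real name/value pairs from the request you captured:

    library(httr)
    library(rvest)

    get_district_table <- function(district) {
      # "district" as a parameter name is hypothetical; use the names shown
      # in the Network tab for the request the dropdown actually sends
      resp <- GET("https://www.adfg.alaska.gov/index.cfm",
                  query = list(adfg = "commercialbyareapws.harvestbydistrict",
                               district = district))
      page <- read_html(content(resp, as = "text"))
      html_table(html_node(page, "table"))  # first table on the returned page
    }

    districts <- c(221, 222, 223, 226, 227, 228)
    tables <- setNames(lapply(districts, get_district_table), districts)
    # writexl::write_xlsx(tables, "harvest.xlsx")  # one sheet per district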
I would like to grab satellite positions from the page(s) below, but I'm not sure if scraping is appropriate, because the page appears to be updating itself every second using client-side code (it keeps updating even after I disconnect from the internet). Background information can be found in my question on Space Exploration Stack Exchange: A nicer way to download the positions of the Orbcomm-2 satellites.
I need a "snapshot" of four items simultaneously:
UTC time
latitude
longitude
altitude
Right now I use screenshots and manual typing. Since these values are being updated by the page, is conventional web scraping going to work here? I found a "screen-scraping" tag; should I try to learn about that instead?
I'm looking for the simplest solution to get those four values; I wonder if I can just use urllib or urllib2 and avoid installing something new.
Example page: http://www.satview.org/?sat_id=41186U. I need to do 41179U through 41189U (the eleven Orbcomm-2 satellites that SpaceX just put in orbit).
Those values are calculated with a little math in JavaScript. The calculations are detailed here: https://www.satview.org/track.js
So, I guess one option is to port that script (plus any dependencies) to the language of your choice and use it to return your desired values.
There is one major function, track(), which takes an argument $modo that can be one of two values: tic or plot.
There may be other source files (dependencies) referenced as well.
The easier way would probably be to use something that allows JavaScript to run, e.g. automating a browser, and to extract the calculated values as they are generated, as in the sketch below.
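A sketch of that browser-automation route in Python with Selenium (so it does mean installing something beyond urllib). The element IDs here are hypothetical placeholders; inspect the page to find the real containers for the four values:

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()  # assumes Firefox plus geckodriver installed
    driver.get("http://www.satview.org/?sat_id=41186U")
    time.sleep(5)  # give track.js time to start updating the fields

    # read the four fields back-to-back for a near-simultaneous snapshot;
    # "utc", "lat", "lon" and "alt" are hypothetical element IDs
    snapshot = {field: driver.find_element(By.ID, field).text
                for field in ("utc", "lat", "lon", "alt")}
    print(snapshot)
    driver.quit()

Looping the script over sat_id=41179U through 41189U would cover all eleven satellites.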
On a Windows machine, when I look at the properties of any mp3 file (select the mp3, right-click, Properties), there are various subfields for the title, subtitle, artist, album, etc.
I am looking for a way to access these properties and change them. For instance, some files may indicate that the artist is "GreatArtist", whereas other files indicate "The GreatArtist" or "Great Artist".
I know I can change all of them manually by selecting all files that correspond to the same artist, right-clicking, and entering everything by hand. I am looking for a way to automate this, though, so that it becomes easy across many folders, artists, and files, and R is my software of choice.
How can I access these properties using R? file.info() does not display them.
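Since base R has no widely used ID3 tag writer, one route is to drive a command-line tagger from R with system2(). A sketch, assuming the id3v2 tool (or mutagen's mid3v2, which accepts the same flags) is installed and on the PATH; that tool is an assumption, not part of R:

    retag_artist <- function(dir, from, to) {
      files <- list.files(dir, pattern = "\\.mp3$",
                          full.names = TRUE, recursive = TRUE)
      for (f in files) {
        # id3v2 -l prints the current tags; check for the unwanted spelling
        tags <- system2("id3v2", c("-l", shQuote(f)), stdout = TRUE)
        if (any(grepl(from, tags, fixed = TRUE))) {
          system2("id3v2", c("-a", shQuote(to), shQuote(f)))  # set artist tag
        }
      }
    }

    retag_artist("D:/Music", from = "Great Artist", to = "The GreatArtist")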
How efficient is reading the names of files in a directory in ASP.NET?
Background: I want to update pictures on a webserver automatically and deploy them in advance. E.g. until April 1 I want to pick 'image1.png'; after April 1, 'image2.png'. To achieve this I have to map every image name to a date which indicates whether that image should be picked or not.
In order to avoid mapping between file name and date in a separate file or database, the idea is to put the date in the file name. Iterating over the directory and parsing the dates lets me find my file.
E.g.:
image_2013-01-01.png
image_2013-05-01.png
The second one will be picked from May to eternity, unless an image with a later date is dropped in.
So I wonder how this solution impacts the speed of a website assuming <20 files.
If you are using something like Directory.GetFiles, that is one call to the OS.
This will access the disk to get the listing.
For fewer than 20 files this will be very quick. However, since this data is unlikely to change very often, consider caching the name of your image.
You could store it in the application context to share it among all users of your site.
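A sketch of that combination, assuming the image_yyyy-MM-dd.png naming from the question: list the directory, pick the file with the latest embedded date that has already passed, and keep the answer in the ASP.NET cache for an hour so the disk is touched only rarely.

    using System;
    using System.Globalization;
    using System.IO;
    using System.Linq;
    using System.Web;

    public static class CurrentImage
    {
        // Returns e.g. "image_2013-05-01.png": the file with the newest
        // embedded date that is not in the future. Assumes at least one
        // such file exists in the folder.
        public static string GetFileName(string folder)
        {
            var cached = HttpRuntime.Cache["CurrentImageName"] as string;
            if (cached != null) return cached;

            var name = Directory.GetFiles(folder, "image_*.png")
                .Select(Path.GetFileNameWithoutExtension)
                .Select(n => new
                {
                    Name = n,
                    Date = DateTime.ParseExact(n.Substring("image_".Length),
                        "yyyy-MM-dd", CultureInfo.InvariantCulture)
                })
                .Where(x => x.Date <= DateTime.Today)
                .OrderByDescending(x => x.Date)
                .First().Name + ".png";

            HttpRuntime.Cache.Insert("CurrentImageName", name, null,
                DateTime.UtcNow.AddHours(1),
                System.Web.Caching.Cache.NoSlidingExpiration);
            return name;
        }
    }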
I've built an ASP.NET page whose output stream is a dynamically-generated PNG image containing only text on a transparent background.
The text is based upon database IDs contained in the querystring. There will be a limited number of variations.
Which one of the following would be the most efficient means of returning the image to the client?
Store each variation upon the first generation, and thenceforth retrieve this from the drive.
Simply generate the image each time.
Cache the output response based upon the querystring.
Totally depends on how often this image is going to have to be generated.
If it's a small project I would elect to generate it each time as this would be the simplest solution.
If you are expecting a lot of generations, then the next step is to store each image when it is first generated and check for a pre-generated copy before rendering. It gets a bit complicated, though; it all depends on how many unique variations of images you expect to be generated. If that number is small, go for it; otherwise you may have to put expiry dates on the images that are not so frequently accessed.
In short, it depends on what the application is, and not enough information was given to give a comprehensive answer for your specific situation.
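For the third option, here is a minimal generic-handler sketch with the output cache configured programmatically; "id" is an assumed querystring key, and drawing the raw id stands in for whatever your database lookup returns. On an .aspx page the declarative equivalent is <%@ OutputCache Duration="3600" VaryByParam="id" %>, which is worth preferring if server-side caching does not kick in for a raw handler in your setup.

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using System.Web;

    public class TextImageHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var response = context.Response;
            response.ContentType = "image/png";

            // keep one rendered response per distinct id in the server cache
            response.Cache.SetCacheability(HttpCacheability.Server);
            response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
            response.Cache.VaryByParams["id"] = true;

            string text = context.Request.QueryString["id"] ?? "";
            using (var bmp = new Bitmap(300, 50))   // 32bpp ARGB, transparent
            using (var g = Graphics.FromImage(bmp))
            using (var font = new Font("Arial", 12f))
            using (var ms = new MemoryStream())
            {
                g.DrawString(text, font, Brushes.Black, 0f, 0f);
                bmp.Save(ms, ImageFormat.Png);  // PNG needs a seekable stream
                ms.WriteTo(response.OutputStream);
            }
        }

        public bool IsReusable { get { return true; } }
    }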