Using pygsheets, is it possible to get a URL for each worksheet?

Using pygsheets it's possible to list all the worksheet objects in an account, but is it possible to get either a sharing URL or a direct URL to each worksheet?

The Google Sheets URL is of the format
https://docs.google.com/spreadsheets/d/<spreadsheet id>/edit#gid=<worksheet id>
so you can build it with pygsheets as follows (wks.id is numeric, so format it into the string rather than concatenating):
url = f"https://docs.google.com/spreadsheets/d/{ssheet.id}/edit#gid={wks.id}"


How can I download an RDS file from Dropbox in R? [duplicate]

I tried
download.file('https://www.dropbox.com/s/r3asyvybozbizrm/Himalayas.jpg',
              destfile = "1.jpg",
              method = "auto")
but it returns the HTML source of that page.
I also tried a bit of rdrop2:
library(rdrop2)
# please put in your key/secret
drop_auth(new_user = FALSE, key = key, secret = secret, cache = TRUE)
And the pop up website reports:
Invalid redirect_uri: "http://localhost:1410": It must exactly match one of the redirect URIs you've pre-configured for your app (including the path).
I don't understand the URI thing very well. Can somebody recommend some documentation to read, please?
I read some posts, but most of them discuss how to read data from Excel files.
repmis worked only for reading Excel files...
library(repmis)
repmis::source_DropboxData("test.csv",
                           "tcppj30pkluf5ko",
                           sep = ",",
                           header = F)
Also tried
library(RCurl)
url='https://www.dropbox.com/s/tcppj30pkluf5ko/test.csv'
x = getURL(url)
read.csv(textConnection(x))
And it didn't work...
Any help and discussion is appreciated. Thanks!
The first issue is because the https://www.dropbox.com/s/r3asyvybozbizrm/Himalayas.jpg link points to a preview page, not the file content itself, which is why you get the HTML back. You can, however, modify links like this to point to the file content, as shown here:
https://www.dropbox.com/help/201
E.g., add a raw=1 URL parameter:
https://www.dropbox.com/s/r3asyvybozbizrm/Himalayas.jpg?raw=1
Your downloader will need to follow redirects for that to work though.
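For example, a minimal sketch in R, assuming a recent R whose default download method (libcurl) follows redirects; the destination file name is just a placeholder:
# Point the shared link at the file content with raw=1 and download it;
# mode = "wb" keeps the binary file intact on Windows.
url <- "https://www.dropbox.com/s/r3asyvybozbizrm/Himalayas.jpg?raw=1"
download.file(url, destfile = "Himalayas.jpg", mode = "wb")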
The second issue is because you're trying to use an OAuth 2 app authorization flow, which requires that all redirect URIs be pre-registered. You can register redirect URIs (in your case, http://localhost:1410) for Dropbox API apps on the app's page in the App Console:
https://www.dropbox.com/developers/apps
For more information on using OAuth, you can refer to the Dropbox API OAuth guide here:
https://www.dropbox.com/developers/reference/oauthguide
I use read.table(url("yourdropboxpubliclink")). For instance, instead of using https://www.dropbox.com/s/xyo8sy9velpkg5y/foo.txt?dl=0, which is the shared link on Dropbox, I use
https://dl.dropboxusercontent.com/u/15634209/histogram/foo.txt
and for a non-public link, raw=1 will work.
It works fine for me.

How to get reviews with xpath in R

I'm trying to scrape reviews from this webpage: https://www.leroymerlin.es/fp/82142706/armario-serie-one-blanco-abatible-2-puertas-200x100x50cm. I'm running into some issues with the XPath; when I run the code, the output is always NULL.
Code:
library(XML)
url <- "https://www.leroymerlin.es/fp/82142706/armario-serie-one-blanco-abatible-2-puertas-200x100x50cm"
source <- readLines(url, encoding = "UTF-8")
parsed_doc <- htmlParse(source, encoding = "UTF-8")
xpathSApply(parsed_doc, path = '//*[@id="reviewsContent"]/div[1]/div[2]/div[3]/h3', xmlValue)
I must be doing something wrong; I've tried everything. Many thanks for your help.
This webpage is dynamically created on load, with the data stored in a secondary file, so typical scraping and XPath methods will not work.
Open your browser's developer tools and go to the Network tab.
Reload the webpage and filter for the XHR files. Review each file and you should see one named "reviews"; this is the file where the reviews are stored in JSON format. Right-click the file and copy the link address.
One can access this file directly:
library(jsonlite)
fromJSON("https://www.leroymerlin.es/bin/leroymerlin/reviews?product=82142706&page=1&sort=best&reviewsPerPage=5")
Here is a good reference: How to Find The Link for JSON Data of a Certain Website

How to pass any URL to an Apify task?

There is a box to configure the "Start URL" in Apify, but what happens if I don't know the start URL and it depends on my user input? I would like to be able to pass a variable URL to "Start URL".
[Screenshot: Start URL configuration in Apify]
I want to pass any URL automatically through an Apify task and then scrape it.
I tried to automate it through Zapier; in the configuration it is possible to select the URL input and pass it to Apify, but the task stops because it is not able to read the passed format.
[Screenshot: Data Out log from Zapier]
I think Apify probably lets you configure dynamic input URLs, but at my beginner level there is probably something that escapes my knowledge.
I want to be able to pass variable URLs to be scraped by Apify.
You can check how the input looks in JSON format using the Editor/JSON switcher at the top of the input configuration.
After you switch to JSON, you can easily check the structure of startUrls.
If you want to override startUrls, for example in the Zapier integration, you can do it using the Input JSON overrides field in the Run Task Apify<>Zapier action.
You can override the input the same way using the API to run the task, where you need to pass the JSON as the POST payload of the API request.
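As a rough sketch of that API call in R using httr, assuming the standard Apify v2 run-task endpoint; the task ID, token, and start URL are placeholders, and the startUrls shape mirrors the JSON shown in the input editor:
library(httr)

# Run the task and override its startUrls via the JSON POST payload.
res <- POST(
  "https://api.apify.com/v2/actor-tasks/<TASK_ID>/runs?token=<API_TOKEN>",
  body = list(startUrls = list(list(url = "https://example.com"))),
  encode = "json"
)
content(res)  # run metadata returned by Apify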
If you want to read more about the Apify<>Zapier integration, you can check the article Scrape single URL using Zapier.

How to shorten a URL using a Google Firebase Dynamic Link if the URL contains '#'

I am using a Firebase Dynamic Link to shorten my URL, but doing so gives the response "warningCode": "UNRECOGNIZED_PARAM" because my URL contains a '#' sign, and it generates the short URL from the data before the # and ignores the data after it.
My URL is mentioned below:
"https://example.page.link/?link=https://example.com/#/xyz/pqr/eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.aTExSk5la3ZMUVUwTGU2ZTA3OThrdFRkVXE3ZThUZ0lZNzdpckVDcDhSRkIrZHBSUDl0ZFU0SlJOUkYwN0hwYXp2aUF4RlVZZjlTdGYzRnVJWlZpTlRxUDJvWlhyWVhCemJHa1VDc053Sm0vRmlYZlh4bGRb2xjcHM1RmhhdktkY2dRa1RhUlFPQjIya0Z2bWJSeEQ4YVFhY2FtSlJUOGFVMVR5ZUhOZm54Zz09.dhDJWIz9gqmnbhRhkwgZolwNZ8ba4CCjDEYlefkilPc"
I am calling the API:
https://firebasedynamiclinks.googleapis.com/v1/shortLinks?key=my-key
Encode the URL first before passing it to the API endpoint.
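For example, a minimal sketch in R: percent-encode the deep link (including the '#') before appending it as the link parameter, then send the long link to the shortLinks endpoint as before. The deep-link value here is a truncated placeholder:
# Percent-encode everything, including '#', so the fragment survives
# as part of the link parameter instead of being cut off.
deep_link <- "https://example.com/#/xyz/pqr/<token>"
encoded   <- utils::URLencode(deep_link, reserved = TRUE)
long_link <- paste0("https://example.page.link/?link=", encoded)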

CollectionFS cfs:dropbox: how to get the URL to an image after uploading?

I published my whole folder with images, but how do I automatically generate a URL in my code for each image?
Just use the url method. From the CollectionFS docs:
url
Returns the HTTP file URL for the current FS.File.
Specify a store attribute to get the URL for a specific store. If you don't specify the store name, the URL will be for the copy in the first defined store.
If you use several stores (for example, I'm using a file store for thumbs and S3 for full images in one collection), then specify the store, or it returns the first store's URL.
