rtweet to scrape a whole thread of a Twitter status? - r

I am trying to use R to scrape the whole conversation thread of a Twitter status. I am exploring rtweet, which is the latest package for extracting Twitter data in R, but I could not find any function for this. I wonder if anybody could help.
Thanks
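As of current rtweet releases there is no single "get thread" call, but one common workaround is to walk the reply chain upward from a status via its `reply_to_status_id` field. A minimal sketch, assuming you have already authenticated (e.g. with `create_token()`) and that your rtweet version exposes `lookup_statuses()` and the `reply_to_status_id` column:

```r
# Sketch only: walk a reply chain upward to the root of a thread.
# One API call per hop; assumes prior rtweet authentication.
library(rtweet)

get_thread_upward <- function(status_id, max_hops = 50) {
  thread  <- list()
  current <- status_id
  for (i in seq_len(max_hops)) {
    tw <- lookup_statuses(current)          # fetch one status by id
    if (nrow(tw) == 0) break
    thread[[length(thread) + 1]] <- tw
    parent <- tw$reply_to_status_id[1]      # NA once we reach the root
    if (is.na(parent)) break
    current <- parent
  }
  do.call(rbind, rev(thread))               # root first, leaf last
}
```

Note this only recovers the ancestors of a status. Replies *below* it are not reachable this way; for those you would typically `search_tweets(paste0("to:", screen_name))` and filter the results on `reply_to_status_id`.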

Related

How to scrape data from a list

I'm trying to scrape data from https://www.idealista.com/maps/madrid-madrid/
I'm not getting the whole content of the page. I used the BeautifulSoup Python library. What I need is the list of streets available on the webpage.
I'm a beginner at web scraping. Can anyone advise how to proceed, or which libraries to use to get this done?

extracting twitter video with R

I have a video in my local folder.
What I want to do is use rtweet or any other R package to search Twitter for this specific video and pull the data as a data frame.
Is there any way of doing this?
There's no way to do this via, for example, video fingerprinting; Twitter search does not support that. If you knew a specific URL the video is shared from, you could search for it in the API.
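The URL route in the answer above could be sketched with rtweet's `search_tweets()`, matching the known URL as query text (the URL below is a placeholder; also note the standard search API only covers roughly the most recent week of tweets):

```r
# Sketch: search Twitter for tweets containing a known video URL.
# Assumes prior rtweet authentication.
library(rtweet)

get_video_tweets <- function(video_url, n = 100) {
  search_tweets(video_url, n = n)   # returns a data frame of matching tweets
}

# Example call (placeholder URL, replace with the real one):
# get_video_tweets("https://example.com/path/to/video")
```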

Download Bloomberg Terminal's information using R

I've been looking for some clear examples of this approach. I know it requires an API in some cases. I've found the Rblpapi and RblDataLicense packages, but I haven't been able to find a clear example to build on.
I need to download data from the DDIS function in the Bloomberg terminal for a credit risk model I'm currently developing.
I'd appreciate it a lot if anyone could help me out.
There are examples in the vignette and manual, which you can find at https://CRAN.R-project.org/package=Rblpapi.
I have never tried to do this, but I don't think you can just download the DDIS data as is. I suspect you'd have to recreate it by finding all the bonds for the company(ies) you're interested in and then downloading the info you want for each one. Looks as though you'd need to explore the bsrch() function.
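The `bsrch()` route could look roughly like the sketch below — untested against a live terminal, and the SRCH name `"FI:YOURSRCH"`, the `id` column name, and the field list are all placeholders you would need to adapt:

```r
# Sketch: run a saved fixed-income SRCH via bsrch(), then pull per-bond
# fields with bdp(). Requires a running Bloomberg terminal session.
library(Rblpapi)

fetch_bond_info <- function(srch_domain = "FI:YOURSRCH") {
  blpConnect()                              # connect to the local terminal
  bonds <- bsrch(srch_domain)               # securities matching the SRCH
  bdp(as.character(bonds$id),               # column name may differ
      c("MATURITY", "CPN", "AMT_ISSUED"))   # example fields, adjust as needed
}
```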

How to do web scraping using R

I’m a beginner in web scraping and trying to learn how to implement an automated process to collect data from the web by submitting search terms.
The specific problem I’m working on is as follows:
Given the stackoverflow webpage https://stackoverflow.com/ I submit a search for the term “web scraping” and want to collect in a list all question links and the content for each question.
Is it possible to scrape these results?
My plan is to create a list of terms:
term <- c("web scraping", "crawler", "web spider")
submit a search for each term and collect both the question title and the content of the question.
Of course, the process should be repeated for each page of results.
Unfortunately, being relatively new to web scraping, I'm not sure what to do.
I’ve already downloaded some packages to scrape the web (rvest, RCurl, XML, RCrawler).
Thanks for your help
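A minimal rvest sketch of the plan above. The CSS selector is an assumption about Stack Overflow's markup at one point in time and may need updating; also check the site's robots.txt and terms before scraping many pages:

```r
library(rvest)

terms <- c("web scraping", "crawler", "web spider")

# Build the search URL for one term and one result page.
build_search_url <- function(term, page = 1) {
  paste0("https://stackoverflow.com/search?q=", URLencode(term),
         "&page=", page)
}

# Fetch one page of results and return absolute question links.
scrape_question_links <- function(term, page = 1) {
  html  <- read_html(build_search_url(term, page))
  links <- html_attr(html_elements(html, ".question-hyperlink"), "href")
  paste0("https://stackoverflow.com", links)
}

# Loop over all terms (and, in practice, over pages until a page is empty):
# all_links <- unlist(lapply(terms, scrape_question_links))
```

From each question link you would then `read_html()` the question page itself and extract the body text the same way, with a polite `Sys.sleep()` between requests.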

Replying to tweets using the twitteR package in R

I'm trying to build a Twitter bot with this general functionality: people tweet at my account, I do some natural language processing, and reply to their tweet with some results. What I'm not able to find in the twitteR package is a function that retrieves tweets that have been made directly at your account. Does this exist, or am I supposed to use a combination of the other functions in the package? I haven't been able to do it using the other functions.
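twitteR does expose a `mentions()` function that returns tweets @-mentioning the authenticated account (the mentions timeline), which is the usual building block here. A sketch, assuming `setup_twitter_oauth()` has already been run; the NLP step is a placeholder:

```r
# Sketch: fetch recent mentions and reply to each one in-thread.
library(twitteR)

reply_to_mentions <- function(n = 50) {
  ments <- mentions(n = n)                      # tweets directed at your account
  for (m in ments) {
    txt    <- m$getText()
    result <- txt                               # placeholder for your NLP step
    reply  <- paste0("@", m$getScreenName(), " ", result)
    updateStatus(reply, inReplyTo = m$getId())  # threads the reply
  }
}
```

In practice you would also persist the id of the last mention handled (e.g. pass `sinceID` to `mentions()`) so the bot does not reply to the same tweet twice.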
