Collect tweets with their related tweeters in R

I am doing text mining on tweets. I have collected random tweets from different accounts about some topic and transformed them into a data frame. I was able to find the most frequent tweeters among those tweets (using the column "screenName"). The tweets look like this:
[1] "ISCSP_ORG: #cybercrime NetSafe publishes guide to phishing:
Auckland, Monday 04 June 2013 – Most New Zealanders will have...
http://t.co/dFLyOO0Djf"
[1] "ISCSP_ORG: #cybercrime Business Briefs: MILL CREEK — H.M. Jackson
High School DECA chapter members earned the organizatio...
http://t.co/auqL6mP7AQ"
[1] "BNDarticles: How do you protect your #smallbiz from #cybercrime?
Here are the top 3 new ways they get in & how to stop them.
http://t.co/DME9q30mcu"
[1] "TweetMoNowNa: RT #jamescollinss: #senatormbishop It's the same
problem I've been having in my fight against #cybercrime. \"Vested
Interests\" - Tell me if …"
[1] "jamescollinss: #senatormbishop It's the same problem I've been
having in my fight against #cybercrime. \"Vested Interests\" - Tell me
if you work out a way!"
Several tweeters have sent many tweets each in the collected dataset.
Now I want to collect/group the related tweets under their corresponding tweeters/users.
Is there any way to do this in R? Any suggestion would be very appreciated.
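One possible approach, as a minimal sketch: group the tweet texts by their sender, either with split() in base R or group_by() in dplyr. The data frame and column names (tweets_df, screenName, text) are assumptions based on the question.
# Base R: a named list with one element of tweets per tweeter
# (tweets_df, screenName and text are assumed names from the question)
tweets_by_user <- split(tweets_df$text, tweets_df$screenName)
tweets_by_user[["ISCSP_ORG"]]   # all collected tweets from that account
# dplyr: one row per tweeter, tweets nested in a list-column
library(dplyr)
tweets_df %>%
  group_by(screenName) %>%
  summarise(n = n(), tweets = list(text))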

Related

Converting a dataframe which contains list into a csv with r

I am new to R and I am having difficulties converting my data frame (named dffinal), which contains lists, into a csv.
I tried the following code, which gave a csv that is not usable:
dput(dffinal, file="out.txt")
new <- source("out.txt")
write.csv2(dffinal,"C:/Users\\final.csv", row.names = FALSE)
I tried every option I could find, but nothing worked. Here is a sample of my data frame:
dput(head(dffinal[1:2]))
structure(list(V1 = list("I heard about your products and I would like to give it a try but I'm not sure which product is better for my dry skin, Almond products or Shea Butter products? Thank you",
"Hi,\n\nCan you please tell me the difference between the shea shower oil limited edition and the other shower gels? I got a sample of one in a kit that had a purple label on it. (Please see attached photo.) I love it!\nBut, what makes it limited edition, the smell or what? It is out of stock and I was wondering if it is going to be restocked or not?\n\nAlso, what makes it different from the almond one?\n\nThank you for your help.",
"Hello, Have you discontinued Eau de toilette", "I both an eGift card for my sister and she hasn't received anything via her email\n\nPlease advise \n\nThank you \n\n cann",
"I do not get Coco Pillow Mist. yet. When are you going to deliver it? I need it before January 3rd.",
"Hello,\nI wish to follow up on an email I just received from Lol, notifying\nme that I've \"successfully canceled my subscription of bun Complete.\"\nHowever, I didn't request a cancelation and was expecting my next scheduled\nfulfillment later this month. Could you please advise and help? I'd\nappreciate it if you could reinstate my subscription.\n"),
V2 = list("How long can I keep a product before opening it? shea butter original hand cream large size 5oz, i like to buy a lot during sales promotions, is this alright or should i only buy what i'll use immediately, are these natural organic products that will still have a long stable shelf life? thank you",
"Hi,\nI recently checked to see if my order had been delivered, and I only received my gift box and free sample. Can you please send the advent calendar? Does not seem to have been included in the shipping. Thank you",
"Is the gade fragrance still available?", "I previously contacted you because I purchased your raspberry lip scrub. When I opened the scrub, 25% of the product was missing. Your customer service department agreed to send me a replacement, but I never received the replacement rasberry lip scrub. Could you please tell me when I will receive the replacement product? Thanks, me",
"To whom it may concern:\n\nI have 3 items in my order: 1 Shea Butter Intensive Hand Balm and 2 S‚r‚nit‚ Relaxing Pillow Mist. I have just received the hand balm this morning. I was wondering when I would receive the two bottles of pillow mist.\n\nThanks and regards,\n\nMe",
"I have not received 2X Body Scalp Essence or any shipment information regarding these items. Please let me know if and when you will be shipping these items, otherwise please credit my card. Thanks")), row.names = c(NA,
6L), class = "data.frame")
We can do this with the tidyverse -
library(dplyr)
library(readr)
dffinal %>%
  mutate(across(everything(), unlist)) %>%
  write_csv('result.csv')
If every row's list has length 1, as in the shared example, using unlist will work -
dffinal[] <- lapply(dffinal, unlist)
If the list lengths are greater than 1, collapse each entry into a single string -
dffinal[] <- lapply(dffinal, sapply, toString)
Write the data with write.csv -
write.csv(dffinal, 'result.csv', row.names = FALSE)
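For reference, toString() collapses a multi-element entry into one comma-separated string, so each cell becomes a single CSV value:
toString(c("first message", "second message"))
#> [1] "first message, second message"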

Extracting full article text via the newsanchor package [in R]

I am using the newsanchor package in R to try to extract entire article content via NewsAPI. So far I have done the following:
require(newsanchor)
results <- get_everything(query = "Trump +Trade", language = "en")
test <- results$results_df
This gives me a dataframe full of info on (at most) 100 articles. These, however, do not contain the entire article text. Rather, they contain something like the following:
[1] "Tensions between China and the U.S. ratcheted up several notches over the weekend as Washington sent a warship into the disputed waters of the South China Sea. Meanwhile, Google dealt Huaweis smartphone business a crippling blow and an escalating trade war co… [+5173 chars]"
Is there a way to extract the remaining 5173 chars? I have tried to read the documentation, but I am not really sure how.
I don't think that is possible, at least with the free plan. If you go through the documentation at https://newsapi.org/docs/endpoints/everything, the Response object section says:
content - string
The unformatted content of the article, where available. This is truncated to 260 chars for Developer plan users.
So the content is restricted to 260 characters. However, test$url holds the link to the source article, which you can use to scrape the full content. But since the articles are aggregated from many different sources, I don't think there is one automated way to do this.
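If you do need the full text, a rough sketch with rvest is below. Note that the "p" selector is only a generic assumption, since every publisher marks up its article body differently, and some sites disallow automated access.
library(rvest)
# Fetch one article URL returned by the API and collect its paragraph text.
# The "p" selector is a generic guess; real sites need site-specific
# selectors and may block scraping, so check each site's terms first.
article_url <- test$url[1]
article_text <- read_html(article_url) %>%
  html_nodes("p") %>%
  html_text() %>%
  paste(collapse = "\n")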

Web Scraping with R: Truncation Issue [duplicate]

This question already has an answer here:
avoid string printed to console getting truncated (in RStudio) (1 answer)
Closed 6 years ago.
As a beginner, I am currently working on web scraping with R, using the 'rvest' package. My goal is to get the lyrics of any song from 'www.musixmatch.com'. This is my attempt:
library(rvest)
url <- "https://www.musixmatch.com/lyrics/Red-Hot-Chili-Peppers/Can-t-Stop"
musixmatch <- read_html(url)
lyrics <- musixmatch %>% html_nodes(".mxm-lyrics__content") %>% html_text()
This code creates a character vector 'lyrics' with 2 elements, containing the lyrics:
[1] "Can't stop addicted to the shindig\nChop top he says I'm gonna win big\nChoose not a life of imitation"
[2] "Distant cousin to the reservation\n\nDefunkt the pistol that you pay for\nThis punk the feeling that you stay for\nIn time I want to be your best friend\nEastside love is living on the Westend\n\nKnock out but boy you better come to\nDon't die you know the truth is some do\nGo write your message on the pavement\nBurn so bright I wonder what the wave meant\n\nWhite heat is screaming in the jungle\nComplete the motion if you stumble\nGo ask the dust for any answers\nCome back strong with 50 belly dancers\n\nThe world I love\nThe tears I drop\nTo be part of\nThe wave can't stop\nEver wonder if it's all for you\nThe world I love\nThe trains I hop\nTo be part of\nThe wave can't stop\n\nCome and tell me when it's time to\n\nSweetheart is bleeding in the snow cone\nSo smart she's leading me to ozone\nMusic the great communicator\nUse two sticks to make it in the nature\nI'll get you into penetration\nThe gender of a generation\nThe birth of every other nation\nWorth your weight the gold ... <truncated>
The problem is that the 2nd element gets truncated at some point. From what I know about rvest, there is no parameter to adjust truncation, and I could not find anything on the internet about this issue. Does anybody know how to adjust or disable truncation here? Thanks a lot in advance!
Best regards,
Jan
I think it's better to copy and paste the lyrics into Notepad or WordPad and save them as a .txt file.
Then use the readLines function. It prints a warning message, but I was able to get the entire lyrics in an 84 x 1 character vector, which you can clean or do whatever you please with.
words <- readLines("redhot.txt")
> head(words)
[1] "Can't stop addicted to the shindig"
[2] "Chop top he says I'm gonna win big"
[3] "Choose not a life of imitation"
[4] "Distant cousin to the reservation"
[5] "Defunkt the pistol that you pay for"
[6] "This punk the feeling that you stay for"
No truncation problem here.
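As the duplicate link above suggests, the scraped string itself is usually complete; only the console print is truncated. Two quick checks on the original lyrics vector confirm this without any copy-pasting:
nchar(lyrics[2])       # full length, despite the truncated print
writeLines(lyrics[2])  # embedded \n become real line breaks, so nothing is cut off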

Web scraping SEC Edgar 10-K and 10-Q filings

Is anyone experienced with scraping SEC 10-K and 10-Q filings? I got stuck while trying to scrape monthly realised share repurchases from these filings. Specifically, I would like to get the following information for each month from 2004 to 2014:
1. Period;
2. Total Number of Shares Purchased;
3. Average Price Paid per Share;
4. Total Number of Shares Purchased as Part of Publicly Announced Plans or Programs;
5. Maximum Number (or Approximate Dollar Value) of Shares that May Yet Be Purchased Under the Plans or Programs.
I have 90,000+ forms to parse in total, so it won't be feasible to do it manually.
This information is usually reported under "Part 2 Item 5 Market for Registrant's Common Equity, Related Stockholder Matters and Issuer Purchases of Equity Securities" in 10-Ks and "Part 2 Item 2 Unregistered Sales of Equity Securities and Use of Proceeds" in 10-Qs.
Here is one example of the 10-Q filings that I need to parse:
https://www.sec.gov/Archives/edgar/data/12978/000104746909007169/a2193892z10-q.htm
If a firm has no share repurchases, this table can be missing from the quarterly report.
I have tried to parse the html files with Python's BeautifulSoup, but the results are not satisfactory, mainly because these files are not written in a consistent format.
For example, the only way I can think of to parse these forms is
from bs4 import BeautifulSoup
import requests
import unicodedata
import re
url='https://www.sec.gov/Archives/edgar/data/12978/000104746909007169/a2193892z10-q.htm'
def parse_html(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html5lib')
    tables = soup.find_all('table')
    # Loose match on the header text of the share-repurchase table
    identifier = re.compile(r'Total.*Number.*of.*Shares.*\w*Purchased.*',
                            re.UNICODE | re.IGNORECASE | re.DOTALL)
    rep_tables = []
    n = len(tables) - 1
    while n >= 0:
        table = tables[n]
        remove_invalid_tags(table)
        # Normalize unicode and drop non-ascii characters before matching
        table_text = (unicodedata.normalize('NFKD', table.text)
                      .encode('ascii', 'ignore').decode('ascii'))
        if re.search(identifier, table_text):
            rep_tables += [table]
        n -= 1
    return rep_tables

def remove_invalid_tags(soup, invalid_tags=['sup', 'br']):
    # Replace tags that break up text runs (superscripts, line breaks) with spaces
    for tag in invalid_tags:
        for x in soup.find_all(tag):
            x.replace_with(' ')
The above code only returns a messy set of tables that may contain the repurchase information. However, 1) it is not reliable; 2) it is very slow; 3) the following steps to scrape date/month, share price, number of shares, etc. are much more painful. I am wondering if there are more feasible languages/approaches/applications/databases for getting this information. Thanks a million!
I'm not sure about Python, but in R there is a beautiful solution using the 'finstr' package (https://github.com/bergant/finstr).
'finstr' automatically extracts the financial statements (income statement, balance sheet, cash flow, etc.) from EDGAR using the XBRL format.
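A minimal sketch of the finstr workflow, following the package README (the XBRL instance URL is the README's Apple 10-K example, not one of the filings above; note this returns the standard statements, so a monthly repurchase table may still require the HTML route):
library(XBRL)
library(finstr)
# Example XBRL instance from the finstr README (an Apple 10-K)
xbrl_url <- "https://www.sec.gov/Archives/edgar/data/320193/000119312514383437/aapl-20140927.xml"
old_o <- options(stringsAsFactors = FALSE)
xbrl_data <- xbrlDoAll(xbrl_url)              # parse the XBRL instance (XBRL pkg)
options(old_o)
statements <- xbrl_get_statements(xbrl_data)  # assemble the statements (finstr)
statements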

How to connect data dictionaries to the unlabeled data

I'm working with some large government datasets from the Department of Transportation that are available as tab-delimited text files accompanied by data dictionaries. For example, the auto complaints file is a 670Mb file of unlabeled data (when unzipped), and comes with a dictionary. Here are some excerpts:
Last updated: April 24, 2014
FIELDS:
=======
Field#  Name     Type/Size  Description
------  -------  ---------  --------------------------------------
1       CMPLID   CHAR(9)    NHTSA'S INTERNAL UNIQUE SEQUENCE NUMBER.
                            IS AN UPDATEABLE FIELD, THUS DATA FOR A
                            GIVEN RECORD POTENTIALLY COULD CHANGE FROM
                            ONE DATA OUTPUT FILE TO THE NEXT.
2       ODINO    CHAR(9)    NHTSA'S INTERNAL REFERENCE NUMBER.
                            THIS NUMBER MAY BE REPEATED FOR
                            MULTIPLE COMPONENTS.
                            ALSO, IF LDATE IS PRIOR TO DEC 15, 2002,
                            THIS NUMBER MAY BE REPEATED FOR MULTIPLE
                            PRODUCTS OWNED BY THE SAME COMPLAINANT.
Some of the fields have foreign keys listed like so:
21 CMPL_TYPE CHAR(4) SOURCE OF COMPLAINT CODE:
CAG =CONSUMER ACTION GROUP
CON =FORWARDED FROM A CONGRESSIONAL OFFICE
DP =DEFECT PETITION,RESULT OF A DEFECT PETITION
EVOQ =HOTLINE VOQ
EWR =EARLY WARNING REPORTING
INS =INSURANCE COMPANY
IVOQ =NHTSA WEB SITE
LETR =CONSUMER LETTER
MAVQ =NHTSA MOBILE APP
MIVQ =NHTSA MOBILE APP
MVOQ =OPTICAL MARKED VOQ
RC =RECALL COMPLAINT,RESULT OF A RECALL INVESTIGATION
RP =RECALL PETITION,RESULT OF A RECALL PETITION
SVOQ =PORTABLE SAFETY COMPLAINT FORM (PDF)
VOQ =NHTSA VEHICLE OWNERS QUESTIONNAIRE
There are import instructions for Microsoft Access, which I don't have and would not use if I did. But I THINK this data dictionary was meant to be machine-readable.
My question: Is this data dictionary a standard format of some kind? I've tried to Google around, but it's hard to do so without the right terminology. I would like to import it into R, though I'm flexible as long as it can be done programmatically.
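In the meantime, a minimal sketch for reading the data file itself in R (the file name FLAT_CMPL.txt and the field names are assumptions based on the excerpt above; extend the names vector from the rest of the dictionary):
# Tab-delimited, no header row; fields may contain quote characters,
# so quoting is disabled. The file name is an assumption.
complaints <- read.delim("FLAT_CMPL.txt", header = FALSE, sep = "\t",
                         quote = "", stringsAsFactors = FALSE)
names(complaints)[1:2] <- c("CMPLID", "ODINO")  # from the dictionary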
