rvest doesn't seem to offer any way to extract text from the parent node only (ignoring its children). One workaround uses xml_remove(), but that mutates the original object, all the way up the chain, despite R's usual copy-on-modify semantics.
I turned to rlang::duplicate(), whose documentation promises that "modifying the copy leaves the original object intact", but the clone does not appear to be truly independent. For example:
require(rvest)
h = '<ul>
<li id="target">
text to extract
<ul><li>text to ignore</li><li>this too</li></ul>
</li>
</ul>'
doc = xml2::read_html(h)
x = html_node(doc, '#target')
html_text(x)
#> [1] "\ntext to extract\ntext to ignorethis too"
Now clone x, remove its children, and extract the text:
x2 = rlang::duplicate(x, shallow = FALSE)
children = html_children(x2)
xml2::xml_remove(children)
html_text(x2)
#> [1] "\ntext to extract\n"
That works as intended; however, x has also been mutated:
html_text(x)
#> [1] "\ntext to extract\n"
Any suggestions as to why this happens, and how to work around it? I do not want to start re-attaching children...
First of all, let me say that I think you can solve the issue without copying the data. I'm not an expert in XPath, but I believe you can use it to select only the direct text descendants, ignoring text nested in other nodes. I.e., the following seems to do the trick without any copy (x defined as in your question):
html_text(html_elements(x, xpath = "text()"))
# [1] "\ntext to extract\n"
That being said, I also have an answer to the question on how to make a deep copy:
The problem is that rlang::duplicate() can only copy R data structures. However, rvest builds on xml2, and xml2 builds on the C library libxml2.
When you create the xml_node object in R, the corresponding data structure is created in libxml2. On the R side, there is basically just a pointer to the libxml2 object. So rlang::duplicate() will only create a copy of that pointer, but not of the underlying data. It cannot do so, because it has no access to it as it is in a different library (that rlang doesn't know of).
The easiest way to create a copy of the underlying data seems to be to serialize and deserialize the XML. I suspect this is not very efficient, though.
Example:
Read in the original data:
require(rvest)
h <- '<ul>
<li id="target">
text to extract
<ul><li>text to ignore</li><li>this too</li></ul>
</li>
</ul>'
doc <- xml2::read_html(h)
x <- html_node(doc, '#target')
Create two copies, one with rlang::duplicate() and one with xml2::xml_unserialize():
x1 <- rlang::duplicate(x, shallow = FALSE)
x2 <- xml2::xml_unserialize(xml2::xml_serialize(x, NULL))
Check that x and x1 are in fact identical, while x2 is a true copy (the memory locations you get will of course differ from the ones shown here):
x$doc
# <pointer: 0x0000023911334ea0>
x1$doc
# <pointer: 0x0000023911334ea0>
# --> same as x
x2$doc
# <pointer: 0x00000239113377d0>
# --> different to x
Test that everything works as intended:
children <- html_children(x2)
xml2::xml_remove(children)
html_text(x2)
# [1] "\n text to extract\n "
html_text(x)
# [1] "\n text to extract\n text to ignorethis too"
Another potential solution (maybe a more general approach) is to use html_children() to obtain the text of all the child nodes and then remove that from the full text.
require(rvest)
h = '<ul>
<li id="target">
text to extract
<ul><li>text to ignore</li><li>this too</li></ul>
</li>
</ul>'
doc = xml2::read_html(h)
x = html_node(doc, '#target')
fulltext <- html_text(x)
# [1] "\ntext to extract\ntext to ignorethis too"
#find the text in the children nodes
childtext <- html_children(x) %>% html_text()
# "text to ignorethis too"
#replace the child node text with an empty string
gsub(childtext, "", fulltext) %>% trimws()
#"text to extract"
#alternative using the text from the first child node
firstchild <- xml_child(x, search=1) %>% xml_text()
gsub(paste0(firstchild, ".*"), "", fulltext)
Of course, if there are additional newline ("\n") or formatting characters, the gsub() may break.
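Another failure mode worth noting: gsub() treats its pattern as a regular expression, so child text containing metacharacters (parentheses, brackets, etc.) can break the match. A slightly safer base-R variant of the same idea treats the pattern as a fixed string:

```r
fulltext  <- "\ntext to extract\ntext to ignorethis too"
childtext <- "text to ignorethis too"

# fixed = TRUE makes gsub() treat childtext literally, so any regex
# metacharacters in the child text cannot break the match
res <- trimws(gsub(childtext, "", fulltext, fixed = TRUE))
res
# [1] "text to extract"
```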
Related
I'm working with the WRS2 package and there are cases where it'll output its analysis (bwtrim) into a list with a special class of the analysis type class = "bwtrim". I can't as.data.frame() it, but I found that there is a custom print method called print.bwtrim associated with it.
As an example let's say this is the output: bwtrim.out <- bwtrim(...). When I run the analysis output in an Rmarkdown chunk, it seems to "steal" part of the text output and make it into a dataframe.
So here's my question, how can I either access print.bwtrim or how does R markdown automatically format certain outputs into dataframes? Because I'd like to take this outputted dataframe and use it for other purposes.
Update: Here is a minimal working example; put the following in a chunk in an Rmd file:
```{r}
library(WRS2)
df <-
data.frame(
subject = rep(c(1:100), each = 2),
group = rep(c("treatment", "control"), each = 2),
timepoint = rep(c("pre", "post"), times = 2),
dv = rnorm(200, mean = 2)
)
analysis <- WRS2::bwtrim(dv ~ group * timepoint,
id = subject,
data = df,
tr = .2)
analysis
```
With this, a data.frame automatically shows up after the chunk, showing all the values very nicely. My main question is how I can get this data.frame for my own uses. If you do str(analysis), you see that it's a list; class(analysis) gives "bwtrim"; methods(class = "bwtrim") lists the print method; and methods(print) includes a line that says print.bwtrim*. But I can't seem to figure out how to call print.bwtrim myself.
Regarding what Rmarkdown is doing, compare the following
If you run this in a chunk, it actually steals the data.frame part and puts it into a separate figure.
```{r}
capture.output(analysis)
```
However, if you run the same line in the console, the entire output comes out properly. What's also interesting is that if you try to assign it to another object, the output will be stolen before it can be assigned.
Compare x when you run the following in either a chunk or the console.
```{r}
x<-capture.output(analysis)
```
This is what I get from the chunk approach when I call x
[1] "Call:"
[2] "WRS2::bwtrim(formula = dv ~ group * timepoint, id = subject, "
[3] " data = df, tr = 0.2)"
[4] ""
[5] ""
This is what I get when I do it all in the console
[1] "Call:"
[2] "WRS2::bwtrim(formula = dv ~ group * timepoint, id = subject, "
[3] " data = df, tr = 0.2)"
[4] ""
[5] " value df1 df2 p.value"
[6] "group 1.0397 1 56.2774 0.3123"
[7] "timepoint 0.0001 1 57.8269 0.9904"
[8] "group:timepoint 0.5316 1 57.8269 0.4689"
[9] ""
My question is what can I call whatever Rstudio/Rmarkdown is doing to make data.frames, so that I can have an easy data.frame myself?
Update 2: This is probably not a bug, as discussed here https://github.com/rstudio/rmarkdown/issues/1150.
Update 3: You can access the method by using WRS2:::print.bwtrim(analysis), though I'm still interested in what Rmarkdown is doing.
Update 4: It might not be the case that Rmarkdown is stealing the output and automatically making data frames from it, as you can see when you call x after you've already captured the output. Looking at WRS2:::print.bwtrim, it prints a data frame that it creates, which I'm guessing Rmarkdown recognizes and then formats.
See below for the print.bwtrim.
function (x, ...)
{
cat("Call:\n")
print(x$call)
cat("\n")
dfx <- data.frame(value = c(x$Qa, x$Qb, x$Qab), df1 = c(x$A.df[1],
x$B.df[1], x$AB.df[1]), df2 = c(x$A.df[2], x$B.df[2],
x$AB.df[2]), p.value = c(x$A.p.value, x$B.p.value, x$AB.p.value))
rownames(dfx) <- c(x$varnames[2], x$varnames[3], paste0(x$varnames[2],
":", x$varnames[3]))
dfx <- round(dfx, 4)
print(dfx)
cat("\n")
}
<bytecode: 0x000001f587dc6078>
<environment: namespace:WRS2>
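Since print.bwtrim builds its table from plain list components, one way to get the data frame for your own use is a small helper that mirrors the body of the method shown above but returns the data frame instead of printing it (a sketch based directly on that source; the component names are copied from it):

```r
# Rebuild the data frame the same way WRS2:::print.bwtrim does,
# but return it instead of printing it
bwtrim_df <- function(x) {
  dfx <- data.frame(
    value   = c(x$Qa, x$Qb, x$Qab),
    df1     = c(x$A.df[1], x$B.df[1], x$AB.df[1]),
    df2     = c(x$A.df[2], x$B.df[2], x$AB.df[2]),
    p.value = c(x$A.p.value, x$B.p.value, x$AB.p.value)
  )
  rownames(dfx) <- c(x$varnames[2], x$varnames[3],
                     paste0(x$varnames[2], ":", x$varnames[3]))
  round(dfx, 4)
}
```

Then `bwtrim_df(analysis)` gives you the same table the print method displays, as a regular data.frame.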
In R Markdown documents, automatic printing is done by knitr::knit_print rather than print. I don't think there's a knit_print.bwtrim method defined, so it will use the default method, which is defined as
function (x, ..., inline = FALSE)
{
if (inline)
x
else normal_print(x)
}
and normal_print will call print().
You are asking why the output is different. I don't see that when I knit the document to html_document, but I do see it with html_notebook. I don't know the details of what is being done, but if you look at https://rmarkdown.rstudio.com/r_notebook_format.html you can see a discussion of "output source functions", which manipulate chunks to produce different output.
The fancy output you're seeing looks a lot like what knitr::knit_print does for a dataframe, so maybe html_notebook is substituting that in place of print.
I am having a problem getting the right text after stemming in R.
E.g. 'papper' should show up as 'papper' but instead becomes 'papp', and 'projekt' becomes 'projek'.
The frequency cloud generated from these shortened versions therefore loses the actual meaning, or becomes incomprehensible.
What can I do to get rid of this problem? I am using the latest version of SnowballC (0.6.0).
R Code:
library(tm)
library(SnowballC)
text_example <- c("projekt", "papper", "arbete")
stem_doc <- stemDocument(text_example, language="sv")
stem_doc
Expected:
stem_doc
[1] "projekt" "papper" "arbete"
Actual:
stem_doc
[1] "projek" "papp" "arbet"
What you describe here is actually not stemming but lemmatization (see @Newl's link for the difference).
To get the correct lemmas, you can use the R package udpipe, which is a wrapper around the UDPipe C++ library.
Here is a quick example of how you would do what you want:
# install.packages("udpipe")
library(udpipe)
dl <- udpipe_download_model(language = "swedish-lines")
#> Downloading udpipe model from https://raw.githubusercontent.com/jwijffels/udpipe.models.ud.2.3/master/inst/udpipe-ud-2.3-181115/swedish-lines-ud-2.3-181115.udpipe to C:/Users/Johannes Gruber/AppData/Local/Temp/RtmpMhaF8L/reprex8e40d80ef3/swedish-lines-ud-2.3-181115.udpipe
udmodel_swed <- udpipe_load_model(file = dl$file_model)
text_example <- c("projekt", "papper", "arbete")
x <- udpipe_annotate(udmodel_swed, x = text_example)
x <- as.data.frame(x)
x$lemma
#> [1] "projekt" "papper" "arbete"
As far as I know, parsing libraries like XML and xml2 can read a standard table on a web page perfectly. But some tables have no table grid, only organizing labels such as <span> and <div>.
Now I am coping with a table like this.
The table is structured with <span> labels, and every 4 <span> labels make up one record. I've used a loop to solve this problem and succeeded, but I want to process it without a loop. I heard that the purrr library may help with this, but I don't know how to use it in this situation.
I did my analysis with both "XML" and "xml2":
Analysis with “XML” package
pg<-"http://www.irgrid.ac.cn/simple-search?fq=eperson.unique.id%3A311007%5C-000920"
library(XML)
tableNodes <- getNodeSet(htmlParse(pg), "//table[@class='miscTable2']")
itemlines <- xpathApply(tableNodes[[1]], "//tr[@class='itemLine']/td[@width='750']")
ispan <- xmlElementsByTagName(itemlines[[2]], "span")
title <- xmlValue(ispan$span)
isuedate <- xmlValue(ispan$span[1,2])
author <- xmlValue(ispan$span[3])
In this case, "XML" got a list of spans, but this list is very strange and does not behave as I expected:
> attributes(ispan)
$names
[1] "span" "span" "span" "span"
It seems to have only one row but four columns. However, it doesn't: the 2nd to 4th "span" elements couldn't be selected by column. The first "span" occupies 2 columns, and the other "span" elements cannot be retrieved.
> val <- xmlValue(ispan$span[[1]])
> val
[1] "超高周疲劳裂纹萌生与初始扩展的特征尺度"
> isuedate <- xmlValue(ispan$span[[2]])
> isuedate
[1] " \r\n [科普文章]"
> isuedate <- xmlValue(ispan$span[[3]])
> isuedate
[1] NA
> author <- xmlValue(ispan$span[[4]])
> author
[1] NA
None of the selection methods that work on lists succeeds:
> title <- xmlValue(ispan$span[1,1])
Error in UseMethod("xmlValue") :
no applicable method for 'xmlValue' applied to an object of class "c('XMLInternalNodeList', 'XMLNodeList')"
title <- xmlValue(ispan$span[1,])
Error in UseMethod("xmlValue") :
no applicable method for 'xmlValue' applied to an object of class "c('XMLInternalNodeList', 'XMLNodeList')"
author <- xmlValue(ispan[1,3])
Error in ispan[1, 3] : incorrect number of dimensions
Analysis with “xml2”
With "xml2", the "span" obstacle causes the same problem:
pg<-"http://www.irgrid.ac.cn/simple-search?fq=eperson.unique.id%3A311007%5C-000920"
library(xml2)
tableSource <- xml_find_all(read_html(pg, encoding = "UTF-8"), "//table[@class='miscTable2']")
itemspan <- xml_child(tableSource, "span")
It could not gather any of these "span" labels:
> itemspan
{xml_nodeset (1)}
[1] <NA>
If we go a step further and locate the "span" labels directly, we still get nothing:
> itemspanl <- xml_find_all(tableSource, '//tr[@class="itemLine"]/td/span')
> itemspan <- xml_child(itemspanl, "span")
> itemspan
{xml_nodeset (40)}
[1] <NA>
[2] <NA>
[3] <NA>
...
Someone suggested I use library(purrr) to do this, but I was told purrr processes data frames only, and the list prepared by "xml2" could not be analyzed.
I want to avoid loops and get a result like the one below; can we do it? I hope those with experience in "XML" and "xml2" can give me some advice on how to cope with this non-standard table. Thanks a lot.
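For what it's worth, since every four <span> labels form one record, a loop-free sketch with xml2 is to pull all the span texts at once and reshape the character vector into a four-column matrix. The snippet below uses a tiny inline stand-in for the real page; the XPath and column layout are assumptions based on the structure described above:

```r
library(xml2)

# a tiny stand-in for the real page (assumed structure: 4 spans per record)
h <- '<table class="miscTable2">
  <tr class="itemLine"><td width="750">
    <span>title A</span><span>[type]</span><span>2019</span><span>author A</span>
    <span>title B</span><span>[type]</span><span>2020</span><span>author B</span>
  </td></tr></table>'
doc <- read_html(h)

spans <- xml_find_all(doc, "//tr[@class='itemLine']/td[@width='750']/span")
vals  <- trimws(xml_text(spans))

# every 4 consecutive spans form one record -> reshape into rows
records <- as.data.frame(matrix(vals, ncol = 4, byrow = TRUE),
                         stringsAsFactors = FALSE)
records
```

For the real page you would replace `read_html(h)` with `read_html(pg, encoding = "UTF-8")`; no explicit loop (and no purrr) is needed.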
I have a very long file that looks like this :
"Ach! Hans, Run!"
2RRGG
Enchantment
At the beginning of your upkeep, you may say "Ach! Hans, run! It's the . . ." and name a creature card. If you do, search your library for the named card, put it into play, then shuffle your library. That creature has haste. Remove it from the game at end of turn.
UNH-R
A Display of My Dark Power
Scheme
When you set this scheme in motion, until your next turn, whenever a player taps a land for mana, that player adds one mana to his or her mana pool of any type that land produced.
ARC-C
AErathi Berserker
2RRR
Creature -- Human Berserker
2/4
Rampage 3 (Whenever this creature becomes blocked, it gets +3/+3 until end of turn for each creature blocking it beyond the first.)
LE-U
AEther Adept
1UU
Creature -- Human Wizard
2/2
When AEther Adept enters the battlefield, return target creature to its owner's hand.
M11-C, M12-C, DDM-C
...
I'd like to load this file into a data.frame or vector "oracle", split by each empty line (actually a space followed by a newline), so that
oracle[1]
gives output like
"Ach! Hans, Run!" 2RRGG Enchantment At the beginning of your upkeep, you may say "Ach! Hans, run! It's the . . ." and name a creature card. If you do, search your library for the named card, put it into play, then shuffle your library. That creature has haste. Remove it from the game at end of turn. UNH-R
I've tried code like
oracle <- read.table(file = "All Sets.txt", quote = "", sep="\n")
as well as scan(), but
oracle[1]
gives very long, undesired output.
Thanks!
Try this, based on your edited question:
oracle <- readLines("BenYoung2.txt")
nvec <- length(oracle)
breaks <- which(! nzchar(oracle))
nbreaks <- length(breaks)
if (breaks[nbreaks] < nvec) {
breaks <- c(breaks, nvec + 1L)
nbreaks <- nbreaks + 1L
}
if (nbreaks > 0L) {
oracle <- mapply(function(a,b) paste(oracle[a:b], collapse = " "),
c(1L, 1L + breaks[-nbreaks]),
breaks - 1L)
}
oracle[1]
# [1] "\"Ach! Hans, Run!\" 2RRGG Enchantment At the beginning of your upkeep, you may say \"Ach! Hans, run! It's the . . .\" and name a creature card. If you do, search your library for the named card, put it into play, then shuffle your library. That creature has haste. Remove it from the game at end of turn. UNH-R"
Edit: this works fine if you always have truly-empty lines as breaks; to also treat whitespace-only lines as breaks, use this line instead:
breaks <- which(grepl("^[[:space:]]*$", oracle))
This gives the same results when the lines are truly empty.
I think it's easiest to build a new variable that says which group the line belongs in, then group by that and call paste. In base R:
lines <- readLines(textConnection(txt))
i <- cumsum(lines == '')
by(lines, i, paste, collapse='\n')
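A variant of the same cumsum idea that returns a plain character vector (one element per block) instead of a `by` object, still base R only:

```r
lines <- c("card one, line 1", "card one, line 2", "",
           "card two, line 1", "card two, line 2")

# group id increases at every blank line
i <- cumsum(lines == "")

# drop the blanks, then collapse each group into one string
keep   <- lines != ""
blocks <- vapply(split(lines[keep], i[keep]), paste,
                 character(1), collapse = " ")
unname(blocks)
# [1] "card one, line 1 card one, line 2" "card two, line 1 card two, line 2"
```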
The most straightforward way to do this is to first split on line breaks (i.e. \n), then throw away the empty lines.
text = "line1
line2
line3
"
split1 = unlist(strsplit(text, "\n"))
filter = split1[split1 != ""]
# [1] "line1" "line2" "line3"
When using searchTwitteR, I converted the results to a data frame and then exported to JSON. However, all the text ends up on one line, etc. (sample below). I need to separate it so that each tweet is its own element.
phish <- searchTwitteR('phish', n = 5, lang = 'en')
phishdf <- do.call("rbind", lapply(phish, as.data.frame))
exportJson <-toJSON(phishdf)
write(exportJson, file = "phishdf.json")
json_phishdf <- fromJSON(file="phishdf.json")
I tried converting to a list and am wondering if maybe converting to a data frame is a mistake.
However, for a list, I tried:
newlist['text']=phish[[1]]$getText()
But this will just give me the text for the first tweet. Is there a way to iterate over the entire data set, maybe in a for loop?
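To iterate over the whole result set rather than just the first tweet, you can apply getText() across the list that searchTwitteR returns (a sketch assuming twitteR's status objects, which expose getText() as a reference-class method; a tiny stand-in list with the same interface is used here so the snippet is self-contained):

```r
# phish would be the list returned by searchTwitteR('phish', n = 5, lang = 'en');
# here a stand-in with the same getText() interface:
phish <- list(list(getText = function() "tweet one"),
              list(getText = function() "tweet two"))

# vapply enforces that each element yields exactly one character string
texts <- vapply(phish, function(tweet) tweet$getText(), character(1))
texts
# [1] "tweet one" "tweet two"
```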
{"text":["#ilazer #abbijacobson I do feel compelled to say that I phind phish awphul... sorry, Abbi!","#phish This on-sale was an embarrassment. Something needs to change.","FS: Have 2 Tix To Phish In Chula Vista #Phish #facevaluetickets #phish #facevalue GO: https://t.co/dFdrpyaotp","RT #WKUPhiDelt: Come unwind from a busy week of class and kick off the weekend with a Phish Fry! 4:30-7:30 at the Phi Delt house. Cost is $\u2026","RT #phish: Tickets for Phish's July 15 & 16 shows at The Gorge go on sale in fifteen minutes at 1PM ET: https://t.co/tEKLNjI5u7 https://t.c\u2026"],
"favorited":[false,false,false,false,false],
"favoriteCount":[0,0,0,0,0],
"replyToSN":["rAlexandria","phish","NA","NA","NA"],
"created":[1456521159,1456521114,1456521022,1456521016,1456520988],
"truncated":[false,false,false,false,false],
"replyToSID":["703326502629277696","703304948990222337","NA","NA","NA"],
"id":["703326837720662016","703326646074343424","703326261045829632","703326236722991105","703326119328686080"],
"replyToUID":["26152867","14503997","NA","NA","NA"],"statusSource":["Mobile Web (M5)","Twitter for iPhone","CashorTrade - Face Value Tickets","Twitter for iPhone","Twitter for Android"],
"screenName":["rAlexandria","adamgelvan","CashorTrade","Kyle_Smith1087","timogrennell"],
"retweetCount":[0,0,0,2,5],
"isRetweet":[false,false,false,true,true],
"retweeted":[false,false,false,false,false],
"longitude":["NA","NA","NA","NA","NA"],
"latitude":["NA","NA","NA","NA","NA"]}
I followed your code and don't have the issue you're describing. Are you using library(twitteR) and library(jsonlite)?
Here is the code, which runs for me without the issue:
library(twitteR)
library(jsonlite)
phish <- searchTwitteR('phish', n = 5, lang = 'en')
phishdf <- do.call("rbind", lapply(phish, as.data.frame))
exportJson <-toJSON(phishdf)
write(exportJson, file = "./../phishdf.json")
## note the `txt` argument, as opposed to `file` used in the question
json_phishdf <- fromJSON(txt="./../phishdf.json")