Text Line Count in R [closed]

Closed 9 years ago.
I'm doing some basic text analysis in R and want to count the number of lines per speaker in a transcript loaded from a .txt file. With the example below, the goal is a count in which each speaker's turn contributes its lines to that speaker's total, such that Mr. Smith = 4, Mr. Gordon = 6, Mr. Catalano = 3.
[71] "\"511\"\t\"MR Smith: Mr Speaker, I like the spirit in which we are agreeing on this. The administration of FUFA is present here. FUFA could be used as a conduit, but the intention of what hon. Beti Kamya brought up and what hon. Rose Namayanja has said was okufuwa - just giving a token of appreciation to the players who achieved this.\""
[72] "\"513\"\t\"MR Gordon: Thank you very much, Mr Speaker. FUFA is an organisation and the players are the ones who got the cup for us. To promote motivation in all activities, not only football, you should remunerate people who have done well. In this case, we have heard about FUFA with their problems. They have not paid water bills and they can take this money to pay the water bills. If we agree that this money is supposed to go to the players and the coaches, then when it goes there they would know the amount and they will sit among themselves and distribute according to what we will have given. (Applause) I thank you.\""
[73] "\"515\"\t\"MR Catalano: Mr Speaker, I want to give information to my dear colleagues. The spirit is very good but you must be mindful that the administration of FUFA is what has made this happen. The money to the players. That indicates to you that FUFA is very trustworthy. This is not the old FUFA we are talking about.\""
The function countLine() doesn't work since it requires a connection, and these are just .txt files imported into R. I realize that the line count depends on the formatting of whatever program the text is opened in, but any general guidance on whether this is feasible would help. Thanks.

I didn't think your example was reproducible, so I edited it into something containing what you posted, though I do not know if the names will match:
txtvec <- structure(list(`'511' ` = "MR Smith: Mr Speaker, I like the spirit in which we are agreeing on this. The administration of FUFA is present here. FUFA could be used as a conduit, but the intention of what hon. Beti Kamya brought up and what hon. Rose Namayanja has said was okufuwa - just giving a token of appreciation to the players who achieved this.\"",
`'513' ` = "MR Gordon: Thank you very much, Mr Speaker. FUFA is an organisation and the players are the ones who got the cup for us. To promote motivation in all activities, not only football, you should remunerate people who have done well. In this case, we have heard about FUFA with their problems. They have not paid water bills and they can take this money to pay the water bills. If we agree that this money is supposed to go to the players and the coaches, then when it goes there they would know the amount and they will sit among themselves and distribute according to what we will have given. (Applause) I thank you.\"",
`'515' ` = "MR Catalano: Mr Speaker, I want to give information to my dear colleagues. The spirit is very good but you must be mindful that the administration of FUFA is what has made this happen. The money to the players. That indicates to you that FUFA is very trustworthy. This is not the old FUFA we are talking about.\""), .Names = c("'511'\t",
"'513'\t", "'515'\t"))
So it's only a matter of running a regex across it and tabling the results:
> table( sapply(txtvec, function(x) sub("(^MR.+)\\:.+", "\\1", x) ) )
#MR Catalano   MR Gordon    MR Smith
#          1           1           1
There was concern expressed that the names were not in the original structure. This is another version with an unnamed vector and a slightly modified regex:
txtvec <- c("\"511\"\t\"\nMR Smith: Mr Speaker, I like the spirit in which we are agreeing on this. The administration of FUFA is present here. FUFA could be used as a conduit, but the intention of what hon. Beti Kamya brought up and what hon. Rose Namayanja has said was okufuwa - just giving a token of appreciation to the players who achieved this.\"",
"\"513\"\t\"\nMR Gordon: Thank you very much, Mr Speaker. FUFA is an organisation and the players are the ones who got the cup for us. To promote motivation in all activities, not only football, you should remunerate people who have done well. In this case, we have heard about FUFA with their problems. They have not paid water bills and they can take this money to pay the water bills. If we agree that this money is supposed to go to the players and the coaches, then when it goes there they would know the amount and they will sit among themselves and distribute according to what we will have given. (Applause) I thank you.\"",
"\"515\"\t\"\nMR Catalano: Mr Speaker, I want to give information to my dear colleagues. The spirit is very good but you must be mindful that the administration of FUFA is what has made this happen. The money to the players. That indicates to you that FUFA is very trustworthy. This is not the old FUFA we are talking about.\""
)
table( sapply(txtvec, function(x) sub(".+\\n(MR.+)\\:.+", "\\1", x) ) )
#MR Catalano   MR Gordon    MR Smith
#          1           1           1
To count the number of "lines" these would occupy on a wrapping device with 80 characters per line you could use this code (which could easily be converted to a function):
sapply(txtvec, function(tt) 1+nchar(tt) %/% 80)
#[1] 5 8 4
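As noted, this converts easily to a function. A minimal sketch (count_wrapped_lines is a made-up name, and width = 80 is just the assumed characters-per-line of the display):

count_wrapped_lines <- function(txt, width = 80) {
  # integer division gives the number of full widths; add 1 for the partial line
  1 + nchar(txt) %/% width
}
count_wrapped_lines(txtvec)
#[1] 5 8 4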

This is raised in the comments, but it really bears being its own answer:
You cannot "count lines" without defining what a "line" is. A line is a vague concept, and what counts as one can vary with the program being used.
Unless, of course, the data contains some indicator of a line break, such as \n. But even then, you would not be counting lines; you would be counting linebreaks. You would then have to ask yourself whether the hardcoded line breaks are in accord with what you are hoping to analyze.
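For instance, a minimal sketch of counting hardcoded linebreaks in a character vector such as the txtvec above:

# sum(m > 0) yields 0 when gregexpr finds no match (it returns -1)
sapply(gregexpr("\n", txtvec, fixed = TRUE), function(m) sum(m > 0))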
--
If your data does not contain linebreaks, but you still want to count the number of lines, then we're back to the question of "how do you define a line?" The most basic way, as @flodel suggests, is to use character length. For example, you can define a line as 76 characters long and then take
ceiling(nchar(X) / 76)
This of course assumes that you can cut words. (If you need words to remain whole, then you have to get craftier; one option is sketched below.)
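One way to get craftier in base R, under the same assumed width of 76: strwrap() breaks only at word boundaries, so counting its output lines gives a word-respecting line count.

# number of wrapped lines per element of X, without cutting words
sapply(X, function(s) length(strwrap(s, width = 76)))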

Related

R read data from a txt space delimited file with quoted text

I'm trying to load a dataset into RStudio, where the dataset itself is space-delimited, but it also contains spaces inside quoted text, as in CSV files. Here is the head of the data:
DOC_ID LABEL RATING VERIFIED_PURCHASE PRODUCT_CATEGORY PRODUCT_ID PRODUCT_TITLE REVIEW_TITLE REVIEW_TEXT
1 __label1__ 4 N PC B00008NG7N "Targus PAUK10U Ultra Mini USB Keypad, Black" useful "When least you think so, this product will save the day. Just keep it around just in case you need it for something."
2 __label1__ 4 Y Wireless B00LH0Y3NM Note 3 Battery : Stalion Strength Replacement 3200mAh Li-Ion Battery for Samsung Galaxy Note 3 [24-Month Warranty] with NFC Chip + Google Wallet Capable New era for batteries Lithium batteries are something new introduced in the market there average developing cost is relatively high but Stallion doesn't compromise on quality and provides us with the best at a low cost.<br />There are so many in built technical assistants that act like a sensor in their particular forté. The battery keeps my phone charged up and it works at every voltage and a high voltage is never risked.
3 __label1__ 3 N Baby B000I5UZ1Q "Fisher-Price Papasan Cradle Swing, Starlight" doesn't swing very well. "I purchased this swing for my baby. She is 6 months now and has pretty much out grown it. It is very loud and doesn't swing very well. It is beautiful though. I love the colors and it has a lot of settings, but I don't think it was worth the money."
4 __label1__ 4 N Office Products B003822IRA Casio MS-80B Standard Function Desktop Calculator Great computing! I was looking for an inexpensive desk calcolatur and here it is. It works and does everything I need. Only issue is that it tilts slightly to one side so when I hit any keys it rocks a little bit. Not a big deal.
5 __label1__ 4 N Beauty B00PWSAXAM Shine Whitening - Zero Peroxide Teeth Whitening System - No Sensitivity Only use twice a week "I only use it twice a week and the results are great. I have used other teeth whitening solutions and most of them, for the same results I would have to use it at least three times a week. Will keep using this because of the potency of the solution and also the technique of the trays, it keeps everything in my teeth, in my mouth."
6 __label1__ 3 N Health & Personal Care B00686HNUK Tobacco Pipe Stand - Fold-away Portable - Light Weight - For Single Pipe not sure I'm not sure what this is supposed to be but I would recommend that you do a little more research into the culture of using pipes if you plan on giving this as a gift or using it yourself.
7 __label1__ 4 N Toys B00NUG865W ESPN 2-Piece Table Tennis PING PONG TABLE GREAT FOR YOUTHS AND FAMILY "Pleased with ping pong table. 11 year old and 13 year old having a blast, plus lots of family entertainment too. Plus better than kids sitting on video games all day. A friend put it together. I do believe that was a challenge, but nothing they could not handle"
8 __label1__ 4 Y Beauty B00QUL8VX6 "Abundant Health 25% Vitamin C Serum with Vitamin E and Hyaluronic Acid for Youthful Looking Skin, 1 fl. oz." Great vitamin C serum "Great vitamin C serum... I really like the oil feeling, not too sticky. I used it last week on some of my recent bug bites and it helps heal the skin faster than normal."
9 __label1__ 4 N Health & Personal Care B004YHKVCM PODS Spring Meadow HE Turbo Laundry Detergent Pacs 77-load Tub wonderful detergent. "I've used tide pods laundry detergent for many years,its such a great detergent to use having a nice scent and leaver the cloths smelling fresh."
The problem is that it looks tab-delimited but it is not. An example is DOC_ID = 1, where there are only two spaces between useful and "When least...". Passing sep = "\t" to read.table throws an error saying that line 1 did not have 10 elements, which for some reason is incorrect, because the number of elements should be 9. Here are the parameters that I'm passing (without the original path):
read.table(file = "path", sep ="\t", header = TRUE, strip.white = TRUE)
Also, relying on quotes is not a good strategy, because some lines do not have their text quoted. The delimiter would need to be something like a double space, which combined with strip.white should work properly, but read.table only accepts single-byte delimiters.
So the question is: how would you parse such a corpus in R, or with any third-party software that could convert it adequately to a CSV or at least a tab-delimited file?
Python's pandas.read_csv(filename, sep='\t', header = 0, ...) parsed the data successfully, and from that point anything can be done with it. Closing this out.
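For reference, a hedged sketch of a comparable read in R (untested against this exact file; "path" is the placeholder from the question): readr's read_tsv is often more forgiving of embedded quotes than read.table's defaults.

library(readr)
# quote and trim_ws mirror the read.table call above
reviews <- read_tsv("path", quote = "\"", trim_ws = TRUE)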

Is there another way to write tables in Inform 7?

Inform 7 tables are written using "tab-separated values". Whenever a table contains text, it ends up looking like this:
Table of Aristotelian Questions
name difficulty text explanation follow-up
the opposite question easy "WHAT IS THE OPPOSITE OF [the subject]?[line break]" "SOMETIMES A GOOD WAY TO DESCRIBE SOMETHING IS BY TELLING[line break]WHAT IT IS NOT. THERE MAY OR MAY NOT BE A DIRECT[line break]OPPOSITE OF [the subject], BUT[line break]SEE IF YOU CAN THINK OF ONE.[line break][line break]FOR EXAMPLE, IF I WERE WRITING A PAPER ON SOLAR[line break]ENERGY, AN ANSWER TO THIS QUESTION MIGHT PRODUCE A[line break]LIST OF EARTH'S NATURAL ENERGY RESOURCES.[line break]" prompt A
the good consequences question easy "WHAT ARE THE GOOD CONSEQUENCES OF[line break][the subject]?[line break]" "WHAT GOOD WILL COME ABOUT FROM MANKIND'S CONCERN ABOUT[line break][the subject]?[line break][line break]FOR EXAMPLE, IF I WERE WRITING A PAPER ABOUT COLLEGE[line break]ACADEMICS, SOME OF THE GOOD CONSEQUENCES MAY BE A BETTER[line break]JOB IN THE FUTURE, A FULLER UNDERSTANDING[line break]ABOUT OUR WORLD, AND AN APPRECIATION FOR GOOD STUDY HABITS.[line break](STOP THE SNICKERING AND GET ON WITH AN ANSWER.)[line break]" prompt C
Is there another way to write the same table? (I'm imagining something sort of YAML-like.)
No, that's the only way to make a table in I7. But you can extract all of the text as constants:
Table of Aristotelian Questions
name difficulty text explanation follow-up
the opposite question easy Opposite question Opposite follow up prompt A
the good consequences question easy Good question Good follow up prompt C
Opposite question is always "WHAT IS THE OPPOSITE OF [the subject]?[line break]".
Opposite follow up is always "SOMETIMES A GOOD WAY TO DESCRIBE SOMETHING IS BY TELLING[line break]WHAT IT IS NOT. THERE MAY OR MAY NOT BE A DIRECT[line break]OPPOSITE OF [the subject], BUT[line break]SEE IF YOU CAN THINK OF ONE.[line break][line break]FOR EXAMPLE, IF I WERE WRITING A PAPER ON SOLAR[line break]ENERGY, AN ANSWER TO THIS QUESTION MIGHT PRODUCE A[line break]LIST OF EARTH'S NATURAL ENERGY RESOURCES.[line break]".
Good question is always "WHAT ARE THE GOOD CONSEQUENCES OF[line break][the subject]?[line break]".
Good follow up is always "WHAT GOOD WILL COME ABOUT FROM MANKIND'S CONCERN ABOUT[line break][the subject]?[line break][line break]FOR EXAMPLE, IF I WERE WRITING A PAPER ABOUT COLLEGE[line break]ACADEMICS, SOME OF THE GOOD CONSEQUENCES MAY BE A BETTER[line break]JOB IN THE FUTURE, A FULLER UNDERSTANDING[line break]ABOUT OUR WORLD, AND AN APPRECIATION FOR GOOD STUDY HABITS.[line break](STOP THE SNICKERING AND GET ON WITH AN ANSWER.)[line break]".

Bayesian networks for text analysis in R

I have a one-page story (i.e. text data). I need to use a Bayesian network on that story and analyse it. Could someone tell me whether this is possible in R? If yes, how should I proceed?
The objective of the analysis is to extract action descriptions from narrative text.
The data considered for analysis:
Krishna’s Dharam-shasthra to Arjuna:
The Gita is the conversation between Krishna and Arjuna leading up to the battle.
Krishna emphasised on two terms: Karma and Dharma. He told Arjun that this was a righteous war; a war of Dharma. Dharma is the way of righteousness or a set of rules and laws laid down. The Kauravas were on the side of Adharma and had broken rules and laws and hence Arjun would have to do his Karma to uphold Dharma.
Arjuna doesn't want to fight. He doesn't understand why he has to shed his family's blood for a kingdom that he doesn't even necessarily want. In his eyes, killing his evil and killing his family is the greatest sin of all. He casts down his weapons and tells Krishna he will not fight. Krishna, then, begins the systematic process of explaining why it is Arjuna's dharmic duty to fight and how he must fight in order to restore his karma.
Krishna first explains the samsaric cycle of birth and death. He says there is no true death of the soul simply a sloughing of the body at the end of each round of birth and death. The purpose of this cycle is to allow a person to work off their karma, accumulated through lifetimes of action. If a person completes action selflessly, in service to God, then they can work off their karma, eventually leading to a dissolution of the soul, the achievement of enlightenment and vijnana, and an end to the samsaric cycle. If they act selfishly, then they keep accumulating debt, putting them further and further into karmic debt.
What I want is a POS tagger to separate verbs, nouns, etc., and then to create a meaningful network using that.
The steps that should be followed in pre-processing are:
syntactic processing (POS tagging)
SRL algorithm (semantic role labelling of the characters of the story)
coreference resolution
Using all of the above I need to create a knowledge database and create a Bayesian network.
This is what I have tried so far to get POS tags:
txt <- c("As the years went by, they remained isolated in their city. Their numbers increased by freeing women from slavery.
Doom would come to the world in the form of Ares the god of war and the Son of Zeus. Ares was unhappy with the gods as he wanted to prove just how foul his father’s creation was. Hence, he decided to corrupt the mortal men created by Zeus. Fearing his wrath upon the world Zeus decided to create the God killer in order to stop Ares. He then commanded Hippolyta to mould a baby from the sand and clay of the island. Then the five goddesses went back into the Underworld, drawing out the last soul that remained in the Well and giving it incredible powers. The soul was merged with the clay and became flesh. Hippolyta had her daughter and named her Diana, Princess of the Amazons, the first child born on Paradise Island.
Each of the six members of the Greek Pantheon granted Diana a gift: Demeter, great strength; Athena, wisdom and courage; Artemis, a hunter's heart and a communion with animals; Aphrodite, beauty and a loving heart; Hestia, sisterhood with fire; Hermes, speed and the power of flight. Diana was also gifted with a sword, the Lasso of truth and the bracelets of penance as weapons to defeat Ares.
The time arrived when Diana, protector of the Amazons and mankind was sent to the Man's World to defeat Ares and rid the mortal men off his corruption. Diana believed that only love could truly rid the world of his influence. Diana was successfully able to complete the task she was sent out by defeating Ares and saving the world.
")
writeLines(txt, tf <- tempfile())
library(stringi)
library(cleanNLP)
cnlp_init_tokenizers()
anno <- cnlp_annotate(tf)
names(anno)
get_token(anno)
cnlp_init_spacy()
anno <- cnlp_annotate(tf)
get_token(anno)
cnlp_init_corenlp()
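The attempt above stops at initializing cleanNLP's backends. As a sketch outside the original question (not the thread's answer), the udpipe package does POS tagging with no Python or Java backend, which sidesteps the backend setup entirely:

library(udpipe)
model <- udpipe_download_model(language = "english")  # one-time download
ud_en <- udpipe_load_model(model$file_model)
anno  <- as.data.frame(udpipe_annotate(ud_en, x = txt))
# keep the verbs and nouns, the raw material for an action network
subset(anno, upos %in% c("VERB", "NOUN"), select = c(token, upos))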

Replicate Postgres pg_trgm text similarity scores in R?

Does anyone know how to replicate the (pg_trgm) Postgres trigram similarity score from the similarity(text, text) function in R? I am using the stringdist package and would rather use R to calculate these on a matrix of text strings in a .csv file than run a bunch of PostgreSQL queries.
Running similarity(string1, string2) in Postgres gives me a score between 0 and 1.
I tried using the stringdist package to get a score, but I think I still need to divide the result below by something.
stringdist(string1, string2, method="qgram",q = 3 )
Is there a way to replicate the pg_trgm score with the stringdist package or another way to do this in R?
An example would be getting the similarity score between the description of a book and the description of a genre like science fiction. For example, suppose I have these two book descriptions:
book 1 = "Area X has been cut off from the rest of the continent for decades. Nature has reclaimed the last vestiges of human civilization. The first expedition returned with reports of a pristine, Edenic landscape; the second expedition ended in mass suicide, the third expedition in a hail of gunfire as its members turned on one another. The members of the eleventh expedition returned as shadows of their former selves, and within weeks, all had died of cancer. In Annihilation, the first volume of Jeff VanderMeer's Southern Reach trilogy, we join the twelfth expedition.
The group is made up of four women: an anthropologist; a surveyor; a psychologist, the de facto leader; and our narrator, a biologist. Their mission is to map the terrain, record all observations of their surroundings and of one another, and, above all, avoid being contaminated by Area X itself.
They arrive expecting the unexpected, and Area X delivers—they discover a massive topographic anomaly and life forms that surpass understanding—but it’s the surprises that came across the border with them and the secrets the expedition members are keeping from one another that change everything."
book 2= "From Wall Street to Main Street, John Brooks, longtime contributor to the New Yorker, brings to life in vivid fashion twelve classic and timeless tales of corporate and financial life in America
What do the $350 million Ford Motor Company disaster known as the Edsel, the fast and incredible rise of Xerox, and the unbelievable scandals at GE and Texas Gulf Sulphur have in common? Each is an example of how an iconic company was defined by a particular moment of fame or notoriety; these notable and fascinating accounts are as relevant today to understanding the intricacies of corporate life as they were when the events happened.
Stories about Wall Street are infused with drama and adventure and reveal the machinations and volatile nature of the world of finance. John Brooks’s insightful reportage is so full of personality and critical detail that whether he is looking at the astounding market crash of 1962, the collapse of a well-known brokerage firm, or the bold attempt by American bankers to save the British pound, one gets the sense that history repeats itself.
Five additional stories on equally fascinating subjects round out this wonderful collection that will both entertain and inform readers . . . Business Adventures is truly financial journalism at its liveliest and best."
genre 1 = "Science fiction is a genre of fiction dealing with imaginative content such as futuristic settings, futuristic science and technology, space travel, time travel, faster than light travel, parallel universes, and extraterrestrial life. It often explores the potential consequences of scientific and other innovations, and has been called a "literature of ideas".[1] Authors commonly use science fiction as a framework to explore politics, identity, desire, morality, social structure, and other literary themes."
How can I get a similarity score for the description of each book against the description of the science fiction genre like pg_trgm using an R script?
How about something like this?
library(textcat)
?textcat_xdist
# Compute cross-distances between collections of n-gram profiles.
round(textcat_xdist(
  list(
    text1 = "hello there",
    text2 = "why hello there",
    text3 = "totally different"
  ),
  method = "cosine"
), 3)
#       text1 text2 text3
# text1 0.000 0.078 0.731
# text2 0.078 0.000 0.739
# text3 0.731 0.739 0.000
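If you want something closer to pg_trgm itself, here is a rough sketch based on pg_trgm's documented behaviour (not verified against Postgres): lowercase, strip non-alphanumerics, pad each word with two leading spaces and one trailing space, extract character trigrams, and take the Jaccard similarity of the two trigram sets.

pg_trgm_sim <- function(a, b) {
  trigrams <- function(s) {
    s <- tolower(gsub("[^[:alnum:] ]", "", s))
    words <- strsplit(s, "\\s+")[[1]]
    words <- words[nzchar(words)]
    padded <- paste0("  ", words, " ")  # pg_trgm-style word padding
    unique(unlist(lapply(padded, function(w)
      substring(w, 1:(nchar(w) - 2), 3:nchar(w)))))
  }
  ta <- trigrams(a)
  tb <- trigrams(b)
  length(intersect(ta, tb)) / length(union(ta, tb))
}
pg_trgm_sim("hello there", "why hello there")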

which one? permutation or combination?

Say we have a bookshelf that can fit 6 books. We want 4 computer science books and 2 physics books, but the computer science books should be together and the physics books should also be together. We have 8 computer science and 6 physics books in total. In how many ways can we do that?
I believe that the answer is like this:
C(8,4)*C(6,2) + C(6,2)*C(8,4)
but my instructor solved it this way (by P I mean permutation):
P(8,4)*P(6,2) + P(6,2)*P(8,4)
Could you please tell me which one is right?
Your answer is correct if you don't care what order the books go on the shelf within each group. Your instructor's answer is correct if you count different orderings of the books within each of the two groups as different ways of filling the bookshelf. A worked check of both follows.
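A quick check of both counts in R (the factor of 2 is the two block orders, CS-first or physics-first, which is what summing the two identical products above amounts to):

choose(8, 4) * choose(6, 2) * 2   # unordered within groups: 70 * 15 * 2 = 2100
perm <- function(n, k) factorial(n) / factorial(n - k)
perm(8, 4) * perm(6, 2) * 2       # ordered within groups: 1680 * 30 * 2 = 100800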
