Manually Specifying a Topic Model in R

I have a corpus of text in which each line of the CSV file uniquely specifies a "topic" I am interested in. If I were to run a topic model on this corpus using an LDA or Gibbs method from either the topicmodels or lda package, as expected I would get multiple topics per "document" (a line of text in my CSV which I have a priori defined to be my unique topic of interest). I understand that this is a result of the topic model's algorithm and the bag-of-words assumption.
What I am curious about, however, is this:
1) Is there a pre-fab'd package in R that is designed for the user to specify the topics using the empirical word distribution? That is, I don't want the topics to be estimated; I want to tell R what the topics are. I suppose I could run a topic model with the correct number of topics, use the structure of that object and then overwrite its contents. I was just hoping there was an easier or more obvious way that I'm just not seeing at this point.
Thoughts?
Edit: I just thought about the alpha and beta parameters, which control the topic/term distributions within the LDA modeling algorithm. What settings might I be able to use that would force the model to find only one topic per document? Or is there a setting that would allow that to occur?
If these seem like silly questions I understand - I'm quite new to this particular field and I am finding it fascinating.
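For reference, here is a minimal sketch of the alpha experiment described in the edit above, using the topicmodels package. The document-term matrix dtm and the topic count k are placeholders; a very small alpha only pushes each document toward one dominant topic, it does not force exactly one.
library(topicmodels)

k <- 25  # placeholder: the number of a priori topics
fit <- LDA(dtm, k = k, method = "Gibbs",
           control = list(alpha = 0.01, iter = 2000, burnin = 500))

# Per-document topic proportions; with a small alpha these tend to
# concentrate heavily on a single topic per row.
round(posterior(fit)$topics, 3)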

What are you trying to accomplish with this approach? If you want to tell R what the topics are so it can predict the topics in other lines or documents, then RTextTools may be a helpful package.
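For instance, a rough sketch of that supervised route with RTextTools might look like the following; comments and topic_labels are placeholders for the text column and the manually assigned topic codes from the CSV.
library(RTextTools)

dtm <- create_matrix(comments, language = "english",
                     removeStopwords = TRUE, stemWords = FALSE)
container <- create_container(dtm, topic_labels,
                              trainSize = 1:80, testSize = 81:100,
                              virgin = FALSE)
model <- train_model(container, "SVM")
results <- classify_model(container, model)
head(results)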

Related

Extract sample of features used to build each tree in H2O

In a GBM model, the following parameters are used:
col_sample_rate
col_sample_rate_per_tree
col_sample_rate_change_per_level
I understand how the sampling works and how many variables get considered for splitting at each level of every tree. I am trying to understand how many times each feature gets considered for making a decision. Is there a way to easily extract, from the model object, the sample of features considered at each splitting decision?
Referring to the explanation provided by H2O, http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/col_sample_rate.html, is there a way to know the 60 randomly chosen features for each split?
Thank you for your help!
If you want to see which features were used at a given split in a given tree, you can navigate the H2OTree object.
For R, see the documentation here and here.
For Python, see the documentation here.
You can also take a look at this blog post (if the link ever dies, just do a Google search for the H2OTree class).
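For example, a rough sketch in R, assuming gbm_model is an already-trained H2O GBM and that your h2o version exposes h2o.getModelTree(); the slot names on the returned H2OTree object may differ slightly between releases.
library(h2o)

count_splits <- function(model, n_trees) {
  feats <- unlist(lapply(seq_len(n_trees), function(i) {
    tree <- h2o.getModelTree(model, tree_number = i)
    tree@features                         # split column for each node, NA for leaves
  }))
  sort(table(feats), decreasing = TRUE)   # how often each feature was split on
}

count_splits(gbm_model, n_trees = 50)     # 50 is a placeholder for the model's ntrees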
I don’t know if I would call this easy, but the MOJO tree visualizer spits out a graphviz dot data file which is turned into a visualization. This has the information you are interested in.
http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/overview-summary.html#viewing-a-mojo

Is it possible to make SVM probability predictions without tm and RTextTools using e1071 in R?

I am trying to create a topic classifier from an employee satisfaction survey. The survey contains several commentary fields, and I therefore want to produce an effective way of classifying what a single comment is about, and later also whether it is positive or negative (pretty standard sentiment analysis).
I already have a sample data from last years survey, where comments have been given a category manually.
The data is structured in a CSV file with three columns:
The document (or comment) - The topic - The sentiment
One example could be:
Document: I am afraid of violence from our customers, since my position does not have sufficient security
Topic: Violence
Sentiment: Negative
(Very crude example, but bear with me)
My tool for making this classifier is RStudio, but I only have access to a limited number of packages. I do not have access to tm or RTextTools, which are the packages I usually use when I am doing projects outside of work. I pretty much only have access to e1071, which is why I figured a support vector machine might do the trick. I have had bad experiences with Naive Bayes when dealing with text analytics, but I am of course open to any advice. Is it possible at all to do text mining without tm or RTextTools? I also have access to the NLP and tau packages.
From the help page of predict.svm:
# S3 method for svm
predict(object, newdata, decision.values = FALSE,
        probability = FALSE, ..., na.action = na.omit)
You could use the option probability by setting it to TRUE,
i.e. predict(foo, bar, probability = TRUE).
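A minimal sketch of the whole round trip, assuming train_dtm and test_dtm are plain numeric matrices of word counts (built without tm) and topics is a factor of the manual labels. Note that the model itself must also be fitted with probability = TRUE, otherwise the predicted probabilities are not available.
library(e1071)

fit  <- svm(x = train_dtm, y = topics, kernel = "linear", probability = TRUE)
pred <- predict(fit, newdata = test_dtm, probability = TRUE)

# The class probabilities are attached to the prediction object as an attribute.
attr(pred, "probabilities")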

How to do Language Modeling using HTK

I am confused about how to use HTK for language modeling.
I followed the tutorial example from the Voxforge site
http://www.voxforge.org/home/dev/acousticmodels/linux/create/htkjulius/tutorial
After training and testing I got around 78% accuracy. I did this for my native language. Now I have to use HTK for language modeling.
Is there any tutorial available for doing the same? Please help me.
Thanks
If I understand your question correctly, you are trying to change from a "grammar" to an "n-gram language model" approach. These two methods are alternative ways of specifying what combinations of words are permissible in the responses that a recognizer will return. Having followed the Voxforge process you will probably have a grammar in place.
A language model comes from the analysis of a corpus of text which defines the probabilities of words appearing together. The text corpus used can be very specialized. There are a number of analysis tools such as SRILM (http://www.speech.sri.com/projects/srilm/) and MITLM (https://github.com/mitlm/mitlm) which will read a corpus and produce a model.
Since you are using words from your native language, you will need a unique corpus of text to analyze. One way to get a text corpus would be to artificially generate a number of sentences from your existing grammar and use that as the corpus. Then, with the new language model in place, you just point the recognizer at it instead of the grammar and hope for the best.

Text classification with R and SVM. Matrix features

I am playing a bit with text classification and SVM.
My understanding is that the typical way to pick the features for the training matrix is essentially to use a "bag of words": we end up with a matrix that has as many columns as there are distinct words in our documents, and the value in each column is the number of occurrences of that word in the document (each document is represented by a single row).
That all works fine, and I can train my algorithm and so on, but sometimes I get an error like
Error during wrapup: test data does not match model !
By digging into it a bit, I found the answer in this question, Error in predict.svm: test data does not match model, which essentially says that if your model has features A, B and C, then your new data to be classified should contain columns A, B and C. Of course with text this is a bit tricky: my new documents to classify might contain words that have never been seen by the classifier in the training set.
More specifically, I am using the RTextTools library, which uses the SparseM and tm libraries internally; the object used to train the SVM is of type "matrix.csr".
Regardless of the specifics of the library my question is, is there any technique in document classification to ensure that the fact that training documents and new documents have different words will not prevent new data from being classified?
UPDATE: The solution suggested by @lejlot is very simple to achieve in RTextTools, simply by making use of the originalMatrix optional parameter of the create_matrix function. Essentially, originalMatrix should be the SAME matrix that one creates when using create_matrix for TRAINING the data. So after you have trained your data and have your models, keep the original document matrix as well; when scoring new examples, make sure to use that object when creating the new matrix for your prediction set.
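A short illustration of that originalMatrix trick; train_texts and new_texts are placeholder character vectors.
library(RTextTools)

train_matrix <- create_matrix(train_texts)   # the matrix used to train the model
new_matrix   <- create_matrix(new_texts,
                              originalMatrix = train_matrix)
# new_matrix now has exactly the same columns (vocabulary) as train_matrix,
# so predicting on it no longer triggers "test data does not match model".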
Regardless of the specifics of the library my question is, is there any technique in document classification to ensure that the fact that training documents and new documents have different words will not prevent new data from being classified?
Yes, and it is a very trivial one. Before applying any training or classification, you create a preprocessing object which is supposed to map text to your vector representation. In particular, it stores the whole vocabulary used for training. Later on you reuse the same preprocessing object on test documents, and you simply ignore words from outside the vocabulary stored before (OOV words, as they are often referred to in the literature).
Obviously there are plenty of other, more "heuristic" approaches where, instead of discarding OOV words, you try to map them to existing words (although this is less theoretically justified). In that case you would create an intermediate representation, a new "preprocessing" object that can handle OOV words (through some Levenshtein-distance mapping, etc.).
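A small base-R illustration of the vocabulary-reuse idea (no tm required); train_docs and new_docs are placeholder character vectors.
tokenize <- function(x) strsplit(tolower(x), "[^a-z]+")

# The "preprocessing object": the vocabulary fixed on the training documents.
vocab <- sort(unique(unlist(tokenize(train_docs))))

vectorize <- function(docs, vocab) {
  t(sapply(tokenize(docs), function(tokens)
    table(factor(tokens[tokens %in% vocab], levels = vocab))))  # OOV words dropped
}

train_X <- vectorize(train_docs, vocab)
new_X   <- vectorize(new_docs, vocab)   # guaranteed to have the same columns as train_X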

Rough Set-based Attribute Reduction

I tried RSAR, a free package, but I wonder if there are any other good attribute reducers out there. Even packages for R or MATLAB - any resource capable of letting me find the minimal set of attributes that classifies the data.
For example, given a set with hundreds of example emails, each described by different attributes and classified as spam or not spam, I want to find the minimal set of attributes that describes all the data, in order to discard useless information.
Considering the type of problem you describe, that is, choosing the right attributes for email classification, the best way might be to use Weka (Weka home). It has several feature-selection algorithms, which can be applied either interactively, to visualize their effect, or in conjunction with various classification algorithms, to evaluate their effect on actual classification. (Note that choosing attributes for classification without proper validation for a specific classifier might lead to less-than-optimal results in real life.)
Some relevant links:
Weka's manual regarding attribute selection
A (somewhat outdated) hands-on example
You can use the RoughSets package for R. See the description of FS.one.reduct.computation (after installing the RoughSets package).
e.g.: HIRING2Matrix is a decision table with a number of attributes, and reduct1 is the reduced set of attributes:
reduct1<- FS.one.reduct.computation(HIRING2Matrix, greedy = TRUE, power = 1)
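A slightly fuller sketch of that workflow, using the hiring example data shipped with RoughSets (the discernibility-matrix step and SF.applyDecTable follow the package documentation; adapt the decision table to your own data).
library(RoughSets)

data(RoughSetData)
hiring.dt <- RoughSetData$hiring.dt                    # a built-in decision table

disc.mat <- BC.discernibility.mat.RST(hiring.dt)       # discernibility matrix
reduct1  <- FS.one.reduct.computation(disc.mat, greedy = TRUE, power = 1)

# Keep only the attributes in the reduct (plus the decision attribute).
reduced.dt <- SF.applyDecTable(hiring.dt, reduct1, control = list(indx.reduct = 1))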
