Training data recommendations for text classification using Azure Machine Learning - azure-machine-learning-studio

We want to work on a text classification problem using Azure Machine Learning. The problem we have is that the client has ONLY approximately 1,300 rows of data for training.
We have two questions:
Considering the few rows available for training, do you have any recommendations on the training data?
Any suggestions on how to handle a problem like this in AML?
The attachment shows the training data distribution across the 15 categories.
Update:
We have added all categories with small data volumes into a super-category, so the training data now has only 5 categories. Even then, we are getting an overall accuracy of 30%. Any ideas on how we can improve the overall accuracy of the AML model?
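For what it's worth, here is a minimal sketch (in R rather than AML Studio) of the kind of super-category merge described above, followed by a stratified split so that every remaining category is represented in both train and test; the data frame `df`, the `label` column, and the 100-row cutoff are hypothetical.

```r
# Minimal sketch: collapse sparse categories and split with stratification.
# `df`, its `label` column, and the 100-row cutoff are hypothetical.
library(caret)

counts <- table(df$label)
small  <- names(counts[counts < 100])

# Collapse the sparse categories into a single "other" super-category
df$label <- factor(ifelse(as.character(df$label) %in% small,
                          "other", as.character(df$label)))

# Stratified 80/20 split keeps the class proportions in both sets
set.seed(7)
idx   <- createDataPartition(df$label, p = 0.8, list = FALSE)
train <- df[idx, ]
test  <- df[-idx, ]
```

With only ~1,300 rows, a repeated stratified cross-validation estimate of accuracy will also be steadier than a single train/test split.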

Related

Unbalanced classification models with binary response

I'm working on a churn model; the main idea is to predict whether a customer will churn within 30 days.
I've been struggling with my dataset: I have 100k rows and my target variable is unbalanced, 95% no churn and 5% churn.
I'm trying GLM and RF. If I train both models on the raw data, I don't get any churn predictions, so that doesn't work for me. I have tried balancing by taking all churners and the same number of non-churners (50% churn, 50% no churn), training on that and then testing on my data, and I get a lot of churn predictions for customers who don't actually churn. I tried oversampling, undersampling, ROSE, and SMOTE, and it seems that nothing is working for me.
At best, both models predict at most 20% of all my churners, so my gain and lift are not that good. I think I've tried everything, but I don't get more than 20% of what I need.
I have customer behavior variables, personal information, and more. I also did an exploratory analysis, calculating the percentage of churn per age, sex, and behavior, and I saw that every group has roughly the same churn percentage, so I'm thinking that maybe I lack variables that separate the groups better (this last idea is just a personal hunch).
Thank you everyone, greetings.
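For reference, a minimal sketch of the balanced-training setup described above (50/50 downsampling plus a GLM), scoring the untouched test set and choosing the probability cutoff explicitly; the data frame `df`, the `churn` column, and its "no"/"yes" levels are hypothetical.

```r
# Minimal sketch: 50/50 downsampling + GLM, with an explicit probability cutoff.
# `df`, its `churn` factor column ("no"/"yes"), and the 0.5 cutoff are hypothetical.
library(caret)

set.seed(42)
idx   <- createDataPartition(df$churn, p = 0.7, list = FALSE)
train <- df[idx, ]
test  <- df[-idx, ]

# 50/50 downsample of the majority class, as described in the question
train_bal <- downSample(x = train[, setdiff(names(train), "churn")],
                        y = train$churn, yname = "churn")

# Simple logistic regression on the balanced data
fit <- glm(churn ~ ., data = train_bal, family = binomial)

# Score the untouched test set; tune the cutoff rather than assuming 0.5
p    <- predict(fit, newdata = test, type = "response")
pred <- factor(ifelse(p > 0.5, "yes", "no"), levels = levels(test$churn))
confusionMatrix(pred, test$churn, positive = "yes")
```

Sweeping the cutoff (or comparing models on gain and lift at a fixed contact depth) often matters as much here as the choice between GLM and RF.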

How can Keras predict sequences of sales (individually) for 11,106 distinct customers, each a series of varying length (anywhere from 1 to 15 periods)?

I am approaching a problem that Keras must offer an excellent solution for, but I am having problems developing an approach (because I am such a neophyte concerning anything in deep learning). I have sales data. It contains 11,106 distinct customers, each with their own time series of purchases of varying length (anywhere from 1 to 15 periods).
I want to develop a single model to predict each customer's purchase amount for the next period. I like the idea of an LSTM, but clearly I cannot make one for each customer; even if I tried, there would not be enough data for an LSTM in any case, since the longest individual time series only has 15 periods.
I have used types of Markov chains, clustering, and regression in the past to model this kind of data. I am asking the question here, though, about what type of model in Keras is suited to this type of prediction. A complication is that all customers can be clustered by their overall patterns. Some belong together based on similarity; others do not; e.g., some customers spend with patterns like $100-$100-$100, others like $100-$100-$1000-$10000, and so on.
Can anyone point me to a type of sequential model supported by Keras that might handle this well? Thank you.
I am trying to achieve this in R. I haven't been able to build a model that gives me more than about 0.3 accuracy.
I don't think the main difficulty comes from which model to use so much as from how to frame the problem.
As you mention, "WHO" is spending the money seems as relevant as their past transactions in knowing how much they will likely spend.
But you cannot train 10k+ models, one for each customer, either.
Instead, I would suggest clustering your customer base and fitting one model per cluster, using all the time series of the customers in that cluster combined to train the same model.
This would allow each model to learn the spending pattern of that particular group.
For that you can use an LSTM or another RNN model.
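As a rough illustration of that suggestion, here is a sketch in R with the keras package: pad the series to a common length, cluster customers by simple summary features of their spending pattern, and fit one small LSTM per cluster; `series_by_customer`, the 4 clusters, and the layer sizes are all assumptions.

```r
# Sketch: cluster customers, then fit one small LSTM per cluster.
# `series_by_customer` (a list of numeric purchase vectors, 1-15 periods each),
# the 4 clusters, and the layer sizes are hypothetical.
library(keras)

max_len <- 15

# Left-pad every series with zeros so they share a common length
pad <- function(x) c(rep(0, max_len - length(x)), x)
X   <- t(sapply(series_by_customer, pad))

# Cluster customers by crude summary features of their spending pattern
feats    <- cbind(avg = rowMeans(X), peak = apply(X, 1, max))
clusters <- kmeans(scale(feats), centers = 4)$cluster

# One model per cluster: inputs are the first 14 periods, target is the last
models <- list()
for (k in sort(unique(clusters))) {
  Xk <- X[clusters == k, , drop = FALSE]
  x  <- array(Xk[, 1:(max_len - 1)], dim = c(nrow(Xk), max_len - 1, 1))
  y  <- Xk[, max_len]

  model <- keras_model_sequential() %>%
    layer_lstm(units = 16, input_shape = c(max_len - 1, 1)) %>%
    layer_dense(units = 1)

  model %>% compile(loss = "mse", optimizer = "adam")
  model %>% fit(x, y, epochs = 20, batch_size = 32, verbose = 0)

  models[[as.character(k)]] <- model
}
```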
Hi, here's my suggestion, and I will edit it later to provide more information.
Since it's a sequence problem, you should use RNN-based models such as LSTMs or GRUs.

Merging tree models from two random forest models into one random forest model in H2O in R

I am relatively new to the machine learning ocean, so please excuse me if some of my questions are really basic.
Current situation: the overall goal is to improve some code using the h2o package in R, running on a supercomputer cluster. However, the data is so large that a single node with h2o takes more than a day, so we have decided to use multiple nodes to run the model. I came up with an idea:
(1) Distribute the work so that each node builds (nTree/num_node) trees and saves them into a model;
(2) run this on the cluster so that each node contributes (nTree/num_node) of the trees in the forest;
(3) merge the trees back together to re-form the original forest, and average the measurement results.
I later realized this could be risky, but I cannot find an actual statement for or against it, since I am not a machine-learning-focused programmer.
Questions:
If this way of handling a random forest carries some risk, please point me to a link so I can get a basic idea of why it is not right.
If this is actually an "OK" way to do it, what should I do to merge the trees? Is there a package or method I can borrow?
If this is actually a solved problem, please point me to a link; I may have searched the wrong keywords. Thank you!
The concrete example with real numbers is:
I have a random forest task with 80k rows and 2k columns, and I want 64 trees in total. What I have done is put 16 trees on each node, each running on the whole dataset, so each of the four nodes comes up with its own RF model. I am now trying to merge the trees from each model into one big RF model and average the measurements (from each of those four models).
There is no need to merge the models. Unlike with boosting methods, every tree in a Random Forest is grown independently (just don't set the same seed prior to kicking off RF on each node!).
You are basically doing what Random Forest does on its own, which is to grow X independent trees and then average across the votes. Many packages provide an option to specify the number of cores or threads, in order to take advantage of this feature of RF.
In your case, since you have the same number of trees per node, you'll get 4 "models" back, but those are really just collections of 16 trees. To use it, I'd just keep the 4 models separate and when you want a prediction, average the prediction from each of the 4 models. Assuming you're going to be doing that more than once, you could write a small wrapper function to predict with the 4 models and average the output.
80k rows by 2k columns is not overly large and should not take that long to train an RF model.
It sounds like something unexpected is happening.
While you can try to average models if you know what you are doing, I don't think it should be necessary in this case.
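For illustration, a sketch of that wrapper idea with h2o in R (shown serially here rather than on separate nodes): train four independent 16-tree forests with different seeds, keep them separate, and average their predictions; the frame and column names are hypothetical, and the averaging shown assumes a regression target.

```r
# Sketch: four separate 16-tree forests, predictions averaged by a wrapper.
# `train_df`, `test_df`, and the "target" column are hypothetical; averaging
# raw predictions like this assumes a regression target.
library(h2o)
h2o.init()

train_hex <- as.h2o(train_df)
test_hex  <- as.h2o(test_df)
y <- "target"
x <- setdiff(names(train_df), y)

# Four independent 16-tree forests with different seeds (64 trees in total)
models <- lapply(1:4, function(i) {
  h2o.randomForest(x = x, y = y, training_frame = train_hex,
                   ntrees = 16, seed = 1000 + i)
})

# Wrapper: predict with each model and average the outputs
predict_avg <- function(models, newdata) {
  preds <- lapply(models, function(m) {
    as.data.frame(h2o.predict(m, newdata))$predict
  })
  Reduce(`+`, preds) / length(preds)
}

avg_pred <- predict_avg(models, test_hex)
```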

Training a model with batches of training data in R

I am new to R and data analysis.
I am hitting a wall because my hardware is not able to process the whole training set when computing a model.
I was thinking: with the caret package, am I able to train the model by breaking the training data into batches, i.e. training the model with the first 1,000 rows, then the next 1,000 rows, and so on? I would then be able to trim the model at every stage to save memory.
Will the model be "updated" with every batch of training data it is fed?
I know this method is known as sequential training, but I wasn't able to find a practical example/case study.
Hope to get some guidance on this. Thanks.
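To my knowledge, caret's train() fits on the data it is given in one go rather than updating a model batch by batch, so here is a sketch of chunk-wise fitting with the biglm package instead, which can update a linear model with additional chunks of data; `big_df`, the formula, and the 1,000-row chunk size are hypothetical.

```r
# Sketch: chunk-wise ("sequential") fitting with biglm, which keeps only
# compact sufficient statistics in memory. `big_df`, the formula, and the
# 1,000-row chunk size are hypothetical.
library(biglm)

chunk_size <- 1000
n          <- nrow(big_df)
starts     <- seq(1, n, by = chunk_size)

# Fit on the first chunk, then update with each subsequent chunk
fit <- biglm(y ~ x1 + x2, data = big_df[1:min(chunk_size, n), ])
for (s in starts[-1]) {
  chunk <- big_df[s:min(s + chunk_size - 1, n), ]
  fit   <- update(fit, chunk)   # second argument is the additional data
}

summary(fit)
```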

Build GBM classification model with customer post-stratification weights

I am attempting to produce a classification model based on qualitative survey data. About 10K of our customers were surveyed, and as a result a segmentation model was built and each customer was subsequently categorised into 1 of 8 customer segments. The challenge now is to classify the TOTAL customer base into those segments. As only certain customers responded, the researcher used overall demographics to apply post-stratification weights (or frequency weights).
My task is now to use our customer data as explanatory variables on this 10K in order to build a classification model for the whole base.
To handle the customer weights I simply duplicated each customer record according to its frequency weight, and the data set exploded to about 72K rows. I then split this data into train and test sets and used the R caret package to train a GBM; using the final chosen model, I classified my hold-out test set.
I was getting 82% accuracy and thought the results were too good to be true. After thinking about it, I believe the issue is that the model is inadvertently seeing records in train that are exactly the same as records in test (some records might be duplicated up to 10 times).
I know that the glm function allows you to supply a vector of weights via its weights parameter, but my question is how to do this with other machine learning algorithms, such as GBM or random forests, in R?
Thanks
You can use case weights with gbm and train. In general, the list of models in caret that can use case weights is here.
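For example, a minimal sketch of passing the weights straight to train() with method = "gbm" instead of duplicating rows; `survey_df`, the `segment` outcome (a factor with 8 levels), and the `freq_weight` column are hypothetical names.

```r
# Sketch: case weights passed to caret::train with a GBM, instead of
# duplicating rows. `survey_df`, `segment` (a factor with 8 levels), and
# `freq_weight` are hypothetical names.
library(caret)
library(gbm)

set.seed(1)
ctrl <- trainControl(method = "cv", number = 5)

fit <- train(segment ~ .,
             data      = subset(survey_df, select = -freq_weight),
             method    = "gbm",
             trControl = ctrl,
             weights   = survey_df$freq_weight,
             verbose   = FALSE)

fit
```

Dropping the weight column from the predictor set keeps it from leaking into the model itself.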
