I have unstructured data (screenshots of an app) and semi-structured data (screen dump files), and I chose to store them in HBase. My goal is to find defects or issues in the app (meaningful data). Now I'd like to apply data mining to this data. Is that a kind of text mining, and how can I apply data mining techniques to it?
To begin with, you can use a rule-based approach, where you define a set of rules that detect the defect scenarios.
Then you can prepare a training data set that contains many instances of defect and non-defect scenarios. In this step, you would manually tag each screenshot or screen dump file you collect as defect or non-defect.
Then you can train a classifier on this training data. The classifier tries to generalize from the training samples in order to predict the output label for samples it has not seen before.
Since your input is non-standard, you might need some preprocessing to convert it into a standard form. For example, to process the screenshots you might need image processing, OCR, or computer vision libraries.
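As a rough illustration of the classifier step, here is a minimal R sketch. It assumes the screen dump text has already been extracted into a data frame named screens with a text column and a manually tagged label column (all names are hypothetical), and it uses caret with a random forest; it sketches the general idea, not a production pipeline.

library(tm)      # simple bag-of-words features from the screen dump text
library(caret)   # classifier training

# Hypothetical input: 'screens' has columns 'text' (screen dump contents)
# and 'label' (factor: "defect" / "non_defect"), tagged by hand.
corpus <- VCorpus(VectorSource(screens$text))
dtm    <- DocumentTermMatrix(corpus,
                             control = list(removePunctuation = TRUE,
                                            stopwords = TRUE))
features <- as.data.frame(as.matrix(dtm))
colnames(features) <- make.names(colnames(features))

# Train a classifier that generalizes from the tagged examples.
set.seed(42)
fit <- train(x = features, y = screens$label, method = "rf")

# New, untagged screen dumps would be vectorized the same way and scored with
# predict(fit, newdata = new_features).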
My goal is to deploy a Mask R-CNN model, trained with the well-known Matterport repo, with NVIDIA DeepStream.
To do so, I first have to convert the generated .h5 model into a .uff. This operation is described here.
After the conversion, I ran the generated .uff model with TensorRT and DeepStream, and its performance is very poor compared to the .h5 model (it almost never detects/masks the objects).
Before the conversion, I made the corresponding changes to handle NCHW models and configured the number of classes and the backbone (in this case resnet50).
I don't know how to continue. Any advice would really help me. Thanks!
To solve the problem, you must use the same configuration for the training and for the conversion.
In particular, since most models start from transfer learning on the pretrained COCO model, you have to use that model's exact config.
In addition, the input image sizes have to be consistent with the training configuration.
This is a long shot and more of a code-design question from a rookie like me, but I think it has real value for real-world applications.
The core questions are:
Can I save a trained ML model, such as a Random Forest (RF), in R and call/use it later without needing to reload all the data used to train it?
When, in real life, I have a massive folder with hundreds of thousands of data files to be tested, can I load the model I saved somewhere in R and have it read the unknown files one by one (so I am not limited by RAM size), perform regression/classification analysis on each file as it is read in, and store ALL of the output together in a single file?
For example,
Suppose I have 100,000 CSV files of data in a folder, and I want to use 30% of them as the training set and the rest as the test set for a Random Forest (RF) classification.
I can select the files of interest and call them "control files". Then I use fread(), randomly sample 50% of the data in those files, call the caret or randomForest library, and train my "model":
model <- train(x = x, y = y, method = "rf")  # caret interface: x = predictors, y = labels
Now, can I save the model somewhere so I don't have to load all the control files each time I want to use it?
Then I want to apply this model to all the remaining CSV files in the folder, reading those files one by one when applying the model instead of reading them all in at once, because of the RAM issue.
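A minimal sketch of what that workflow could look like, assuming a training data frame train_df with an outcome column named label and a folder of CSV files (all names hypothetical); saveRDS()/readRDS() persist the fitted caret object so the training data never has to be reloaded.

library(caret)
library(data.table)

# Train once on the sampled "control" data; 'label' is the hypothetical outcome.
model <- train(label ~ ., data = train_df, method = "rf")

# Persist the fitted model; the control files are no longer needed afterwards.
saveRDS(model, "rf_model.rds")

# Later, possibly in a fresh R session: reload only the model.
model <- readRDS("rf_model.rds")

# Score the remaining CSV files one at a time to stay within RAM limits.
test_files <- list.files("data_folder", pattern = "\\.csv$", full.names = TRUE)
for (f in test_files) {
  dt    <- fread(f)                              # read a single file
  preds <- predict(model, newdata = dt)
  fwrite(data.table(file = f, prediction = preds),
         "all_predictions.csv", append = TRUE)   # accumulate results in one file
}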
In a GBM model, the following parameters are used:
col_sample_rate
col_sample_rate_per_tree
col_sample_rate_change_per_level
I understand how the sampling works and how many variables get considered for splitting at each level of every tree. I am trying to understand how many times each feature gets considered when making a decision. Is there a way to easily extract, from the model object, all the samples of features used when making the splitting decisions?
Referring to the explanation provided by H2O (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/col_sample_rate.html), is there a way to know which 60 randomly chosen features were considered for each split?
Thank you for your help!
If you want to see which features were used at a given split in a given tree, you can navigate the H2OTree object.
For R, see the documentation here and here.
For Python, see the documentation here.
You can also take a look at this blog (if the link ever dies, just do a Google search for the H2OTree class).
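A rough R sketch of that navigation, assuming an existing H2O session with an already-trained GBM named gbm_model (a hypothetical name); the @features slot of the H2OTree records the feature actually chosen at each split, with NA entries for leaf nodes.

library(h2o)   # assumes a running H2O cluster with 'gbm_model' trained in it

# Fetch one tree from the trained GBM; for multinomial models you would also
# pass tree_class.
tree <- h2o.getModelTree(model = gbm_model, tree_number = 1)

# Per-node split features; NA marks leaf nodes.
split_features <- tree@features
table(na.omit(split_features))            # feature usage within this tree

# Aggregate over all trees to see how often each feature is split on overall.
n_trees <- gbm_model@allparameters$ntrees
all_splits <- unlist(lapply(seq_len(n_trees), function(i) {
  na.omit(h2o.getModelTree(model = gbm_model, tree_number = i)@features)
}))
sort(table(all_splits), decreasing = TRUE)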
I don't know if I would call this easy, but the MOJO tree visualizer spits out a Graphviz .dot data file, which is then turned into a visualization. This file has the information you are interested in.
http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/overview-summary.html#viewing-a-mojo
Is it best to split your data into training and test sets before doing any exploratory data analysis, so that all exploration is based solely on the training data, or should you explore the full data set first?
I'm working on my first full machine learning project (a recommendation system for a course capstone project) and am looking for clarification on order of operations. My rough outline is to import and clean, do exploratory analysis, train my model, and then evaluate on a test set.
I am doing exploratory data analysis now - nothing special initially, just starting with variable distributions and whatnot. But I am not sure: should I split my data into training and test sets before or after exploratory analysis?
I don't want to potentially contaminate algorithm training by inspecting the test set. However, I also don't want to miss visual trends that might reflect real signal that my poor human eye might not see after filtering, and thus potentially miss investigating an important and relevant direction while designing my algorithm.
I checked other threads, like this, but the ones I found seem to ask more about things like regularization or actual manipulation of the original data. The answers I found were mixed but prioritized splitting first. However, I don't plan to do any actual manipulation of the data before splitting it (beyond inspecting distributions and potentially doing some factor conversions).
What do you do in your own work and why?
Thanks for helping a new programmer!
To answer this question, we should remind ourselves of why, in machine learning, we split data into training, validation and testing sets (see also this question).
Training sets are used for model development. We often carefully explore this data to get ideas for feature engineering and the general structure of the machine learning model. We then train the model using the training data set.
Usually, our goal is to generate models that will perform well not only on the training data, but also on previously unseen data. Therefore, we want to avoid models that capture the peculiarities of the data we have available now rather than the general structure of the data we will see in the future ("overfitting"). To do so, we assess the quality of the models we're training by evaluating their performance on a different set of data, the validation data, and choose the model that performs best on the validation data.
Having trained our final model, we often want an unbiased estimate of its performance. Since we have already used the validation data in the process of model development (we chose the model that performed best on the validation data), we cannot be sure that our model will perform equally well on unseen data. So, to assess model quality, we test performance using a new batch of data, the testing data.
This discussion gives the answer to your question: we should not use the testing (or validation) data set for exploratory data analysis. If we did, we would run the risk of overfitting the model to the peculiarities of the data we have, for example by engineering features that happen to work well for the testing data. At the same time, we would lose the ability to get an unbiased estimate of our model's performance.
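In practice that means making the split the very first step, before any plotting or summarizing. A minimal R sketch, assuming a data frame df with the outcome in a hypothetical column named target:

library(caret)

set.seed(123)
train_idx <- createDataPartition(df$target, p = 0.8, list = FALSE)
train_df  <- df[train_idx, ]
test_df   <- df[-train_idx, ]   # set aside; untouched until the final evaluation

# All exploratory work happens on the training split only.
summary(train_df)
hist(train_df$some_numeric_feature)   # 'some_numeric_feature' is a placeholder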
I would take the problem the other way round: is it bad to use the test set?
The objective of modeling is to end up with a model with low variance (and small bias): that is why the test set keeps a chunk of data aside to assess how your model behaves on new data (i.e., its variance). If you use the test set during modeling, you have nothing left for that purpose, and you are overfitting your data.
The objective of EDA is to understand the data you're working with: the distributions of the features, their relationships, their dynamics, etc. If you leave your test set in the data, is there a risk of "overfitting" your understanding of the data? If that were the case, you would observe in, say, 70% of your data some properties that do not hold for the remaining 30% (the test set). Given that the split is random, that is practically impossible, unless you have been extremely unlucky.
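A quick way to check that claim is to compare a feature's distribution across a random split; the sketch below assumes a data frame df with a hypothetical numeric column x.

set.seed(7)
idx   <- sample(nrow(df), size = 0.7 * nrow(df))
train <- df[idx, ]
test  <- df[-idx, ]

# With a random 70/30 split, the two subsets should look essentially the same.
summary(train$x)
summary(test$x)
ks.test(train$x, test$x)   # two-sample Kolmogorov-Smirnov test of the distributions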
From my understanding of the machine learning pipeline, exploratory data analysis should be done before splitting the data into train and test sets.
Here are my reasons:
The data may not be clean at the beginning. It might have missing values, mismatched data types, and outliers.
You need to understand every feature in relation to the target variable in the dataset. This will help you understand the importance of each feature with respect to the business problem and will help you derive additional features as well.
Data visualization will also help you get insights from the dataset.
Once the above operations are done, we can split the dataset into train and test sets, because the features must be similar in both train and test.
Can someone tell me whether the steps I have written below reflect how it is done in real-world predictive modelling?
1. The source is an Oracle database.
2. Use Apache Sqoop to bring the data into HDFS (I use --query to bring the features into HDFS). Here, I am confused about whether to process the data further in HDFS or to bring the data directly into Hive.
3. Access the data from Hive or HDFS.
4. Data manipulation and pruning using R.
5. Sample the data into training and test sets.
6. Build the model using R and save it as PMML.
7. Evaluate the model using a ROC curve or AUC.
8. Deploy the model.
9. Predict values using new datasets.
10. Visualize the new values using Tableau.
Please let me know whether it is best practice to make Hive the source of the training and test data, or to process files directly from HDFS to build the model and store the results in Hive.
Which approach is adopted in a real production environment?
The steps above seem quite strange on several levels. If your data is in Oracle, you can likely do your modeling in it or another RDBMS; there is no need to lose structure going to HDFS and then regain it in Hive. There are several analytically focused columnar databases that could do quite well here. Also, the PMML step seems unnecessary: compliant PMML is only available for a few models, and I have only seen it used for linear regressions. If your data is big enough that it needs Hive, using R is possible but may not be the best choice; using R with data that size (out of core) is fairly advanced R.
In short, there is likely a scenario where that set of steps is right, but it raises a lot of issues for me. Ask a lot of questions before proceeding.
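If the PMML route is kept despite that caveat, here is a minimal R sketch of steps 6 and 7 from the question, assuming hypothetical train_df/test_df data frames (pulled from Hive or HDFS) with a binary factor outcome named label, and the randomForest, pmml, XML, and pROC packages; as noted above, PMML export only covers a limited set of model types.

library(randomForest)
library(pmml)    # PMML export; r2pmml is an alternative with broader coverage
library(XML)     # saveXML()
library(pROC)    # ROC / AUC

# Step 6: train the model and export it as PMML.
set.seed(1)
rf_fit <- randomForest(label ~ ., data = train_df, ntree = 200)
saveXML(pmml(rf_fit), file = "rf_model.pmml")

# Step 7: evaluate on the held-out test data with a ROC curve and AUC.
probs   <- predict(rf_fit, newdata = test_df, type = "prob")[, 2]
roc_obj <- roc(test_df$label, probs)
auc(roc_obj)
plot(roc_obj)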