I am totally new to neural networks and want to classify almost 6,000 images belonging to different games (gathered by IR). I followed the steps introduced in the following link, but I get the same training accuracy in every round.
Some info about the NN architecture: 2 convolutional, activation, and pooling layers; activation type: ReLU; the numbers of filters in the first and second convolutional layers are 30 and 70, respectively.
2 fully connected layers with 500 and 2 hidden units, respectively.
http://firsttimeprogrammer.blogspot.de/2016/07/image-recognition-in-r-using.html
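In code, the architecture looks roughly like this (a sketch in R mxnet, following the linked post; the kernel sizes and the input handling are placeholders rather than my exact script):

    library(mxnet)

    data  <- mx.symbol.Variable("data")
    # Conv/activation/pooling block 1: 30 filters (kernel size is a placeholder)
    conv1 <- mx.symbol.Convolution(data = data, kernel = c(5, 5), num_filter = 30)
    act1  <- mx.symbol.Activation(data = conv1, act_type = "relu")
    pool1 <- mx.symbol.Pooling(data = act1, pool_type = "max",
                               kernel = c(2, 2), stride = c(2, 2))
    # Conv/activation/pooling block 2: 70 filters
    conv2 <- mx.symbol.Convolution(data = pool1, kernel = c(5, 5), num_filter = 70)
    act2  <- mx.symbol.Activation(data = conv2, act_type = "relu")
    pool2 <- mx.symbol.Pooling(data = act2, pool_type = "max",
                               kernel = c(2, 2), stride = c(2, 2))
    # Fully connected layers: 500 hidden units, then 2 output units
    flat  <- mx.symbol.Flatten(data = pool2)
    fc1   <- mx.symbol.FullyConnected(data = flat, num_hidden = 500)
    act3  <- mx.symbol.Activation(data = fc1, act_type = "relu")
    fc2   <- mx.symbol.FullyConnected(data = act3, num_hidden = 2)
    net   <- mx.symbol.SoftmaxOutput(data = fc2)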
I had a similar problem, but for regression. After trying several things (different optimizers, varying layers and nodes, learning rates, iterations, etc.), I found that the way the initial values are set helps a lot. For instance, I used a random normal initializer with a scale of 0.2 (initializer = mx.init.normal(0.2)).
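In R mxnet this was passed to the training call roughly like so (the other arguments are placeholders, not my exact settings):

    # Assuming train_x / train_y and a network symbol `net` are already defined
    model <- mx.model.FeedForward.create(
      symbol        = net,
      X             = train_x,
      y             = train_y,
      ctx           = mx.cpu(),
      num.round     = 50,
      learning.rate = 0.01,
      initializer   = mx.init.normal(0.2),   # the initialization that made the difference
      eval.metric   = mx.metric.accuracy
    )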
I came upon this value in the blog linked below. I recommend you read it. [EDIT] An excerpt from it:
Weight initialization. Worry about the random initialization of the weights at the start of learning.
If you are lazy, it is usually enough to do something like 0.02 * randn(num_params). A value at this scale tends to work surprisingly well over many different problems. Of course, smaller (or larger) values are also worth trying.
If it doesn’t work well (say your neural network architecture is unusual and/or very deep), then you should initialize each weight matrix with the init_scale / sqrt(layer_width) * randn. In this case init_scale should be set to 0.1 or 1, or something like that.
Random initialization is super important for deep and recurrent nets. If you don’t get it right, then it’ll look like the network doesn’t learn anything at all. But we know that neural networks learn once the conditions are set.
Fun story: researchers believed, for many years, that SGD cannot train deep neural networks from random initializations. Every time they would try it, it wouldn’t work. Embarrassingly, they did not succeed because they used the “small random weights” for the initialization, which works great for shallow nets but simply doesn’t work for deep nets at all. When the nets are deep, the many weight matrices all multiply each other, so the effect of a suboptimal scale is amplified.
But if your net is shallow, you can afford to be less careful with the random initialization, since SGD will just find a way to fix it.
You’re now informed. Worry and care about your initialization. Try many different kinds of initialization. This effort will pay off. If the net doesn’t work at all (i.e., never “gets off the ground”), keep applying pressure to the random initialization. It’s the right thing to do.
http://yyue.blogspot.in/2015/01/a-brief-overview-of-deep-learning.html
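As a plain R illustration of the two quoted rules (the numbers are arbitrary and only meant to show the scales involved):

    # "0.02 * randn(num_params)" style: every weight drawn with a small fixed scale
    num_params <- 500 * 2
    w_simple   <- 0.02 * rnorm(num_params)

    # "init_scale / sqrt(layer_width) * randn" style: the scale shrinks with layer width
    init_scale  <- 0.1
    layer_width <- 500
    w_scaled    <- matrix(init_scale / sqrt(layer_width) * rnorm(layer_width * 2),
                          nrow = layer_width, ncol = 2)

    sd(w_simple)   # ~0.02
    sd(w_scaled)   # ~init_scale / sqrt(layer_width), i.e. ~0.0045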
I am using the h2o package to train a GBM for a churn prediction problem.
All I want to know is what influences the size of the fitted model saved to disk (via h2o.saveModel()), but unfortunately I wasn't able to find an answer anywhere.
More specifically, when I tune the GBM to find the optimal hyperparameters (via h2o.grid()) on 3 non-overlapping rolling windows of the same length, I obtain models whose sizes are not comparable (namely 11 MB, 19 MB, and 67 MB). The hyperparameter grid is the same, and the training set sizes are comparable.
Naturally, the resulting optimized hyperparameters differ across the 3 intervals, but I cannot see how this can produce such a difference in model sizes.
Moreover, when I train the actual models based on those hyperparameter sets, I end up with models of different sizes as well.
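For context, the workflow is roughly the following (a sketch only; the column names and grid values are placeholders, not the real ones):

    library(h2o)
    h2o.init()

    train <- h2o.importFile("window_1.csv")            # one of the rolling windows
    grid  <- h2o.grid("gbm",
                      x = setdiff(names(train), "churn"),
                      y = "churn",
                      training_frame = train,
                      hyper_params = list(max_depth = c(5, 10, 20),
                                          ntrees    = c(100, 500)))

    best <- h2o.getModel(grid@model_ids[[1]])     # sort via h2o.getGrid() for a specific metric
    h2o.saveModel(best, path = "models/")         # the on-disk size in question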
Any help is appreciated!
Thank you.
PS: I'm sorry, but I cannot share any data to make this reproducible (due to privacy restrictions).
It’s the two things you would expect: the number of trees and the depth.
But it also depends on your data: for GBM, individual trees can end up shorter than the maximum depth when the data does not support further splits.
What I would do is export MOJOs and then visualize them as described in the document below to get more details on what was really produced:
http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/index.html
Note the 60 MB range does not seem overly large, in general.
If you look at the model info you will find out things about the number of trees, their average depth, and so on. Comparing those between the three best models should give you some insight into what is making the models large.
From R, if m is your model, just printing it gives you most of that information. str(m) gives you all the information that is held.
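For example, assuming m is one of your fitted GBM models (the MOJO export below uses the standard h2o R function):

    print(m)                    # prints the model summary (number of trees, depths, ...)
    m@model$model_summary       # the summary table on its own
    str(m, max.level = 3)       # everything held in the model object

    # Export a MOJO for the visualization described in the h2o-genmodel docs above
    h2o.download_mojo(m, path = tempdir(), get_genmodel_jar = TRUE)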
I think it is worth investigating. The cause is probably that two of those data windows are relatively clear-cut, and only a few fields are needed to define the trees, whereas the third window is more chaotic (in the mathematical sense), so some deep trees get built as the algorithm tries to split it apart.
Looking into that third window more deeply might suggest some data engineering you could do that would make it easier to learn. Or it might simply be a difference in your data. E.g. one column is all NULL in your 2016 and 2017 data but not in your 2018 data, because 2018 was the year you started collecting it, and it is that extra column that allows/causes the trees to become deeper.
Finally, maybe the grid hyperparameters are unimportant as regards performance, and this is a difference due to noise. E.g. you have max_depth as a hyperparameter, but its influence on MSE is minor and noise is a large factor. These random differences could let your best model stop at depth 5 for two of your data sets (while the 2nd-best model was 0.01% worse but went to depth 20), yet go to depth 30 for your third data set (while the 2nd-best model was 0.01% worse but only went to depth 5).
(If I understood your question correctly, you've eliminated this as a possibility, as you then trained all three data sets on the same hyperparameters? But I thought I'd include it, anyway.)
I'm a first-year student in machine learning and I only recently started immersing myself in it.
So, my professor gave me a task: calculate the number of
matrix additions
matrix multiplications
matrix divisions
that are performed in the well-known convolutional neural network AlexNet.
I found some materials about it, but I'm really confused about where to start.
So, the overall structure might look like the standard AlexNet architecture diagram.
But how can I count the operations of each type separately?
It's a convolutional network; the convolutional layers limit the number of parameters, which keeps the calculations manageable while still giving good results.
This particular network is described in many places and papers, so it's not difficult to get the figures for the number of parameters and the convolutional layers involved.
But you need to start with an understanding of how a convolutional network works. I find this is a good place to start: http://cs231n.github.io/convolutional-networks/
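As a concrete starting point, here is a sketch of how the multiplication and addition counts of a single convolutional layer can be computed; the dimensions below are the commonly cited ones for AlexNet's first convolutional layer, so verify them against the paper before using them:

    # Operation counts for one convolutional layer
    conv_ops <- function(out_h, out_w, n_filters, k_h, k_w, in_channels) {
      outputs <- out_h * out_w * n_filters              # number of output values
      mults   <- outputs * k_h * k_w * in_channels      # one multiply per kernel weight
      adds    <- outputs * k_h * k_w * in_channels      # (k*k*c - 1) sums plus one bias add
      list(multiplications = mults, additions = adds)
    }

    # Commonly cited figures for AlexNet's first conv layer:
    # 96 filters of 11x11 over a 3-channel input, producing a 55x55 output map
    conv_ops(out_h = 55, out_w = 55, n_filters = 96, k_h = 11, k_w = 11, in_channels = 3)
    # roughly 105 million multiplications and about as many additions for this layer alone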
I have been trying to make a biologically accurate 2D spatial model of tissue layers, where different physiological processes happen. This includes mainly chemical reactions, diffusion and fluxes over boundaries.
I am making this model in COMSOL Multiphysics, a finite element software package that solves different physics like reaction-diffusion systems, although for my question this might not be really relevant.
In my geometry, I have really small regions between the cells of the tissue layers. These regions serve as openings where diffusion can take place between the cells (junctions). The quality of the mesh is not great here and if I want to improve the quality (mainly by introducing more elements and such), my simulation time increases drastically. The lesser quality mesh also causes convergence to take longer. I added a picture of the geometry to give an idea. I tried different meshes, all with different qualities of the elements and the number of elements ranging from 16000 to 50000.
My background in FEM is really limited and I wanted to know if I can tackle this problem in such a way that it
doesn't negatively affect the biology (keeping the tissue domain sizes, the problem setup, etc. as biologically accurate as possible),
doesn't increase the simulation time drastically, and
gives a better mesh quality.
So I really want to know what the best way to go is, since I have already thought of some things.
Can I go with the lower-quality mesh (which is not really bad, but not good either), so that I can keep the small regions for optimal biological accuracy and a relatively small computation time (and hope I won't run into convergence errors)?
But maybe there are possibilities I am missing. For instance, is it possible to make the small domains bigger and then apply some kind of correction factor to the diffusion rates? In other words, if I make a domain twice as large, do I scale the diffusion rate by half? Is that even consistent with the chemical/physical laws?
Hopefully I have made the problem somewhat clear; thank you very much in advance for the help.
Cheers,
Mesh of the tissue model
I know this thread was posted some months back, but I am unsure whether you found a solution.
To find the relationship between accuracy and computational time, run a mesh convergence analysis on your model and see how the mesh size directly affects the results you expect to obtain (pore pressure, fluid velocity, strain, etc.). This will allow you to determine the most appropriate meshing strategy for your specific problem.
Also, keep in mind that the diffusion rate of a material depends on the pore size and the permeability (by way of Darcy's law). So, depending on the assumptions you make when implementing your constitutive law and your boundary conditions, you may be able to simplify or enlarge some of the smaller domains in your model, as long as they stay within those assumptions.
I am trying to understand how a neural network can predict different outputs by learning different input/output patterns. I know that weight changes are the mode of learning, but if an input brings about weight adjustments to achieve a particular output in the backpropagation algorithm, won't this knowledge (the weight updates) be knocked off when the network is presented with a different input pattern, thus making it forget what it had previously learnt?
The key to avoiding "destroying" the network's current knowledge is to set the learning rate to a sufficiently low value.
Let's take a look at the mathematics for a perceptron:
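As a minimal sketch, assuming the standard textbook update rule w <- w + eta * (target - output) * x:

    eta <- 0.05                              # learning rate, always specified to be < 1

    # One weight update for a single training example (step activation assumed)
    perceptron_update <- function(w, x, target, eta) {
      output <- as.numeric(sum(w * x) >= 0)
      w + eta * (target - output) * x        # a small nudge toward the observed target
    }

    w <- rnorm(3, sd = 0.02)                 # small random initial weights
    w <- perceptron_update(w, x = c(1, 0.5, -0.3), target = 1, eta = eta)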
The learning rate is always specified to be < 1. This forces the backpropagation algorithm to take many small steps towards the correct setting, rather than jumping in large steps. The smaller the steps, the easier it will be to "jitter" the weight values into the perfect settings.
If, on the other hand, we used a learning rate of 1, we could start to experience trouble with convergence, as you mentioned. A high learning rate would imply that backpropagation should always prefer to satisfy the currently observed input pattern.
Trying to adjust the learning rate to a "perfect" value is unfortunately more of an art than a science. There are, of course, implementations with adaptive learning rates; refer to this tutorial from Willamette University. Personally, I've just used a static learning rate in the range [0.03, 0.1].
What I currently have:
I have a data frame with one column of factors called "Class" which contains 160 different classes. I have 1200 variables, each one being an integer and no individual cell exceeding the value of 1000 (if that helps). About 1/4 of the cells are the number zero. The total dataset contains 60,000 rows. I have already used the nearZeroVar function, and the findCorrelation function to get it down to this number of variables. In my particular dataset some individual variables may appear unimportant by themselves, but are likely to be predictive when combined with two other variables.
What I have tried:
First I tried just creating a random forest model, planning to use the varImp property to filter out the useless stuff, but I gave up after letting it run for days. Then I tried using fscaret, but that ran overnight on an 8-core machine with 64 GB of RAM (same as the previous attempt) and didn't finish. Then I tried:
Feature Selection using Genetic Algorithms. That ran overnight and didn't finish either. I also tried to make principal component analysis work, but for some reason couldn't. I have never been able to successfully do PCA within caret, which could be both my problem and my solution here. I can follow all the "toy" demo examples on the web, but I still think I am missing something for my case.
What I need:
I need some way to quickly reduce the dimensionality of my dataset so I can make it usable for creating a model. Maybe a good place to start would be an example of using PCA in caret with a dataset like mine. Of course, I'm happy to hear any other ideas that might get me out of the quicksand I am in right now.
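Something like the following is the kind of caret call I have in mind (df is a stand-in for my data frame with the factor column "Class"):

    library(caret)

    predictors <- setdiff(names(df), "Class")

    pp <- preProcess(df[, predictors],
                     method = c("center", "scale", "pca"),
                     thresh = 0.95)            # keep components explaining 95% of the variance

    df_pca <- predict(pp, df[, predictors])    # the rotated, lower-dimensional predictors
    df_pca$Class <- df$Class
    dim(df_pca)                                # far fewer columns than the original 1200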
I have done only some toy examples too.
Still, here are some ideas that do not fit into a comment.
All your attributes seem to be numeric. Maybe running the Naive Bayes algorithm on your dataset will give some reasonable classifications? It assumes that all attributes are independent of each other, but experience shows, and many scholars say, that Naive Bayes results are often still useful despite that strong assumption.
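A minimal sketch of what I mean, using the e1071 package (the data frame name and the hold-out split are placeholders):

    library(e1071)

    # df: the 60,000 x 1201 data frame with the factor column "Class"
    set.seed(1)
    train_idx <- sample(nrow(df), 0.8 * nrow(df))

    nb   <- naiveBayes(Class ~ ., data = df[train_idx, ])
    pred <- predict(nb, newdata = df[-train_idx, ])
    mean(pred == df$Class[-train_idx])         # rough hold-out accuracy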
If you absolutely MUST do attribute selection, e.g. as part of an assignment:
Did you try processing your dataset with the free GUI-based data-mining tool Weka? There is an "attribute selection" tab where you have several algorithms (and algorithm combinations) for removing irrelevant attributes at your disposal. That is an art in itself, though, and the results are not so easy to interpret.
Read this pdf as an introduction and see this video for a walk-through and an introduction to the theoretical approach.
The videos assume familiarity with Weka, but maybe they still help.
There is an RWeka interface but it's a bit laborious to install, so working with the Weka GUI might be easier.