Split a pre-trained CoreML model into two

I have a Sound Classification model from turicreate example here:
https://apple.github.io/turicreate/docs/userguide/sound_classifier/
I am trying to split this model into two and save the two parts as separate CoreML Models using coremltools library. Can anyone please guide me on how to do this?
I am able to load the model and even print out the spec of the model, but I don't know where to go from here.
import coremltools
mlmodel = coremltools.models.MLModel('./EnvSceneClassification.mlmodel')
# Get spec from the model
spec = mlmodel.get_spec()
Output should be two CoreML Models i.e. the above model split into two parts.

I'm not 100% sure on what the sound classifier model looks like. If it's a pipeline, you can just save each sub-model from the pipeline as its own separate mlmodel file.
If it's not a pipeline, it requires some model surgery. You will need to delete layers from the spec (with del spec.neuralNetworkClassifier.layers[a:b]).
You'll also need to change the inputs of the first model and the outputs of the second model to account for the deleted layers.
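For the non-pipeline case, the surgery can be sketched roughly as follows. This is an illustrative, untested sketch: the cut index is hypothetical (print the spec to choose a real one), and the interface fix-ups noted in the comments are model-specific and still up to you.

```python
import copy

def split_spec(spec, cut):
    """Split a neural-network spec into layers [0, cut) and [cut, end).

    `spec` is the object returned by mlmodel.get_spec(); `cut` is a
    hypothetical layer index chosen by inspecting the printed spec.
    For a plain (non-classifier) network, use spec.neuralNetwork
    instead of spec.neuralNetworkClassifier.
    """
    front = copy.deepcopy(spec)
    back = copy.deepcopy(spec)
    del front.neuralNetworkClassifier.layers[cut:]
    del back.neuralNetworkClassifier.layers[:cut]
    # Still required (model-specific): make the front model's declared
    # output the blob produced by its new last layer, and make the back
    # model's declared input that same blob, by editing
    # front.description.output and back.description.input.
    return front, back
```

With a real spec you would then rebuild and save each half, e.g. `coremltools.models.MLModel(front).save('part1.mlmodel')`, after fixing up the interface descriptions; note that classifier-specific fields (class labels etc.) belong only in the second half.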

Related

Getting class labels from a keras model in R

I am developing an image classification workflow which uses keras through R. The workflow will likely be run multiple times, potentially by multiple users. I save a custom-trained version of keras' Iv3 model as a .h5 file.
Once the file is saved and loaded back in with load_model_hdf5(), is there a way to see the class labels with which the model has been trained?
I understand that the classes are the alphabetized names of the folders in the training directory but there will be cases where the model is loaded on a new machine without access to the training directory.
Right now I am manually loading in the list of class labels (as strings) which is not a good solution.
Ideally, I would load in my trained model and then access a list of class labels...
Pseudocode might look like this:
model_fn <- # some example model file (.h5)
model <- load_model_hdf5(model_fn)
classes <- model$classes

Do I need to train on my own data when using a BERT model as an embedding vector?

When I try the Hugging Face models, I get the following warning message:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
And the warning message:
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
My purpose is to find a pretrained model to create embedding vectors for my text, so that they can be used in downstream tasks. I don't want to train my own pretrained model to generate the embedding vectors. In this case, can I ignore those warning messages, or do I need to continue training on my own data? In another post I learned that "Most of the official models don't have pretrained output layers. The weights are randomly initialized. You need to train them for your task." My understanding is that I don't need to train if I just want a generic embedding vector for my text based on public models, like the ones on Hugging Face. Is that right?
I am new to transformers, so please comment.
Indeed, the bert-base-uncased model is already pre-trained and will produce contextualised outputs, which should not be random. The warning only means that the checkpoint's pre-training heads (masked language modelling and next-sentence prediction) are not used when initializing BertModel; for feature extraction that is expected, and you can safely ignore it.
If you're aiming to get a vector representation for the entire input sequence, this is typically done by running your sequence through your model (as you have done) and extracting the representation of the [CLS] token.
The position of the [CLS] token may change depending on the base model you are using, but it is typically the first token position in the output.
The FeatureExtractionPipeline (documentation here) is a wrapper for the process of extracting contextualised features from the model.
from transformers import FeatureExtractionPipeline

nlp = FeatureExtractionPipeline(
    model=model,
    tokenizer=tokenizer,
)

sentence = "Hello world!"
outputs = nlp(sentence)        # nested list: [batch][token][hidden_dim]
embeddings = outputs[0]        # token embeddings for the one input sentence
cls_embedding = embeddings[0]  # the [CLS] token comes first
Some things to help verify things are going as expected:
Check that the [CLS] embedding has the expected dimensionality
Check that the [CLS] embedding produces similar vectors for similar text, and different vectors for different text (e.g. by applying cosine similarity)
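The cosine-similarity check itself needs only a few lines of plain Python. This is an illustrative sketch; the vectors passed in stand for the [CLS] embeddings you extracted above.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Sanity checks: for bert-base models the [CLS] embedding should have
# 768 dimensions, e.g.:
#   assert len(cls_embedding) == 768
# and cosine_similarity(cls_embedding_1, cls_embedding_2) should be
# higher for similar sentences than for unrelated ones.
```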
Additional References: https://github.com/huggingface/transformers/issues/1950

Large neural network model size in neuralnet R

I see very large file sizes when I try to save my trained NN model using neuralnet in R. I know one (main) reason for that is that the model object contains extra information that I may not need in order to reuse the model, such as the results (net$net.result) and some other elements that are saved with the network.
Is there any way I can remove them from the model before saving?

How to retrain model using old model + new data chunk in R?

I'm currently working on trust prediction in social networks - for obvious reasons I model this problem as a data stream. What I want to do is to "update" my trained model using the old model + a new chunk of the data stream. Classifiers that I am using are SVM, NB (e1071 implementation), a neural network (nnet) and a C5.0 decision tree.
Sidenote: I know that this solution is possible using the RMOA package by defining the "model" argument in the trainMOA function, but I don't think I can use it with those classifier implementations (if I am wrong, please correct me).
According to strange SO rules I can't post this as a comment, so be it.
The classifiers you've listed need the full data set at the time you train a model, so whenever new data comes in, you have to combine it with the previous data and retrain the model. What you are probably looking for is online machine learning. One very popular implementation is Vowpal Wabbit, which also has bindings to R.
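To make the "online" idea concrete, here is a toy pure-Python perceptron that can be updated chunk by chunk. This is only an illustrative sketch of incremental learning (not Vowpal Wabbit's actual algorithm, and the data shown is made up):

```python
class OnlinePerceptron:
    """Toy incremental learner: weights are updated one example at a
    time, so a new chunk of the stream is just another partial_fit call
    on the existing model -- no retraining from scratch."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score > 0 else 0

    def partial_fit(self, X, y):
        """Update the model in place with one chunk of examples."""
        for xi, yi in zip(X, y):
            error = yi - self.predict(xi)
            if error:
                self.w = [wi + self.lr * error * f
                          for wi, f in zip(self.w, xi)]
                self.b += self.lr * error

# First chunk of the stream:
model = OnlinePerceptron(n_features=2)
model.partial_fit([[1, 1], [2, 2], [-1, -1]], [1, 1, 0])
# Later chunk: update the same model instead of retraining on everything:
model.partial_fit([[1.5, 1.5], [-1.5, -1.5]], [1, 0])
```

Batch learners like SVM or C5.0 lack such a `partial_fit`-style update, which is why they force the combine-and-retrain approach described above.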

R - How to get one "summary" prediction map instead of 5 when using 5-fold cross-validation in maxent model?

I hope I have come to the right forum. I'm an ecologist making species distribution models using the maxent (version 3.3.3, http://www.cs.princeton.edu/~schapire/maxent/) function in R, through the dismo package. I have used the argument "replicates = 5", which tells maxent to do a 5-fold cross-validation. When running maxent from the maxent.jar file directly (the maxent software), an html file with statistics is produced, including the prediction maps. In R, an html file is also made, but the prediction maps have to be extracted afterwards, using the "predict" function in the dismo package in R. When I do this, I get 5 maps, due to the 5-fold cross-validation setting. However (and this is the problem), I want only one output map, one "summary" prediction map. I assume this is possible, although I don't know how maxent computes it. The maxent tutorial (see link above) says that:
"...you may want to avoid eating up disk space by turning off the “write output grids” option, which will suppress writing of output grids for the replicate runs, so that you only get the summary statistics grids (avg, stderr etc.)."
A list of arguments that can be put into R is found in this forum https://groups.google.com/forum/#!topic/maxent/yRBlvZ1_9rQ.
I have tried to use the argument "outputgrids=FALSE" both in the maxent function itself, and in the predict function, but it doesn't work. I still get 5 maps, even though I don't get any errors in R.
So my question is: How do I get one "summary" prediction map instead of the five prediction maps that results from the cross-validation?
I hope someone can help me with this, I am really stuck and haven't found any answers anywhere on the internet. Not even a discussion about this. Hope my question is clear. This is the R-script that I use:
model1<-maxent(x=predvars, p=presence_points, a=target_group_absence, path="//home//...//model1", args=c("replicates=5", "outputgrids=FALSE"))
model1map<-predict(model1, predvars, filename="//home//...//model1map.tif", outputgrids=FALSE)
Best regards,
Kristin
Sorry to be the bearer of bad news, but based on the source code, it looks like Dismo's predict function does not have the ability to generate a summary map.
Nitty-gritty details for those who care: When you call maxent with replicates set to something greater than 1, the maxent function returns a MaxEntReplicates object, rather than a normal MaxEnt object. When predict receives a MaxEntReplicates object, it just iterates through all of the models that it contains and calls predict on them individually.
So, what next? Fortunately, all is not lost! The reason that Dismo doesn't have this functionality is that for most kinds of model-building, there isn't actually a valid way to average parameters across your cross-validation models. I don't want to go so far as to say that that's definitely the case for MaxEnt specifically, but I suspect it is. As such, cross-validation is usually used more as a way of checking that your model building methodology works for your data than as a way of building your model directly (see this question for further discussion of that point). After verifying via cross-validation that models built using a given procedure seem to be accurate for the phenomenon you're modelling, it's customary to build a final model using all of your data. In theory this new model should only be better than models trained on a subset of your data.
So basically, assuming your cross-validated models look reasonable, you can run MaxEnt again with only one replicate. Your final result will be a model accuracy estimate based on the cross-validation and a map based on the second run with all of your data lumped together. Depending on what exactly your question is, there might be other useful summary statistics from the cross-validation that you want to use, but those are all things you've already seen in the html output.
I may have found this a couple of years later, but you could do something like this:
xm1 <- maxent(predictors, pres_train1) # maxent model for the first partition
xm2 <- maxent(predictors, pres_train2) # maxent model for the second partition
px1 <- predict(predictors, xm1, ext=ext, progress='') # prediction #1
px2 <- predict(predictors, xm2, ext=ext, progress='') # prediction #2
models <- stack(px1, px2) # create a stack of predictions from all the models
final_map <- mean(models) # take the cell-wise mean of the predictions
plot(final_map) # plot the averaged map
xm1, xm2, ... would be the maxent models for each partition in the cross-validation, and px1, px2, ... would be the predicted maps.
