How to perform parallel growth mixture modeling (GMM) using R?

I usually use the "lcmm" package in R to perform growth mixture modeling (GMM).
However, I want to create a classification, or groups, based on two variables rather than one. I have read that this kind of modeling is called "parallel growth mixture modeling". Does anyone know how to perform such an analysis in R?
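For what it's worth, lcmm itself ships a multivariate extension, multlcmm(), which fits latent-class mixed models for several longitudinal outcomes at once; as far as I understand, this is one way to implement parallel GMM. A minimal sketch, assuming two repeated outcomes Y1 and Y2 measured over time for each id (all variable and data names are placeholders):

library(lcmm)

# one-class model; its estimates are used as starting values below
m1 <- multlcmm(Y1 + Y2 ~ time, random = ~ time,
               subject = "id", data = df)

# two-class model; mixture = ~ time lets each latent class follow its own trajectory
m2 <- multlcmm(Y1 + Y2 ~ time, random = ~ time, mixture = ~ time,
               subject = "id", ng = 2, B = m1, data = df)

summarytable(m1, m2)   # compare BIC and class sizes
postprob(m2)           # posterior class membership

As usual with mixture models, compare solutions for several values of ng and check that the classes are well separated before interpreting them.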

Related

Call R script from AnyLogic and return results

I need to call a forecast model built in R from within AnyLogic and return the resulting outputs to AnyLogic. It is a specific time series model that I have built in R, and just copying the coefficients into AnyLogic is not efficient. I have seen a couple of older posts on similar questions, but I am not sure I can follow them. Any advice would be very appreciated.
I have a regression forecast model that uses predictors to produce a forecast along with prediction intervals. These outputs need to be updated for different values of the predictors and then used in AnyLogic.
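One common pattern (not specific to AnyLogic) is to fit and save the model once in R, then have AnyLogic launch Rscript with the current predictor values and read the results back from a file. A minimal sketch of the R side, assuming an lm-style model saved as forecast_model.rds and two numeric predictors (all names are placeholders):

# forecast.R -- invoked as:  Rscript forecast.R 1.2 3.4
args <- as.numeric(commandArgs(trailingOnly = TRUE))

fit <- readRDS("forecast_model.rds")            # model fitted and saved beforehand
newdata <- data.frame(x1 = args[1], x2 = args[2])

# point forecast plus prediction interval (fit, lwr, upr for an lm-type model)
pred <- predict(fit, newdata = newdata, interval = "prediction")
write.csv(as.data.frame(pred), "forecast_out.csv", row.names = FALSE)

Since AnyLogic is Java-based, it can launch the script with something like Runtime.getRuntime().exec(...) and parse forecast_out.csv when it finishes; if per-call process start-up becomes a bottleneck, a server-style bridge such as Rserve avoids it.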

Ensemble machine learning model with NNETAR and BRNN

I used the forecast package to forecast the daily time series of a variable Y from its lagged values and the time series of an external predictor X. The nnetar model (a NARX model) was the best in terms of overall performance. However, despite various attempts at parameter tuning, I was not able to predict the peaks of the series well.
I then extracted the peak values of Y (above a threshold, so of course this is not a regular time series anymore) and the corresponding X values, and tried to fit a regression model (note: not an autoregression model) using various methods in the caret package. I found that predicting the peak values with the brnn model (Bayesian regularized neural networks), using only the X values, works better than nnetar, which uses both the lagged values and X.
Now my question is: how do I go from here to create an ensemble of these two models? That is, whenever the prediction from the brnn regression model (or any other regression model) is better, I want it to replace the nnetar prediction and move forward; I am mostly concerned about the peaks. Is this a commonly used approach?
Instead of trying to pick the one model that is superior at any given time, it is typically better to average the models, so as to include as many individual views as possible.
In the experiments I have been involved in, where we tried to pick the one model that would outperform based on historical performance, a simple average typically turned out to be as good or better. This is in line with the usual results on forecast combination: https://otexts.com/fpp2/combinations.html
So, before you go more advanced by picking a specific model based on previous performance, or by using a weighted average, consider a simple average of the two models.
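A minimal sketch of that simple average, assuming training and test data frames train and test with columns y and x (all names are placeholders; "brnn" here is the Bayesian-regularized network method available through caret):

library(forecast)
library(caret)

# NARX-style model on the training series, with X as an external regressor
fit_nn <- nnetar(train$y, xreg = train$x)
fc_nn  <- forecast(fit_nn, xreg = test$x)$mean

# regression-only model on the same training data
fit_brnn <- train(y ~ x, data = train, method = "brnn")
fc_brnn  <- predict(fit_brnn, newdata = test)

# simple, unweighted average of the two forecasts
fc_avg <- (as.numeric(fc_nn) + fc_brnn) / 2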
If you want to continue with some form of selection/weighted averaging, have a look at the FFORMA package in R: https://github.com/pmontman/fforma
I have not tried that specific package (yet), but I have seen promising results in my tests with the original m4metalearning package.

Decision boundary for SVM with caret (R)

I have built an SVM-RBF model in R using caret. Is there a way to plot the decision boundary?
I know it is possible to do so with other R packages, but unfortunately I am forced to use caret because it is the only package I have found that lets me compute variable importance.
Alternatively, can you suggest a package that can plot decision boundaries AND also report variable importance?
Thank you very much
First of all, unlike some other methods, an SVM does not produce feature importances by itself. In your case, the importance score caret reports is calculated independently of the method: https://topepo.github.io/caret/variable-importance.html#model-independent-metrics
Second, the decision boundary (or hyperplane) you see in most textbook examples comes from a toy problem with only two or three features. With more than three features, visualizing this hyperplane is not trivial.
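That said, if you restrict yourself to two predictors you can approximate the boundary by predicting over a grid. A rough sketch, assuming a caret model fit trained on predictors x1 and x2 of a data frame df with a two-level factor class (all names are placeholders):

# predict the class over a fine grid spanning the two predictors
grid <- expand.grid(
  x1 = seq(min(df$x1), max(df$x1), length.out = 200),
  x2 = seq(min(df$x2), max(df$x2), length.out = 200)
)
grid$pred <- predict(fit, newdata = grid)

# points coloured by true class, boundary drawn as a contour of the prediction
plot(df$x1, df$x2, col = df$class, pch = 19)
z <- matrix(as.numeric(grid$pred), nrow = 200)   # expand.grid varies x1 fastest
contour(unique(grid$x1), unique(grid$x2), z,
        levels = 1.5,                            # midpoint between classes 1 and 2
        add = TRUE, drawlabels = FALSE)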

Can MXNET fit a regression LSTM model in R?

I would like to fit an LSTM model using MXNET in R for the purpose of predicting a continuous response (i.e., regression) given several continuous predictors. However, the mx.lstm() function seems to be geared toward NLP as it requires arguments which don't seem applicable to a regression problem (such as those related to embedding).
Is MXNET capable of this sort of modeling and, if not, what is an example of an appropriate tool (preferably in R)? Are there any tutorials relevant to the problem I've described?
LSTM is used for working with temporal data: text, speech, time series. If you want to predict a continuous response, then I assume you want to do something similar to time series analysis.
If my assumption is correct, then please take a look here. It gives quite a good example of how to use MXNet with R for time series on the CPU. The GPU version is also available here.
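Whichever backend you settle on, a backend-agnostic step is shaping the two series into (samples, timesteps, features) arrays, which is what any LSTM implementation expects. A small sketch in base R, assuming numeric vectors y (response) and x (external predictor):

# build supervised-learning arrays: X is (samples, timesteps, 2 features),
# Y is the value of y one step after each window
make_windows <- function(y, x, timesteps) {
  n <- length(y) - timesteps
  X <- array(NA_real_, dim = c(n, timesteps, 2))
  Y <- numeric(n)
  for (i in seq_len(n)) {
    X[i, , 1] <- y[i:(i + timesteps - 1)]
    X[i, , 2] <- x[i:(i + timesteps - 1)]
    Y[i]      <- y[i + timesteps]
  }
  list(X = X, Y = Y)
}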

How to perform semi-supervised k-means clustering

I am new to R. I am trying to perform semi-supervised k-means clustering. I plan to use 2/3 of my data as a training set and 1/3 as a test set. My objective is to train a model on the known clusters and then propagate the trained model to the test set; the propagated result will be compared with the prior clusters, so that I can check the prediction accuracy of k-means clustering. Is there a way to do semi-supervised k-means clustering in R? Any package suggestion is welcome.
Thank you,
regards
Use kmeans(). It comes with the stats package, which ships with every R installation. You can read how to use a function by putting a ? before the function name, e.g. ?kmeans.
Search online if you are still unsure how to use the function - there are plenty of guides and toy examples.
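A minimal sketch of the workflow from the question, assuming a data frame df whose last column, label, holds the known clusters (names and centers = 3 are placeholders): cluster the training split with kmeans(), assign each test row to its nearest centroid, and cross-tabulate against the known labels.

set.seed(1)                                        # reproducible split
idx   <- sample(nrow(df), size = floor(2/3 * nrow(df)))
train <- df[idx, ]
test  <- df[-idx, ]

# cluster the training features (drop the label column)
km <- kmeans(train[, -ncol(train)], centers = 3)

# "propagate": assign each test row to its nearest training centroid
nearest <- apply(test[, -ncol(test)], 1, function(p) {
  which.min(colSums((t(km$centers) - p)^2))
})

# compare propagated cluster numbers with the known labels
table(predicted = nearest, actual = test$label)

Keep in mind that k-means numbers its clusters arbitrarily, so you may need to match cluster numbers to labels (e.g. by majority vote) before reading an accuracy off the table.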