Using an LSTM detector in a GAN to improve deepfakes

I read a paper about detecting deepfaked video with an LSTM. But deepfake models are trained with a GAN, so what about using the LSTM detector as the discriminator of the GAN? Then it should become possible to train a generator that deceives the LSTM detector.
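For concreteness, here is a minimal sketch of that idea, assuming PyTorch and assuming the video has already been reduced to a sequence of per-frame feature vectors; all dimensions, module names and the dummy data below are illustrative assumptions, not taken from the paper in the question.

```python
# Sketch: plug an LSTM sequence classifier in as the GAN discriminator (PyTorch assumed).
import torch
import torch.nn as nn

SEQ_LEN, FEAT_DIM, NOISE_DIM = 16, 128, 64  # frames per clip, per-frame features, latent size

class Generator(nn.Module):
    """Maps a noise sequence to a sequence of fake frame features."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(NOISE_DIM, 256, batch_first=True)
        self.out = nn.Linear(256, FEAT_DIM)

    def forward(self, z):                      # z: (batch, SEQ_LEN, NOISE_DIM)
        h, _ = self.rnn(z)
        return torch.tanh(self.out(h))         # (batch, SEQ_LEN, FEAT_DIM)

class LSTMDiscriminator(nn.Module):
    """The 'LSTM detector': classifies a frame-feature sequence as real (1) or fake (0)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(FEAT_DIM, 128, batch_first=True)
        self.out = nn.Linear(128, 1)

    def forward(self, x):                      # x: (batch, SEQ_LEN, FEAT_DIM)
        _, (h, _) = self.rnn(x)
        return self.out(h[-1])                 # one logit per clip

G, D = Generator(), LSTMDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_clips):                    # real_clips: (batch, SEQ_LEN, FEAT_DIM)
    b = real_clips.size(0)
    z = torch.randn(b, SEQ_LEN, NOISE_DIM)
    fake_clips = G(z)

    # 1) Update the LSTM discriminator: real clips -> 1, generated clips -> 0.
    opt_d.zero_grad()
    loss_d = bce(D(real_clips), torch.ones(b, 1)) + \
             bce(D(fake_clips.detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Update the generator: try to make the LSTM detector say "real".
    opt_g.zero_grad()
    loss_g = bce(D(fake_clips), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Dummy usage with random "real" clips, just to show the loop runs.
print(train_step(torch.randn(8, SEQ_LEN, FEAT_DIM)))
```

This is otherwise standard GAN training; the only change is that the discriminator is a sequence model like the one in the detection paper rather than a frame-level classifier.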

Related

How to train and fit an LSTM model on an encrypted dataset

I am a beginner in deep learning, and I am trying to do a project on detecting encrypted malicious traffic using a Long Short-Term Memory (LSTM) network. I have two datasets, one benign and one malicious. Which dependencies do I need to import, and how do I train and fit the model on the dataset? Do I need to combine the two datasets and train the model over the combined set, since the LSTM extracts the features automatically? Any help is deeply appreciated.
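A minimal sketch of one way to set this up, assuming Keras/TensorFlow and assuming each traffic flow has already been turned into a fixed-length sequence of numeric features; the shapes, the random stand-in data and the hyperparameters are placeholders:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN, N_FEATURES = 50, 20          # packets per flow, features per packet (assumed)

# Stand-ins for the two datasets: in practice, load and preprocess your own files.
benign = np.random.rand(1000, SEQ_LEN, N_FEATURES).astype("float32")
malicious = np.random.rand(1000, SEQ_LEN, N_FEATURES).astype("float32")

# Combine the two sets and create labels: 0 = benign, 1 = malicious.
X = np.concatenate([benign, malicious], axis=0)
y = np.concatenate([np.zeros(len(benign)), np.ones(len(malicious))])

model = Sequential([
    LSTM(64, input_shape=(SEQ_LEN, N_FEATURES)),  # sequence encoder
    Dense(1, activation="sigmoid"),               # binary benign/malicious output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# validation_split holds out part of the shuffled data to monitor overfitting.
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, shuffle=True)
```

So yes, the usual pattern is to concatenate the two sets, attach a 0/1 label per flow, shuffle, and train on the labelled sequences; the LSTM does not remove the need for labels, it only removes some manual feature engineering across time steps.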

How to create an updatable Core ML model?

I tried to build a pre-trained Core ML model with the help of the Create ML framework, but the model created is not updatable. Is there a way to create a pre-trained Core ML model which can be updated on the device itself (a feature newly introduced in Core ML 3)?
Not directly with Create ML; you'll have to use coremltools to make the model updatable. See here for examples: https://github.com/apple/coremltools/tree/main/examples
However, this will only work for neural network and k-nearest neighbors models, and Create ML does not actually produce these kinds of models (at the moment).
For example, an image classifier trained with Create ML is a GLM on top of a fixed neural network. You cannot make GLM models updatable at this point.
So in short, no, you can't make models trained with Create ML updatable.
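If you do have a neural-network .mlmodel (for example one converted from Keras), the coremltools route looks roughly like this; it follows the updatable-model examples linked above, and the file name, layer names and output name are placeholders for your own model.

```python
import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

spec = coremltools.utils.load_spec("Classifier.mlmodel")       # placeholder model file
builder = NeuralNetworkBuilder(spec=spec)

# Mark the layers whose weights may be re-trained on the device.
builder.make_updatable(["dense_1", "dense_2"])                 # placeholder layer names

# On-device training needs a loss function and an optimizer baked into the model.
builder.set_categorical_cross_entropy_loss(name="loss", input="labelProbability")
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
builder.set_epochs(10)

coremltools.utils.save_spec(builder.spec, "UpdatableClassifier.mlmodel")
```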

R - Building an autoencoder model in caret

I want to build an autoencoder model with the Caret package with the following features:
1) Build an unsupervised neural network model using deep learning autoencoders
2) Using the autoencoder model in (1) as a pre-training input for a supervised model.
Online examples of using autoencoders in caret are few and far between, and offer little insight into practical use cases.
I'm under data privacy and resource constraints, so I'm unable to use H2O or Keras for neural networks.
Sample data for the model can be found at:
https://www.kaggle.com/nodarokroshiashvili/credit-card-fraud/data
An example of this in H2o is at this link:
https://shiring.github.io/machine_learning/2017/05/01/fraud
Any help or pointers in the right direction in this regard will be appreciated.
EDIT:
Thanks to Lauren and Erin, staff at H2O, for commenting that data privacy should not be a concern, because H2O creates a cluster that is located on premise and not in an 'H2O cloud'.

R programming: storing a model after training

How can I save caret-trained models so that they can be used later for building ensemble models in RStudio?

Machine learning: multi-label text classification using R

I am building a machine learning text classification model in R. I want to classify a sentence into more than one label if it falls into multiple categories.
e.g.: "The phone screen resolution is awesome and the battery life as well" - currently I am able to classify the sentence into either the Battery or the Phone feature category, but I want it to be classified into both.
The output for the sentence above should therefore list both labels, e.g. Battery and Phone feature.
It would be great if anyone could help me with ideas or methods to get the above result.
I would suggest training a binary classifier for each label.
With some algorithms - like logistic regression - all you can do is train every binary classifier independently.
There are also so-called multilabel algorithms - they train all the binary classifiers at the same time and extract the same features from the data for every classifier. An example is a neural network with a sigmoid last layer. See the "support multilabel" section in http://scikit-learn.org/stable/modules/multiclass.html for a list of multilabel algorithms.
Of course, a multilabel algorithm will not necessarily outperform logistic regression; you have to try and see what works best for your problem.
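The question is about R, but since the linked reference is scikit-learn's, here is a small Python sketch of the binary-relevance idea (one independent logistic regression per label on shared TF-IDF features); the sentences and label names are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "The phone screen resolution is awesome and the battery life as well",
    "Battery drains too fast",
    "Gorgeous display and sharp resolution",
]
labels = [["Phone feature", "Battery"], ["Battery"], ["Phone feature"]]

# Turn the label sets into a binary indicator matrix, one column per label.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# OneVsRestClassifier trains one binary logistic regression per label
# (binary relevance) on top of shared TF-IDF features.
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
clf.fit(texts, Y)

pred = clf.predict(["Great battery and a stunning screen"])
print(mlb.inverse_transform(pred))   # e.g. [('Battery', 'Phone feature')]
```

The same binary-relevance setup can be reproduced in R by training one caret binary classifier per label and combining their predictions.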
