How to create a stateful SimpleRNN for TensorFlow Lite

I'm looking for a code example of a stateful RNN model in TensorFlow Lite.
I created a model in Keras with a stateful SimpleRNN layer.
Then I converted it to TensorFlow Lite (tflite).
How can I manage the state when invoking a TensorFlow Lite model?
According to this link it should be possible, but I did not find any example.
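One possible approach, sketched below, is to expose the RNN state as an explicit model input and output and carry it across invoke() calls yourself. This is a rough, untested sketch: the layer sizes are made up, and the input/output ordering reported by get_input_details() / get_output_details() should be verified against the actual model.
import numpy as np
import tensorflow as tf

UNITS, FEATURES = 16, 8   # made-up sizes for illustration

# Build a functional model whose hidden state is an explicit input and output.
x_in = tf.keras.Input(shape=(1, FEATURES), batch_size=1)
state_in = tf.keras.Input(shape=(UNITS,), batch_size=1)
rnn_out, state_out = tf.keras.layers.SimpleRNN(UNITS, return_state=True)(
    x_in, initial_state=state_in)
model = tf.keras.Model([x_in, state_in], [rnn_out, state_out])

tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inputs = interpreter.get_input_details()    # check which index is data vs. state
outputs = interpreter.get_output_details()

state = np.zeros((1, UNITS), dtype=np.float32)   # initial state
for step in range(10):                           # one timestep per invocation
    x = np.random.rand(1, 1, FEATURES).astype(np.float32)
    interpreter.set_tensor(inputs[0]['index'], x)
    interpreter.set_tensor(inputs[1]['index'], state)
    interpreter.invoke()
    y = interpreter.get_tensor(outputs[0]['index'])
    state = interpreter.get_tensor(outputs[1]['index'])  # feed back into the next call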

Related

Form Recognizer - model for each environment?

In our test environment we created a custom Form Recognizer model. Is there a way to reuse this model in the PROD environment? The PROD environment is under a different subscription.
I cannot find a way to somehow "export" the model and move it to the other environment. Do I need to create a new model from scratch?
You can use the Copy Model API to copy a model between regions and subscriptions. See the links below for more details:
How to Copy API - https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/disaster-recovery#copy-api-overview
Copy Model API reference - https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/CopyCustomFormModel
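If you end up scripting this, the copy flow with the azure-ai-formrecognizer Python SDK looks roughly like the sketch below (v3.x assumed; all endpoints, keys, IDs and regions are placeholders):
from azure.ai.formrecognizer import FormTrainingClient
from azure.core.credentials import AzureKeyCredential

source = FormTrainingClient("https://<test-resource>.cognitiveservices.azure.com/",
                            AzureKeyCredential("<test-key>"))
target = FormTrainingClient("https://<prod-resource>.cognitiveservices.azure.com/",
                            AzureKeyCredential("<prod-key>"))

# 1. Ask the PROD resource to authorize receiving a copied model.
auth = target.get_copy_authorization(
    resource_id="<prod-resource-arm-id>",   # full Azure resource ID of the PROD resource
    resource_region="<prod-region>")        # e.g. "westeurope"

# 2. Tell the TEST resource to copy the custom model into PROD.
poller = source.begin_copy_model("<source-model-id>", target=auth)
copied = poller.result()
print(copied.model_id, copied.status)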

Keras Deploy for Tensorflow.js Usage

I need to be able to deploy a Keras model for TensorFlow.js prediction, but the Firebase docs only seem to support a TFLite object, which tf.js cannot accept. Tf.js appears to accept JSON files for loading (loadGraphModel() / loadLayersModel()), but not a Keras SavedModel (.pb + /assets + /variables).
How can I attain this goal?
Note for the TensorFlow.js portion: there are a lot of pointers to the tfjs_converter, but the closest API function offered to what I'm looking for is the loadFrozenModel() function, which requires both a .pb and a weights_manifest.json. It seems to me like I'd have to programmatically assemble this before sending it up to GCloud, as a Keras SavedModel doesn't contain both (mine contains .pb + /assets + /variables).
This seems tedious for a straightforward deployment feature, and I'd imagine my question only hits upon common usage of each tool.
What I'm looking for is a simple pathway from Keras => Firebase/GCloud => Tensorflow.js.
I understand your confusion, but you already have half of the work done.
If I understand correctly, your Keras model has the following files and folders:
saved_model.pb
/assets
/variables
This is enough to convert the Keras model to a TensorFlow.js model.
Use the converter in the following manner. Make sure you have the latest version of tensorflowjs. If you do not have the latest version, try creating a virtual environment and installing the latest tensorflowjs there, otherwise it may disrupt your TensorFlow version.
import tensorflowjs as tfjs
import tensorflow as tf

# Load the Keras SavedModel, then write out the TensorFlow.js artifacts.
model = tf.keras.models.load_model('path/to/keras/model')
tfjs.converters.save_keras_model(model, 'path/where/you/will/like/to/have/js/model/converted')
Once you have converted the model, you will get the following files for the js model:
model.json
something.bin
You will have to host those files on a web server and make them available to the loadLayersModel API, something like this:
const model = await tf.loadLayersModel('location/of/model.json');
That is it: you have converted the model from Keras to TensorFlow.js and uploaded it for use in js as well.
I hope my answer helps you.

onnx 1.2 models (Custom Vision) on HoloLens

This is a more specific continuation of my previous Post: Custom Vision on HoloLens
I'm still using the Unity Project from this blogpost: https://mtaulty.com/2018/03/29/third-experiment-with-image-classification-on-windows-ml-from-uwp-on-hololens-in-unity/
I had issues where my own exported models did not work with the code at some point. Now it is possible to export ONNX models of version 1.2, but the old code does not seem to be compatible with the new version.
The line var evalOutput = await this.learningModel.EvaluateAsync(this.inputData); in the MainScript throws: The binding is incomplete or does not match the input/output description. (Exception from HRESULT: 0x88900002)
Does someone know what I need to change so it works with the new version on HoloLens? Thanks in advance!
You can find a similar question here: Windows ML's OS requirement
In summary, you are right about needing to update the PC and the HoloLens, but the build number you need is 17763 to be on the production version of RS5.
You could also be hitting this issue: Cannot load model using WinML, where the binding isn't quite set up properly.
If you're still having issues, please post the SDK and OS version you're on, as well as the ONNX model version.

How do I create integration tests when leveraging Xamarin.Forms?

How do I create integration tests when leveraging Xamarin.Forms?
Specifically, I do NOT want to rely on UI automation to test integration between the components of a system (i.e. database using SQLite).
I want my integration tests to target the layer beneath the UI.
For this I would recommend xUnit (there are others as well), which can test directly against PCLs. The native projects should be fairly empty, and your ViewModels and Views should contain very little code, which means you can test directly against the Model layer and below.
Inject a mock ISQLite DB connection to test the code without the SQLite DB, or inject one that actually connects to a local SQLite DB; either way works.
xUnit Project
https://github.com/xunit/devices.xunit
You should download the packages from NuGet, though; it's easier. The tests can then also be run from VS, which is a nice addition.

flex/air how to load a class that is on another module

I'm trying to build my modular Flex app and have the following scenario:
Portal (which includes 2 modules:)
-Mod1 (.swf)
-Mod2 (.swf)
Also, I have Mod1-API (.swc).
The Mod1-API defines interfaces which are implemented in Mod1 (.swf).
Both the Mod1 and Mod2 swfs import the Mod1-API swc.
I'm trying to call the API method from Mod2. In Mod2 I have the interface, since it is shared via the Mod1-API project.
What I'm trying to achieve is loading the real implementation class in Mod2, via reflection, using the getDefinitionByName method, but it says it is not defined.
So, is there a way to achieve this?
I mean, how can Mod2 load a class that is in the Mod1 project, returning just the interface to Mod2 so it can call methods just like an ordinary API method?
It depends on where you are loading your class definitions. Flex uses both Security Domains and Application Domains to partition code which has been loaded.
If you want Module 2 to access code loaded via Module 1, they need to be loaded into both the same security and application domains.
This should give you a good start.

Resources