CustomVision Classification in Docker container: Cannot feed value of shape - azure-cognitive-services

I have created a classification model in CustomVision and exported it as a Dockerfile (Linux). The model works fine when tested from the CustomVision GUI, but when I call the running Docker container like
curl -X POST http://127.0.0.1/image -F imageData=@some_file_name.jpg
I always get an error like
"Error: Could not preprocess image for prediction. Cannot feed value of shape (1, 227, 227, 3) for Tensor 'Placeholder:0', which has shape '(?, 224, 224, 3)'"
This happens even when some_file_name.jpg is one of the files the model was trained on...
An observation: models we created in August '18 and exported to Dockerfiles work fine. When we modify those models now (e.g. remove a file from the training data) and rebuild, they fail as noted above. The zip file created when exporting the model is now nearly double the size it was in August. No configuration has been changed and the model is still built in the same data center.
Any tips/help is most appreciated.

In the app folder of the export there's a file predict.py. The error shows the exported graph now expects 224×224 input while the wrapper resizes images to 227×227, so change the line
network_input_size = 227
to
network_input_size = 224
I then rebuilt and reran my Docker container and it worked.

Related

Rendering a Quarto blog post trips an error when reading in a brms file object

First, I'll apologize for not having a fuller reproducible example, but I'm not entirely sure how to go about that given the various layers to the question/problem.
I'm moving a blog over from Blogdown to a new Quarto-based website. I have three saved brms object files that I'm trying to read into a code chunk in one of the posts. The code chunks work fine when I run them manually, but when I try to render the blog post I get the following error:
Quitting from lines 75-86 (tables-modelsummary-brms.qmd)
Error in stri_replace_all_charclass(str, "[\\u0020\\r\\n\\t]", " ", merge = TRUE) :
invalid UTF-8 byte sequence detected; try calling stri_enc_toutf8()
Calls: .main ... stri_trim -> stri_trim_both -> stri_replace_all_charclass
Execution halted
I've checked the primary data frame contained in the brms model object and all of the character vectors there are valid UTF-8. These model objects can be quite large, so it's possible I'm missing something buried deep within the object, but so far nothing is apparent.
I re-ran the models to make sure the object files weren't corrupted, and that the encoding issue wasn't somehow introduced the last time they were run, which would have been on a Windows machine and a different version of brms.
I've also moved the brms files around to different directories to see if it's a file-path issue. The same error comes up whether the files are in the same folder as the blog post's qmd file or in a parent directory I use for storing site data.
I've also migrated several other posts to the new Quarto site successfully, and some of them also contain R code, but it's all rendering without a problem.
Finally, I don't quite understand how to implement the alternative function, stri_enc_toutf8(), suggested in the error message either.
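For reference, a minimal sketch of how one might apply stri_enc_toutf8() to the data stored inside a brms fit. This assumes the fit was saved with saveRDS and that the offending strings live in the fit's $data slot; the file names are hypothetical:

library(stringi)
fit <- readRDS("model.rds")  # hypothetical file name
# identify character columns and check them for invalid UTF-8 byte sequences
chr_cols <- names(fit$data)[vapply(fit$data, is.character, logical(1))]
sapply(fit$data[chr_cols], function(x) all(stri_enc_isutf8(x)))
# force re-encode to UTF-8, substituting any invalid byte sequences
fit$data[chr_cols] <- lapply(fit$data[chr_cols], stri_enc_toutf8, validate = TRUE)
saveRDS(fit, "model-utf8.rds")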

Location of downloaded model when using Predictor.from_path in AllenNLP?

I'm following AllenNLP's example code for coreference resolution, which uses the method Predictor.from_path:
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/coref-spanbert-large-2021.03.10.tar.gz")
I'm running this in a Jupyter notebook in a conda environment. When I execute the example snippet, the model file is downloaded, which is ~1.5 GB in size.
I'm wondering where this file was downloaded to, so I can potentially clean it up if necessary? It's not in ~/conda/env/<my env>, ~/.jupyter, ~/.local/share/jupyter/, /tmp, or the working directory where the notebook is.
It goes into ~/.allennlp/cache/. Deleting files there is safe; they will simply be re-downloaded the next time Predictor.from_path is called.

Maxent in R Error: Bias grid cannot be used with SWD-format background

I get this error when running:
me_1 <- maxent(pred, occ, args='biasfile=bias')
Error: Bias grid cannot be used with SWD-format background
where 'pred' is a raster stack, 'occ' is a csv file of lat/lon, and 'bias' is a raster. The model runs fine without the 'args'. The error suggests that it thinks I'm using an SWD (samples-with-data) file, but I'm not. I've checked str() on each input. I've updated everything (R, RStudio, dismo, maxent.jar). Running on Windows 10. I see others have had this problem on the Maxent user group, but no solutions. Any help appreciated, thanks.
Update: it may be that dismo does not have biasfile implemented. However, a workaround can be found here:
https://groups.google.com/forum/#!msg/Maxent/kUZTkSiDxbU/i_-cDR4aN8MJ
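As a rough sketch of that kind of workaround: instead of passing a bias grid, draw the background sample yourself with probability proportional to the bias raster and hand it to maxent() via the a argument. The object names pred, occ and bias are from the question; the 10000-point sample size is an arbitrary choice:

library(dismo)
library(raster)
# draw background cells with probability proportional to the bias surface
v <- values(bias)
cells <- which(!is.na(v))
set.seed(1)
bg <- xyFromCell(bias, sample(cells, 10000, prob = v[cells], replace = TRUE))
# pass the biased background sample explicitly instead of using biasfile
me_1 <- maxent(pred, occ, a = bg)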

Jedox Integrator RScript Transform: Failed to retrieve data

Currently I'm working with Jedox and trying to use the RScript Transform component.
The installation of R itself on the server was a little tricky, but after several attempts it finally worked.
The info on this blog was helpful for the installation: jedoxtools.wordpress.com
The key challenge, though, was entering the correct directory paths in the 'Path' (C:\Program Files\R\R-3.4.1\bin\x64) and 'R_Home' (C:\Program Files\R\R-3.4.1) variables.
But now that the 'hard part' should be done, I simply can't get the transform component running.
Based on the example RScript in this presentation, every time I try simple scripts I get the following error message:
Failed to retrieve data from source [my RScript components name] : null
The script I run is as simple as this:
data <- my_datasource
Result <- data
There is data in the source and if I do the test locally in RStudio it works perfectly fine.
Anyone here with R experiences in Jedox?
A few attempts later I found the solution myself, and of course it's super easy once you know about it.
The example in the Jedox documentation shows a script which suggests the returned result set must be called 'result'.
In fact you can return any object; all you have to do is enter the name of the result set in the extra field above the script box.
A working pass-through script (input = output) is shown below.
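A minimal sketch, assuming the result-set field above the script box is set to 'Result' and 'my_datasource' is the name of the source assigned to the component (both names are placeholders for whatever you configure):

# RScript transform body; my_datasource is exposed by Jedox as the input data
data <- my_datasource
# 'Result' must match the name entered in the result-set field above the script box
Result <- data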

Running a windows executable within R using wine in ubuntu

I am trying to execute a Windows-only executable called groundfilter.exe (from FUSION) within RStudio on Ubuntu.
I am able to run groundfilter.exe from a terminal using wine as follows:
wine C:/FUSION/groundfilter.exe /gparam:0 /wparam:1 /tolerance:1 /iterations:10 test_Grnd.las 1 test.las
This executes fine and produces the file test_Grnd.las OK.
But when I try to do this from within RStudio using system(), it doesn't quite work and no output file is produced (unlike from the terminal). I do this:
command<-paste("wine C:/FUSION/groundfilter.exe",
"/gparam:0 /wparam:1 /tolerance:1 /iterations:10",
"/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test_GroundPts.las",
"1",
"/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test.las",sep=" ")
system(command)
The executable appears to be called OK in the RStudio console, but runs as if no file names were supplied. The output (truncated) is:
system(command)
GroundFilter v1.75 (FUSION v3.60) (Built on Oct 6 2016 08:45:14) DEBUG
--Robert J. McGaughey--USDA Forest Service--Pacific Northwest Research Station
Filters a point cloud to identify bare-earth points
Syntax: GroundFilter [switches] outputfile cellsize datafile1 datafile2 ...
outputfile Name for the output point data file (stored in LDA format)
This is the same output I get from the terminal if the file names are left off, so somehow my system call in R is not correct?
I think wine will not find paths like /home/martin/....
One possibility would be to put groundfilter.exe (and possibly dlls it needs) into the directory you want to work with, and set the R working directory to that directory using setwd().
The other possibility I see would be to give a path that wine understands, like Z:/home/martin/....
This is not an authoritative answer, just my thoughts, so please refer to the documentation for the real story.
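As an untested sketch of the second suggestion, one could translate the Linux paths onto wine's Z: drive before building the command. The to_wine() helper is hypothetical and assumes wine's default mapping of the filesystem root to Z::

# hypothetical helper: map an absolute Linux path onto wine's Z: drive
to_wine <- function(path) paste0("Z:", normalizePath(path))
command <- paste("wine C:/FUSION/groundfilter.exe",
                 "/gparam:0 /wparam:1 /tolerance:1 /iterations:10",
                 to_wine("/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test_GroundPts.las"),
                 "1",
                 to_wine("/home/martin/Documents/AUAV_Projects/test_FUSION/test_FUSION/test.las"))
system(command)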
