Azure AutoML seems to add extra input?

I'm using Azure Automated ML to do some proof-of-concept work. I'm trying to identify a person based on some parameters.
My dataset has 4 columns of floats and 1 column containing the name of the person. My goal is to identify the person based on the input of these 4 floats.
I have successfully trained some models on this data. The data transformation chart looks like this, which is what I would expect:
So it ignores one column (the "person" column, I assume) and uses the remaining 4 as input to a RandomForest classifier. All is well and good so far.
When I then go to deploy the model, I am asked to supply a new variable simply called "Column2". This variable seems to have a significant influence on the output.
When I make a request to the endpoint with two inputs whose only difference is the value of "Column2", I get two different probability vectors back:
{'PCA_0': -574.0043295463845, 'PCA_1': 3455.9091610620617, 'PCA_2': 2352.2555893520835, 'PCA_3': -6941.596091271862, 'Column2': '0'} = [0.24, 0.4, 0.06, 0.3]
{'PCA_0': -574.0043295463845, 'PCA_1': 3455.9091610620617, 'PCA_2': 2352.2555893520835, 'PCA_3': -6941.596091271862, 'Column2': '1'} = [0.26, 0.19, 0.54, 0.01]
Does anyone have an idea what I'm doing wrong here?

Here is the link to AutoML support for test datasets, with samples: https://github.com/Azure/automl-testdataset-preview
Testing a model takes as input a labeled test dataset (not used at all in the training and validation process) and outputs predictions and test metrics related to those predictions.
The predictions and test metrics are stored within the test run in Azure ML so users can download the predictions and see the test metrics in the UI (or through the SDK) at any time after testing the model.

Related

Using Predefined Splits in PCR function R PLS package

To ensure a good population representation, I have created custom validation sets from my training data. However, I am not sure how to pass these to the pcr function in R.
I have tried giving the segments argument a vector with each sample's fold index, similar to scikit-learn's predefined-split CV iterator in Python. This runs, but takes forever, so I feel I must be making an error somewhere:
pcr(y ~ X, scale = FALSE, data = tdata, validation = "CV", segments = test_fold)
where test_fold is a vector indicating which validation set each sample belongs to. For example, if the training data consists of 9 samples and I want to use the first three as the first validation set, and so on:
test_fold <- c(1, 1, 1, 2, 2, 2, 3, 3, 3)
This runs, but it is very slow, whereas regular "CV" runs in minutes. So far the results look okay, but I have over a thousand runs to do and it took an hour to get through one. If anybody knows how I can speed this up, I would be grateful.
The segments parameter needs to be a list of vectors, one per validation segment. Going again with 9 samples: if I want the first three samples in the first validation set, the next three in the second, and so on, it should be
test_vec <- list(c(1, 2, 3), c(4, 5, 6), c(7, 8, 9))
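Put together, a minimal runnable sketch (using the yarn dataset that ships with the pls package; your formula and data will differ):
library(pls)
data(yarn)  # 28 samples: NIR spectra (predictor matrix) and density
# one list element per validation segment: here, four blocks of 7 samples
seg <- list(1:7, 8:14, 15:21, 22:28)
fit <- pcr(density ~ NIR, data = yarn, scale = FALSE,
           validation = "CV", segments = seg)
RMSEP(fit)  # cross-validated error for each number of components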

Integration of Time series model of R in Tableau

I am trying to integrate a time-series model built in R with Tableau, and I am new to integration. Please help me resolve the error mentioned below. Here is my Tableau calculation for the R integration; the calculation is valid, but I get an error when it runs.
SCRIPT_REAL(
"library(forecast);
cln_count_ts <- ts(.arg1, frequency = 7);
arima.fit <- auto.arima(log10(cln_count_ts));
forecast_ts <- forecast(arima.fit, h = 10);",
SUM([Count]))
Error : Error in auto.arima(log10(cln_count_ts)) : No suitable ARIMA model found
When Tableau calls R, Python, or another tool, it does so as a "table calc". That means it sends the external system one or more vectors as arguments and expects a single vector in response.
Depending on your data and calculation, you may want to send all your data to R in a single call, passing a very large vector, or call it several times with different vectors - say forecasting each region separately. Or even call R multiple times with many vectors of size one (aka scalars).
So with table calcs, you have other decisions to make beyond just choosing the function to invoke. Chiefly, you have to decide how to partition your data for analysis. And in some cases, you also need to determine the order that the data appears in the vectors you send to R - say if the order implies a time series.
The Tableau terms for specifying how to divide and order data for table calculations are "partitioning and addressing". See the section on that topic in the online help. You can change those settings by using the "Edit Table Calc" menu item.
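For instance, here is a minimal sketch of the calculation from the question, reworked so that the R expression returns one fitted value per row of the partition (Tableau requires the returned vector to match the length of the inputs; log10 and the 10-step forecast are dropped here for simplicity):
// returns a vector the same length as .arg1, as Tableau requires
SCRIPT_REAL(
"library(forecast);
fit <- auto.arima(ts(.arg1, frequency = 7));
as.numeric(fitted(fit))",
SUM([Count]))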

Predictive modelling for a 3 dimensional dataframe

I have a dataset which contains all the quotes made by a company over the past 3 years. I want to create a predictive model using the library caret in R to predict whether a quote will be accepted or rejected.
The structure of the dataset is causing me some problems. It contains 45 variables; however, I have only included the two below, as they are the only variables relevant to this problem. An extract of the dataset is shown below.
contract.number item.id
0030586792 32X10AVC
0030586792 ZFBBDINING
0030587065 ZSTAIRCL
0030587065 EMS164
0030591125 YCLEANOFF
0030591125 ZSTEPSWC
contract.number <- c("0030586792","0030586792","0030587065","0030587065","0030591125","0030591125")
item.id <- c("32X10AVC","ZFBBDINING","ZSTAIRCL","EMS164","YCLEANOFF","ZSTEPSWC")
dataframe <- data.frame(contract.number,item.id)
Each unique contract.number corresponds to a single quote made. The item.id corresponds to the item that is being quoted for. Therefore, quote 0030586792 includes both items 32X10AVC and ZFBBDINING.
If I randomise the order of the dataset and model it in its current form, I am worried that a model would just learn which contract.numbers won and lost during training, and this would invalidate my testing, as in the real world this is not known prior to the prediction being made. I also have the additional issue of what to do if the model predicts that the same contract.number will win with some item.ids and lose with others.
My ideal solution would be to condense each contract.number into a single line with multiple item.ids per line, forming a 3-dimensional dataframe (one way to express such a grouping is sketched below). But I am not sure whether caret would then be able to model this. It is not realistic to split the item.ids into multiple columns, as some quotes have hundreds of item.ids. Any help would be much appreciated!
(Sorry if I haven't explained well!)
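For illustration only, a sketch of the grouping idea using the example dataframe built above: each contract.number collapses to one row, with its item.ids kept in a list-column (this assumes dplyr; whether caret can consume such a structure is a separate question):
library(dplyr)
# one row per quote; item.ids holds every item belonging to that contract
quotes <- dataframe %>%
  group_by(contract.number) %>%
  summarise(item.ids = list(item.id))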

Creating 'structured' missing data in a data table in R

I am doing research in a lab with a mentor who has developed a model that analyzes genetic data, which utilizes an ANOVA. I have simulated a dataset that I want to use in evaluating our model's ability to handle varying levels of missing data.
Our dataset consists of 15 species, with 4 individuals each, which we represent by naming the columns 'A' (x4), 'B' (x4), and so on. Each row represents a gene.
I'm trying to come up with code that removes 1% of the data at random, but such that each species retains at least 2 individuals with valid data; otherwise our model will just quit (since it's ANOVA-based).
I realize this makes the 'randomly' missing data not quite random, but we're trying different methods. It's important that the missing data is otherwise randomized. I'm hoping someone could help me set this up?
Here is a toy example that may help:
# check that every level of column `col` has more than `val` rows left
is_valid_df <- function(df, col, val) {
  all(table(df[[col]]) > val)
}

# randomly drop a fraction `perc` of rows; if any group in `col` is left
# with too few rows, print "resampling" and retry with a new random draw
filter_function <- function(df, perc, col, val) {
  n <- nrow(df)
  drop <- sample(seq_len(n), floor(n * perc))
  if (is_valid_df(df[-drop, ], col, val)) {
    df[-drop, ]
  } else {
    cat("resampling\n")
    filter_function(df, perc, col, val)  # return the retried result
  }
}

set.seed(20)
a <- filter_function(iris, 0.1, "Species", 44)
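The toy above drops whole rows; for the layout in the question (rows are genes, columns are individuals grouped into species), here is a hedged adaptation that blanks individual cells instead, skipping any cell whose removal would leave that species with fewer than 2 observed individuals in that gene's row (all names here are made up for illustration):
make_missing <- function(mat, species, perc = 0.01, min_valid = 2) {
  target <- floor(length(mat) * perc)  # number of cells to blank out
  removed <- 0
  while (removed < target) {
    i <- sample(nrow(mat), 1)  # random gene (row)
    j <- sample(ncol(mat), 1)  # random individual (column)
    if (is.na(mat[i, j])) next
    cols <- which(species == species[j])
    # only blank the cell if its species keeps >= min_valid individuals
    if (sum(!is.na(mat[i, cols])) > min_valid) {
      mat[i, j] <- NA
      removed <- removed + 1
    }
  }
  mat
}
set.seed(1)
m <- matrix(rnorm(100 * 60), nrow = 100)  # 100 genes x 60 individuals
sp <- rep(LETTERS[1:15], each = 4)        # one species label per column
m_na <- make_missing(m, sp)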

Accessing class values in R's poLCA

I am trying my hand at learning latent class analysis, while also learning R. I'm using the poLCA package, and am having a bit of trouble accessing the attributes. I can run the sample code just fine:
ds <- read.csv("http://www.math.smith.edu/r/data/help.csv")
ds <- within(ds, cesdcut <- ifelse(cesd > 20, 1, 0))
library(poLCA)
res2 <- poLCA(cbind(homeless = homeless + 1,
                    cesdcut = cesdcut + 1, satreat = satreat + 1,
                    linkstatus = linkstatus + 1) ~ 1,
              maxiter = 50000, nclass = 3,
              nrep = 10, data = ds)
but in order to make this more useful, I'd like to access the attributes within the objects created by the poLCA class as such:
attr(res2, 'Nobs')
attr(res2, 'maxiter')
but they both come up as NULL. I expect Nobs to be 453 (determined by the function) and maxiter to be 50000 (dictated by my input value).
I'm sure I'm just being naive, but I could use any help available. Thanks a lot!
Welcome to R. You've got the model-fitting syntax right, in that you can get a model out (I don't know how latent class analysis works, so I can't speak to the statistical validity of your result). However, you've mixed up the different ways in which R can store information pertaining to a model.
poLCA returns an object of class poLCA, which is a list containing the following elements:
(. . .)
Nobs: number of fully observed cases (less than or equal to N).
maxiter: maximum number of iterations through which the estimation algorithm was set to run.
Since it's a list, you can extract individual elements from your model object using the $ operator:
res2$Nobs # number of observations
res2$maxiter # maximum iterations
In some cases, there might be extractor functions to get this information without having to do low-level indexing. For example, many model-fitting functions will have a fitted method, which pulls out the vector of fitted values on the training data; and similarly residuals pulls out the vector of residuals. You should check whether there are such extractor functions provided by the poLCA package and use them if possible; that way, you're not making assumptions about the structure of the model object that might be broken in the future.
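For instance, with a base-R lm model (used here purely for illustration, since it ships with R):
fit <- lm(mpg ~ wt, data = mtcars)
head(fitted(fit))     # extractor: fitted values on the training data
head(residuals(fit))  # extractor: residuals
coef(fit)             # extractor: coefficients
fit$coefficients      # low-level list indexing: works, but more fragile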
This is distinct from getting the attributes of an object, which is what you use attr for. Attributes in R are what you might call metadata: they contain R-specific information about an object itself, rather than information about whatever it is the object relates to. Examples of common attributes include class (the class of an object), dim (the dimensions of an array or matrix), names (names of individual elements of a vector/list/array), and so on.
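A quick illustration with basic objects:
x <- matrix(1:6, nrow = 2)
attr(x, "dim")   # c(2, 3): the dim attribute stores the shape
attributes(x)    # lists all attributes (here, just dim)
v <- c(a = 1, b = 2)
attr(v, "names") # "a" "b": names are stored as an attribute too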
