Extract data points from a MATLAB figure file (plot)

I have imported a shapefile and saved it as a MATLAB figure (.fig) file. I need to extract the data points along the boundary of the figure as (x, y) coordinates. Can someone help me with this?

It depends on your figure (2D image, 3D image, chart, ...).
You can read a .fig file with this command:
>> figFile = load('1.fig','-mat')
figFile =
hgS_070000: [1x1 struct]
After that, you can access all the data in this structure using dot notation:
>> figFile.hgS_070000
ans =
type: 'figure'
handle: 1
properties: [1x1 struct]
children: [1x1 struct]
special: []
>> figFile.hgS_070000.children
ans =
type: 'axes'
handle: 176.08251953125
properties: [1x1 struct]
children: [1x1 struct]
special: [4x1 double]
>> figFile.hgS_070000.children.children
ans =
type: 'graph2d.lineseries'
handle: 177.0830078125
properties: [1x1 struct]
children: []
special: []
>> figFile.hgS_070000.children.children.properties
ans =
Color: [0 0 1]
XData: [1 2 3 4 5 6 7 8 9]
YData: [2 1 5 6 4 8 8 4 8]
ApplicationData: [1x1 struct]
.
.
.
The plotted data can then be extracted like this:
>> Y = figFile.hgS_070000.children.children.properties.YData
>> X = figFile.hgS_070000.children.children.properties.XData


Multiplication of linear forms not allowed error

I am trying to solve a facility location problem. This is my code:
set S;
param prod{i in S};
param distri{i in S};
param fixed{i in S};
param cap{i in S};
param demand{i in S};
var x{i in S, j in S}, >= 0;
var y{i in S}, binary;
minimize obj :
sum{i in S} fixed[i]*y[i] +
sum{i in S, j in S} x[i,j]*(prod[i] + distri[i])*y[i];
s.t. c1{i in S}:
sum{j in S} x[i,j]*y[i] <= cap[i];
s.t. c2{i in S}:
sum{j in S} x[j,i]*y[j] = demand[i];
display S;
solve;
printf '\n Solution: \nMinimum Cost = %.2f\n', obj;
display x;
display y;
data;
set S := 0 1 2 3;
param prod :=
0 20
1 30
2 40
3 50;
param distri :=
0 60
1 70
2 80
3 90;
param fixed :=
0 10
1 15
2 10
3 15;
param cap :=
0 100
1 110
2 120
3 130;
param demand :=
0 120
1 60
2 70
3 100;
end;
When I run this .mod file, I get the following error:
example.mod:13: multiplication of linear forms not allowed
Context: S } x [ i , j ] * ( prod [ i ] + distri [ i ] ) * y [ i ] ;
MathProg model processing error
Here x is the fractional demand provided by facility 'i' for client 'j'.
I removed y[i] from the line mentioned and the error went away, but then I get the same multiplication error in the c1 constraint.
What is the correct approach? Thank you.
GLPK is restricted to linear problems.
Multiplying a parameter by a variable is allowed, but a product of two variables is non-linear. You either have to rewrite the model or use a non-linear solver.
Linearization of products involving a binary variable is discussed in this related answer.
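For reference, here is a sketch of the standard linearization (not taken from the linked answer): introduce a new non-negative variable z[i,j] to stand for the product x[i,j]*y[i], and assume x[i,j] has a known upper bound U (whether cap[i] is a valid choice of U here is an assumption about your model). Then add the linear constraints

$$ z_{ij} \le U\, y_i, \qquad z_{ij} \le x_{ij}, \qquad z_{ij} \ge x_{ij} - U\,(1 - y_i), \qquad z_{ij} \ge 0 $$

When y_i = 0 these constraints force z_{ij} = 0, and when y_i = 1 they force z_{ij} = x_{ij}, so z[i,j] can replace the product in the objective and in the constraints.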

Why does R.predict.svm return a list of the wrong size?

I am trying to use the R type provider to fit and predict with a Support Vector Machine model. I was able to fit the model, but when I try to predict, the returned vector has the same length as the training vector, which it should not.
I tried the equivalent code directly in R and the returned list has the correct length.
Why is this happening?
Here is an example:
open System
open RDotNet
open RProvider
open RProvider.stats
open RProvider.e1071

// Random number generator
let rng = Random()
let rand () = rng.NextDouble()

// Generate fake X1 and X2
let X1s = [ for i in 0 .. 9 -> 10. * rand () ] // length = 10
let X2s = [ for i in 0 .. 9 -> 5. * rand () ]  // length = 10
let Z1s = [ for i in 0 .. 5 -> 10. * rand () ] // length = 6
let Z2s = [ for i in 0 .. 5 -> 5. * rand () ]  // length = 6

// Build Ys
let Ys = [0;1;0;1;0;1;0;1;0;1]

let XMat =
    ["X1", box X1s; "X2", box X2s]
    |> namedParams
    |> R.cbind

let ZMat =
    ["Z1", box Z1s; "Z2", box Z2s]
    |> namedParams
    |> R.cbind

let svm_model =
    ["x", box XMat; "y", box Ys; "type", box "C"; "gamma", box 1.0]
    |> namedParams
    |> R.svm

let svm_predict = R.predict(svm_model, ZMat)

let res =
    if svm_predict.Type = RDotNet.Internals.SymbolicExpressionType.IntegerVector then
        svm_predict.AsInteger()
        |> List.ofSeq
    else failwithf "Expecting a Numeric but got a %A" svm_predict.Type

printfn "The predicted values are: %A" res
// The predicted values are: [1; 2; 1; 2; 1; 2; 1; 1; 1; 2]
And here is the original R code:
library(stats)
library(e1071)
# Random number generator
x1 <- 10 * rnorm(10)
x2 <- 5 * rnorm(10)
x = cbind(x1, x2)
z1 <- 10 * rnorm(5)
z2 <- 5 * rnorm(5)
z = cbind(z1, z2)
zs <- c(0,1,0,1,0,1,0,1,0,1)
svm_fit = svm(x=x,y=zs,type="C",gamma=1.0)
svm_pred = predict(svm_fit, z)
print(svm_pred)
1 2 3 4 5
1 0 1 1 1
Levels: 0 1
I suspect the issue is in how parameters are passed to the R.predict function. I'm not an expert on SVMs, so I'm not sure what result this should give, but when I call it as follows, I get results similar to your R version:
let svm_predict =
    namedParams ["object", box svm_model; "newdata", box ZMat]
    |> R.predict
I think what's going on is that the R type provider infers some information about the parameter names of the predict function, but is not able to figure out exactly what the second parameter is, and so rather than providing it as newdata, it provides it as something else.

Standard ML: Backtracking Confusion

I'm confused by an example Harper gives in his Intro to SML on p. 110. He's writing a function to make a certain amount of change from a list of coin values, backtracking when needed.
For example, if I run change [5, 2] 16, I get [5, 5, 2, 2, 2] (the algorithm should be greedy when possible).
exception Change
fun change _ 0 = nil
  | change nil _ = raise Change
  | change (coin::coins) amt =
      if coin > amt then
        change coins amt
      else
        (coin :: change (coin::coins) (amt - coin))
        handle Change => change coins amt
A few questions:
1) I'm a bit confused about how this algorithm implements backtracking. It looks like when change is called with an empty list of coin values as its first argument, it raises the Change exception.
But the Change exception handler calls change coins amt. How is this "undoing the most recent greedy decision"?
2) Why is the handler placed within the else clause? I would have thought it'd be totally separate...
Thanks for the help,
bclayman
Here's an execution trace for the call change [5,2] 16. The parts to the left
of handle represent what the function has computed so far, while the part
on the right is the state to continue with when backtracking is requested via a Change exception.
> change [5, 2] 16
> 5 :: change [5, 2] (16 - 5) handle Change: change [2] 16
> 5 :: change [5, 2] 11 handle Change: change [2] 16
> 5 :: 5 :: change [5, 2] (11 - 5) handle Change: 5 :: change [2] 11
> 5 :: 5 :: change [5, 2] 6 handle Change: 5 :: change [2] 11
> 5 :: 5 :: 5 :: change [5, 2] (6 - 5) handle Change: 5 :: 5 :: change [2] 6
> 5 :: 5 :: 5 :: change [5, 2] 1 handle Change: 5 :: 5 :: change [2] 6
> 5 :: 5 :: 5 :: change [2] 1
> 5 :: 5 :: 5 :: change nil 1
> raise Change => 5 :: 5 :: change [2] 6
> 5 :: 5 :: 2 :: change [2] (6 - 2) handle Change
> 5 :: 5 :: 2 :: change [2] 4 handle Change
> 5 :: 5 :: 2 :: 2 :: change [2] (4 - 2) handle Change
> 5 :: 5 :: 2 :: 2 :: change [2] 2 handle Change
> 5 :: 5 :: 2 :: 2 :: 2 :: change [2] (2 - 2) handle Change
> 5 :: 5 :: 2 :: 2 :: 2 :: change [2] 0 handle Change
> 5 :: 5 :: 2 :: 2 :: 2 :: nil
> [5, 5, 2, 2, 2]
As you can see, when the Change exception is caught, the algorithm goes back two
stack frames, drops the third 5 coin from the result list and recurses only with
a 2 coin in the list of coins. The amount is also reset to 6.
In the first line of the excerpt below, the part before handle tries to use another 5 in the
decomposition, while the exception handler represents the backtracking option,
i.e., remove the 5 we've just tried and adjust the coin list and the remaining
amount.
The final line of the excerpt signals the last installed exception handler to backtrack:
> 5 :: 5 :: 5 :: change [5, 2] 1 handle Change: 5 :: 5 :: change [2] 6
> 5 :: 5 :: 5 :: change [2] 1
> 5 :: 5 :: 5 :: change nil 1
> raise Change => 5 :: 5 :: change [2] 6
In other words, the algorithm backtracks when it reaches a state where no more
coin types are available to choose, but the amount is still positive. It's greedy
because the algorithm will use the same coin until it catches an exception.
The exception handler is attached to the else expression because it's where
the greedy choice is made.
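If it helps to see the same control flow outside of SML, here is a rough Python transliteration of the function (my own sketch, not taken from Harper's text); the Python exception plays exactly the role of the Change exception above:

class Change(Exception):
    # Raised when the remaining coin types cannot make the remaining amount.
    pass

def change(coins, amt):
    # Mirrors the SML clauses: amount 0 -> done, no coins left -> backtrack.
    if amt == 0:
        return []
    if not coins:
        raise Change()
    coin, rest = coins[0], coins[1:]
    if coin > amt:
        return change(rest, amt)
    try:
        # Greedy choice: keep using the current coin...
        return [coin] + change(coins, amt - coin)
    except Change:
        # ...and undo that choice (backtrack) if it can never complete.
        return change(rest, amt)

print(change([5, 2], 16))  # [5, 5, 2, 2, 2]

The try/except sits exactly where the SML handle sits, around the greedy branch, so catching Change discards the coin just added and retries with the remaining coin types.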
Hopefully I've been intelligible with my explanation.

predict function not evaluating for all given points with F# R-type-provider

I am converting a small code snippet to F# using the R type provider. Everything works fine and evaluates; however, I cannot seem to make the predict function use all the points I give it for its prediction.
The R code snippet:
Nit = c(0,0,0,1,1,1,2,2,2,3,3,3,4,4,4,6,6,6)
AOB = c(4.26,4.15,4.68,6.08,5.87,6.92,6.87,6.25,6.84,6.34,6.56,6.52,7.39,7.38,7.74,7.76,8.14,7.22)
AOBm=tapply(AOB,Nit,mean) #means of AOB
Nitm=tapply(Nit,Nit,mean) #means of Nit
fitAOB=lm(AOBm~ns(Nitm,df=2)) #natural spline
predict(fitAOB,data.frame(Nitm=seq(xmin,xmax,.5)))
and the corresponding F# code:
let Nit = [0;0;0;1;1;1;2;2;2;3;3;3;4;4;4;6;6;6]
let AOB = [4.26;4.15;4.68;6.08;5.87;6.92;6.87;6.25;6.84;6.34;6.56;6.52;7.39;7.38;7.74;7.76;8.14;7.22]
let AOBm = R.tapply(AOB,Nit, "mean")
let Nitm = R.tapply(Nit, Nit, "mean")
let fitAOB =
    namedParams [
        "AOBm", box AOBm
        "Nitm", box Nitm
    ]
    |> R.data_frame
    |> fun d -> R.lm(formula="AOBm~splines::ns(Nitm,df=2)", data=d)
let xmin, xmax = float(List.min Nit), float(List.max Nit)
let prediction1 =
    namedParams [ "Nitm", [xmin .. 0.5 .. xmax] ]
    |> R.data_frame
    |> fun data -> R.predict(fitAOB, data)
prediction1.Print()
The R code snippet gives me following for the prediction:
1 2 3 4 5 6 7 8
4.753486 5.177103 5.590795 5.984636 6.348702 6.673067 6.947806 7.166302
9 10 11 12 13
7.335171 7.464340 7.563733 7.643276 7.712893
And the F# code snippet gives me following for the prediction:
val it : string =
" 0 1 2 3 4 6
4.753486 5.668817 6.470509 7.048984 7.388006 7.660199
"
What am I missing? For example, why doesn't the predict function take all of the [0.0 .. 0.5 .. 6.0] points into account when predicting?
I found a way to make R.predict yield the same result as the predict function used from within R:
let prediction1 =
    namedParams [
        "type", box "prediction"
        "Nitm", box [0.0 .. 0.5 .. 6.0]
    ]
    |> R.data_frame
    |> fun d -> R.predict(fitAOB, [0.0 .. 0.5 .. 6.0], d)
prediction1.Print()
which gives
val it : string =
" 1 2 3 4 5 6 7 8
4.753486 5.177103 5.590795 5.984636 6.348702 6.673067 6.947806 7.166302
9 10 11 12 13
7.335171 7.464340 7.563733 7.643276 7.712893
"
However, I don't quite understand why I have to repeat the [0.0 .. 0.5 .. 6.0] sequence.

Get the coordinates of a matrix from its flattened index

How can we get the coordinates of an n-dimensional matrix from its shape and its flattened index?
I mean, if for example I have the following (2,3) matrix (2 dimensions):
[ [ 0, 1 ],
[ 2, 3 ],
[ *4*, 5 ] ]
...and I want to find the flattened index of the element in bold (here 4) from its coordinates [0,2], how can I do it?
Or, if I have this (2,2,5) matrix (3 dimensions):
[ [ [ nil, nil ],
[ nil, nil ] ],
[ [ nil, nil ],
[ nil, nil ] ],
[ [ nil, *9* ],
[ nil, nil ] ],
[ [ nil, nil ],
[ nil, nil ] ],
[ [ nil, nil ],
[ nil, nil ] ] ]
...and I know that the element I want has a flattened index of 9, how can I find that its coordinates are [1,0,2]?
If possible, I would like a general and simple method that works on a matrix of any shape.
Many thanks for your help.
You can use this simple algorithm.
Let's say you have the matrix A[a][b][c][d] (where a, b, c, d are the dimensions) and the index X.
To get the first coordinate for index X, you simply take the integer division of X by b*c*d.
Take, for example, this matrix with sizes [2][5] and the index X = 7:
0 1 2 3 4
5 6 7 8 9
You first divide X by the last dimension to find the first coordinate: X/5 = 1. Then you update X to X % 5, so X = 7 % 5 = 2. Now you find the coordinates for the remaining dimensions using the same algorithm. When you reach the last dimension, its coordinate is simply the remaining X, in this case 2. So the coordinates for X = 7 are [1][2], which is indeed the answer.
Again, for the general case with dimensions a, b, c, d, writing (yd) for the y'th coordinate and using integer division:
X = index
(1d) = X / (b*c*d)
X becomes X % (b*c*d)
(2d) = X / (c*d)
X becomes X % (c*d)
(3d) = X / d
X becomes X % d
(4d) = X
If you had the dimensions [2][2][5] you would get:
X = 9
(1d) = 9 / (2*5) = 0
X = 9 % 10 = 9
(2d) = 9 / 5 = 1
X = 9 % 5 = 4
(3d) = 4
Result: [0][1][4] is the element at flat index 9.
To get from [0][1][4] back to the index 9, you apply the reverse algorithm, multiplying as you go:
X = (1d)*(b*c) + (2d)*c + (3d) = 0*10 + 1*5 + 4 = 9
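As a minimal sketch of both directions in Python (my own illustration; the function names unflatten_index and flatten_index are made up), assuming row-major order:

def unflatten_index(index, shape):
    # Convert a flat (row-major) index into coordinates for the given shape.
    coords = []
    for dim in reversed(shape):
        index, rem = divmod(index, dim)  # peel off the last dimension first
        coords.append(rem)
    return list(reversed(coords))

def flatten_index(coords, shape):
    # Convert coordinates back into a flat (row-major) index.
    index = 0
    for coord, dim in zip(coords, shape):
        index = index * dim + coord
    return index

print(unflatten_index(9, (2, 2, 5)))        # [0, 1, 4]
print(flatten_index([0, 1, 4], (2, 2, 5)))  # 9

If a library is acceptable, NumPy's numpy.unravel_index and numpy.ravel_multi_index perform the same conversions.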
