Experienced programmer playing around with GameMaker Studio 2.
I'm trying to draw some random squares on the screen, using a 2D array to store the "map".
Step 1: declare a 2D array MyMap[25,25]. This works.
Step 2: set 100 random locations in MyMap[] to 1. This works.
I get a crash when I try to look up the values I have stored in the array.
It's crashing with:
**Execution Error - Variable Index [3,14] out of range [26,14]**
So it looks like it is trying to read a 26th element, when you can see from my code that the for loop only goes to 20 and the array bound is 25.
Oddly enough, it gets through the first two loops just fine.
It's looking like a bug. I've spent so much time trying to work it out; anyone got an idea what is going on?
var tx=0;
var ty=0;
var t=0;
MyMap[25,25]=99; **// Works**
for( t=1; t<100; t+=1 ) **// Works**
{
MyMap[random(20),random(15)]=1
}
for( tx=1; tx<20; tx+=1 )
{
for( ty=1; ty<15; ty+=1 )
{
show_debug_message(string(tx) + ":" + string(ty))
t = MyMap[tx,ty]; /// **<---- Crashes Here**
if t=1 then {draw_rectangle(tx*32,ty*32,tx*32+32,ty*32+32,false) }
}
}
The line MyMap[random(20),random(15)]=1 does not initialize values in the entire array; it creates a sparse array (where some elements do not exist).
The line MyMap[25,25]=99;
should read:
for( tx=1; tx<20; tx+=1 )
{
for( ty=1; ty<15; ty+=1 )
{
MyMap[tx,ty]=99;
}
}
This will pre-initialize all of the array values to 99, filling out the array.
Then you can randomly assign the ones. (You will probably get fewer than 100 ones, due to duplicates from the random function and random() returning zeros.)
You should have the above code in the Create Event, or in another single-fire or controlled-fire event, and move the loops for the draw into the Draw Event.
All draw calls should be in the Draw Event. If the entire block were in the Draw Event, it would re-randomize the blocks each step.
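Putting that together, here is a minimal sketch. Two assumptions on my part: it uses the legacy 2D array accessor from your code, and irandom() in place of random(), since random() returns fractional values rather than integer indices.

// Create Event: build and fill the map once
for (var tx = 0; tx <= 25; tx += 1)
{
    for (var ty = 0; ty <= 25; ty += 1)
    {
        MyMap[tx, ty] = 99; // pre-initialize every element so none are missing
    }
}
repeat (100)
{
    MyMap[irandom(19), irandom(14)] = 1; // integer indices; duplicates are possible
}

// Draw Event: read the map and draw the squares
for (var tx = 1; tx < 20; tx += 1)
{
    for (var ty = 1; ty < 15; ty += 1)
    {
        if (MyMap[tx, ty] == 1) draw_rectangle(tx*32, ty*32, tx*32 + 32, ty*32 + 32, false);
    }
}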
I would like to convert this code from R into Matlab.
require(tseries)
require(fracdiff)
require(matrixStats)
DCCA_beta_avg<-function(y,x,smin,smax,step){
XTX<-var(x)*(length(x)-1)
betas<-rep(0,(smax-smin)/step+1)
for(s in seq(smin,smax,by=step)){
betas[(s-smin)/step+1]<-DCCA_beta_sides(y,x,s)
}
DCCA_beta<-mean(betas)
DCCA_res<-(y-DCCA_beta*x)-mean(y-DCCA_beta*x)
DCCA_sigma2<-sum(DCCA_res^2)/(length(DCCA_res)-2)
DCCA_SE<-sqrt(DCCA_sigma2/XTX)
DCCA_R2<-1-var(DCCA_res)/var(y)
OLS_beta<-lm(y~x)$coefficients[2]
OLS_res<-(y-OLS_beta*x)-mean(y-OLS_beta*x)
OLS_sigma2<-sum(OLS_res^2)/(length(OLS_res)-2)
OLS_SE<-sqrt(OLS_sigma2/XTX)
OLS_R2<-1-var(OLS_res)/var(y)
return(c(OLS_beta,OLS_SE,OLS_R2,DCCA_beta,DCCA_SE,DCCA_R2))
}
DCCA_beta_sides<-function(y,x,s){
xx<-cumsum(x-mean(x))
yy<-cumsum(y-mean(y))
t<-1:length(xx)
F2sj_xy<-runif(floor(length(xx)/s))
F2sj_xx<-F2sj_xy
for(ss in seq(1,(floor(length(xx)/s)*s),by=s)){
F2sj_xy[(ss-1)/s+1]<-sum((summary(lm(xx[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(yy[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
F2sj_xx[(ss-1)/s+1]<-sum((summary(lm(xx[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(xx[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
}
beta<-mean(F2sj_xy)/mean(F2sj_xx)
return(beta)
}
DCCA_beta_F<-function(y,x,s){
xx<-cumsum(x-mean(x))
yy<-cumsum(y-mean(y))
t<-1:length(xx)
F2sj_xy<-runif(floor(length(xx)/s))
F2sj_xx<-F2sj_xy
F2sj_yy<-F2sj_xy
for(ss in seq(1,(floor(length(xx)/s)*s),by=s)){
F2sj_xy[(ss-1)/s+1]<-sum((summary(lm(xx[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(yy[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
F2sj_xx[(ss-1)/s+1]<-sum((summary(lm(xx[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(xx[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
F2sj_yy[(ss-1)/s+1]<-sum((summary(lm(yy[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(yy[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
}
beta<-mean(F2sj_xy)/mean(F2sj_xx)
return(c(beta,mean(F2sj_xx),mean(F2sj_yy)))
#return(c(beta,sum(F2sj_xx),sum(F2sj_yy)))
}
DCCA_beta_SE<-function(y,x,s){
r<-DCCA_beta_F(y,x,s)
beta<-r[1]
yhat<-beta*x
alpha<-mean(y)-beta*mean(x)
res<-y-yhat
residuals<-res-mean(res)
resres<-cumsum(residuals-mean(residuals))
F2sj_res<-runif(floor(length(residuals)/s))
t<-1:length(resres)
for(ss in seq(1,(floor(length(residuals)/s)*s),by=s)){
F2sj_res[(ss-1)/s+1]<-sum((summary(lm(resres[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(resres[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
}
#SE<-mean(residuals^2)/((length(residuals)-2)*r[2])
SE<-mean(F2sj_res)/((length(residuals)-2)*r[2])
SE_a<-(mean(F2sj_res)/r[2])*(sum(x^2)/(length(residuals)*(length(residuals)-2)))
R<-1-mean(F2sj_res)/(r[3])
return(c(alpha,sqrt(SE_a),beta,sqrt(SE),R))
}
DCCA_beta_SE_F<-function(y,x,s){
r<-DCCA_beta_F(y,x,s)
beta<-r[1]
yhat<-beta*x
alpha<-mean(y)-beta*mean(x)
res<-y-yhat
residuals<-res-mean(res)
res_R<-y-x
resres<-cumsum(residuals-mean(residuals))
resres_R<-cumsum(res_R)
F2sj_res<-runif(floor(length(residuals)/s))
F2sj_res_R<-runif(floor(length(res_R)/s))
t<-1:length(resres)
for(ss in seq(1,(floor(length(residuals)/s)*s),by=s)){
F2sj_res[(ss-1)/s+1]<-sum((summary(lm(resres[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(resres[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
F2sj_res_R[(ss-1)/s+1]<-sum((summary(lm(resres_R[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals)*(summary(lm(resres_R[ss:(ss+s-1)]~t[ss:(ss+s-1)]))$residuals))/(s-1)
}
#SE<-mean(residuals^2)/((length(residuals)-2)*r[2])
#SE<-mean(F2sj_res)/((length(residuals)-2)*r[2])
#SE<-mean(F2sj_res)/((length(F2sj_res)-2)*r[2]) #controlling for uncertainty connected to scales (higher scales have higher uncertainty due to lower number of blocks)
SE<-mean(F2sj_res)/(ceiling(length(residuals)/s)*r[2]) #loosing d.f. due to fitting a and b in each box
#SE_a<-(mean(F2sj_res)/r[2])*(sum(x^2)/(length(residuals)*(length(residuals)-2)))
#SE_a<-(mean(F2sj_res)/r[2])*(sum(x^2)/(length(F2sj_res)*(length(F2sj_res)-2))) #controlling for uncertainty connected to scales (higher scales have higher uncertainty due to lower number of blocks)
SE_a<-(mean(F2sj_res)/r[2])*(sum(x^2)/(length(residuals)*ceiling(length(residuals)/s))) #loosing d.f. due to fitting a and b in each box
R<-1-mean(F2sj_res)/(r[3])
#SSR_U<-sum(residuals^2)
SSR_U<-sum(F2sj_res)
#SSR_R<-sum((y-x)^2) #specific null: alpha=0, beta=1
SSR_R<-sum(F2sj_res_R)
#F_stat<-((SSR_R-SSR_U)/(SSR_U))*((length(residuals)-2)/2)
#F_stat<-((SSR_R-SSR_U)/(SSR_U))*((length(F2sj_res)-2)/2) #controlling for uncertainty connected to scales (higher scales have higher uncertainty due to lower number of blocks)
F_stat<-((SSR_R-SSR_U)/(SSR_U))*(ceiling(length(residuals)/s)/2) #loosing d.f. due to fitting a and b in each box
F_p<-pf(F_stat,2,length(F2sj_res)-2,lower.tail=FALSE)
return(c(alpha,sqrt(SE_a),beta,sqrt(SE),R,F_stat,F_p))
}
DCCA_beta_s<-function(y,x,smin,smax,step){
results<-matrix(rep(0,6*((smax-smin)/step+1)),ncol=6)
for(s in seq(smin,smax,by=step)){
beta<-DCCA_beta_SE(y,x,s)
results[((s-smin)/step+1),1]<-s
results[((s-smin)/step+1),2]<-beta[1]
results[((s-smin)/step+1),3]<-beta[2]
results[((s-smin)/step+1),4]<-beta[3]
results[((s-smin)/step+1),5]<-beta[4]
results[((s-smin)/step+1),6]<-beta[5]
}
return(results)
}
DCCA_beta_s_F<-function(y,x,smin,smax,step){
results<-matrix(rep(0,10*((smax-smin)/step+2)),ncol=10)
for(s in seq(smin,smax,by=step)){
beta<-DCCA_beta_SE_F(y,x,s)
results[((s-smin)/step+1),1]<-s
results[((s-smin)/step+1),2]<-beta[1]
results[((s-smin)/step+1),3]<-beta[2]
results[((s-smin)/step+1),4]<-2*pnorm(abs(beta[1]/beta[2]),lower.tail=FALSE)#p-value for null=0
results[((s-smin)/step+1),5]<-beta[3]
results[((s-smin)/step+1),6]<-beta[4]
results[((s-smin)/step+1),7]<-2*pnorm(abs((beta[3]-1)/beta[4]),lower.tail=FALSE)#p-value for null=1
results[((s-smin)/step+1),8]<-beta[5]
results[((s-smin)/step+1),9]<-beta[6]
results[((s-smin)/step+1),10]<-beta[7]
}
#results[(smax-smin)/step+2,2]<-mean(results[1:(dim(results)[1]-1),2])#A
#results[(smax-smin)/step+2,5]<-mean(results[1:(dim(results)[1]-1),5])#B
results[(smax-smin)/step+2,2]<-sum(results[1:(dim(results)[1]-1),2]*results[1:(dim(results)[1]-1),8])/sum(results[1:(dim(results)[1]-1),8])#A as R2(s) weighted
results[(smax-smin)/step+2,5]<-sum(results[1:(dim(results)[1]-1),5]*results[1:(dim(results)[1]-1),8])/sum(results[1:(dim(results)[1]-1),8])#B as R2(s) weighted
results[(smax-smin)/step+2,3]<-sqrt((sum(x^2)/length(x))*sum((y-results[(smax-smin)/step+2,2]-results[(smax-smin)/step+2,5]*x)^2)/((length(y)-dim(results)[1]+1)*sum((x-mean(x))^2)))#SE_A
results[(smax-smin)/step+2,4]<-2*pnorm(abs(results[(smax-smin)/step+2,2]/results[(smax-smin)/step+2,3]),lower.tail=FALSE)#p-value for null=0
results[(smax-smin)/step+2,6]<-sqrt(sum((y-results[(smax-smin)/step+2,2]-results[(smax-smin)/step+2,5]*x)^2)/((length(y)-dim(results)[1]+1)*sum((x-mean(x))^2)))#SE_B
results[(smax-smin)/step+2,7]<-2*pnorm(abs((results[(smax-smin)/step+2,5]-1)/results[(smax-smin)/step+2,6]),lower.tail=FALSE)#p-value for null=1
results[(smax-smin)/step+2,8]<-1-sum((y-results[(smax-smin)/step+2,2]-results[(smax-smin)/step+2,5]*x)^2)/sum(y^2)#R2
results[(smax-smin)/step+2,9]<-((length(x)-2)/2)*((results[(smax-smin)/step+2,8]-(1-(sum((y-x)^2))/(sum((y-mean(y))^2))))/(1-results[(smax-smin)/step+2,8]))#F_test_R2_based
results[(smax-smin)/step+2,10]<-pf(results[(smax-smin)/step+2,9],2,length(y)-2,lower.tail=FALSE)#F_test p_val
return(results)
}
This code is aimed at computing regression parameters via detrended cross-correlation analysis (DCCA).
Is there a way to do the conversion automatically, or should we do it line by line?
The analysis should be performed on time series of the same length.
The answer is no, or at least I don't know of any way to do it; they are two different languages.
You may want to look at the MATLAB File Exchange; there is a package called MATLAB R-link that allows you to connect MATLAB with R, so you can call R functions from MATLAB.
Here is the description of that file:
A COM-based interface that allows you to call R functions from within MATLAB. The functions are:
openR - Connect to an R server process.
evalR - Run an R command.
getRdata - Copies an R variable to MATLAB.
putRdata - Copies MATLAB data to an R variable.
closeR - Close connection to R server process.
Rdemo - An example of using R from within MATLAB.
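For example, a session using those functions might look like this (a sketch only; the script name DCCA.R is a placeholder, and it assumes the package is installed and the R server is registered):

openR;                        % connect to an R server process
putRdata('x', x);             % copy MATLAB vectors into R
putRdata('y', y);
evalR('source("DCCA.R")');    % load your R functions (hypothetical filename)
evalR('res <- DCCA_beta_avg(y, x, 10, 100, 10)');
res = getRdata('res');        % copy the result back into MATLAB
closeR;                       % close the connection to the R server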
The other option is to translate the code line by line; a sketch of what that looks like for one helper is below.
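To give a taste of the manual route, here is a hedged sketch of one helper (the box-wise detrended covariance computed by DCCA_beta_sides) in MATLAB; polyfit/polyval residuals stand in for R's lm() residuals:

function beta = DCCA_beta_sides(y, x, s)
% Sketch translation: detrended covariance ratio over non-overlapping boxes of size s
x = x(:); y = y(:);
xx = cumsum(x - mean(x));      % integrated (profile) series
yy = cumsum(y - mean(y));
nBoxes = floor(numel(xx) / s);
F2sj_xy = zeros(nBoxes, 1);
F2sj_xx = zeros(nBoxes, 1);
for j = 1:nBoxes
    idx = ((j - 1) * s + 1) : (j * s);
    t = idx(:);
    % residuals of a linear trend fit inside the box
    rx = xx(idx) - polyval(polyfit(t, xx(idx), 1), t);
    ry = yy(idx) - polyval(polyfit(t, yy(idx), 1), t);
    F2sj_xy(j) = sum(rx .* ry) / (s - 1);
    F2sj_xx(j) = sum(rx .* rx) / (s - 1);
end
beta = mean(F2sj_xy) / mean(F2sj_xx);
end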
EDIT:
I kept searching for more options and found another one that allows executing R code inside MATLAB.
I hope this helps.
We're displaying time series data (utilisation of a compute resource, sampled hourly over months) on a stacked area chart using D3.js:
d3.json("/growth/instance_count_1month.json", function( data ) {
data.forEach(function(d) {
d.datapoints = d.datapoints.map(
function(da) {
// NOTE i'm not sure why this needs to be multiplied by 1000
return {date: new Date(da[1] * 1000),
count: da[0]};
});
});
x.domain(d3.extent(data[0].datapoints, function(d) { return d.date; }));
y.domain([0,
Math.ceil(d3.max(data.map(function (d) {return d3.max(d.datapoints, function (d) { return d.count; });})) / 100) * 100
]);
The result is rather spiky for my tastes:
Is there an easy way to simplify the data, either using D3 or another readily available library? I want to reduce the spikiness, but also reduce the volume of data to be graphed, as it will get out of hand.
I have a preference for doing this at the UI level, rather than touching the logging routines (even though redundant JSON data will have to be transferred.)
You have a number of options; you need to decide what is the best way forward for the type of data you have and how it will be used. Without knowing more about your data, the best I can suggest is re-sampling: simply report the data at longer intervals ('rolling up' the data). Alternatively, you could use a rolling average, or look at various line-smoothing algorithms; a sketch of the rolling-average approach is below.
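For instance, a simple client-side moving average could be applied to each series before binding it to the chart (a sketch; windowSize is a tuning assumption, e.g. 24 hourly samples for day-level smoothing):

// Smooth a series of {date, count} points with a trailing moving average.
function movingAverage(datapoints, windowSize) {
  return datapoints.map(function(d, i, arr) {
    var start = Math.max(0, i - windowSize + 1);
    var win = arr.slice(start, i + 1);
    var sum = win.reduce(function(acc, p) { return acc + p.count; }, 0);
    return { date: d.date, count: sum / win.length };
  });
}

// e.g. inside the d3.json callback, after mapping the datapoints:
// d.datapoints = movingAverage(d.datapoints, 24);

Note this only smooths the line; to also cut the volume of data graphed, keep every windowSize-th point after averaging.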
I'm trying to write code for generating standard normals using a double exponential distribution. In other words,
x is a double exponential, which I've already coded correctly here:
double.exponential.rv<-function(i)
{
u<-runif(1)
x<--log(1-u)
v<-runif(1)
if (v<.5) x<-(-x)
return(x)
}
where i=1.
y is drawn uniformly from 0 to (sqrt(exp(1)/(2*pi)))*exp(-abs(x)).
So far my code is this:
std.normal.rv<-function(i)
{
while(1)
{
x<-double.exponential.rv(1)
y<-runif(1)*(sqrt(exp(1)/(2*pi)))*exp(-abs(x))
if (y<=(sqrt(exp(1)/(2*pi)))*exp(-abs(x))) return(x)
}
hist(x,nclass=100,freq=FALSE)
}
I'm not quite sure what I'm doing wrong, but I receive the error: no function to return from, jumping to top level.
And it doesn't plot. Any suggestions to fix my code?
Thanks!
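For reference, a corrected sketch of the sampler: the accept/reject test has to compare the uniform draw against the target normal density (comparing it against the envelope itself is always true, so the function just returns Laplace draws), and the histogram has to be drawn outside the function, over many draws:

# Sketch: accept/reject sampling of N(0,1) using the Laplace envelope above
std.normal.rv <- function(i)
{
  repeat
  {
    x <- double.exponential.rv(1)
    # uniform draw between 0 and the scaled envelope at x
    y <- runif(1) * (sqrt(exp(1) / (2 * pi))) * exp(-abs(x))
    # accept only if y falls under the N(0,1) density
    if (y <= exp(-x^2 / 2) / sqrt(2 * pi)) return(x)
  }
}

samples <- sapply(1:10000, std.normal.rv)
hist(samples, nclass = 100, freq = FALSE)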
I have a problem reading a DICOM file. The transfer syntax is 1.2.840.10008.1.2.4.70 (JPEG Lossless, Process 14 with a first-order prediction, Selection Value 1). I am writing my own software.
Here is the result of my work.
I have also attached a .dcm file.
What can be wrong with it? Only RadiAnt DICOM Viewer opens it correctly (I didn't find any working software with source code).
Does anyone have a tutorial about it? Any working code?
I'll be very grateful!
Thanks for the help.
Here is how I do it:
//I have:
numCol = imageWidth;
numRow = imageHeight;
dwCurrentBufferLength; // -> where I am in the file
//and other stuff...
//First I decode the first row:
//[0][0]
DecodeFirstRow(curRowBuf, dwCurrentBufferLength);
//I calculate the differences
HuffDecode(table, &val, dwCurrentBufferLength);
//and extend
HuffExtend(extend, val);
curRowBuf[0][curComp] = extend + (1 << (Pr - Pt - 1));
//[1-n][0]
//... Huffman stuff
curRowBuf[col][curComp] = extend + curRowBuf[col-1][curComp];
//Then I put the row into the vector:
for (col = 0; col < numCol; col++)
{
    v = curRowBuf[col][0] << point_transform_parameter;
    m_vOutputBuf.push_back(v);
}
//Remaining rows
//[0][m]
curRowBuf[0][curComp] = extend + prevRowBuf[0][curComp];
predictor = left = curRowBuf[leftcol][curComp];
//[1-n][m]
curRowBuf[col][curComp] = extend + predictor;
//and also put it into the vector
Where must I subtract this 1000?
Most likely you have not accounted for the Rescale Intercept, tag (0028,1052), when calculating your HU values. According to the DICOM file, the intercept is -1000.
To get the appropriate HU values for your image, use this formula:
HU = rescale_slope * pixel_value + rescale_intercept
where the Rescale Intercept is obtained from tag (0028,1052) and the Rescale Slope from tag (0028,1053).
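In code, that amounts to one multiply-add per decoded pixel (a sketch; the function name and the uint16 pixel type are assumptions, not from your listing):

#include <cstdint>
#include <vector>

// Convert raw stored pixel values to Hounsfield Units using the DICOM rescale tags.
std::vector<double> toHounsfield(const std::vector<uint16_t>& pixels,
                                 double rescaleSlope,      // tag (0028,1053), often 1
                                 double rescaleIntercept)  // tag (0028,1052), -1000 here
{
    std::vector<double> hu;
    hu.reserve(pixels.size());
    for (uint16_t p : pixels)
        hu.push_back(rescaleSlope * static_cast<double>(p) + rescaleIntercept);
    return hu;
}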
Here is what I see using gdcmviewer:
$ gdcmviewer 3DSlice1.dcm
GDCM is open source, so you could study it.