I have a problem: I have a bunch of lengths and want to start at the origin (pretend I'm facing the positive end of the y axis). I make a right and move positively along the x axis for a distance of length_i. At that point I make another right turn, walk the distance of the next length, and repeat n times. I can do this, but I think there's a more efficient way to do it and I lack a math background:
## Fake Data
set.seed(11)
dat <- data.frame(id = LETTERS[1:6], lens = sample(2:9, 6),
                  x1 = NA, y1 = NA, x2 = NA, y2 = NA)
## id lens x1 y1 x2 y2
## 1 A 4 NA NA NA NA
## 2 B 2 NA NA NA NA
## 3 C 5 NA NA NA NA
## 4 D 8 NA NA NA NA
## 5 E 6 NA NA NA NA
## 6 F 9 NA NA NA NA
## Add a cycle-of-4 column
dat[, "cycle"] <- rep(1:4, ceiling(nrow(dat)/4))[1:nrow(dat)]
## For loop to use the information from the cycle column
for (i in 1:nrow(dat)) {
    ## set x1, y1
    if (i == 1) {
        dat[1, c("x1", "y1")] <- 0
    } else {
        dat[i, c("x1", "y1")] <- dat[(i - 1), c("x2", "y2")]
    }
    col1 <- ifelse(dat[i, "cycle"] %% 2 == 0, "x1", "y1")
    col2 <- ifelse(dat[i, "cycle"] %% 2 == 0, "x2", "y2")
    dat[i, col2] <- dat[i, col1]
    col3 <- ifelse(dat[i, "cycle"] %% 2 != 0, "x2", "y2")
    col4 <- ifelse(dat[i, "cycle"] %% 2 != 0, "x1", "y1")
    mag <- ifelse(dat[i, "cycle"] %in% c(1, 4), 1, -1)
    dat[i, col3] <- dat[i, col4] + (dat[i, "lens"] * mag)
}
This gives the desired result:
> dat
id lens x1 y1 x2 y2 cycle
1 A 4 0 0 4 0 1
2 B 2 4 0 4 -2 2
3 C 5 4 -2 -1 -2 3
4 D 8 -1 -2 -1 6 4
5 E 6 -1 6 5 6 1
6 F 9 5 6 5 -3 2
Here it is as a plot:
library(ggplot2); library(grid)
ggplot(dat, aes(x = x1, y = y1, xend = x2, yend = y2)) +
    geom_segment(aes(color = id), size = 3, arrow = arrow(length = unit(0.5, "cm"))) +
    ylim(c(-10, 10)) + xlim(c(-10, 10))
This seems slow and clunky. I'm guessing there's a better way to do this than the items I do in the for loop. What's a more efficient way to keep making programmatic rights?
(As suggested by @DWin) Here is a solution using complex numbers, which is flexible to any kind of turn, not just 90-degree (-pi/2 radians) right angles. Everything is vectorized:
set.seed(11)
dat <- data.frame(id = LETTERS[1:6], lens = sample(2:9, 6),
                  turn = -pi/2)
dat <- within(dat, {
    facing <- pi/2 + cumsum(turn)
    move <- lens * exp(1i * facing)
    position <- cumsum(move)
    x2 <- Re(position)
    y2 <- Im(position)
    x1 <- c(0, head(x2, -1))
    y1 <- c(0, head(y2, -1))
})
dat[c("id", "lens", "x1", "y1", "x2", "y2")]
# id lens x1 y1 x2 y2
# 1 A 4 0 0 4 0
# 2 B 2 4 0 4 -2
# 3 C 5 4 -2 -1 -2
# 4 D 8 -1 -2 -1 6
# 5 E 6 -1 6 5 6
# 6 F 9 5 6 5 -3
The turn variable should really be considered as an input together with lens. Right now all turns are -pi/2 radians but you can set each one of them to whatever you want. All other variables are outputs.
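For example, a quick sketch with mixed turns (the alternating right/left values here are just an illustration; it reuses the pipeline above unchanged):
## alternate right (-pi/2) and left (+pi/2) turns instead of all rights
dat$turn <- rep(c(-pi/2, pi/2), length.out = nrow(dat))
dat <- within(dat, {
    facing <- pi/2 + cumsum(turn)
    move <- lens * exp(1i * facing)
    position <- cumsum(move)
    x2 <- Re(position)
    y2 <- Im(position)
    x1 <- c(0, head(x2, -1))
    y1 <- c(0, head(y2, -1))
})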
Now having a little fun with it:
trace.path <- function(lens, turn) {
    facing <- pi/2 + cumsum(turn)
    move <- lens * exp(1i * facing)
    position <- cumsum(move)
    x <- c(0, Re(position))
    y <- c(0, Im(position))
    plot.new()
    plot.window(range(x), range(y))
    lines(x, y)
}
trace.path(lens = seq(0, 1, length.out = 200),
           turn = rep(pi/2 * (-1 + 1/200), 200))
(My attempt at replicating the graph here: http://en.wikipedia.org/wiki/Turtle_graphics)
I'll also let you try these:
trace.path(lens = seq(1, 10, length.out = 1000),
           turn = rep(2 * pi / 10, 1000))
trace.path(lens = seq(0, 1, length.out = 500),
           turn = seq(0, pi, length.out = 500))
trace.path(lens = seq(0, 1, length.out = 600) * c(1, -1),
           turn = seq(0, 8 * pi, length.out = 600) * seq(-1, 1, length.out = 200))
Feel free to add yours!
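One more from me: with constant lengths and a constant turn of 2*pi/n repeated n times, the n moves sum to zero and the path closes into a regular n-gon:
n <- 8
trace.path(lens = rep(1, n), turn = rep(2 * pi / n, n))  # a regular octagon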
This is yet another method using complex numbers. You can rotate a vector "to the right" in the complex plane by multiplying by -1i. The code below makes the first traversal go in the positive x direction (the Re()-al axis), and each subsequent traversal is rotated to the "right":
## assumes `lengths` is a numeric vector of segment lengths,
## e.g. lengths <- sample(1:10, 20, replace = TRUE)
imVecs <- lengths*c(0-1i)^(0:3)
imVecs
# [1] 9+0i 0-5i -9+0i 0+9i 8+0i 0-5i -8+0i 0+7i 8+0i 0-1i -5+0i 0+3i 4+0i 0-7i -4+0i 0+2i
#[17] 3+0i 0-7i -5+0i 0+8i
cumsum(imVecs)
# [1] 9+0i 9-5i 0-5i 0+4i 8+4i 8-1i 0-1i 0+6i 8+6i 8+5i 3+5i 3+8i 7+8i 7+1i 3+1i 3+3i 6+3i 6-4i 1-4i
#[20] 1+4i
plot(cumsum(imVecs))
lines(cumsum(imVecs))
This is the approach to using complex plane rotations to do 45 degree turns to the right:
> sqrt(-1i)
[1] 0.7071068-0.7071068i
> imVecs <- lengths*sqrt(0-1i)^(0:7)
Warning message:
In lengths * sqrt(0 - (0+1i))^(0:7) :
longer object length is not a multiple of shorter object length
> plot(cumsum(imVecs))
> lines(cumsum(imVecs))
And the plot: (image not reproduced here)
This isn't a pretty plot, but I've included it to show that this 'vectorized' coordinate calculation produces correct results which shouldn't be too hard to adapt to your needs:
xx <- c(1,0,-1,0)
yy <- c(0,-1,0,1)
coords <- suppressWarnings(cbind(x = cumsum(c(0, xx * dat$lens)),
                                 y = cumsum(c(0, yy * dat$lens))))
plot(coords, type="l", xlim=c(-10,10), ylim=c(-10,10))
It might be useful to think about this in terms of distance and bearing. Distance is given by dat$lens, and bearing is the angle of movement relative to some arbitrary reference line (say, the x-axis). Then, at each step,
x.new = x.old + distance * cos(bearing)
y.new = y.old + distance * sin(bearing)
bearing = bearing + increment
Here, since we start at the origin and move in the +x direction, (x,y)=(0,0) and bearing starts at 0 degrees. A right turn is simply a bearing increment of -90 degrees (-pi/2 radians). So in R code, using your definition of dat:
x <- 0
y <- 0
bearing <- 0
for (i in 1:nrow(dat)) {
    dat[i, c(3, 4)] <- c(x, y)
    length <- dat[i, 2]
    x <- x + length * cos(bearing)
    y <- y + length * sin(bearing)
    dat[i, c(5, 6)] <- c(x, y)
    bearing <- bearing - pi/2
}
This produces what you had and has the advantage that you can update it very simply to make left turns, or 45 degree turns, or whatever. You can even add a bearing.increment column to dat to create a random walk.
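A minimal sketch of that random-walk idea, reusing the question's dat (the bearing.increment column and the uniform draws are my own illustration):
set.seed(42)
dat$bearing.increment <- runif(nrow(dat), -pi, pi)  # a random turn at each step
x <- 0
y <- 0
bearing <- 0
for (i in 1:nrow(dat)) {
    dat[i, c("x1", "y1")] <- c(x, y)
    x <- x + dat[i, "lens"] * cos(bearing)
    y <- y + dat[i, "lens"] * sin(bearing)
    dat[i, c("x2", "y2")] <- c(x, y)
    bearing <- bearing + dat[i, "bearing.increment"]
}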
Very similar to Josh's solution:
lengths <- sample(1:10, 20, replace = TRUE)
x <- cumsum(lengths * c(1, 0, -1, 0))
y <- cumsum(lengths * c(0, 1, 0, -1))
cbind(x, y)
x y
[1,] 9 0
[2,] 9 5
[3,] 0 5
[4,] 0 -4
[5,] 8 -4
[6,] 8 1
[7,] 0 1
[8,] 0 -6
[9,] 8 -6
[10,] 8 -5
[11,] 3 -5
[12,] 3 -8
[13,] 7 -8
[14,] 7 -1
[15,] 3 -1
[16,] 3 -3
[17,] 6 -3
[18,] 6 4
[19,] 1 4
[20,] 1 -4
Base graphics:
plot(cbind(x,y))
arrows(cbind(x,y)[-20,1],cbind(x,y)[-20,2], cbind(x,y)[-1,1], cbind(x,y)[-1,2] )
This does highlight the fact that both Josh's and my solutions are "turning the wrong way", so you need to change the signs on our "transition matrices". And we probably should have started at (0,0), but you should have no trouble adapting this to your needs.
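A sketch of that fix: flip the signs in the y pattern so the turns go right, and prepend a 0 so the path starts at (0,0):
x <- cumsum(c(0, lengths * c(1, 0, -1, 0)))  # x steps: +, 0, -, 0
y <- cumsum(c(0, lengths * c(0, -1, 0, 1)))  # y steps: 0, -, 0, + (right turns)
plot(x, y, type = "l")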
I have a function:
f <- function(x,y){ return (x + y)}
I have to make a 2-D plot (not 3-D) with c(30:200) on both the X and Y axes. So I have to map both x and y to the function, and based on the result of that function I have to color the point where f(xi,yi) > ?, and so on. How would I achieve this?
I tried :
range <- c(30:200)
ys <- matrix(nrow = 171, ncol = 171)
for (i in range) {
    for (y in range) {
        ys[i - 29, y - 29] <- f(i, y)  # example: if f(i, y) < 0.5, color (i, y) red
    }
}
df <- data.frame(x = c(30:200), y = c(30:200))
Now the x and y axes are correct, but how would I be able to plot this, since I can't just bind ys to the y axis? Using a matrix like ys seems like it isn't the right way to achieve this; how would I do it?
Thanks for the help
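For what it's worth, a minimal sketch of one direct route: build the full grid with expand.grid(), evaluate f on it, and map the result (or a logical cutoff on it) to the fill aesthetic:
f <- function(x, y) x + y
grid <- expand.grid(x = 30:200, y = 30:200)
grid$val <- f(grid$x, grid$y)
library(ggplot2)
## continuous colouring; for a red/not-red rule, map fill = val < cutoff instead
ggplot(grid, aes(x, y, fill = val)) + geom_tile()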
Here's a sample given a small matrix.
First, I'll generate the matrix ... you use whatever data you want.
m <- matrix(1:25, nr=5)
m
# [,1] [,2] [,3] [,4] [,5]
# [1,] 1 6 11 16 21
# [2,] 2 7 12 17 22
# [3,] 3 8 13 18 23
# [4,] 4 9 14 19 24
# [5,] 5 10 15 20 25
Now, convert it to the "long" format that ggplot2 prefers:
library(dplyr)
library(tidyr)
longm <- cbind(m, x = seq_len(nrow(m))) %>%
    as.data.frame() %>%
    gather(y, val, -x) %>%
    mutate(y = as.integer(gsub("\\D", "", y)))
head(longm)
# x y val
# 1 1 1 1
# 2 2 1 2
# 3 3 1 3
# 4 4 1 4
# 5 5 1 5
# 6 1 2 6
And a plot:
library(ggplot2)
ggplot(longm, aes(x, y, fill=val)) + geom_tile()
# or, depending on other factors, otherwise identical
ggplot(longm, aes(x, y)) + geom_tile(aes(fill=val))
It's notable (to me) that the top-left value in the matrix (m[1,1]) is actually the bottom-left in the heatmap. This can be adjusted with scale_y_reverse(). From here, it should be primarily aesthetics.
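For instance, a one-line variant of the plot above:
## put m[1,1] at the top-left, matching the printed matrix orientation
ggplot(longm, aes(x, y, fill = val)) + geom_tile() + scale_y_reverse()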
I am trying to interpolate splines for the following example data:
trt depth root carbon
A 2 1 14
A 4 2 18
A 6 3 18
A 8 3 17
A 10 1 12
B 2 3 16
B 4 4 18
B 6 4 17
B 8 2 15
B 10 1 12
in the following way:
new_df <- df %>%
    group_by(trt) %>%
    summarise_each(funs(splinefun(., x = depth, method = "natural")))
I get an error ("Error: not a vector"), but I don't see why. Am I not expressing the function in the right way?
Do you want a dataset that contains the values interpolated? If so, I've expanded the dataset to contain the desired x locations before the splines are calculated.
The resolution of those points is determined in the second line of the expand.grid() call. Just make sure the original depth points are a subset of the expanded depth points (e.g., don't use something uneven like by=.732).
library(magrittr)
ds <- readr::read_csv("trt,depth,root,carbon\nA,2,1,14\nA,4,2,18\nA,6,3,18\nA,8,3,17\nA,10,1,12\nB,2,3,16\nB,4,4,18\nB,6,4,17\nB,8,2,15\nB,10,1,12")
ds_depths_possible <- expand.grid(
    depth = seq(from = min(ds$depth), max(ds$depth), by = .5), # Decide resolution here.
    trt = c("A", "B"),
    stringsAsFactors = FALSE
)
ds_intpolated <- ds %>%
    dplyr::right_join(ds_depths_possible, by = c("trt", "depth")) %>% # Incorporate locations to interpolate
    dplyr::group_by(trt) %>%
    dplyr::mutate(
        root_interpolated   = spline(x = depth, y = root,   xout = depth)$y,
        carbon_interpolated = spline(x = depth, y = carbon, xout = depth)$y
    ) %>%
    dplyr::ungroup()
ds_intpolated
Output:
Source: local data frame [34 x 6]
trt depth root carbon root_interpolated carbon_interpolated
(chr) (dbl) (int) (int) (dbl) (dbl)
1 A 2.0 1 14 1.000000 14.00000
2 A 2.5 NA NA 1.195312 15.57031
3 A 3.0 NA NA 1.437500 16.72917
4 A 3.5 NA NA 1.710938 17.52344
5 A 4.0 2 18 2.000000 18.00000
6 A 4.5 NA NA 2.289062 18.21094
7 A 5.0 NA NA 2.562500 18.22917
8 A 5.5 NA NA 2.804688 18.13281
9 A 6.0 3 18 3.000000 18.00000
10 A 6.5 NA NA 3.132812 17.88281
.. ... ... ... ... ... ...
In the graphs below, the little points & lines are interpolated. The big fat points are observed.
library(ggplot2)
ggplot(ds_intpolated, aes(x = depth, y = root_interpolated, color = trt)) +
    geom_line() +
    geom_point(shape = 1) +
    geom_point(aes(y = root), size = 5, alpha = .3, na.rm = TRUE) +
    theme_bw()
ggplot(ds_intpolated, aes(x = depth, y = carbon_interpolated, color = trt)) +
    geom_line() +
    geom_point(shape = 1) +
    geom_point(aes(y = carbon), size = 5, alpha = .3, na.rm = TRUE) +
    theme_bw()
If you want an additional example, here's some recent code and slides. We needed a rolling median for some missing points, and linear stats::approx() for some others. Another option is stats::loess(), but its arguments aren't as similar to those of approx() and spline().
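A minimal sketch of stats::approx() on the same data, in case linear interpolation is enough (it reuses ds and the half-unit depth grid from above):
a <- subset(ds, trt == "A")
## linear interpolation of root against depth for group A
approx(x = a$depth, y = a$root, xout = seq(2, 10, by = 0.5))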
I gave up trying to get dplyr::summarise_each to work (and also tried dplyr::summarise, since your choice of functions didn't seem to match your desire for multiple-column input returning only two functions). I'm not sure it's possible in dplyr. Here's what might be called the canonical method of approaching this:
lapply( split(df, df$trt), function(d) splinefun(x=d$depth, y=d$carbon) )
#-------------
$A
function (x, deriv = 0L)
{
    deriv <- as.integer(deriv)
    if (deriv < 0L || deriv > 3L)
        stop("'deriv' must be between 0 and 3")
    if (deriv > 0L) {
        z0 <- double(z$n)
        z[c("y", "b", "c")] <- switch(deriv, list(y = z$b, b = 2 *
            z$c, c = 3 * z$d), list(y = 2 * z$c, b = 6 * z$d,
            c = z0), list(y = 6 * z$d, b = z0, c = z0))
        z[["d"]] <- z0
    }
    res <- .splinefun(x, z)
    if (deriv > 0 && z$method == 2 && any(ind <- x <= z$x[1L]))
        res[ind] <- ifelse(deriv == 1, z$y[1L], 0)
    res
}
<bytecode: 0x7fe56e4853f8>
<environment: 0x7fe56efd3d80>

$B
function (x, deriv = 0L)
{
    deriv <- as.integer(deriv)
    if (deriv < 0L || deriv > 3L)
        stop("'deriv' must be between 0 and 3")
    if (deriv > 0L) {
        z0 <- double(z$n)
        z[c("y", "b", "c")] <- switch(deriv, list(y = z$b, b = 2 *
            z$c, c = 3 * z$d), list(y = 2 * z$c, b = 6 * z$d,
            c = z0), list(y = 6 * z$d, b = z0, c = z0))
        z[["d"]] <- z0
    }
    res <- .splinefun(x, z)
    if (deriv > 0 && z$method == 2 && any(ind <- x <= z$x[1L]))
        res[ind] <- ifelse(deriv == 1, z$y[1L], 0)
    res
}
<bytecode: 0x7fe56e4853f8>
<environment: 0x7fe56efc4db8>
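The payoff is that each element of the result is a callable interpolant; a quick usage sketch (the evaluation points are my own choice):
fits <- lapply(split(df, df$trt), function(d) splinefun(x = d$depth, y = d$carbon))
fits$A(seq(2, 10, by = 0.5))  # interpolated carbon for trt A
fits$B(6, deriv = 1)          # first derivative of the B spline at depth 6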
I apologize in advance if this has been asked before, or if I have missed something obvious.
I have two data sets, 'olddata' and 'newdata'
set.seed(0)
olddata <- data.frame(x = rnorm(10, 0,5), y = runif(10, 0, 5), z = runif(10,-10,10))
newdata <- data.frame(x = -5:5, z = -5:5)
I create a model from the old data, and want to predict values from the new data
mymodel <- lm(y ~ x+z, data = olddata)
predict.lm(mymodel, newdata)
However, I'd like to restrict the range of variables in 'newdata' to the range of variables on which the model was trained.
Of course I could do this:
newnewdata <- subset(newdata,
                     x < max(olddata$x) & x > min(olddata$x) &
                     z < max(olddata$z) & z > min(olddata$z))
But this gets intractable over many dimensions. Is there a less repetitive way to do this?
It seems that all the values in your newdata are already within the appropriate ranges, so there's nothing there to subset. If we expand the ranges of newdata:
set.seed(0)
olddata <- data.frame(x = rnorm(10, 0,5), y = runif(10, 0, 5), z = runif(10,-10,10))
newdata <- data.frame(x = -10:10, z = -10:10)
newdata
x z
1 -10 -10
2 -9 -9
3 -8 -8
4 -7 -7
5 -6 -6
6 -5 -5
7 -4 -4
8 -3 -3
9 -2 -2
10 -1 -1
11 0 0
12 1 1
13 2 2
14 3 3
15 4 4
16 5 5
17 6 6
18 7 7
19 8 8
20 9 9
21 10 10
Then all we need to do is identify the ranges for each variable of olddata and then loop through as many iterations of subset as newdata has columns:
ranges <- sapply(olddata, range, na.rm = TRUE)
for (i in 1:ncol(newdata)) {
    col_name <- colnames(newdata)[i]
    newdata <- subset(newdata,
                      newdata[, col_name] >= ranges[1, col_name] &
                      newdata[, col_name] <= ranges[2, col_name])
}
newdata
x z
4 -7 -7
5 -6 -6
6 -5 -5
7 -4 -4
8 -3 -3
9 -2 -2
10 -1 -1
11 0 0
12 1 1
13 2 2
14 3 3
15 4 4
16 5 5
17 6 6
Here is an approach using the *apply family (using SchaunW's newdata):
set.seed(0)
olddata <- data.frame(x = rnorm(10, 0, 5), y = runif(10, 0, 5), z = runif(10,-10,10))
newdata <- data.frame(x = -10:10, z = -10:10)
minmax <- sapply(olddata[-2], range)
newdata[apply(newdata, 1, function(a) all(a > minmax[1,] & a < minmax[2,])), ]
Some care is required because I have assumed the columns of olddata (after dropping the second column) are in the same order as the columns of newdata.
Brevity comes at the cost of speed. After increasing nrow(newdata) to 2000 to emphasize the difference, I found:
test replications elapsed relative user.self sys.self user.child sys.child
1 orizon() 100 2.193 27.759 2.191 0.002 0 0
2 SchaunW() 100 0.079 1.000 0.075 0.004 0 0
My guess at the main cause is that repeated subsetting avoids re-testing rows against the later criteria once they have already been excluded.
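If you want the one-liner's brevity without the row-wise apply(), here's a sketch that does the comparison column-wise instead (same assumption about matching column order):
## one logical column per variable, TRUE where the value lies inside the old range
in.range <- mapply(function(col, lo, hi) col > lo & col < hi,
                   newdata, minmax[1, ], minmax[2, ])
newdata[rowSums(in.range) == ncol(newdata), ]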
Imagine I have a function that gives the transition probability of going from state {x,y} to state {X,Y}: transition <- function(x, y, X, Y)
Imagine the x values can assume values on a discrete set of points x_grid and y assumes discrete values in y_grid, and I'd like to compute all possible transitions, e.g. fill out a 2D matrix like this:
X1Y1 X2Y1 X3Y1 X1Y2 .... X3Y3
x1,y1
x2,y1
x3,y1
x1,y2
x2,y2
x3,y2
...
x3,y3
What's the simplest way to loop over my function in R to generate this matrix?
A cumbersome approach with for loops
x_grid <- 1:3
y_grid <- 1:3
## dummy function
transition <- function(x, y, X, Y)
    x == X && y == Y
nx <- length(x_grid)
ny <- length(y_grid)
T <- matrix(NA, ncol = nx * ny, nrow = nx * ny)
for (i in 1:nx)
    for (j in 1:ny)
        for (k in 1:nx)
            for (l in 1:ny)
                T[i + (j - 1) * nx, k + (l - 1) * nx] <-
                    transition(x_grid[i], y_grid[j], x_grid[k], y_grid[l])
Surely there's a more efficient and more elegant way to do this in R?
For instance,
sapply(x_grid, function(x)
    sapply(y_grid, function(y)
        sapply(x_grid, function(X)
            sapply(y_grid, function(Y)
                transition(x, y, X, Y)))))
works more efficiently but returns an object of the wrong shape. Turning the outermost sapply into an lapply and then doing cbind on its elements corrects this, but feels very crude.
Here's a wild shot in the dark. I hope it's helpful:
#Some simple data grid points
d <- expand.grid(1:3,1:3,1:3,1:3)
#Trivial function
f <- function(x,y,X,Y){x*y*X*Y}
#Wrap mapply in matrix; fills by column by default
matrix(mapply(f,d$Var1,d$Var2,d$Var3,d$Var4),nrow = 9)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 1 2 3 2 4 6 3 6 9
[2,] 2 4 6 4 8 12 6 12 18
[3,] 3 6 9 6 12 18 9 18 27
[4,] 2 4 6 4 8 12 6 12 18
[5,] 4 8 12 8 16 24 12 24 36
[6,] 6 12 18 12 24 36 18 36 54
[7,] 3 6 9 6 12 18 9 18 27
[8,] 6 12 18 12 24 36 18 36 54
[9,] 9 18 27 18 36 54 27 54 81
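Adapting that to the question's setup just means feeding expand.grid() the two grids twice (a sketch using the dummy transition() from the question):
d <- expand.grid(x = x_grid, y = y_grid, X = x_grid, Y = y_grid)
## column-wise fill gives rows indexed by (x, y) and columns by (X, Y)
T <- matrix(mapply(transition, d$x, d$y, d$X, d$Y),
            nrow = length(x_grid) * length(y_grid))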
This creates a transition matrix where the probability of going from one state to another is defined as prob, and then assigns those probabilities to a data set. But I am not sure this does what you want.
set.seed(1234)
tran <- expand.grid(x1 = c(1, 2, 3), y1 = c(1, 2, 3),
x2 = c(1, 2, 3), y2 = c(1, 2, 3))
lin.prob <- -1.75 - 1.18 * ((tran[,1] - tran[,3])^2 +
(tran[,2] - tran[,4])^2) ^ 0.5
e <- exp(1)
prob <- e^lin.prob / (1+e^lin.prob)
tran <- cbind(tran, prob)
colnames(tran) = c("x1","y1","x2","y2", "transition.prob")
nsites <- 25
x1sites <- ceiling(runif(nsites, 0, 3))
y1sites <- ceiling(runif(nsites, 0, 3))
x2sites <- ceiling(runif(nsites, 0, 3))
y2sites <- ceiling(runif(nsites, 0, 3))
site <- seq_len(nsites)
sites <- cbind(site, x1sites, y1sites, x2sites, y2sites)
colnames(sites) = c("site", "x1","y1","x2","y2")
my.data <- merge(sites, tran,
                 by.x = c("x1", "y1", "x2", "y2"),
                 by.y = c("x1", "y1", "x2", "y2"),
                 all = FALSE, sort = FALSE)
my.data <- my.data[order(my.data$site), ]
my.data
I am trying to fill a matrix in R where the final result will ignore the diagonal entries and the values will be filled in around the diagonal. A simple example of what I mean is, if I take a simple 3x3 matrix like the one shown below:
ab <- c(1:9)
mat <- matrix(ab,nrow=3,ncol=3)
colnames(mat)<- paste0("x", 1:3)
rownames(mat)<- paste0("y", 1:3)
mat
x1 x2 x3
y1 1 4 7
y2 2 5 8
y3 3 6 9
What I want to achieve is to fill the diagonal with 0 and shift all the other values around it. So, for example, if I just use diag(mat) <- 0, that results in this:
x1 x2 x3
y1 0 4 7
y2 2 0 8
y3 3 6 0
Whereas, the result I'm looking for is something like this (where the values get wrapped around the diagonal):
x1 x2 x3
y1 0 3 5
y2 1 0 6
y3 2 4 0
I'm not worried about the values that are pushed out of the matrix (i.e., 7,8,9).
Any suggestions?
Thanks
EDIT: The upvoted solution below seems to have solved the problem.
One solution that works for your example is to first declare a matrix full of ones except on the diagonal:
M <- 1 - diag(3)
And then to replace all the ones by the desired off-diagonal values
M[M == 1] <- 1:6
M
# [,1] [,2] [,3]
# [1,] 0 3 5
# [2,] 1 0 6
# [3,] 2 4 0
A more complicated scenario (e.g. diagonal coefficients that are not 0, or an unknown number of off-diagonal elements) might need a little bit of additional work.
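For instance, a sketch for a non-zero diagonal with exactly n*(n-1) off-diagonal values (row(M) != col(M) picks out the off-diagonal cells in the same column-wise order):
n <- 3
M <- diag(9, n)                              # 9s on the diagonal instead of 0s
M[row(M) != col(M)] <- seq_len(n * (n - 1))  # fill around the diagonal column-wise
M
#      [,1] [,2] [,3]
# [1,]    9    3    5
# [2,]    1    9    6
# [3,]    2    4    9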
You may need a loop:
n <- 9
seqs <- seq_len(n)
mats <- matrix(0, nrow = 3, ncol = 3)
ind <- 0
for (i in 1:nrow(mats)) {
    for (j in 1:nrow(mats)) {
        if (i == j) {
            mats[i, j] <- 0
        } else {
            ind <- ind + 1
            mats[j, i] <- seqs[ind]
        }
    }
}
Resulting in:
> mats
[,1] [,2] [,3]
[1,] 0 3 5
[2,] 1 0 6
[3,] 2 4 0
This will work OK for your example. I'm not sure I needed both n1 and n2; they could be collapsed into one value if the matrix is always square.
# original data
ab <- c(1:9)
n1 <- 3
n2 <- 3
# You can put the 0's on the diagonal by adding a 0 before every n1-sized split
# of the data, e.g. 0,1,2,3 & 0,4,5,6 & 0,7,8,9
split_ab <- split(ab, ceiling((1:length(ab)) / n1))
update_split_ab <- lapply(split_ab, function(x) {
    c(0, x)
})
new_ab <- unlist(update_split_ab)
mat <- matrix(new_ab, nrow = n1, ncol = n2)
colnames(mat) <- paste0("x", 1:n2)
rownames(mat) <- paste0("y", 1:n1)
mat
# turn this in to a function
makeShiftedMatrix <- function(ab = 1:9, n1 = 3, n2 = 3) {
    split_ab <- split(ab, ceiling((1:length(ab)) / n1))
    update_split_ab <- lapply(split_ab, function(x) {
        c(0, x)
    })
    new_ab <- unlist(update_split_ab)
    ## values pushed past n1 * n2 (7, 8, 9 here) are dropped when the matrix is filled
    mat <- matrix(new_ab, nrow = n1, ncol = n2)
    colnames(mat) <- paste0("x", 1:n2)
    rownames(mat) <- paste0("y", 1:n1)
    return(mat)
}
# default
makeShiftedMatrix()
# to read in the original matrix and shift:
old_mat <- matrix(ab, nrow = n1, ncol = n2)
makeShiftedMatrix(ab = as.vector(old_mat))