How to use a for loop on a matrix - r

Say I have a matrix:
A <- matrix(c(2,4,3,1,5,7), nrow=3, ncol=2)
colnames(A) <- c("x", "y")
A
     x y
[1,] 2 1
[2,] 4 5
[3,] 3 7
Is there a way to access each row of the matrix using a for loop?
What I'm trying to do is total the Euclidean distance between each pair of successive points (x,y). So in this example, I would find the total distance between:
(2,1) and (4,5)
(4,5) and (3,7)
So first I would find the distance between each of the two points, ie:
(2,1) and (4,5) => (|4-2|,|5-1|) => (2,4)
(4,5) and (3,7) => (|3-4|,|7-5|) => (1,2)
Then I would turn each difference into a Euclidean distance:
(2,4) => sqrt(2^2 + 4^2) => 4.47
(1,2) => sqrt(1^2 + 2^2) => 2.24
And total the distance
4.47 + 2.24 = 6.71
I'm quite confident that if I can access each row of the matrix as a vector, I can easily code this. However, I would love to hear any better ways of doing this.
I was also looking into turning the matrix into a list of lists (ie a list of (x,y) points, where each point is a list of the x and y value), or a list of points (x,y).
I'm not very experienced in programming and I've just started using R, so sorry if I'm not making sense.

You can try the following:
for (i in 1:nrow(A)) {
  row <- A[i, ]
  # do something with the row
}
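To total the distances with this loop, here is a minimal sketch, assuming the matrix A defined in the question:
total <- 0
for (i in 1:(nrow(A) - 1)) {
  d <- A[i + 1, ] - A[i, ]          # difference between successive points
  total <- total + sqrt(sum(d^2))   # Euclidean length of that step
}
total
# [1] 6.708204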

However, I would love to hear any better ways of doing this.
R has built in functions for distance calculations such as dist, e.g.:
out <- as.matrix(dist(A))
#          1        2        3
# 1 0.000000 4.472136 6.082763
# 2 4.472136 0.000000 2.236068
# 3 6.082763 2.236068 0.000000
You can extract the off-diagonal, which holds the values you want, using:
row(out) - col(out) == 1
#        [,1]  [,2]  [,3]
# [1,]  FALSE FALSE FALSE
# [2,]   TRUE FALSE FALSE
# [3,]  FALSE  TRUE FALSE
Thus:
out[row(out) - col(out) == 1]
# [1] 4.472136 2.236068
sum(out[row(out) - col(out) == 1])
# [1] 6.708204

Related

Algorithm for finding a permutation matrix of a matrix

I see some similar questions:
Generate permutation matrix from permutation vector
https://math.stackexchange.com/questions/345166/what-is-the-name-for-a-non-square-permutation-matrix
Given elements:
elems = [1,2,3,4] # dimensions 1x4
If I have a vector:
M = [4,2,3,1] # dimensions 1x4
I know there is some permutation matrix P that I can multiply by, so that elems * P = M, which in this case would be:
P =
[
0 0 0 1
0 1 0 0
0 0 1 0
1 0 0 0
] # dimensions 4x4
# eg:
# elems * P = M
1x4 4x4 = 1x4
Now, for my question, I am interested in what it would look like in the case when M is a non-vector, non-square matrix, like:
M' = [
4 2 3 1
4 3 2 1
1 2 3 4
] # dimensions 3x4
For the same
elems' = [
1 2 3 4
1 2 3 4
1 2 3 4
] # where this is now tripled to be conformant dimensions
# dimensions 3x4
#
# meaning P is still 4x4
You can see M_prime and elems_prime in this case are still just permutations, but now multivariate, rather than just a single vector as originally.
I know I am not able to just do the following kind of thing, because the matrix is not square, and thus not invertible:
elems' * P = M'
P = elems'^-1 * M'
# eg:
# elems' * P = M'
3x4 4x4 = 3x4
When I try, in R at least, I see:
> library(MASS)   # for ginv()
> P <- ginv(elems_prime) %*% M_prime
> P
[,1] [,2] [,3] [,4]
[1,] 0.1 0.07777778 0.08888889 0.06666667
[2,] 0.2 0.15555556 0.17777778 0.13333333
[3,] 0.3 0.23333333 0.26666667 0.20000000
[4,] 0.4 0.31111111 0.35555556 0.26666667
Does this give me back M'?
> elems_prime %*% P
[,1] [,2] [,3] [,4]
[1,] 3 2.333333 2.666667 2
[2,] 3 2.333333 2.666667 2
[3,] 3 2.333333 2.666667 2
!= M' # No, does not.
So this is not right.
My questions are:
What is the right P that would correctly permute the elems' matrix
into the M' matrix?
What is the name of the algorithm to find it?
(implementation in R, Haskell, or pseudocode is great)
Is there a way to restrict the values of P to be integers, preferably 0 or 1?
For reproducibility in R:
> dput(elems_prime)
structure(c(1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4), .Dim = 3:4)
> dput(M_prime)
structure(c(4, 4, 1, 2, 3, 2, 3, 2, 3, 1, 1, 4), .Dim = 3:4)
Notice that the column space of M' is of higher dimension than the column space of elems'. This implies that there does not exist a linear mapping from elems' to M', because a linear mapping cannot increase the row or column rank of a matrix (it is useful to think of this as a transformation of basis).
It follows that any M' generated by elems' * P can have rank at most 1, leaving only the conventional permutation matrices as candidates for P.
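A quick sanity check of this rank argument in R, using the dput objects above (qr()$rank reports the numerical rank):
qr(elems_prime)$rank   # 1: every row of elems' is the same vector
qr(M_prime)$rank       # 3: M' spans more dimensions, so no single P can work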
It is an entirely different question if we look at going from M' back to elems, and this asymmetry is also noteworthy.
When M is not a vector, this is not possible.
Here is why. In general, if we multiply an n x m matrix by an m x p matrix, we get an n x p matrix. Here elems is a vector, i.e. a 1x4 matrix, so elems * P has to be a 1x? matrix of some sort. By making P wider you can make M longer, but you'd have to change elems itself to make M taller.
Incidentally, in linear algebra it is standard to flip vectors to be columns and put the matrices on their left. The reason is that the matrix represents a linear function, and this puts the matrix in the same place where the linear function goes, which is very convenient when moving between functional notation and matrix notation. Also, if you've got to write a square matrix anyway, it takes less room on the page to write a vertical vector on the right than a horizontal one on the left.

Finding all solutions of a non-square linear system with infinitely many solutions

In this question, a solution was found for computing a particular solution to a non-square linear system that has infinitely many solutions. This leads to another question:
How to find all the solutions for a non-square linear system with infinitely many solutions, with R? (see below for a possible description of the infinite set of solutions)
Example: the linear system
x+y+z=1
x-y-2z=2
is equivalent to A X = B with:
A=matrix(c(1,1,1,1,-1,-2),2,3,T)
B=matrix(c(1,2),2,1,T)
A
[,1] [,2] [,3]
[1,] 1 1 1
[2,] 1 -1 -2
B
[,1]
[1,] 1
[2,] 2
We can describe the infinite set of solutions with:
x = 3/2 + (1/2) z
y = -1/2 + (-3/2) z
z in R
Thus, R could describe the set of solutions this way:
> solve2(A,B)
$principal
[1] 1 2 # this means that x and y will be described
$free
[1] 3 # this means that the 3rd variable (i.e. z) is free in the set of real numbers
$P
[1] 1.5 -0.5
$Q
[1] 0.5 -1.5
This means that every solution can be created with:
z = 236782 # any value would be ok
solve2(A,B)$P + z * solve2(A,B)$Q # this gives x and y
About the maths: such a decomposition always exists when the linear system has infinitely many solutions; that part is fine. The question is: is there something to do this in R?
You can solve equations like these using the generalized inverse of A.
library(MASS)
ginv(A) %*% B
# 1.2857143
# 0.1428571
#-0.4285714
A %*% ginv(A) %*% B
# 1
# 2
So, with help from @Bhas:
gen_soln <- function(vec) {
  G <- ginv(A)
  W <- diag(3) - G %*% A   # W projects onto the kernel (null space) of A
  G %*% B + W %*% vec      # particular solution plus a kernel component
}
You can now find many solutions by providing a vector of length 3 to the `gen_soln` function. For example,
one_from_inf <- gen_soln(1:3)
one_from_inf
#[1,] 1.35714286
#[2,] -0.07142857
#[3,] -0.2857142
# Test the solution.
A %*% one_from_inf
# [,1]
#[1,] 1
#[2,] 2
# Using random number generator
A %*% gen_soln(rnorm(3))
# [,1]
#[1,] 1
#[2,] 2
The general solution to
A*x = b
is
x = x0 + z
where x0 is any solution and z is in the kernel of A
As pointed out above you can find a particular solution x0 by using the generalised inverse. You can also use the SVD to find a basis for the kernel of A:
A = U*S*V'
where U and V are orthogonal and S diagonal, with, say, the last k entries on the diagonal 0 (and the others non-zero).
It follows that the last k columns of V form a basis for the kernel of A, and if we call these z1, ..., zk then the solutions of the original equation
are
x = x0 + c1*z1 + .. ck*zk
for any real c1..ck
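Putting that recipe into R, here is a sketch (not a ready-made solve2; the names are mine) using A and B as defined in the question:
library(MASS)                                 # for ginv
sv  <- svd(A, nv = ncol(A))                   # request the full 3x3 V
tol <- max(dim(A)) * max(sv$d) * .Machine$double.eps
r   <- sum(sv$d > tol)                        # numerical rank, here 2
Z   <- sv$v[, (r + 1):ncol(A), drop = FALSE]  # columns z1..zk: a kernel basis
x0  <- ginv(A) %*% B                          # a particular solution
x   <- x0 + Z %*% 236782                      # any coefficients give another solution
A %*% x - B                                   # ~ 0 up to rounding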

Efficient Way to Convert Vector of Distances to Distance Object in R (ideally without creating a full distance matrix)

I have a vector of distances, which I get from some other procedure, and want to convert it to a dist object in R.
Below I give an example of what such a vector looks like: distVector is computed in the same way said other procedure computes the distance vector. Ideally, I would like to transform this vector into a distance matrix (dist object) without wasting resources.
I think I could just transform it to a matrix by copying it into the upper and lower triangular parts, setting the diagonal to 0, and dealing with the fact that it is sort of upside down compared to the dist object structure (compare the outputs below). Then again, first creating a full matrix and then (probably?) reducing it back to a vector inside the dist object seems wasteful to me. Is there a better way?
Example code (note: I cannot change how distVector is computed):
rawData <- matrix(c(1,1,1, 1.1,1,1, 1,1,1.2, 2,2,2, 2.2,2,2, 2,2.2,2.2, 3,3,3, 3.4,3,3), ncol=3, byrow=TRUE)
distVector <- numeric(0)
for (i in 1:(dim(rawData)[1] - 1)) {   # stop at n-1 so the inner index stays in range
  for (j in (i+1):dim(rawData)[1]) {
    a <- rawData[i,] - rawData[j,]
    distVector <- c(distVector, sqrt(a %*% a))
  }
}
print(distVector)
print(dist(rawData))
Output (compare distVector to the output of the dist function; the layout looks upside down):
> print(distVector)
[1] 0.1000000 0.2000000 1.7320508 1.8547237 1.9697716 3.4641016 3.7094474 0.2236068 1.6763055
[10] 1.7916473 1.9209373 3.4073450 3.6455452 1.6248077 1.7549929 1.8547237 3.3526109 3.6055513
[19] 0.2000000 0.2828427 1.7320508 1.9899749 0.3464102 1.6248077 1.8547237 1.5099669 1.8000000
[28] 0.4000000
> print(dist(rawData))
1 2 3 4 5 6 7
2 0.1000000
3 0.2000000 0.2236068
4 1.7320508 1.6763055 1.6248077
5 1.8547237 1.7916473 1.7549929 0.2000000
6 1.9697716 1.9209373 1.8547237 0.2828427 0.3464102
7 3.4641016 3.4073450 3.3526109 1.7320508 1.6248077 1.5099669
8 3.7094474 3.6455452 3.6055513 1.9899749 1.8547237 1.8000000 0.4000000
Many thanks,
Thomas.
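One possibility, sketched here rather than taken from the thread: comparing the two outputs above, the i < j double loop appears to emit the distances in exactly the order dist() stores its lower triangle (column by column), so you may be able to skip the full matrix and just attach the dist attributes to the vector directly:
distObj <- distVector
attr(distObj, "Size")   <- nrow(rawData)   # number of observations
attr(distObj, "Diag")   <- FALSE
attr(distObj, "Upper")  <- FALSE
attr(distObj, "method") <- "euclidean"
class(distObj) <- "dist"
print(distObj)   # should match print(dist(rawData))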

R lpsolve how to define constraints travelling salesman

I want to code the travelling salesman problem in R. I am going to begin with 3 cities at first, then I will expand to more cities. The distance matrix below gives the distances between the 3 cities. The objective (if someone doesn't know) is that a salesman will start from a city and visit the 2 other cities such that he travels the minimum distance.
In the case below he should start either from ny or LA, then travel to chicago, and then to the remaining city. I need help to define A_ (my constraint matrix).
My decision variables will be of the same dimension as the distances matrix. It will be a 0/1 matrix where 1 represents travel from the city in the row name to the city in the column name. For instance, if the salesman travels from ny to chicago, the 2nd element in row 1 will be 1. My column and row names are ny, chicago and LA.
By looking at the solution of the problem I concluded that my constraints will be:
Row sums have to be less than 1 as he cannot leave from same city twice
Column sums have to be less than 1 as he cannot enter the same city twice
total sum of matrix elements has to be 2 as the salesman will be visiting 2 cities and leaving from 2 cities.
I need help to define A_ (my constraint matrix). How should I tie in my decision variables into constraints?
ny=c(999,9,20)
chicago=c(9,999,11)
LA=c(20,11,999)
distances=cbind(ny,chicago,LA)
dv = matrix(c("a11","a12","a13","a21","a22","a23","a31","a32","a33"), nrow=3, ncol=3, byrow=TRUE)  # byrow so the labels match their positions
c_ = c(distances[1,], distances[2,], distances[3,])
signs = rep('<=', 7)
b = c(1,1,1,1,1,1,2)
res = lpSolve::lp('min', c_, A_, signs, b, all.bin = TRUE)  # A_ is what I need help defining
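For the mechanical part, here is one sketch of how A_ could encode the three constraints above, given that c_ orders the variables row by row (a11, a12, a13, a21, ...); the answer below explains why these constraints alone are not sufficient:
n <- 3
rowIdx <- rep(1:n, each = n)    # the city each variable leaves from
colIdx <- rep(1:n, times = n)   # the city each variable arrives at
A_ <- rbind(
  t(sapply(1:n, function(i) as.numeric(rowIdx == i))),  # 3 row-sum constraints
  t(sapply(1:n, function(j) as.numeric(colIdx == j))),  # 3 column-sum constraints
  rep(1, n * n)                                         # total number of trips
)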
There are some problems with your solution. The first is that the constraints you have in mind don't guarantee that all the cities will be visited -- for example, the path could just go from NY to LA and then back. This could be solved fairly easily, for example, by requiring that each row and column sum to exactly one rather than at most 1 (although in that case you'd be finding a traveling salesman tour rather than just a path).
The bigger problem is that, even if we fix this problem, your constraints wouldn't guarantee that the selected vertices actually form one cycle through the graph, rather than multiple smaller cycles. And I don't think that your representation of the problem can be made to address this issue.
Here is an implementation of Travelling Salesman using LP. The solution space is of size n^3, where n is the number of rows in the distance matrix. This represents n consecutive copies of the nxn matrix, each of which represents the edge traversed at time t for 1<=t<=n. The constraints guarantee that
At most one edge is traversed each step
Every vertex is visited exactly once
The startpoint of the i'th edge traversed is the same as the endpoint of the i-1'st
This avoids the problem of multiple small cycles. For example, with four vertices, the sequence (12)(21)(34)(43) would not be a valid solution because the endpoint of the second edge (21) does not match the start point of the third (34).
library(lpSolve)

tspsolve <- function(x){
  diag(x) <- 1e10   # forbid self-loops by making them expensive
  ## define some basic constants
  nx <- nrow(x)
  lx <- length(x)
  objective <- matrix(x, lx, nx)
  rowNum <- rep(row(x), nx)
  colNum <- rep(col(x), nx)
  stepNum <- rep(1:nx, each = lx)
  ## these constraints ensure that at most one edge is traversed each step
  onePerStep.con <- do.call(cbind, lapply(1:nx, function(i) 1*(stepNum == i)))
  onePerRow.rhs <- rep(1, nx)
  ## these constraints ensure that each vertex is visited exactly once
  onceEach.con <- do.call(cbind, lapply(1:nx, function(i) 1*(rowNum == i)))
  onceEach.rhs <- rep(1, nx)
  ## these constraints ensure that the start point of the i'th edge
  ## is equal to the endpoint of the (i-1)'st edge
  edge.con <- c()
  for(s in 1:nx){
    s1 <- (s %% nx) + 1
    stepMask <- (stepNum == s) * 1
    nextStepMask <- -(stepNum == s1)
    for(i in 1:nx){
      edge.con <- cbind(edge.con, stepMask * (colNum == i) + nextStepMask * (rowNum == i))
    }
  }
  edge.rhs <- rep(0, ncol(edge.con))
  ## now bind all the constraints together, along with right-hand sides and signs
  constraints <- cbind(onePerStep.con, onceEach.con, edge.con)
  rhs <- c(onePerRow.rhs, onceEach.rhs, edge.rhs)
  signs <- rep("==", length(rhs))
  ## call the lp solver (constraints were built as columns, hence no transpose)
  res <- lp("min", objective, constraints, signs, rhs,
            transpose.constraints = FALSE, all.bin = TRUE)
  ## print the output of lp
  print(res)
  ## return the results as a sequence of vertices, and the score = total cycle length
  list(cycle = colNum[res$solution == 1], score = res$objval)
}
Here is an example:
set.seed(123)
x <- matrix(runif(16), 4, 4)
x
## [,1] [,2] [,3] [,4]
## [1,] 0.2875775 0.9404673 0.5514350 0.6775706
## [2,] 0.7883051 0.0455565 0.4566147 0.5726334
## [3,] 0.4089769 0.5281055 0.9568333 0.1029247
## [4,] 0.8830174 0.8924190 0.4533342 0.8998250
tspsolve(x)
## Success: the objective function is 2.335084
## $cycle
## [1] 1 3 4 2
##
## $score
## [1] 2.335084
We can check the correctness of this answer by using a primitive brute force search:
tspscore <- function(x, solution){
  sum(sapply(1:nrow(x), function(i) x[solution[i], solution[(i %% nrow(x)) + 1]]))
}
tspbrute <- function(x, trials){
  score <- Inf
  cycle <- c()
  nx <- nrow(x)
  for(i in 1:trials){
    temp <- sample(nx)
    tempscore <- tspscore(x, temp)
    if(tempscore < score){
      score <- tempscore
      cycle <- temp
    }
  }
  list(cycle=cycle, score=score)
}
tspbrute(x,100)
## $cycle
## [1] 3 4 2 1
##
## $score
## [1] 2.335084
Note that, even though these answers are nominally different, they represent the same cycle.
For larger graphs, though, the brute force approach doesn't work:
> set.seed(123)
> x<-matrix(runif(100),10,10)
> tspsolve(x)
Success: the objective function is 1.296656
$cycle
[1] 1 10 3 9 5 4 8 2 7 6
$score
[1] 1.296656
> tspbrute(x,1000)
$cycle
[1] 1 5 4 8 10 9 2 7 6 3
$score
[1] 2.104487
This implementation is pretty efficient for small matrices, but, as expected, it starts to deteriorate severely as they get larger. At about 15x15 it starts slowing down quite a bit:
timetsp <- function(x, seed=123){
  set.seed(seed)
  m <- matrix(runif(x*x), x, x)
  gc()
  system.time(tspsolve(m))[3]
}
sapply(6:16,timetsp)
## elapsed elapsed elapsed elapsed elapsed elapsed elapsed elapsed elapsed elapsed
## 0.011 0.010 0.018 0.153 0.058 0.252 0.984 0.404 1.984 20.003
## elapsed
## 5.565
You can use the gaoptim package to solve permutation/real valued problems - it's pure R, so it's not so fast:
Euro tour problem (see ?optim)
eurodistmat = as.matrix(eurodist)
# Fitness function (we'll perform a maximization, so invert it)
distance = function(sq)
{
  sq = c(sq, sq[1])
  sq2 <- embed(sq, 2)
  1/sum(eurodistmat[cbind(sq2[,2], sq2[,1])])
}
loc = -cmdscale(eurodist, add = TRUE)$points
x = loc[, 1]
y = loc[, 2]
n = nrow(eurodistmat)
set.seed(1)
# solving code
require(gaoptim)
ga2 = GAPerm(distance, n, popSize = 100, mutRate = 0.3)
ga2$evolve(200)
best = ga2$bestIndividual()
# just transform and plot the results
best = c(best, best[1])
best.dist = 1/max(ga2$bestFit())
res = loc[best, ]
i = 1:n
plot(x, y, type = 'n', axes = FALSE, ylab = '', xlab = '')
title ('Euro tour: TSP with 21 cities')
mtext(paste('Best distance found:', best.dist))
arrows(res[i, 1], res[i, 2], res[i + 1, 1], res[i + 1, 2], col = 'red', angle = 10)
text(x, y, labels(eurodist), cex = 0.8, col = 'gray20')

Mystified by qr.Q(): what is an orthonormal matrix in "compact" form?

R has a qr() function, which performs QR decomposition using either LINPACK or LAPACK (in my experience, the latter is 5% faster). The main object returned is a matrix "qr" that contains the upper triangular matrix R in its upper triangle (i.e. R = qr[upper.tri(qr)]). So far so good. The lower triangular part of qr contains Q "in compact form". One can extract Q from the qr decomposition by using qr.Q(). I would like to find the inverse of qr.Q(). In other words, I do have Q and R, and would like to put them back into a "qr" object. R is trivial but Q is not. The goal is to apply qr.solve() to it, which is much faster than solve() on large systems.
Introduction
R uses the LINPACK dqrdc routine, by default, or the LAPACK DGEQP3 routine, when specified, for computing the QR decomposition. Both routines compute the decomposition using Householder reflections. An m x n matrix A is decomposed into an m x n economy-size orthogonal matrix (Q) and an n x n upper triangular matrix (R) as A = QR, where Q can be computed by the product of t Householder reflection matrices, with t being the lesser of m-1 and n: Q = H1H2...Ht.
Each reflection matrix Hi can be represented by a length-(m-i+1) vector. For example, H1 requires a length-m vector for compact storage. All but one entry of this vector is placed in the first column of the lower triangle of the input matrix (the diagonal is used by the R factor). Therefore, each reflection needs one more scalar of storage, and this is provided by an auxiliary vector (called $qraux in the result from R's qr).
The compact representation used is different between the LINPACK and LAPACK routines.
The LINPACK Way
A Householder reflection is computed as Hi = I - viviT/pi, where I is the identity matrix, pi is the corresponding entry in $qraux, and vi is as follows:
vi[1..i-1] = 0,
vi[i] = pi,
vi[i+1..m] = A[i+1..m, i] (i.e., a column of the lower triangle of A after calling qr)
LINPACK Example
Let's work through the example from the QR decomposition article at Wikipedia in R.
The matrix being decomposed is
> A <- matrix(c(12, 6, -4, -51, 167, 24, 4, -68, -41), nrow=3)
> A
[,1] [,2] [,3]
[1,] 12 -51 4
[2,] 6 167 -68
[3,] -4 24 -41
We do the decomposition, and the most relevant portions of the result are shown below:
> Aqr = qr(A)
> Aqr
$qr
[,1] [,2] [,3]
[1,] -14.0000000 -21.0000000 14
[2,] 0.4285714 -175.0000000 70
[3,] -0.2857143 0.1107692 -35
[snip...]
$qraux
[1] 1.857143 1.993846 35.000000
[snip...]
This decomposition was done (under the covers) by computing two Householder reflections and multiplying them by A to get R. We will now recreate the reflections from the information in $qr.
> p = Aqr$qraux # for convenience
> v1 <- matrix(c(p[1], Aqr$qr[2:3,1]))
> v1
[,1]
[1,] 1.8571429
[2,] 0.4285714
[3,] -0.2857143
> v2 <- matrix(c(0, p[2], Aqr$qr[3,2]))
> v2
[,1]
[1,] 0.0000000
[2,] 1.9938462
[3,] 0.1107692
> I = diag(3) # identity matrix
> H1 = I - v1 %*% t(v1)/p[1] # I - v1*v1^T/p[1]
> H2 = I - v2 %*% t(v2)/p[2] # I - v2*v2^T/p[2]
> Q = H1 %*% H2
> Q
[,1] [,2] [,3]
[1,] -0.8571429 0.3942857 0.33142857
[2,] -0.4285714 -0.9028571 -0.03428571
[3,] 0.2857143 -0.1714286 0.94285714
Now let's verify the Q computed above is correct:
> qr.Q(Aqr)
[,1] [,2] [,3]
[1,] -0.8571429 0.3942857 0.33142857
[2,] -0.4285714 -0.9028571 -0.03428571
[3,] 0.2857143 -0.1714286 0.94285714
Looks good! We can also verify QR is equal to A.
> R = qr.R(Aqr) # extract R from Aqr$qr
> Q %*% R
[,1] [,2] [,3]
[1,] 12 -51 4
[2,] 6 167 -68
[3,] -4 24 -41
The LAPACK Way
A Householder reflection is computed as Hi = I - piviviT, where I is the identity matrix, pi is the corresponding entry in $qraux, and vi is as follows:
vi[1..i-1] = 0,
vi[i] = 1,
vi[i+1..m] = A[i+1..m, i] (i.e., a column of the lower triangle of A after calling qr)
There is another twist when using the LAPACK routine in R: column pivoting is used, so the decomposition is solving a different, related problem: AP = QR, where P is a permutation matrix.
LAPACK Example
This section does the same example as before.
> A <- matrix(c(12, 6, -4, -51, 167, 24, 4, -68, -41), nrow=3)
> Bqr = qr(A, LAPACK=TRUE)
> Bqr
$qr
[,1] [,2] [,3]
[1,] 176.2554964 -71.1694118 1.668033
[2,] -0.7348557 35.4388886 -2.180855
[3,] -0.1056080 0.6859203 -13.728129
[snip...]
$qraux
[1] 1.289353 1.360094 0.000000
$pivot
[1] 2 3 1
attr(,"useLAPACK")
[1] TRUE
[snip...]
Notice the $pivot field; we will come back to that. Now we generate Q from the information in Bqr.
> p = Bqr$qraux # for convenience
> v1 = matrix(c(1, Bqr$qr[2:3,1]))
> v1
[,1]
[1,] 1.0000000
[2,] -0.7348557
[3,] -0.1056080
> v2 = matrix(c(0, 1, Bqr$qr[3,2]))
> v2
[,1]
[1,] 0.0000000
[2,] 1.0000000
[3,] 0.6859203
> H1 = I - p[1]*v1 %*% t(v1) # I - p[1]*v1*v1^T
> H2 = I - p[2]*v2 %*% t(v2) # I - p[2]*v2*v2^T
> Q = H1 %*% H2
> Q
[,1] [,2] [,3]
[1,] -0.2893527 -0.46821615 -0.8348944
[2,] 0.9474882 -0.01602261 -0.3193891
[3,] 0.1361660 -0.88346868 0.4482655
Once again, the Q computed above agrees with the R-provided Q.
> qr.Q(Bqr)
[,1] [,2] [,3]
[1,] -0.2893527 -0.46821615 -0.8348944
[2,] 0.9474882 -0.01602261 -0.3193891
[3,] 0.1361660 -0.88346868 0.4482655
Finally, let's compute QR.
> R = qr.R(Bqr)
> Q %*% R
[,1] [,2] [,3]
[1,] -51 4 12
[2,] 167 -68 6
[3,] 24 -41 -4
Notice the difference? QR is A with its columns permuted given the order in Bqr$pivot above.
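As a quick check, undoing the pivot should recover A itself:
> (Q %*% R)[, order(Bqr$pivot)]   # gives A back, up to rounding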
I have researched this same problem and I don't think it is possible. Basically, the OP's question is whether, having the explicitly computed Q, one can recover H1 H2 ... Ht. I do not think this is possible without computing the QR from scratch, but I would also be very interested to know whether such a solution exists.
I have a similar issue to the OP's, but in a different context: my iterative algorithm needs to mutate the matrix A by adding columns and/or rows. The first time, the QR is computed using DGEQRF, and is thus in the compact LAPACK format. After the matrix A is mutated, e.g. with new rows, I can quickly build a new set of reflectors or rotators that annihilate the non-zero elements of the lowest diagonal of my existing R and build a new R, but now I have a set of H1_old H2_old ... Hn_old and H1_new H2_new ... Hn_new (and similarly tau's) which can't be mixed into a single compact QR representation. The two possibilities I have (and maybe the OP has the same two) are:
Always maintain Q and R explicitly separated whether when computed the first time or after every update at the cost of extra flops but keeping the required memory well bounded.
Stick to the compact LAPACK format, but then every time a new update comes in, keep a list of all these mini sets of update reflectors. At the point of solving the system one would do a big Q'*c, i.e. H1_u3*H2_u3*...*Hn_u3*H1_u2*H2_u2*...*Hn_u2*H1_u1*H2_u1*...*Hn_u1*H1*H2*...*Hn*c, where ui is the QR update number. This is potentially a lot of multiplications to do and memory to keep track of, but it is definitely the fastest way.
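As an aside on option 1: once Q and R are kept explicit, each solve is still cheap, since only a triangular system remains. This sketch (my own, assuming a square full-rank A = QR) computes the same x that qr.solve would, mathematically:
solve_qr_explicit <- function(Q, R, b) {
  backsolve(R, crossprod(Q, b))   # solve R x = Q'b; backsolve exploits triangular R
}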
The long answer from David basically explains what the compact QR format is, but not how to get to this compact QR format with the explicitly computed Q and R as input.
