We are upgrading the Spring Cloud Stream library, converting from the imperative style to the functional style.
@StreamListener(target = Channel.ALPHA, condition = "headers['type']=='ALPHA1'" ...)
handleAlpha1 {...}
@StreamListener(target = Channel.ALPHA, condition = "headers['type']=='ALPHA2'" ...)
handleAlpha2 {...}
...
@StreamListener(target = Channel.ALPHA, condition = "headers['type']=='ALPHAN'" ...)
handleAlphaN {...}
Similarly, there are listeners for the other channels:
@StreamListener(target = Channel.BETA, condition = "headers['type']=='BETA1'" ...)
handleBeta1 {...}
@StreamListener(target = Channel.BETA, condition = "headers['type']=='BETA2'" ...)
handleBeta2 {...}
...
@StreamListener(target = Channel.BETA, condition = "headers['type']=='BETAN'" ...)
handleBetaN {...}
Converting this to the functional style would create N functions for the Alpha channel and N functions for the Beta channel, which would make the properties file huge and the header-based function resolution complicated.
Also, which function should be referenced in the following binding properties?
spring.cloud.stream.bindings.inbound-alpha-events.group=alpha-channel-group
spring.cloud.stream.bindings.inbound-alpha-events.destination=${spring.kafka.alpha.topic}
spring.cloud.stream.bindings.inbound-alpha-events.consumer.header-mode=headers
spring.cloud.stream.bindings.inbound-alpha-events.content-type=application/json
spring.cloud.stream.kafka.bindings.inbound-alpha-events.consumer.autoCommitOffset=false
In the Help documentation of Scilab 6.0.2, I can read the following on the Overloading entry, regarding the last operation code "iext" shown in the entry's table:
"The 6 char code may be used for some complex insertion algorithm like x.b(2) = 33 where b field is not defined in the structure x. The insertion is automatically decomposed into temp = x.b; temp(2) = 33; x.b = temp. The 6 char code is used for the first step of this algorithm. The 6 overloading function is very similar to the e's one."
But I can't find a complete example on how to use this "char 6 code" to overload a function. I'm trying to use it, without success. Does anyone have an example on how to do this?
The code below creates a regular "mlist" as an example, which needs overloading functions:
A = rand(5,3)
names = ["colA" "colB" "colC"]
units = ["ft" "in" "lb"]
M = mlist(["Mlog" "names" "units" names],names,units,A(:,1),A(:,2),A(:,3))
Following are the overload functions:
//define display
function %Mlog_p(M)
    n = size(M.names, "*")
    formatStr = strcat(repmat("%10s ", 1, n)) + "\n"
    formatNum = strcat(repmat("%0.10f ", 1, n)) + "\n"
    mprintf(formatStr, M.names)
    mprintf(formatStr, M.units)
    disp([M(M.names(1)), M(M.names(2)), M(M.names(3))])
endfunction
//define extraction operation
function [Mat] = %Mlog_e(varargin)
    M = varargin($)
    cols = [1:size(M.names, "*")]   // This will also work
    cols = cols(varargin($-1))      // when varargin($-1) = 1:1:$
    Mat = []
    if length(varargin) == 3 then
        for i = M.names(cols)
            Mat = [Mat M(i)(varargin(1))]
        end
    else
        for i = 1:size(M.names(cols), "*")
            Mat(i).name = M.names(cols(i))
            Mat(i).unit = M.units(cols(i))
            Mat(i).data = M(:, cols(i))
        end
    end
endfunction
//define insertion operations (a regular matrix into a Mlog matrix)
function ML = %s_i_Mlog(i, j, V, M)
    names = M.names
    units = M.units
    A = M(:,:)   // uses function above
    A(i,j) = V
    ML = mlist(["Mlog" "names" "units" names], names, units, A(:,1), A(:,2), A(:,3))
endfunction
//insertion operation with structures (the subject of the question)
function temp = %Mlog_6(j, M)
    temp = M(j)   // uses function %Mlog_e
endfunction
function M = %st_i_Mlog(j, st, M)
    A = M(:,:)             // uses function %Mlog_e
    M.names(j) = st.name   // uses function above
    M.units(j) = st.unit   // uses function above
    A(:,j) = st.data       // uses function above
    names = M.names
    units = M.units
    M = mlist(["Mlog" "names" "units" names], names, units, A(:,1), A(:,2), A(:,3))
endfunction
The first overload (the display function) shows the matrix as the following table:
--> M
M =
colA colB colC
ft in lb
0.4720517 0.6719395 0.5628382
0.0623731 0.1360619 0.5531093
0.0854401 0.2119744 0.0768984
0.0134564 0.4015942 0.5360758
0.3543002 0.4036219 0.0900212
The next overloads (extraction and insertion) allow the table to be accessed as a simple matrix, M(i,j).
The extraction function also allows M to be accessed by column, which returns a structure, for instance:
--> M(2)
ans =
name: "colB"
unit: "in"
data: [5x1 constant]
The last two functions are the overloads mentioned in the question. They allow the column metadata to be changed in a structure form.
--> M(2).name = "length"
M =
colA length colC
ft in lb
0.4720517 0.6719395 0.5628382
0.0623731 0.1360619 0.5531093
0.0854401 0.2119744 0.0768984
0.0134564 0.4015942 0.5360758
0.3543002 0.4036219 0.0900212
I'm using the library bigchess in R to read in a 3GB PGN file, see source here: https://github.com/rosawojciech/bigchess
Reading it in like so:
df <- read.pgn(paste0(path, file),
               add.tags = c("UTCDate", "UTCTime", "WhiteElo", "BlackElo",
                            "WhiteRatingDiff", "BlackRatingDiff", "WhiteTitle",
                            "BlackTitle", "TimeControl", "Termination"),
               n.moves = T, extract.moves = -1, stat.moves = T,
               big.mode = F, quiet = F, ignore.other.games = F)
I get the following error:
"Error in paste(subset(r2, tmp1 == "Movetext", select = c(tmp2))[, 1],
: result would exceed 2^31-1 bytes"
Based on internet searches, this happens because R cannot store a character vector greater than 2 GB in memory. Is there a way to override this limit, or otherwise deal with the issue?
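One possible way to deal with it, sketched below, is to split the PGN into smaller files on game boundaries, read each piece with read.pgn separately, and combine the results so that no single call has to build a string near the limit. The chunk size and the "[Event " boundary heuristic are assumptions, and the sketch still needs enough RAM to hold the raw lines.

# Sketch of a chunked read (untested on a 3 GB file): split on game boundaries,
# call read.pgn on each temporary file, then rbind the per-chunk data frames.
read_pgn_chunked <- function(pgn_path, games_per_chunk = 100000, ...) {
  lines  <- readLines(pgn_path)            # still holds the raw lines in RAM
  starts <- grep("^\\[Event ", lines)      # each game starts with an [Event "..."] tag
  breaks <- starts[seq(1, length(starts), by = games_per_chunk)]
  bounds <- c(breaks, length(lines) + 1)
  chunks <- vector("list", length(breaks))
  for (k in seq_along(breaks)) {
    tmp <- tempfile(fileext = ".pgn")
    writeLines(lines[bounds[k]:(bounds[k + 1] - 1)], tmp)
    chunks[[k]] <- read.pgn(tmp, ...)      # pass the same add.tags etc. as before
    unlink(tmp)
  }
  do.call(rbind, chunks)
}

For rbind to succeed, each chunk must return the same columns, so passing the same add.tags through ... matters.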
I want to do a radiometric correction of a landsat image using:
radiocorr(x, gain, offset, Grescale, Brescale, sunelev, satzenith, edist, Esun,
Lhaze, method = "apparentreflectance")
I performed the correction for each band, as follows:
B1 <- readGDAL("_X20060509_B_1.tif")
B1.ar <- radiocorr(x = B1, Grescale = 0.76583, Brescale = -2.28583, sunelev = 43.99853366,
                   satzenith = 0, edist = 1.0095786, Esun = 1983,
                   method = "apparentreflectance")
writeGDAL(B1.ar, "C:/Users/Documents/ Reflectance/B1.tif", drivername = "GTiff")
How can I make one function to automatically perform the correction to the six bands?
I tried with this function:
atmcor <- function(img, i) {
  x <- img[[i]]
  Grescale <- gain[i, 2]
  Brescale <- bias[i, 2]
  sunelev <- sunelevation[i, 2]
  satzenith <- 0
  edist <- edistance[i, 2]
  Esun <- Esun[1, 2]
  method <- "apparentreflectance"
  B.ar <- radiocorr(x, Grescale, Brescale, sunelev, satzenith, edist, Esun, method)
  return(B.ar)
}
ATMCOR <- atmcor(landsat_stack, 1)
But, I got this error:
(Error in array(x, c(length(x), 1L), if (!is.null(names(x))) list(names(x), :
length of 'dimnames' [1] not equal to array extent)
If you want to do the radiometric calibration for all bands in a single step, you also need to load the metadata file.
There are several ways to do this, but the radCor() function from the RStoolbox package can easily solve your problem:
radCor(img, metaData, method = "apref", bandSet = "full", hazeValues,
hazeBands, atmosphere, darkProp = 0.01, clamp = TRUE, verbose)
When you set bandSet to "full", all bands in the solar region will be processed.
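For example, the whole workflow with RStoolbox would be roughly the following; this is only a sketch, and the MTL file name is a placeholder:

library(RStoolbox)

# Read the Landsat metadata (MTL) file and stack the bands it lists.
metaData <- readMeta("LT05_L1TP_MTL.txt")   # placeholder file name
lsat     <- stackMeta(metaData)

# Apparent-reflectance correction for all bands in one call.
lsat_ref <- radCor(lsat, metaData = metaData, method = "apref", bandSet = "full")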
I am using the fda package, in particular the function fRegress. This function calls another function, eigchk, which checks whether the coefficient matrix is singular.
Here is the function as the package authors (J. O. Ramsay, Giles Hooker, and Spencer Graves) wrote it:
eigchk <- function(Cmat) {
  # check Cmat for singularity
  eigval <- eigen(Cmat)$values
  ncoef <- length(eigval)
  if (eigval[ncoef] < 0) {
    neig <- min(length(eigval), 10)
    cat("\nSmallest eigenvalues:\n")
    print(eigval[(ncoef - neig + 1):ncoef])
    cat("\nLargest eigenvalues:\n")
    print(eigval[1:neig])
    stop("Negative eigenvalue of coefficient matrix.")
  }
  if (eigval[ncoef] == 0) stop("Zero eigenvalue of coefficient matrix.")
  logcondition <- log10(eigval[1]) - log10(eigval[ncoef])
  if (logcondition > 12) {
    warning("Near singularity in coefficient matrix.")
    cat(paste("\nLog10 Eigenvalues range from\n",
              log10(eigval[ncoef]), " to ", log10(eigval[1]), "\n"))
  }
}
As you can see, the last if condition checks whether logcondition is greater than 12, issues a warning, and then prints the range of the eigenvalues.
The following code implements the use of regularization with a roughness penalty. The code is taken from the book "Functional Data Analysis with R and MATLAB".
annualprec = log10(apply(daily$precav,2,sum))
tempbasis =create.fourier.basis(c(0,365),65)
tempSmooth=smooth.basis(day.5,daily$tempav,tempbasis)
tempfd =tempSmooth$fd
templist = vector("list",2)
templist[[1]] = rep(1,35)
templist[[2]] = tempfd
conbasis = create.constant.basis(c(0,365))
betalist = vector("list",2)
betalist[[1]] = conbasis
SSE = sum((annualprec - mean(annualprec))^2)
Lcoef = c(0,(2*pi/365)^2,0)
harmaccelLfd = vec2Lfd(Lcoef, c(0,365))
betabasis = create.fourier.basis(c(0, 365), 35)
lambda = 10^12.5
betafdPar = fdPar(betabasis, harmaccelLfd, lambda)
betalist[[2]] = betafdPar
annPrecTemp = fRegress(annualprec, templist, betalist)
betaestlist2 = annPrecTemp$betaestlist
annualprechat2 = annPrecTemp$yhatfdobj
SSE1.2 = sum((annualprec-annualprechat2)^2)
RSQ2 = (SSE - SSE1.2)/SSE
Fratio2 = ((SSE-SSE1.2)/3.7)/(SSE1.2/30.3)
resid = annualprec - annualprechat2
SigmaE. = sum(resid^2)/(35-annPrecTemp$df)
SigmaE = SigmaE.*diag(rep(1,35))
y2cMap = tempSmooth$y2cMap
stderrList = fRegress.stderr(annPrecTemp, y2cMap, SigmaE)
betafdPar = betaestlist2[[2]]
betafd = betafdPar$fd
betastderrList = stderrList$betastderrlist
betastderrfd = betastderrList[[2]]
As the penalty factor, the authors use a certain lambda.
The following code implements the search for an appropriate lambda.
loglam = seq(5, 15, 0.5)
nlam = length(loglam)
SSE.CV = matrix(0, nlam, 1)
for (ilam in 1:nlam) {
  lambda = 10^loglam[ilam]
  betalisti = betalist
  betafdPar2 = betalisti[[2]]
  betafdPar2$lambda = lambda
  betalisti[[2]] = betafdPar2
  fRegi = fRegress.CV(annualprec, templist, betalisti)
  SSE.CV[ilam] = fRegi$SSE.CV
}
By changing the values of loglam and using cross-validation, I am supposed to acquire the best lambda. Yet if the length of loglam is too big, or its values lead the coefficient matrix to singularity, I receive the following message:
Log10 Eigenvalues range from
-5.44495317739048 to 6.78194912518214
It is created by the eigchk function, which I already mentioned above.
Now my question is: is there any way to catch this so-called warning? By catch I mean some function or method that alerts me when this has happened, so that I can adjust the values of loglam. Since there is no actual warning definition in the function besides this printed message, I have run out of ideas.
Thank you all very much for your suggestions.
By "catch the warning", if you mean, will alert you that there is a potential problem with loglam, then you might want to look at try and tryCatch functions. Then you can define the behavior you want implemented if any warning condition is satisfied.
If you just want to store the output of the warning (which might be assumed from the question title, but may not be what you want), then try looking into capture.output.
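Since the "Log10 Eigenvalues range from ..." lines are printed with cat(), capture.output() will pick them up; again only a sketch, with the same assumptions as above:

# Capture what eigchk() prints and test for the near-singularity message.
printed <- capture.output(fRegi <- fRegress.CV(annualprec, templist, betalisti))
if (any(grepl("Eigenvalues range", printed))) {
  message("Near-singular coefficient matrix for loglam = ", loglam[ilam])
}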
I've been working on an R package that is just a REST API wrapper for a graph database. I have a function createNode that returns an object with class node and entity:
# Connect to the db.
graph = startGraph("http://localhost:7474/db/data/")
# Create two nodes in the db.
alice = createNode(graph, name = "Alice")
bob = createNode(graph, name = "Bob")
> class(alice)
[1] "node" "entity"
> class(bob)
[1] "node" "entity"
I have another function, createRel, that creates a relationship between two nodes in the database. It is specified as follows:
createRel = function(fromNode, type, toNode, ...) {
  UseMethod("createRel")
}

createRel.default = function(fromNode, ...) {
  stop("Invalid object. Must supply node object.")
}

createRel.node = function(fromNode, type, toNode, ...) {
  params = list(...)
  # Check if toNode is a node.
  stopifnot("node" %in% class(toNode))
  # Making REST API calls through RCurl and stuff.
}
The ... allows the user to add an arbitrary number of properties to the relationship in the form key = value. For example,
rel = createRel(alice, "KNOWS", bob, since = 2000, through = "Work")
This creates an (Alice)-[KNOWS]->(Bob) relationship in the db, with the properties since and through and their respective values. However, if a user specifies properties with keys from or to in the ... argument, R gets confused about the classes of fromNode and toNode.
Specifying a property with the key from creates confusion about the class of fromNode, and the call is dispatched to createRel.default:
> createRel(alice, "KNOWS", bob, from = "Work")
Error in createRel.default(alice, "KNOWS", bob, from = "Work") :
Invalid object. Must supply node object.
3 stop("Invalid object. Must supply node object.")
2 createRel.default(alice, "KNOWS", bob, from = "Work")
1 createRel(alice, "KNOWS", bob, from = "Work")
Similarly, if a user specifies a property with the key to, there is confusion about the class of toNode, and execution stops at the stopifnot():
Error: "node" %in% class(toNode) is not TRUE
4 stop(sprintf(ngettext(length(r), "%s is not TRUE", "%s are not all TRUE"),
ch), call. = FALSE, domain = NA)
3 stopifnot("node" %in% class(toNode))
2 createRel.node(alice, "KNOWS", bob, to = "Something")
1 createRel(alice, "KNOWS", bob, to = "Something")
I've found that explicitly setting the parameters in createRel works fine:
rel = createRel(fromNode = alice,
type = "KNOWS",
toNode = bob,
from = "Work",
to = "Something")
# OK
But I am wondering how I need to edit my createRel function so that the following syntax will work without error:
rel = createRel(alice, "KNOWS", bob, from = "Work", to = "Something")
# Errors galore.
The GitHub user who opened the issue mentioned it is most likely a conflict with setAs on dispatch, which has arguments called from and to. One solution is to get rid of ... and change createRel to the following:
createRel = function(fromNode, type, toNode, params = list()) {
  UseMethod("createRel")
}

createRel.default = function(fromNode, ...) {
  stop("Invalid object. Must supply node object.")
}

createRel.node = function(fromNode, type, toNode, params = list()) {
  # Check if toNode is a node.
  stopifnot("node" %in% class(toNode))
  # Making REST API calls through RCurl and stuff.
}
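With that signature, relationship properties would be passed as a named list, so the problematic call from above would become something like:

# Properties, including ones named "from" or "to", now live inside params,
# so they can no longer be partially matched against fromNode/toNode.
rel = createRel(alice, "KNOWS", bob,
                params = list(since = 2000, through = "Work",
                              from = "Work", to = "Something"))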
But, I wanted to see if I had any other options before making this change.
Not really an answer, but...
The problem is that the user-provided argument 'from' is being (partially) matched to the formal argument 'fromNode'.
f = function(fromNode, ...) fromNode
f(1, from=2)
## [1] 2
The rules are outlined in section 4.3.2 of RShowDoc('R-lang'): named arguments are exact-matched first, then partially matched, and then the unnamed arguments are assigned by position.
It's hard to know how to enforce exact matching, other than using single-letter argument names! Actually, for a generic this might not be as trite as it sounds -- x is a pretty generic variable name. If 'from' and 'to' were common arguments to ..., you could change the argument list to "fromNode, type, toNode, ..., from, to", check for missing(from) in the body of the function, and act accordingly; I don't think this would be pleasant, and the user would invariably provide an argument 'fro'. A sketch of that idea follows.
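Here is what that might look like, using the argument names from the question (a sketch, not tested against the real package; the generic needs the same formals so that dispatch is not derailed before the method is reached):

createRel = function(fromNode, type, toNode, ..., from, to) {
  UseMethod("createRel")
}

createRel.node = function(fromNode, type, toNode, ..., from, to) {
  params = list(...)
  if (!missing(from)) params$from = from  # 'from' now exact-matches this formal,
  if (!missing(to))   params$to   = to    # so it can no longer grab fromNode/toNode
  stopifnot("node" %in% class(toNode))
  # params would then be sent along with the REST call.
  params
}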
While enforcing exact matching (and errors, via warn=2) by setting global options() might be helpful in debugging (though by then you'd probably know what you were looking for!), it doesn't help the package author who is trying to write code that works for users in general.
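For reference, the global options alluded to here are, I believe, warnPartialMatchArgs combined with warn = 2; useful interactively, but not something a package can set for its users:

options(warnPartialMatchArgs = TRUE,  # warn whenever an argument is matched only partially
        warn = 2)                     # escalate warnings to errors
f(1, from = 2)  # now fails instead of silently putting 2 into 'fromNode'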
It might be reasonable to ask on the R-devel mailing list whether it might be time for this behavior to be changed (on the 'several releases' time scale); partial matching probably dates as a 'convenience' from the days before tab completion.