I have been given the following code for R, but I am having trouble understanding what it is doing. In fact, I cannot even run it in R because of its syntax. I assume the syntax is for lower-level code behind R. If someone could help explain what's happening here and translate this into executable R code, that would be very helpful.
soft_thresholding = function(x,a){
result a)] a)] - a
result[which(x < -a)] = x[which(x < -a)] + a
return(result)}
Here is a summary of my findings. This is not a definitive answer, but it could help the questioner.
If one uses WordPress, then x <- a will look like x < -a. Check this URL, which confirms that assumption.
A further online search for the function name in the question, "soft_thresholding", shows that this function is probably attempting to do soft thresholding, as defined here.
Some more online searching about soft thresholding leads to a CRAN package, which is here.
A deeper dive into the R folder of the package source shows the following.
soft.threshold <- function(x,sumabs=1)
  return(soft(x, BinarySearch(x,sumabs)))
The function above seems very close to the code in the question.
Furthermore, the soft.threshold function uses another internal function, BinarySearch, which looks like this:
BinarySearch <- function(argu, sumabs) {
  if (norm2(argu) == 0 || sum(abs(argu/norm2(argu))) <= sumabs) return(0)
  lam_max = max(abs(argu))
  lam1 <- 0
  lam2 <- lam_max
  iter <- 1
  while (iter < 500) {
    su <- soft(argu, (lam1+lam2)/2)
    if (sum(abs(su/norm2(su))) < sumabs) {
      lam2 <- (lam1+lam2)/2
    } else {
      lam1 <- (lam1+lam2)/2
    }
    if ((lam2-lam1)/lam1 < 1e-10) {
      if (lam2 != lam_max) {
        return(lam2)
      } else {
        return(lam1)
      }
    }
    iter <- iter + 1
  }
  warning("Didn't quite converge")
  return((lam1+lam2)/2)
}
This recursive research leads one to believe that the function is perhaps attempting to mimic the soft.threshold function from the CRAN package "RGCCA".
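Since the questioner also asked for an executable translation, here is a minimal sketch of what the snippet might have been, assuming the intent is plain element-wise soft thresholding (the garbled line is reconstructed from that definition, not recovered from the original source):

soft_thresholding <- function(x, a) {
  # Shrink every element of x toward zero by a; anything with |x| <= a becomes 0.
  result <- numeric(length(x))
  result[which(x > a)] <- x[which(x > a)] - a
  result[which(x < -a)] <- x[which(x < -a)] + a
  return(result)
}

soft_thresholding(c(-3, -0.5, 0, 0.5, 3), a = 1)
# [1] -2  0  0  0  2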
Hope it helps
Hello, I created the following functions that test reliability. However, I want to combine them into one function like reliability <- function(x), so that it gives me a single matrix showing the answer from each function within "reliability", because each one has been giving me its answer on its own. Any ideas would help.
splithalf1 <- function(data) {
  n <- ncol(data)
  tek <- data[, seq(1, n, 2)]          # odd-numbered items
  cift <- data[, seq(2, n, 2)]         # even-numbered items
  top_single <- rowSums(tek)           # total score on the odd half
  top_double <- rowSums(cift)          # total score on the even half
  kor <- cor(top_single, top_double)
  r <- 2 * kor / (1 + kor)             # Spearman-Brown corrected split-half reliability
  return(r)
}
cr.alpha2 <- function(x) {
  n <- ncol(x)
  kov <- cov(x)                        # covariance matrix of the items
  kov1 <- as.vector(kov)               # flatten it
  kov2 <- unique(kov1)                 # unique values (variances and covariances)
  kov3 <- kov2[-1]                     # drop the first unique value
  kov4 <- sum(kov3) / length(kov3)     # average of the remaining values
  pay <- n * kov4                      # numerator of the alpha formula
  payd <- (1 + (n - 1) * kov4)         # denominator
  alpha <- pay / payd
  return(alpha)
}
kr20 <- function(x) {
  n <- ncol(x)
  pq <- function(x) {                  # item variance for a 0/1-scored item: p * q
    p <- mean(x)
    q <- 1 - p
    res <- p * q
    return(res)
  }
  pay <- sum(apply(x, 2, pq))          # sum of the item variances
  top <- rowSums(x)                    # total score per respondent
  payda <- var(top)                    # variance of the total scores
  result <- n / (n - 1) * (1 - (pay / payda))
  return(result)
}
Stack Overflow is not a coding service! As a teaching service to you, however, I will suggest several things, which is probably what your teacher intended in the first place!
Study up on fundamental R. There are innumerable (well numerable but innumerable for all practical individual purposes) resources for free on the net. One good (IMO), free, intro-to-journeyman-level book is R for Data Science which can be accessed here: https://r4ds.had.co.nz/
Check out assigning function return values to variables.
Check out the c() and matrix() functions; a sketch of how they could fit together follows below.
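For example (a minimal sketch, assuming splithalf1, cr.alpha2 and kr20 are defined as above and all accept the same data set), the three results could be collected like this:

reliability <- function(x) {
  # Run each reliability function and gather the coefficients into a named 1 x 3 matrix.
  res <- c(split_half = splithalf1(x),
           cronbach_alpha = cr.alpha2(x),
           kr20 = kr20(x))
  matrix(res, nrow = 1, dimnames = list(NULL, names(res)))
}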
I am using R's excellent future package. The documentation mentions %globals% and %packages% for assigning global variables and packages to be evaluated in the future environment, but those seem to only work with %<-%.
My question is: is there a way to do that with future.apply as well? I tried
x = 1
future.apply::future_sapply(1:50, function(y) {
  glue("{x}")
}) %packages% "glue" %globals% "x"
and it doesn't work.
If you look at the help page for future_sapply, you'll see that future_lapply has the arguments future.packages and future.globals, and if you read carefully, these are also used in future_sapply. So this works:
x = 1
future.apply::future_sapply(1:50, function(y) {
  glue("{x}")
}, future.packages = "glue", future.globals = "x")
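One usage note (an addition on my part, not something asked about): futures are resolved with the sequential plan by default, so if the point is to run these in parallel you would also pick a backend with plan() first, for example:

library(future)
plan(multisession)  # resolve the futures in parallel background R sessions

x <- 1
future.apply::future_sapply(1:50, function(y) {
  glue("{x}")
}, future.packages = "glue", future.globals = "x")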
I wrote code for the DBSCAN algorithm. When I call a function from the main part of the script, it doesn't work and I don't know why. Here is the code:
x=read.delim("C:/Users/mf/Desktop/stp.txt")
y=read.delim("C:/Users/mf/Desktop/stp.txt")
hash=0
c=temp1=0
q=1
C=0
eps=30
MinPts=30
lable=matrix(-2,1,nrow(x))
clusterlab=matrix(-3,1,nrow(x))
for (p in 1:nrow(x)) {
  if (lable[p] == -2) {
    lable[p] = 1                       # visited = 1 and nonvisited = -2
    NeighborPts = regionQuery(p, eps)
    temp = nrow(NeighborPts) - 1
    if (temp < MinPts) {
      clusterlab[p] = 0                # noise = 0
    } else if (temp >= MinPts) {
      C = C + 1
      haha = expandCluster(p, NeighborPts, C, eps, MinPts, hash, clusterlab, lable)
    }
  }
}
expandCluster <- function(p, NeighborPts, C, eps, MinPts, hash, clusterlab, lable) {
  hash = hash + 1
  clusterlab[p] = C
  for (q in 2:nrow(NeighborPts)) {
    testP = NeighborPts[q, 1]
    if (lable[testP] == -2)
      lable[testP] = 1
    newNeighborPts = regionQuery(testP, eps)
    if ((nrow(newNeighborPts) - 1) >= MinPts)
      NeighborPts = rbind(NeighborPts, newNeighborPts)
    if (clusterlab[testP] == -3)       # is not yet a member of any cluster
      clusterlab[testP] = C
  }
  return(hash)
}
regionQuery <- function(p, eps) {
  neighborhood = p
  for (i in 1:nrow(x)) {
    temp = sqrt((x[p, 1] - y[i, 1])^2 + (x[p, 2] - y[i, 2])^2)
    if (temp < eps) {
      c = c + 1
      neighborhood = rbind(neighborhood, i)
    }
  }
  # neighborhood = neighborhood[-1,]
  return(neighborhood)
}
When I call

haha = expandCluster(p, NeighborPts, C, eps, MinPts, hash, clusterlab, lable)

it doesn't work! I added the hash variable to check this: every time expandCluster is called, hash should increase, but it does not. lable and clusterlab do not change either.
The data is here.
Functions in R are usually designed to pass parameters by value and not by reference. Updating the value of the variables passed in will not change them in the calling environment. Generally speaking, the R way to do this is for your function to return the updated data. If you want to return more than one updated variable, you can use a list to do this.
You will see people using the assign to parent environment operator (<<-) and even assign to the global environment within functions. This style of coding works, but it goes against the principle that functions generally don't modify the calling environment and may make debugging and integrating different pieces of code into a larger project much harder.
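As an illustration (a made-up helper in the spirit of the code above, not a drop-in replacement for expandCluster), a function that needs to update several objects can return them in a list, and the caller then reassigns them:

updateLabels <- function(p, C, hash, clusterlab, lable) {
  # Work on local copies and hand the updated values back to the caller.
  hash <- hash + 1
  clusterlab[p] <- C
  lable[p] <- 1
  return(list(hash = hash, clusterlab = clusterlab, lable = lable))
}

res <- updateLabels(p, C, hash, clusterlab, lable)
hash <- res$hash
clusterlab <- res$clusterlab
lable <- res$lable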
I am completely new to R.
I am trying to use the dist function with a custom function, based on the specification here. I was unable to pass the custom function directly by name, so I tried to add it using the registry described here, but it appears that I am missing a library. However, I'm not sure which library I need, and I cannot find a reference that gives its name.
Here's a code sample that I'm trying to run:
library(cluster)
myfun <- function(x, y) {
  # Count the positions at which x and y differ (a Hamming-style distance).
  numDiffs <- 0
  for (i in seq_along(x)) {            # loop over positions, not values
    if (x[i] != y[i])
      numDiffs <- numDiffs + 1
  }
  return(numDiffs)
}
summary(pr_DB)
pr_DB$set_entry(FUN = myfun, names = c("myfun", "vectorham"))
pr_DB$get_entry("MYFUN")
Here's the error:
Error in summary(pr_DB) : object 'pr_DB' not found
Execution halted
You need to learn the conventions used by R help pages. That "{proxy}" at the top of the page you linked to is really the answer to your question. The convention for the help page construction is "topic {package_name}".
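In other words, the registry comes from the proxy package, so loading it makes pr_DB available. A sketch (assuming the page in question documents proxy's pr_DB registry):

library(proxy)   # provides the pr_DB registry

summary(pr_DB)
pr_DB$set_entry(FUN = myfun, names = c("myfun", "vectorham"))
pr_DB$get_entry("myfun")

# The registered measure can then be used by name, e.g.:
# dist(rbind(c(1, 0, 1, 1), c(1, 1, 1, 0)), method = "myfun")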
I am having a very hard time trying to subtract two vectors in my OpenBUGS model. The last line of the code below keeps giving the error "expected right parenthesis":
model {
  for (i in 1:N) {
    for (j in 1:q) {
      vv[i,j] ~ dnorm(vz[i,j], tau.eta[j])
    }
    vz[i,1:q] ~ dmnorm(media.z[i,], K.delta[,])
    for (j in 1:q) {
      mean.z[i,j] <- inprod(K[i,], vbeta[j,])
    }
    K[i,1] <- 1.0
    for (j in 1:N) {
      K[i,j+1] <- sum(ve[,i] - ve[,j])
    }
  }
If I change that line to K[i,j+1] <- sum(ve[,i]) - sum(ve[,j]), then the model works fine, but that is not what I want to do. I would like to subtract element-wise.
I searched SO for OpenBUGS, but there are only a few unrelated topics:
OpenBUGS - Variable is not defined
OpenBUGS: missing value in Bernoulli distribution
In Stats Stack Exchange there is this post which is close, but I still could not make how to implement this in my model:
https://stats.stackexchange.com/questions/20653/vector-multiplication-in-bugs-and-jags/20739#20739
I understand I have to write a for loop, but this thing is sure giving me a big headache. :)
I tried changing that line to:
for(k in 1:p) { temp [k] <- ve[k,i] - ve[k,j] }
K[i,j+1] <- sum(temp[])
where 'p' is the number of rows in each 've'. Now I keep getting the error "multiple definitions of node temp[1]".
I could definitely use some help. It will be much appreciated.
Best regards to all and thanks in advance!
PS: I wanted to add the tag "OpenBUGS" to this question but unfortunately I couldn't because it would be a new tag and I do not have enough reputation. I added "winbugs" instead.
The "multiple definitions" error is because temp[k] is redefined over and over again within a loop over i and another loop over j - you can only define it once. To get around that, use i and j subscripts like
for(k in 1:p) { temp[k,i,j] <- ve[k,i] - ve[k,j] }
K[i,j+1] <- sum(temp[,i,j])
Though if that compiles and runs, I'd check the results to make sure that's exactly what you want mathematically.
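For that last check, a quick way to see what the element-wise construction computes is to mimic it outside of BUGS (an R sketch with made-up numbers, purely illustrative):

set.seed(1)
p <- 4
ve <- matrix(rnorm(p * 3), nrow = p)   # p rows, one column per unit

i <- 1; j <- 2
temp <- ve[, i] - ve[, j]              # the element-wise differences from the k loop
sum(temp)                              # the value K[i, j + 1] would receive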