I have an igraph object, which I created with the igraph library. This object is a list, and some of its components have a length of 2. I would like to remove all of those components.
IGRAPH clustering walktrap, groups: 114, mod: 0.79
+ groups:
$`1`
[1] "OTU0041" "OTU0016" "OTU0062"
[4] "OTU1362" "UniRef90_A0A075FHQ0" "UniRef90_A0A075FSE2"
[7] "UniRef90_A0A075FTT8" "UniRef90_A0A075FYU2" "UniRef90_A0A075G543"
[10] "UniRef90_A0A075G6B2" "UniRef90_A0A075GIL8" "UniRef90_A0A075GR85"
[13] "UniRef90_A0A075H910" "UniRef90_A0A075HTF5" "UniRef90_A0A075IFG0"
[16] "UniRef90_A0A0C1R539" "UniRef90_A0A0C1R6X4" "UniRef90_A0A0C1R985"
[19] "UniRef90_A0A0C1RCN7" "UniRef90_A0A0C1RE67" "UniRef90_A0A0C1RFI5"
[22] "UniRef90_A0A0C1RFN8" "UniRef90_A0A0C1RGE0" "UniRef90_A0A0C1RGX0"
[25] "UniRef90_A0A0C1RHM1" "UniRef90_A0A0C1RHR5" "UniRef90_A0A0C1RHZ4"
+ ... omitted several groups/vertices
For example, this one:
> a[[91]]
[1] "OTU0099" "UniRef90_UPI0005B28A7E"
I tried this, but it does not work:
a[lapply(a,length)>2]
Any help?
Since you didn't provide any reproducible data or example, I had to produce some dummy data:
# create dummy data
a <- list(x = 1, y = 1:4, z = 1:2)
# remove elements in list with lengths greater than 2:
a[which(lapply(a, length) > 2)] <- NULL
In case you wanted to remove the items with lengths exactly equal to 2 (the question is ambiguous on this point), the last line should instead be:
a[which(lapply(a, length) == 2)] <- NULL
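Note that if a is really an igraph communities object (as the IGRAPH clustering walktrap header suggests) rather than a plain list, you can extract the groups as an ordinary list first and filter that. A minimal sketch with dummy data (the wc clustering below is built from igraph's bundled Zachary karate graph, not your data):
library(igraph)
g   <- make_graph("Zachary")     # built-in example graph
wc  <- cluster_walktrap(g)       # a communities object, like the one shown
grp <- communities(wc)           # plain named list of group members
grp[lengths(grp) == 2] <- NULL   # drop the groups with exactly two members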
I have a couple of datasets (short examples below):
> print(partOne)
[1] 0.010966943 -0.006819522 -0.007189830 0.039736714 0.002013070
[6] -0.043946666 0.003808415 0.199010991 -0.094001478 -0.053006526
[11] -0.051489992 0.019122283 -0.011215761 0.057408738 -0.020809564
[16] -0.041295915 0.010134001 -0.011398076
> print(part2)
[1] 0.13070012 0.15793754 0.06980192 0.13270089 0.11384798 0.24417631
[7] 0.10363273 0.09182783 0.12217490 0.47649356 0.33660955 0.23079863
[13] 0.21581061 0.13967763 0.05988797 0.28255164 0.16277804 0.12716975
[19] 0.19299641 0.21452418
I need to weight each partOne value by the current part2 value divided by the sum of the N previous part2 values.
So, for the short example above (which has 20 values in each array), the pseudo-code would be:
Skip to item N+1 (e.g. assume N=10 for this example)
Calculate (partOne[11]*part2[11])/sum(part2[1:10])
Increment the index by 1
Calculate (partOne[12]*part2[12])/sum(part2[2:11])
And so on.
Try this
x[11:20] * y[11:20] / sapply(1:10, function(t) sum(y[t:(t + 9)]))
Data
x <- rnorm(20)
y <- rnorm(20)
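For reference, the same idea can be wrapped in a function using the question's variable names and an arbitrary window size N; the helper name weighted_ratio is my own, not part of the original answer:
# Sketch: weight each value by the current weight divided by the sum of
# the N previous weights; assumes both vectors have the same length > N.
weighted_ratio <- function(partOne, part2, N = 10) {
  idx <- (N + 1):length(partOne)
  denom <- sapply(idx, function(i) sum(part2[(i - N):(i - 1)]))
  partOne[idx] * part2[idx] / denom
}
weighted_ratio(x, y)  # same result as the one-liner above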
Is it possible to insert "" instead of NA when creating a new row in gdf?
EDIT: Here's some sample code that I tried
require(gWidgets2RGtk2)
df <- data.frame(x = 1:5, y = 6:10)  # sample data frame
w2 <- gwindow("keyfile editor")
h <- gdf(df, cont = w2)
addHandlerChanged(h, handler = function(h, ...) {  # handler to remove NA
  h <<- apply(h[1:nrow(h), 1:ncol(h)], 2, function(x) gsub("NA", "", x))
})
svalue(h$obj, drop = FALSE)
gives you the new value for the updated row. So in theory,
addHandlerChanged(h, handler = function(h, ...) {
  svalue(h$obj, drop = FALSE)[] <- lapply(
    svalue(h$obj, drop = FALSE),
    function(x) {
      x[is.na(x)] <- ""
      x  # return the modified column
    }
  )
})
should replace all the NAs with "". There are two problems:
Firstly, replacing the missing values with an empty string converts the whole column to a character vector, which you probably don't want. Secondly, there seems to be a problem with svalue<- that means the values aren't updating.
I think that the problem is this:
methods(`svalue<-`)
## [1] svalue<-.default* svalue<-.GCheckbox* svalue<-.GFormLayout* svalue<-.GGroup*
## [5] svalue<-.GHtml* svalue<-.GLabel* svalue<-.GMenuBar* svalue<-.GRadio*
## [9] svalue<-.GToolBar* svalue<-.GTree*
shows that there is no GDf-specific method for setting the svalue, so svalue<-.default will be called.
gWidgets2:::`svalue<-.default`
## function (obj, index = NULL, ..., value)
## {
## if (!isExtant(obj)) {
## return(obj)
## }
## if (getWithDefault(index, FALSE))
## obj$set_index(value, ...)
## else obj$set_value(value, ...)
## obj
## }
This calls the object's set_value method. Listing the methods available on the object:
ls(attr(h, ".xData"))
## [1] "add_cell_popup" "add_popup_to_view_col" "add_to_parent"
## [4] "add_view_columns" "block" "block_editable_column"
## [7] "cell_popup_id" "change_signal" "clear_stack"
## [10] "clear_view_columns" "cmd_coerce_column" "cmd_insert_column"
## [13] "cmd_remove_column" "cmd_replace_column" "cmd_set_column_name"
## [16] "cmd_set_column_names" "cmd_stack" "coerce_with"
## [19] "connected_signals" "default_cell_popup_menu" "default_expand"
## [22] "default_fill" "default_popup_menu" "freeze_attributes"
## [25] "get_column_index" "get_column_value" "get_dim"
## [28] "get_name" "get_view_column" "handler_id"
## [31] "initFields" "initialize" "initialize#GComponent"
## [34] "initialize#GWidget" "invoke_change_handler" "invoke_handler"
## [37] "is_editable" "map_j" "model"
## [40] "not_deleted" "notify_observers" "parent"
## [43] "set_editable" "set_frame" "set_name"
## [46] "set_names" "set_parent" "store"
## [49] "toolkit" "unblock_editable_column" "widget"
but a set_value method doesn't seem to be implemented yet.
Well, Richie did his usual thorough job. This question has a few problems.
First, you use the variable h both as a global variable (for the gdf object) and as the argument to the handler, so within the handler h does not refer to the object; h$obj does.
Second, setting values for a selection in the gdf object uses the [<- method (h[i,j] <- "" calls the h object's set_items method). You tried to modify the object, not call a method on it.
As for NA values, the items to select from are backed by an RGtk2DataFrame, which, like a data frame in R, will coerce a column to character if you try to put a character value into a numeric one. It is best to use R as it is intended: if you really want to get rid of NA values, you can do so when you go to use the values the user has edited, modifying h[,] as you want.
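For instance, a minimal sketch of that last suggestion, done at read-out time rather than in the widget (note that blanking NAs with "" coerces the affected columns to character):
vals <- h[ , ]  # read the edited values back out of the gdf
vals[] <- lapply(vals, function(col) ifelse(is.na(col), "", col))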
Now, if you really wanted to do this, I think you could do it at the RGtk2 level by writing an appropriate cell renderer.
I tried to find the two values in the following vector that are closest to 10. The expected values are 10.12099196 and 10.63054170. Your input would be appreciated.
[1] 0.98799517 1.09055728 1.20383713 1.32927166 1.46857509 1.62380423 1.79743107 1.99241551 2.21226576 2.46106916 2.74346924 3.06455219 3.42958354 3.84350238 4.31005838
[16] 4.83051356 5.40199462 6.01590035 6.65715769 7.30532785 7.93823621 8.53773241 9.09570538 9.61755743 10.12099196 10.63018180 11.16783243 11.74870531 12.37719092 13.04922392
[31] 13.75661322 14.49087793 15.24414627 16.00601247 16.75709565 17.46236358 18.06882072 18.51050094 18.71908344 18.63563523 18.22123225 17.46709279 16.40246292 15.09417699 13.63404124
[46] 12.11854915 10.63054170 9.22947285 7.95056000 6.80923943 5.80717982 4.93764782 4.18947450 3.54966795 3.00499094 2.54283599 2.15165780 1.82114213 1.54222565 1.30703661
[61] 1.10879707 0.94170986 0.80084308 0.68201911 0.58171175 0.49695298 0.42525021 0.36451350 0.31299262 0.26922281 0.23197860 0.20023468 0.17313291 0.14995459 0.13009730
[76] 0.11305559 0.09840485 0.08578789 0.07490387 0.06549894 0.05735864
Another alternative is to let the user control the "tolerance" that defines what "closeness" means; this can be done with a simple function:
close <- function(x, value, tol = NULL) {
  if (!is.null(tol)) {
    x[abs(x - value) <= tol]    # only the values within the tolerance
  } else {
    x[order(abs(x - value))]    # all values, nearest first
  }
}
Here x is a vector of values, value is the reference value to measure closeness against, and tol is a numeric tolerance. If tol is NULL, the function returns all the values ordered by closeness to value; otherwise it returns just the values within tol of value.
> close(x, value=10, tol=.7)
[1] 9.617557 10.120992 10.630182 10.630542
> close(x, value=10)
[1] 10.12099196 9.61755743 10.63018180 10.63054170 9.22947285 9.09570538 11.16783243
[8] 8.53773241 11.74870531 7.95056000 7.93823621 12.11854915 12.37719092 7.30532785
[15] 13.04922392 6.80923943 6.65715769 13.63404124 13.75661322 6.01590035 5.80717982
[22] 14.49087793 5.40199462 4.93764782 15.09417699 4.83051356 15.24414627 4.31005838
[29] 4.18947450 16.00601247 3.84350238 16.40246292 3.54966795 3.42958354 16.75709565
[36] 3.06455219 3.00499094 2.74346924 2.54283599 17.46236358 17.46709279 2.46106916
[43] 2.21226576 2.15165780 1.99241551 18.06882072 1.82114213 1.79743107 18.22123225
[50] 1.62380423 1.54222565 18.51050094 1.46857509 18.63563523 1.32927166 1.30703661
[57] 18.71908344 1.20383713 1.10879707 1.09055728 0.98799517 0.94170986 0.80084308
[64] 0.68201911 0.58171175 0.49695298 0.42525021 0.36451350 0.31299262 0.26922281
[71] 0.23197860 0.20023468 0.17313291 0.14995459 0.13009730 0.11305559 0.09840485
[78] 0.08578789 0.07490387 0.06549894 0.05735864
In the first example I defined "closeness" as a difference of at most 0.7 between value and each element of x. In the second example the function close returns a vector in which the first values are the closest to value and the last are the farthest from it.
Since my solution does not provide an easy (practical) way to choose tol, as @Arun pointed out, one way to find the closest values is to set tol=NULL and ask for the exact number of close values, as in:
> close(x, value=10)[1:3]
[1] 10.120992 9.617557 10.630182
This shows the three values in x closest to 10.
I can't think of a way without using sort. However, you can speed it up by using partial sort.
x[abs(x - 10) %in% sort(abs(x - 10), partial = 1:2)[1:2]]
# [1] 9.617557 10.120992
In case the same values are present more than once, you'll get all of them here. So you can either wrap this in unique, or use match instead as follows:
x[match(sort(abs(x - 10), partial = 1:2)[1:2], abs(x - 10))]
# [1] 10.120992 9.617557
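If you need this for other targets or a different number of values, the same partial-sort trick generalizes; the wrapper name closest_n is mine, not from base R:
closest_n <- function(x, target, n = 2) {
  d <- abs(x - target)  # distance of each element from the target
  x[match(sort(d, partial = seq_len(n))[seq_len(n)], d)]
}
closest_n(x, 10)
# [1] 10.120992  9.617557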
dput output:
dput(x)
c(0.98799517, 1.09055728, 1.20383713, 1.32927166, 1.46857509,
1.62380423, 1.79743107, 1.99241551, 2.21226576, 2.46106916, 2.74346924,
3.06455219, 3.42958354, 3.84350238, 4.31005838, 4.83051356, 5.40199462,
6.01590035, 6.65715769, 7.30532785, 7.93823621, 8.53773241, 9.09570538,
9.61755743, 10.12099196, 10.6301818, 11.16783243, 11.74870531,
12.37719092, 13.04922392, 13.75661322, 14.49087793, 15.24414627,
16.00601247, 16.75709565, 17.46236358, 18.06882072, 18.51050094,
18.71908344, 18.63563523, 18.22123225, 17.46709279, 16.40246292,
15.09417699, 13.63404124, 12.11854915, 10.6305417, 9.22947285,
7.95056, 6.80923943, 5.80717982, 4.93764782, 4.1894745, 3.54966795,
3.00499094, 2.54283599, 2.1516578, 1.82114213, 1.54222565, 1.30703661,
1.10879707, 0.94170986, 0.80084308, 0.68201911, 0.58171175, 0.49695298,
0.42525021, 0.3645135, 0.31299262, 0.26922281, 0.2319786, 0.20023468,
0.17313291, 0.14995459, 0.1300973, 0.11305559, 0.09840485, 0.08578789,
0.07490387, 0.06549894, 0.05735864)
I'm not sure your question is clear, so here's another approach. To find the value closest to your first desired value, 10.12099196, subtract it from the vector, take the absolute value, and then find the index of the smallest element. Explicitly:
delx <- abs(10.12099196 - x)
min.index <- which.min(delx)  # returns the index of the first minimum if there are duplicates
x[min.index]                  # the value itself
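To look up both of your target values at once, the same idea can be applied with sapply (my own extension of the snippet above):
targets <- c(10.12099196, 10.63054170)
sapply(targets, function(v) x[which.min(abs(v - x))])
# [1] 10.12099 10.63054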
Apologies if this was not the intent of your question.
I have a data frame, deflator.
I want to get a new data frame inflation which can be calculated by:
inflation[i] = 100 * (deflator[i] - deflator[i-4]) / deflator[i-4]
The data frame deflator has 71 numbers:
> deflator
[1] 0.9628929 0.9596746 0.9747274 0.9832532 0.9851884
[6] 0.9797770 0.9913502 1.0100561 1.0176906 1.0092516
[11] 1.0185932 1.0241043 1.0197975 1.0174097 1.0297328
[16] 1.0297071 1.0313232 1.0244618 1.0347808 1.0480411
[21] 1.0322142 1.0351968 1.0403264 1.0447121 1.0504402
[26] 1.0487097 1.0664664 1.0935239 1.0965951 1.1141851
[31] 1.1033155 1.1234482 1.1333870 1.1188136 1.1336276
[36] 1.1096461 1.1226584 1.1287245 1.1529588 1.1582911
[41] 1.1691221 1.1782178 1.1946234 1.1963453 1.1939922
[46] 1.2118189 1.2227960 1.2140535 1.2228828 1.2314258
[51] 1.2570788 1.2572214 1.2607763 1.2744415 1.2982076
[56] 1.3318808 1.3394186 1.3525902 1.3352815 1.3492751
[61] 1.3593859 1.3368135 1.3642940 1.3538567 1.3658135
[66] 1.3710932 1.3888638 1.4262185 1.4309707 1.4328823
[71] 1.4497201
This is a very tricky question for me.
I tried to do this using a for loop:
> d <- data.frame(deflator)
> for (i in 1:71) {d <- rbind(d, c(deflator))}
I think I might be doing it wrong.
Why use data frames? This is a straightforward vector operation.
inflation <- 100 * (deflator[-(1:4)] - deflator[1:67]) / deflator[1:67]
I agree with @Fhnuzoag that your example suggests calculations on a numeric vector, not a data frame. Here's an additional way to do your calculation, taking advantage of the lag argument in the diff function (with indexes that match those in your question):
lagBy <- 4  # the number of indexes by which to lag
laggedDiff <- diff(deflator, lag = lagBy)  # the numerator above
theDenom <- deflator[seq_len(length(deflator) - lagBy)]  # the denominator above
inflation <- 100 * laggedDiff / theDenom  # times 100, per the question's formula
The first few results are:
head(inflation)
# [1] 2.315470 2.094710 1.705379 2.725941 3.299085 3.008297
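As a quick sanity check (my own addition), this agrees with the direct vector version from the other answer:
all.equal(inflation, 100 * (deflator[-(1:4)] - deflator[1:67]) / deflator[1:67])
# [1] TRUE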