Should my projects always have output in Console in Xcode 8

I'm fairly new to Xcode, Swift, and making apps. So when Xcode 8 and Swift 3 came out, this was my first time having to transition my app to a new version of either.
What I'm seeing now is that every time I run my app, it immediately prints text to the console, something that never used to happen. I was curious whether it was a problem with my app, so I made a completely new project and ran it without making any changes, and I still got the output. Is this some new feature of Xcode, or is there possibly something wrong with my Xcode?
This is the output I'm seeing:
2016-09-27 10:02:06.942171 Test App[42382:2680437] subsystem: com.apple.UIKit, category: HIDEventFiltered, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-27 10:02:06.961602 Test App[42382:2680437] subsystem: com.apple.UIKit, category: HIDEventIncoming, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-27 10:02:07.027065 Test App[42382:2680433] subsystem: com.apple.BaseBoard, category: MachPort, enable_level: 1, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0
2016-09-27 10:02:07.065095 Test App[42382:2680364] subsystem: com.apple.UIKit, category: StatusBar, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-27 10:02:07.103796 Test App[42382:2680364] subsystem: com.apple.BackBoardServices.fence, category: App, enable_level: 1, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0
It doesn't appear that there is any mention of an error, so I'm wondering if this is even a problem, or what exactly it is trying to tell me.
Thank you all.

Related

What is the most typographically safe way to write a linear program in R?

In R, linear programming with the lpSolve library requires an extensive amount of manually typed values for the matrices and vectors needed for the objective function coefficients, left-hand-side coefficients, constraints, etc. Messing up even one value or one comma will make the script error or, worse, the program will find a solution but it will be the wrong one due to an incorrect setup. For large, real-world problems like network flow, simply typing out the program is prohibitively time-consuming. What am I missing about R's capabilities in this space? Or is there an alternative tool better suited to the job? Open source is preferred due to budget.
Here is an example of the type of code needed in R for a relatively simple optimization problem:
# fleet size optimization, with an added computation of total miles driven
library(lpSolve)
# Objective function coefficients
ObjCoeff<-c(1300, 690, 421.5, 531, 690, 427.50, 277.50, 421.5, 427.50, 303, 531, 277.50, 303,
460, 281, 354, 460, 285, 185, 281, 285, 202, 354, 185, 202, 0)
# Constraint matrix
Amatrix<-matrix(c(0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 1, -1, 0, 0, -1, 0, 0, -1, 0, 0, 1, 1, 1, -1, 0, 0, -1, 0, 0, -1, 0, 0,0,
0, -1,0, 0, 1, 1, 1, 0, -1, 0, 0,-1, 0,-1, 0, 0, 1, 1, 1, 0, -1, 0, 0, -1, 0,0,
0, 0,-1, 0, 0,-1, 0, 1, 1, 1, 0, 0,-1, 0,-1, 0, 0, -1, 0, 1, 1, 1, 0, 0,-1,0,
0, 0, 0, -1, 0, 0,-1, 0, 0,-1, 1, 1, 1, 0, 0,-1, 0, 0,-1, 0, 0,-1, 1, 1, 1,0,
-1300, 460, 281,354,460,285,185,281,285,202,354,185,202,460,281,354, 460, 285,185, 281, 285,202, 354, 185, 202, 0,
0, 460, 281,354,460,285,185,281,285,202,354,185,202,460,281,354, 460, 285,185, 281, 285,202, 354, 185, 202,-1), nrow=18, byrow=TRUE)
# Right hand side constraint vector
Bvector<-c(10, 10, 10, 20, 10, 10, 10, 10, 10, 10, 10, 10, 0, 0, 0, 0, 0,0 )
# Constraint inequality direction vector
constrainttype<- c(">=", ">=",">=", ">=",">=", ">=",">=", ">=",">=", ">=",">=", ">=","=", "=" , "=" , "=" ,"<=", "=" )
# Solve the specified integer program by setting all.int=TRUE
optimum<-lp(direction="min", objective.in=ObjCoeff, const.mat = Amatrix, const.dir = constrainttype, const.rhs = Bvector, all.int =TRUE)
# Print constraint matrix to verify it was specified correctly
print(optimum$constraints)
# Check to see if the solver reached optimality (0 means yes)
print(optimum$status)
# Print values of each variable in the optimal solution
# note that they are all integer valued
print(optimum$solution)
# Print the optimal objective function value
print(optimum$objval)
R has its strengths for sure, but formulation of non-trivial LPs is not one of them.
There are several other open-source frameworks that are much more expressive. If you are a Python user, you can pick from many, including pyomo, gekko, cvxpy, pulp, or-tools, and others I'm sure.
I'm a fan of pyomo for model building, but it requires installing a separate solver, which isn't too difficult. There are several excellent free, open-source solvers in common use with different capabilities, such as cbc, glpk, and ipopt, and of course many excellent licensed solvers.
If you want to start with an "all in one", the pulp framework includes a solver with the build -- I forget which one.
There are many examples on this site for all the packages above if you search by tag.
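If you do need to stay with lpSolve in R, you can at least assemble the matrices programmatically instead of typing every entry, so a stray comma cannot silently shift a coefficient. A minimal sketch of the pattern only (the block and object names below are placeholders, not the full model above):
# build each constraint block with diag()/matrix() and glue the blocks with rbind()
n_var <- 26                                       # number of decision variables
# rows like 1-12 above, which each pick out one of x2..x13
pick <- cbind(matrix(0, 12, 1), diag(12), matrix(0, 12, n_var - 13))
# flow-balance and cost rows would be built the same way, then:
# Amatrix <- rbind(pick, balance, cost)           # 'balance' and 'cost' are placeholders
stopifnot(ncol(pick) == n_var)                    # cheap structural check
For anything bigger, though, an algebraic modelling layer such as the Python packages above pays off quickly.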

Contrasts between successive levels of a factor in R

I'm writing this post because I'm stuck on the analysis of a data file from a laboratory experiment.
In this experiment, I counted the number of females (of a small arthropod) present in a specific environment across 26 time points (TP). I want to understand whether the number of females differed between each pair of successive time points (e.g. whether the number of females counted at TP 1 differs from TP 2, the number counted at TP 2 differs from TP 3, and so on).
The data frame has the following columns:
Replicate (the number of the replicate, going from 1 to 8); TimePoint (the day on which the females were counted, going from 1 to 26); Females (the number of females counted at each time point); and Block (the experiment had 2 blocks).
I've tried some successive contrasts, but I don't think it's the best way. This is my code:
m1<-lmer(Females~TimePoint+(1|Block))
Suc_contrasts2<-glht(m1,linfct=mcp(TimePoint=
c(
"t1 - t2 == 0",
"t2 - t3 == 0",
"t3 - t4 == 0",
"t4 - t5 == 0",
"t5 - t6 == 0",
"t6 - t7 == 0",
"t7 - t8 == 0",
"t8 - t9 == 0",
"t9 - t10 == 0",
"t10 - t11== 0",
"t11 - t12 == 0",
"t12 - t13 == 0",
"t13 - t14 == 0",
"t14 - t15 == 0",
"t15 - t16 == 0",
"t16 - t17 == 0",
"t17 - t18 == 0",
"t18 - t19 == 0",
"t19 - t20 == 0",
"t20 - t21== 0",
"t21 - t22 == 0",
"t22 - t23 == 0",
"t23 - t24 == 0",
"t24 - t25 == 0",
"t25 - t26 == 0")))
summary(Suc_contrasts2)
summary(Suc_contrasts2, test=adjusted("bonferroni"))
I've been looking on Google for other ways to do planned comparisons, but everything I've found was not really appropriate for my data set. I'm still new at this, so sorry for any newbie mistakes.
Thus my question is: is there any better way to compare the number of females between each pair of successive time points?
Edit 1:
I also tried doing the contrasts like this, but the results don't seem right.
levels(TimePoint)
# [1] "t1" "t10" "t11" "t12" "t13" "t14" "t15" "t16" "t17" "t18" "t19" "t2" "t20" "t21" "t22" "t23" "t24" "t25" "t26"
# [20] "t3" "t4" "t5" "t6" "t7" "t8" "t9"
# tell R which TimePoints to compare
c1<- c(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #1v2
c2<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 0) #2v3
c3<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0) #3v4
c4<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0) #4v5
c5<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0) #5v6
c5<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0) #6v7
c6<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0) #7v8
c7<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1) #8v9
c8<- c(0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1) #9v10
c9<- c(0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #10v11
c10<- c(0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #11v12
c11<- c(0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #11v12
c12<- c(0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #12v13
c13<- c(0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #13v14
c14<- c(0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #14v15
c15<- c(0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #15v16
c16<- c(0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #16v17
c17<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #17v18
c18<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #18v19
c19<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #19v20
c20<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #20v21
c21<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #21v22
c22<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) #22v23
c23<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0) #23v24
c24<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0) #24v25
c25<- c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0) #25v26
# combined the above lines into a matrix
mat <- cbind(c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15,c16,c17,c18,c19,c20,c21,c22,c23,c24,c25)
# tell R that the matrix gives the contrasts you want
contrasts(TimePoint) <- mat
model2 <- aov(Females ~ TimePoint)
summary(model2)
# Df Sum Sq Mean Sq F value Pr(>F)
# line2$TimePoint 25 9694 387.8 6.939 <2e-16 ***
# Residuals 390 21794 55.9
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary.aov(model2, split=list(TimePoint=list("1v2"=1, "2v3" = 2, "3v4"=3, "4v5"=4, "5v6"=5, "6v7"=6, "7v8"=7, "8v9"=8, "9v10"=9, "10v11"=10, "11v12"=11, "12v13"=12, "13v14"=13, "14v15"=14, "15v16"=15, "16v17"=16, "17v18"=17, "18v19"=18, "19v20"=19, "20v21"=20, "21v22"=21, "22v23"=22, "23v24"=23, "24v25"=24, "25v26"=25)))
Thanks for your time,
André
Another option for fitting successive-differences contrasts:
m1 <- lmer(Females~TimePoint+(1|Block), contrasts=list(TimePoint=MASS::contr.sdif))
This doesn't take the multiplicity of testing into account (which you might get away with since these are pre-planned contrasts): you could use p.adjust() on the p-values.
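For example (assuming lmerTest is loaded so that summary() reports p-values; otherwise extract the p-values however you usually do):
pvals <- summary(m1)$coefficients[-1, "Pr(>|t|)"]  # drop the intercept row
p.adjust(pvals, method = "holm")                   # or "bonferroni", as in your own code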
@AndreasM's points about the ordering of your factor, choice of random vs. fixed effects, etc., should definitely be taken into consideration.
I think this website may help you: backward difference coding
Following the information there, difference contrasts between successive factor levels can be set as shown below. Note that I use only a simple example with 5 factor levels.
#create dummy data
df <- expand.grid(TimePoint = c("t01", "t02", "t03", "t04", "t05"),
Replicate = 1:8, Block = 1:2)
set.seed(2)
df$Females <- runif(nrow(df), min = 0, max = 100)
#set backward difference contrasts
contrasts(df$TimePoint) <- matrix(c(-4/5, 1/5, 1/5, 1/5, 1/5,
-3/5, -3/5, 2/5, 2/5, 2/5,
-2/5, -2/5, -2/5, 3/5, 3/5,
-1/5, -1/5, -1/5, -1/5, 4/5), ncol = 4)
When fitting a simple linear model, the parameter estimates correspond to the expected values, i.e., contrast "TimePoint1" corresponds to t2 - t1, contrast "TimePoint2" to t3 - t2 and so on.
#fit linear model
m1 <- lm(Females ~ TimePoint, data = df)
coef(m1)
(Intercept) TimePoint1 TimePoint2 TimePoint3 TimePoint4
50.295659 -10.116045 7.979465 -10.182389 2.209413
#mean by time point
with(df, tapply(Females, TimePoint, mean))
t01 t02 t03 t04 t05
57.23189 47.11584 55.09531 44.91292 47.12233
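As a quick check, the successive differences of those means reproduce the contrast estimates reported above:
# successive differences of the time-point means (t02 - t01, t03 - t02, ...)
diff(with(df, tapply(Females, TimePoint, mean)))
# should print (up to rounding): -10.116045  7.979465 -10.182389  2.209413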
I want to add that I am not sure whether what you are trying to do is sensible, but that is not something I feel comfortable evaluating, and it would be a question for CrossValidated. I worry that treating 26 time points as categorical factor levels is not the best way to go. Also, in your initial code you seem to fit a model treating block as a random factor. This does not make sense if block has only 2 levels (as you write); see for example here: Link
Finally, I noticed that in your example the factor levels of your TimePoint variable are not ordered correctly (t1, t10, t11, ... instead of t1, t2, t3, ...). You could change this, for instance, with this line of code:
df$TimePoint <- factor(df$TimePoint, levels = paste0("t", 1:26),
labels = paste0("t", sprintf("%02d", 1:26)))

Working on bipartite networks with igraph : problem with basic measures (density, normalized degree)

I'm new to bipartite network analysis and I'm having some trouble with basic measures.
I'm trying to work on bipartite networks without projecting them onto 1-mode graphs.
My problem comes from the fact that the igraph package lets you create bipartite graphs, but its measures do not seem to adapt to the specificity of these graphs.
So, my general question is: how do you proceed when you work directly on bipartite networks?
Here is a concrete example with density.
## Working with an incidence matrix (sample) with 47 columns and 10 rows (unweighted / undirected)
# Want to compute basic global index like density with igraph
library(igraph)
g <- graph.incidence(m, directed = F )
graph.density(g) # result = 0.04636591
# Now trying to compute the basic density for a bipartite graph without igraph (number of edges divided by the product of the sizes of the two vertex sets)
library(Matrix)
d <- nnzero(m)/ (ncol(m)*nrow(m)) # result 0.1574468
# It seems that bipartite package does the job
library(bipartite)
networklevel(m, index=c("connectance")) # result 0.1574468
But the bipartite package is very specific to ecology, and a lot of its measures are designed for food webs and interactions between species (and some, like the clustering coefficient, don't seem to take the bipartite nature of the graph into account, e.g. by computing 4-cycles).
So, are there simpler ways to work on bipartite networks with igraph? To measure some global indexes (density, clustering coefficient based on 4-cycles; I know that tnet does this but my actual networks are too large), and to normalize local indexes like degree, closeness, and betweenness centrality taking the bipartite specificity into account (as in Borgatti S.P., Everett M.G., 1997, "Network analysis of 2-mode data", Social Networks)?
Any advice will be appreciated!
Below is the code to reproduce the sample of my matrix "m":
m <- structure(c(1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1,
0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0,
1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0,
0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0,
0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0), .Dim = c(10L, 47L), .Dimnames = list(
c("02723", "13963", "F3238", "02194", "15051", "04477", "02164",
"06283", "04080", "08304"), c("1185241", "170063", "10350868",
"217831", "2210247", "2262963", "1816670", "1848354", "2232593",
"146214", "1880252", "2261639", "2262581", "2158177", "1850147",
"2262912", "146412", "2262957", "1566083", "1841811", "146384",
"216281", "2220957", "1846986", "1951567", "1581130", "105343",
"1580240", "170654", "1796236", "1835553", "1835848", "146400",
"1174872", "1283240", "2253354", "1283617", "146617", "160263",
"2263115", "184745", "1809858", "1496747", "10346824", "148730",
"2262582", "146268")))
Density: you already got it
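If you prefer to get the same number from the igraph object itself rather than from the incidence matrix, you can use the type vertex attribute that graph.incidence() sets, for example:
n1 <- sum(!V(g)$type)     # vertices of the first mode (rows of m)
n2 <- sum(V(g)$type)      # vertices of the second mode (columns of m)
ecount(g) / (n1 * n2)     # bipartite density, same as nnzero(m) / (nrow(m) * ncol(m))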
Degree
degv1 <- degree(g, V(g)[type == FALSE])
degv2 <- degree(g, V(g)[type == TRUE])
Normalized degree: divide by the vertex count of the other node category
degnormv1 <- degv1/length(V(g)[type == TRUE])
degnormv2 <- degv2/length(V(g)[type == FALSE])
No answer regarding closeness, betweenness, or the clustering coefficient.
For the normalized degree, here is a solution without igraph:
normalizedegreeV1 <- data.frame(ND = colSums(m)/nrow(m))
normalizedegreeV2 <- data.frame(ND = rowSums(m)/ncol(m))
but that leaves the other questions about centrality measures open...

lapply Question: How to streamline further without generating errors

I'm looking to condense the steps in my script, but I'm having issues with lapply(). It looks to be an issue with my code as usual. Any help would be much appreciated!
library(iNEXT)
sa4 <- list(Bird = list(structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1,
0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0,
0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0), .Dim = c(26L,
6L), .Dimnames = list(Scientific_name = c(" Pycnonotus plumosus",
"Acridotheres javanicus", "Aegithina tiphia", "Aethopyga siparaja",
"Anthreptes malacensis", "Aplonis panayensis", "Cacatua goffiniana",
"Callosciurus notatus", "Cinnyris jugularis", "Copsychus malabaricus",
"Copsychus saularis", "Dicaeum cruentatum", "Dicrurus paradiseus",
"Gorsachius melanolophus", "Larvivora cyane", "Macronus gularis",
"Oriolus chinensis", "Orthotomus atrogularis", "Otus lempiji",
"Pitta moluccensis", "Pycnonotus goiavier", "Pycnonotus plumosus",
"Pycnonotus zeylanicus", "Spilopelia chinensis", "Todiramphus chloris",
"Zosterops simplex"), Sampling_Point = c("SA_01", "SA_02", "SA_03",
"SA_04", "SA_05", "SA_06")))), Butterfly = list(structure(c(0,
0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0,
0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0), .Dim = c(10L,
4L), .Dimnames = list(Scientific_name = c("Burara harisa consobrina",
"Catopsilia pyranthe pyranthe", "Catopsilia scylla cornelia",
"Delias hyparete metarete", "Eurema sp", "Idea leuconoe clara",
"Pachliopta aristolochiae asteris", "Phalanta phalantha phalantha",
"Troides helena cerberus", "Zizula hylax pygmaea"), Sampling_Point = c("SA_01",
"SA_02", "SA_04", "SA_06")))), Mammal = list(structure(c(0, 1,
1, 1, 1, 0), .Dim = 2:3, .Dimnames = list(Scientific_name = c("Callosciurus notatus",
"Unidentified Fruit Bat sp"), Sampling_Point = c("SA_03", "SA_04",
"SA_05")))), Reptile = list(structure(1, .Dim = c(1L, 1L), .Dimnames = list(
Scientific_name = "Hemidactylus frenatus", Sampling_Point = "SA_05"))))
I've been doing it the longer way:
estimateD(sa4$Butterfly, datatype="incidence_raw") #Sampling coverage for butterflies
estimateD(sa4$Mammal, datatype="incidence_raw") #Sampling coverage for mammals
estimateD(sa4$Bird, datatype="incidence_raw") #Sampling coverage for birds
estimateD(sa4$Reptile, datatype="incidence_raw") #Sampling coverage for reptiles
Note that estimateD(sa4$Reptile, datatype="incidence_raw") generates an error since it only has one species.
Is it possible to condense these steps via lapply()? In this situation I only have 4 taxa, but for other projects there might be a lot more. I tried the following and it gives me an error message, which is actually the same message as the one above. I'm wondering whether lapply stops working if one component gives an error?
> (lapply(sa4, function(x) estimateD(x, datatype="incidence_raw")) )
Error in `[.data.frame`(tmp, , c(1, 2, 3, 7, 4, 5, 6)) :
undefined columns selected
In addition: Warning messages:
1: In FUN(X[[i]], ...) :
Invalid data type, the element of species by sites presence-absence matrix should be 0 or 1. Set nonzero elements as 1.
2: In log(b/Ub) : NaNs produced
Please let me know if I need to provide more information. Thank you!
This is a simple error-trapping issue. Wrap tryCatch() around the problem function call and have the error function return information on what happened.
results <- lapply(sa4, function(x) {
tryCatch(estimateD(x, datatype="incidence_raw"),
error = function(e) e)
})
Now determine which ran alright.
ok <- !sapply(results, inherits, "error")
ok
# Bird Butterfly Mammal Reptile
# TRUE TRUE TRUE FALSE
And check those that did.
results[ok]
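If you also want to see why the remaining elements failed, the captured condition objects carry their messages:
sapply(results[!ok], conditionMessage)   # error message(s) for the elements that failed (here, Reptile)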
The issue is with 'Reptile', so if we select the first 3 elements of the list, it should work:
lapply(sa4[1:3], function(x) estimateD(x, datatype="incidence_raw"))

Using car::Anova package for a doubly-multivariate MANOVA in R

I'm trying to run a repeated-measures MANOVA in R, which also contains a number of dependent variables (key outcome variables of behavioural tasks). The repeated-measures are due to a cross-over design, in which individuals took a drug and placebo (in randomised order).
The code I'm running looks like this:
imatrix <- matrix(c(
1, 0, 0, 0, 0, 0, 1,
1, 0, 0, 0, 0, 0, -1,
0, 1, 0, 0, 0, 0, 1,
0, 1, 0, 0, 0, 0, -1,
0, 0, 1, 0, 0, 0, 1,
0, 0, 1, 0, 0, 0, -1,
0, 0, 0, 1, 0, 0, 1,
0, 0, 0, 1, 0, 0, -1,
0, 0, 0, 0, 1, 0, 1,
0, 0, 0, 0, 1, 0, -1,
0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 0, 0, 1, -1
), 12, 7, byrow=TRUE)
colnames(imatrix) <- c("BCST", "CGT", "AST", "AGN", "DDT", "FERT", "NAC")
(imatrix <- list(measure=imatrix[,1:6], condition=imatrix[,7]))
contrasts(condition_factor) <- matrix(c(-1,1,1, -1), ncol=2)
doubly.mod <- lm(cbind(bcst_nac$totPersErr, bcst_placebo$totPersErr,
                 cantab_nac$CGT.Delay.aversion, cantab_placebo$CGT.Delay.aversion,
                 cantab_nac$AST.Switching.cost..Mean..correct., cantab_placebo$AST.Switching.cost..Mean..correct.,
                 cantab_nac$AGN.Affective.response.bias..Mean., cantab_placebo$AGN.Affective.response.bias..Mean.,
                 aucs_NAC, aucs_placebo,
                 fert_nac$FERTACCHA, fert_placebo$FERTACCHA) ~ 1)
Manova(doubly.mod, imatrix=imatrix, type =3)
The result is this error:
Error in Anova.III.mlm(mod, SSPE, error.df, idata, idesign, icontrasts, :
(list) object cannot be coerced to type 'double'
However, when I change imatrix back from a list to a matrix, I get this error response:
Error in do.call(cbind, imatrix) : second argument must be a list
I've based this on the example from the car::Anova documentation about doubly-multivariate analyses. Please let me know if you can help, or if I can add anything to make this question clearer.

Resources