How do I export my results to Excel from Julia? - julia

How can I export my tables in Excel from Julia?
My table is a matrix, as below:
x y z e r t b
1 0 1 1 0 1 1
0 1 1 0 1 1 1
1 0 1 0 0 0 1
I want to write it to an Excel sheet. Is there any way to do that?

Taking your table as the following (ignoring, for the moment, that it contains strings as well as integers):
julia> table = ["x" "y" "z" "e" "r" "t" "b";
1 0 1 1 0 1 1;
0 1 1 0 1 1 1;
1 0 1 0 0 0 1]
4×7 Array{Any,2}:
"x" "y" "z" "e" "r" "t" "b"
1 0 1 1 0 1 1
0 1 1 0 1 1 1
1 0 1 0 0 0 1
you can use Julia's writecsv, i.e. writecsv("mytable.csv", table).
For more complex data you should consider using DataFrames and CSV.jl for CSV export.
UPDATE: There is also Taro.jl, which allows one (through a Java API) to write Excel files at the individual-cell level.
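Note that writecsv was removed from Base in Julia 1.0; its replacement is writedlm from the standard library DelimitedFiles, with a comma delimiter. A minimal sketch (the file name mytable.csv is just an example):

```julia
using DelimitedFiles  # standard library, ships with Julia

table = ["x" "y" "z" "e" "r" "t" "b";
         1 0 1 1 0 1 1;
         0 1 1 0 1 1 1;
         1 0 1 0 0 0 1]

# writedlm(filename, data, delimiter) — a comma delimiter produces
# a CSV file that Excel can open directly
writedlm("mytable.csv", table, ',')
```

For a native .xlsx file, the XLSX.jl package can write worksheets directly; its exact API is worth checking against the current documentation.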

If you are fine working (in Excel or in LibreOffice) with the ODS format, you can use OdsIO.jl.
You can specify the exact cell location, so you can, for example, update only the cell region of interest with the new data.

Related

How to sum binary results up in R

I am new to R and still learning.
I have a dataset like this
county chemicalA chemicalB chemicalC chemicalD
A 1 0 1 0
B 0 0 0 0
C 1 0 0 0
D 0 1 1 0
I generate these binary variables by using code:
chemicalA=ifelse(Mean.Value_chemicalA>0,1,0)
chemicalA[is.na(chemicalA)]=0
Now I would like to sum the 1s up and see how many chemicals are detected in each place. My ideal result is like this:
county chemicalA chemicalB chemicalC chemicalD detection
A 1 0 1 0 2
B 0 0 0 0 0
C 1 0 0 0 1
D 0 1 1 1 3
I have tried
data$detection=chemicalA+chemicalB+chemicalC+chemicalD
But the result is only 2 and 0 and I don't know why. At first, I thought the chemicalX variables might not be numeric, so I checked with class(); they all return numeric.
Can someone help me with this? Thanks!
We can use rowSums on the columns whose names start with the prefix 'chemical' (selected with startsWith):
data$detection <- rowSums(data[startsWith(names(data), "chemical")])
rowSums works well when the column names share a common prefix. If they don't, we can select the columns by position and use apply; note that the non-numeric county column (column 1) must be excluded:
data$detection <- apply(data[, 2:5], 1, sum)
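A reproducible sketch using the sample data from the question:

```r
data <- data.frame(
  county    = c("A", "B", "C", "D"),
  chemicalA = c(1, 0, 1, 0),
  chemicalB = c(0, 0, 0, 1),
  chemicalC = c(1, 0, 0, 1),
  chemicalD = c(0, 0, 0, 0)
)

# rowSums adds up each row across the selected columns only,
# so the county column never enters the sum
data$detection <- rowSums(data[startsWith(names(data), "chemical")])
data$detection  # 2 0 1 2
```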

How to add multiple values to data.frame without loop?

Suppose I have a matrix D which contains death counts per year by specific ages.
I want to fill this matrix with the death counts stored in the
vector Age, but the following code gives the wrong answer. How should I write the code without a loop?
# Year and age grid for tables
Years=c(2007:2017)
Ages=c(60:70)
#Data.frame of deaths
D=data.frame(matrix(ncol=length(Years),nrow=length(Ages))); D[is.na(D)]=0
colnames(D)=Years
rownames(D)=Ages
Age=c(60,61,62,65,65,65,68,69,60)
year=2010
D[as.character(Age),as.character(year)]<-
D[as.character(Age),as.character(year)]+1
D[,'2010'] # 1 1 1 0 0 1 0 0 1 1 0
# Should be 2 1 1 0 0 3 0 0 1 1 0
You need to tabulate the ages first with table:
AgeTable = table(Age)
D[names(AgeTable), as.character(year)] = AgeTable
D[,'2010']
[1] 2 1 1 0 0 3 0 0 1 1 0
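One caveat: this assigns the tabulated counts rather than adding them, which is fine here because the target cells start at zero. If D may already hold counts, add the table instead of assigning it. A self-contained sketch (reusing the setup from the question):

```r
Years <- 2007:2017
Ages  <- 60:70
D <- data.frame(matrix(0, nrow = length(Ages), ncol = length(Years)))
colnames(D) <- Years
rownames(D) <- Ages

Age  <- c(60, 61, 62, 65, 65, 65, 68, 69, 60)
year <- 2010

# table() counts each age once per occurrence; adding the counts
# (instead of assigning them) preserves anything already in D
AgeTable <- table(Age)
D[names(AgeTable), as.character(year)] <-
  D[names(AgeTable), as.character(year)] + as.numeric(AgeTable)
D[, "2010"]  # 2 1 1 0 0 3 0 0 1 1 0
```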

merge multiple columns with condition

I have a data frame like this:
Q17a_17 Q17a_18 Q17a_19 Q17a_20 Q17a_21 Q17a_22 Q17a_23
1 NA NA NA NA NA NA NA
2 0 0 0 0 0 0 1
3 0 0 0 0 0 1 1
4 0 0 0 0 0 0 1
5 1 0 0 0 1 1 0
6 0 0 0 0 0 1 1
7 1 1 0 0 1 0 1
And I would like to merge Q17a_17, Q17a_19 and Q17a_23 into a new column with a new name. The "old" columns Q17a_17, Q17a_19 and Q17a_23 should be deleted.
The new column should contain a single value per row, following these rules: NA if there was an NA before, 1 if there was a 1 anywhere before (as in rows 3, 4, and 7), and 0 if there were only zeros before.
Maybe this is really simple, but I struggle already for hours...
The approach used here is to first compute a vector that is NA wherever at least one of the three columns is NA, and zero otherwise. We then compute a vector with the numeric result you want, obtained by logically ORing the three columns together. Because NA + x is NA, adding these two vectors produces the desired result.
na.vector <- df$Q17a_17 * df$Q17a_19 * df$Q17a_23
na.vector[!is.na(na.vector)] <- 0
num.vector <- as.numeric(df$Q17a_17 | df$Q17a_19 | df$Q17a_23)
df$new_column <- na.vector + num.vector
df <- df[ , -which(names(df) %in% c("Q17a_17", "Q17a_19", "Q17a_23"))]
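An alternative worth mentioning: pmax computes the element-wise maximum, which for 0/1 data is exactly a logical OR, and by default it returns NA whenever any input is NA, so it implements all three rules in one step. A sketch using the three relevant columns from the question's data:

```r
df <- data.frame(
  Q17a_17 = c(NA, 0, 0, 0, 1, 0, 1),
  Q17a_19 = c(NA, 0, 0, 0, 0, 0, 0),
  Q17a_23 = c(NA, 1, 1, 1, 0, 1, 1)
)

# element-wise maximum of 0/1 columns == logical OR;
# an NA in any column propagates to NA in the result
df$new_column <- do.call(pmax, df[c("Q17a_17", "Q17a_19", "Q17a_23")])
df$new_column  # NA 1 1 1 1 1 1
```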

Dynamically apply network analysis to list of text files in r

I have a folder "Disintegration T1" containing around 50 text files which look like this, where each text file name corresponds to a team:
> 1
0 0 0 0 1
1 0 0 0 1
0 1 0 0 1
0 0 0 0 0
1 1 1 1 0
> 2
0 1 1 0 1
0 0 1 1 1
1 1 0 1 1
1 1 1 0 1
0 0 0 0 0
> 3
0 1 1 1
1 0 0 0
0 0 0 0
1 0 0 0
I am attempting to automate the process of running network analysis on all of the text files, which includes the following functions:
library(sna)
centralization(x,betweenness)
centralization(x,degree)
centralization(x,evcent)
gden(x)
One of the problems I am having with this is that these analyses must be performed on matrices, and R defaults to reading these files as data frames. I am also having trouble constructing the code in general. My goal is to automate the process so that the results are added to a data frame where each row corresponds to each text file, and each column corresponds to the performed analysis, so that the final result looks like:
> results
gden betweenness degree evcent
1 # # # #
2 # # # #
3 # # # #
(Each # stands for the corresponding analysis result.) I have tried the following, which is not working:
networkanalysis<-function(x){
gden<-gden(x)
centb<-centralization(x,betweenness)
centd<-centralization(x,degree)
cente<-centralization(x,evcent)
return(gden)
return(centb)
return(centd)
return(cente)
}
out<-do.call("rbind",lapply(dir(),function(x)networkanalysis(data=as.matrix(read.table(x)))))
Where my working directory is set to the folder with the text files.
An R function exits at the first return statement, so only gden would ever be returned. Instead, wrap the four values in a data.frame and return that:
networkanalysis <- function(x) {
data.frame(
gden=gden(x),
centb=centralization(x,betweenness),
centd=centralization(x,degree),
cente=centralization(x,evcent)
)
}
Then, call your function on each file and rbind the results together with do.call.
out <- do.call(rbind, lapply(dir(), function(x) networkanalysis(as.matrix(read.table(x)))))
Edit
To add filenames as rownames,
out <- do.call(rbind, lapply(dir(), function(x)
`rownames<-`(networkanalysis(as.matrix(read.table(x))), basename(x))))
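The data-frame-vs-matrix issue is handled by the as.matrix call above: read.table returns a data frame, and as.matrix converts it before the sna functions see it. A self-contained sketch of just that reading step (the temporary file here is a stand-in for one of the team text files):

```r
# write a small adjacency matrix to a temporary file, as a stand-in
# for one of the team text files
tmp <- tempfile(fileext = ".txt")
writeLines(c("0 0 0 0 1",
             "1 0 0 0 1",
             "0 1 0 0 1",
             "0 0 0 0 0",
             "1 1 1 1 0"), tmp)

# read.table returns a data frame; as.matrix makes the result usable
# by matrix-based functions such as sna's gden() and centralization()
adj <- as.matrix(read.table(tmp))
is.matrix(adj)  # TRUE
dim(adj)        # 5 5
```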

Data Transformations in R

I need to look at the data in a data frame in a different way. Here is the problem:
I have a data frame as follows
Person Item BuyOrSell
1 a B
1 b S
1 a S
2 d B
3 a S
3 e S
I need it to be transformed in the following way: show the sum of all transactions made by each Person on individual items.
Person a b d e
1 2 1 0 0
2 0 0 1 0
3 1 0 0 1
I was able to achieve the above by using
table(Person, Item) in R.
The new requirement I have is to see the data as follows: show the sum of all transactions made by each Person on individual items, broken down by the transaction type (B or S).
Person aB aS bB bS dB dS eB eS
1 1 1 0 1 0 0 0 0
2 0 0 0 0 1 0 0 0
3 1 0 0 0 0 0 0 1
So I created a new column by pasting together the values of Item and BuyOrSell:
df$newcol <- paste(Item, BuyOrSell, sep = "")
table(Person,newcol)
and was able to achieve the above results.
Is there a better way in R to do this type of transformation?
Your way (creating a new column via paste) is probably the easiest. You could also do this:
require(reshape2)
dcast(df, Person ~ Item + BuyOrSell, fun.aggregate = length, drop = FALSE)
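A reproducible sketch using the sample data from the question. dcast's length aggregate counts the rows falling into each Person/Item/BuyOrSell cell, and drop = FALSE keeps the combinations that never occur (such as b_B) as zero columns:

```r
library(reshape2)

df <- data.frame(
  Person    = c(1, 1, 1, 2, 3, 3),
  Item      = c("a", "b", "a", "d", "a", "e"),
  BuyOrSell = c("B", "S", "S", "B", "S", "S")
)

# one column per Item/BuyOrSell combination: a_B, a_S, b_B, b_S, ...
res <- dcast(df, Person ~ Item + BuyOrSell,
             fun.aggregate = length, drop = FALSE)
```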
