This might be a fairly banal question. I am an R user who knows little about coding.
I am using the package 'EGAnet' to create a plot with the function EGA. This function has a 'plot.args' argument. From the ?EGA help page:
plot.args
List. A list of additional arguments for the network plot.
For plot.type = "GGally" (see ggnet2 for full list of arguments):
vsize Size of the nodes. Defaults to 6.
label.size Size of the labels. Defaults to 5.
alpha The level of transparency of the nodes, which might be a single value or a vector of values. Defaults to 0.7.
edge.alpha The level of transparency of the edges, which might be a single value or a vector of values. Defaults to 0.4.
legend.names A vector with names for each dimension
color.palette The color palette for the nodes. For custom colors, enter HEX codes for each dimension in a vector. See color_palette_EGA for more details and examples
I can't figure out how to build this list of parameters and pass it to the function EGA. For instance, I'd like to change the color.palette (color.palette = "grayscale") and vsize (vsize = 8) arguments.
How do I create this list to add to the function?
The final function call looks something like this:
ega.sds <- EGA(data = ex_graph, model = "glasso", plot.EGA = T, plot.args = ???)
What do I add in place of ???
Thank you a million.
I believe you just need to write:
plot.args = list(
  color.palette = "grayscale",
  vsize = 8
)
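For reference, a minimal sketch of the full call, reusing the data object ex_graph and the settings from your question:

library(EGAnet)

# pass the extra plotting options to EGA as a named list via plot.args
ega.sds <- EGA(
  data = ex_graph,
  model = "glasso",
  plot.EGA = TRUE,
  plot.args = list(
    color.palette = "grayscale",  # grayscale node colors
    vsize = 8                     # larger nodes
  )
)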
I am working on plotting a network that contains two different types of nodes, which I want to visualise with different shapes. For that I made an additional table in which I marked each structure's type with a binary indicator. Now I want to specify in my plot function that the structures with 1 are drawn as triangles and the ones with 0 as circles.
My data for the network is in the format of an adjacency matrix (I use igraph), and I am using ggnet2 for the plotting.
This is how I imported the data:
am <- as.matrix(read.csv2("mydata.csv", header = T, row.names = 1))
g <- graph_from_adjacency_matrix(am, mode = "undirected")
attr <- read.csv2("myattributes.csv", header = T, row.names = 1)
This is how I would plot it, but I don't know what to pass to the shape argument:
ggnet2(g, size = "degree", node.color = "darkgreen", shape = ??????)
Thanks in advance for your help!
Note that the package requirements for plotting igraph objects with ggnet2 include ggplot2, sna and network, as well as intergraph as a bridge.
ggnet2 is prettier, sure, but the igraph way is this:
g <- erdos.renyi.game(100,100,'gnm')
V(g)$shape <- sample(c('csquare','circle'), 100, replace=T)
plot(g, vertex.label = NA)
Note that I added two igraph-style shapes as vertex attributes to g above. In ggnet2 you can provide a vector of shapes, which can be any values (even a factor) or numbers (the usual gray circle is 19). Try these out to plot in ggnet2:
ggnet2(g, shape=19)
ggnet2(g, shape=10+round(1:100/10))
ggnet2(g, shape=factor(V(g)$shape))
V(g)$shape <- sample(c('One shape','Another shape'), 100, replace=T)
ggnet2(g, shape=V(g)$shape, size = "degree", node.color = "darkgreen")
Note that, if you add attributes to your vertices after loading attribute data separately (as you do above), the order of your data matters. Make sure your table import actually works as intended, with the correct attribute being assigned to the correct vertex. I find it good practice to tie all values as attributes on the igraph object (edge and vertex attributes alike) rather than letting the network data live in separate data frames or loose vectors that have to be combined in the right order to visualise the network correctly. A sketch applied to your data is below.
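Here is a minimal sketch, assuming your attribute file has a 0/1 column named type (a guess; substitute your actual column name) and that its row names match the vertex names taken from the adjacency matrix:

library(igraph)
library(GGally)  # provides ggnet2; also needs network, sna and intergraph installed

am   <- as.matrix(read.csv2("mydata.csv", header = TRUE, row.names = 1))
g    <- graph_from_adjacency_matrix(am, mode = "undirected")
attr <- read.csv2("myattributes.csv", header = TRUE, row.names = 1)

# attach the attribute to the vertices, matched by name so the order is correct
V(g)$type <- attr[V(g)$name, "type"]

# map 1 -> triangle (pch 17) and 0 -> circle (pch 19)
ggnet2(g,
       size = "degree",
       node.color = "darkgreen",
       shape = factor(V(g)$type, levels = c(0, 1), labels = c("circle", "triangle")),
       shape.palette = c("circle" = 19, "triangle" = 17))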
I apologize for this super basic question, but I am not experienced in plotting, and a lot of the documentation for Julia plotting assumes more knowledge than I have!
I am creating a scatter plot, using Plots, where each marker is plotted based on spatial position, and I want to scale the color by magnitude of value that each marker holds. I created a color gradient as such:
C(g::ColorGradient) = RGB[g[z] for z = LinRange(0,1,M)]
g = :inferno
cgrad(g,[0.01,0.99]) |> C
M is related to the number of markers; this way I create a suitable scale of colors based on the number of markers I have.
I assumed I was creating some kind of structure that would assign a color from this gradient based on a value ranging from 0.01 to 0.99. However, I guess I don't understand what the structure C is. When I assign color = C(v), where v is between 0 and 1.00, I get an error saying that C does not accept type Float64.
Is there a way I can assign a marker some color from this gradient based on its value? I have all of the values for each location stored in another array.
UPDATE: I have also tried indexing into C. I turned my values into Int64 ranging from 1 to 99 and tried to set color = C[v], but C also does not take type Int64.
UPDATE 2: OK, so I realized my issue was that I did not understand the |> functionality, so I rewrote the code to look like:
C(g::ColorGradient) = RGB[g[z] for z = LinRange(0,1,M)]
g = :inferno
myGrad = (cgrad(g,[0.00,1.00]) |> C)
and now I can index into my color gradient! However, I am still having an issue setting the color equal to the value stored in the myGrad array.
for i = 1:M
    X, Y = find_coords(i, pd)
    colors = myGrad[c_index[i]]
    outline = rand(Float64, 3)
    plt = plot!(X, Y, colors, markerstrokecolor = outline)
end
When I type myGrad[c_index[i]] into the REPL it shows a color. However, I am getting an error from the above code which states:
"Cannot convert RGB{Float64} to series data for plotting"
If I change the plot line as follows, I get a slightly different error:
plt = plot!(X,Y,markercolor = colors, markerstrokecolor = outline)
ERROR: LoadError: MethodError: no method matching plot_color(::Float64)
So for some reason I can't store this color as a color variable for my plot.
There are a few different issues at play here. Firstly, if you want to create a scatter plot, you should probably use scatter. It also doesn't seem necessary to plot things in a loop here, although it's hard to tell because your code isn't a minimal working example (MWE): it relies on things defined elsewhere in your code.
Here's an example of how this might work:
using Plots
# Create a discrete color gradient with 20 points
my_colors = [cgrad(:inferno, [0.01, 0.99])[z] for z ∈ range(0.0, 1.0, length = 20)]
# Draw some random data points
x, y = sort(rand(100)), rand(100)
# Assign a color between 1 and 20 on the color grid to each point
z = sort(rand(1:20, 100))
# Plot
scatter(x, y, color = my_colors[z], markerstrokecolor = "white", label = "",
markersize = [10 for _ ∈ 1:100])
This gives a scatter plot with the markers colored along the inferno gradient (resulting figure not shown).
I am plotting a set of 15 samples clustered in three groups A, B, C, and the heatmap orders them as C, A, B (I have read this is because the cluster with the strongest similarity is plotted on the right). I would like to order the clusters so the leaves of the dendrogram appear as A, B, C (therefore reorganising the order of the cluster branches). Is there a function that can help me do this?
The code I have used:
library(pheatmap)
pheatmap(mat, annotation_col = anno,
color = colorRampPalette(c("blue", "white", "red"))(50), show_rownames = F)
(cluster_cols=FALSE would not cluster the samples at all, but that is not what I want)
I have also found this on another forum, but I am unsure how to change the function code and whether it would work for me:
clustering_callback callback function to modify the clustering. Is
called with two parameters: original hclust object and the matrix used
for clustering. Must return a hclust object.
Hi, I am not sure if this is of any help to you, but when you check ?pheatmap and scroll down to the examples, the last snippet of code actually gives exactly that example:
# Modify ordering of the clusters using clustering callback option
callback = function(hc, mat){
  # first right singular vector of t(mat): one value per clustered item (leaf)
  sv = svd(t(mat))$v[, 1]
  # reorder the dendrogram branches using these values as weights
  dend = reorder(as.dendrogram(hc), wts = sv)
  # pheatmap expects an hclust object back
  as.hclust(dend)
}
pheatmap(test, clustering_callback = callback)
I tried it on my heatmap, and the callback defined above sorted the clusters exactly the way I needed them. Although I have to admit (as I am new to R) that I don't fully understand what the callback function does.
Maybe you can also write a function with the dendsort package, as I know you can reorder the branches of a dendrogram with it.
In this case, luckily, the clustering of the columns coincides with the sample number order (which is similar to the dendrogram order), so I added cluster_cols = FALSE, which solved the issue of re-clustering the columns (and avoided writing the callback function).
pheatmap(mat,
annotation_col = anno,
fontsize_row = 2,
show_rownames = T,
cutree_rows = 3,
cluster_cols = FALSE)
# install.packages("dendsort")
library(dendsort)
sort_hclust <- function(...) as.hclust(dendsort(as.dendrogram(...)))
# cluster the columns (samples) on the transposed matrix, sort the branches,
# and pass the resulting hclust object to pheatmap
pheatmap(mat, annotation_col = anno,
         cluster_cols = sort_hclust(hclust(dist(t(mat)))))
I am using the plot function from the kohonen package in R. The function inherently produces a title, "property plot" or whatever other plot type I specify. I can change the name and customize it, but I am not able to alter its size.
I want to either remove the inherent title plotting or increase the size of the title.
I have attached a plot in which there are two different titles: the smaller one is the inherent one and the larger one is added through the title function. The code is given below:
plot(knime.model, type = "property", property = unscaled,
     main = colnames(knime.model$data)[5], cex = 2, font.main = 4,
     palette.name = coolBlueHotRed, heatkey = F,
     ncolors = length(unique(data[, 5])))
title(colnames(knime.model$data)[5], cex.main = 3)
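One way around this, assuming the kohonen plot method simply passes main through to the usual title machinery (which your attached plot suggests, since the smaller title follows main): suppress the built-in title by passing an empty main, then draw your own with title() at the size you want. This is a sketch based on your code, not tested against your data:

# suppress the small built-in title with an empty main,
# then add the larger title manually
plot(knime.model, type = "property", property = unscaled,
     main = "", palette.name = coolBlueHotRed, heatkey = F,
     ncolors = length(unique(data[, 5])))
title(colnames(knime.model$data)[5], cex.main = 3, font.main = 4)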
I'm attempting to set manual colors for Dimple dPlot line values and having some trouble.
d1 <- dPlot(
x="Date",
y="Count",
groups = "Category",
data = AB_DateCategory,
type = 'line'
)
d1$xAxis(orderRule = "Date")
d1$yAxis(type = "addMeasureAxis")
d1$xAxis(
  type = "addTimeAxis",
  inputFormat = "%Y-%m-%d",
  outputFormat = "%Y-%m-%d"
)
The plot comes out looking great, but I would like to manually set the "Category" colors. Right now, it's set to the defaults and I cannot seem to find a method of manually setting a scale.
I have been able to set the defaults using brewer.pal, but I want to match other colors in my report:
library(RColorBrewer)  # brewer.pal comes from here
d1$defaultColors(brewer.pal(n = 4, "Accent"))
Ideally, these are my four colors - the category values I'm grouping on are R, D, O and U.
("#377EB8", "#4DAF4A", "#E41A1C", "#984EA3"))
If I understand correctly, you want to make sure R is #377EB8, etc. To match R, D, O and U consistently to these colors, especially across multiple charts, you will need to do something like this:
d1$defaultColors = "#!d3.scale.ordinal().range(['#377EB8', '#4DAF4A', '#E41A1C', '#984EA3']).domain(['R','D','O','U'])!#"
This is on my list of things to make easier.
Let me know if this doesn't work.
The issue with the accepted answer above is that defining an ordinal scale will not guarantee that specific colors are bound to the specific categories R, D, O and U; the color mapping will change depending on the input data. To assign each color specifically, you can use assignColor, like this:
d1$setTemplate(afterScript = '<script>
myChart.assignColor("R","#377EB8");
myChart.draw();
</script>')
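Extending the same pattern to all four categories, pairing them with the colors in the order given in the question and the accepted answer (this assumes assignColor takes one category/color pair per call, as in the snippet above):

d1$setTemplate(afterScript = '<script>
  // one assignColor call per category, colors as listed in the question
  myChart.assignColor("R", "#377EB8");
  myChart.assignColor("D", "#4DAF4A");
  myChart.assignColor("O", "#E41A1C");
  myChart.assignColor("U", "#984EA3");
  myChart.draw();
</script>')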