How to make multiple groups of histograms? [duplicate]

So here is what my data set looks like:
In [1]: df1=pd.DataFrame(np.random.rand(4,2),index=["A","B","C","D"],columns=["I","J"])
In [2]: df2=pd.DataFrame(np.random.rand(4,2),index=["A","B","C","D"],columns=["I","J"])
In [3]: df1
Out[3]:
I J
A 0.675616 0.177597
B 0.675693 0.598682
C 0.631376 0.598966
D 0.229858 0.378817
In [4]: df2
Out[4]:
I J
A 0.939620 0.984616
B 0.314818 0.456252
C 0.630907 0.656341
D 0.020994 0.538303
I want to have a stacked bar plot for each dataframe, but since they have the same index, I'd like to have 2 stacked bars per index.
I've tried to plot both on the same axes:
In [5]: ax = df1.plot(kind="bar", stacked=True)
In [5]: ax2 = df2.plot(kind="bar", stacked=True, ax = ax)
But the bars overlap.
Then I tried to concatenate the two datasets first:
pd.concat(dict(df1=df1, df2=df2), axis=1).plot(kind="bar", stacked=True)
but here everything is stacked.
My best try is:
pd.concat(dict(df1=df1, df2=df2), axis=0).plot(kind="bar", stacked=True)
Which gives:
This is basically what I want, except that I want the bars ordered as
(df1,A) (df2,A) (df1,B) (df2,B) etc...
I guess there is a trick, but I can't find it!
After @bgschiller's answer I got this:
Which is almost what I want. I would like the bars to be clustered by index, in order to have something visually clear.
Bonus: having the x-labels not redundant, something like:
df1 df2 df1 df2
_______ _______ ...
A B

I eventually found a trick (edit: see below for using seaborn and a long-form dataframe):
Solution with pandas and matplotlib
Here it is with a more complete example:
import pandas as pd
import matplotlib.cm as cm
import numpy as np
import matplotlib.pyplot as plt
def plot_clustered_stacked(dfall, labels=None, title="multiple stacked bar plot", H="/", **kwargs):
    """Given a list of dataframes, with identical columns and index, create a clustered stacked bar plot.

    labels is a list of the names of the dataframes, used for the legend
    title is a string for the title of the plot
    H is the hatch used to identify the different dataframes"""

    n_df = len(dfall)
    n_col = len(dfall[0].columns)
    n_ind = len(dfall[0].index)
    axe = plt.subplot(111)

    for df in dfall:  # for each dataframe
        axe = df.plot(kind="bar",
                      linewidth=0,
                      stacked=True,
                      ax=axe,
                      legend=False,
                      grid=False,
                      **kwargs)  # make bar plots

    h, l = axe.get_legend_handles_labels()  # get the handles we want to modify
    for i in range(0, n_df * n_col, n_col):  # len(h) = n_col * n_df
        for j, pa in enumerate(h[i:i + n_col]):
            for rect in pa.patches:  # for each index
                rect.set_x(rect.get_x() + 1 / float(n_df + 1) * i / float(n_col))
                rect.set_hatch(H * int(i / n_col))  # edited part
                rect.set_width(1 / float(n_df + 1))

    axe.set_xticks((np.arange(0, 2 * n_ind, 2) + 1 / float(n_df + 1)) / 2.)
    axe.set_xticklabels(df.index, rotation=0)
    axe.set_title(title)

    # Add invisible data to add another legend
    n = []
    for i in range(n_df):
        n.append(axe.bar(0, 0, color="gray", hatch=H * i))

    l1 = axe.legend(h[:n_col], l[:n_col], loc=[1.01, 0.5])
    if labels is not None:
        l2 = plt.legend(n, labels, loc=[1.01, 0.1])
    axe.add_artist(l1)
    return axe
# create fake dataframes
df1 = pd.DataFrame(np.random.rand(4, 5),
                   index=["A", "B", "C", "D"],
                   columns=["I", "J", "K", "L", "M"])
df2 = pd.DataFrame(np.random.rand(4, 5),
                   index=["A", "B", "C", "D"],
                   columns=["I", "J", "K", "L", "M"])
df3 = pd.DataFrame(np.random.rand(4, 5),
                   index=["A", "B", "C", "D"],
                   columns=["I", "J", "K", "L", "M"])

# Then, just call:
plot_clustered_stacked([df1, df2, df3], ["df1", "df2", "df3"])
And it gives this:
You can change the colors of the bars by passing a cmap argument:
plot_clustered_stacked([df1, df2, df3],
                       ["df1", "df2", "df3"],
                       cmap=plt.cm.viridis)
Solution with seaborn:
Given the same df1, df2 and df3 as above, I convert them to long form:
df1["Name"] = "df1"
df2["Name"] = "df2"
df3["Name"] = "df3"
dfall = pd.concat([pd.melt(i.reset_index(),
                           id_vars=["Name", "index"])  # transform each df into tidy format
                   for i in [df1, df2, df3]],
                  ignore_index=True)
The problem with seaborn is that it doesn't stack bars natively, so the trick is to plot the cumulative sum of each bar on top of each other:
dfall.set_index(["Name", "index", "variable"], inplace=True)
dfall["vcs"] = dfall.groupby(level=["Name", "index"]).cumsum()
dfall.reset_index(inplace=True)
>>> dfall.head(6)
Name index variable value vcs
0 df1 A I 0.717286 0.717286
1 df1 B I 0.236867 0.236867
2 df1 C I 0.952557 0.952557
3 df1 D I 0.487995 0.487995
4 df1 A J 0.174489 0.891775
5 df1 B J 0.332001 0.568868
Then loop over each group of variable and plot the cumulative sum:
import seaborn as sns  # needed for the barplot below

c = ["blue", "purple", "red", "green", "pink"]
for i, g in enumerate(dfall.groupby("variable")):
    ax = sns.barplot(data=g[1],
                     x="index",
                     y="vcs",
                     hue="Name",
                     color=c[i],
                     zorder=-i,  # so first bars stay on top
                     edgecolor="k")
    ax.legend_.remove()  # remove the redundant legends
It lacks a legend, which I think can be added easily. The problem is that instead of hatches (which could be added easily), we differentiate the dataframes by a gradient of lightness, and it's a bit too light for the first one; I don't really know how to change that without changing each rectangle one by one (as in the first solution).
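For what it's worth, a manual legend can be bolted on with proxy artists; this is only a rough sketch, assuming the loop above has just run so that c, dfall and ax are still in scope:
import matplotlib.patches as mpatches

# one proxy patch per stacked variable, reusing the colors from the loop
variables = [v for v, _ in dfall.groupby("variable")]
handles = [mpatches.Patch(facecolor=c[i], edgecolor="k", label=v)
           for i, v in enumerate(variables)]
ax.legend(handles=handles, title="variable", loc=[1.01, 0.5])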
Tell me if you don't understand something in the code.
Feel free to re-use this code which is under CC0.

This is a great start, but I think the colors could be modified a bit for clarity. Also, be careful about star-importing everything from Altair, as this may cause collisions with existing objects in your namespace. Here is some reconfigured code to display the correct colors when stacking the values:
Import packages
import pandas as pd
import numpy as np
import altair as alt
Generate some random data
df1 = pd.DataFrame(10 * np.random.rand(4, 3), index=["A", "B", "C", "D"], columns=["I", "J", "K"])
df2 = pd.DataFrame(10 * np.random.rand(4, 3), index=["A", "B", "C", "D"], columns=["I", "J", "K"])
df3 = pd.DataFrame(10 * np.random.rand(4, 3), index=["A", "B", "C", "D"], columns=["I", "J", "K"])

def prep_df(df, name):
    df = df.stack().reset_index()
    df.columns = ['c1', 'c2', 'values']
    df['DF'] = name
    return df

df1 = prep_df(df1, 'DF1')
df2 = prep_df(df2, 'DF2')
df3 = prep_df(df3, 'DF3')
df = pd.concat([df1, df2, df3])
Plot data with Altair
alt.Chart(df).mark_bar().encode(
    # tell Altair which field to group columns on
    x=alt.X('c2:N', title=None),
    # tell Altair which field to use as Y values and how to calculate them
    y=alt.Y('sum(values):Q',
            axis=alt.Axis(grid=False, title=None)),
    # tell Altair which field to use as the set of columns to be represented in each group
    column=alt.Column('c1:N', title=None),
    # tell Altair which field to use for color segmentation
    color=alt.Color('DF:N',
                    scale=alt.Scale(
                        # make it look pretty with an enjoyable color palette
                        range=['#96ceb4', '#ffcc5c', '#ff6f69'],
                    ),
                    )
).configure_view(
    # remove grid lines around column clusters
    strokeOpacity=0
)

I have managed to do the same using pandas and matplotlib subplots with basic commands.
Here's an example:
fig, axes = plt.subplots(nrows=1, ncols=3)
ax_position = 0
for concept in df.index.get_level_values('concept').unique():
    idx = pd.IndexSlice
    subset = df.loc[idx[[concept], :],
                    ['cmp_tr_neg_p_wrk', 'exp_tr_pos_p_wrk',
                     'cmp_p_spot', 'exp_p_spot']]
    print(subset.info())
    subset = subset.groupby(
        subset.index.get_level_values('datetime').year).sum()
    subset = subset / 4    # quarter hours
    subset = subset / 100  # installed capacity
    ax = subset.plot(kind="bar", stacked=True, colormap="Blues",
                     ax=axes[ax_position])
    ax.set_title("Concept \"" + concept + "\"", fontsize=30, alpha=1.0)
    ax.set_ylabel("Hours", fontsize=30)
    ax.set_xlabel("Concept \"" + concept + "\"", fontsize=30, alpha=0.0)
    ax.set_ylim(0, 9000)
    ax.set_yticks(range(0, 9000, 1000))
    ax.set_yticklabels(labels=range(0, 9000, 1000), rotation=0,
                       minor=False, fontsize=28)
    ax.set_xticklabels(labels=['2012', '2013', '2014'], rotation=0,
                       minor=False, fontsize=28)
    handles, labels = ax.get_legend_handles_labels()
    ax.legend(['Market A', 'Market B',
               'Market C', 'Market D'],
              loc='upper right', fontsize=28)
    ax_position += 1

# look "three subplots"
# plt.tight_layout(pad=0.0, w_pad=-8.0, h_pad=0.0)

# look "one plot"
plt.tight_layout(pad=0., w_pad=-16.5, h_pad=0.0)
axes[1].set_ylabel("")
axes[2].set_ylabel("")
axes[1].set_yticklabels("")
axes[2].set_yticklabels("")
axes[0].legend().set_visible(False)
axes[1].legend().set_visible(False)
axes[2].legend(['Market A', 'Market B',
                'Market C', 'Market D'],
               loc='upper right', fontsize=28)
The dataframe structure of "subset" before grouping looks like this:
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 105216 entries, (D_REC, 2012-01-01 00:00:00) to (D_REC, 2014-12-31 23:45:00)
Data columns (total 4 columns):
cmp_tr_neg_p_wrk 105216 non-null float64
exp_tr_pos_p_wrk 105216 non-null float64
cmp_p_spot 105216 non-null float64
exp_p_spot 105216 non-null float64
dtypes: float64(4)
memory usage: 4.0+ MB
and the plot like this:
It is formatted in the "ggplot" style with the following header:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

The answer by @jrjc on the use of seaborn is very clever, but it has a few problems, as noted by the author:
The "light" shading is too pale when only two or three categories are needed. It makes colour series (pale blue, blue, dark blue, etc.) difficult to distinguish.
The legend is not produced to distinguish the meaning of the shadings ("pale" means what?)
More importantly, however, I found out that, because of the groupby statement in the code, the solution works only if the columns are ordered alphabetically. If I rename the columns ["I", "J", "K", "L", "M"] to something anti-alphabetical (["zI", "yJ", "xK", "wL", "vM"]), I get this graph instead:
I strove to resolve these problems with the plot_grouped_stackedbars() function in this open-source python module.
It keeps the shading within reasonable range
It auto-generates a legend that explains the shading
It does not rely on groupby
It also allows for
various normalization options (see below normalization to 100% of maximum value)
the addition of error bars
See the full demo here. I hope this proves useful and answers the original question.

Here is a more succinct implementation of the answer from Cord Kaldemeyer. The idea is to reserve as much width as necessary for the plots. Then each cluster gets a subplot of the required length.
# Data and imports
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import MaxNLocator
import matplotlib.gridspec as gridspec
import matplotlib
matplotlib.style.use('ggplot')
np.random.seed(0)
df = pd.DataFrame(np.asarray(1+5*np.random.random((10,4)), dtype=int),columns=["Cluster", "Bar", "Bar_part", "Count"])
df = df.groupby(["Cluster", "Bar", "Bar_part"])["Count"].sum().unstack(fill_value=0)
display(df)
# plotting
clusters = df.index.levels[0]
inter_graph = 0
maxi = np.max(np.sum(df, axis=1))
total_width = len(df) + inter_graph * (len(clusters) - 1)
fig = plt.figure(figsize=(total_width, 10))
gridspec.GridSpec(1, total_width)
axes = []
ax_position = 0
for cluster in clusters:
    subset = df.loc[cluster]
    ax = subset.plot(kind="bar", stacked=True, width=0.8,
                     ax=plt.subplot2grid((1, total_width), (0, ax_position),
                                         colspan=len(subset.index)))
    axes.append(ax)
    ax.set_title(cluster)
    ax.set_xlabel("")
    ax.set_ylim(0, maxi + 1)
    ax.yaxis.set_major_locator(MaxNLocator(integer=True))
    ax_position += len(subset.index) + inter_graph

for i in range(1, len(clusters)):
    axes[i].set_yticklabels("")
    axes[i - 1].legend().set_visible(False)

axes[0].set_ylabel("y_label")
fig.suptitle('Big Title', fontsize="x-large")
legend = axes[-1].legend(loc='upper right', fontsize=16, framealpha=1).get_frame()
legend.set_linewidth(3)
legend.set_edgecolor("black")
plt.show()
The result is the following:

We tried to do this just using matplotlib. We converted the values to cumulative values as shown below:
# get cumulative values
cum_val = [a[0]]
for j in range(1, len(a)):
    cum_val.append(cum_val[j - 1] + a[j])
We then plotted the bars in descending order of height so that they are all visible. We added some hard-coded color schemes; colors can also be generated sequentially from the RGB cube. The package can be installed with
pip install groupstackbar
Then, it can be imported and used as shown below. There is also a function (generate_dummy_data) that generates dummy.csv sample data for testing the functionality.
import matplotlib.pyplot as plt
import csv
import random
import groupstackbar

def generate_dummy_data():
    with open('dummy_data.csv', 'w') as f:
        csvwriter = csv.writer(f)
        csvwriter.writerow(['Week', 'State_SEIR', 'Age_Cat', 'Value'])
        for i in ['Week 1', 'Week 2', 'Week 3']:  # 3 weeks
            for j in ['S', 'E', 'I', 'R']:
                for k in ['Age Cat 1', 'Age Cat 2', 'Age Cat 3', 'Age Cat 4', 'Age Cat 5']:
                    csvwriter.writerow([i, j, k, int(random.random() * 100)])

generate_dummy_data()
f = groupstackbar.plot_grouped_stacks('dummy_data.csv', BGV=['State_SEIR', 'Week', 'Age_Cat'],
                                      extra_space_on_top=30)
plt.savefig("output.png", dpi=500)
The plot_grouped_stacks() function of groupstackbar is reproduced below:
"""
Arguments:
filename:
a csv filename with 4 headers, H1, H2, H3 and H4. Each one of H1/H2/H3/H4 are strings.
the first three headers(H1/H2/H3) should identify a row uniquely
the fourth header H4 contains the value (H4 must be integer or floating; cannot be a string)
.csv files without headers will result in the first row being read as headers.
duplicates (relevant for csv inputs):
duplicate entries imply two rows with same <H1/H2/H3> identifier.
In case of duplicates aggregation is performed before proceeding, both the duplicates are binned together to increase the target value
BGV:a python list of three headers in order for stacking (Bars, Groups and Vertical Stacking)
for example, if BGV=[H2, H1, H3], the group stack plot will be such that:
maximum number of bars = number of unique values under column H2
maximum number of bars grouped together horizontally(side-by-side) = number of
unique values under column H1
maximum number of vertical stacks in any bar = number of unique values under column H2
"""
def plot_grouped_stacks(filename, BGV, fig_size=(10, 8),
                        intra_group_spacing=0.1,
                        inter_group_spacing=10,
                        y_loc_for_group_name=-5,
                        y_loc_for_hstack_name=5,
                        fontcolor_hstacks='blue',
                        fontcolor_groups='black',
                        fontsize_hstacks=20,
                        fontsize_groups=30,
                        x_trim_hstack_label=0,
                        x_trim_group_label=0,
                        extra_space_on_top=20):
    figure_ = plt.figure(figsize=fig_size)
    size = figure_.get_size_inches()
    figure_.add_subplot(1, 1, 1)

    # sanity checks for inputs; some trivial exception handling
    if intra_group_spacing >= 100:
        print("Percentage more than 100 for variable intra_group_spacing, Aborting!")
        return
    else:
        intra_group_spacing = intra_group_spacing * size[0] / 100  # converting percentages to inches

    if inter_group_spacing >= 100:
        print("Percentage more than 100 for variable inter_group_spacing, Aborting!")
        return
    else:
        inter_group_spacing = inter_group_spacing * size[0] / 100  # converting percentages to inches

    if y_loc_for_group_name >= 100:
        print("Percentage more than 100 for variable y_loc_for_group_name, Aborting!")
        return
    else:
        # the multiplier 90 is set empirically to roughly align the percentage value
        # <this is a quick fix solution, which needs to be improved later>
        y_loc_for_group_name = 90 * y_loc_for_group_name * size[1] / 100  # converting percentages to inches

    if y_loc_for_hstack_name >= 100:
        print("Percentage more than 100 for variable y_loc_for_hstack_name, Aborting!")
        return
    else:
        y_loc_for_hstack_name = 70 * y_loc_for_hstack_name * size[1] / 100  # converting percentages to inches

    if x_trim_hstack_label >= 100:
        print("Percentage more than 100 for variable x_trim_hstack_label, Aborting!")
        return
    else:
        x_trim_hstack_label = x_trim_hstack_label * size[0] / 100  # converting percentages to inches

    if x_trim_group_label >= 100:
        print("Percentage more than 100 for variable x_trim_group_label, Aborting!")
        return
    else:
        x_trim_group_label = x_trim_group_label * size[0] / 100  # converting percentages to inches
    fileread_list = []
    with open(filename) as f:
        for line_num, row in enumerate(f):
            r = row.strip().split(',')
            if len(r) != 4:
                print('4 items not found at line', line_num, 'of', filename)
                return
            else:
                fileread_list.append(r)

    # inputs:
    bar_variable = BGV[0]
    group_variable = BGV[1]
    vertical_stacking_variable = BGV[2]

    first_line = fileread_list[0]
    for i in range(4):
        if first_line[i] == vertical_stacking_variable:
            header_num_Of_vertical_stacking = i
            break

    sorted_order_for_stacking = []
    for listed in fileread_list[1:]:  # skipping the header line
        sorted_order_for_stacking.append(listed[header_num_Of_vertical_stacking])
    sorted_order_for_stacking = list(set(sorted_order_for_stacking))
    list.sort(sorted_order_for_stacking)
    sorted_order_for_stacking_V = list(sorted_order_for_stacking)
    #####################

    first_line = fileread_list[0]
    for i in range(4):
        if first_line[i] == bar_variable:
            header_num_Of_bar_Variable = i
            break

    sorted_order_for_stacking = []
    for listed in fileread_list[1:]:  # skipping the header line
        sorted_order_for_stacking.append(listed[header_num_Of_bar_Variable])
    sorted_order_for_stacking = list(set(sorted_order_for_stacking))
    list.sort(sorted_order_for_stacking)
    sorted_order_for_stacking_H = list(sorted_order_for_stacking)
    ######################

    first_line = fileread_list[0]
    for i in range(4):
        if first_line[i] == group_variable:
            header_num_Of_group_Variable = i
            break

    sorted_order_for_stacking = []
    for listed in fileread_list[1:]:  # skipping the header line
        sorted_order_for_stacking.append(listed[header_num_Of_group_Variable])
    sorted_order_for_stacking = list(set(sorted_order_for_stacking))
    list.sort(sorted_order_for_stacking)
    sorted_order_for_stacking_G = list(sorted_order_for_stacking)
    #########################

    print(" Vertical/Horizontal/Groups ")
    print(sorted_order_for_stacking_V, " : Vertical stacking labels")
    print(sorted_order_for_stacking_H, " : Horizontal stacking labels")
    print(sorted_order_for_stacking_G, " : Group names")

    # +1 because we need one space before and after as well
    each_group_width = (size[0] - (len(sorted_order_for_stacking_G) + 1) *
                        inter_group_spacing) / len(sorted_order_for_stacking_G)

    # -1 because we need n-1 spaces between bars if there are n bars in each group
    each_bar_width = (each_group_width - (len(sorted_order_for_stacking_H) - 1) *
                      intra_group_spacing) / len(sorted_order_for_stacking_H)

    # colormaps
    number_of_color_maps_needed = len(sorted_order_for_stacking_H)
    number_of_levels_in_each_map = len(sorted_order_for_stacking_V)
    c_map_vertical = {}
    for i in range(number_of_color_maps_needed):
        try:
            # sequential_colors is a hard-coded palette defined elsewhere in the module
            c_map_vertical[sorted_order_for_stacking_H[i]] = sequential_colors[i]
        except Exception:
            print("Something went wrong with hardcoded colors!\n reverting to custom colors (linear in RGB)")
            # getColorMaps is also defined elsewhere in the module
            c_map_vertical[sorted_order_for_stacking_H[i]] = getColorMaps(N=number_of_levels_in_each_map, type='S')
    ##
    state_num = -1
    max_bar_height = 0
    for state in sorted_order_for_stacking_H:
        state_num += 1
        week_num = -1
        # NOTE: the group values are hard-coded to the dummy data here
        for week in ['Week 1', 'Week 2', 'Week 3']:
            week_num += 1
            a = [0] * len(sorted_order_for_stacking_V)
            for i in range(len(sorted_order_for_stacking_V)):
                for line_num in range(1, len(fileread_list)):  # skipping the header line
                    listed = fileread_list[line_num]
                    if listed[1] == state and listed[0] == week and listed[2] == sorted_order_for_stacking_V[i]:
                        a[i] = float(listed[3])

            # get cumulative values
            cum_val = [a[0]]
            for j in range(1, len(a)):
                cum_val.append(cum_val[j - 1] + a[j])
            max_bar_height = max([max_bar_height, max(cum_val)])

            plt.text(x=week_num * (each_group_width + inter_group_spacing) - x_trim_group_label,
                     y=y_loc_for_group_name, s=sorted_order_for_stacking_G[week_num],
                     fontsize=fontsize_groups, color=fontcolor_groups)

            # state labels need to be printed just once for each week, hence putting them outside the loop
            plt.text(x=week_num * (each_group_width + inter_group_spacing) +
                       state_num * (each_bar_width + intra_group_spacing) - x_trim_hstack_label,
                     y=y_loc_for_hstack_name, s=sorted_order_for_stacking_H[state_num],
                     fontsize=fontsize_hstacks, color=fontcolor_hstacks)

            if week_num == 1:
                # label only in the first week
                for i in range(len(sorted_order_for_stacking_V) - 1, -1, -1):
                    # trick to make them all visible: plot in descending order of their height! :)
                    plt.bar(week_num * (each_group_width + inter_group_spacing) +
                            state_num * (each_bar_width + intra_group_spacing),
                            height=cum_val[i],
                            width=each_bar_width,
                            color=c_map_vertical[state][i],
                            label=state + "_" + sorted_order_for_stacking_V[i])
            else:
                # no label after the first week (as it is just repetition)
                for i in range(len(sorted_order_for_stacking_V) - 1, -1, -1):
                    plt.bar(week_num * (each_group_width + inter_group_spacing) +
                            state_num * (each_bar_width + intra_group_spacing),
                            height=cum_val[i],
                            width=each_bar_width,
                            color=c_map_vertical[state][i])

    plt.ylim(0, max_bar_height * (1 + extra_space_on_top / 100))
    plt.tight_layout()
    plt.xticks([], [])
    plt.legend(ncol=len(sorted_order_for_stacking_H))
    return figure_
A pictorial README is included in the repository to help users quickly figure out the parameters to the function. Please feel free to raise an issue or open a pull request. Currently the input format is .csv files with 4 columns, but pandas dataframe input can be added if necessary.
https://github.com/jimioke/groupstackbar

You're on the right track! In order to change the order of the bars, you should change the order in the index.
In [5]: df_both = pd.concat(dict(df1 = df1, df2 = df2),axis = 0)
In [6]: df_both
Out[6]:
I J
df1 A 0.423816 0.094405
B 0.825094 0.759266
C 0.654216 0.250606
D 0.676110 0.495251
df2 A 0.607304 0.336233
B 0.581771 0.436421
C 0.233125 0.360291
D 0.519266 0.199637
[8 rows x 2 columns]
So we want to swap the index levels, then reorder. Here's an easy way to do this:
In [7]: df_both.swaplevel(0,1)
Out[7]:
I J
A df1 0.423816 0.094405
B df1 0.825094 0.759266
C df1 0.654216 0.250606
D df1 0.676110 0.495251
A df2 0.607304 0.336233
B df2 0.581771 0.436421
C df2 0.233125 0.360291
D df2 0.519266 0.199637
[8 rows x 2 columns]
In [8]: df_both.swaplevel(0,1).sort_index()
Out[8]:
I J
A df1 0.423816 0.094405
df2 0.607304 0.336233
B df1 0.825094 0.759266
df2 0.581771 0.436421
C df1 0.654216 0.250606
df2 0.233125 0.360291
D df1 0.676110 0.495251
df2 0.519266 0.199637
[8 rows x 2 columns]
If it's important that your horizontal labels show up in the old order (df1,A) rather than (A,df1), we can just swap the levels back again without re-sorting:
In [9]: df_both.swaplevel(0,1).sort_index().swaplevel(0,1)
Out[9]:
I J
df1 A 0.423816 0.094405
df2 A 0.607304 0.336233
df1 B 0.825094 0.759266
df2 B 0.581771 0.436421
df1 C 0.654216 0.250606
df2 C 0.233125 0.360291
df1 D 0.676110 0.495251
df2 D 0.519266 0.199637
[8 rows x 2 columns]
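From there, plotting the reordered frame the same way as in your first attempt should interleave the bars (a quick sketch, not re-run here):
In [10]: df_both.swaplevel(0,1).sort_index().swaplevel(0,1).plot(kind="bar", stacked=True)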

Altair can be helpful here. Here is the produced plot.
Imports
import pandas as pd
import numpy as np
from altair import *
Dataset creation
df1=pd.DataFrame(10*np.random.rand(4,2),index=["A","B","C","D"],columns=["I","J"])
df2=pd.DataFrame(10*np.random.rand(4,2),index=["A","B","C","D"],columns=["I","J"])
Preparing dataset
def prep_df(df, name):
    df = df.stack().reset_index()
    df.columns = ['c1', 'c2', 'values']
    df['DF'] = name
    return df
df1 = prep_df(df1, 'DF1')
df2 = prep_df(df2, 'DF2')
df = pd.concat([df1, df2])
Altair plot
Chart(df).mark_bar().encode(y=Y('values', axis=Axis(grid=False)),
                            x='c2:N',
                            column=Column('c1:N'),
                            color='DF:N').configure_facet_cell(strokeWidth=0.0).configure_cell(width=200, height=200)

Here is how I did it with two charts, including value labels on the bars.
Initial Data:
A B C D
0 level1 B1 456 326
1 level1 B3 694 1345
2 level1 B2 546 1471
3 level2 B1 687 806
4 level2 B3 877 1003
5 level2 B2 790 1004
Set multi index
data = data.set_index(["A", "B"])
Here is the code:
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

matplotlib.style.use("seaborn-white")

fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 6))
ax_position = 0
y_offset = -120  # decrease value if you want to decrease the position of data labels
for metric in data.index.get_level_values('A').unique():
    idx = pd.IndexSlice
    subset = data.loc[idx[[metric], :], ['C', 'D']]
    subset = subset.groupby(subset.index.get_level_values('B')).sum()
    ax = subset.plot(kind="bar", stacked=True, colormap="Pastel1",
                     ax=axes[ax_position])
    ax.set_title(metric, fontsize=15, alpha=1.0)
    ax.set_xlabel(metric, fontsize=15, alpha=0.0)
    ax.set_ylabel("Values", fontsize=15)
    ax.set_xticklabels(labels=['B1', 'B2', 'B3'], rotation=0,
                       minor=False, fontsize=15)
    ax.set_ylim(0, 3000)
    ax.set_yticks(range(0, 3000, 500))
    handles, labels = ax.get_legend_handles_labels()
    ax_position += 1
    for bar in ax.patches:
        ax.text(
            # Put the text in the middle of each bar. get_x returns the start,
            # so we add half the width to get to the middle.
            bar.get_x() + bar.get_width() / 2,
            # Vertically, add the height of the bar to the start of the bar,
            # along with the offset.
            bar.get_height() + bar.get_y() + y_offset,
            # This is the actual value we'll show.
            round(bar.get_height()),
            # Center the labels and style them a bit.
            ha='center',
            color='w',
            weight='bold',
            size=12
        )
    ax.legend(bbox_to_anchor=(1.04, 1), loc="upper left")

plt.tight_layout(pad=0.0, w_pad=-1.0, h_pad=0.0)  # increase w_pad if you'd like to separate charts
axes[1].set_yticklabels("")
axes[1].set_ylabel("")
axes[0].legend().set_visible(False)

You can change the bar order by altering the index order (using sort in this case):
pd.concat([df1, df2], keys=['df1', 'df2']).sort_index(level=1).plot.bar(stacked=True)
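For reference, here is a minimal end-to-end version (a sketch assuming the df1/df2 from the question):
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df1 = pd.DataFrame(np.random.rand(4, 2), index=list("ABCD"), columns=["I", "J"])
df2 = pd.DataFrame(np.random.rand(4, 2), index=list("ABCD"), columns=["I", "J"])

# keys= labels each frame; sorting on level 1 interleaves (df1, A), (df2, A), ...
pd.concat([df1, df2], keys=['df1', 'df2']).sort_index(level=1).plot.bar(stacked=True)
plt.show()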

Related

How to produce a column containing "values ± error" from two columns with one being "value" and the other being "error"

I am new to both R and asking questions in related forums, so please bear with me.
I have raw data, output from a geochemical analysis, which contains a large number of observations for over 200 variables. The output of this analysis generates a number of columns containing many types of data.
Of concern for this question are two specific column formats: the analysis outputs two separate columns (variables) for a reading of each element's abundance within a sample. The first is a column containing the magnitude, in PPM, of the abundance of the element (variable) for that sample (observation). The second column is the error in measurement for the abundance of the element. So, for example, it will essentially produce the following (simplified) format:
SampleID    Magnesium Abundance [ppm]    Magnesium error [ppm]
A1          10530                        300
I have to produce a formal report which I am using the "xlsx" package for. In this report, I have to merge the two columns which report the abundance and the corresponding error of an element as output by the analysis from the format of two separate columns to one column containing the "abundance ± error", like so:
Sample ID    Magnesium Abundance [ppm]
A1           10035 ± 250
My question is: is what I am attempting to do actually plausible? Is there a way to merge columns to produce a single column containing the "value" and "error" as "value ± error"?
I do not have any reproducible code for this as I am at a complete loss for this part.
Here is the entire code for my project thus far. I am sorry if it is hard to read or see where the problem is as I do not have reproducible code for the current issue I am working on, but I do have this.
In the section "# Import and process data", you can see where I upload my raw data, remove columns of all NA's, and produce a separate table for standardized samples (which check for machine analysis drift). I need to now merge the columns as mentioned above for all elements being measured.
library(xlsx)
library(tidyverse)
library(readxl)

### Create Excel Workbook
# Workbook title and subtitle style
wb <- createWorkbook(type = "xlsx")
wb_title_style <- CellStyle(wb) + Font(wb, color = "blue",
                                       heightInPoints = 14,
                                       name = "Times New Roman",
                                       isBold = T,
                                       underline = 0)
wb_subtitle_style <- CellStyle(wb) + Font(wb, color = "black",
                                          heightInPoints = 12,
                                          name = "Times New Roman",
                                          underline = 0)

# Workbook row and column name styles
wb_rownames_style <- CellStyle(wb) + Font(wb, heightInPoints = 12,
                                          isBold = F,
                                          name = "Times New Roman")
wb_colnames_style <- CellStyle(wb) + Font(wb, heightInPoints = 12,
                                          isBold = T,
                                          name = "Times New Roman") +
  Alignment(wrapText = T, horizontal = "ALIGN_CENTER") +
  Border(color = "black", position = c("TOP", "BOTTOM"),
         pen = c("BORDER_THIN", "BORDER_THICK"))

# Data formatting
format_date <- DataFormat("mm/dd/yyyy")

# Workbook sheets - NOTE: see if there is a way to loop over the number of rows
# from the SampleID column and produce a sheet for each row (i.e. sheet 1 = SampleID 0-3cm)
sheet <- createSheet(wb, sheetName = "PPI-10 Geochem")

# Titling helper function (from: http://www.sthda.com/english/wiki/r-xlsx-package-a-quick-start-guide-to-manipulate-excel-files-in-r)
xlsx.addTitle <- function(sheet, rowIndex, title, titleStyle){
  rows <- createRow(sheet, rowIndex = rowIndex)
  sheetTitle <- createCell(rows, colIndex = 1)
  setCellValue(sheetTitle[[1,1]], title)
  setCellStyle(sheetTitle[[1,1]], titleStyle)
}

# Add title and subtitle
xlsx.addTitle(sheet, rowIndex = 1, title = "UConn Sediment Core pXRF Spreadsheet",
              titleStyle = wb_title_style) # Be sure not to backquote the value for title style,
                                           # it will produce an error in referencing
xlsx.addTitle(sheet, rowIndex = 2, title = "Project/Core ID:",
              titleStyle = wb_subtitle_style)
xlsx.addTitle(sheet, rowIndex = 3, title = "Operator:",
              titleStyle = wb_subtitle_style)
xlsx.addTitle(sheet, rowIndex = 4, title = "Date:",
              titleStyle = wb_subtitle_style)

# Import and process data
PPI10.pXRF_raw <- as.data.frame(read_xlsx("PPI-10_pXRF.xlsx")) # Import xlsx data as dataframe
PPI10.pXRF <- PPI10.pXRF_raw[, colSums(is.na(PPI10.pXRF_raw)) != nrow(PPI10.pXRF_raw)] # Remove columns of all NAs
PPI10 <- PPI10.pXRF[PPI10.pXRF$`Method Name` != "Cal Check", ] # Remove Cal Check rows
PPI10.calcheck <- PPI10.pXRF[PPI10.pXRF$`Method Name` == "Cal Check", ] # Produce separate Cal Check table

# Append data to workbook
addDataFrame(PPI10, sheet, startRow = 5, startColumn = 1,
             colnamesStyle = wb_colnames_style,
             rownamesStyle = wb_rownames_style)
setColumnWidth(sheet, colIndex = c(1:ncol(PPI10)), colWidth = 12)

# Save Workbook
saveWorkbook(wb, "R-pXRF-report_test.xlsx")
You can do this simply using the paste function and the unicode value for the "±" symbol:
df <- data.frame(value = seq(100, 500, 50),
                 error = seq(20, 60, 5))
df$new <- paste(df$value, "\u00B1", df$error)
# This also works
# paste(df$value, "±", df$error)
Output:
# value error new
# 1 100 20 100 ± 20
# 2 150 25 150 ± 25
# 3 200 30 200 ± 30
# 4 250 35 250 ± 35
# 5 300 40 300 ± 40
# 6 350 45 350 ± 45
# 7 400 50 400 ± 50
# 8 450 55 450 ± 55
# 9 500 60 500 ± 60
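Since your real data has a value/error column pair for each element, one way to scale this up is to loop over the pairs. This is only a sketch under the assumption that each error column name can be derived from its value column name (the "_ppm"/"_error" suffixes here are made up; adjust the matching rule to your actual headers):
# toy frame with two value/error pairs (column names are hypothetical)
dat <- data.frame(SampleID = c("A1", "A2"),
                  Mg_ppm = c(10530, 9800), Mg_error = c(300, 250),
                  Fe_ppm = c(40100, 39500), Fe_error = c(900, 870))

value_cols <- grep("_ppm$", names(dat), value = TRUE)
for (v in value_cols) {
  e <- sub("_ppm$", "_error", v)                  # matching error column
  dat[[v]] <- paste(dat[[v]], "\u00B1", dat[[e]]) # "value ± error"
  dat[[e]] <- NULL                                # drop the now-merged error column
}
dat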
Also, since you said you were new, as a friendly note on posting to this forum: almost all of the text and code in your question is unnecessary, and the question is not really concerned with your subject-specific application (I mean that in the nicest way, I promise!). You could have simply stated:
I have a data frame with values and error in separate columns. I need to combine them into a single column with the "±" symbol (i.e., "1000 ± 150"). Some sample data are: df <- data.frame(values = 10:15, error = 1:5)
The length and noise in your post is likely why it has not yet received an answer. Only hoping this helps you get better, faster help in future posts, and good luck!

How to create heatmap only for 50 highest value

I have a data matrix with thousands of rows, like this:
                                 file_A  file_B  file_C  file_D
Carbohydrate metabolism           69370   67839   68914   67272
Energy metabolism                 40223   40750   39450   39735
Lipid metabolism                  22333   21668   22421   21773
Nucleotide metabolism             18449   18389   17560   18263
Amino acid metabolism             63739   63441   62797   63106
Metabolism of other amino acids   19075   19068   18896   18836
I want to create a heatmap using only the 50 highest-value rows for file_A, B, C and D.
How can I get it?
Assuming you want the top 50 rows for the sum of file_A through file_D, you can do so with dplyr pretty easily:
your_dataframe %>%
  mutate(fileSum = select(., file_A:file_D) %>% rowSums()) %>%
  arrange(desc(fileSum)) %>%
  head(50)
From there, you can pipe into ggplot for your desired visual, save it as a separate dataframe, or whatever you need to do.
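For instance, continuing the pipe into a simple geom_tile heatmap could look like this (a sketch; it assumes the pathway names are stored as row names and that dplyr and ggplot2 are loaded):
your_dataframe %>%
  tibble::rownames_to_column("pathway") %>%
  mutate(fileSum = select(., file_A:file_D) %>% rowSums()) %>%
  arrange(desc(fileSum)) %>%
  head(50) %>%
  select(-fileSum) %>%
  tidyr::pivot_longer(file_A:file_D, names_to = "file", values_to = "count") %>%
  ggplot(aes(x = file, y = pathway, fill = count)) +
  geom_tile()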
First, determine the maximum value in each row, then sort in descending order and pick the top 50. Then plot, e.g. using pheatmap.
library(pheatmap)
# toy example
df <- data.frame(iris[, 1:4], row.names=make.unique(as.character(iris$Species)))
# pick top 50 rows with highest values
top <- df[order(apply(df, 1, max), decreasing = TRUE)[1:50],]
# plot heatmap
pheatmap::pheatmap(top)
Created on 2020-03-13 by the reprex package (v0.3.0)
Edit:
If I misunderstood and you want the sums of the rows, then use
top <- df[order(rowSums(df), decreasing = TRUE)[1:50], ]
instead.
Edit #2:
If you want the top 50 for each row, as suggested by dc37, then you can use
top <- df[unique(unlist(lapply(df, function(x) order(x, decreasing = TRUE)[1:50]))),]
instead.
Maybe I misunderstood your question, but from my understanding, you are looking to make a heatmap of the top 50 values of file A, the top 50 of file B, the top 50 of file C and the top 50 of file D. Am I right?
If that is what you are looking for, it could mean that you need not only 50 but potentially up to 200 rows (depending on whether the same row is in the top 50 for all files or in only one).
Here is a dummy example of a large dataframe corresponding to your example:
row <- expand.grid(LETTERS, letters, LETTERS)
row$Row <- paste(row$Var1, row$Var2, row$Var3, sep = "")
df <- data.frame(row = row$Row,
                 file_A = sample(10000:99000, nrow(row), replace = TRUE),
                 file_B = sample(10000:99000, nrow(row), replace = TRUE),
                 file_C = sample(10000:99000, nrow(row), replace = TRUE),
                 file_D = sample(10000:99000, nrow(row), replace = TRUE))
> head(df)
row file_A file_B file_C file_D
1 AaA 54418 65384 43526 86870
2 BaA 57098 75440 92820 27695
3 CaA 71172 59942 12626 53196
4 DaA 54976 25370 43797 30770
5 EaA 56631 73034 50746 77878
6 FaA 45245 57979 72878 94381
In order to get a heatmap using ggplot2, you need the following organization: one column for the x value, one column for the y value and one column serving as a categorical variable, e.g. for the filling.
To get that, you need to reshape your dataframe into a longer format. You could use the pivot_longer function from the tidyr package, but as you have thousands of rows, I would rather recommend data.table, which is faster for this kind of processing.
library(data.table)
DF <- melt(setDT(df), measure = list(c("file_A", "file_B", "file_C", "file_D")),
           value.name = "Value", variable.name = "File")
row File Value
1: AaA file_A 54418
2: BaA file_A 57098
3: CaA file_A 71172
4: DaA file_A 54976
5: EaA file_A 56631
6: FaA file_A 45245
Now, we can use dplyr to get only the first top 50 values for each file by doing:
library(dplyr)
Extract_DF <- DF %>%
  group_by(File) %>%
  arrange(desc(Value)) %>%
  slice(1:50)
# A tibble: 200 x 3
# Groups: File [4]
row File Value
<fct> <fct> <int>
1 PaH file_A 98999
2 RwX file_A 98996
3 JjQ file_A 98992
4 SfA file_A 98990
5 TrI file_A 98989
6 WgU file_A 98975
7 DnZ file_A 98969
8 TdK file_A 98965
9 YlS file_A 98954
10 FeZ file_A 98954
# … with 190 more rows
Now to plot this as a heatmap we can do:
library(ggplot2)
ggplot(Extract_DF, aes(y = row, x = File, fill = Value)) +
  geom_tile(color = "black") +
  scale_fill_gradient(low = "red", high = "green")
And you get:
I intentionally left the y labeling in, even though it is not elegant, just so you can see how the graph is organized. All the white spots are rows that are in the top 50 of one column but not of the others.
If you are looking for only the top 50 values across all columns, you can use @Jon's answer and then the last part of my answer for getting a heatmap with ggplot2.
Here is another approach using rank. I am using a matrix, but it should easily work on a data.frame as well. Using the volcano dataset, each column is reverse-ranked (i.e. lowest rank for the highest value), and values whose rank is less than or equal to 50 get a 1, the others a 0. I include a plot of the scaled version of the matrix to show that the results correctly identify the highest values in each column.
# example data
M <- volcano
# for reference - each column is centered and scaled
Msc <- scale(M)
# return TRUE if rank is in top 50 highest values
Ma <- apply(M, 2, function(x){
  ran <- length(x) - rank(x, ties.method = "average")
  ran <= 50
})
colSums(Ma)
png("tmp.png", width = 7.5, height = 2.5, units = "in", res = 400)
op <- par(mfcol = c(1,3), mar = c(1,1,1.5,1), oma = c(2,2,0,0))
image(M, xlab = "", ylab = "", xaxt = "n", yaxt = "n"); mtext("original")
image(Msc, xlab = "", ylab = "", xaxt = "n", yaxt = "n"); mtext("scaled")
image(Ma, xlab = "", ylab = "", xaxt = "n", yaxt = "n"); mtext("top 50 for each column")
mtext(text = "rows", side = 1, line = 0, outer = TRUE)
mtext(text = "columns", side = 2, line = 0, outer = TRUE)
par(op)
dev.off()

Coloring Rarefaction curve lines by metadata (vegan package) (phyloseq package)

First time question asker here. I wasn't able to find an answer to this question in other posts (love stackexchange, btw).
Anyway...
I'm creating a rarefaction curve via the vegan package, and I'm getting a very messy plot with a very thick black bar at the bottom that obscures some low-diversity sample lines.
Ideally, I would like to generate a plot with all of my lines (169; I could reduce this to 144) but make a composite graph, coloring by sample year and using a different line type for each pond (i.e. 2 sample years, 2016 and 2017, and 3 ponds: 1, 2, 5). I've used phyloseq to create an object with all my data, then separated my OTU abundance table from my metadata into distinct objects (jt = OTU table and sampledata = metadata). My current code:
jt <- as.data.frame(t(j)) # transpose it to make it compatible with the commands that follow
rarecurve(jt,
          step = 100,
          sample = 6000,
          main = "Alpha Rarefaction Curve",
          cex = 0.2,
          color = sampledata$PondYear)
# A very small subset of the sample metadata
               Pond Year
F16.5.d.1.1.R2    5 2016
F17.1.D.6.1.R1    1 2017
F16.1.D15.1.R3    1 2016
F17.2.D00.1.R2    2 2017
Here is an example of how to plot a rarefaction curve with ggplot. I used data available in the phyloseq package available from bioconductor.
to install phyloseq:
source('http://bioconductor.org/biocLite.R')
biocLite('phyloseq')
library(phyloseq)
other libraries needed
library(tidyverse)
library(vegan)
data:
mothlist <- system.file("extdata", "esophagus.fn.list.gz", package = "phyloseq")
mothgroup <- system.file("extdata", "esophagus.good.groups.gz", package = "phyloseq")
mothtree <- system.file("extdata", "esophagus.tree.gz", package = "phyloseq")
cutoff <- "0.10"
esophman <- import_mothur(mothlist, mothgroup, mothtree, cutoff)
extract OTU table, transpose and convert to data frame
otu <- otu_table(esophman)
otu <- as.data.frame(t(otu))
sample_names <- rownames(otu)
out <- rarecurve(otu, step = 5, sample = 6000, label = T)
Now you have a list; each element corresponds to one sample.
Clean the list up a bit:
rare <- lapply(out, function(x){
  b <- as.data.frame(x)
  b <- data.frame(OTU = b[,1], raw.read = rownames(b))
  b$raw.read <- as.numeric(gsub("N", "", b$raw.read))
  return(b)
})
label list
names(rare) <- sample_names
convert to data frame:
rare <- map_dfr(rare, function(x){
  z <- data.frame(x)
  return(z)
}, .id = "sample")
Let's see how it looks:
head(rare)
sample OTU raw.read
1 B 1.000000 1
2 B 5.977595 6
3 B 10.919090 11
4 B 15.826125 16
5 B 20.700279 21
6 B 25.543070 26
plot with ggplot2
ggplot(data = rare) +
  geom_line(aes(x = raw.read, y = OTU, color = sample)) +
  scale_x_continuous(labels = scales::scientific_format())
vegan plot:
rarecurve(otu, step = 5, sample = 6000, label = T) #low step size because of low abundance
One can make an additional column of groupings and color according to that.
Here is an example how to add another grouping. Lets assume you have a table of the form:
groupings <- data.frame(sample = c("B", "C", "D"),
                        location = c("one", "one", "two"), stringsAsFactors = F)
groupings
sample location
1 B one
2 C one
3 D two
where samples are grouped according to another feature. You could use lapply or map_dfr to go over groupings$sample and label rare$location.
rare <- map_dfr(groupings$sample, function(x){ # loop over samples
  z <- rare[rare$sample == x,] # subset rare according to sample
  loc <- groupings$location[groupings$sample == x] # subset groupings according to sample; if more than one grouping, repeat for all
  z <- data.frame(z, loc) # make a new data frame with the subsets
  return(z)
})
head(rare)
sample OTU raw.read loc
1 B 1.000000 1 one
2 B 5.977595 6 one
3 B 10.919090 11 one
4 B 15.826125 16 one
5 B 20.700279 21 one
6 B 25.543070 26 one
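As a side note, the same labeling can also be done with a join instead of an explicit loop (a hedged alternative; the grouping column will then be called location rather than loc):
rare <- dplyr::left_join(rare, groupings, by = "sample")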
Let's make a decent plot out of this:
ggplot(data = rare) +
  geom_line(aes(x = raw.read, y = OTU, group = sample, color = loc)) +
  geom_text(data = rare %>% # here we need coordinates of the labels
              group_by(sample) %>% # first group by samples
              summarise(max_OTU = max(OTU), # find max OTU
                        max_raw = max(raw.read)), # find max raw read
            aes(x = max_raw, y = max_OTU, label = sample), check_overlap = T, hjust = 0) +
  scale_x_continuous(labels = scales::scientific_format()) +
  theme_bw()
I know this is an older question but I originally came here for the same reason and along the way found out that in a recent (2021) update vegan has made this a LOT easier.
This is an absolutely bare-bones example.
Ultimately we're going to be plotting the final result in ggplot so you'll have full customization options, and this is a tidyverse solution with dplyr.
library(vegan)
library(dplyr)
library(ggplot2)
I'm going to use the dune data within vegan and generate a column of random metadata for the site.
data(dune)
metadata <- data.frame("Site" = as.factor(1:20),
                       "Vegetation" = rep(c("Cactus", "None")))
Now we will run rarecurve, but provide the argument tidy = TRUE which will export a dataframe rather than a plot.
One thing to note here is that I have also used the step argument. The default step is 1, and this means by default you will get one row per individual per sample in your dataset, which can make the resulting dataframe huge. Step = 1 for dune gave me over 600 rows. Reducing the step too much will make your curves blocky, so it will be a balance between step and resolution for a nice plot.
Then I piped a left join right into the rarecurve call
dune_rare <- rarecurve(dune,
                       step = 2,
                       tidy = TRUE) %>%
  left_join(metadata)
Now it will be plottable in ggplot, with a color/colour call to whatever metadata you attached.
From here you can customize other aspects of the plot as well.
ggplot(dune_rare) +
  geom_line(aes(x = Sample, y = Species, group = Site, colour = Vegetation)) +
  theme_bw()

cbind 1:nrows of same ID variable value to original data.frame

I have a large dataframe, where a variable id (first column) recurs with different values in the second column. My idea is to order the dataframe, to split it into a list and then lapply a function which cbinds the sequence 1:nrows(variable id) to each group. My code so far:
DF <- DF[order(DF[,1]),]
DF <- split(DF,DF[,1])
DF <- lapply(1:length(DF), function(i) cbind(DF[[i]], 1:length(DF[[i]])))
But this gives me an error: arguments imply different number of rows.
Can you elaborate?
> head(DF, n=50)
cell area
1 1 121.2130
2 2 81.3555
3 3 81.5862
4 4 83.6345
...
33 1 121.3270
34 2 80.7832
35 3 81.1816
36 4 83.3340
DF <- DF[order(DF$cell),]
What I want is:
> head(DF, n=50)
cell area counter
1 1 121.213 1
33 1 121.327 2
65 1 122.171 3
97 1 122.913 4
129 1 123.697 5
161 1 124.474 6
...and so on.
This is my code:
cell.areas.t <- function(file) {
  dat <- paste(file)
  DF <- read.table(dat, col.names = c("cell", "area"))
  DF <- splitstackshape::getanID(DF, "cell")[] # thanks to akrun's answer
  ggplot2::ggplot(data = DF, aes(x = .id, y = area, color = cell)) +
    geom_line(aes(group = cell)) + geom_point(size = 0.1)
}
And the plot looks like this:
Most cells increase in area, only some decrease. This is only a first try to visualize my data, so what you can't see very well is that the areas drop down periodically due to cell division.
Additional question:
There is a problem I didn't take into account beforehand, which is that after a cell division a new cell is added to the data.frame and is handed the initial index 1 (you see in the image that all cells start from .id=1, not later), which is not what I want - it needs to inherit the index of its creation time. First thing that comes into my mind is that I could use a parsing mechanism that does this job for a newly added cell variable:
DF$.id[DF$cell != temporary.cellindex] <- max(DF$.id[DF$cell != temporary.cellindex])
Do you have a better idea? Thanks.
There is a boundary condition which may ease the problem: fixed number of cells at the beginning (32). Another solution would be to cut away all data before the last daughter cell is created.
Update: Additional question solved, here's the code:
cell.areas.t <- function(file) {
  dat <- paste(file)
  DF <- read.table(dat, col.names = c("cell", "area"))
  DF$.id <- c(0, cumsum(diff(DF$cell) < 0)) + 1L # indexing
  title <- getwd()
  myplot <- ggplot2::ggplot(data = DF, aes(x = .id, y = area, color = factor(cell))) +
    geom_line(aes(group = cell)) + geom_line(size = 0.1) +
    theme(legend.position = "none") + ggtitle(title)
  # save the plot
  image <- myplot
  ggsave(file = "cell_areas_time.svg", plot = image, width = 10, height = 8)
}
We can use getanID from splitstackshape
library(splitstackshape)
getanID(DF, "cell")[]
There's a much easier method to accomplish that goal: use ave with a per-group sequence.
DF$group_seq <- ave(DF[,1], DF[,1], FUN = seq_along)
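For completeness, a tidyverse equivalent would be (a hedged sketch, assuming the cell column from the question):
library(dplyr)
DF <- DF %>%
  group_by(cell) %>%
  mutate(counter = row_number()) %>%
  ungroup()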

Including custom text in legend of plot

Suppose I have data that looks like this.
> print(dat)
V1 V2
1 1 11613
2 2 6517
3 3 2442
4 4 687
5 5 159
6 6 29
# note that V2 is the frequency and V1 does not always start with 1.
> plot(dat,main=title,type="h")
# legend()??
Now what I want to do is to plot the histogram, and have the mean
and standard deviation included as the legend. In the above example the standard deviation equals 0.87 and the mean equals 1.66.
How can I achieve that automatically in R?
This solves the problem with legend creation that Gavin notices.
require(Hmisc)
myMean <- wtd.mean(dat$V1, dat$V2)
mySD <- sqrt(wtd.var(dat$V1, dat$V2))
plot(dat, main = "title", type = "h")
L <- list(bquote(Mean == .(myMean)), bquote(SD == .(mySD)))
legend('topright', legend = sapply(L, as.expression))
This was pulled from an answer on Rhelp that I posted in 2010 that attributed the strategy for the solution to a 2005 exchange between Gabor Grothendieck and Thomas Lumley.
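If you'd rather avoid the Hmisc dependency, the same weighted quantities can be computed in base R; this is a small sketch that treats V2 as frequency weights, which should match wtd.var's default "unbiased" method:
myMean <- weighted.mean(dat$V1, dat$V2)
# frequency-weighted sample SD: divide by (total count - 1)
mySD <- sqrt(sum(dat$V2 * (dat$V1 - myMean)^2) / (sum(dat$V2) - 1))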
This gets pretty close:
dat <- data.frame(V1 = 1:6, V2 = c(11613, 6517, 2442, 687, 159, 29))

addMyLegend <- function(data, where = "topright", digits = 3, ...) {
  MEAN <- round(mean(data), digits = digits)
  SD <- round(sd(data), digits = digits)
  legend(where, legend = list(bquote(Mean == .(MEAN)),
                              bquote(SD == .(SD))),
         ...)
}

plot(dat, type = "h")
addMyLegend(dat$V1, digits = 2, bty = "n")
Which gives
I'm not sure why the plotmath code is not displaying the == and a typeset =... Will have to look into that.
To see what is going on read ?bquote which explains that it can be used to replace components of an expression with dynamic data. Anything wrapped in .( ) will be replaced by the value of the object named in the wrapped part of the expression. Thus foo == .(bar) will look for an object named bar and insert the value of bar into the expression. If bar contained 1.3 then the result after applying bquote(foo == .(bar)) would be similar to expression(foo == 1.3).
The rest of my function addMyLegend() should be fairly self explanatory, if not read ?legend. Note you can pass on any arguments to legend() via the ... in addMyLegend().
