Given data points and y value, give x value - r

Given a set of (x, y) coordinates, how can I solve for x from y? If you were to plot the coordinates, they would be non-linear, but pretty close to exponential. I tried approx(), but it is way off. Here is example data. In this scenario, how could I solve for x where y == 50?
V1 V3
1 5.35 11.7906
2 10.70 15.0451
3 16.05 19.4243
4 21.40 20.7885
5 26.75 22.0584
6 32.10 25.4367
7 37.45 28.6701
8 42.80 30.7500
9 48.15 34.5084
10 53.50 37.0096
11 58.85 39.3423
12 64.20 41.5023
13 69.55 43.4599
14 74.90 44.7299
15 80.25 46.5738
16 85.60 47.7548
17 90.95 49.9749
18 96.30 51.0331
19 101.65 52.0207
20 107.00 52.9781
21 112.35 53.8730
22 117.70 54.2907
23 123.05 56.3025
24 128.40 56.6949
25 133.75 57.0830
26 139.10 58.5051
27 144.45 59.1440
28 149.80 60.0687
29 155.15 60.6627
30 160.50 61.2313
31 165.85 61.7748
32 171.20 62.5587
33 176.55 63.2684
34 181.90 63.7085
35 187.25 64.0788
36 192.60 64.5807
37 197.95 65.2233
38 203.30 65.5331
39 208.65 66.1200
40 214.00 66.6208
41 219.35 67.1952
42 224.70 67.5270
43 230.05 68.0175
44 235.40 68.3869
45 240.75 68.7485
46 246.10 69.1878
47 251.45 69.3980
48 256.80 69.5899
49 262.15 69.7382
50 267.50 69.7693
51 272.85 69.7693
52 278.20 69.7693
53 283.55 69.7693
54 288.90 69.7693

I suppose the problem you have is that approx() solves for y given x, while you are asking to solve for x given y. So you need to switch the roles of x and y when calling approx():
df <- read.table(textConnection("
V1 V3
85.60 47.7548
90.95 49.9749
96.30 51.0331
101.65 52.0207
"), header = TRUE)
approx(x = df$V3, y = df$V1, xout = 50)
# $x
# [1] 50
#
# $y
# [1] 91.0769
Also, if y is exponential with respect to x, then there is a linear relationship between x and log(y), so it makes more sense to run the linear interpolator between x and log(y) (again with the two variables switched, as above) and then take the exponential to get back to y:
exp(approx(x = df$V3, y = log(df$V1), xout = 50)$y)
# [1] 91.07339
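If you'd rather use all of the points instead of only the segment around the target, another option (a sketch, not from the original answer: it assumes the full table is in df and that V3 increases with V1) is to build a monotone spline interpolator and invert it numerically with uniroot():
# fit a monotone interpolating spline y = f(x), then solve f(x) = 50
f <- splinefun(df$V1, df$V3, method = "monoH.FC")
uniroot(function(x) f(x) - 50, interval = range(df$V1))$root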

Related

Convert "yymm" numeric to months since 9101 (Jan 1991)

I have a data set that runs from 1991-01 to 1996-12 in R. In order to calculate some time-dependent metrics on some of the data set, I am trying to create a "months since first entry" column. To do so I need to convert a number like 9107 into 07, 9207 into 12+7=19, and 9301 into 12+12+1=25. I.e., the first two digits specify a full year (12 months), and the last two digits specify months since January (01). How would I go about this?
Thank you!
Maybe we can use substr (split across multiple lines for clarity) along with as.integer:
yrs <- as.integer(substr(v1, 1, 2))
mths <- as.integer(substr(v1, 3, 4))
mths[mths == 1] <- 0
12 * (yrs - yrs[1]) + mths
#[1] 7 19 24
Or without substr
yrs <- v1 %/% 100
12 * (yrs - yrs[1]) + (v1 %% 100)
data
v1 <- c(9107, 9207, 9301)
Using the substrings.
(as.numeric(substr(x, 2, 2)) - 1)*12 + as.numeric(substr(x, 3, 4)) - 1
# [1] 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
# [24] 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
# [47] 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68
# [70] 69 70 71
I believe 9101 should be 0, since it is the distance from itself.
Data:
x <- c(9101, 9102, 9103, 9104, 9105, 9106, 9107, 9108, 9109, 9110,
9111, 9112, 9201, 9202, 9203, 9204, 9205, 9206, 9207, 9208, 9209,
9210, 9211, 9212, 9301, 9302, 9303, 9304, 9305, 9306, 9307, 9308,
9309, 9310, 9311, 9312, 9401, 9402, 9403, 9404, 9405, 9406, 9407,
9408, 9409, 9410, 9411, 9412, 9501, 9502, 9503, 9504, 9505, 9506,
9507, 9508, 9509, 9510, 9511, 9512, 9601, 9602, 9603, 9604, 9605,
9606, 9607, 9608, 9609, 9610, 9611, 9612)
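For completeness, the same zero-based offset can be computed without any string handling by turning each yymm value into an absolute month index with integer division and modulo; a minimal sketch (the check values are hypothetical):
x <- c(9101, 9107, 9207, 9301)
idx <- (x %/% 100) * 12 + x %% 100  # absolute month index
idx - idx[1]                        # months since the first entry
# [1]  0  6 18 24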

Fit a normal distribution to grouped data, giving expected frequencies

I have a frequency distribution of observations, grouped into counts within class intervals.
I want to fit a normal (or other continuous) distribution, and find the expected frequencies in each interval according to that distribution.
For example, suppose the following, where I want to calculate another column, expected, giving the expected number of soldiers with chest circumferences in the interval given by chest, where the intervals are assumed to be centered on the nominal value, e.g., 35 means 34.5 <= y < 35.5. One analysis I've seen gives the expected frequency in this cell as 72.5 vs. the observed 81.
> data(ChestSizes, package="HistData")
>
> ChestSizes
chest count
1 33 3
2 34 18
3 35 81
4 36 185
5 37 420
6 38 749
7 39 1073
8 40 1079
9 41 934
10 42 658
11 43 370
12 44 92
13 45 50
14 46 21
15 47 4
16 48 1
>
> # ungroup to a vector of values
> chests <- vcdExtra::expand.dft(ChestSizes, freq="count")
There are quite a number of variations of this question, most of which relate to plotting the normal density on top of a histogram, scaled to represent counts rather than density. But none explicitly shows the calculation of the expected frequencies. One close question is R: add normal fits to grouped histograms in ggplot2.
I can perfectly well do the standard plot (below), but for other things, like a Chi-square test or a vcd::rootogram plot, I need the expected frequencies in the same class intervals.
bw <- 1
n_obs <- nrow(chests)
xbar <- mean(chests$chest)
std <- sd(chests$chest)
plt <-
ggplot(chests, aes(chest)) +
geom_histogram(color="black", fill="lightblue", binwidth = bw) +
stat_function(fun = function(x)
dnorm(x, mean = xbar, sd = std) * bw * n_obs,
color = "darkred", size = 1)
plt
Here is how you could calculate the expected frequencies for each group, assuming normality:
xbar <- with(ChestSizes, weighted.mean(chest, count))  # mean of the grouped data
sdx <- with(ChestSizes, sd(rep(chest, count)))         # sd of the ungrouped values
# interval probabilities are differences of the normal CDF at the class boundaries (chest +/- 0.5)
transform(ChestSizes, Expected = diff(pnorm(c(32, chest) + .5, xbar, sdx)) * sum(count))
chest count Expected
1 33 3 4.7600583
2 34 18 20.8822328
3 35 81 72.5129162
4 36 185 199.3338028
5 37 420 433.8292832
6 38 749 747.5926687
7 39 1073 1020.1058521
8 40 1079 1102.2356155
9 41 934 943.0970605
10 42 658 638.9745241
11 43 370 342.7971793
12 44 92 145.6089948
13 45 50 48.9662992
14 46 21 13.0351612
15 47 4 2.7465640
16 48 1 0.4579888
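With the expected counts in hand, the chi-square test mentioned above follows directly; a minimal sketch (two parameters were estimated from the data, so df = k - 3; like the table above, it ignores the tails below 32.5 and above 48.5):
expected <- diff(pnorm(c(32, ChestSizes$chest) + .5, xbar, sdx)) * sum(ChestSizes$count)
chisq <- sum((ChestSizes$count - expected)^2 / expected)
pchisq(chisq, df = nrow(ChestSizes) - 3, lower.tail = FALSE)  # p-value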

When do you need Kinhom rather than Kest?

The envelope of the K function (and of its transformations, such as L) is very useful for validating a fitted spatial point process model. For instance, I fitted a Poisson model to a dataset J1a2, which is as follows:
J1a2.points:
# X.1 X Y
1 1 118.544 1638.445
2 2 325.995 1761.223
3 3 681.625 1553.771
4 4 677.392 1816.261
5 5 986.451 1685.016
6 6 1469.093 1354.787
7 7 1608.805 1625.744
8 8 1994.071 1782.391
9 9 1968.669 1375.955
10 10 2362.403 1337.852
11 11 2701.099 1773.924
12 12 2900.083 1820.495
13 13 2963.588 1668.081
14 14 3412.360 1676.549
15 15 3378.490 1456.396
16 16 3721.420 1464.863
17 17 3823.028 1701.951
18 18 4072.817 1790.859
19 19 4089.751 1388.656
20 20 97.375 715.497
21 21 376.799 1033.025
22 22 563.082 1126.166
23 23 935.647 1206.607
24 24 512.277 486.876
25 25 935.647 757.834
26 26 1409.821 410.670
27 27 1435.223 639.290
28 28 1706.180 1045.726
29 29 1968.669 876.378
30 30 2307.365 711.263
31 31 2624.892 897.546
32 32 2654.528 1236.243
33 33 2857.746 423.371
34 34 3039.795 639.290
35 35 3298.050 707.029
36 36 3111.767 1011.856
37 37 3361.555 1227.775
38 38 4047.414 1185.438
39 39 3569.007 508.045
40 40 4250.632 469.942
41 41 4386.110 872.144
42 42 93.141 237.088
43 43 554.614 186.283
44 44 757.832 148.180
45 45 965.283 220.153
46 46 1723.115 296.360
47 47 1744.283 423.371
48 48 1913.631 203.218
49 49 2167.653 292.126
50 50 2629.126 211.685
51 51 3217.610 283.658
52 52 3827.262 325.996
and:
J1a2.Win<-owin(c(0, 4500.42),c(0, 1917.87))
If you draw the envelope for the data with Lest:
library(spatstat)
env.data<-envelope(J1a2, Lest,correction="border",
nsim=19, global=TRUE)
plot(env.data,.-r~r, shade=NULL, legend=FALSE,
xlab=expression(paste("r(",mu,"m)")),ylab="L(r)-r", main = "")
the Lest() curve goes outside the envelope. However, if you use Linhom instead of Lest, you will find the Linhom() curve stays entirely inside the envelope.
This seems to suggest an inhomogeneous intensity in the data, so I used y as a covariate in the fit:
poisson.J1a2 <- ppm(J1a2 ~ 1, Poisson(), correction = "border")
y1.J1a2 <- ppm(J1a2 ~ y, correction = "border")
anova(poisson.J1a2, y1.J1a2, test = "LR")  # p = 0.6484
I don't find any evidence of a spatial trend in intensity along y, or x, or their combinations.
Then why does Linhom() outperform Lest() in this case?
Furthermore, when should one decide to use Linhom() instead of Lest()?
You should first decide whether or not the intensity can be assumed to be constant. To help with this you can look at kernel density estimates or do formal tests such as a quadrat test. If you decide that the intensity can be assumed to be constant, use Lest(); if not, use Linhom().
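For reference, a minimal sketch of those two diagnostics in spatstat (assuming J1a2 is a ppp object on the window defined above):
library(spatstat)
plot(density(J1a2))                 # kernel estimate of the intensity surface: look for visible trend
quadrat.test(J1a2, nx = 4, ny = 2)  # chi-squared test of constant intensity over quadrats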

Aesthetics must be either length 1 or the same as the data: ymin, ymax, x, y, colour, when using a second geom_errorbar function

I'm trying to add error bars to a second curve (using dataset "pmfprofbs01"), but I haven't been able to fix the problem below.
There are a few threads on this error, but unfortunately every answer seems case-specific, and none has helped me overcome the error in my code. I am able to plot a first smoothed curve (stat_smooth) with overlapping error bars (geom_errorbar). The problem arises when I try to add a second curve to the same graph for comparison.
With the following code, I get this error: "Error: Aesthetics must be either length 1 or the same as the data (35): ymin, ymax, x, y, colour"
I am looking to add error bars to the second smoothed curve (corresponding to datasets pmfprof01 and pmfprofbs01).
Could someone explain why I keep getting this error? The code works until the second call to geom_errorbar().
These are my 4 datasets (all used as data frames):
- pmfprof1 and pmfprof01 are the two datasets used for applying the smoothing method.
- pmfprofbs1 and pmfprofbs01 contain additional information based on an error analysis for plotting error bars.
> pmfprof1
Z correctedpmfprof1
1 -1.1023900 -8.025386e-22
2 -1.0570000 6.257110e-02
3 -1.0116000 1.251420e-01
4 -0.9662020 2.143170e-01
5 -0.9208040 3.300960e-01
6 -0.8754060 4.658550e-01
7 -0.8300090 6.113410e-01
8 -0.7846110 4.902430e-01
9 -0.7392140 3.344200e-01
10 -0.6938160 4.002040e-01
11 -0.6484190 1.215460e-01
12 -0.6030210 -1.724360e-01
13 -0.5576240 -6.077170e-01
14 -0.5122260 -1.513420e+00
15 -0.4668290 -2.075330e+00
16 -0.4214310 -2.617160e+00
17 -0.3760340 -3.350500e+00
18 -0.3306360 -4.076220e+00
19 -0.2852380 -4.926540e+00
20 -0.2398410 -5.826390e+00
21 -0.1944430 -6.761300e+00
22 -0.1490460 -7.301530e+00
23 -0.1036480 -7.303880e+00
24 -0.0582507 -7.026800e+00
25 -0.0128532 -6.627960e+00
26 0.0325444 -6.651490e+00
27 0.0779419 -6.919830e+00
28 0.1233390 -6.686490e+00
29 0.1687370 -6.129060e+00
30 0.2141350 -6.120890e+00
31 0.2595320 -6.455160e+00
32 0.3049300 -6.554560e+00
33 0.3503270 -6.983390e+00
34 0.3957250 -7.413500e+00
35 0.4411220 -6.697370e+00
36 0.4865200 -5.477230e+00
37 0.5319170 -4.552890e+00
38 0.5773150 -3.393060e+00
39 0.6227120 -2.449930e+00
40 0.6681100 -2.183190e+00
41 0.7135080 -1.673980e+00
42 0.7589050 -8.003740e-01
43 0.8043030 -2.918780e-01
44 0.8497000 -1.159710e-01
45 0.8950980 9.123767e-22
> pmfprof01
Z correctedpmfprof01
1 -1.25634000 -1.878749e-21
2 -1.20387000 -1.750190e-01
3 -1.15141000 -3.500380e-01
4 -1.09894000 -6.005650e-01
5 -1.04647000 -7.935110e-01
6 -0.99400600 -8.626150e-01
7 -0.94153900 -1.313880e+00
8 -0.88907200 -2.067770e+00
9 -0.83660500 -2.662440e+00
10 -0.78413800 -4.514190e+00
11 -0.73167100 -7.989510e+00
12 -0.67920400 -1.186870e+01
13 -0.62673800 -1.535970e+01
14 -0.57427100 -1.829150e+01
15 -0.52180400 -2.067170e+01
16 -0.46933700 -2.167890e+01
17 -0.41687000 -2.069820e+01
18 -0.36440300 -1.662640e+01
19 -0.31193600 -1.265950e+01
20 -0.25946900 -1.182580e+01
21 -0.20700200 -1.213370e+01
22 -0.15453500 -1.233680e+01
23 -0.10206800 -1.235160e+01
24 -0.04960160 -1.123630e+01
25 0.00286531 -9.086940e+00
26 0.05533220 -6.562710e+00
27 0.10779900 -4.185860e+00
28 0.16026600 -3.087430e+00
29 0.21273300 -2.005150e+00
30 0.26520000 -9.295540e-02
31 0.31766700 1.450360e+00
32 0.37013400 1.123910e+00
33 0.42260100 2.426750e-01
34 0.47506700 1.213370e-01
35 0.52753400 5.265226e-21
> pmfprofbs1
Z correctedpmfprof1 bsmean bssd bsse bsci
1 -1.1023900 -8.025386e-22 0.00000000 0.0000000 0.00000000 0.0000000
2 -1.0570000 6.257110e-02 1.46519200 0.6691245 0.09974719 0.2010273
3 -1.0116000 1.251420e-01 1.62453300 0.6368053 0.09492933 0.1913175
4 -0.9662020 2.143170e-01 1.62111600 0.7200497 0.10733867 0.2163269
5 -0.9208040 3.300960e-01 1.44754700 0.7236743 0.10787900 0.2174158
6 -0.8754060 4.658550e-01 1.67509800 0.7148755 0.10656735 0.2147724
7 -0.8300090 6.113410e-01 1.78144200 0.7374481 0.10993227 0.2215539
8 -0.7846110 4.902430e-01 1.73058700 0.7701354 0.11480501 0.2313743
9 -0.7392140 3.344200e-01 0.97430090 0.7809477 0.11641681 0.2346227
10 -0.6938160 4.002040e-01 1.26812000 0.8033838 0.11976139 0.2413632
11 -0.6484190 1.215460e-01 0.93601510 0.7927926 0.11818254 0.2381813
12 -0.6030210 -1.724360e-01 0.63201080 0.8210839 0.12239996 0.2466809
13 -0.5576240 -6.077170e-01 0.05952252 0.8653050 0.12899205 0.2599664
14 -0.5122260 -1.513420e+00 0.57893690 0.8858471 0.13205429 0.2661379
15 -0.4668290 -2.075330e+00 -0.08164613 0.8921298 0.13299086 0.2680255
16 -0.4214310 -2.617160e+00 -1.08074600 0.8906925 0.13277660 0.2675937
17 -0.3760340 -3.350500e+00 -1.67279700 0.9081813 0.13538367 0.2728479
18 -0.3306360 -4.076220e+00 -2.50074900 1.0641550 0.15863486 0.3197076
19 -0.2852380 -4.926540e+00 -3.12062200 1.0639080 0.15859804 0.3196333
20 -0.2398410 -5.826390e+00 -4.47060100 1.1320770 0.16876008 0.3401136
21 -0.1944430 -6.761300e+00 -5.40812700 1.1471780 0.17101120 0.3446504
22 -0.1490460 -7.301530e+00 -6.42419100 1.1685490 0.17419700 0.3510710
23 -0.1036480 -7.303880e+00 -5.79613500 1.1935850 0.17792915 0.3585926
24 -0.0582507 -7.026800e+00 -5.85496900 1.2117630 0.18063896 0.3640539
25 -0.0128532 -6.627960e+00 -6.70480400 1.1961400 0.17831002 0.3593602
26 0.0325444 -6.651490e+00 -8.27106200 1.3376870 0.19941060 0.4018857
27 0.0779419 -6.919830e+00 -8.79402900 1.3582760 0.20247983 0.4080713
28 0.1233390 -6.686490e+00 -8.35947700 1.3673080 0.20382624 0.4107848
29 0.1687370 -6.129060e+00 -8.04437600 1.3921620 0.20753126 0.4182518
30 0.2141350 -6.120890e+00 -8.18588300 1.5220550 0.22689456 0.4572759
31 0.2595320 -6.455160e+00 -8.37217600 1.5436800 0.23011823 0.4637728
32 0.3049300 -6.554560e+00 -8.59346400 1.6276880 0.24264140 0.4890116
33 0.3503270 -6.983390e+00 -8.88378700 1.6557140 0.24681927 0.4974316
34 0.3957250 -7.413500e+00 -9.72709800 1.6569390 0.24700188 0.4977996
35 0.4411220 -6.697370e+00 -9.46033400 1.6378470 0.24415582 0.4920637
36 0.4865200 -5.477230e+00 -8.37590600 1.6262700 0.24243002 0.4885856
37 0.5319170 -4.552890e+00 -7.52867000 1.6617010 0.24771176 0.4992302
38 0.5773150 -3.393060e+00 -6.89192300 1.6667330 0.24846189 0.5007420
39 0.6227120 -2.449930e+00 -6.25115300 1.6670390 0.24850750 0.5008340
40 0.6681100 -2.183190e+00 -6.05373800 1.6720180 0.24924973 0.5023298
41 0.7135080 -1.673980e+00 -5.10526700 1.6668400 0.24847784 0.5007742
42 0.7589050 -8.003740e-01 -4.42001600 1.6561830 0.24688918 0.4975725
43 0.8043030 -2.918780e-01 -4.26640200 1.6588970 0.24729376 0.4983878
44 0.8497000 -1.159710e-01 -4.46318500 1.6533830 0.24647179 0.4967312
45 0.8950980 9.123767e-22 -5.17173200 1.6557990 0.24683194 0.4974571
> pmfprofbs01
Z correctedpmfprof01 bsmean bssd bsse bsci
1 -1.25634000 -1.878749e-21 0.000000 0.0000000 0.00000000 0.0000000
2 -1.20387000 -1.750190e-01 2.316589 0.4646486 0.07853995 0.1596124
3 -1.15141000 -3.500380e-01 2.320647 0.4619668 0.07808664 0.1586911
4 -1.09894000 -6.005650e-01 2.635883 0.6519826 0.11020517 0.2239639
5 -1.04647000 -7.935110e-01 2.814679 0.6789875 0.11476983 0.2332404
6 -0.99400600 -8.626150e-01 2.588038 0.7324196 0.12380151 0.2515949
7 -0.94153900 -1.313880e+00 2.033736 0.7635401 0.12906183 0.2622852
8 -0.88907200 -2.067770e+00 2.394285 0.8120181 0.13725611 0.2789380
9 -0.83660500 -2.662440e+00 2.465425 0.9485307 0.16033095 0.3258317
10 -0.78413800 -4.514190e+00 0.998115 1.0177400 0.17202946 0.3496059
11 -0.73167100 -7.989510e+00 -1.585430 1.0502190 0.17751941 0.3607628
12 -0.67920400 -1.186870e+01 -5.740894 1.2281430 0.20759406 0.4218819
13 -0.62673800 -1.535970e+01 -9.325951 1.3289330 0.22463068 0.4565045
14 -0.57427100 -1.829150e+01 -12.010540 1.3279860 0.22447060 0.4561792
15 -0.52180400 -2.067170e+01 -14.672770 1.3296720 0.22475559 0.4567583
16 -0.46933700 -2.167890e+01 -14.912250 1.3192610 0.22299581 0.4531820
17 -0.41687000 -2.069820e+01 -12.850570 1.3288470 0.22461614 0.4564749
18 -0.36440300 -1.662640e+01 -6.093746 1.3497100 0.22814263 0.4636416
19 -0.31193600 -1.265950e+01 -5.210692 1.3602240 0.22991982 0.4672533
20 -0.25946900 -1.182580e+01 -6.041660 1.3818700 0.23357866 0.4746890
21 -0.20700200 -1.213370e+01 -5.765808 1.3854680 0.23418683 0.4759249
22 -0.15453500 -1.233680e+01 -6.985883 1.4025360 0.23707185 0.4817880
23 -0.10206800 -1.235160e+01 -7.152865 1.4224030 0.24042999 0.4886125
24 -0.04960160 -1.123630e+01 -3.600538 1.4122650 0.23871635 0.4851300
25 0.00286531 -9.086940e+00 -0.751673 1.5764920 0.26647578 0.5415439
26 0.05533220 -6.562710e+00 2.852910 1.5535620 0.26259991 0.5336672
27 0.10779900 -4.185860e+00 5.398850 1.5915640 0.26902342 0.5467214
28 0.16026600 -3.087430e+00 6.262459 1.6137360 0.27277117 0.5543377
29 0.21273300 -2.005150e+00 8.047920 1.6283340 0.27523868 0.5593523
30 0.26520000 -9.295540e-02 11.168640 1.6267620 0.27497297 0.5588123
31 0.31766700 1.450360e+00 12.345900 1.6363310 0.27659042 0.5620994
32 0.37013400 1.123910e+00 12.124650 1.6289230 0.27533824 0.5595546
33 0.42260100 2.426750e-01 11.279890 1.6137100 0.27276677 0.5543288
34 0.47506700 1.213370e-01 11.531670 1.6311490 0.27571450 0.5603193
35 0.52753400 5.265226e-21 11.284980 1.6662890 0.28165425 0.5723903
The code for plotting both curves is:
deltamean01<-pmfprofbs01[,"bsmean"]-
pmfprofbs01[,"correctedpmfprof01"]
correctmean01<-pmfprofbs01[,"bsmean"]-deltamean01
deltamean1<-pmfprofbs1[,"bsmean"]-
pmfprofbs1[,"correctedpmfprof1"]
correctmean1<-pmfprofbs1[,"bsmean"]-deltamean1
pl<- ggplot(pmfprof1, aes(x=pmfprof1[,1], y=pmfprof1[,2],
colour="red")) +
list(
stat_smooth(method = "gam", formula = y ~ s(x), size = 1,
colour="chartreuse3",fill="chartreuse3", alpha = 0.3),
geom_line(data=pmfprof1,linetype=4, size=0.5,colour="chartreuse3"),
geom_errorbar(aes(ymin=correctmean1-pmfprofbs1[,"bsci"],
ymax=correctmean1+pmfprofbs1[,"bsci"]),
data=pmfprofbs1,colour="chartreuse3",
width=0.02,size=0.9),
geom_point(data=pmfprof1,size=1,colour="chartreuse3"),
xlab(expression(xi*(nm))),
ylab("PMF (KJ/mol)"),
## GCD
geom_errorbar(aes(ymin=correctmean01-pmfprofbs01[,"bsci"],
ymax=correctmean01+pmfprofbs01[,"bsci"]),
data=pmfprofbs01,
width=0.02,size=0.9),
geom_line(data=pmfprof01,aes(x=pmfprof01[,1],y=pmfprof01[,2]),
linetype=4, size=0.5,colour="darkgreen"),
stat_smooth(data=pmfprof01,method = "gam",aes(x=pmfprof01[,1],pmfprof01[,2]),
formula = y ~ s(x), size = 1,
colour="darkgreen",fill="darkgreen", alpha = 0.3),
theme(text = element_text(size=20),
axis.text.x = element_text(size=20,colour="black"),
axis.text.y = element_text(size=20,colour="black")),
scale_x_continuous(breaks=number_ticks(8)),
scale_y_continuous(breaks=number_ticks(8)),
theme(panel.background = element_rect(fill ='white',
colour='gray')),
theme(plot.background = element_rect(fill='white',
colour='white')),
theme(legend.position="none"),
theme(legend.key = element_blank()),
theme(legend.title = element_text(colour='gray', size=20)),
NULL
)
pl
This is the result of using pl: https://i.stack.imgur.com/x8FjY.png
Thanks in advance for any suggestions.
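For what it's worth, the likely cause is aesthetic inheritance: the top-level ggplot() call maps x, y and colour to columns of pmfprof1 (45 rows), and the second geom_errorbar() inherits those mappings while supplying data = pmfprofbs01 (35 rows), hence the length-35 complaint. A sketch of the usual fix (not the full themed figure): map variables by column name in each layer and switch off inheritance where the column names differ. Note that correctmean1 and correctmean01 simplify algebraically to the corrected PMF columns, so the error bars can be centered on those directly:
pl <- ggplot(pmfprof1, aes(Z, correctedpmfprof1)) +
  geom_line(linetype = 4, size = 0.5, colour = "chartreuse3") +
  geom_errorbar(data = pmfprofbs1,
                aes(Z, ymin = correctedpmfprof1 - bsci,
                    ymax = correctedpmfprof1 + bsci),
                inherit.aes = FALSE, colour = "chartreuse3", width = 0.02) +
  geom_line(data = pmfprof01, aes(Z, correctedpmfprof01),
            linetype = 4, size = 0.5, colour = "darkgreen") +
  geom_errorbar(data = pmfprofbs01,
                aes(Z, ymin = correctedpmfprof01 - bsci,
                    ymax = correctedpmfprof01 + bsci),
                inherit.aes = FALSE, colour = "darkgreen", width = 0.02)
pl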

Use vector to make probability table

I'd like to build a probability table showing, for a vector of values, which are divisible by 7 and which by 5 (the marginal probabilities), and the probability of being divisible by 5 given divisibility by 7 (the conditional probability).
Let's assume this is my data:
> prop.table(table(x)) # each discrete value and its probability
20 22 23 24 25 26 27 28 29 30 31
0.000152 0.000625 0.000796 0.001224 0.003138 0.003043 0.004549 0.006444 0.005938 0.009301 0.009456
32 33 34 35 36 37 38 39 40 41 42
0.013448 0.019839 0.018596 0.026613 0.028902 0.027377 0.035156 0.041379 0.041092 0.047733 0.055827
43 44 45 46 47 48 49 50 51 52 53
0.046099 0.051624 0.055131 0.049779 0.056992 0.049801 0.052912 0.031924 0.049114 0.022880 0.042279
54 55 56 57 58 59 61 63 65
0.013946 0.032340 0.003466 0.021240 0.001227 0.011734 0.005115 0.001491 0.000278
How can I turn this into a two-way probability table that shows which numbers are divisible by 7 and/or 5 for marginal and conditional probability?
This is what I'd hope the table to look like (columns: divisible by 7; rows: divisible by 5):
        Yes      No
Yes 0.02754 0.02886
No  0.02656 0.02831
x <- sample(1:100, 100, replace = TRUE)
# %% is the mod operator: it returns the remainder after dividing the left-hand
# side by the right-hand side, so x %% y == 0 is TRUE when x is divisible by y
db5 <- x %% 5 == 0
db7 <- x %% 7 == 0
table(db5, db7) / length(x)
# db7
# db5 FALSE TRUE
# FALSE 0.62 0.13
# TRUE 0.24 0.01
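And the conditional probability asked about, divisible by 5 given divisible by 7, comes straight from the joint table; a minimal sketch:
tab <- table(db5, db7) / length(x)
tab["TRUE", "TRUE"] / sum(tab[, "TRUE"])  # P(divisible by 5 | divisible by 7)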
