2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. Use a loop to find the smallest positive number that is evenly divisible (i.e., leaves remainder 0) by all of the numbers from 1 to 20.
So far I have the following code for this, but it does not work:
divisors = seq(1:20)
divisors
num1 = 2520
while (TRUE) {
  if (num1 %% divisors == 0) {
    print(num1)
    break
  }
  num1 = num1 + 1
}
Also, I need help with computing probabilities in R:
How many draws on average do you need to get one level A prize? To answer this, please generate 1000 games and use a vector to record the number of draws.
For this I have the following code:
set.seed(1)
random_games <- sample(c('A','B'), size=1000, replace=T, prob=c(0.2,0.8))
random_games
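Note that this samples 1000 single draws rather than recording how many draws each game takes. A minimal sketch of what the exercise seems to ask, assuming P(A) = 0.2 per draw as in the code above: the number of draws until the first A is geometric, and rgeom counts the failures before the first success.
set.seed(1)
draws <- rgeom(1000, prob = 0.2) + 1  # +1 to count the successful draw itself
mean(draws)  # close to the theoretical mean 1/0.2 = 5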
What's the probability of obtaining a level A prize within 5 draws? Calculate the theoretical value.
This one I am confused about how to calculate.
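One way to reason about it: the only way to not see an A within 5 draws is to draw B five times in a row, so (assuming independent draws with P(A) = 0.2) the theoretical value is 1 - 0.8^5 = 0.67232. The geometric CDF gives the same number:
pgeom(4, prob = 0.2)  # P(first A within 5 draws) = 1 - 0.8^5 = 0.67232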
Since the while loop takes ages to solve this problem, we can instead use recursion with gcd and lcm:
# gcd of all elements of a vector x: the largest w that divides every element
gcd = function(x) {
  w = 1:min(x)
  max(w[sapply(w, function(y) all(!x %% y))])
}
# lcm of a vector, computed recursively two elements at a time:
# lcm(a, b) = a * b / gcd(a, b)
lcm = function(x) {
  if (length(x) > 2) lcm(c(prod(x[1:2]) / gcd(x[1:2]), x[-(1:2)]))
  else prod(x) / gcd(x)
}
lcm(c(1:20))
[1] 232792560
When you check the remainder for 1:20, you get 20 booleans as a result, and you have to make sure they are all TRUE. You can do that using all().
divisors = 1:20
divisors
num1 = 2520
while (TRUE) {
  if (all(num1 %% divisors == 0)) {
    print(num1)
    break
  }
  num1 = num1 + 1
}
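This loop still needs over 200 million iterations. One easy speedup follows directly from the problem statement: any number divisible by all of 1:20 is in particular divisible by all of 1:10, and therefore by 2520, so you can step in increments of 2520 rather than 1:
num1 <- 2520
while (!all(num1 %% 1:20 == 0)) {
  num1 <- num1 + 2520  # candidates must themselves be multiples of 2520
}
num1
# [1] 232792560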
or do
gcd <- function(x, y) {
  while (y) {
    z = x %% y
    x = y
    y = z
  }
  return(x)
}
Reduce(function(x, y) x * y %/% gcd(x, y), 3:20)
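Starting the fold at 3 is safe here because the running lcm picks up 4 (and hence 2) along the way, and 1 divides everything. A quick sanity check of the result:
result <- Reduce(function(x, y) x * y %/% gcd(x, y), 3:20)
result                       # [1] 232792560
all(result %% (1:20) == 0)   # TRUE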
The function dice takes a parameter n, representing the number of rolls of a single six-sided die. It returns a vector of length n whose elements are integers between 1 and 6. I have created the following code for the dice function below. It seems to run properly when I test it.
dice <- function(n) {
  x <- 1:6
  # sample n values uniformly from 1:6; passing prob = x here would
  # weight each face by its value and make the die unfair
  sample(x, size = n, replace = TRUE)
}
The function kdice takes two parameters, n and k. The parameter n denotes the number of rolls performed, and k the number of dice rolled each time. The function should return the sums of the k dice, rolled n times, and somehow I have to use dice() within this function. Below is what I have completed thus far; however, the function returns nothing. I have an if and else statement to make sure that at least 1 die is rolled at least 1 time, and the while loop is meant to output the sum of NumofDice until RollCount reaches n. I would appreciate any insights, especially on how to incorporate the function dice() into kdice().
kdice <- function(k, n){
  NumofDice <- sample(1:6, size = k, replace = TRUE)
  RollCount = 0
  if (k > 0 && n > 0) {
    while (RollCount < n) {
      RollCount = RollCount + 1
      sum(NumofDice)
    }
  }
  else {
    print("No number of dices were rolled")
  }
}
kdice <- function(k, n){
  if (k > 0 && n > 0) {
    replicate(n, sum(sample(1:6, k, replace = TRUE)))
  }
  else {
    print("No number of dices were rolled")
  }
}
kdice(4,2)
[1] 15 8
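If you specifically want to reuse dice() inside kdice(), as the assignment asks, the same replicate idea works; this is a sketch assuming the dice() function defined above:
kdice <- function(k, n) {
  if (k > 0 && n > 0) {
    replicate(n, sum(dice(k)))  # roll k dice via dice(), sum them, repeat n times
  } else {
    print("No number of dices were rolled")
  }
}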
You can try defining kdice using replicate + colSums like below
kdice <- function(k, n) {
  tryCatch(
    colSums(matrix(replicate(n, dice(k)), nrow = k)),
    error = function(e) print("No number of dices were rolled")
  )
}
which gives results like
> kdice(4, 5)
[1] 17 14 22 13 11
> kdice(4, 0)
[1] "No number of dices were rolled"
I was given a task to write a function, which I name my_mode_k.
The input consists of two arguments, (x, k), where x is a vector of natural numbers of length n, and the largest element of x can be at most k, given that k < n.
The output of my_mode_k is the most frequent element of x. If more than one element occurs in x the same maximal number of times, the function outputs the smallest of them.
For example:
my_mode_k(x = c(1, 1, 2, 3, 3), k = 3)
[1] 1
This is the code I wrote:
my_mode_k <- function(x, k){
  n <- length(x)
  x_lemma <- rep(0, k)
  for (i in 1:n){
    x_lemma[i] < x_lemma[i] + 1
  }
  x_lem2 <- 1
  for (j in 2:k){
    if (x_lemma[x_lem2] < x_lemma[j]){
      x_lem2 <- j
    }
  }
  x_lem2
}
which isn't working properly.
For example:
my_mode_k(x = c(2,3,4,3,2,2,5,5,5,5,5,5,5,5), k=5)
[1] 1
as the function is supposed to return 5.
I don't understand why, or what intuition to have in order to even recognize that a function is not working properly (it took me some time to realize that it wasn't executing the needed task), so that I can fix the mistake in it.
Two bugs explain the behavior: the counting loop uses the comparison operator < where the assignment <- was intended, so x_lemma never changes from all zeros, and it indexes the tally by the position i rather than by the value x[i]. Because x_lemma stays all zeros, the second loop never updates x_lem2, and the function always returns 1. Here are a few steps on how you can achieve the task instead.
k <- 5
input <- c(2,3,4,3,3,3,3,3,3,3,2,2,5,5,5,5,5,5,5,5)
# Calculate the frequencies of the elements.
tbl <- table(input[input <= k])
# Find which is the max. Notice that which.max returns the first maximum,
# and table() sorts by value, so ties resolve to the minimum.
tbl.max <- which.max(tbl)
# Find which value is your result.
names(tbl.max)
input <- c(2,2,3,3,3,5,5,5)
names(which.max(table(input[input <= k])))
# 3
input <- c(2,2,5,5,5,3,3,3)
names(which.max(table(input[input <= k])))
# 3
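Putting the steps together into the required function (a sketch; the as.numeric is only needed if you want a number rather than a character name back):
my_mode_k <- function(x, k) {
  # tabulate values up to k, take the first (i.e. smallest) maximum
  as.numeric(names(which.max(table(x[x <= k]))))
}
my_mode_k(x = c(1, 1, 2, 3, 3), k = 3)
# [1] 1
my_mode_k(x = c(2, 3, 4, 3, 2, 2, 5, 5, 5, 5, 5, 5, 5, 5), k = 5)
# [1] 5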
I have an event A that is triggered when the majority of coin tosses in a series of tosses comes up heads. I have an unfair coin and I'd like to see how the likelihood of A changes as the number of tosses changes and as the probability of heads in each toss changes.
This is my function, assuming 3 tosses:
n <- 3
# victory requires a majority of the tosses to be heads;
# tosses only occur in odd numbers, so a majority is at least (n+1)/2 heads
k <- seq(n/2 + .5, n)
victory <- function(n, k, p){
  for (i in p) {
    x <- 0
    for (i in k) {
      x <- x + choose(n, k) * p^k * (1-p)^(n-k)
    }
    z <- x
  }
  return(z)
}
p <- seq(0,1,.1)
victory(n,k,p)
My hope is that the victory() function would:
find the probability of each of the outcomes where the majority of tosses are heads, given a particular value of p
sum up those probabilities and add them to a vector z
go back and do the same thing given another probability p
I tested this with n <- 3, k <- c(2,3), and p <- c(.5, .75), and the output was 0.75000, 0.84375. I know that the output should have been 0.625, 0.0984375.
I wasn't able to get exactly the result you wanted, but maybe I can help you along a bit.
When looping in R, the vector you are looping through remains unchanged while the value you are using to loop changes. For example, see the differences between these loops:
test <- seq(0, 1, length.out = 5)
for (i in test){
  print(test)
}
for (i in test){
  print(i)
}
for (i in 1:length(test)){
  print(test[i])
}
When you iterate, you first set i to each number in p, then reuse i for each number in k (overwriting it), and inside the body you use the unchanged vectors n, k, and p rather than the current loop values.
You are also assigning to z on every pass of the outer loop over p and then overwriting it, so only the result for the last probability survives.
Try using the below. I am still not getting the answer you say, but it might help you find where the error is (printing out along the way or using debug(victory) might also be helpful).
victory <- function(n, k, p){
  z <- list()
  for (i in 1:length(p)) {
    x <- 0
    for (j in 1:length(k)) {
      x <- x + choose(n, k[j]) * p[i]^k[j] * (1-p[i])^(n-k[j])
    }
    z[i] <- x
  }
  return(z)
}
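As a cross-check, the whole calculation can be done without loops using the binomial CDF, since pbinom is vectorized over p. A sketch, assuming n is odd so that a majority means at least (n+1)/2 heads:
victory_vec <- function(n, p) {
  # P(majority heads) = P(X >= ceiling(n/2)) for X ~ Binomial(n, p)
  pbinom(ceiling(n / 2) - 1, size = n, prob = p, lower.tail = FALSE)
}
victory_vec(3, c(0.5, 0.75))
# [1] 0.50000 0.84375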
The idea of Project Euler question 12 is to find the smallest triangular number with a specified number of divisors (https://projecteuler.net/problem=12). As an attempt to solve this problem, I wrote the following code:
# This function finds the divisors of a number x and returns them as a vector
# (the caller takes length() of the result to count them).
FUN <- function(x) {
  i = 1
  lst = integer(0)
  while (i <= x) {
    if (x %% i == 0) {
      lst = c(lst, i)
    }
    i = i + 1
  }
  return(lst)
}
and
n = 1
i = 1
while (length(FUN(n)) < 500) {
  i = i + 1
  n = n + i
}
This code produces the correct answer for a few smaller test cases: length(FUN(n)) < 4 will produce 6, and length(FUN(n)) < 6 will produce 28.
However, this simple-looking code has been running for over 24 hours (and is still running) for length(FUN(n)) < 500. I understand that for a number to have 500 divisors it is probably very big, but I am wondering why it is taking so long to run.
Your FUN is much too inefficient for this task. The first triangular number with more than 500 divisors lies beyond the 12,000th, with a value above 75,000,000, and FUN trial-divides by every number up to each candidate, so the number of iterations to perform is almost
12000 * 75000000 / 2 = 450 * 10^9
This is clearly more than R's relatively slow loops can do in a reasonable time frame.
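Even before reaching for a package, trial division can be made far cheaper: it only needs to run up to sqrt(x), because divisors come in pairs d and x/d. A sketch of such a counter:
count_divisors <- function(x) {
  s <- floor(sqrt(x))
  d <- (1:s)[x %% (1:s) == 0]   # divisors up to sqrt(x)
  2 * length(d) - (s * s == x)  # each pairs with x/d; don't count sqrt(x) twice
}
count_divisors(28)  # 6: the divisors are 1, 2, 4, 7, 14, 28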
Instead, you could apply the divisors function from the numbers package, which employs a prime factor decomposition. The following code needs about 5-6 seconds (on my machine) to find the triangular number.
library(numbers)
t <- 0
system.time(
  for (i in 1:100000) {
    t <- t + i
    d <- length(divisors(t))
    if (d > 500) {
      cat(i, t, d, '\n')
      break
    }
  }
)
## 12375 76576500 576
## user system elapsed
## 5.660 0.000 5.658
Instead of calculating the i-th triangular number, here i is added to the last triangular number. The time saving is minimal.
Here's my attempt:
library(gmp)
library(plyr)
get_all_factors <- function(n)
{
  prime_factor_tables <- lapply(
    setNames(n, n),
    function(i)
    {
      if (i == 1) return(data.frame(x = 1L, freq = 1L))
      plyr::count(as.integer(gmp::factorize(i)))
    }
  )
  lapply(
    prime_factor_tables,
    function(pft)
    {
      powers <- plyr::alply(pft, 1, function(row) row$x ^ seq.int(0L, row$freq))
      power_grid <- do.call(expand.grid, powers)
      sort(unique(apply(power_grid, 1, prod)))
    }
  )
}
for (i in 99691200:100000) {
  n_factors <- length(get_all_factors(i)[[1]])  # compute the factor list once per i
  if (n_factors > 500) print(paste(i, n_factors))
  if (i %% 100000 == 0) print(paste("-", i, "-"))
}
Let it run as long as you can be bothered...
In the New York Times yesterday there was a reference to a paper essentially saying that the probability of 'heads' after a 'head' appears is not 0.5 (assuming a fair coin), challenging the "hot hand" myth. I want to prove it to myself.
Thus, I am working on coding a simulation of 7 coin tosses, and counting the number of heads after the first head, provided, naturally, that there is a first head at all.
I came up with the following lines of R code, but I'm still getting NA values, and would appreciate some help:
n <- 7            # number of tosses
p <- 0.5          # probability of heads
sims <- 100       # number of simulations
Freq_post_H <- 0  # frequency of heads after the first head
for (i in 1:sims){
  z <- rbinom(n, 1, p)
  if (sum(z == 1) != 0){
    y <- which(z == 1)[1]
    Freq_post_H[i] <- sum(z[(y+1):n]) / length((y+1):n)
  } else {
    next()
  }
  Freq_post_H
}
Freq_post_H
What am I missing?
CONCLUSION: After the initial hiccups of mismatched variable names, both responses solve the question. One of the answers corrects a problem in the initial code with indexing past the last toss (y + 1 when the first head is the last toss) by introducing min(y + 1, n), and corrects the basic misunderstanding that next within a loop leaves NA for skipped iterations. So thank you (+1).
Critically, and this is the reason for this appended "conclusion": the second response addresses a more fundamental, conceptual problem. We want to calculate the fraction of H's that are preceded by an H, as opposed to p(H) in whatever number of tosses remain after a head has appeared, which will be 0.5 for a fair coin.
This is a simulation of what they did in the newspaper:
nsims <- 10000
k <- 4
set.seed(42)
sims <- replicate(nsims, {
  x <- sample(0:1, k, TRUE)
  # print(x)
  sum(                       # sum logical values, i.e. 0/1
    diff(x) == 0L &          # is the difference between consecutive values 0?
    x[-1] == 1L) /           # and are these values heads?
    sum(head(x, -1) == 1L)   # divide by the number of heads (without the last toss)
})
mean(sims, na.rm = TRUE) # NaN cases are samples without heads, i.e. 0/0
#[1] 0.4054715
k <- 7
sims <- replicate(nsims, {
  x <- sample(0:1, k, TRUE)
  # print(x)
  sum(diff(x) == 0L & x[-1] == 1L) / sum(head(x, -1) == 1L)
})
mean(sims, na.rm = TRUE)
#[1] 0.4289402
n <- 7            # number of tosses
p <- 0.5          # probability of heads
sims <- 100       # number of simulations
Prob_post_H <- 0  # frequency of heads after the first head
for (i in 1:sims){
  z <- rbinom(n, 1, p)
  if (sum(z == 1) != 0){
    y <- which(z == 1)[1]
    Prob_post_H[i] <- mean(z[min(y+1, n):n], na.rm = TRUE)
  } else {
    next()
  }
}
mean(Prob_post_H,na.rm=TRUE)
#[1] 0.495068
It looks like it's right around 50%. We can scale up to see more simulations.
sims <- 10000
mean(Prob_post_H,na.rm=TRUE)
#[1] 0.5057866
Still around 50%.
This is to simulate 100 fair coin tosses 30,000 times
counter <- 1
coin <- sum(rbinom(100, 1, 0.5))
while (counter < 30000){
  coin <- c(coin, sum(rbinom(100, 1, 0.5)))
  counter <- counter + 1
}
Try these after running the code above:
hist(coin)
str(coin)
mean(coin)
sd(coin)
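As an aside, growing a vector inside a while loop is slow in R. Since the sum of 100 fair Bernoulli draws is a single Binomial(100, 0.5) draw, the whole simulation collapses to one vectorized call:
coin <- rbinom(30000, size = 100, prob = 0.5)  # 30,000 sums of 100 tosses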
Below is some sample code to simulate a fair coin toss in R using the sample function. You can modify it as you like to simulate any number of flips. Since the outcome of each flip is independent, the probability of a head or a tail is always 0.5 for any given flip. Over many coin flips, the probability that at least half of the flips are heads (or tails) converges to 0.5, while the probability of getting exactly half heads and half tails approaches 0.
n <- 7
count_heads <- 0
coin_flip <- sample(c(0, 1), n, replace = TRUE)
for (flip_i in 1:n)
{
  if (coin_flip[flip_i] == 1)
  {
    count_heads = count_heads + 1
  }
}
count_heads/n
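Since heads are coded as 1, the counting loop is equivalent to taking the mean of the vector:
mean(coin_flip)  # same proportion of heads, without the loop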