I am trying to simulate a multiple-server, single-queue (M/M/c) model using R. I previously wrote a simulation of a single-server, single-queue (M/M/1) model, but I have no idea how to change it into an M/M/c model. Here is the code for the M/M/1 simulation:
lambda = 2      # Arrival rate
mu = 3          # Service rate
time = 0
simtime = 1000  # Simulation time
arrive = 0
service = 0
s.start = 0     # Service start time
t.service = 0   # Total service time
s.end = 0       # Service end time
customer = 0    # Customer counter
while (time < simtime) {
  service <- rexp(1, mu)              # Draw this customer's service time
  if (customer == 0) {                # First customer: service starts at arrival
    arrive <- rexp(1, lambda)
    s.start <- arrive
    customer <- customer + 1
  } else {                            # Later customers: wait if the server is busy
    arrive <- arrive + rexp(1, lambda)
    s.start <- max(arrive, s.end)
    customer <- customer + 1
  }
  t.service <- t.service + service
  s.end <- s.start + service
  time <- arrive
}
Can anybody offer some suggestions on how to change the code to account for multiple servers? Thanks in advance.
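One way to generalize this (a sketch, not a drop-in patch: c.servers and s.ends are illustrative names that don't appear in the code above): for M/M/c, replace the single s.end with a vector of c service-end times, one per server, and start each arrival's service on whichever server frees up first.

lambda <- 2                   # Arrival rate
mu <- 3                       # Service rate of each server
c.servers <- 3                # Number of servers (the "c" in M/M/c)
simtime <- 1000               # Simulation time
arrive <- 0
t.service <- 0                # Total service time
s.ends <- rep(0, c.servers)   # Time at which each server next becomes free
while (arrive < simtime) {
  arrive <- arrive + rexp(1, lambda)   # Next arrival
  service <- rexp(1, mu)               # Its service time
  k <- which.min(s.ends)               # Earliest-free server
  s.start <- max(arrive, s.ends[k])    # Wait only if all servers are busy
  s.ends[k] <- s.start + service
  t.service <- t.service + service
}

With c.servers set to 1, s.ends collapses to the single s.end and this reduces to the original M/M/1 logic.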
I'm trying to create a simplified Gusek model, using more or less fictitious parameters, to calculate the ideal cloud model for a VM infrastructure based on a few parameters. The model should represent three different VM types, each used in a specific part of the year for a specific time range. The VMs are used separately; neither scale-up nor scale-out is part of the model.
I have two choices:
Buy physical machine(s) with equal parameters as a VM type
Rent the VM from a cloud service provider
Other parameters:
VMs in the cloud model have only OpEx costs.
Physical machines have CapEx costs and somewhat lower OpEx costs.
Physical machines have an estimated lifetime, after which reinvestment is needed.
The problem is that I'm stuck in the middle of the solution, which currently looks like this:
/*
The default measurement units in this model:
-time: days
-price: Euros
*/
set VMs;
set Models;
param OpEx{VMs, Models};
param CapEx{VMs, Models} default 0;
param Usage{VMs};
param MachineLifeTime;
var Choice{VMs, Models} binary;
s.t. EachVmOnlyOnce{m in Models}:
sum(vm in VMs) Choice[vm,m] = 1;
s.t. ExactlyOneVmPerModel{vm in VMs}:
sum(m in Models) Choice[vm,m] = 1;
# TODO: Minimize total cost: CapEx + OpEx
minimize TotalCost: sum{v in VMs, m in Models} Usage[v] * (CapEx[v,m] / MachineLifeTime + OpEx[v,m]);
solve;
printf "\n\n";
# Here I'd like to print the solution in a more eye-readable manner
data;
set VMs:= Standard_A5 Standard_B4ms Standard_NV6;
set Models:= Cloud OnPremise;
param MachineLifeTime:= 3650;
param OpEx:=
Cloud OnPremise
Standard_A5 8 2
Standard_B4ms 4 1
Standard_NV6 27 3
;
param CapEx:=
Cloud OnPremise
Standard_A5 . 4000
Standard_B4ms . 2000
Standard_NV6 . 13500
;
param Usage:=
Standard_A5 200
Standard_B4ms 100
Standard_NV6 65
;
I'm not even sure the constraints are right; besides, when I run the model, I get the following error:
Context: ...s } binary ; s.t. ExactlyOneVmPerModel { vm in VMs } : sum (
MathProg model processing error
I'd appreciate any help! :)
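The error context points at sum ( and that is a likely culprit: in MathProg/GMPL, sum takes its indexing expression in braces, not parentheses, so sum(vm in VMs) and sum(m in Models) are syntax errors. The two-dimensional param blocks in the data section also need their column labels on the param line itself. A sketch of the corrected pieces (not a tested full model):

s.t. ExactlyOneVmPerModel{vm in VMs}:
    sum{m in Models} Choice[vm,m] = 1;

param OpEx:       Cloud OnPremise :=
Standard_A5           8         2
Standard_B4ms         4         1
Standard_NV6         27         3;

For the eye-readable output, a post-solve statement such as printf{v in VMs, m in Models: Choice[v,m] > 0.5} "%s -> %s\n", v, m; should work. Note also that EachVmOnlyOnce{m in Models} forces each model to be picked for exactly one VM, which is infeasible with three VMs and two models; if the intent is just "exactly one model per VM", ExactlyOneVmPerModel alone should suffice.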
I have the following code which, because of Excel's max-row limitation, is restricted to ~1 million rows:
ZStream.unwrap(generateStreamData).mapMPar(32) { m =>
  streamDataToCsvExcel
}
All fairly straightforward, and it works perfectly. I keep track of the number of rows streamed and then stop writing data. However, I want to interrupt all the child fibers spawned in mapMPar, something like this:
ZStream.unwrap(generateStreamData).interruptWhen(effect.true).mapMPar(32) { m =>
  streamDataToCsvExcel
}
Unfortunately the process is interrupted immediately here. I'm probably missing something obvious...
Editing the post as it needs some clarity.
My stream of data is generated by an expensive process: data is pulled from a remote server (where it is itself produced by an expensive process) using n fibers.
I then process the stream and stream the results out to the client.
Once the processed row count reaches ~1 million, I need to stop pulling data from the remote server (i.e. interrupt all the fibers) and end the process.
Here's what I can come up with after your clarification. The ZIO 1.x version is a bit uglier because of the lack of .dropRight.
Basically, we can use takeUntilM to accumulate the size of the elements we've received and stop once we reach the maximum size (then use .dropRight, or an additional filter, to discard the last element that took it over the limit).
This ensures both that:
You only run streamDataToCsvExcel up to the last possible message before hitting the size limit.
Because streams are lazy, expensiveQuery only runs for as many messages as fit within the limit (or N+1 if the last value is discarded because it would go over the limit).
import zio._
import zio.stream._
object Main extends zio.App {
override def run(args: List[String]): URIO[zio.ZEnv, ExitCode] = {
val expensiveQuery = ZIO.succeed(Chunk(1, 2))
val generateStreamData = ZIO.succeed(ZStream.repeatEffect(expensiveQuery))
def streamDataToCsvExcel = ZIO.unit
def count(ref: Ref[Int], size: Int): UIO[Boolean] =
ref.updateAndGet(_ + size).map(_ > 10)
for {
counter <- Ref.make(0)
_ <- ZStream
.unwrap(generateStreamData)
.takeUntilM(next => count(counter, next.size)) // Count size of messages and stop when it's reached
.filterM(_ => counter.get.map(_ <= 10)) // Filter last message from `takeUntilM`. Ideally should be .dropRight(1) with ZIO 2
.mapMPar(32)(_ => streamDataToCsvExcel)
.runDrain
} yield ExitCode.success
}
}
If relying on the laziness of streams doesn't work for your use case, you can trigger an interrupt of some sort from the takeUntilM condition.
For example, you could update the count function to something like:
def count(ref: Ref[Int], size: Int): UIO[Boolean] =
  ref.updateAndGet(_ + size).map(_ > 10)
    .tapSome { case true => someFiber.interrupt }
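Here someFiber would be the handle obtained when forking the producers. A minimal sketch of that wiring in ZIO 1.x (pullFromRemote is a placeholder for the expensive fetch, not something from the code above):

import zio._
import zio.duration._

object InterruptExample extends zio.App {
  // Placeholder for the expensive remote pull.
  val pullFromRemote = ZIO.sleep(10.millis)

  override def run(args: List[String]): URIO[ZEnv, ExitCode] =
    for {
      fiber <- pullFromRemote.forever.fork   // producers run on their own fiber
      _     <- ZIO.sleep(1.second)           // the streaming work would happen here
      _     <- fiber.interrupt               // what someFiber.interrupt does above
    } yield ExitCode.success
}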
Is there a way to use a static variable shared across VUs in k6?
Say
// init code
let x = 0 // I want this to be static
// options
export let options = {
vus : 10,
iterations : 10
};
// VU code
export default function() {
x++;
console.log(x);
}
When I run this, I want the output to count up from 1 to 10, but instead 1 is printed 10 times (once for each VU).
In k6, each VU is a separate, independent JS runtime, so you essentially have 10 copies of x. There is no way around that with stock k6 for now; you'd have to use an external service as an incrementing counter, reached via HTTP or something like that. Alternatively, if you run k6 locally and only on a single instance, you can use the xk6-counter extension: https://github.com/MStoykov/xk6-counter. It was originally a PoC developed for https://community.k6.io/t/unique-test-data-per-vu-without-reserving-data-upfront/1136/3, but it can easily be extended.
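If you go the external-counter route, the VU code would look something like this (a sketch; http://counter.example.com/increment is a made-up endpoint assumed to atomically increment and return the shared count):

import http from 'k6/http';

export let options = {
  vus: 10,
  iterations: 10,
};

export default function () {
  // Every VU hits the same external counter, so the value
  // increments globally instead of per-VU.
  const res = http.get('http://counter.example.com/increment');
  console.log(res.body);
}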
I'm trying to add x objects via a simple for-loop to a distributed Hazelcast queue (IQueue).
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
BlockingQueue<String> configs = hazelcastInstance.getQueue("test");
for (int i = 0; i < 1000; i++) {
    configs.add("Some string" + i);
}
Changing the values of backup-count and async-backup-count in the config (see below) doesn't have any influence on the execution speed. I'd assume that increasing backup-count would block the insert operations, while increasing async-backup-count would not (the loop should then run through about as quickly as if add were called on a local queue). However, the time to execute the for-loop is the same, even if I set both values to 0. Why is that (it's a two-node cluster, with one node on a different VM)?
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation=
"http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<network>
<port auto-increment="true" port-count="20">5701</port>
<join>
<multicast enabled="false">
</multicast>
<tcp-ip enabled="true">
<member>172.105.66.xx</member>
</tcp-ip>
</join>
</network>
<queue name="test">
<statistics-enabled>false</statistics-enabled>
<max-size>0</max-size>
<backup-count>0</backup-count>
<async-backup-count>1</async-backup-count>
<empty-queue-ttl>-1</empty-queue-ttl>
</queue>
</hazelcast>
Async backups don't block your calls, so there should be minimal difference between setting 0 and 1; any higher value is meaningless on a two-node cluster.
What makes the difference is whether the owner of the partition holding your data structure is local or remote. Performance issues like this are usually caused by the network latency between the caller (your test) and the data structure's owner (the remote Hazelcast instance).
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
IQueue<String> configs = hazelcastInstance.getQueue("test");
for (int i = 0; i < 1000; i++) {
    configs.add("Some string" + i);
}
// Check whether this member owns the partition backing the queue
Member localMember = hazelcastInstance.getCluster().getLocalMember();
Member partitionOwner = hazelcastInstance.getPartitionService()
        .getPartition(configs.getName()).getOwner();
boolean localCall = localMember.equals(partitionOwner);
System.out.println("Local calls to IQueue: " + localCall);
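If the owner turns out to be remote, one way to soften the latency cost (my suggestion, not part of the answer above) is to batch the inserts with addAll so that many items travel per network call instead of one:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;
import java.util.ArrayList;
import java.util.List;

public class BatchedInsert {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> configs = hz.getQueue("test");
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            batch.add("Some string" + i);
            if (batch.size() == 100) {  // send 100 items per call
                configs.addAll(batch);
                batch.clear();
            }
        }
        configs.addAll(batch);          // flush the remainder
    }
}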
I am building a lineup simulator that uses absorbing Markov chains to simulate the number of runs a given lineup would score. There is a different transition matrix for each of the 9 players in the lineup, and one game is simulated using the following function:
simulate.half.inning9 <- function(P1,P2,P3,P4,P5,P6,P7,P8,P9,R,start=1){
  Ps <- list(P1,P2,P3,P4,P5,P6,P7,P8,P9)  # one 25x25 matrix per lineup spot
  runs <- 0; zz <- 1; inn <- 1
  while (inn < 10) {                      # nine innings
    s <- start                            # reset the base-out state each inning
    while (s < 25) {                      # state 25 = three outs (absorbing)
      P <- Ps[[zz]]                       # current batter's transition matrix
      s.new <- sample(1:25, 1, prob = P[s, ])
      runs <- runs + R[s, s.new]          # runs scored on this transition
      s <- s.new
      zz <- ifelse(zz == 9, 1, zz + 1)    # next batter, wrapping after the 9th
    }
    inn <- inn + 1
  }
  runs
}
mat1 through mat9 are the individual 25x25 transition matrices (and yes, I know I should use a list!). I then simulate 1000 seasons' worth of games (162 games x 1000 = 162,000) with the call below, to let the average settle toward the "true" number.
RUNS <- replicate(162000, simulate.half.inning9(mat1, mat2, mat3, mat4, mat5,
                                                mat6, mat7, mat8, mat9, R))
R is a matrix that tells the function how many runs are scored on a transition from one state to another.
So my question is: is there a way to shortcut this system and get the "true" expected runs for each lineup without running it 1000 times? The goal is to see which lineup produces the most runs in expectation.
Since there are 9! = 362,880 possible lineups, running each one 1000 times isn't feasible.
Thank you!
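One way to avoid the replication entirely (a sketch, assuming the matrices are collected as Ps <- list(mat1, ..., mat9)): because the chain is absorbing, the expected runs from every (state, batter) pair satisfy the linear system E[s,z] = sum over s' of P_z[s,s'] * (R[s,s'] + E[s',z+1]), with E[25,z] = 0 and z+1 wrapping from 9 back to 1. Solving it gives the exact expected runs per half-inning for a given leadoff batter in one solve() call per lineup, with no simulation noise:

# Exact expected runs for one half-inning, by leadoff batter.
# Ps: list of nine 25x25 transition matrices; R: 25x25 run matrix.
expected.runs <- function(Ps, R, start = 1) {
  idx <- function(s, z) (z - 1) * 24 + s   # flatten (state, batter) pairs
  n <- 24 * 9                              # 24 transient states x 9 batters
  A <- diag(n); b <- numeric(n)
  for (z in 1:9) {
    P <- Ps[[z]]
    znext <- if (z == 9) 1 else z + 1
    for (s in 1:24) {
      b[idx(s, z)] <- sum(P[s, ] * R[s, ])  # expected runs on the next play
      for (sp in 1:24)                      # couple to the next batter's states
        A[idx(s, z), idx(sp, znext)] <- A[idx(s, z), idx(sp, znext)] - P[s, sp]
    }
  }
  E <- solve(A, b)                          # E[s, z] for all transient (s, z)
  E[idx(start, 1)]                          # leadoff batter is lineup spot 1
}

For a full nine-inning game you would additionally have to propagate the distribution of each inning's leadoff batter from one inning to the next, but the same linear-system idea applies, and it lets you rank all 362,880 lineups without Monte Carlo error.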