Simple Gusek model, to minimize cost of a VM infrastructure - math

I'm trying to create a simplified Gusek model, using more or less fictitious parameters, to find the cheapest deployment model for a VM infrastructure. The model should represent three different VM types, each used in a specific part of the year for a specific time range. VMs are used separately; neither scale-up nor scale-out is part of the model.
I have two choices:
Buy physical machine(s) with equal parameters as a VM type
Rent the VM from a cloud service provider
Other parameters:
VMs in the cloud model have only OpEx costs.
Physical machines have CapEx costs and somewhat lower OpEx costs.
Physical machines have an estimated lifetime, after which reinvestment is needed.
The problem is that I got stuck in the middle of the solution, which currently looks like this:
/*
The default measurement units in this model:
-time: days
-price: Euros
*/
set VMs;
set Models;
param OpEx{VMs, Models};
param CapEx{VMs, Models} default 0;
param Usage{VMs};
param MachineLifeTime;
var Choice{VMs, Models} binary;
s.t. EachVmOnlyOnce{m in Models}:
sum(vm in VMs) Choice[vm,m] = 1;
s.t. ExactlyOneVmPerModel{vm in VMs}:
sum(m in Models) Choice[vm,m] = 1;
# TODO: Minimize total cost: CapEx + OpEx
minimize TotalCost: sum{v in VMs, m in Models} Usage[v] * (CapEx[v,m] / MachineLifeTime + OpEx[v,m]);
solve;
printf "\n\n";
# Here I'd like to print the solution in a more eye-readable manner
data;
set VMs:= Standard_A5 Standard_B4ms Standard_NV6;
set Models:= Cloud OnPremise;
param MachineLifeTime:= 3650;
param OpEx:=
Cloud OnPremise
Standard_A5 8 2
Standard_B4ms 4 1
Standard_NV6 27 3
;
param CapEx:=
Cloud OnPremise
Standard_A5 . 4000
Standard_B4ms . 2000
Standard_NV6 . 13500
;
param Usage:=
Standard_A5 200
Standard_B4ms 100
Standard_NV6 65
;
I'm not even sure about the constraints; besides, when I run the model, I get the following error:
Context: ...s } binary ; s.t. ExactlyOneVmPerModel { vm in VMs } : sum (
MathProg model processing error
I'd appreciate any help! :)
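A few things stand out before the math. MathProg writes indexing with braces, not parentheses, so sum(vm in VMs) must be sum{vm in VMs}; that is exactly where the quoted parser error stops. With three VMs but only two models, EachVmOnlyOnce (one VM per model) and ExactlyOneVmPerModel (one model per VM) cannot both hold, so only the per-VM constraint is needed. The tabular data blocks also need a colon before the column headers (param OpEx : Cloud OnPremise :=). Once the coupling constraint is dropped, each VM picks its model independently, so the objective arithmetic can be sanity-checked outside GLPK. A plain Python sketch using the data above (variable names are mine):

```python
# Amortized daily cost per (VM, model): CapEx spread over the machine lifetime
# plus OpEx, multiplied by the days of usage -- mirrors the TotalCost objective.
LIFETIME = 3650  # days

opex = {"Standard_A5":   {"Cloud": 8,  "OnPremise": 2},
        "Standard_B4ms": {"Cloud": 4,  "OnPremise": 1},
        "Standard_NV6":  {"Cloud": 27, "OnPremise": 3}}
capex = {"Standard_A5":   {"Cloud": 0, "OnPremise": 4000},
         "Standard_B4ms": {"Cloud": 0, "OnPremise": 2000},
         "Standard_NV6":  {"Cloud": 0, "OnPremise": 13500}}
usage = {"Standard_A5": 200, "Standard_B4ms": 100, "Standard_NV6": 65}

def cost(vm, model):
    return usage[vm] * (capex[vm][model] / LIFETIME + opex[vm][model])

# With no coupling between VMs, per-VM minimization solves the whole model.
choice = {vm: min(("Cloud", "OnPremise"), key=lambda m: cost(vm, m))
          for vm in usage}
total = sum(cost(vm, m) for vm, m in choice.items())
```

With these particular numbers every VM type comes out cheaper on-premise, which is a useful cross-check against whatever the fixed MIP reports.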

Related

Static variable for VUs in K6

Is there a way to use a static variable shared across VUs in K6?
Say
// init code
let x = 0 // I want this to be static
// options
export let options = {
    vus: 10,
    iterations: 10
};

// VU code
export default function () {
    x++;
    console.log(x);
}
When I run this piece of code, the output should be incremental (1 to 10), not 1 printed 10 times (once per VU).
In k6, each VU is a separate, independent JS runtime, so you essentially have 10 copies of x. There is no way around that with stock k6 for now; you have to use some external service as an incrementing counter, reached via HTTP or similar. Alternatively, if you run k6 locally and only on a single instance, you can use this xk6 extension: https://github.com/MStoykov/xk6-counter. It was originally a PoC developed for https://community.k6.io/t/unique-test-data-per-vu-without-reserving-data-upfront/1136/3 , but it can easily be extended.
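To make the "external service as an incrementing counter" suggestion concrete, here is a minimal sketch of such a service in Python (the port and the idea of a /next endpoint are my own choices, not anything k6 prescribes): every VU that GETs it receives the next integer from one global sequence.

```python
# Minimal shared-counter HTTP service: each GET returns the next integer,
# so all k6 VUs hitting it see a single global sequence.
import itertools
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

counter = itertools.count(1)
lock = threading.Lock()  # serialize increments across handler threads

class CounterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with lock:
            n = next(counter)
        body = str(n).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def serve(port=8808):
    """Start the counter service in a background thread; returns the server."""
    srv = HTTPServer(("127.0.0.1", port), CounterHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

On the k6 side each VU would then do an http.get against this service instead of incrementing a local variable.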

How do I limit the number of concurrent processes spawned by Proc::Async in Perl 6?

I want to process a list of files in a subtask in my script, and I'm using Proc::Async to spawn the subprocesses doing the work. The downside is that with a large list of files, it will spawn many subprocesses at once. How can I limit the number of concurrent subprocesses that Proc::Async spawns?
You can explicitly limit the number of Proc::Async processes using this react block technique which Jonathan Worthington demonstrated in his concurrency/parallelism/asynchrony talk at the 2019 German Perl Workshop (see slide 39, for example). I'm using the Linux command echo N as my "external process" in the code below.
#!/bin/env perl6
my @items = <foo bar baz>;
for @items -> $item {
    start { say "Planning on processing $item" }
}

# Run 2 processes at a time
my $degree = 2;
react {
    # Start $degree processes at first
    run-one-process for 1..$degree;

    # Run one, run-one again when it ends, thus maintaining $degree active processes at a time
    sub run-one-process {
        my $item = @items.shift // return;
        my $proc = Proc::Async.new('echo', "processing $item");
        my @output;

        # Capture output
        whenever $proc.stdout.lines { push @output, $_; }

        # Print all the output, then start the next process
        whenever $proc.start {
            @output.join("\n").say;
            run-one-process
        }
    }
}
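For comparison, the same throttling idea, a fixed-size cap on concurrently running external processes, can be sketched in Python with asyncio. A semaphore plays the role of the react block's bookkeeping; echo and the item names are placeholders just as above.

```python
# Cap concurrent subprocesses at 2 using a semaphore: a new process only
# starts once one of the 2 "slots" is released.
import asyncio

async def run_one(sem: asyncio.Semaphore, item: str) -> str:
    async with sem:  # waits here while 2 processes are already running
        proc = await asyncio.create_subprocess_exec(
            "echo", f"processing {item}",
            stdout=asyncio.subprocess.PIPE)
        out, _ = await proc.communicate()
        return out.decode().strip()

async def main(items):
    sem = asyncio.Semaphore(2)  # at most 2 concurrent subprocesses
    return await asyncio.gather(*(run_one(sem, i) for i in items))

if __name__ == "__main__":
    print("\n".join(asyncio.run(main(["foo", "bar", "baz"]))))
```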
Old Answer:
Based on Jonathan Worthington's talk Parallelism, Concurrency, and Asynchrony in Perl 6 (video, slides), this sounds most like parallelism (i.e. choosing to do multiple things at once; see slide 18). Asynchrony is reacting to things in the future, the timing of which we cannot control; see slides 39 and 40. As #raiph pointed out in his comment you can have one, the other, or both.
If you care about the order of results, then use hyper, but if the order isn't important, then use race.
In this example, adapted from Jonathan Worthington's slides, you build a pipeline of steps in which data is processed in batches of 32 filenames using 4 workers:
sub MAIN($data-dir) {
    my $filenames = dir($data-dir).race(batch => 32, degree => 4);
    my $data = $filenames.map(&slurp);
    my $parsed = $data.map(&parse-climate-data);
    my $european = $parsed.grep(*.continent eq 'Europe');
    my $max = $european.max(by => *.average-temp);
    say "$max.place() is the hottest!";
}

Simulating multiple server (MMC) queue using R programming

I am trying to simulate a multiple server, single queue (MMC) model using R. I have previously written one simulating a single server, single queue (MM1) model, but I have no idea how to change it to an MMC model. Here is the code for the MM1 simulation:
lambda=2 #Arrival rate
mu=3 #Service rate
time = 0
simtime = 1000 #Simulation time
arrive = 0
service = 0
s.start = 0 #Service start time
t.service = 0 #Total service time
s.end=0 #Service end time
complete=0
customer=0 #Number of customers so far
while(time<simtime){
  service<-rexp(1,mu)
  if(customer==0){
    arrive<-rexp(1,lambda)
    s.start<-arrive
    customer<-customer+1
  } else {
    arrive<-arrive+rexp(1,lambda)
    s.start<-max(arrive,s.end)
    customer<-customer+1
  }
  t.service<-t.service+service
  s.end<-s.start+service
  time<-arrive
}
Can anybody offer some suggestions on how to change the code to account for multiple servers? Thanks in advance.
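Not an answer in R, but the core change is small enough to show as a sketch (here in Python): instead of a single s.end, keep one next-free time per server and always hand the arriving customer to the earliest-free server. The parameter values mirror the MM1 code above; the structure carries over to R directly.

```python
# M/M/c sketch: exponential(lambda) interarrival times, exponential(mu)
# service times, c servers tracked as a min-heap of next-free times.
import heapq
import random

def simulate_mmc(lam=2.0, mu=3.0, c=2, simtime=1000.0, seed=1):
    random.seed(seed)
    free_at = [0.0] * c            # next-free time of each server
    heapq.heapify(free_at)
    t, served, total_wait = 0.0, 0, 0.0
    while t < simtime:
        t += random.expovariate(lam)          # next arrival
        earliest = heapq.heappop(free_at)     # earliest-free server
        start = max(t, earliest)              # wait only if all c are busy
        end = start + random.expovariate(mu)  # service completion
        heapq.heappush(free_at, end)
        total_wait += start - t
        served += 1
    return served, total_wait / served

served, avg_wait = simulate_mmc()
```

With c=1 this collapses back to the MM1 logic above (start = max(arrive, s.end)), so it can be validated against the existing code first.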

Updating edge attributes of a large dense graph

I have a large and dense graph whose edge attributes are updated using the following code. Briefly, I set the edge attributes based on some calculations on values fetched from other dictionaries (degdict, pifeadict, nodeneidict etc.). My smallest graph has 15 million edges. When execution reaches this stage, CPU usage dips as low as 10% and memory climbs to 69%. For larger graphs, my process gets killed at 90% memory usage. I am not sure where things are going wrong.
In addition to fixing this memory problem, I also need to speed up this loop, if possible - perhaps, a parallel solution to update the edge attributes. Please suggest solutions.
for fauth, sauth in Gcparam.edges_iter():
    first_deg = degdict[fauth]
    sec_deg = degdict[sauth]
    paval = float(first_deg * sec_deg) / float(currmaxdeg * currmaxdeg)
    try:
        f2 = dmpdict[first_deg][sec_deg]
    except KeyError:
        f2 = 0.0
    try:
        pival = pifeadict[first_deg][sec_deg]
    except KeyError:
        pival = 0.0
    delDval = float(abs(first_deg - sec_deg)) / (float(currmaxdeg) * delT)
    f5 = calc_comm_kws(fauth, sauth, kwsdict)
    avg_ndeg = getAvgNeiDeg(fauth, sauth, nodeneidict, currmaxdeg) / delT
    prop = getPropensity(fauth, sauth, nodeneidict, currmaxdeg, Gparam) / delT
    tempdict = {'years': [year], 'pa': [paval],
                'dmp': [f2], 'pi': [pival], 'deld': [delDval],
                'delndeg': [avg_ndeg], 'delprop': [prop],
                'ck': [f5]}
    Gcparam[fauth][sauth].update(tempdict)
You can estimate the amount of storage you need for the data on each edge like this:
In [1]: from pympler.asizeof import asizeof
In [2]: tempdict = {'years': [1900], 'pa': [1.0],
                    'dmp': [2.0], 'pi': [3.0], 'deld': [7],
                    'delndeg': [3.4], 'delprop': [7.5],
                    'ck': [22.0]}
In [3]: asizeof(tempdict)
Out[3]: 1000
So it looks like 1000 bytes is a lower bound for what you are doing. Multiply that by the number of edges for the total: for your smallest graph, 1000 bytes x 15 million edges is already about 15 GB, which would explain the memory pressure you're seeing.
NetworkX also has some overhead for the node and edge data structures, which depends on what type of object you use for nodes; integers are smallest.

AX 2009: Adjusting User Group Length

We're looking into refining our User Groups in Dynamics AX 2009 into more precise and fine-tuned groupings, due to the wide range of variability between specific people within the same department. With this plan, it wouldn't be uncommon for the majority of our users to fall under 5+ user groups.
Part of this would involve us expanding the default length of the User Group ID from 10 to 40 (as per Best Practice for naming conventions) since 10 characters don't give us enough room to adequately name each group as we would like (again, based on Best Practice Naming Conventions).
We have found that the main information seems to come from the UserGroupInfo table, but that table isn't present under the Data Dictionary (it's under the System Documentation, so by my understanding it can't be changed that way). We've also found the UserGroupName EDT, but that is already set at 40 characters. The form itself doesn't seem to be restricting the length of the field either. We've discussed changing the field directly in SQL, but again, my understanding is that a full synchronization would overwrite this change.
Where can we go to change this particular setting, or is it possible to change?
The size of the user group id is defined as a system extended data type (here \System Documentation\Types\userGroupId) and you cannot change any of its properties, including the string size of 10.
You should live with that; don't try to fake the system using direct SQL changes. Even if you did, AX would still believe the length is 10.
You could change the SysUserInfo form to show the group name only. The groupId might as well be assigned by a number sequence in your context.
I wrote a job to change the string size via X++, and it works for EDTs, but it can't seem to find "userGroupId". From the general feel I get from AX, I'd guess the system types are just stored in a different location, but maybe not. I wonder if this could be tweaked to work:
static void Job9(Args _args)
{
    #AOT
    TreeNode treeNode;
    Struct propertiesExt;
    Map mapNewPropertyValues;

    void setTreeNodePropertyExt(
        Struct _propertiesExt,
        Map _newProperties
        )
    {
        Counter propertiesCount;
        Array propertyInfoArray;
        Struct propertyInfo;
        str propertyValue;
        int i;
        ;
        _newProperties.insert('IsDefault', '0');
        propertiesCount = _propertiesExt.value('Entries');
        propertyInfoArray = _propertiesExt.value('PropertyInfo');
        for (i = 1; i <= propertiesCount; i++)
        {
            propertyInfo = propertyInfoArray.value(i);
            if (_newProperties.exists(propertyInfo.value('Name')))
            {
                propertyValue = _newProperties.lookup(propertyInfo.value('Name'));
                propertyInfo.value('Value', propertyValue);
            }
        }
    }
    ;
    treeNode = TreeNode::findNode(#ExtendedDataTypesPath);
    // This doesn't seem to be able to find the system type
    //treeNode = treeNode.AOTfindChild('userGroupId');
    treeNode = treeNode.AOTfindChild('AccountCategory');
    propertiesExt = treeNode.AOTgetPropertiesExt();
    mapNewPropertyValues = new Map(Types::String, Types::String);
    mapNewPropertyValues.insert('StringSize', '30');
    setTreeNodePropertyExt(propertiesExt, mapNewPropertyValues);
    treeNode.AOTsetPropertiesExt(propertiesExt);
    treeNode.AOTsave();
    info("Done");
}