NetLogo count links with - count

I have my own breed, computers, and I want to count all computers within a range that are online. The following didn't work because "online" is not available in a link context. How is it done right?
let cnt count link-neighbors with [link-length <= range and online = 1]

There are a couple of ways to do this.
This one is probably the simplest. Remember that link-neighbors returns the computers that the current computer is linked to, not the links themselves. So we can just look at the neighboring computers in range as follows:
count link-neighbors in-radius range with [online = 1]
Alternatively, you could look at the links themselves and use other-end to figure out if the connected computer is online:
count my-links with [link-length <= range and [online = 1] of other-end]
The advantage of this method is that you could use something besides actual physical distance as the range. For instance, if the links had a latency variable that was the time it took for messages to go across them, you could do:
count my-links with [latency <= max-latency and [online = 1] of other-end]

Related

Network flow problem with binary in AMPL?

This is an AMPL model. I'm pretty new to this, and I'm working on a classical logistics problem: a network flow problem where I have to find the least expensive way to transport the available blood donations across a network of cities, with different costs on the edges. So I have to minimize the objective function (it may be easier to understand by reading the code).
I have solved the first task, but now I'm facing the second one, where a fixed cost of 10 must be paid for each edge used for transporting blood donations (in addition to the shipping costs). From what I understand, the change is simple: in practice I just have to add 10 * numberOfEdgesUsed to the objective function. I want to do it the correct way by adding a binary variable for every edge, 1 if the edge is used and 0 if not. I'm pretty new to this kind of programming, and I don't know how to do it.
Any help is welcome. I'm only posting the .mod code; I'm leaving out the .dat file because it isn't necessary.
This is the code of the first task, which I have to modify:
set Cities;
set Origins within (Cities);
set Destinations within (Cities);
set Link within (Cities cross Cities);

param Costs{Link};                     # shipping cost per unit on each edge
param DemSup{Cities};                  # demand/supply at each city
param fixedCost{(i, j) in Link} = 10;  # fixed cost paid once per edge used

var y{Link}, binary;                   # 1 if edge (i,j) is used, 0 otherwise
var Ship{Link} >= 0, <= 1000;          # units shipped on edge (i,j)

minimize Total_Cost: sum{(i,j) in Link} fixedCost[i,j] * y[i,j] + sum {(i,j) in Link} (Costs[i,j] * Ship[i,j]);
subject to Supply {i in Origins}: - sum {(i,k) in Link} Ship[i,k] >= DemSup[i];
subject to Demand {i in Destinations}: sum {(j,i) in Link} Ship[j,i] - sum {(i,k) in Link} Ship[i,k] == DemSup[i];
You need to add the implication: if y = 0 then the corresponding link cannot be used. This can be formulated as the constraint:
subject to LinkUsed {(i,j) in Link}: Ship[i,j] <= 1000 * y[i,j];
(or better, Ship[i,j] <= Ship[i,j].ub * y[i,j], which uses the variable's declared upper bound instead of hard-coding 1000)
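For comparison, here is a minimal sketch of the same fixed-charge ("big-M") linking idea in Python with PuLP, on a made-up three-city instance. The data, variable names, and the 1000 bound are placeholders, not part of the original model, and the supply/demand balance constraints are omitted:
# pip install pulp
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

# Hypothetical data: (origin, destination) -> per-unit shipping cost
cost = {("A", "B"): 3, ("A", "C"): 5, ("B", "C"): 2}
links = list(cost)
fixed_cost = 10        # fixed charge paid once for every edge that carries flow
capacity = 1000        # same upper bound as the Ship variable in the AMPL model

prob = LpProblem("blood_transport_sketch", LpMinimize)
ship = {l: LpVariable(f"ship_{l[0]}_{l[1]}", lowBound=0, upBound=capacity) for l in links}
use = {l: LpVariable(f"use_{l[0]}_{l[1]}", cat=LpBinary) for l in links}

# Objective: variable shipping cost plus the fixed charge on every used edge
prob += lpSum(cost[l] * ship[l] + fixed_cost * use[l] for l in links)

# Big-M linking constraints: if use[l] = 0, no flow may cross edge l
for l in links:
    prob += ship[l] <= capacity * use[l]

# (The supply/demand balance constraints from the AMPL model would go here.)
prob.solve()
print({l: ship[l].value() for l in links})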

Classroom Scheduling Using Graph Algorithm

This is another question from my past midterm, and I am supposed to give a formal formulation, describe the algorithm used, and justify its correctness. Here is the problem:
The University is trying to schedule n different classes. Each class has a start and finish time. All classes have to be taught on Friday. There are only two classrooms available.
Help the university decide whether it is possible to schedule these classes without causing any time conflict (i.e., no two classes with overlapping times are scheduled in the same classroom).
Sort the classes by starting time (O(n log n)), then go through them in order (O(n)), noting start and end times and looking for any point at which more than two classes are going on at the same time.
This isn't a problem with a bipartite graph solution. Who told you it was?
#Beta is nearly correct. Create a list of pairs <START, time> and <END, time>. Each class has two pairs in the list, one START, and one END.
Now sort the list by time. Or, if you like, put them in a min-heap, which amounts to heapsort. For equal times, put the END pairs before the START pairs. Then execute the following loop:
set N = 0
while sorted list not empty
    pop <tag, time> from the head of the list
    if tag == START
        N = N + 1
        if N > 2 return "can't schedule"
    else // tag == END
        N = N - 1
    end
end
return "can schedule"
You can easily enrich the algorithm a bit to return the time periods where more than 2 classes are in session at the same time, return those classes, and other useful information.
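As a concrete illustration (my own sketch, not part of the original answer), here is that sweep in Python; classes is a hypothetical list of (start, finish) pairs and rooms defaults to the two classrooms in the problem:
def can_schedule(classes, rooms=2):
    """Return True if the (start, finish) intervals fit into `rooms` classrooms."""
    events = []
    for start, finish in classes:
        events.append((start, 1))    # a class begins
        events.append((finish, -1))  # a class ends
    # Sort by time; for equal times process END (-1) before START (+1)
    events.sort(key=lambda e: (e[0], e[1]))
    running = 0
    for _, delta in events:
        running += delta
        if running > rooms:
            return False
    return True

print(can_schedule([(9, 10), (9, 11), (10, 12)]))   # True: never more than 2 at once
print(can_schedule([(9, 11), (9, 11), (10, 12)]))   # False: three classes overlap at 10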
This indeed IS a bipartite/bicoloring problem.
Imagine each class as a node of a graph. Create an edge between two nodes if their times overlap. If you can bicolor the resulting graph, then it is possible to schedule all the classes; otherwise it is not.
If the graph you created can be bicolored, then each "black" node is assigned to room 1 and each "white" node to room 2.
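Here is a rough sketch of that bicoloring check in Python (my own illustration, with made-up intervals): build the conflict graph and try to 2-colour it with a breadth-first search.
from collections import deque

def two_room_assignment(classes):
    """classes: list of (start, finish). Returns a room (0/1) per class, or None."""
    n = len(classes)
    # Conflict graph: an edge between every pair of overlapping classes
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            si, fi = classes[i]
            sj, fj = classes[j]
            if si < fj and sj < fi:          # the two intervals overlap
                adj[i].append(j)
                adj[j].append(i)
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:   # odd cycle: not bicolorable
                    return None
    return color

print(two_room_assignment([(9, 10), (9, 11), (10, 12)]))   # [0, 1, 0]: classes 0 and 2 share a room
print(two_room_assignment([(9, 11), (9, 11), (10, 12)]))   # None: all three overlap at 10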

Multi-commodity Flow

I have been working through chapters of questions from the DPV textbook for my exam preparation. For one of them, I'm having some trouble, but I have made some progress:
My solution:
I will be using linear programming to maximize x where
for each commodity i: SUM { flow_i(s_i, v) : v a node adjacent to the source s_i } >= x * d_i
subject to the constraints
for each edge e: f1 + ... + fk <= c_e
for each edge e, flow_e >= 0
and some more constraints that I'm unsure of
I think I'm on the right track but I need some help getting a bit further. Should I be considering trying to use a super-node to reduce this to a regular max flow problem?
Any help would be great! Thank you.
Yes, one typical approach to multi-source, multi-sink commodity flow problems is to introduce a super-source and one super-sink. Then connect all the sources s1...sk to the super-source. Connect each sink t1,...tk to the super-sink.
Important: Give a very large capacity to all the edges leaving or entering any of the super-nodes.
Objective: Maximize the total throughput (the sum of flows over all edges leaving the sources 1..k).
Constraints:
Edge Capacity Constraints:
You already got this right.
for each edge e: f1 + ... + fk <= c_e
Flow Conservation (flow in == flow out):
for each commodity and each vertex v other than its source and sink, the sum of flow into v equals the sum of flow leaving v
Demand Satisfaction:
for each sink t_i, sum of flow into t_i (all edges ending in t_i) >= demand_i
Non-negative flows, which you already have.
Here's one accessible lecture handout that references your specific problem. Specifically, take a look at Example 2 in the handout.
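To make the constraint families above concrete, here is a small sketch in Python with PuLP on a made-up two-commodity instance; the graph, capacities, and demands are placeholders, and the super-source/super-sink trick is not used here, just the plain LP:
# pip install pulp
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

# Hypothetical toy instance: directed edges with capacities, and two
# commodities, each given as (source, sink, demand).
capacity = {("s1", "a"): 10, ("s2", "a"): 10,
            ("a", "t1"): 8, ("a", "t2"): 8}
edges = list(capacity)
nodes = {v for e in edges for v in e}
commodities = {1: ("s1", "t1", 6), 2: ("s2", "t2", 6)}

prob = LpProblem("multicommodity_flow", LpMaximize)
f = {(k, e): LpVariable(f"f_{k}_{e[0]}_{e[1]}", lowBound=0)
     for k in commodities for e in edges}

# Objective: total throughput = flow leaving every commodity's source
prob += lpSum(f[k, e] for k, (s, t, d) in commodities.items()
              for e in edges if e[0] == s)

# Edge capacities shared by all commodities
for e in edges:
    prob += lpSum(f[k, e] for k in commodities) <= capacity[e]

for k, (s, t, d) in commodities.items():
    # Flow conservation at every intermediate node
    for v in nodes:
        if v not in (s, t):
            prob += (lpSum(f[k, e] for e in edges if e[1] == v)
                     == lpSum(f[k, e] for e in edges if e[0] == v))
    # Demand satisfaction at the sink
    prob += lpSum(f[k, e] for e in edges if e[1] == t) >= d

prob.solve()
print("total throughput:", value(prob.objective))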

Getting caught in loops - R

I am looking at whether or not certain 'systems' for betting really do work as claimed, namely, that they have a positive expectation. One such system is based on the rebate on loss. You basically have a large master pot, say $1 million. Your bankroll for each game is $50k.
The way it works is as follows:
Start with $50k, always bet on banker
If you win, add the money to the master pot. Then play again with $50k.
If you lose (say you're now at $30k), play until you either:
(a) hit 0, in which case you get a rebate of 10% and begin playing again with $50k + $5k = $55k.
(b) If you win more than the initial bankroll, add the money to the master pot. Then play again with $50k.
Continue until you double the master pot.
I just can't find an easy way of programming out the possible cases in R, since you can eventually go down an improbable path.
For example, you start at 50k, lose 20, win 19, so now you're at 49k; then you lose 20 and lose 20 again, so now you're at 9k. You either lose the 9k and get back 5k, or you win, and this cycle continues until you either end up with more than 50k, or hit 0, get the rebate on the 50k, and start again with $50k + $5k.
Here's some code I started, but I haven't figured out a good way of handling the cases where you get stuck and keeping track of the number of games played. Thanks again for your help. Obviously, I understand you may be busy and not have time.
p.loss <- 0.4462466
p.win  <- 0.4585974
p.tie  <- 1 - (p.win + p.loss)
prob   <- c(p.win, p.tie, p.loss)

bet  <- 20
x    <- c(19, 0, -20)   # outcome of one hand: win, tie, loss (in $k)
r    <- 10              # rebate = 10% (br.i / r = 5)
br.i <- 50              # bankroll for each game ($k)
br   <- 200             # master pot ($k)

# for (i in 1:100) {   # eventually this should loop until the pot doubles
y     <- sample(x, 1, replace = TRUE, prob)
cbr.i <- y + br.i
if (cbr.i > br.i) {
  # won: move the profit into the master pot and reset the bankroll
  br    <- br + (cbr.i - br.i)
  cbr.i <- br.i
} else {
  # lost: keep playing with the reduced bankroll
  y <- sample(x, 2, replace = TRUE, prob)
  if (sum(y) < -cbr.i) {
    # busted: take the 10% rebate and restart
    cbr.i <- br.i + (1 / r) * br.i
    br    <- br - br.i
  } else {
    cbr.i <- sum(y) + cbr.i
  }
}
if (cbr.i <= bet) {
  # not enough left to cover a full bet: play it out
  y <- sample(x, 1, replace = TRUE, prob)
  if (abs(y) > cbr.i) {
    cbr.i <- br.i + (1 / r) * br.i
  }
}
# }
The way you've phrased the rules doesn't make the game entirely clear to me, but here's some general advice on how you might solve your problem.
First of all, sit down with pen and paper, and see if you can make some progress towards an analytic solution. If the game is sufficiently complicated, this might not be possible, but you may get some more insight into how the game works.
The next step, if that fails, is to run a simulation. This means writing a function that accepts a starting level of player cash, house cash (optionally this could be infinite), and a maximum number of bets to place. It then simulates playing the game, placing bets according to your betting system, until either
i. The player goes broke
ii. The house goes broke
iii. You reach the maximum number of bets. (You need this maximum so you don't get stuck simulating forever.)
The function should return the amount of cash that the player has after all the bets have been placed.
Run this function lots of times, and compare the end cash with the starting cash. The average of end cash / start cash is a measure of your expectation.
Try the simulation with different inputs. (For instance, with many gambling games, even if you could theoretically make an infinite amount of money in the long run, stochastic variation means that you go broke before you get there.)
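For what it's worth, here is a rough skeleton of that simulation loop in Python (the same structure carries over to R); the bet size, payoffs, and probabilities are placeholders taken from the question, and the rebate/master-pot rules still need to be filled in where indicated.
import random

def simulate(player_cash, house_cash, max_bets, bet=20,
             outcomes=(19, 0, -20), probs=(0.4586, 0.0952, 0.4462)):
    """Play until the player or the house goes broke, or max_bets is reached.
    Returns the player's final cash; all numbers are placeholders."""
    for _ in range(max_bets):
        if player_cash < bet or house_cash <= 0:
            break
        result = random.choices(outcomes, weights=probs)[0]  # win / tie / loss
        player_cash += result
        house_cash -= result
        # (the rebate-on-loss and master-pot rules from the question would go here)
    return player_cash

# Estimate the expectation by averaging final/initial cash over many runs
start = 50
runs = [simulate(start, float("inf"), max_bets=1000) for _ in range(10000)]
print(sum(r / start for r in runs) / len(runs))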

Showing too much 'skin' detection in software

I am building an ASP.NET web site where users may upload photos of themselves. There could be thousands of photos uploaded every day. One thing my boss has asked a few times is whether there is any way we could detect if any of the photos are showing too much 'skin' and automatically flag these as 'Adults Only' before the editors make the final decision.
Your best bet is to deal with the image in the HSV colour space (see here for RGB-to-HSV conversion). The colour of skin is pretty much the same across all races; it's mostly the saturation that changes. By working in HSV you can simply search for the colour of skin.
You might do this by simply counting the number of pixels within a colour range, or you could perform region growing around candidate pixels to calculate the size of the skin-coloured areas.
Edit: for dealing with grainy images, you might want to apply a median filter first, and then reduce the number of colours to segment the image. You will have to play around with the settings on a large set of pre-classified (adult or not) images and see how the values behave to get a satisfactory level of detection.
EDIT: Here's some code that should do a simple count (I haven't tested it; it's a quick mashup of some code from here and the RGB-to-HSL conversion from here):
// Uses raw pointers, so it must be compiled with /unsafe and run inside an unsafe block
Bitmap bmp = new Bitmap(_image);
BitmapData bData = bmp.LockBits(new Rectangle(0, 0, _image.Width, _image.Height),
                                ImageLockMode.ReadWrite, bmp.PixelFormat);
byte bitsPerPixel = GetBitsPerPixel(bData.PixelFormat);
byte* scan0 = (byte*)bData.Scan0.ToPointer();
int count = 0;
for (int i = 0; i < bData.Height; ++i)
{
    for (int j = 0; j < bData.Width; ++j)
    {
        byte* data = scan0 + i * bData.Stride + j * bitsPerPixel / 8;
        byte r = data[2];
        byte g = data[1];
        byte b = data[0];
        byte max = (byte)Math.Max(r, Math.Max(g, b));
        byte min = (byte)Math.Min(r, Math.Min(g, b));
        double delta = max - min;
        double h;
        if (delta == 0)
            h = 0;
        else if (max == r)
            h = (60.0 * (g - b) / delta + 360.0) % 360.0;
        else if (max == g)
            h = 60.0 * (b - r) / delta + 120.0;
        else // max == b
            h = 60.0 * (r - g) / delta + 240.0;
        if (h > _lowerThresh && h < _upperThresh)
            count++;
    }
}
bmp.UnlockBits(bData);
Of course, this will fail for the first user who posts a close-up of someone's face (or hand, or foot, or whatnot). Ultimately, all these forms of automated censorship will fail until there's a real paradigm-shift in the way computers do object recognition.
I'm not saying that you shouldn't attempt it nonetheless; but I want to point out these problems. Do not expect a perfect (or even good) solution. It doesn't exist.
I doubt that there exists any off-the-shelf software that can determine if the user uploads a naughty picture. Your best bet is to let users flag images as 'Adults Only' with a button next to the picture. (Clarification: I mean users other than the one who uploaded the picture--similar to how posts can be marked offensive here on StackOverflow.)
Also, consider this review of an attempt to do the same thing in a dedicated product: http://www.dansdata.com/pornsweeper.htm.
Link stolen from today's StackOverflow podcast, of course :).
We can't even write filters that detect dirty words accurately in blog posts, and your boss is asking for a porno detector? CLBUTTIC!
I would say your answer lies in crowdsourcing the task. This almost always works and tends to scale very well.
It doesn't have to involve making some users into "admins" and coming up with different permissions - it can be as simple as enabling an "inappropriate" link near each image and keeping a count.
See the seminal paper "Finding Naked People" by Fleck/Forsyth published in ECCV. (Advanced).
http://www.cs.hmc.edu/~fleck/naked.html
Interesting question from a theoretical / algorithmic standpoint. One approach to the problem would be to flag images that contain large skin-coloured regions (as explained by Trull).
However, the amount of skin shown is not by itself a determinant of an offensive image; it's rather the location of the skin shown. Perhaps you can use face detection (search for algorithms) to refine the results: determine how large the skin regions are relative to the face, and whether they belong to the face (perhaps how far below it they are).
I know either Flickr or Picasa has implemented this. I believe the routine was called FleshFinder.
A tip on the architecture of doing this:
Run this as a Windows service separate from the ASP.NET pipeline. Instead of analyzing images in real time, create a queue of newly uploaded images for the service to work through.
You can use the normal System.Drawing stuff if you want, but if you really need to process a lot of images, it would be better to use native code and a high performance graphics library and P/invoke the routine from your service.
As resources become available, process images in the background and flag suspicious ones for editor review. This should prune down the number of images to review significantly, while not annoying people who upload pictures of skin-coloured houses.
I would approach the problem from a statistical standpoint. Get a bunch of pictures that you consider safe, and a bunch that you don't (that will make for a fun day of research), and see what they have in common. Analyze them all for color range and saturation to see if you can pick out characteristics that all of the naughty photos, and few of the safe ones have.
Perhaps the Porn Breath Test would be helpful - as reported on Slashdot.
Rigan Ap-apid presented a paper at WorldComp '08 on just this problem space. The paper is allegedly here, but the server was timing out for me. I attended the presentation of the paper and he covered comparable systems and their effectiveness as well as his own approach. You might contact him directly.
I'm afraid I can't help point you in the right direction, but I do remember reading about this being done before. It was in the context of people complaining about baby pictures being caught and flagged mistakenly. If nothing else, I can give you the hope that you don't have to invent the wheel all by yourself... Someone else has been down this road!
CrowdSifter by Dolores Labs might do the trick for you. I read their blog all the time as they seem to love statistics and crowdsourcing and like to talk about it. They use amazon's mechanical turk for a lot of their processing and know how to process the results to get the right answers out of things. Check out their blog at the very least to see some cool statistical experiments.
As mentioned above by Bill (and Craig's google quote) statistical methods can be highly effective.
Two approaches you might want to look into are:
Neural Networks
Multi Variate Analysis (MVA)
The MVA approach would be to get a "representative sample" of acceptable pictures and of unacceptable pictures. The X data would be an array of bytes from each picture, the Y would be assigned by you as a 1 for unacceptable and a 0 for acceptable. Create a PLS model using this data. Run new data against the model and see how well it predicts the Y.
Rather than this binary approach you could have multiple Y's (e.g. 0=acceptable, 1=swimsuit/underwear, 2=pornographic)
To build the model you can look at open source software or there are a number of commercial packages available (although they are typically not cheap)
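As a rough illustration of the PLS idea (a sketch under assumptions, not a recommendation of a specific package): in Python, scikit-learn's PLSRegression can be fit on flattened pixel arrays. The folder layout, image size, and number of components below are made up.
# pip install scikit-learn pillow numpy
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.cross_decomposition import PLSRegression

def load_folder(folder, label, size=(64, 64)):
    """Flatten every image in `folder` to a pixel vector; label 1 = unacceptable."""
    rows, labels = [], []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize(size)
        rows.append(np.asarray(img, dtype=np.float64).ravel())
        labels.append(label)
    return rows, labels

# Hypothetical training folders of pre-classified images
ok_x, ok_y = load_folder("train/acceptable", 0)
bad_x, bad_y = load_folder("train/unacceptable", 1)
X = np.array(ok_x + bad_x)
y = np.array(ok_y + bad_y, dtype=np.float64)

model = PLSRegression(n_components=10)
model.fit(X, y)

# Score a new upload: values near 1 suggest "unacceptable", near 0 "acceptable"
new = np.asarray(Image.open("upload.jpg").convert("RGB").resize((64, 64)),
                 dtype=np.float64).ravel()
print(model.predict(new.reshape(1, -1)))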
Because even the best statistical approaches are not perfect, also including user feedback would probably be a good idea.
Good luck (and worst case you get to spend time collecting naughty pictures as an approved and paid activity!)
