There is a specific bitwise logical op on three bitmaps I would like to achieve in Flex/AS3. It would presumably have to be done with some combination of DisplayObject blend modes, ColorMatrix filters, and possibly something else I haven't thought of, as Flex/AS3 doesn't have bitwise logical ops on pixels (except for BlendMode.INVERT).
So here is the operation:
(B & S) | (F & ~S)
[where B,S,F are three bitmaps]
That's it.
Incidentally, I tried this with Pixel Bender, but incredibly, it doesn't have bitwise logical ops either. (What exactly were you thinking, Adobe?) So I simulated them with about 50 modulos and divides, but it came out way too slow. (Could be because I didn't have a supported graphics card, which brings up another question: if Pixel Bender only works with certain cards, how do you find out at runtime from Flex/AS3 whether the browser computer has a supported card?)
But anyway, my main question is how to perform that trivial little bitwise op above in Flex/AS3. (It would have to be as fast as BlendModes).
I'm not very experienced using bitwise operations, but I threw together a quick test case using the fancy new Flash Player 10 vectors, which are very nice for this type of data wrangling.
This runs through the 2000x2000 pixels in 115ms using the standalone debug player on my computer, it'll likely be a bit faster in the release player.
I'm not sure if this is fast enough since I don't know how often or on how large images you need to run it, but it might at least be a starting point.
package {
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.geom.Rectangle;
    import flash.text.TextField;
    import flash.utils.getTimer;

    /**
     * ...
     * @author Martin Jonasson
     */
    public class Test extends Sprite {

        public function Test() {
            var output:TextField = new TextField();
            output.autoSize = "left";
            addChild(output);

            var bmpB:BitmapData = new BitmapData(2000, 2000, false, 0xff00ff);
            var bmpS:BitmapData = new BitmapData(2000, 2000, false, 0xffffff);
            var bmpF:BitmapData = new BitmapData(2000, 2000, false, 0x000000);

            var rect:Rectangle = new Rectangle(0, 0, 2000, 2000);
            var vecB:Vector.<uint> = bmpB.getVector(rect);
            var vecS:Vector.<uint> = bmpS.getVector(rect);
            var vecF:Vector.<uint> = bmpF.getVector(rect);
            var vecFinal:Vector.<uint> = new Vector.<uint>(vecB.length, true);

            var startTime:int = getTimer();
            for (var i:int = vecB.length - 1; i >= 0; --i) {
                // (B & S) | (F & ~S), applied per pixel
                vecFinal[i] = (vecB[i] & vecS[i]) | (vecF[i] & ~vecS[i]);
            }
            output.appendText("bitwise stuff done, took: " + (getTimer() - startTime) + "ms \n");

            startTime = getTimer();
            var bmpFinal:BitmapData = new BitmapData(2000, 2000, false);
            bmpFinal.setVector(rect, vecFinal);
            output.appendText("created bitmapdata, took: " + (getTimer() - startTime) + "ms \n");
        }
    }
}
To grapefrukt:
With 1000x1000 bitmaps it's 47 ms on my system, so this is looking very promising (approaching standard refresh rates with some tweaks, one would hope).
I learned Flex 2 first, so didn't know about Vectors, which are evidently just highly efficient one-type only arrays.
I need to start paying it forward at some point, because this forum is amazing (4 for 4 on some fairly obscure questions since I first found out about it 10 days ago.)
Thanks again.
Elden Ring is a hit game that has some interesting theorycrafting behind it.
There are hundreds of armor pieces, weapons, and spells. Finding optimal combinations of them based on player & item stats is an interesting practical problem.
I've always wanted to learn how to use constraint solvers, and it seems a good use case exists!
Goal:
Given a list of all armor in the game in JSON format
Find the set of armor (head, chest, legs, arms) that has:
The highest POISE and PHYSICAL_DEFENSE
For the lowest WEIGHT
Here is the repo:
https://github.com/GavinRay97/elden-ring-optimizer-optaplanner
My attempt so far:
Put the armor data here
Created data class that matches JSON
Created @PlanningEntity class for a combination of armor
Created @PlanningSolution class (not sure if this is correct)
Tried to write Solver, doesn't work
Update
I managed to solve it after advice below
The trick was to switch to using @PlanningEntityProperty:
@PlanningSolution
public class ArmorSetComboPlanningSolution {

    public List<ArmorPiece> armorPieces;
    public Map<Integer, List<ArmorPiece>> armorByType;

    @ValueRangeProvider(id = "headRange")
    @ProblemFactCollectionProperty
    public List<ArmorPiece> headList;

    @ValueRangeProvider(id = "chestRange")
    @ProblemFactCollectionProperty
    public List<ArmorPiece> chestList;

    @ValueRangeProvider(id = "armsRange")
    @ProblemFactCollectionProperty
    public List<ArmorPiece> armsList;

    @ValueRangeProvider(id = "legsRange")
    @ProblemFactCollectionProperty
    public List<ArmorPiece> legsList;

    @PlanningEntityProperty
    public ArmorSet armorSet;

    @PlanningScore(bendableHardLevelsSize = 1, bendableSoftLevelsSize = 5)
    BendableLongScore score;

    ArmorSetComboPlanningSolution() {
    }

    ArmorSetComboPlanningSolution(List<ArmorPiece> armorPieces) {
        this.armorPieces = armorPieces;
        this.armorByType = armorPieces.stream().collect(groupingBy(ArmorPiece::armorCategoryID));
        this.headList = armorByType.get(0);
        this.chestList = armorByType.get(1);
        this.armsList = armorByType.get(2);
        this.legsList = armorByType.get(3);
        // Need to initialize a starting value
        this.armorSet = new ArmorSet(0L, this.headList.get(0), this.chestList.get(0), this.armsList.get(0), this.legsList.get(0));
    }
}
Then the scorer:
public class ArmorSetEasyOptimizer implements EasyScoreCalculator<ArmorSetComboPlanningSolution, BendableLongScore> {

    private final int TARGET_POISE = 61;
    private final double MAX_WEIGHT = 60.64;

    public ArmorSetEasyOptimizer() {
    }

    @Override
    public BendableLongScore calculateScore(ArmorSetComboPlanningSolution solution) {
        long hardScore = 0L;
        ArmorSet armorSet = solution.armorSet;

        if (armorSet.getTotalPoise() < TARGET_POISE) {
            hardScore--;
        }
        if (armorSet.getTotalWeight() > MAX_WEIGHT) {
            hardScore--;
        }

        long poiseRatio = (long) (armorSet.getTotalPoise() / (double) armorSet.getTotalWeight() * 100);
        long physicalDefenseScaled = (long) (armorSet.getTotalPhysicalDefense() * 100);
        long physicalDefenseToWeightRatio = (long) (physicalDefenseScaled / armorSet.getTotalWeight());
        long magicDefenseScaled = (long) (armorSet.getTotalMagicDefense() * 100);
        long magicDefenseToWeightRatio = (long) (magicDefenseScaled / armorSet.getTotalWeight());

        return BendableLongScore.of(
                new long[]{
                        hardScore
                },
                new long[]{
                        poiseRatio,
                        physicalDefenseScaled,
                        physicalDefenseToWeightRatio,
                        magicDefenseScaled,
                        magicDefenseToWeightRatio
                }
        );
    }
}
Results
19:02:12.707 [main] INFO org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase - Local Search phase (1) ended: time spent (10000), best score ([0]hard/[179/3540/97/2750/75]soft), score calculation speed (987500/sec), step total (4046).
19:02:12.709 [main] INFO org.optaplanner.core.impl.solver.DefaultSolver - Solving ended: time spent (10000), best score ([0]hard/[179/3540/97/2750/75]soft), score calculation speed (985624/sec), phase total (2), environment mode (REPRODUCIBLE), move thread count (NONE).
[0]hard/[179/3540/97/2750/75]soft
ArmorSet (Weight: 36.3, Poise: 65, Physical: 35.4, Phys/Weight: 0.97, Magic: 27.5, Magic/Weight: 0.75 ) [
head: Radahn Soldier Helm (Weight: 4.0, Poise: 5),
chest: Veteran's Armor (Weight: 18.9, Poise: 37),
arms: Godskin Noble Bracelets (Weight: 1.7, Poise: 1),
legs: Veteran's Greaves (Weight: 11.7, Poise: 22)
]
This is a bit of a strange problem, because you only need one planning entity instance. There only ever needs to be one ArmorSet object - and the solver will be assigning different armor pieces as it comes closer and closer to an optimal combination.
Therefore your easy score calculator doesn't ever need to do any looping. It simply takes the single ArmorSet's weight and poise and creates a score out of it.
However, even though I think this use case may be useful as a learning path towards constraint solvers, some sort of brute-force algorithm could work as well, since your data set isn't too large. More importantly, with an exhaustive algorithm such as brute force, you're eventually guaranteed to reach the optimal solution.
(That said, if you want to enhance the problem with matching these armor sets to particular character traits, then it may well become complex enough for brute force to be inadequate.)
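For reference, the exhaustive version is short to write. Below is a minimal brute-force sketch; it assumes the ArmorPiece/ArmorSet classes and accessors shown in the question (getTotalWeight, getTotalPoise and friends) and maximises poise per unit of weight under the same hard limits as the EasyScoreCalculator, which is a simplification of the question's scoring, so plug in whatever objective you actually care about.
import java.util.List;

public class ArmorBruteForce {

    static final int TARGET_POISE = 61;     // same limits as the scorer above
    static final double MAX_WEIGHT = 60.64;

    public static ArmorSet findBest(List<ArmorPiece> heads, List<ArmorPiece> chests,
                                    List<ArmorPiece> arms, List<ArmorPiece> legs) {
        ArmorSet best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (ArmorPiece h : heads)
            for (ArmorPiece c : chests)
                for (ArmorPiece a : arms)
                    for (ArmorPiece l : legs) {
                        ArmorSet candidate = new ArmorSet(0L, h, c, a, l);
                        if (candidate.getTotalWeight() > MAX_WEIGHT) continue;  // hard constraint
                        if (candidate.getTotalPoise() < TARGET_POISE) continue; // hard constraint
                        double score = candidate.getTotalPoise() / candidate.getTotalWeight();
                        if (score > bestScore) {
                            bestScore = score;
                            best = candidate;
                        }
                    }
        return best;
    }
}
With a few hundred pieces per slot that's on the order of 10^8 combinations, which a plain loop like this gets through in a matter of seconds to minutes.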
On a personal note, I attempted Elden Ring and found it too hardcore for me. :-) I prefer games that guide you a bit more.
Is it possible to create a single gravity / force point in matter.js that is at the center of x/y coordinates?
I have managed to do it with d3.js but wanted to enquire about matter.js as it has the ability to use multiple polyshapes.
http://bl.ocks.org/mbostock/1021841
The illustrious answer has arisen:
Not sure if there is any interest in this. I'm a fan of what you have created. In my latest project I used matter-js, but I needed elements to gravitate to a specific point, rather than in a general direction. That was very easily accomplished. I was wondering if you are interested in that feature as well; it would not break anything.
All one has to do is set engine.world.gravity.isPoint = true, and then the gravity vector is used as a point rather than a direction. One might set:
engine.world.gravity.x = 355;
engine.world.gravity.y = 125;
engine.world.gravity.isPoint = true;
and all objects will gravitate to that point.
If this is not within the scope of this engine, I understand. Either way, thanks for the great work.
You can do this with the matter-attractors plugin. Here's their basic example:
Matter.use(
    'matter-attractors' // PLUGIN_NAME
);

var Engine = Matter.Engine,
    Events = Matter.Events,
    Runner = Matter.Runner,
    Render = Matter.Render,
    World = Matter.World,
    Body = Matter.Body,
    Mouse = Matter.Mouse,
    Common = Matter.Common,
    Bodies = Matter.Bodies;

// create engine
var engine = Engine.create();

// create renderer
var render = Render.create({
    element: document.body,
    engine: engine,
    options: {
        width: Math.min(document.documentElement.clientWidth, 1024),
        height: Math.min(document.documentElement.clientHeight, 1024),
        wireframes: false
    }
});

// create runner
var runner = Runner.create();
Runner.run(runner, engine);
Render.run(render);

// create demo scene
var world = engine.world;
world.gravity.scale = 0;

// create a body with an attractor
var attractiveBody = Bodies.circle(
    render.options.width / 2,
    render.options.height / 2,
    50,
    {
        isStatic: true,
        // example of an attractor function that
        // returns a force vector that applies to bodyB
        plugin: {
            attractors: [
                function(bodyA, bodyB) {
                    return {
                        x: (bodyA.position.x - bodyB.position.x) * 1e-6,
                        y: (bodyA.position.y - bodyB.position.y) * 1e-6,
                    };
                }
            ]
        }
    });

World.add(world, attractiveBody);

// add some bodies to be attracted
for (var i = 0; i < 150; i += 1) {
    var body = Bodies.polygon(
        Common.random(0, render.options.width),
        Common.random(0, render.options.height),
        Common.random(1, 5),
        Common.random() > 0.9 ? Common.random(15, 25) : Common.random(5, 10)
    );
    World.add(world, body);
}

// add mouse control
var mouse = Mouse.create(render.canvas);

Events.on(engine, 'afterUpdate', function() {
    if (!mouse.position.x) {
        return;
    }
    // smoothly move the attractor body towards the mouse
    Body.translate(attractiveBody, {
        x: (mouse.position.x - attractiveBody.position.x) * 0.25,
        y: (mouse.position.y - attractiveBody.position.y) * 0.25
    });
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/matter-js/0.12.0/matter.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/matter-attractors#0.1.6/build/matter-attractors.min.js"></script>
Historical note: the "gravity point" functionality was proposed as a feature in MJS as PR #132, but it was closed, with the author of MJS (liabru) offering the matter-attractors plugin as an alternative. At the time of writing, the answer here quoting that PR misleadingly seems to indicate that the functionality from the PR was in fact merged.
Unfortunately, the attractors plugin hasn't been updated in about 6 years at the time of writing, and it raises a warning when used with a newer version of MJS than 0.12.0. From the discussion in issue #11, it sounds like it's OK to ignore the warning and use this plugin with, for example, 0.18.0. Here's the warning:
matter-js: Plugin.use: matter-attractors@0.1.4 is for matter-js@^0.12.0 but installed on matter-js@0.18.0.
Behavior seemed fine on a cursory glance, but I'll keep 0.12.0 in the above example to silence it anyway. If you do update to a recent version, note that Matter.World is deprecated and should be replaced with Matter.Composite and engine.gravity.
I'm trying to make a fake download count. It should increment randomly over time. Some download-count-like patterns would be nice.
Is this possible without using a database, or storing a counter anywhere?
My idea is to check the number of seconds that have passed since my app was released. Then just throw that into a formula which spits out the fake download count. Users can request to see the download count at any time.
Is there a math function that increments randomly? I could just pass my secondsPassed into there and scale it how I'd like.
Something like this: getDownloadCount(secondsPassed)
Edit: here's an example solution, but its performance gets worse over time.
downloadCount = 0
loop secondsPassed/60 times // Loop one more time for every minute passed
downloadCount += seededRandom(0, 10)
Making a fake download count doesn't sound like a nice thing to do. However in designing secure communication protocols, there are legitimate use cases for monotonically growing functions with some randomness in their values.
I am assuming you have:
A growth model given as a monotonically growing function providing approximate values for the desired function.
Access to a time stamp, which never decreases.
Ability to store a constant random seed along with the function definition.
No way to store any updated data upon the function being queried.
First you decide on a window length, which will control how much randomness will be in the final output. I expect you will want this to be on the order of an hour or a few hours.
Figure out which window the current time is within. Evaluate the reference function at the start and end of this window. Consider the rectangle given by the start and end time of the window as well as the minimum and maximum values given by the reference function. Feed the corners of this rectangle and your constant seed into a PRNG. Use the PRNG to choose a random point within the rectangle. This point will be on the final curve.
Perform the same computation for one of the neighbor windows. Which neighbor window to use depends on whether the first computed point on the curve is to the left or the right of the current time.
Now that you have two points on the curve (which are reproducible and consistent), you will have to iterate the following procedure.
You are given two points on the final curve. Consider the rectangle given by those corners. Feed the corners and your constant seed into a PRNG. Use that PRNG to choose a random point within the rectangle. This point will be on the final curve. Discard one of the outer points, which is no longer needed.
Since the Y-values are restricted to integers, this procedure will eventually terminate once your two points on the curve have identical Y-coordinates, at which point you know the function has to be constant between them.
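Here is a minimal, self-contained sketch of that procedure in Java. Everything concrete in it is an assumed placeholder (the growth model f, the one-hour window, the seed and the hash mixing); the property that matters is that every random choice is derived only from the constant seed and the rectangle corners, so repeated queries reproduce exactly the same monotone curve with no storage. I've also added a stop condition when the bracket shrinks to one second, so the bisection always terminates.
import java.util.Random;

public class FakeDownloadCurve {

    static final long WINDOW = 3600L;        // window length in seconds (tunable)
    static final long SEED = 0xC0FFEE1234L;  // constant seed stored with the function

    // Reference growth model: roughly one download every 30 seconds (placeholder).
    static long f(long t) { return Math.max(0, t) / 30; }

    // PRNG keyed only by the constant seed and the rectangle corners.
    static Random prng(long x1, long y1, long x2, long y2) {
        long h = SEED;
        for (long v : new long[] { x1, y1, x2, y2 }) {
            h = (h ^ v) * 0x9E3779B97F4A7C15L; // cheap mixing, nothing cryptographic
        }
        return new Random(h);
    }

    // Reproducible point strictly inside [x1,x2] with an integer y in [y1,y2].
    static long[] pointIn(long x1, long y1, long x2, long y2) {
        Random r = prng(x1, y1, x2, y2);
        long x = x1 + 1 + (long) (r.nextDouble() * Math.max(0, x2 - x1 - 1));
        long y = y1 + (long) (r.nextDouble() * (y2 - y1 + 1));
        return new long[] { x, Math.min(y, y2) };
    }

    // Curve point for window w: the rectangle corners come from the reference function.
    static long[] windowPoint(long w) {
        long x1 = w * WINDOW, x2 = (w + 1) * WINDOW;
        return pointIn(x1, f(x1), x2, f(x2));
    }

    public static long getDownloadCount(long t) {
        // Bracket t between its own window's curve point and a neighbour's.
        long w = t / WINDOW;
        long[] a = windowPoint(w);
        long[] b = (a[0] <= t) ? windowPoint(w + 1) : windowPoint(w - 1);
        long x1 = Math.min(a[0], b[0]), x2 = Math.max(a[0], b[0]);
        long y1 = (x1 == a[0]) ? a[1] : b[1];
        long y2 = (x2 == a[0]) ? a[1] : b[1];

        // Bisect until the bracket is flat or only one second wide.
        while (y2 > y1 && x2 - x1 > 1) {
            long[] m = pointIn(x1, y1, x2, y2);
            if (t <= m[0]) { x2 = m[0]; y2 = m[1]; }
            else           { x1 = m[0]; y1 = m[1]; }
        }
        return y1;
    }
}
getDownloadCount(secondsSinceRelease) can then be called at arbitrary times and will always agree with what it returned before, while never decreasing.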
You could implement a Morris Counter.
It works like this: start off by setting the counter to 1. Each time you want to increase the count (which could be every iteration of some loop, or every time an event happens; it does not need to be determined randomly), you do a random procedure to determine the effect it has on the counter.
It may have no effect, or it may raise the order of magnitude of the count. The probability is based on whether or not n successive fair coin flips all turn up heads, where n is the number of bits needed to encode the current counter value in binary. As a result, once the counter has gotten pretty high, it's very hard to make it go even higher (the state of the counter models a phenomenon whereby you are already way overestimating the count, so now you need lots of nothing-happens events to compensate, making the count more accurate).
This is used as a cheap way to store an approximate count of a very large collection, but there's no reason why you can't use it as your randomly increasing counter device.
If you want better accuracy, or you want the count outputs to be more "normal" numbers instead of always powers of 2, then you can just create several Morris Counters, and at each step average together the set of current counts across them all.
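A literal reading of that scheme fits in a few lines of Java; this is only a sketch, since exact probabilities and estimators vary between presentations of Morris counters.
import java.util.Random;

public class MorrisCounter {
    private final Random random = new Random();
    private long count = 1;

    // Call once per event; usually does nothing, occasionally doubles the count.
    public void event() {
        int bits = 64 - Long.numberOfLeadingZeros(count); // bit length of current value
        for (int i = 0; i < bits; i++) {
            if (!random.nextBoolean()) return; // a single tails: no effect this time
        }
        count <<= 1; // all heads: raise the order of magnitude (in binary)
    }

    public long approximateCount() {
        return count;
    }
}
Feeding it one event per elapsed second (or per anything else) gives a count that mostly sits still and occasionally jumps; averaging several independently seeded instances, as suggested above, smooths those jumps into less power-of-two-looking numbers.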
You are after a sequence which always increases by a random amount, depending on how long ago you last requested it.
This can be done through a random sequence that is always seeded the same.
Then we iterate through the same sequence each time to get the graph.
We need a function that increments our counter, stores the new time and count, and returns the count.
Ideally we would model the increases as a Poisson process, but a linear approximation will do here.
class Counter {
    private static int counter = 0;
    private static int time = 0;
    private static double rate = 5.0;
    private Random r;

    public Counter(int seed){
        counter = 0;
        r = new Random(seed);
    }

    private int poisson(double rate, int diff){
        // We're gonna cheat here and sample uniformly
        return r.Next(0, (int)Math.Round(rate * diff));
    }

    public int getNext(int t){
        var diff = t - time;
        time = t;
        if (diff <= 0) return counter;
        counter += this.poisson(rate, diff);
        return counter;
    }
}

void Main()
{
    var c = new Counter(1024);
    for (var i = 0; i < 10; i++){
        Console.WriteLine(String.Format("||{0}\t|{1}\t||", i, c.getNext(i)));
    }
}
This outputs (for example):
||t |hit||
||0 |0 ||
||1 |3 ||
||2 |4 ||
||3 |6 ||
||4 |6 ||
||5 |8 ||
||6 |10 ||
||7 |13 ||
||8 |13 ||
||9 |16 ||
Take some deterministic function f (perhaps f(x) = x, or, if your fake app is REALLY awesome, f(x) = 2^x) and a random function r which outputs a number that's sometimes negative and sometimes positive.
Your graphing function g could then be:
g(x) = f(x) + r
EDIT
How about this: https://gamedev.stackexchange.com/questions/26391/is-there-a-family-of-monotonically-non-decreasing-noise-functions
Well it's not "random" but you could use A*(X/B + SIN(X/B)) (scaled by some number) to introduce some noise. You can adjust A and B to change the scale of the result and how often the "noise" cycles.
Really, any periodic function that has a first derivative within some bounds could work.
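As a quick sketch (the constants are arbitrary placeholders): the derivative of x/b + sin(x/b) is (1 + cos(x/b))/b, which is never negative, so the count never goes down, but it surges and stalls in waves that read a bit like bursty download activity.
public class WavyCounter {
    static long getDownloadCount(long secondsPassed) {
        double a = 50.0;   // overall scale of the count
        double b = 3600.0; // length of one "wave" of activity, in seconds
        double x = secondsPassed;
        // a * (x/b + sin(x/b)) never decreases: its derivative is (a/b) * (1 + cos(x/b)) >= 0
        return (long) (a * (x / b + Math.sin(x / b)));
    }
}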
As a quick solution you can use something like this (code in Java):
static long f(final int x) {
    long r = 0;      // initial counter
    long n = 36969L; // seed
    for (int i = 0; i <= x; i++) {
        n = 69069L * n + 1234567L; // generate Ith random number
        r += (n & 0xf);            // add random number to counter
    }
    return r;
}
By playing with the numbers 36969L and 0xf you can achieve different results.
The numbers 69069L and 1234567L are from a standard LCG.
The main idea: create a simple random generator with a fixed seed and, for every elapsed x (number of seconds), replay the random additions to the counter.
A good model for random events like downloads is the Poisson distribution. You need to estimate the average number of downloads in a given time period (hour, say) and then invert the Poisson distribution to get the number of downloads in a time period given a uniformly distributed random number. For extra realism you can vary the average according to time of day, time of week, etc. Sample algorithms are available at http://en.m.wikipedia.org/wiki/Poisson_distribution#Generating_Poisson-distributed_random_variables.
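A minimal sketch of that idea, using the standard multiply-uniforms sampler (usually attributed to Knuth) that the linked page describes; it is fine for modest means but slow and underflow-prone for very large lambda. Seeding the generator from the period index plus a constant app seed keeps every query reproducible without storing anything; the class, method names and constants here are my own placeholders.
import java.util.Random;

public class PoissonDownloads {

    // Knuth's sampler: multiply uniforms until the product drops below e^-lambda.
    static int poisson(double lambda, Random random) {
        double limit = Math.exp(-lambda);
        double product = random.nextDouble();
        int count = 0;
        while (product > limit) {
            count++;
            product *= random.nextDouble();
        }
        return count;
    }

    // Cumulative fake downloads: one Poisson draw per elapsed hour, each seeded
    // deterministically by the hour index, so repeated queries agree.
    static long getDownloadCount(long secondsSinceRelease, double meanPerHour, long appSeed) {
        long total = 0;
        for (long hour = 0; hour <= secondsSinceRelease / 3600; hour++) {
            total += poisson(meanPerHour, new Random(appSeed ^ (hour * 0x9E3779B97F4A7C15L)));
        }
        return total;
    }
}
For extra realism, vary meanPerHour by time of day or day of week before drawing, as the answer suggests.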
Here is a JavaScript implementation of a "fake" download counter that appears the same to everyone. It always returns the same results for everyone and doesn't require a database or files to do so. It also gracefully handles the case where you don't ask for new data at the same times; it will still look natural the next time you request a day.
https://jsfiddle.net/Lru1tenL/1/
Counter = {
    time: Date.now(),
    count: 0,
    rate: 0.45
};

Counter.seed = function(seed, startTime) {
    this.time = startTime;
    this.count = 0;
    this.prng = new Math.seedrandom(seed);
    this.prng.getRandomInt = function(min, max) {
        return Math.floor(this() * (max - min)) + min;
    };
};

Counter.getNext = function(t) {
    var diff = t - this.time;
    console.log(diff);
    if (diff <= 0) return this.count;
    this.time = t;
    var max = Math.ceil(diff / 100 * this.rate);
    console.log("max: " + max);
    this.count += this.prng.getRandomInt(0, max);
    return this.count;
};

var results = [];
var today = Date.now();
Counter.seed("My Random Seed", today);

for (var i = 0; i < 7; i++) {
    if (i === 4) {
        results.push(null);
    } else {
        var future = today + 86400000 * i;
        results.push(Counter.getNext(future));
    }
}
console.log(results);

var data = {
    labels: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"],
    datasets: [
        {
            label: "My Second dataset",
            fillColor: "rgba(151,187,205,0.2)",
            strokeColor: "rgba(151,187,205,1)",
            pointColor: "rgba(151,187,205,1)",
            pointStrokeColor: "#fff",
            pointHighlightFill: "#fff",
            pointHighlightStroke: "rgba(151,187,205,1)",
            data: results
        }
    ]
};

var ctx = document.getElementById("myChart").getContext("2d");
var myLineChart = new Chart(ctx).Line(data);
That is the JavaScript. It creates a counter object that increments when requested, based on the time of the previously requested value. The repeatability comes from the third-party "seedrandom" library, and the chart is drawn using Chart.js.
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/1.0.2/Chart.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/seedrandom/2.4.0/seedrandom.min.js">
</script>
<body>
<canvas id="myChart" width="600" height="400"></canvas>
</body>
</html>
You can use the Unix timestamp. Something like:
Downloads = constant + (unix time / another constant)
You can vary both constants to get a reasonable number.
P.S.: That's if you want a linear function; otherwise you can do:
Downloads = (unix time) ^ constant
and so on.
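A sketch of the linear version as a tiny Java method; the constants are arbitrary and only set the apparent launch total and growth rate (here roughly one download every 20 seconds), and the release timestamp is a made-up placeholder.
public class LinearCounter {
    static long getDownloadCount() {
        long unixTime = System.currentTimeMillis() / 1000L;
        long releaseTime = 1_400_000_000L; // placeholder release timestamp, in seconds
        long baseline = 1500;              // "downloads" already showing at launch
        return baseline + (unixTime - releaseTime) / 20; // ~1 extra download every 20 s
    }
}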
I am having an issue with ODAC (Oracle Data Access Components), Entity Framework 4.3.1, and expression trees. We have a legacy database (don't we all?) that we are mapping in Entity Framework. The table has millions of records and over one hundred columns (sad face).
Here is an example query on an indexed column:
int myId = 2;
var matchingRecord = context.MyLargeTable.Where(v=>v.Id == myId).ToList(); //Super slow (5+ minutes, sometimes Out of Memory exception)
int myId = 2;
Expression<Func<LargeTable, bool>> myLambda = v => v.Id == myId; //Shouldn't this work now?
var matchingRecord = context.MyLargeTable.Where(myLambda).ToList(); //Still super slow (5+ minutes, sometimes Out of Memory exception)
var elementName = Expression.Parameter(typeof(LargeTable), "v");
var propertyName = Expression.Property(elementName, "Id");
var constantValue = Expression.Constant(myId);
var comparisonMethod = Expression.Call(
    propertyName,
    typeof(int).GetMethod("Equals", new[] { typeof(int) }),
    constantValue
);
var finalTree = Expression.Lambda<Func<LargeTable, bool>>(comparisonMethod, elementName);

var matchingRecord = context.MyLargeTable.Where(finalTree).ToList(); //Super fast
I've read things like this that explain the difference between Func<> and Expression<Func<>>, and how an Expression<Func<>> actually gets passed to the database for the query, and that's why it is faster.
http://www.fascinatedwithsoftware.com/blog/post/2011/12/02/Falling-in-Love-with-LINQ-Part-7-Expressions-and-Funcs.aspx - Whole thing is good, but if in a rush, just read the section titled “Unintended Consequences” for the main takeaway
http://fascinatedwithsoftware.com/blog/post/2012/01/10/More-on-Expression-vs-Func-with-Entity-Framework.aspx
Why would you use Expression<Func<T>> rather than Func<T>? - No set of links is complete without a corresponding SO question
My question is this: are people really sitting there constructing expression trees using the Expression.* classes? Any query beyond simple comparisons gets really complicated and is almost impossible to read. What am I missing about passing the Expression<Func<>> to the database? Who do I go punch in the face for this manually constructed expression tree solution? Oracle? EF? What am I missing?
This is an academic exercise; I'm new to Reactive Extensions and trying to get my head around the technology. I set myself the goal of making an IObservable that returns successive digits of Pi (I happen to be really interested in Pi right at the moment, for unrelated reasons). Reactive Extensions contains operators for making observables, and the guidance they give is that you should "almost never need to create your own IObservable". But I can't see how I can do this with the ready-made operators and methods. Let me elucidate a bit more.
I was planning to use an algorithm that would involve the expansion of a Taylor series for arctan. To get the next digit of Pi, I'd expand a few more terms in the series.
So I need the series expansion going on asynchronously, occasionally throwing out the next computed digit to the IObserver. I obviously don't want to restart the computation from scratch for each new digit.
Is there a way to implement this behaviour using RX's built-in operators, or am I going to have to code an IObservable from scratch? What strategy suggests itself?
For something like this, the simplest method would be to use a Subject. Subject is both an IObservable and IObserver, which sounds a bit strange but it allows you to use them like this:
class PiCalculator
{
    private readonly Subject<int> resultStream = new Subject<int>();

    public IObservable<int> ResultStream
    {
        get { return resultStream; }
    }

    public void Start()
    {
        // Whatever the algorithm actually is
        for (int i = 0; i < 1000; i++)
        {
            resultStream.OnNext(i);
        }
    }
}
So inside your algorithm, you just call OnNext on the subject whenever you want to produce the next value.
Then to use it, you just need something like:
var piCalculator = new PiCalculator();
piCalculator.ResultStream.Subscribe(n => Console.WriteLine((n)));
piCalculator.Start();
The simplest way is to create an IEnumerable and then convert it:
IEnumerable<int> Pi()
{
    // algorithm here
    for (int i = 0; i < 1000; i++)
    {
        yield return i;
    }
}
Usage (for a cold observable, that is, one where every new subscription starts creating Pi from scratch):
var cold = Pi().ToObservable(Scheduler.ThreadPool);
cold.Take(5).Subscribe(Console.WriteLine);
If you want to make it hot (everyone shares the same underlying calculation), you can just do this:
var hot = cold.Publish().RefCount();
Which will start the calculation after the first subscriber, and stop it when they all disconnect. Here's a simple test:
hot.Subscribe(p => Console.WriteLine("hot1: " + p));
Thread.Sleep(5);
hot.Subscribe(p => Console.WriteLine("hot2: " + p));
This should show only hot1 printing for a little while, then hot2 joining in after a short wait but printing the same numbers. If this were done with cold, the two subscriptions would each start from 0.