How to understand the logic behind this programming task? (Beginner) - math

The task states:
Saver A has £25,000, with which they purchase a new car; the car loses 20% of its value each year. Saver B has £25,000, which they invest in the property market, making a rental yield of 8% each year.
Using a while loop, calculate and output in which year Saver B has 2, 3, 4, 5, 6, 7, 8, 9, 10 times as much money as Saver A.
My issue is with what exactly this logic means, rather than with programming it.
If Saver A has £25k, buys a car with it, and the car then loses 20% per year, does that mean it loses 20% of the original value (so £5k every year), or 20% of whatever the car is worth at the start of that year? In the latter case it would first lose £5k, leaving the car worth £20k, and the next year's loss would be 20% of that £20k.
It's sort of the same question for Saver B. I don't see why he would get more each year from a rental property than he got the year before; houses just don't increase in value that drastically. It seems to me like it would be 8% of the original £25k every year, but what is the question actually asking me to do?
I am planning to use a for loop over 2, 3, 4, 5, 6, etc. with a while loop inside it. Inside the while loop I need the money of A and B, so these are the calculations I need to figure out.
Thank you for the help.

That is generally how it works: take the current value and add or deduct the percentage of it.
Otherwise the car would be worth nothing after only 5 years and go negative after that.
Otherwise the house would increase in value linearly, when inflation and the market do not; the house would all but stagnate after decades.
So in each loop iteration, calculate the loss/earnings based on the current value and subtract/add it.
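For example, a minimal sketch of that per-year update (Python; the variable names are just for illustration, assuming the compounding reading of the task):
car_value = 25000.0       # Saver A's asset
property_value = 25000.0  # Saver B's asset
# One year's update: apply the percentage to the *current* value, not the original.
car_value -= car_value * 0.20            # the car loses 20% of what it is worth now
property_value += property_value * 0.08  # the investment grows by 8% of its current value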

The wording of the task as you have provided it is perhaps not as clear as it could be ... but that is very common when requirements for a programming job are specified!
The calculation of a depreciating value of the car should definitely be 20% of the previous year's value, not 20% of the original value.
The calculation of rental income is different. The question does not require you to factor in any change in the value of the investment property, just the income from rent. So that investor simply receives a fixed amount (8% of £25,000) each year. At least that's the way I read the question.
If I were you, I'd ask the person who set the task to clarify it. That's what I'd do in real life too -- if the requirements are ambiguous, don't just assume what you think they mean, ask the client to clarify.
With regard to the "while loop", just notice that the loop has to go year by year, but the output is not year by year: the code does not output a value on every pass through the loop. Within the loop you will need to accumulate the value of each Saver's assets. (The task says to compare the "money" each saver has, but that is also less than clear: Saver A has no money, just a depreciating asset; Saver B receives rental money each year, and that has to be added to the fixed value of the asset. So that might also need clarification.) Anyway, once you have accumulated the value for each Saver in the current year, the code within the loop needs to decide whether or not to output a message like "Saver B's value exceeded 2 times Saver A's value in year ".
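A minimal sketch of that structure in Python (assuming the interpretations above: compound 20% depreciation for the car, and a fixed rent of 8% of the original £25,000 added each year to a fixed-value property for Saver B; all names are my own):
car_value = 25000.0        # Saver A: depreciating car
saver_b_total = 25000.0    # Saver B: property value plus accumulated rent
year = 0
multiplier = 2             # next ratio to report (2x, 3x, ... up to 10x)
while multiplier <= 10:
    year += 1
    car_value *= 0.80                # the car loses 20% of its current value
    saver_b_total += 25000.0 * 0.08  # fixed rent: 8% of the original £25,000
    # Report every threshold reached this year; two can be crossed in the same year.
    while multiplier <= 10 and saver_b_total >= multiplier * car_value:
        print(f"Saver B's value exceeded {multiplier} times Saver A's value in year {year}")
        multiplier += 1
Note that the inner check is itself a loop, because the ratio can jump past more than one threshold between one year and the next.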

Related

AWS DynamoDB metrics don't work correctly

I am using DynamoDB and am curious about this.
When I view the table metrics with a period of 10 or 30 seconds, the consumed capacity appears to exceed the provisioned value.
Then, when I view the table metrics with a period of 1 minute, it doesn't reach the provisioned value at all.
I want to know why this happens.
This isn't a DynamoDB thing. It's a CloudWatch rendering thing.
Consumed capacity has a natural period of 1 minute. When you set your graph period to 1 minute everything is correct. Consumed is below provisioned.
If you change the graph period to 30 seconds, your consumed view adjusts and you see consumption that's double what's real. The math behind the scenes divides by the wrong period. Graph period of 10 seconds, you get 6x reality. Graph period of 5 minutes, you get 1/5th reality.
The Provisioned line isn't based on an equation involving Period so it's not affected by the chosen period.
Maybe someone can comment on why the user is allowed to control the Period of the view but it just messes things up when it doesn't match the natural period of the data.
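To put rough numbers on that distortion, here is an illustrative Python sketch (the per-minute consumption figure is made up):
natural_period_s = 60.0      # consumed-capacity data points each cover one minute
consumed_per_minute = 120.0  # assumed: 120 units actually consumed in that minute
true_rate = consumed_per_minute / natural_period_s  # 2 units/second in reality
for graph_period_s in (10, 30, 60, 300):
    # The graph divides the per-minute sum by the chosen period, so the displayed
    # rate is off by a factor of natural_period_s / graph_period_s.
    displayed_rate = consumed_per_minute / graph_period_s
    print(graph_period_s, displayed_rate, displayed_rate / true_rate)  # 6x, 2x, 1x, 0.2x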

OptaPlanner: Dynamic number of planning variables based on ProblemFactCollectionProperty

Problem: Truck assignment with a limited quantity to deliver to customers.
Trucks have to be assigned to customers. As a truck carries only a limited quantity, it needs to return to the depot to reload before delivering to the next customer.
Trip: load at the depot, unload at one or more customers, come back to the depot.
The problem facts are the available trucks and the customers to be delivered to. We need to determine dynamically how many trips are possible per truck, based on a few timing-related conditions (truck availability, driver hours, etc.).
The solution I can think of:
Pre-compute the maximum number of trips per truck based on business understanding and use this as the planning variable. Apply a hard score for violating the time constraints, so some trips will be left unassigned if a truck exceeds the available truck/trip time.
Need help:
In every solved example, we have a fixed number of planning variables before planning. Even with chained planning variables (like TSP, VRP), we have a fixed number of trucks beforehand.
Any help is appreciated. If there is no direct solution, is the approach I have come up with the best possible?
That solution is indeed recommended currently:
Provide enough trucks in the anchorValueRange to make sure a feasible solution can be found. Defining that number can be tricky: typically double the average usage. For example, if you have 300 visits and do on average 100 visits per truck, give it 6 trucks, as you never expect it to use more than 6 trucks (and probably a lot less). If trucks have skills or affinity, this becomes a bunch more complex.
Add an extra score level: if you're on HardSoftScore, switch to HardMediumSoftScore.
Add a medium constraint to penalize the number of trucks used. This is softer than the hard constraints (capacity etc) and harder than the soft constraints (distance etc).
(The alternative, adding/removing values to the value ranges on the fly, is only theoretically possible in OptaPlanner's architecture at the moment (don't use addProblemFactChanges for this!). It might sound like the perfect solution, but there are many subsystems that profit from a fixed value range, so that approach would have severe trade-offs.)

Averaging when events are slower than measurement time

I'm trying to come up with a better way of providing an instantaneous average when the input signal is very slow. This seems like a math-y kind of question, so if it should be over there, let me know.
I have events that are measured as pulses. Normally I can collect the pulses in a counter and then read the counter value at a fixed interval, say 1/4 second. I can then take the count value and divide by the number of seconds (n / 0.25) to get a rate. I then apply a low-pass filter to clean up the average, and that works great normally.
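A minimal sketch of that normal counting-plus-filter approach (Python; the filter coefficient is an assumed value):
SAMPLE_PERIOD_S = 0.25  # read the counter every 1/4 second
ALPHA = 0.1             # low-pass filter coefficient (assumed; tune for the smoothing you want)
filtered_rate = 0.0
def update(pulse_count):
    """Call every SAMPLE_PERIOD_S with the pulses counted since the last call."""
    global filtered_rate
    raw_rate = pulse_count / SAMPLE_PERIOD_S              # pulses per second, i.e. n / 0.25
    filtered_rate += ALPHA * (raw_rate - filtered_rate)   # single-pole low-pass filter
    return filtered_rate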
What do I do when the events happen once every 1-60 seconds? The obvious choice is to wait until I have a sufficient number of counts and divide by the total time. However, I need to provide the user with a reading every few seconds, so waiting is not an option. I need some way to estimate the value.
I've thought of one solution that's kind of hard to explain, and I was wondering if there is a standard way of doing this. I'm pretty sure I have to utilize a different kind of "data": the lack of an event. The goal is to estimate until enough time/events have passed to really calculate a rate, and to transition from the estimate to the real rate seamlessly.

Quantstrat: Execute on the same bar

I know this has been asked before here, but I'd like to extend the question further.
Let's say my entry price is 50, so at the start of the day I place a limit order bid at 50 for 1 lot. During the trading day, the market collapses and I get filled on my bid. In a real-world live trading scenario, my execution is going to be on the same daily bar at the price of 50. Even if I'm using 1-minute bars and that fill happens at 14:00 in real time, the data and prices at 14:01 are completely irrelevant to the trade and fill.
Furthermore, if I am already in a trade (let's say short at 50) and I place a stop-loss order at 80 and the market trades up through the 80s, I'm going to get stopped out then and there, at around the price of 80 give or take some slippage. The next bar, whether it be daily, hourly or 1-minute, may open up at 150. A backtest that executes that trade on the open of the next bar is now potentially way out of sync with what would have happened in a real-time live scenario.
I understand that any strategy that calculates its trading signals based off a bar's close can be subject to huge biases without enforcing next-bar execution. But for strategies that have predefined entry/exit signals (which I feel is going to be the majority), the ability to execute on the same bar is crucial!
In the post linked above, Josh Ulrich mentioned adding allowMagicalThinking=TRUE to the calls to applyStrategy and applyRules. However, I can't seem to find any documentation on it, and my implementation of it hasn't had any effect. What am I missing?
Call to applyRules:
test <- applyRules(strategy = strategy.st, portfolio = portfolio.st, symbol = symbols, mktdata = mktdata, allowMagicalThinking = TRUE)
Alternatively, the call to applyStrategy:
out <- applyStrategy(strategy = strategy.st, portfolios = portfolio.st, allowMagicalThinking = TRUE)
allowMagicalThinking = TRUE causes execution to occur on the same observation as order entry. There is no way to force orders to be entered on the same observation as the signal that causes them.
If your signals really are pre-defined, you can include them in your mktdata object and shift them sufficiently so that execution occurs when you think it should.
I caution anyone who does this to double- and triple-check your results, because you're side-stepping almost all of quantstrat's built-in safeguards to avoid creating look-ahead bias in your backtests.

Siemens DICOM Individual Slice Time (Private_0019_1029)

I'm seeing that the individual slice time information from the Private_0019_1029 field of the DICOM header sometimes has negative values and sometimes only positive values.
I assumed that these times are with respect to the Volume Acquisition time recorded in the header.
Going by that assumption, it would mean that the Acquisition time varies. But upon checking the difference between successive volume acquisition times, I see that it's equal to TR.
So I'm at a loss about what's happening.
I'm trying to look at the raw fMRI data without slice time correction; hence it's necessary to have the individual slice times.
Does the moco series do time shifting in addition to motion correction? (I don't believe it used to, but your experience may show otherwise).
This indicates how their slice timing is measured. Try the computations with the raw and the moco series and see if the times line up. That may give you your answer.
When dealing with private tags, you should really include the Private Vendor ID; in your case, the value of tag (0019,0010).
You may also want to have a look at the output of:
gdcmdump --csa input.dcm
This will dump the SIEMENS CSA header directly from the DICOM attribute.
