I know this has been asked before here, but I'd like to extend the question further.
Let's say my entry price is 50, so at the start of the day I place a limit order to bid 50 for 1 lot. During the trading day, the market collapses and I get filled on my bid. In a real-world live trading scenario, my execution is going to be on the same daily bar at the price of 50. Even if I'm using 1-minute bars and that fill happens at 14:00 in real time, the data and prices at 14:01 are completely irrelevant to the trade and fill.
Furthermore, if I am already in a trade (let's say short from 50) and I place a stop-loss order at 80, and the market trades up through 80, I'm going to get stopped out then and there, at around 80 give or take some slippage. The next bar, whether it be daily, hourly or 1-minute, may open up at 150. A backtest that executes that trade on the open of the next bar is now potentially way out of sync with what would have happened in a real-time live scenario.
I understand that any strategy that calculates its trading signals based on a bar's close can be subject to huge biases without enforcing next-bar execution. But for strategies that have predefined entry/exit signals (which I feel is going to be the majority), the ability to execute on the same bar is crucial!
In the post linked above, Josh Ulrich mentioned adding allowMagicalThinking=TRUE to the calls to applyStrategy and applyRules. However, I can't seem to find any documentation on it, and my implementation of it hasn't had any effect. What am I missing?
Call to applyRules:
test <- applyRules(strategy = strategy.st, portfolio = portfolio.st, symbol = symbols, mktdata = mktdata, allowMagicalThinking = TRUE)
Alternatively, the call to applyStrategy:
out <- applyStrategy(strategy = strategy.st, portfolios = portfolio.st, allowMagicalThinking = TRUE)
allowMagicalThinking = TRUE causes execution to occur on the same observation as order entry. There is no way to force orders to be entered on the same observation as the signal that causes them.
If your signals really are pre-defined, you can include them in your mktdata object and shift them sufficiently so that execution occurs when you think it should.
I caution anyone who does this to double- and triple-check your results, because you're side-stepping almost all of quantstrat's built-in safeguards to avoid creating look-ahead bias in your backtests.
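For what it's worth, the signal-shifting suggestion can be sketched in a couple of lines. This is only a minimal illustration, assuming mktdata is the usual xts object; mySignal and my_precomputed_signals are hypothetical names for your pre-defined signal column, and the lag of -1 moves each signal one bar earlier so that quantstrat's next-bar execution lands on the bar you actually intend:

library(xts)  # mktdata in quantstrat is an xts object

# Merge the pre-defined signals into mktdata, then shift them one bar
# earlier (a lead) so that next-bar execution falls on the intended bar.
mktdata$mySignal <- my_precomputed_signals
mktdata$mySignal <- lag(mktdata$mySignal, k = -1)

Double-check the resulting order and transaction timestamps against what you expect; as noted above, this bypasses quantstrat's usual look-ahead protections.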
I'm attempting to make an app that displays a different string based on the current time. It should display a different string every minute, synchronised to the Apple Watch's clock (i.e. when a new minute starts, replace the current string on the complication).
I have lots of issues with complications for the Apple Watch; I can see that lots of people find Apple's documentation confusing.
I believe my implementation of getCurrentTimelineEntry is correct: I simply grab the current date, floor it to the nearest minute, process it into the relevant string, and stick it on the complication.
I do not understand for the life of me what the getTimelineEndDate method does, as no matter what I pass into the handler it seems to make no difference.
The most confusing part, however, is the getTimelineEntries method. I understand the concept, i.e. pre-fetching what the complication should look like. Here I attempt to prefetch the next hour's worth of data (in my case, 60 different entries representing 60 different minutes). This seems to work; however, the method runs 10 times before stopping, by which point it has prefetched 600 entries, representing 10 hours' worth. This is completely unintended, though not disastrous. The worst part is that I have no idea how to fetch even more data in the future, i.e. I want this method to be called on the date of the last current entry, to fetch the next 10 hours' worth.
In essence, once these bugs have been ironed out, I want to fetch 24 hours' worth of entries (60 × 24 minutes). And then, when the current time matches the time of the last entry, fetch the next 24 hours, and so on.
I will be grateful for any help, as the documentation for ClockKit complications is particularly poor.
I have a power consumption sensor (kWh) sending data to my TSI Gen2 environment, and it is malfunctioning in that it loses its accumulated measurement value when it is shut down. I need to create a new aggregate/variable that would "stack" the measurements, never letting the value drop to zero, but always adding to the last greatest value.
I thought about creating a dataset of the differences between consecutive values over a fixed timespan, keeping only the positive ones, and then creating a SUM aggregation over the bucket period on top of it. I am clueless about how to do such a thing based on the poor official documentation provided by Microsoft. Any ideas?
Here are a couple of pictures illustrating my problem and what I am trying to accomplish:
You probably need to add something in the middle (before the IoT Hub/Event Hub) to save the last state of the sensor and do the appropriate sum if it detects the device was rebooted.
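As a rough illustration of the arithmetic that intermediate step would perform (this is not a built-in TSI aggregation; the function and vector names below are made up), you can rebuild a monotone, "stacked" series by summing only the positive differences between consecutive readings, much like the question suggests:

# 'readings' is a hypothetical vector of raw kWh counter values that
# occasionally reset to zero when the sensor reboots.
rebase_counter <- function(readings) {
  d <- diff(readings)               # change between consecutive readings
  d[d < 0] <- 0                     # a drop means the counter reset; ignore it
  readings[1] + cumsum(c(0, d))     # monotone, "stacked" cumulative series
}

rebase_counter(c(10, 12, 15, 0, 2, 5))
# 10 12 15 15 17 20

Running something like this in the intermediate component suggested above would let TSI ingest an already-corrected value.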
I'm trying to come up with a better way of providing an instantaneous average when the input signal is very slow. This seems like a math-y kind of question, so if it belongs elsewhere let me know.
I have events that are measured as pulses. Normally I can collect the pulses in a counter and then read the counter value at a fixed interval, say every 1/4 second. I can then take the count value and divide by the number of seconds, n/0.25, to get a rate. I then apply a low-pass filter to clean up the average, and that works great normally.
What do I do when the events happen once every 1-60 seconds? The obvious choice is to wait until I have a sufficient number of counts and divide by total time. However, I need to provide the user with a reading every few seconds so waiting is not an option. I need some way to estimate the value.
I've thought of one solution that's kind of hard to explain. I was wondering if there is a standard way of doing this. I'm pretty sure I have to utilize a different kind of "data": the lack of an event. The goal is to estimate until enough time/events have passed to really calculate a rate, and to transition from estimate to real rate seamlessly.
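One standard approach (a suggestion, not something from the original post) is to measure the period between pulses rather than counting pulses per interval, and to treat the silence since the last pulse as information in its own right: if no pulse has arrived for t seconds, the current rate cannot be higher than 1/t. A minimal sketch, with hypothetical inputs in seconds:

# Estimate rate from the last inter-pulse interval, capped by the time
# elapsed since the last pulse (no new pulse yet, so the rate can't
# exceed 1 / elapsed time).
estimate_rate <- function(last_interval, time_since_last) {
  rate_from_interval <- 1 / last_interval      # rate implied by the last full gap
  rate_upper_bound   <- 1 / time_since_last    # bound implied by the silence so far
  min(rate_from_interval, rate_upper_bound)    # decays smoothly while waiting
}

estimate_rate(last_interval = 10, time_since_last = 3)    # 0.1 pulses/sec
estimate_rate(last_interval = 10, time_since_last = 25)   # 0.04 pulses/sec

The min() lets the displayed value decay gracefully while you wait for the next pulse and snap back to the measured rate as soon as one arrives; you can still pass the result through the same low-pass filter you already use.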
I'm seeing that the individual slice time information from the Private_0019_1029 field of the DICOM header sometimes has negative values and sometimes only positive values.
I assumed that these times are with respect to the Volume Acquisition time recorded in the header.
Going by that assumption, it would mean that the Acquisition time varies. But upon checking the difference between successive volume acquisition times, I see that it's equal to TR.
So I'm at a loss about what's happening.
I'm trying to look at the raw fMRI data without slice time correction; hence it's necessary to have the individual slice times.
Does the moco series do time shifting in addition to motion correction? (I don't believe it used to, but your experience may show otherwise).
This indicates how their slice timing is measured. Try the computations with the raw and the moco series and see if the times line up. That may give you your answer.
When dealing with private tags, you should really include the private vendor ID, in your case the value of tag (0019,0010).
You may also want to have a look at the output of:
gdcmdump --csa input.dcm
This will dump the SIEMENS CSA header directly from the DICOM attribute.
I have a question about how dynamic the Y axis of a burndown chart in Scrum should be. We plot the chart at the beginning of a sprint, with the total number of estimated story points on the Y axis and the planned days on the X axis.
Usually, during the sprint, we have a fair amount of:
unplanned tasks / stories;
tasks / stories that take longer than estimated (re-estimated by the person checking out the task);
Questions:
Should the story points of the unplanned tasks be plotted into the chart? If so, should we extend the Y axis as well and redraw the expected curve, or just plot the points and have an actual curve with points possibly higher than the starting point?
Should the re-estimations be counted when plotting the chart, or just the initial estimates? The same sub-questions as for the first question apply.
I would prefer to ignore the unplanned items and the re-estimations, as they will show up in the actual focus factor calculation anyway. Is that wrong?
Try using a burn UP chart.
http://www.nearinfinity.com/blogs/lee_richardson/forget_burndown_use_burnup_charts.html
Also, do everything in your power to stop the unplanned items. They are typically very caustic. If it's code debt cashing in, try to address it a little bit at a time in every sprint. If it's a consistent amount of time every sprint, perhaps create a story at the start of the sprint for "unplanned tasks" or "production fixes" or something like that.
In the end, what really matters is that the burndown chart allows you to track progress (or lack thereof) toward the commitment. So as long as you're achieving that, you're good to go. Which means, really any of these solutions would work - just pick one and go with it.
We usually do option number 2 at work, adding the new story points to the actual line so that we "see" that the line goes up, reflecting new learnings and additions. But since opinions vary, I guess your team will have to agree on what suits them best, since these burndown charts are for the team to show progress throughout the sprint.
What you count or don't count should depend on what you are using your burndown for.
When I use a burndown it is most often to answer the question "Are we on track to completing our commitment of this sprint - or do we need to take external action?".
In that case, the thing that is most relevant to track is the "anticipated total amount of work left to finish the commitment"; whether that work was planned or unplanned, or was originally estimated at a different amount, is uninteresting in this context. It is still work that needs to be done, so it all counts.
So, count all remaining work. If the graph points towards the goal, keep working. If it points somewhere drastically different, take external action (e.g. renegotiate the sprint commitment with the PO).
Now, you might be trying to answer another question (e.g. "how good are we at planning" or "are we having scope creep during the sprint"), and in that case you would count in a different way.
A burndown chart is useful for tracking progress towards the team's commitment. In this case, it sounds like your team is struggling with two things that don't relate to the burndown chart:
1. Unplanned work
2. Poor estimates.
The key here is to focus on those problems. No matter what you do with the burndown chart, if you're adding unplanned work and your estimates are poor... you'll never derive any value from the burndown chart.
I'd recommend a few things:
1. Switch to tracking hours for Tasks... not points. Hours are tangible for the team... they mean something. Points are typically burned down at the release level.
2. Try shortening the length of your sprints. It's easier to achieve a smaller goal.
3. Ensure that task estimates are no longer than 8 hours. In fact, I'd shorten that to probably 4 hours. Estimating tasks that take longer than a single day encourages the wrong behavior for the team.
4. Ensure that you're spending enough time in Sprint Planning that the team can make a commitment. An effective sprint planning meeting is the first step towards an effective sprint.