Clock Kit Complication Synchronised With Time - watchkit

I'm attempting to make an app that displays a different string based on the current time. It should display a different string every minute, synchronised to the Apple Watch's clock (i.e. when a new minute starts, replace the current string on the complication).
I have had lots of issues with complications for the Apple Watch, and I can see that lots of people find Apple's documentation confusing.
I believe my implementation of getCurrentTimelineEntry is correct: I simply grab the current date, floor it to the nearest minute (rounded down), process it into the relevant string and stick it on the complication.
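A minimal sketch of that flooring logic, assuming a Modular Large template; stringForMinute(_:) is a hypothetical stand-in for whatever minute-to-string mapping the complication displays:

import ClockKit

// Inside the CLKComplicationDataSource implementation; only the relevant callback is shown.

// Hypothetical helper: maps a minute-aligned date to the string to display.
func stringForMinute(_ date: Date) -> String {
    return DateFormatter.localizedString(from: date, dateStyle: .none, timeStyle: .short)
}

// Round a date down to the start of its minute.
func floorToMinute(_ date: Date) -> Date {
    let seconds = (date.timeIntervalSinceReferenceDate / 60.0).rounded(.down) * 60.0
    return Date(timeIntervalSinceReferenceDate: seconds)
}

func getCurrentTimelineEntry(for complication: CLKComplication,
                             withHandler handler: @escaping (CLKComplicationTimelineEntry?) -> Void) {
    let minute = floorToMinute(Date())
    let template = CLKComplicationTemplateModularLargeTallBody()
    template.headerTextProvider = CLKSimpleTextProvider(text: "Now")
    template.bodyTextProvider = CLKSimpleTextProvider(text: stringForMinute(minute))
    handler(CLKComplicationTimelineEntry(date: minute, complicationTemplate: template))
}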
For the life of me, I do not understand what the getTimelineEndDate method does, as no matter what I pass into the handler it seems to make no difference.
The most confusing part, however, is the getTimelineEntries method. I understand the concept, i.e. pre-fetching what the complication should look like. Here I attempt to prefetch the next hour's worth of data (in my case, 60 different entries representing 60 different minutes). This seems to work; however, the method runs 10 times before stopping, by which point it has prefetched 600 entries, representing 10 hours' worth. This is completely unintended, though not disastrous. The worst part is that I have no idea how to fetch even more data in the future, i.e. I want this method to be called on the date of the last existing entry, to fetch the next 10 hours' worth.
In essence, once these bugs have been ironed out, I want to fetch 24 hours' worth of entries (60 × 24 minutes), and then, when the current time matches the time of the last entry, fetch the next 24 hours, and so on.
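For the batching behaviour, note that getTimelineEntries is handed a limit parameter (entries beyond it are ignored, and the system may ask in several batches), and that CLKComplicationServer's extendTimeline(_:) can be called later (e.g. from a background refresh) to make the system request entries after the last one. A minimal sketch, reusing the helpers and template choice from the sketch above:

func getTimelineEntries(for complication: CLKComplication,
                        after date: Date,
                        limit: Int,
                        withHandler handler: @escaping ([CLKComplicationTimelineEntry]?) -> Void) {
    var entries: [CLKComplicationTimelineEntry] = []
    // First minute strictly after `date`.
    var minute = floorToMinute(date).addingTimeInterval(60)
    while entries.count < limit {
        let template = CLKComplicationTemplateModularLargeTallBody()
        template.headerTextProvider = CLKSimpleTextProvider(text: "Now")
        template.bodyTextProvider = CLKSimpleTextProvider(text: stringForMinute(minute))
        entries.append(CLKComplicationTimelineEntry(date: minute, complicationTemplate: template))
        minute = minute.addingTimeInterval(60)
    }
    handler(entries)
}

// Later, when the timeline is running dry, ask ClockKit to come back for more entries:
func requestMoreEntries() {
    let server = CLKComplicationServer.sharedInstance()
    for complication in server.activeComplications ?? [] {
        server.extendTimeline(complication)
    }
}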
I will be grateful for any help, as the documentation for ClockKit complications is particularly poor.

Quantstrat: Execute on the same bar

I know this has been asked before here, but I'd like to extend the question further.
Let's say my entry price is 50, so at the start of the day I place a limit order to bid 50 for 1 lot. During the trading day, the market collapses and I get filled on my bid. In a real-world live trading scenario, my execution is going to be on the same daily bar at the price of 50. Even if I'm using 1-minute bars and that fill happens at 14:00 in real time, the data and prices at 14:01 are completely irrelevant to the trade and fill.
Furthermore, if I am already in a trade (let's say short at 50s), and I place a stop-loss order at 80s and the market trades up through the 80s, I'm going to get stopped out then and there, at around the price of 80s give or take some slippage. The next bar, whether it be daily, hourly or 1-minute, may open up at 150. A backtest that is going to execute that trade on the open of the next bar is now potentially way out of sync with what would have happened in a real-time live scenario.
I understand that any strategy that calculates its trading signals based on a bar's close can be subject to huge biases without enforcing next-bar execution. But for strategies that have predefined entry/exit signals (which I feel is going to be the majority), the ability to execute on the same bar is crucial!
In the post linked above, Josh Ulrich mentioned adding allowMagicalThinking=TRUE to the calls to applyStrategy and applyRules. However, I can't seem to find any documentation on it, and my implementation of it hasn't had any effect. What am I missing?
Call to applyRules:
test <- applyRules(strategy=strategy.st, portfolio=portfolio.st, symbol=symbols, mktdata=mktdata, allowMagicalThinking=TRUE)
Alternatively, the call to applyStrategy:
out <- applyStrategy(strategy=strategy.st,portfolios=portfolio.st, allowMagicalThinking=TRUE)
allowMagicalThinking = TRUE causes execution to occur on the same observation as order entry. There is no way to force orders to be entered on the same observation as the signal that causes them.
If your signals really are pre-defined, you can include them in your mktdata object and shift them sufficiently so that execution occurs when you think it should.
I caution anyone who does this to double- and triple-check your results, because you're side-stepping almost all of quantstrat's built-in safeguards to avoid creating look-ahead bias in your backtests.

Time synchronization of views generated by different instances of the game engine

I'm using the open-source Torque 3D game engine for an aviation simulator project.
I need to generate a single image from several IG (image generator) PCs. Each IG has its own view camera with a certain angle offset and gets the current position from the server via LAN.
I've already set up the multi-IG system.
The network connection is robust (latency under 1 ms).
Frame rate is good as well - about 70 FPS on each IG.
However, while moving, the whole picture looks broken because some IGs update their views faster than others.
I'm looking for a solution that will make the IGs update simultaneously, maybe some kind of precise time-synchronisation algorithm that makes different PCs connected via LAN act as one.
I had a much simpler problem, but my approach might help you.
You've got to run clocks on all your machines with, say, a 15 millisecond tick. Each image needs to be generated correctly for a specific tick and marked with its tick ID time. The display machine can check its own clock, determine the specific tick number (time) for which it should display, grab the images for that specific time, and display them.
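A minimal sketch of that tick bookkeeping (the tick length, epoch and RenderedImage type here are illustrative, not from the original setup):

import Foundation

// All machines agree on an epoch and a fixed tick length.
let tickLength: TimeInterval = 0.015          // 15 ms
let epoch = Date(timeIntervalSince1970: 0)    // any agreed-upon reference instant

struct RenderedImage {
    let tickID: Int64   // the tick this image was generated for
    let pixels: Data
}

// IG side: which tick does "now" belong to?
func currentTick(now: Date = Date()) -> Int64 {
    return Int64((now.timeIntervalSince(epoch) / tickLength).rounded(.down))
}

// Display side: show only images stamped with the tick being displayed,
// normally a few ticks behind real time so every IG has had time to deliver.
func imagesToDisplay(buffer: [RenderedImage], displayTick: Int64) -> [RenderedImage] {
    return buffer.filter { $0.tickID == displayTick }
}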
(To have the right mindset to think about this, imagine your network is really bad and think about one IG delivering 1000 images ahead of the current display tick while another is 5 ticks behind. Write for this sort of system and the results will look really good on the one you have.)
Ideally you want your display running a bit behind the IGs so you always have a full set of images for the current tick. I had a client-server setup and slowed the display (client) timer down if it came close to missing updates and sped it up if it was getting too far behind. You have to synchronize all your IG machines, so it might be better to have the master clock on the display and have it send messages to speed up any IG machine that's getting behind. (You may not have the variable network delays I had, but it's best to plan for them.)
The key is that each image must be made at a particular time, that the display include only images for the time being displayed, and that the composite images appear right when they should (every 15 milliseconds, on the millisecond). Also, do not depend on your network or even your machines to do anything in a timely manner. Use feedback to keep everything synched.
Addition On Feedback:
Say the last image for the frame at time T arrives 5ms after time T by the display computer's time (real time). If you display the frame for time T at T plus 10 ms, no one will notice the lag and you'll have plenty of time to assemble the images. Using a constant (10 ms) delay might work for you, especially if you make it big enough. It may be the way to go if you always run with the exact same network.
But you are depending on all your IG machines being precisely synchronized for real time, taking no more than a certain amount of time to produce their image, and delivering their image to the display machine all in predictable lengths of time.
What I'd suggest is to have your display machine determine the delay based on the timestamps on the images it receives. It would want to increase the delay if it isn't getting the images it needs in time, and decrease it if all the IGs are running several images ahead of what the display needs. (You might want to ignore the occasional really late image. You have to decide which is more annoying: images that are out of date, a display that is running noticeably behind time, or a display that noticeably speeds up and slows down.)
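A minimal sketch of that adaptive delay, assuming each incoming image is stamped with the tick it was generated for (the thresholds and bounds are illustrative starting points):

import Foundation

struct DisplayDelayController {
    var delayTicks: Int64 = 3     // how far behind real time the display runs
    let minDelay: Int64 = 1
    let maxDelay: Int64 = 20

    // Call once per displayed frame with how many ticks ahead of the display
    // the slowest IG currently is (less than 1 means an image barely made it or was late).
    mutating func update(slowestIGLeadTicks: Int64) {
        if slowestIGLeadTicks < 1 {
            // An IG barely made (or missed) the deadline: back further away from real time.
            delayTicks = min(delayTicks + 1, maxDelay)
        } else if slowestIGLeadTicks > 4 {
            // Everyone is comfortably ahead: creep closer to real time.
            delayTicks = max(delayTicks - 1, minDelay)
        }
    }
}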
In my original answer I was suggesting some kind of feedback from the display to keep the IG machines running on time, but that may be overkill: your computer's clocks are probably good enough for that.
Very generally, when any two processes have to coordinate over time, it's best if they talk to each other to stay in step (feedback) rather than each sticking to a carefully timed schedule.

How to get IMediaControl.Run() to start a file playing with no delay

I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next.
I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames.
I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches:
Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop and wait for the current position to equal the stop time, and then call Run() on the second video.
Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file.
Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least).
Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time.
The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing.
Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable.
More info: the AVI files I'm playing are completely uncompressed (video and audio are uncompressed), so I don't think the lag is due to DirectShow having to uncompress the video ahead of play start, although it may still buffer ahead as a matter of course (and this may be the source of the problem). I would have thought that starting play, pausing and then rewinding to the beginning would fix this.
Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch.
Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
Still more: perhaps I could eliminate this delay by loading the AVIs from memory rather than from a file. Unfortunately, I have no idea how to do this. IMediaControl only has a RenderFile() method, not something like a RenderStream or RenderMemory method.
If you call IMediaControl::Run on a stopped graph, the graph manager will post the call to a worker thread (so there's some variability). On the worker thread, the graph will be paused. Render filters only complete a pause transition once they have received data, so once GetState() returns S_OK, the graph manager knows that the graph is fully cued. At this point, it picks a time roughly 10ms into the future, and calls Run on each filter with that time as the start point. Since it takes time to tell each filter to Run, the dshow Run method has a parameter which is the refclock time at which a sample timestamped zero should be played -- i.e. the time at which the actual transition to run mode should take place.
To synchronise this with another graph, you first have to ensure that both graphs have the same clock. Query the graph (not the filter) for IMediaFilter, and call GetSyncSource on one graph and SetSyncSource on the other. Then you need to pause the second graph, so that it is cued and ready. When you want to start it, call IMediaFilter::Run instead of IMediaControl::Run, and you can pass your own start time. This still has to be a few milliseconds into the future, so the best thing might be to set the start time of the second graph to be the first graph's start time plus its duration (for an indexed container of uncompressed streams, the duration should be accurate).
Another approach is to use multiple graphs. Separating source from rendering would allow you to switch seamlessly between sources since they feed into a common render graph. There is sample source code for this approach at www.gdcl.co.uk/gmfbridge.
G

How do computers figure out date information?

Most languages have some sort of date function or object where you really don't have to do any programming to get date information; you just get it from the date function/object. I am curious: what goes on behind the scenes to make this happen?
Every computer has a system clock which keeps track of date and time. On the lowest level, date and time information is retrieved from there. Above that, add timezone information, etc. from the operating system, and you get a Date object or something similar.
Depending on your language/environment Date objects can either perform date calculation themselves or you have to use other functions to achieve that. Those ensure that leap years get handled correctly and no invalid date can be created.
But maybe I got your question wrong.
Typically a computer stores a count of how many of a certain unit of time have gone by since a specific time and date in the past. In Unix systems, for example, this could be the number of seconds since the Unix Epoch, which is midnight, Jan 1st 1970 GMT. In Windows, this is the number of 100 ns intervals since 1601-01-01 (thanks Johannes Rössel). Or it could be something as simple as the number of seconds since the computer was powered on.
So from the number of units that have gone by since that time/date, an OS can calculate the number of years, months, days, etc that have gone by. Of course all sorts of fun stuff like leap years and leap seconds have to be taken into account for this to occur.
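As a small illustration of that conversion, a sketch using Foundation's Calendar, which does the leap-year and month-length bookkeeping:

import Foundation

let secondsSinceEpoch: TimeInterval = 1_234_567_890
let instant = Date(timeIntervalSince1970: secondsSinceEpoch)

var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "UTC")!

let parts = calendar.dateComponents([.year, .month, .day, .hour, .minute, .second], from: instant)
print(parts)   // year: 2009, month: 2, day: 13, hour: 23, minute: 31, second: 30 (UTC)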
Systems such as NTP (Network Time Protocol) can be used to synchronize a computer's internal count to atomic clocks via an NTP server over a network. To do this, NTP takes into account the round-trip time and learns about the sorts of errors on the link to the NTP server.
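The core per-exchange arithmetic behind that estimate is simple (real NTP filters many such samples and applies further corrections); a sketch:

// t1: client sends request    t2: server receives it
// t3: server sends reply      t4: client receives it
func ntpEstimate(t1: Double, t2: Double, t3: Double, t4: Double)
    -> (offset: Double, roundTripDelay: Double) {
    let offset = ((t2 - t1) + (t3 - t4)) / 2.0   // how far the local clock is off
    let delay  = (t4 - t1) - (t3 - t2)           // network round-trip time
    return (offset, delay)
}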
Date and time information is usually provided by the operating system, so it's a system call. The operating system deals with the real-time clock mounted on the computer's mainboard and powered by a small battery (which lasts for years).
Well ... Most computers contain a "real-time clock", which counts time on the human scale of seconds, minutes etc. Traditionally, there is a small battery on the motherboard, that lets the chip either remember the time, or even keep counting it, even when the rest of the computer is powered off.
Many computers today use services like the network time protocol to periodically query a centralized high-precision clock, to set the current time. In this way, even if the battery is removed (or just fails), the computer will still know what time and date it is, and be able to update (to correct for errors in the real-time chip's time-keeping) that information as often as necessary.
Aside from the realtime clock, date calculations are mostly a software library function.
Dates are rather irregular, and so behind the scenes a mixture of approximations, corrections and lookup tables is used.
The representation of a date can vary as well; usually some (arbitrary) start date is used. A common system, also used by astronomers, is the Julian day number (not to be confused with the Julian calendar). Dates can be stored as seconds-since-start or as days-since-start (the latter is usually a floating point value). Here are some more algorithms.
A surprising amount of surprisingly complicated code is required for date parsing, computation, creation etc.
For example, in Java, dates are computed, modified, stored etc. via the Date and Calendar classes, and specifically and typically via the GregorianCalendar implementation of Calendar. (You can download the SDK/JDK and look at the source for yourself.)
In short, what I took from a quick perusal of the source is: Date handling is non-trivial and not something you want to attempt on your own. Find a good library if at all possible, else you will almost certainly be reinventing the square wheel.
Your computer has a system clock and the BIOS has a timer function that can be updated from your OS. Languages only take the information from there and some can update it too.
Buy any of these books on Calendrical Calculations. They'll fill you in on how the date libraries work under the hood.
The date/time is often stored in terms of time since a certain date, for example ticks (100-nanosecond intervals) since January 1, 0001. It is also usually stored in reference to UTC. The underlying methods in the OS, database, framework, application, etc. can then convert this to a more usable representation. Back in the day, systems would store the component parts of the date (day, month, year, etc.) as part of the data structure, but we learned our lesson with the Y2K mess that this probably isn't the best approach.
Most replies have been about how the current date is obtained, i.e. from the system clock and so on.
If you want to know how it is stored and used there are many different implementations and it depends on the system.
I believe a common one is the use of a 64-bit signed integer. In T-SQL, 01/01/1970 is 0, so negative numbers are pre-1970 and positive numbers come after, with each increment adding a 100th of a second (I think it's a 100th; I'd need to check).
Why 01/01/1970, you may ask? This is because the Gregorian calendar is on a 400-year cycle, with 01/01/1970 being the closest start of a cycle to the current date.
This is because "Every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100; the centurial years that are exactly divisible by 400 are still leap years. For example, the year 1900 is not a leap year; the year 2000 is a leap year." Makes it very complicated I believe the 400 year cycle coincides with the days of the week repeating as well but would nee dto check. Basically it's very complicated.
Internally it is incredibly difficult to write a datetime library accounting for all these variations, such as leap years and the fact that there is no year zero... not to mention UTC, GMT and UT1 times.
We had occasion when debugging a client problem to look at how SQL stores datetimes... fairly interesting and makes pretty good sense once you see it.
SQL Server uses two 4-byte integers...
The first 4 bytes are the date, stored as a signed count of days relative to the base date of Jan. 1st, 1900 (the earliest representable date is Jan. 1st, 1753, and the maximum is Dec. 31st, 9999, which doesn't come close to exhausting the range of a 4-byte integer, but there you go).
The second 4 bytes are the time, stored as the number of 1/300-second ticks since midnight (which is why datetime has roughly 3-millisecond precision).
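A sketch of decoding those two integers, assuming the layout described above (days relative to 1900-01-01 and 1/300-second ticks since midnight):

import Foundation

func decodeSQLServerDatetime(days: Int32, ticks: Int32) -> Date {
    var calendar = Calendar(identifier: .gregorian)
    calendar.timeZone = TimeZone(identifier: "UTC")!
    let baseDate = calendar.date(from: DateComponents(year: 1900, month: 1, day: 1))!

    let day = calendar.date(byAdding: .day, value: Int(days), to: baseDate)!
    return day.addingTimeInterval(Double(ticks) / 300.0)   // 300 ticks per second
}

print(decodeSQLServerDatetime(days: 0, ticks: 0))   // 1900-01-01 00:00:00 +0000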

If I wanted to work using dates and time going millions of years into the past/future how would I do it?

If I wanted to work using dates and time going millions of years into the past/future how would I do it in C/C++/C#?
For example say I was working on an algorithm to see if a comet was going to hit the earth? Are there commercial or open source libraries that do this?
Most DateTime types only cover a limited range of years; 32-bit Unix time will run out in 2038!
Tony
Astronomers use their own calendar, different from the civil, Gregorian calendar.
Astronomical Julian Dates are what they use.
Look at http://en.wikipedia.org/wiki/Julian_day
Here's a typical package: Solar Clock.
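If you end up rolling your own, the Julian Day Number of a Gregorian calendar date can be computed with the standard Fliegel & Van Flandern integer formula; a minimal sketch (all divisions are integer divisions, which is what makes it work):

func julianDayNumber(year: Int, month: Int, day: Int) -> Int {
    let a = (month - 14) / 12                        // -1 for Jan/Feb, 0 otherwise
    let term1 = (1461 * (year + 4800 + a)) / 4
    let term2 = (367 * (month - 2 - 12 * a)) / 12
    let term3 = (3 * ((year + 4900 + a) / 100)) / 4
    return term1 + term2 - term3 + day - 32075
}

print(julianDayNumber(year: 2000, month: 1, day: 1))   // 2451545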
You can't work out future UTC dates down to the second, because of the existence of leap seconds (one of which is scheduled for December 31, a couple of weeks from now). No one knows when leap seconds will be added to the calendar, because no one knows the rate at which the Earth's rotation will continue to slow in the future.
Well, not to be blunt or anything, but if it will hit the earth in like 1700 years, I don't think we'll need to know the actual date.
Unless it's a tuesday.
Never could get the hang of tuesdays.
If the time span offered by typical datetime variables is too small, you have two options:
Using a variable with a bigger scope (i.e. more bits)
Allowing for less precision
Which one you should choose depends on what exactly you want to do. But in general, my advice is option number 2. When we are talking about centuries or millennia, milliseconds are usually not that important...
When it comes to astronomical events, it doesn't matter which part of the earth is facing the Sun, i.e. has daylight. Therefore, calendars and dates are irrelevant too. You should simply use time. A 64 bit time_t for instance is quite sufficient.
Even when you do use time, keep in mind that 3-body systems (like Earth-Sun-Jupiter) are chaotic. Predicting the position of the Earth in the far future has a rather large margin of error.
A 64-bit time_t will work until the year 292,277,026,596.
You just need more bits to store the value and/or larger increments of time for the timestamp to represent. Instead of milliseconds, use years; then, if that isn't enough (and depending on your precision requirements), possibly use a math library/API designed for very large numbers, or use floating point.
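A minimal sketch of that idea, using a signed 64-bit count of days from an arbitrary epoch (the type name and epoch choice are illustrative):

struct DeepTimeInstant {
    var daysSinceEpoch: Int64   // epoch choice is arbitrary, e.g. J2000

    var approximateYearsSinceEpoch: Double {
        return Double(daysSinceEpoch) / 365.2425   // mean Gregorian year length
    }
}

let impact = DeepTimeInstant(daysSinceEpoch: -66_000_000 * 365)   // roughly 66 million years ago
print(impact.approximateYearsSinceEpoch)                           // about -65,956,000 years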
The DateTime object in C# goes from 1/1/0001 to 12/31/9999. It's even more limited in SQL Server, where the earliest date is 1753.
You're pretty much going to have to write something yourself or try to scrounge something from someone else. Going forward is probably easier (except the leap second thing.) There is some pretty good info on calendars here.
I think the problem is not knowing whether the comet is going to hit on a Friday or a Sunday, but whether the comet is or isn't going to hit. I guess this means that you could model time in 1000ths of a second as a 64-bit long long type, which can resolve roughly ±292 million years. You can then have a routine which maps this back to meaningful-ish dates...

Resources