I'm developing an Apple Watch application (watchOS 4). I'm interested in monitoring heart rate, so I'm using a workout session (HKWorkoutSession) to get continuous data updates. The problem is: when I start a workout session, it automatically updates the exercise ring of the Activity Rings on the watch. As it's not a sports app, I would like to prevent that.
Does anyone have a solution for that?
I just discovered that if you pause the workout session, heart rate is still collected, but calories and exercise minutes are not.
// `session` here is an HKWorkoutSession and `builder` its associated HKLiveWorkoutBuilder
session.startActivity(with: Date())
session.pause() // while paused, heart rate is still collected, but exercise minutes and active calories are not
builder.beginCollection(withStart: Date()) { (success, error) in
    guard success else {
        // Handle the error
        return
    }
}
It's really that easy. Months of trying to figure this out and the internet didn't have an answer.
Note that the Health app will still technically show calories being collected, but it seems to be only about 15–20 per hour, which doesn't seem to be more than usual for just wearing the watch normally.
Please check the "Filling the Rings" section of https://developer.apple.com/documentation/healthkit/hkworkoutsession
Set your workout to one of the following activities: running, walking, cycling, stair climbing, elliptical, or rowing. These provide customized calorie calculations, so if you don't move, nothing will be contributed to the ring.
FYI I only tried indoor walking.
Hey guys,
I am creating a script that will send signals to a Telegram group from which Cornix will read those signals and, through the APIs, send those "trades" to your set exchange.
So a classic Signal Channel on TG.
Problem I am getting is that when an entry price gets detected, the exchange does not get that trade in time.
By that I mean it takes too much time for the script to send the signal on Telegram and too much time for Cornix to send the trade through.
By the time Cornix opens the limit order that entry price has already changed so I have to hope that the price gets hit again, which as you can guess, is not ideal.
Currently I am using the 5-minute timeframe. I also tried the 15-minute timeframe. Both have the same issue.
Is there a way to fix this delay or somehow reduce it?
Thanks
I've also noticed delays from TV (TradingView) to my bots, and it doesn't really matter which ones. TV signals are mostly delayed by a few seconds; in my case the signal goes to a bot, and then from the bot to the exchange, so there will naturally be a delay between the signal firing and the order actually being placed. Bot to Binance has taken some 1.5 minutes to trade, and you're right, sometimes the price moves quickly. Maybe set market orders to buy and see how it goes. I'd be interested if you find a quicker solution.
I know this has been asked before here, but I'd like to extend the question further.
Let's say my entry price is 50, so at the start of the day I place a limit order bid at 50 for 1 lot. During the trading day, the market collapses and I get filled on my bid. In a real-world live trading scenario, my execution is going to be on the same daily bar at the price of 50. Even if I'm using 1-minute bars and that fill happens at 14:00 in real time, the data and prices at 14:01 are completely irrelevant to the trade and fill.
Furthermore, if I am already in a trade (let's say short at 50), and I place a stop-loss order at 80 and the market trades up through the 80s, I'm going to get stopped out then and there, at around the price of 80 give or take some slippage. The next bar, whether it be daily, hourly or 1-minute, may open up at 150. A backtest that executes that trade on the open of the next bar is now potentially way out of sync with what would have happened in a real-time live scenario.
I understand that any strategy that calculates its trading signals based off a bar's close can be subject to huge biases without enforcing the next bar execution. But for strategies that have predefined entry/exit signals (which I feel is going to be the majority) the ability to execute on the same bar is crucial!
In the post linked above, Josh Ulrich mentioned adding allowMagicalThinking=TRUE to the calls to applyStrategy and applyRules. However, I can't seem to find any documentation on it, and my implementation of it hasn't had any effect. What am I missing?
Call to applyRules:
test <- applyRules(strategy = strategy.st, portfolio = portfolio.st, symbol = symbols, mktdata = mktdata, allowMagicalThinking = TRUE)
Alternatively, call to strategy:
out <- applyStrategy(strategy = strategy.st, portfolios = portfolio.st, allowMagicalThinking = TRUE)
allowMagicalThinking = TRUE causes execution to occur on the same observation as order entry. There is no way to force orders to be entered on the same observation as the signal that causes them.
If your signals really are pre-defined, you can include them in your mktdata object and shift them sufficiently so that execution occurs when you think it should.
I caution anyone who does this to double- and triple-check your results, because you're side-stepping almost all of quantstrat's built-in safeguards to avoid creating look-ahead bias in your backtests.
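To illustrate the shifting idea outside of quantstrat, here is a minimal Python sketch (made-up data; shift() is a hypothetical helper, not a quantstrat function). Next-bar execution acts on the bar after the signal fires, while same-bar execution uses the unshifted column:

```python
# Sketch: shifting a pre-defined signal column so that execution lines up
# with the bar on which the fill would realistically have occurred.
closes  = [100, 95, 50, 55, 80, 150]   # bar closes (made-up data)
signals = [0,   0,  1,  0,  -1,  0]    # pre-defined entry/exit signals

def shift(series, n, fill=0):
    """Shift a series forward by n bars, padding the front with `fill`."""
    return [fill] * n + series[:len(series) - n]

# Default next-bar execution: act on the bar AFTER the signal fires.
next_bar = shift(signals, 1)

# "Same-bar" execution (what allowMagicalThinking permits): no shift at all.
same_bar = signals
```

The same double-checking caveat applies: shifting by too little silently reintroduces look-ahead bias.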
So I've been messing around with Project Tango, and noticed that if I turn on a motion tracking app and leave the device on a table (blocking all cameras), the motion tracking goes off in crazy directions and makes incredibly wrong predictions about where I'm going (I'm not even moving, but the device thinks I'm going 10 meters to the right). I'm wondering if there is some exception that can be thrown, some warning, or some API call I can use to stop this from happening.
If you block all the cameras, there are no features the camera can capture,
so motion tracking may end up in one of two states:
1. No movement detected,
2. Drifting off to Hawaii.
Either may happen.
If you did block the fisheye camera, yes, this is expected.
As for the API, there is a way to handle it.
Please check the lifecycle for the motion tracking concept.
For example, for C/C++:
https://developers.google.com/project-tango/apis/c/c-motion-tracking
If the API reports pose_data as TANGO_POSE_INVALID, the motion tracking system can be reinitialized in two ways. If config_enable_auto_recovery was set to true, the system will immediately enter the TANGO_POSE_INITIALIZING state and will use the last valid pose as the starting point after recovery. If config_enable_auto_recovery was set to false, the system will essentially pause and keep returning poses as TANGO_POSE_INVALID until TangoService_resetMotionTracking() is called. Unlike auto recovery, this also resets the starting point after recovery back to the origin.
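As a rough model of those two recovery modes (a Python sketch of the behaviour, not the actual C API):

```python
# Sketch of the two recovery behaviours described above for an invalid pose.
INVALID, INITIALIZING, VALID = "invalid", "initializing", "valid"

class MotionTracker:
    def __init__(self, auto_recovery):
        self.auto_recovery = auto_recovery
        self.state = VALID
        self.origin = (0.0, 0.0, 0.0)
        self.last_valid_pose = self.origin

    def on_pose(self, pose, valid):
        if valid:
            self.state = VALID
            self.last_valid_pose = pose
        elif self.auto_recovery:
            # auto recovery: re-initialize from the last valid pose
            self.state = INITIALIZING
        else:
            # stay invalid until an explicit reset is requested
            self.state = INVALID

    def reset_motion_tracking(self):
        # manual reset: also moves the starting point back to the origin
        self.last_valid_pose = self.origin
        self.state = INITIALIZING
```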
You can also add "Handling Adverse Situations with the UX Framework" to your app; check the link:
https://developers.google.com/project-tango/ux/ux-framework-exceptions
The last solution is to write a function that handles drifting by measuring the velocity of pose_data and calling TangoService_resetMotionTracking() when it becomes implausible, and so on.
I run a filter on the intake that tries not to let obviously ridiculous pose changes through; I also don't believe any reported point whose texel is white, nor any pose where the entire texture is within shouting distance of black.
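A minimal sketch of that kind of intake filter (Python, with a made-up speed threshold; a rejected pose would be the cue to reset motion tracking):

```python
import math

# Sanity filter on incoming poses: reject updates that imply an
# implausible speed between two consecutive samples.
MAX_SPEED_M_S = 5.0  # assumption: anything faster is treated as drift

def plausible(prev_pos, prev_t, new_pos, new_t, max_speed=MAX_SPEED_M_S):
    """Return True if moving from prev_pos to new_pos in the elapsed
    time implies a speed at or below max_speed."""
    dt = new_t - prev_t
    if dt <= 0:
        return False
    dist = math.dist(prev_pos, new_pos)
    return dist / dt <= max_speed
```

A rejected pose is where you would call TangoService_resetMotionTracking() (or drop the sample and wait for a valid one).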
I am working on a Unity mobile game, which is like a multiplayer version of Temple Run. Because this game is meant for mobile, there is fluctuating latency, generally in the range of 200 ms–500 ms.
Since the path is predetermined and the actions the user can perform are limited (jump, slide, use powerup, etc.), the remote player keeps running on the path until it receives the updated state from its local player.
This strategy generally works pretty well since I only need to send limited amount of data over the network but there is a specific case in which I am having issues.
In the game, players die at specific positions (obstacles).
I want remote players to die on the same obstacle/position as the local player, but due to the latency in sending the message, the remote player has crossed the obstacle by the time it receives the death message.
Is there a way to sync the players on deaths?
One of the things I tried was moving the remote player back to the local player's death position, but not only does it look awkward visually, it can also raise other syncing issues.
Is there some other better way to do this ?
One way I'd recommend is to make one player act as the server (not a real server). The server-player does all the computation, like moving, jumping, and creating scenes, and then sends the data to sync with the client-player. The client-player receives the data and processes the game state; it can also send its actions (left/right/jump/slide) to the server-player. This way both players will have the same game state, including positions and deaths. You also need to deal with latency by adding some prediction.
So the solution I implemented was to spawn all remote players far enough behind that they have some time to receive the information that the local player died on a specific obstacle. At the end there is a straight path where I just sync the players again, so the result is displayed correctly.
I have this application where two children are playing catch. One throws and the other catches. While I can show a ball object moving between two stationary objects, how do I show the objects "releasing" and "catching" the ball in a way that is close to lifelike?
EDIT:
The movement of the hands in this game: http://www.acreativedesktop.com/animation-game-slaphands.html is what I would like to replicate. Any tips on how to do that?
As it's already been stated, you need animation to get it right. I suggest looking over Preston Blair's Cartoon Animation Book or The Animator's Survival Kit. You won't need to read the whole thing, just reference the chapters on anticipation and accents.
For example, when one throws, the action doesn't just happen: one first prepares, anticipating the throw, building up energy. In animation you prepare the viewer for the next action, thus creating a seamless link between actions. Once the ball is thrown, there is action and reaction, so the player will return to his casual pose.
The ActionScript part should be pretty simple. You should get away with 3 vectors:
1 for setting the ball's movement
1 for gravity
1 for friction/wind, etc.
Based on your parameters, you launch the ball, then use the distance between the ball and the catcher to figure out when you can play the catcher's animation(s).
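To sketch that idea in code (Python here rather than ActionScript; all the numbers are made-up assumptions):

```python
# Step the ball each frame under velocity, gravity and drag, and report
# the frame at which it gets close enough to cue the catch animation.
CATCH_RADIUS = 10.0  # assumed trigger distance, in screen units

def simulate_throw(pos, vel, catcher, gravity=(0.0, 0.5), drag=0.99,
                   max_frames=1000):
    """Return the frame index at which the catch animation should
    start, or None if the ball never gets close to the catcher."""
    x, y = pos
    vx, vy = vel
    for frame in range(max_frames):
        vx = (vx + gravity[0]) * drag  # apply gravity, then damping
        vy = (vy + gravity[1]) * drag  # +y is down, as in Flash coords
        x, y = x + vx, y + vy
        if ((x - catcher[0]) ** 2 + (y - catcher[1]) ** 2) ** 0.5 <= CATCH_RADIUS:
            return frame  # cue the catcher's animation here
    return None
```

In the real app you would run the same check inside your frame loop and switch the catcher's movie clip to its catch animation at that moment.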
Skeletal animation is a good technique;
also inverse kinematics,
or even motion capture.
I know this is essentially possible in Flash... it does, however, sound pretty complex. My advice would be to create static animations for the throw action and the catch action, then play them when certain conditions are met (i.e. the ball gets close to one of the people). Trying to get a lifelike throw and catch dynamically will be pretty tough; I'd think even a lot of console games wouldn't attempt it (though I expect this is changing in current-gen games).