How do I set iterations for stage runs? - k6

I am trying to set up a run in k6 with a staged ramp-up and ramp-down. I am trying to find out how to configure k6 so that it will start a new iteration of the run once the stages are complete. Do I simply include iterations in the stage block, or is there something else I need to do?

What you basically want is for a given set of stages to repeat over a prolonged period of time.
There are two solutions:
Just run the script in a loop from a bash/cmd/shell script. This will have some pauses between runs, as k6 needs to initialize again each time. Additionally, you will have multiple outputs, but that might not be bad; see below.
Define as many stages as you need. You can write them out by hand or just generate them inside the test script:
import http from "k6/http";
import { check, sleep } from "k6";
let stages = [// total 30 seconds
// Ramp-up from 1 to 5 VUs in 10s
{ duration: "10s", target: 5 },
// Stay at rest on 5 VUs for 5s
{ duration: "5s" },
// Ramp-down from 5 to 0 in 15s
{ duration: "15s", target: 0}
];
let final_stages = [];
// a 30-second cycle repeated 120 times is 3600 seconds, i.e. 1 hour
for (let i = 0; i < 120; i++) {
final_stages = final_stages.concat(stages);
}
export let options = {
stages: final_stages,
discardResponseBodies: true
};
export function setup() {
console.log(JSON.stringify(options.stages));
}
export default function() {
// normal execution
let res = http.get("http://httpbin.org/");
check(res, { "status is 200": (r) => r.status === 200 });
}
I highly advise running k6 with --no-summary, and maybe even --no-thresholds (or just not using thresholds), if the test will run for a long time, as you are probably going to run out of memory just collecting data inside k6. This means that you should probably use some external storage for the metrics, like InfluxDB or even Load Impact's Insights. This of course might not be true in your case - you will have to check :).
Answer as I understood the question previously: "I want to have stages and then I want to do some concrete amount of iterations of the script instead of for a given duration as will happen with another stage"
The simple answer is no.
If you have stages, then iterations and duration for the test are mutually exclusive with them - only one of the three will be taken into account. Look at the end for a possible workaround.
Long answer:
If I understand correctly, what you want is to have a bunch of stages and then, after all the stages have run, to have a few VUs which do a given number of iterations. This is currently not supported, and it would be interesting to us (the k6 team) to know why you need this.
Workaround(s): Currently I can think of two workarounds - both hacky:
If you don't need anything from the earlier part of the test in the later iterations, make two tests and a bash/shell/cmd script that runs them one after the other. This will have some pause between the test runs and you will have two outputs, but it will definitely be easier.
Make another stage after all the other stages have finished, with a duration long enough to run all the iterations. Record the beginning of the whole test in a variable - preferably in setup() - and use it to calculate when you reach the last stage. Run as many iterations as you want and then sleep for the remainder of the test.
It's important to note that at this point there is no way to communicate between VUs, so you will have to decide up front that, say, with 10 VUs each will do 5 iterations to get 50 in total, and maybe one VU will finish much faster or later than the others.
import http from "k6/http";
import { check, sleep } from "k6";
export let options = {
stages: [
// Ramp-up from 1 to 5 VUs in 10s
{ duration: "10s", target: 5 },
// Stay at rest on 5 VUs for 5s
{ duration: "5s" },
// run for some amount of time after that
{ duration: "15s"}
],
discardResponseBodies: true
};
export function setup() {
return Date.now();
}
let iter = 5; // how many iterations per VU should be done in the iterations part
export default function(testStart) {
// 15s (the 10s ramp-up plus the 5s steady stage) is the duration of the pre-iterations phase
if (Date.now() - testStart > 15 * 1000) {
// "iterations part"
if (iter == 0) { // we have run out of iterations to do
sleep(15); // k6's sleep() takes seconds; sleep out the rest of the last stage
return; // don't make any more requests once the iterations are used up
}
iter = iter - 1;
// debug log
console.log(__VU, __ITER, iter);
// the actual part you want to do
let res = http.get("http://httpbin.org/");
check(res, { "status is 200": (r) => r.status === 200 });
} else {
// normal execution
let res = http.get("http://httpbin.org/");
check(res, { "status is 200": (r) => r.status === 200 });
}
}
This obviously needs to be tailored to your case and won't be exact, but once again, it's much more interesting what you want to do with it and why.
Sidenote:
This came up during the discussion of the arrival-rate based execution, and the current plan is for it to be supported, but that work is probably at least a month away from alpha/beta capability, as there is a lot of internal refactoring needed to support everything we want to add along with it. You should probably open an issue with your use case so it can be taken into consideration in the future.
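As it happens, this is what later became the scenarios/executors API (added in k6 v0.27.0). A minimal sketch, assuming the documented ramping-vus and per-vu-iterations executors - verify the exact option names against the current k6 docs:
import http from "k6/http";

export const options = {
  scenarios: {
    staged_part: {
      // the original ramp-up / steady / ramp-down
      executor: "ramping-vus",
      startVUs: 1,
      stages: [
        { duration: "10s", target: 5 },
        { duration: "5s", target: 5 },
        { duration: "15s", target: 0 },
      ],
    },
    iterations_part: {
      // a fixed number of iterations per VU, started once the stages are done
      executor: "per-vu-iterations",
      vus: 5,
      iterations: 10,
      startTime: "30s", // 10s + 5s + 15s
    },
  },
};

export default function () {
  http.get("http://httpbin.org/");
}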

Related

How quickly can Realm return sorted data?

Realm allows you to receive the results of a query in sorted order.
let realm = try! Realm()
let dogs = realm.objects(Dog.self)
let dogsSorted = dogs.sorted(byKeyPath: "name", ascending: false)
I ran this test to see how quickly realm returns sorted data
import Foundation
import RealmSwift
class TestModel: Object {
@Persisted(indexed: true) var value: Int = 0
}
class RealmSortTest {
let documentCount = 1000000
var smallestValue: TestModel = TestModel()
func writeData() {
let realm = try! Realm()
var documents: [TestModel] = []
for _ in 0 ... documentCount {
let newDoc = TestModel()
newDoc.value = Int.random(in: 0 ... Int.max)
documents.append(newDoc)
}
try! realm.write {
realm.deleteAll()
realm.add(documents)
}
}
func readData() {
let realm = try! Realm()
let sortedResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
let start = Date()
self.smallestValue = sortedResults[0]
let end = Date()
let delta = end.timeIntervalSinceReferenceDate - start.timeIntervalSinceReferenceDate
print("Time Taken: \(delta)")
}
func updateSmallestValue() {
let realm = try! Realm()
let sortedResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
smallestValue = sortedResults[0]
print("Originally loaded smallest value: \(smallestValue.value)")
let newSmallestValue = TestModel()
newSmallestValue.value = smallestValue.value - 1
try! realm.write {
realm.add(newSmallestValue)
}
print("Originally loaded smallest value after write: \(smallestValue.value)")
let readStart = Date()
smallestValue = sortedResults[0]
let readEnd = Date()
let readDelta = readEnd.timeIntervalSinceReferenceDate - readStart.timeIntervalSinceReferenceDate
print("Reloaded smallest value \(smallestValue.value)")
print("Time Taken to reload the smallest value: \(readDelta)")
}
}
With documentCount = 100000, readData() output:
Time taken to load smallest value: 0.48901796340942383
and updateSmallestValue() output:
Originally loaded smallest value: 2075613243102
Originally loaded smallest value after write: 2075613243102
Reloaded smallest value 2075613243101
Time taken to reload the smallest value: 0.4624580144882202
With documentCount = 1000000, readData() output:
Time taken to load smallest value: 4.807577967643738
and updateSmallestValue() output:
Originally loaded smallest value: 4004790407680
Originally loaded smallest value after write: 4004790407680
Reloaded smallest value 4004790407679
Time taken to reload the smallest value: 5.2308430671691895
The time taken to retrieve the first document from a sorted result set scales with the number of documents stored in Realm rather than with the number of documents being retrieved. This indicates to me that Realm is sorting all of the documents at query time rather than when the documents are written. Is there a way to index your data so that you can quickly retrieve a small number of sorted documents?
Edit:
Following discussion in the comments, I updated the code to load only the smallest value from the sorted collection.
Edit 2
I updated the code to observe the results as suggested in the comments.
import Foundation
import RealmSwift
class TestModel: Object {
@Persisted(indexed: true) var value: Int = 0
}
class RealmSortTest {
let documentCount = 1000000
var smallestValue: TestModel = TestModel()
var storedResults: Results<TestModel> = (try! Realm()).objects(TestModel.self).sorted(byKeyPath: "value")
var resultsToken: NotificationToken? = nil
func writeData() {
let realm = try! Realm()
var documents: [TestModel] = []
for _ in 0 ... documentCount {
let newDoc = TestModel()
newDoc.value = Int.random(in: 0 ... Int.max)
documents.append(newDoc)
}
try! realm.write {
realm.deleteAll()
realm.add(documents)
}
}
func observeData() {
let realm = try! Realm()
print("Loading Data")
let startTime = Date()
self.storedResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
self.resultsToken = self.storedResults.observe { changes in
let observationTime = Date().timeIntervalSince(startTime)
print("Time to first observation: \(observationTime)")
let firstTenElementsSlice = self.storedResults[0..<10]
let elementsArray = Array(firstTenElementsSlice) //print this if you want to see the elements
elementsArray.forEach { print($0.value) }
let moreElapsed = Date().timeIntervalSince(startTime)
print("Time to printed elements: \(moreElapsed)")
}
}
}
and I got the following output
Loading Data
Time to first observation: 5.252112984657288
3792614823099
56006949537408
Time to printed elements: 5.253015995025635
Reading the data with an observer did not reduce the time taken to read the data.
At this time it appears that Realm sorts data when it is accessed rather than when it is written, and there is not a way to have Realm sort data at write time. This means that accessing sorted data scales with the number of documents in the database rather than the number of documents being accessed.
The actual time taken to access the data varies by use case and platform.
dogs and dogsSorted are Realm Results collection objects that essentially contain pointers to the underlying data, not the data itself.
Defining a sort order does NOT load all of the objects and they remain lazy - only loading as needed, which is one of the huge benefits to Realm; giant datasets can be used without worrying about overloading memory.
It's also one of the reasons that Realm Results objects always reflect the current state of the underlying data; that data can change many times, and what you see in your app's Results vars (and Realm collections in general) will always show the updated data.
As a side note, at this time working with Realm collection objects via Swift high-level functions causes that data to load into memory - so don't do that. Sort, filter etc. with Realm functions and everything stays lazy and memory friendly.
Indexing is a trade-off: on one hand it can improve the performance of certain queries, like an equality ( "name == 'Spot'" ), but on the other hand it can slow down write performance. Additionally, adding indexes takes up a bit more space.
Generally speaking, indexing is best for specific use cases; maybe a situation where you're doing some kind of type-ahead autofill where performance is critical. We have several apps with very large datasets (GBs) and nothing is indexed, because the performance advantage gained is offset by slower writes, which are done frequently. I suggest starting without indexing.
EDIT:
Going to update the answer based on additional discussion.
First and foremost, copying data from one object to another is not a measure of database loading performance. The real objective here is the user experience and/or being able to access that data - from the time the user expects to see the data to when it's shown. So let's provide some code to demonstrate general performance:
We'll first start with a similar model to what the OP used
class TestModel: Object {
@Persisted(indexed: true) var value: Int = 0
convenience init(withIndex: Int) {
self.init()
self.value = withIndex
}
}
Then define a couple of vars to hold the Results from disk and a notification token, which lets us know when that data is available to be displayed to the user, and lastly a var to hold the time at which loading starts.
var modelResults: Results<TestModel>!
var modelsToken: NotificationToken?
var startTime = Date()
Here's the function that writes lots of data. The objectCount var will be changed from 10,000 objects on the first run to 1,000,000 objects on the second. Note this is bad coding as I am creating a million objects in memory so don't do this; for demonstration purposes only.
func writeLotsOfData() {
let realm = try! Realm()
let objectCount = 1000000
autoreleasepool {
var testModelArray = [TestModel]()
for _ in 0..<objectCount {
let m = TestModel(withIndex: Int.random(in: 0 ... Int.max))
testModelArray.append(m)
}
try! realm.write {
realm.add(testModelArray)
}
print("data written: \(testModelArray.count) objects")
}
}
and then finally the function that loads those objects from Realm and reports when the data is available to be shown to the user. Note they are sorted per the original question - and in fact will maintain their sort as data is added and changed! Pretty cool stuff.
func loadBigData() {
let realm = try! Realm()
print("Loading Data")
self.startTime = Date()
self.modelResults = realm.objects(TestModel.self).sorted(byKeyPath: "value")
self.modelsToken = self.modelResults?.observe { changes in
let elapsed = Date().timeIntervalSince(self.startTime)
print("Load completed of \(self.modelResults.count) objects - elapsed time of \(elapsed)")
}
}
and the results. Two runs, one with 10,000 objects and one with 1,000,000 objects
data written: 10000 objects
Loading Data
Load completed of 10000 objects - elapsed time of 0.0059670209884643555
data written: 1000000 objects
Loading Data
Load completed of 1000000 objects - elapsed time of 0.6800119876861572
There are three things to note:
A Realm notification object fires an event when the data has completed loading, and also when there are additional changes. We are leveraging that to notify the app when the data has completed loading and is available to be used - shown to the user, for example.
We are lazily loading all of the objects! At no point are we going to run into a memory overloading issue. Once the objects have loaded into the results, they are then freely available to be shown to the user or processed in whatever way is needed. It's super important to work with Realm objects in a Realm way when working with large datasets. Generally speaking, if it's 10 objects, well, no problem tossing them into an array, but when there are 1 million objects - let Realm do its lazy job.
The app is protected when using the above code and techniques. There could be 10 objects or 1,000,000 objects and the memory impact is minimal.
EDIT 2
(see comment to the OP's question for more info about this edit)
Per a request from the OP, they wanted to see the same exercise with printed values and times. Here's the updated code:
self.modelsToken = self.modelResults?.observe { changes in
let elapsed = Date().timeIntervalSince(self.startTime)
print("Load completed of \(self.modelResults.count) objects - elapsed time of \(elapsed)")
print("print first 10 object values")
let firstTenElementsSlice = self.modelResults[0..<10]
let elementsArray = Array(firstTenElementsSlice) //print this if you want to see the elements
elementsArray.forEach { print($0.value)}
let moreElapsed = Date().timeIntervalSince(self.startTime)
print("Printing of 10 elements completed: \(moreElapsed)")
}
and then the output
Loading Data
Load completed of 1000000 objects - elapsed time of 0.6730009317398071
print first 10 object values
12264243738520
17242140785413
29611477414437
31558144830373
32913160803785
45399774467128
61700529799916
63929929449365
73833938586206
81739195218861
Printing of 10 elements completed: 0.6745189428329468

Update firebase database value every 5 seconds using Pub/sub OR Cloud Tasks?

I am new to Firebase and I am totally confused about what I should use. Here is my flow.
I have a collection score in Firebase and it has the values
- start_time
- count
- max_count
Now, when start_time matches the current time, I need to increment count in the database every five seconds until it reaches max_count. This should be in the backend. This is where I got confused: what is suitable for this?
There is so much documentation about Cloud Tasks and Pub/Sub.
If I call a Firebase function from Pub/Sub to update the count every 5 seconds, then I will be paying for unused compute time just for calling a function.
I don't know much about Cloud Tasks - does it match my requirement? Can anyone please guide me?
Neither Cloud Tasks nor Pub/Sub would be the right solution for this and I wouldn't recommend using a cron-type service for such a menial task.
Instead consider moving the incremental logic to your client and just storing start_time and max_count in your database. Here's an example:
// Let's set a start_time 10 seconds in the future and pretend this was in the database
const start_time = Math.floor((new Date()).getTime() / 1000) + 10;
// Pretend this came from the database, we only want to iterate 10 times
const max_count = 10;
let prev_count = 0;
document.write("Waiting 10 seconds before starting<br />");
// Let's iterate once a second until we reach the start_time
let interval = setInterval(() => {
const now = Math.floor((new Date()).getTime() / 1000);
// If it's not start time, exit
if (now < start_time) return;
// Determine the count by dividing by 5 seconds
let count = Math.floor((now - start_time) / 5);
if (count > prev_count) {
document.write(`Tick: ${count}<br />`);
}
prev_count = count;
if (count >= max_count) {
clearInterval(interval);
}
}, 1000);
If you need the count stored in the database, have it update the count value in your database each time it increments.
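A minimal sketch of that write, assuming the Firebase v9 modular web SDK and a hypothetical score document id ("currentScore" below):
import { initializeApp } from "firebase/app";
import { getFirestore, doc, updateDoc } from "firebase/firestore";

const app = initializeApp({ /* your Firebase config */ });
const db = getFirestore(app);
// Hypothetical path - substitute your own collection / document id
const scoreRef = doc(db, "score", "currentScore");

// Call this from inside the interval above whenever count increments
async function persistCount(count) {
  await updateDoc(scoreRef, { count });
}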

Is there a better way to write this firebase cloud function than what I have right now?

I have been trying to learn Firebase Cloud Functions recently and I have written an HTTP function that takes the itemName, sellerUid, and quantity. Then I have a background trigger (an onWrite) that finds the Item Price for the provided sellerUid and itemName, computes the total (Item Price * Quantity), and then writes it into a document in Firestore.
My question is:
with what I have right now, suppose my client purchases N items, this means that I will have:
N reads (from the N items' price searching),
2 writes (one initial write for the N items and 1 for the Total Amount after computation),
N searches from the Cloud Function?
I am not exactly sure how Cloud Functions count towards reads and writes, or how much compute time they need (though it's all just text, so it should be negligible?).
Would love to hear your thoughts on if what I have is already good enough or is there a much more efficient way of going about this.
Thanks!
exports.itemAdded = functions.firestore.document('CurrentOrders/{documentId}').onWrite(async (change, context) => {
const snapshot = change.after.data();
var total = 0;
for (const [key, value] of Object.entries(snapshot)) {
if (value['Item Name'] != undefined) {
await admin.firestore().collection('Items')
.doc(key).get().then((dataValue) => {
const itemData = dataValue.data();
if (!dataValue.exists) {
console.log('This is empty');
} else {
total += (parseFloat(value['Item Quantity']) * parseFloat(itemData[value['Item Name']]['Item Price']));
}
});
console.log('This is in total: ', total);
}
}
snapshot['Total'] = total;
console.log('This is snapshot afterwards: ', snapshot);
return change.after.ref.set(snapshot);
});
With your current approach you will be billed with:
N reads (from the N items' price searching);
1 write that triggers your onWrite function;
1 write that persists the total value;
One better approach that I can think of is to compare the list of values in change.before.data() with the one in change.after.data(), read the current total value (0 if this is the first write), and then add only the values that were newly added in change.after.data() instead of all N values, which would potentially result in you being charged for fewer reads.
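A hedged sketch of that idea, keeping the same CurrentOrders/{documentId} document shape as your function and assuming an item entry does not change once it has been written:
exports.itemAdded = functions.firestore.document('CurrentOrders/{documentId}').onWrite(async (change, context) => {
  // Nothing to do if the document was deleted
  if (!change.after.exists) return null;
  const before = change.before.exists ? change.before.data() : {};
  const after = change.after.data();
  // Start from the Total computed by previous invocations (0 on the first write)
  let total = after['Total'] || 0;
  for (const [key, value] of Object.entries(after)) {
    // Skip the Total field itself and any item key that was already present (and priced) before
    if (key === 'Total' || before[key] !== undefined) continue;
    if (value['Item Name'] === undefined) continue;
    const dataValue = await admin.firestore().collection('Items').doc(key).get();
    if (!dataValue.exists) {
      console.log('This is empty');
    } else {
      const itemData = dataValue.data();
      total += parseFloat(value['Item Quantity']) * parseFloat(itemData[value['Item Name']]['Item Price']);
    }
  }
  // Only write when Total actually changed, otherwise this set() would re-trigger onWrite forever
  if (after['Total'] === total) return null;
  return change.after.ref.set({ Total: total }, { merge: true });
});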
For the actual pricing, if you check the documentation for Cloud Functions, you will see that in your case only invocation and compute-time billing apply; however, there is a free tier for both, so if you are using this only to learn and the app does not get a lot of use, you should stay within the free tier with either approach.
Let me know if you need any more information.

Stream Heart Pulse from Apple Watch OS 2 [duplicate]

Can we access the heart rate directly from the Apple Watch? I know this is a duplicate question, but no one has asked this in like 5 months. I know you can access it from the Health app, but I'm not sure how "real-time" that will be.
Raw heart rate data is now available in WatchKit for watchOS 2.0.
watchOS 2 includes many enhancements to existing frameworks such as HealthKit, enabling access to the health sensors that provide heart rate and health information in real time.
You can check this information in the following session, which is a 30-minute presentation in total. If you do not want to watch the entire session, jump directly to the HealthKit API features, covered between the 25 and 28 minute marks:
WatchKit for watchOS 2.0 Session in WWDC 2015
Here is the source code implementation link
As stated in the HKWorkout Class Reference:
The HKWorkout class is a concrete subclass of the HKSample class. HealthKit uses workouts to track a wide range of activities. The workout object not only stores summary information about the activity (for example, duration, total distance, and total energy burned), it also acts as a container for other samples. You can associate any number of samples with a workout. In this way, you can add detailed information relevant to the workout.
In that given link, the following part of the code defines the sample rate of the heart rate:
NSMutableArray *samples = [NSMutableArray array];
HKQuantity *heartRateForInterval =
[HKQuantity quantityWithUnit:[HKUnit unitFromString:@"count/min"]
doubleValue:95.0];
HKQuantitySample *heartRateForIntervalSample =
[HKQuantitySample quantitySampleWithType:heartRateType
quantity:heartRateForInterval
startDate:intervals[0]
endDate:intervals[1]];
[samples addObject:heartRateForIntervalSample];
As they state there:
You need to fine-tune the exact length of your associated samples based on the type of workout and the needs of your app. Using 5-minute intervals minimizes the amount of memory needed to store the workout, while still providing a general sense of the change in intensity over the course of a long workout. Using 5-second intervals provides a much more detailed view of the workout, but requires considerably more memory and processing.
After exploring HealthKit and the WatchKit extension, my findings are as follows:
We do not need the WatchKit extension to get the heart rate data.
You just need an iPhone with a paired Apple Watch (which is obvious).
The default Apple Watch heart rate monitor app updates the HealthKit data immediately only when it is in the foreground.
When the default heart rate monitor app is in the background, it updates the HealthKit data at an interval of 9-10 minutes.
To get the heart rate data from HealthKit, the following query needs to be fired periodically:
func getSamples() {
let healthStore = HKHealthStore()
let heartrate = HKQuantityType.quantityType(forIdentifier: .heartRate)
let sort: [NSSortDescriptor] = [
.init(key: HKSampleSortIdentifierStartDate, ascending: false)
]
let sampleQuery = HKSampleQuery(sampleType: heartrate!, predicate: nil, limit: 1, sortDescriptors: sort, resultsHandler: resultsHandler)
healthStore.execute(sampleQuery)
}
func resultsHandler(query: HKSampleQuery, results: [HKSample]?, error: Error?) {
guard error == nil else {
print("cant read heartRate data", error!)
return
}
guard let sample = results?.first as? HKQuantitySample else { return }
// let heartRateUnit: HKUnit = .init(from: "count/min")
// let doubleValue = sample.quantity.doubleValue(for: heartRateUnit)
print("heart rate is", sample)
}
Please update me if anyone gets more information.
Happy Coding.
Update
I've updated your code to be clear and general. Be aware that you need to get authorization for reading HealthKit data and add the Info.plist key Privacy - Health Records Usage Description.
There is no direct way to access any sensors on the Apple Watch. You will have to rely on access from HealthKit.
An Apple evangelist said this:
It is not possible to create a heart monitor app at this time. The data isn't guaranteed to be sent to iPhone in real-time, so you won't be able to determine what's going on in any timely fashion.
See https://devforums.apple.com/message/1098855#1098855
You can get heart rate data by starting a workout and querying the heart rate data from HealthKit.
Ask for permission to read workout data:
HKHealthStore *healthStore = [[HKHealthStore alloc] init];
HKQuantityType *type = [HKQuantityType quantityTypeForIdentifier:HKQuantityTypeIdentifierHeartRate];
HKQuantityType *type2 = [HKQuantityType quantityTypeForIdentifier:HKQuantityTypeIdentifierDistanceWalkingRunning];
HKQuantityType *type3 = [HKQuantityType quantityTypeForIdentifier:HKQuantityTypeIdentifierActiveEnergyBurned];
[healthStore requestAuthorizationToShareTypes:nil readTypes:[NSSet setWithObjects:type, type2, type3, nil] completion:^(BOOL success, NSError * _Nullable error) {
if (success) {
NSLog(@"health data request success");
}else{
NSLog(@"error %@", error);
}
}];
In the AppDelegate on the iPhone, respond to this request:
-(void)applicationShouldRequestHealthAuthorization:(UIApplication *)application{
[healthStore handleAuthorizationForExtensionWithCompletion:^(BOOL success, NSError * _Nullable error) {
if (success) {
NSLog(@"phone received HealthKit request");
}
}];
}
Then implement the HealthKit workout session delegate methods:
-(void)workoutSession:(HKWorkoutSession *)workoutSession didFailWithError:(NSError *)error{
NSLog(@"session error %@", error);
}
-(void)workoutSession:(HKWorkoutSession *)workoutSession didChangeToState:(HKWorkoutSessionState)toState fromState:(HKWorkoutSessionState)fromState date:(NSDate *)date{
dispatch_async(dispatch_get_main_queue(), ^{
switch (toState) {
case HKWorkoutSessionStateRunning:
//When the workout state is running, we will execute updateHeartbeat
[self updateHeartbeat:date];
NSLog(@"started workout");
break;
default:
break;
}
});
}
Now it's time to write [self updateHeartbeat:date]:
-(void)updateHeartbeat:(NSDate *)startDate{
//first, create a predicate and set the endDate and option to nil/none
NSPredicate *Predicate = [HKQuery predicateForSamplesWithStartDate:startDate endDate:nil options:HKQueryOptionNone];
//Then we create a sample type which is HKQuantityTypeIdentifierHeartRate
HKSampleType *object = [HKSampleType quantityTypeForIdentifier:HKQuantityTypeIdentifierHeartRate];
//ok, now, create a HKAnchoredObjectQuery with all the mess that we just created.
heartQuery = [[HKAnchoredObjectQuery alloc] initWithType:object predicate:Predicate anchor:0 limit:0 resultsHandler:^(HKAnchoredObjectQuery *query, NSArray<HKSample *> *sampleObjects, NSArray<HKDeletedObject *> *deletedObjects, HKQueryAnchor *newAnchor, NSError *error) {
if (!error && sampleObjects.count > 0) {
HKQuantitySample *sample = (HKQuantitySample *)[sampleObjects objectAtIndex:0];
HKQuantity *quantity = sample.quantity;
NSLog(@"%f", [quantity doubleValueForUnit:[HKUnit unitFromString:@"count/min"]]);
}else{
NSLog(@"query %@", error);
}
}];
//wait, it's not over yet, this is the update handler
[heartQuery setUpdateHandler:^(HKAnchoredObjectQuery *query, NSArray<HKSample *> *SampleArray, NSArray<HKDeletedObject *> *deletedObjects, HKQueryAnchor *Anchor, NSError *error) {
if (!error && SampleArray.count > 0) {
HKQuantitySample *sample = (HKQuantitySample *)[SampleArray objectAtIndex:0];
HKQuantity *quantity = sample.quantity;
NSLog(@"%f", [quantity doubleValueForUnit:[HKUnit unitFromString:@"count/min"]]);
}else{
NSLog(@"query %@", error);
}
}];
//now execute the query and wait for the result to show up in the log. Yeah!
[healthStore executeQuery:heartQuery];
}
You also have to turn on HealthKit in Capabilities. Leave a comment below if you have any questions.

Implementing an IObservable to compute digits of Pi

This is an academic exercise; I'm new to Reactive Extensions and trying to get my head around the technology. I set myself the goal of making an IObservable that returns successive digits of Pi (I happen to be really interested in Pi right at the moment for unrelated reasons). Reactive Extensions contains operators for making observables, and the guidance given is that you should "almost never need to create your own IObservable". But I can't see how I can do this with the ready-made operators and methods. Let me elucidate a bit more.
I was planning to use an algorithm that would involve the expansion of a Taylor series for Arctan. To get the next digit of Pi, I'd expand a few more terms in the series.
So I need the series expansion going on asynchronously, occasionally throwing out the next computed digit to the IObserver. I obviously don't want to restart the computation from scratch for each new digit.
Is there a way to implement this behaviour using RX's built-in operators, or am I going to have to code an IObservable from scratch? What strategy suggests itself?
For something like this, the simplest method would be to use a Subject. Subject is both an IObservable and IObserver, which sounds a bit strange but it allows you to use them like this:
class PiCalculator
{
private readonly Subject<int> resultStream = new Subject<int>();
public IObservable<int> ResultStream
{
get { return resultStream; }
}
public void Start()
{
// Whatever the algorithm actually is
for (int i = 0; i < 1000; i++)
{
resultStream.OnNext(i);
}
}
}
So inside your algorithm, you just call OnNext on the subject whenever you want to produce the next value.
Then to use it, you just need something like:
var piCalculator = new PiCalculator();
piCalculator.ResultStream.Subscribe(n => Console.WriteLine((n)));
piCalculator.Start();
Simplest way is to create an Enumerable and then convert it:
IEnumerable<int> Pi()
{
// algorithm here
for (int i = 0; i < 1000; i++)
{
yield return i;
}
}
Usage (for a cold observable, that is every new 'subscription' starts creating Pi from scratch):
var cold = Pi().ToObservable(Scheduler.ThreadPool);
cold.Take(5).Subscribe(Console.WriteLine);
If you want to make it hot (everyone shares the same underlying calculation), you can just do this:
var hot = cold.Publish().RefCount();
Which will start the calculation after the first subscriber, and stop it when they all disconnect. Here's a simple test:
hot.Subscribe(p => Console.WriteLine("hot1: " + p));
Thread.Sleep(5);
hot.Subscribe(p => Console.WriteLine("hot2: " + p));
Which should show hot1 printing only for a little while, then hot2 joining in after a short wait but printing the same numbers. If this was done with cold, the two subscriptions would each start from 0.
