Escaping closure setting views in DispatchQueue.main.async Swift 3 - asynchronous

I'm dealing with some asynchronous functions and trying to update views. In short, function 1 makes an asynchronous call that returns a string, which is then passed to function 2; both functions update views on the main thread. It all works, but I need to understand whether this is the correct way.
class A {
    var varA = ""
    var varB = ""

    func f1(_ completion: @escaping (String) -> Void) {
        someAsynchronousCall { _ in // placeholder for the real async API
            // ...
            DispatchQueue.main.async {
                self.varA = "something"
                self.labelA.text = self.varA
                completion(self.varA)
            }
        }
    }

    func f2(_ string: String) {
        anotherAsynchronousCall { _ in // placeholder
            // ...
            DispatchQueue.main.async {
                self.varB = string
                self.labelB.text = self.varB
            }
        }
    }

    // function call
    f1(completion: f2)
}
Three questions:
1) What is the right way to run a dependent function when it has to wait for an asynchronous callback?
2) Is DispatchQueue.main.async needed to update views?
3) Is it OK to call an async function inside another async callback? Isn't there a chance self may be nil in some cases if you are updating views in an escaping closure?

I'm going to try to help with each of your questions:
Question 1) There are many right ways and each developer has their own style, but in this case what I personally would do is something like this:
class A {
    func f1(_ completion: @escaping (String) -> Void) {
        someAsynchronousCall { _ in // placeholder for the real async API
            // ...
            DispatchQueue.main.async { [weak self] in // 1
                guard let strongSelf = self else { return } // 2
                let varA = "something" // 3
                strongSelf.labelA.text = varA
                completion(varA) // 4
            }
        }
    }

    func f2(_ string: String) {
        anotherAsynchronousCall { _ in // placeholder
            // ...
            DispatchQueue.main.async { [weak self] in
                self?.labelB.text = string // 5
            }
        }
    }

    // function call
    // 6
    f1(completion: { [weak self] text in
        guard let strongSelf = self else { return }
        strongSelf.f2(text)
    })
}
1 - Here I'm using [weak self] to avoid retain cycles.
2 - Just unwrapping the optional self; if it's nil, I'll just return.
3 - In your case it's not really necessary to have class variables, so I'm just creating local variables inside the block.
4 - Finally, I'm calling the completion with the variable containing the string.
5 - I also don't really need to set a class variable here, so I'm just updating the label text with the string provided as a parameter.
6 - Then I just need to call the first function and use the completion block to call the second after the first one completes.
Question 2) Yes, you must dispatch to DispatchQueue.main to update views. This way you're making sure your code executes on the main thread, which is crucial for anything interacting with the UI because it gives us a synchronization point, as you can read in Apple's documentation.
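As a minimal sketch of that rule (the URL, the URLSession call, and the statusLabel outlet are placeholder names, not from the question's code), a completion handler that arrives on a background queue should hop to the main queue before touching a view:
import UIKit

final class ViewController: UIViewController {
    @IBOutlet private weak var statusLabel: UILabel! // hypothetical outlet

    func refresh() {
        let url = URL(string: "https://example.com")! // placeholder URL
        // URLSession runs its completion handler on a background queue by default...
        URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
            // ...so hop back to the main queue before touching any UIView.
            DispatchQueue.main.async {
                self?.statusLabel.text = (data != nil) ? "Loaded" : "Failed"
            }
        }.resume()
    }
}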
Question 3) By using [weak self] and guard let strongSelf = self else { return }, I'm avoiding retain cycles and handling the cases where self can be nil.
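To make the case where self can be nil concrete: if the object that started the call is deallocated while the asynchronous work is still in flight (say, the user leaves the screen), the weak capture turns the callback into a harmless no-op instead of keeping the object alive. A contrived sketch (the Loader type and the 2-second delay are made up for illustration):
import Foundation

final class Loader {
    var status = "idle"

    func load(_ completion: @escaping (String) -> Void) {
        // Stand-in for a slow network call.
        DispatchQueue.global().asyncAfter(deadline: .now() + 2) { [weak self] in
            guard let strongSelf = self else { return } // Loader was deallocated: bail out
            DispatchQueue.main.async {
                strongSelf.status = "loaded"
                completion(strongSelf.status)
            }
        }
    }
}

var loader: Loader? = Loader()
loader?.load { print($0) }
loader = nil // the callback fires 2 seconds later, finds self == nil, and does nothing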

Related

Parallel work stealing in arbitrary order in Rust

I'm trying to write a parallel data loader for deep learning in Rust. The task is to write an iterator that under the hood does the following:
1. Reads files from disk and applies some compute-heavy preprocessing to them; the result is generally a numeric array (or several).
2. Groups the results of the previous step into batches of size B and "collates" them - this generally means just concatenating the arrays - which is moderately compute-heavy.
3. Yields the results from step 2.
Step 1 can be both IO- and compute-bound, depending on network latency, file sizes, and the complexity of preprocessing. It has to be run in parallel by many workers. Step 2 should be off the main thread but likely doesn't need a pool of workers. Step 3 happens on the main thread (exposed to Python).
The reason I'm writing it in Rust is that Python offers two options: a pure-Python implementation shipped with PyTorch, based on multiprocessing, which is somewhat slow but very flexible (arbitrary user-defined data preprocessing and batching), and a C++ implementation shipped with Tensorflow, which is assembled by the user from a set of predefined primitives. The latter is substantially faster but too restrictive for the kinds of data processing I wish to do. I expect that Rust will give me the speed of Tensorflow with the flexibility of arbitrary code, as in PyTorch.
My question is purely about the way to implement parallelism. The ideal setup is to have N workers for step 1 -> channel -> worker for step 2 -> channel -> step 3. Because the iterator object may be dropped at any time, there is a strict requirement to be able to terminate the whole scheme after Drop. On the other hand, there is the flexibility of loading the files in arbitrary order: for example, if the batch size B == 16 and max_n_threads == 32, it is perfectly fine to start 32 workers and yield a first batch containing whichever 16 examples happen to return first. This can be exploited for speed.
My naive implementation creates the DataLoader in 3 steps:
1. Create an n_working: Arc<AtomicUsize> to control the number of active worker threads and a should_shutdown: Arc<AtomicBool> to signal shutdown (when Drop is called).
2. Create a thread responsible for maintaining the pool. It spins on n_working < max_n_threads and keeps spawning worker threads, which terminate on should_shutdown and otherwise fetch a single example, send it down the worker->batcher channel, and decrement n_working.
3. Create a batching thread which polls the worker->batcher channel and, upon receiving B objects, concatenates them into a batch and sends it down the batcher->yielder channel.
#[pyclass]
struct DataLoader {
    collate_worker: Option<thread::JoinHandle<()>>,
    example_worker: Option<thread::JoinHandle<()>>,
    should_shut_down: Arc<AtomicBool>,
    receiver: Receiver<Batch>,
    length: usize,
}

impl DataLoader {
    fn new(
        dataset: Dataset,
        batch_size: usize,
        capacity: usize,
    ) -> Self {
        let n_batches = dataset.len() / batch_size;
        let max_n_threads = capacity * batch_size;
        let (example_sender, collate_receiver) = bounded((batch_size - 1) * capacity);
        let should_shut_down = Arc::new(AtomicBool::new(false));
        let shutdown_flag = should_shut_down.clone();
        let example_worker = thread::spawn(move || {
            rayon::scope_fifo(|s| {
                let dataset = &dataset;
                let n_working = Arc::new(AtomicUsize::new(0));
                let mut current_index = 0;
                while current_index < n_batches * batch_size {
                    if n_working.load(Ordering::Relaxed) == max_n_threads {
                        continue;
                    }
                    if shutdown_flag.load(Ordering::Relaxed) {
                        break;
                    }
                    let index = current_index;
                    let sender = example_sender.clone();
                    let counter = n_working.clone();
                    let shutdown_flag = shutdown_flag.clone();
                    s.spawn_fifo(move |_s| {
                        let example = dataset.get_example(index);
                        if !shutdown_flag.load(Ordering::Relaxed) {
                            let _ = sender.send(example);
                        } // if we should shut down, skip sending
                        counter.fetch_sub(1, Ordering::Relaxed);
                    });
                    current_index += 1;
                    n_working.fetch_add(1, Ordering::Relaxed);
                }
            });
        });
        let (batch_sender, final_receiver) = bounded(capacity);
        let shutdown_flag = should_shut_down.clone();
        let collate_worker = thread::spawn(move || {
            'outer: loop {
                let mut batch = vec![];
                for _ in 0..batch_size {
                    if let Ok(example) = collate_receiver.recv() {
                        batch.push(example);
                    } else {
                        break 'outer;
                    }
                }
                let collated = collate(batch);
                if shutdown_flag.load(Ordering::Relaxed) {
                    break; // skip sending
                }
                let _ = batch_sender.send(collated);
            }
        });
        Self {
            collate_worker: Some(collate_worker),
            example_worker: Some(example_worker),
            should_shut_down,
            receiver: final_receiver,
            length: n_batches,
        }
    }
}

#[pymethods]
impl DataLoader {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> { slf }
    fn __next__(&mut self) -> Option<Batch> {
        self.receiver.recv().ok()
    }
    fn __len__(&self) -> usize {
        self.length
    }
}

impl Drop for DataLoader {
    fn drop(&mut self) {
        self.should_shut_down.store(true, Ordering::Relaxed);
        if self.collate_worker.take().unwrap().join().is_err() {
            println!("Panic in collate worker");
        }
        if self.example_worker.take().unwrap().join().is_err() {
            println!("Panic in example_worker");
        }
        println!("dropped the dataloader");
    }
}
This implementation works and roughly matches the performance of PyTorch but provides no significant speedup. I don't know where to look for improvements, but I imagine it would help to have the thing load-balance automatically in a work-stealing way and to flexibly spawn workers depending on the proportion of IO and compute time. I am also expecting performance issues due to the spinning pool manager and likely corner cases in my handling of Drop.
My question is how best to approach the problem. I am generally unsure whether this should be tackled with parallel crates like rayon, async crates like tokio, or a mix of both. I also have a hunch that my implementation could be much simpler with the correct use of their combinators/higher-order APIs. I tried rayon, but I couldn't get a solution which doesn't wastefully enforce the original sequential return order while still respecting the Drop requirement.
Okay I think I've figured out a solution for you that uses rayon parallel iterators.
The trick is to use Results in the rayon iterators, and return Err if the cancellation flag is set.
I first created a utility type to create a cancellable thread in which you can execute rayon iterators. You use it by passing in the thread closure which takes the atomic cancellation token as a parameter. Then you have to check if the cancellation token is true, and if so, exit early.
use std::sync::Arc;
use std::sync::atomic::{Ordering, AtomicBool};
use std::thread::JoinHandle;

fn collate(batch: &[Computed]) -> Batch {
    batch.iter().map(|&x| i128::from(x)).sum()
}

#[derive(Debug)]
struct Cancelled;

struct CancellableThread<Output: Send + 'static> {
    cancel_token: Arc<AtomicBool>,
    thread: Option<JoinHandle<Result<Output, Cancelled>>>,
}

impl<Output: Send + 'static> CancellableThread<Output> {
    fn new<F: FnOnce(Arc<AtomicBool>) -> Result<Output, Cancelled> + Send + 'static>(init: F) -> Self {
        let cancel_token = Arc::new(AtomicBool::new(false));
        let thread_cancel_token = Arc::clone(&cancel_token);
        CancellableThread {
            thread: Some(std::thread::spawn(move || init(thread_cancel_token))),
            cancel_token,
        }
    }

    fn output(mut self) -> Output {
        self.thread.take().unwrap().join().unwrap().unwrap()
    }
}

impl<Output: Send + 'static> Drop for CancellableThread<Output> {
    fn drop(&mut self) {
        self.cancel_token.store(true, Ordering::Relaxed);
        if let Some(thread) = self.thread.take() {
            let _ = thread.join().unwrap();
        }
    }
}
I found it useful to create a closure that returns a Result<(), Cancelled> so I could use the try operator (?) to exit early.
CancellableThread::new(move |cancel_token| {
    let cancelled = || if cancel_token.load(Ordering::Relaxed) {
        Err(Cancelled)
    } else {
        Ok(())
    };
    loop {
        // was the thread dropped?
        // if so, stop what we're doing
        cancelled()?;
        // do stuff and
        // eventually return a result
    }
});
I then used that CancellableThread abstraction in the DataLoader. There is no need to create a special Drop impl for it, because by default drop is called on each field anyway, which handles the cancellation.
type Data = Vec<u8>;
type Dataset = Vec<Data>;
type Computed = u64;
type Batch = i128;

use rayon::prelude::*;
use crossbeam::channel::{unbounded, Receiver};

struct DataLoader {
    example_worker: CancellableThread<()>,
    collate_worker: CancellableThread<()>,
    receiver: Receiver<Batch>,
    length: usize,
}
I used unbounded channels, as they were one less thing to worry about. It shouldn't be hard to switch to bounded ones instead.
impl DataLoader {
    fn new(dataset: Dataset, batch_size: usize) -> Self {
        let (example_sender, collate_receiver) = unbounded();
        let (batch_sender, final_receiver) = unbounded();
I'm not sure if you can always guarantee that the number of items in your dataset will be a multiple of the batch_size, so I decided to handle that explicitly.
        let length = if dataset.len() % batch_size == 0 {
            dataset.len() / batch_size
        } else {
            dataset.len() / batch_size + 1
        };
I created the collating worker first, though that may not be necessary. As you can see, I had to duplicate a little bit to handle partial batches.
        let collate_worker = CancellableThread::new(move |cancel_token| {
            let cancelled = || if cancel_token.load(Ordering::Relaxed) {
                Err(Cancelled)
            } else {
                Ok(())
            };
            'outer: loop {
                let mut batch = Vec::with_capacity(batch_size);
                for _ in 0..batch_size {
                    cancelled()?;
                    if let Ok(data) = collate_receiver.recv() {
                        batch.push(data);
                    } else {
                        if !batch.is_empty() {
                            // handle the last batch, if there
                            // weren't enough items to fill it
                            let collated = collate(&batch);
                            cancelled()?;
                            batch_sender.send(collated).unwrap();
                        }
                        break 'outer;
                    }
                }
                let collated = collate(&batch);
                cancelled()?;
                batch_sender.send(collated).unwrap();
            }
            Ok(())
        });
The example worker is where things are really made much simpler, because we can just use rayon parallel iterators. As you can see, we check for cancellation before each heavy computation.
        let example_worker = CancellableThread::new(move |cancel_token| {
            let cancelled = || if cancel_token.load(Ordering::Relaxed) {
                Err(Cancelled)
            } else {
                Ok(())
            };
            let heavy_compute = |data: Data| -> Result<Computed, Cancelled> {
                cancelled()?;
                Ok(data.iter().map(|&x| u64::from(x)).product())
            };
            dataset
                .into_par_iter()
                .map(heavy_compute)
                .try_for_each(|computed| {
                    example_sender.send(computed?).unwrap();
                    Ok(())
                })
        });
Then we just construct the DataLoader. You can see the Python-facing impl is identical to yours:
        DataLoader {
            example_worker,
            collate_worker,
            receiver: final_receiver,
            length,
        }
    }
}
// #[pymethods]
impl DataLoader {
    fn __iter__(this: Self /* PyRef<Self> */) -> Self /* PyRef<Self> */ { this }
    fn __next__(&mut self) -> Option<Batch> {
        self.receiver.recv().ok()
    }
    fn __len__(&self) -> usize {
        self.length
    }
}
playground

firestore, coroutine and flow

Firebase methods work on a worker thread automatically, but I have used coroutines and callbackFlow to implement the Firebase listener code synchronously, or to get a return value from the listener.
Below is the code I'm describing.
Coroutine await with Firebase, for a one-shot read:
override suspend fun checkNickName(nickName: String): Results<Int> {
    lateinit var result: Results<Int>
    fireStore.collection("database")
        .document("user")
        .get()
        .addOnCompleteListener { document ->
            if (document.isSuccessful) {
                val list = document.result.data?.get("nickNameList") as List<String>
                if (list.contains(nickName))
                    result = Results.Exist(1)
                else
                    result = Results.No(0)
                //document.getResult().get("nickNameList")
            } else {
            }
        }.await()
    return result
}
callbackFlow with a Firebase listener:
override fun getOwnUser(): Flow<UserEntity> = callbackFlow {
    val document = fireStore.collection("database/user/userList/")
        .document("test!!!!!")
    val subscription = document.addSnapshotListener { snapshot, _ ->
        if (snapshot!!.exists()) {
            val ownUser = snapshot.toObject<UserEntity>()
            if (ownUser != null) {
                trySend(ownUser)
            }
        }
    }
    awaitClose { subscription.remove() }
}
So I really wonder whether these approaches are good or bad practice, and why.
Do not combine addOnCompleteListener with coroutines' await(). There is no guarantee that the listener gets called before or after await(), so it is possible the code in the listener won't run until after the whole suspend function returns. Also, one of the major reasons to use coroutines in the first place is to avoid callbacks. So your first function should look like:
override suspend fun checkNickName(nickName: String): Results<Int> {
    try {
        val userList = fireStore.collection("database")
            .document("user")
            .get()
            .await()
            .get("nickNameList") as List<String>
        return if (userList.contains(nickName)) Results.Exist(1) else Results.No(0)
    } catch (e: Exception) {
        // return a failure result here
    }
}
Your use of callbackFlow looks fine, except you should add a buffer() call to the flow you're returning so you can specify how to handle backpressure. However, it's possible you will want to handle that downstream instead.
override fun getOwnUser(): Flow<UserEntity> = callbackFlow {
    //...
}.buffer(/* Customize backpressure behavior here */)

Unable to change pathBuff/path variable in async function

I was unsure whether I should post this here or on Code Review; Code Review seems to accept only functioning code. So I have a multitude of problems I don't really understand (I'm a noob). The full code can be found here: https://github.com/NicTanghe/winder/blob/main/src/main.rs
The main problem is here:
let temp = location_loc1.parent().unwrap();
location_loc1.push(&temp);
I've tried various things to get around the problems with borrowing as mutable or as a reference, and I can't seem to get it to work; I just get a different set of errors with everything I try. Furthermore, I'm sorry if this is a duplicate, but looking up solutions to the individual errors just gave me different errors, in a circle.
Full function
async fn print_events(mut selector_loc1: i8, location_loc1: PathBuf) {
    let mut reader = EventStream::new();
    loop {
        //let delay = Delay::new(Duration::from_millis(1_000)).fuse();
        let mut event = reader.next().fuse();
        select! {
            // _ = delay => {
            //     print!("{esc}[2J{esc}[1;1H{}", esc = 27 as char,);
            // },
            maybe_event = event => {
                match maybe_event {
                    Some(Ok(event)) => {
                        //println!("Event::{:?}\r", event);
                        // if event == Event::Mouse(MouseEvent::Up("Left").into()) {
                        //     println!("Cursor position: {:?}\r", position());
                        // }
                        print!("{esc}[2J{esc}[1;1H{}", esc = 27 as char,);
                        if event == Event::Key(KeyCode::Char('k').into()) {
                            if selector_loc1 > 0 {
                                selector_loc1 -= 1;
                            };
                            //println!("go down");
                            //println!("{}",selected)
                        } else if event == Event::Key(KeyCode::Char('j').into()) {
                            selector_loc1 += 1;
                            //println!("go up");
                            //println!("{}",selected)
                        } else if event == Event::Key(KeyCode::Char('h').into()) {
                            //-----------------------------------------
                            //-------------BackLogic-------------------
                            //-----------------------------------------
                            let temp = location_loc1.parent().unwrap();
                            location_loc1.push(&temp);
                            //------------------------------------------
                            //------------------------------------------
                        } else if event == Event::Key(KeyCode::Char('l').into()) {
                            //go to next dir
                        }
                        if event == Event::Key(KeyCode::Esc.into()) {
                            break;
                        }
                        printtype(location_loc1, selector_loc1);
                    }
                    Some(Err(e)) => println!("Error: {:?}\r", e),
                    None => break,
                }
            }
        };
    }
}
Also, it seems that using
use async_std::path::{Path, PathBuf};
makes Rust not recognize the unwrap() function - how would I deal with that?
There are two problems with your code.
Your PathBuf is immutable. It's not possible to modify immutable objects unless they support interior mutability, and PathBuf does not. Therefore you have to make your variable mutable. You can either add mut in front of it, like this:
async fn print_events(mut selector_loc1:i8, mut location_loc1: PathBuf) {
Or you can re-bind it:
let mut location_loc1 = location_loc1;
You cannot borrow it both mutably and immutably: mutable borrows are exclusive! Given that the method .parent() borrows the buffer, you have to create a temporary owned value:
// the PathBuf instance
let mut path = PathBuf::from("root/parent/child");
// notice the .map(|p| p.to_owned()) method - it helps us avoid the immutable borrow
let parent = path.parent().map(|p| p.to_owned()).unwrap();
// now it's fine to modify it, as it's not borrowed
path.push(parent);
Your second question:
Also, it seems that using use async_std::path::{Path, PathBuf}; makes Rust not recognize the unwrap() function - how would I deal with that?
The async-std version is just a wrapper over std's PathBuf. It just delegates to the standard implementation, so it should not behave differently:
// copied from async-std's PathBuf implementation
pub struct PathBuf {
    inner: std::path::PathBuf,
}

Issue with coroutines fold function and Firestore

Thank you for taking the time to read my problem.
I'm currently using Firebase Firestore to retrieve a list of objects that I wish to display in the UI. I'm trying to use a suspend function to fold the accumulated values of a sequence of calls to the Firestore server, but at the moment I'm unable to pass the result value outside the scope of the coroutine.
This is my fold function:
suspend fun getFormattedList(): FirestoreState {
    return foldFunctions(FirestoreModel(""), ::getMatchesFromBackend, ...., ....)
}
This is my custom fold function:
suspend fun foldFunctions(model: FirestoreModel,
                          vararg functions: suspend (FirestoreModel, SuccessData) -> FirestoreState): FirestoreState {
    val successData: SuccessData = functions.fold(SuccessData()) { updatedSuccessData, function ->
        val status = function(model, updatedSuccessData)
        if (status !is FirestoreState.Continue) {
            return status
        }
        updatedSuccessData // <-- I managed to retrieve the list of values correctly here
    }
    val successModel = SuccessData()
    successData.matchList?.let { successModel.matchList = it }
    successData.usermatchList?.let { successModel.usermatchList = it }
    successData.formattedList?.let { successModel.formattedList = it }
    return FirestoreState.Success(successModel) // <-- I can't even get to this line with the debugger on
}
This is my first function (which is working fine)
suspend fun getMatchesFromBackend(model: FirestoreModel, successData: SuccessData): FirestoreState {
    return try {
        val querySnapshot: QuerySnapshot? = db.collection("matches").get().await()
        querySnapshot?.toObjects(Match::class.java).let { list ->
            val matchList = mutableListOf<Match>()
            list?.let {
                for (document in it) {
                    matchList.add(Match(document.away_score,
                        document.away_team,
                        document.date,
                        document.home_score,
                        document.home_team,
                        document.match_id,
                        document.matchpoints,
                        document.played,
                        document.round,
                        document.tournament))
                }
                successData.matchList = matchList // <-- where the list gets stored
            }
        }
        FirestoreState.Continue
    } catch (e: Exception) {
        when (e) {
            is RuntimeException -> FirestoreState.MatchesFailure
            is ConnectException -> FirestoreState.MatchesFailure
            is CancellationException -> FirestoreState.MatchesFailure
            else -> FirestoreState.MatchesFailure
        }
    }
}
My hypothesis is that the suspend fun gets cancelled and the continuation of the scope gets blocked. I have tried to use runBlocking { } to no avail. If someone has an idea of how to circumvent this issue, I'd be very grateful.

Adding observer for KVO without pointers using Swift

In Objective-C, I would normally use something like this:
static NSString *kViewTransformChanged = @"view transform changed";
// or
static const void *kViewTransformChanged = &kViewTransformChanged;

[clearContentView addObserver:self
                   forKeyPath:@"transform"
                      options:NSKeyValueObservingOptionNew
                      context:&kViewTransformChanged];
I have two overloaded methods to choose from to add an observer for KVO with the only difference being the context argument:
clearContentView.addObserver(observer: NSObject?, forKeyPath: String?, options: NSKeyValueObservingOptions, context: CMutableVoidPointer)
clearContentView.addObserver(observer: NSObject?, forKeyPath: String?, options: NSKeyValueObservingOptions, kvoContext: KVOContext)
With Swift not using pointers, I'm not sure how to dereference a pointer to use the first method.
If I create my own KVOContext constant for use with the second method, I wind up with it asking for this:
let test:KVOContext = KVOContext.fromVoidContext(context: CMutableVoidPointer)
EDIT: What is the difference between CMutableVoidPointer and KVOContext? Can someone give me an example of how to use them both, and when I would use one over the other?
EDIT #2: A dev at Apple just posted this to the forums: KVOContext is going away; using a global reference as your context is the way to go right now.
There is now a technique officially recommended in the documentation, which is to create a private mutable variable and use its address as the context.
(Updated for Swift 3 on 2017-01-09)
// Set up non-zero-sized storage. We don't intend to mutate this variable,
// but it needs to be `var` so we can pass its address in as UnsafeMutablePointer.
private static var myContext = 0
// NOTE: `static` is not necessary if you want it to be a global variable

observee.addObserver(self, forKeyPath: …, options: [], context: &MyClass.myContext)

override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    if context == &MyClass.myContext {
        …
    }
    else {
        super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
    }
}
Now that KVOContext is gone in Xcode 6 beta 3, you can do the following. Define a global (i.e. not a class property) like so:
let myContext = UnsafePointer<()>()
Add an observer:
observee.addObserver(observer, forKeyPath: …, options: nil, context: myContext)
In the observer:
override func observeValueForKeyPath(keyPath: String!, ofObject object: AnyObject!, change: [NSObject : AnyObject]!, context: UnsafePointer<()>) {
    if context == myContext {
        …
    } else {
        super.observeValueForKeyPath(keyPath, ofObject: object, change: change, context: context)
    }
}
Swift 4 - observing contentSize change on UITableViewController popover to fix incorrect size
I had been searching for an answer on how to switch to block-based KVO because I was getting a SwiftLint warning, and it took piecing quite a few different answers together to get to the right solution. The SwiftLint warning:
Block Based KVO Violation: Prefer the new block based KVO API with keypaths when using Swift 3.2 or later. (block_based_kvo)
My use case was to present a popover attached to a button in a nav bar of a view controller and then resize the popover once it's showing; otherwise it would be too big and not fit the contents of the popover. The popover itself was a UITableViewController containing static cells, displayed via a Storyboard segue with style popover.
To setup the block based observer, you need the following code inside your popover UITableViewController:
// class-level variable to store the statusObserver
private var statusObserver: NSKeyValueObservation?

// Create the observer inside viewWillAppear
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    statusObserver = tableView.observe(\UITableView.contentSize, changeHandler: { [weak self] (theTableView, _) in
        self?.popoverPresentationController?.presentedViewController.preferredContentSize = theTableView.contentSize
    })
}

// Don't forget to remove the observer when the popover is dismissed.
override func viewDidDisappear(_ animated: Bool) {
    if let observer = statusObserver {
        observer.invalidate()
        statusObserver = nil
    }
    super.viewDidDisappear(animated)
}
I didn't need the previous value when the observer was triggered, so left out the options: [.new, .old] when creating the observer.
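For reference, if you did need the old value, the same observer might look like this (a sketch reusing the assumed tableView/statusObserver names from above):
statusObserver = tableView.observe(\UITableView.contentSize,
                                   options: [.new, .old],
                                   changeHandler: { [weak self] (theTableView, change) in
    // Ignore notifications where the size didn't actually change.
    guard change.newValue != change.oldValue else { return }
    self?.popoverPresentationController?.presentedViewController.preferredContentSize = theTableView.contentSize
})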
Update for Swift 4
A context is not required for the block-based observer function, and the existing #keyPath() syntax is replaced with smart key paths to achieve Swift type safety.
class EventObserverDemo {
    var statusObserver: NSKeyValueObservation?
    var objectToObserve: UIView?

    func registerAddObserver() {
        statusObserver = objectToObserve?.observe(\UIView.tag, options: [.new, .old], changeHandler: { [weak self] (player, change) in
            if let tag = change.newValue {
                // observed changed value; do the task here on change
            }
        })
    }

    func unregisterObserver() {
        if let sObserver = statusObserver {
            sObserver.invalidate()
            statusObserver = nil
        }
    }
}
Complete example using Swift:
//
// AppDelegate.swift
// Photos-MediaFramework-swift
//
// Created by Phurg on 11/11/16.
//
// Displays URLs for all photos in Photos Library
//
// @see http://stackoverflow.com/questions/30144547/programmatic-access-to-the-photos-library-on-mac-os-x-photokit-photos-framewo
//

import Cocoa
import MediaLibrary

// For KVO: https://developer.apple.com/library/content/documentation/Swift/Conceptual/BuildingCocoaApps/AdoptingCocoaDesignPatterns.html#//apple_ref/doc/uid/TP40014216-CH7-ID12
private var mediaLibraryLoaded = 1
private var rootMediaGroupLoaded = 2
private var mediaObjectsLoaded = 3

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {

    @IBOutlet weak var window: NSWindow!

    var mediaLibrary: MLMediaLibrary!
    var allPhotosAlbum: MLMediaGroup!

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        NSLog("applicationDidFinishLaunching:")
        let options: [String: Any] = [
            MLMediaLoadSourceTypesKey: MLMediaSourceType.image.rawValue, // Can't be Swift enum
            MLMediaLoadIncludeSourcesKey: [MLMediaSourcePhotosIdentifier], // Array
        ]
        self.mediaLibrary = MLMediaLibrary(options: options)
        NSLog("applicationDidFinishLaunching: mediaLibrary=%@", self.mediaLibrary)
        self.mediaLibrary.addObserver(self, forKeyPath: "mediaSources", options: [], context: &mediaLibraryLoaded)
        NSLog("applicationDidFinishLaunching: added mediaSources observer")
        // Force load
        self.mediaLibrary.mediaSources?[MLMediaSourcePhotosIdentifier]
        NSLog("applicationDidFinishLaunching: done")
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        NSLog("observeValue: keyPath=%@", keyPath!)
        let mediaSource: MLMediaSource = self.mediaLibrary.mediaSources![MLMediaSourcePhotosIdentifier]!
        if (context == &mediaLibraryLoaded) {
            NSLog("observeValue: mediaLibraryLoaded")
            mediaSource.addObserver(self, forKeyPath: "rootMediaGroup", options: [], context: &rootMediaGroupLoaded)
            // Force load
            mediaSource.rootMediaGroup
        } else if (context == &rootMediaGroupLoaded) {
            NSLog("observeValue: rootMediaGroupLoaded")
            let albums: MLMediaGroup = mediaSource.mediaGroup(forIdentifier: "TopLevelAlbums")!
            for album in albums.childGroups! {
                let albumIdentifier: String = album.attributes["identifier"] as! String
                if (albumIdentifier == "allPhotosAlbum") {
                    self.allPhotosAlbum = album
                    album.addObserver(self, forKeyPath: "mediaObjects", options: [], context: &mediaObjectsLoaded)
                    // Force load
                    album.mediaObjects
                }
            }
        } else if (context == &mediaObjectsLoaded) {
            NSLog("observeValue: mediaObjectsLoaded")
            let mediaObjects: [MLMediaObject] = self.allPhotosAlbum.mediaObjects!
            for mediaObject in mediaObjects {
                let url: URL? = mediaObject.url
                // URL does not extend NSObject, so can't be passed to NSLog; use string interpolation
                NSLog("%@", "\(url)")
            }
        }
    }
}
