Thank you for taking the time to read my problem.
I'm currently using Firebase Firestore to retrieve a list of objects that I wish to display in the UI. I'm trying to use a suspend function to fold the accumulated values of a sequence of calls to the Firestore server, but at the moment I'm unable to pass the result value outside the scope of the coroutine.
This is my fold function:
suspend fun getFormattedList(): FirestoreState {
return foldFunctions(FirestoreModel(""), ::getMatchesFromBackend, ...., ....)
}
This is my custom fold function:
suspend fun foldFunctions(model: FirestoreModel,
vararg functions: suspend (FirestoreModel, SuccessData) -> FirestoreState): FirestoreState {
val successData: SuccessData = functions.fold(SuccessData()) { updatedSuccessData, function ->
val status = function(model, updatedSuccessData)
if (status !is FirestoreState.Continue) {
return status
}
updatedSuccessData <--- I managed to retrieve the list of values correctly here
}
val successModel = SuccessData()
successData.matchList?.let { successModel.matchList = it }
successData.usermatchList?.let { successModel.usermatchList = it }
successData.formattedList?.let { successModel.formattedList = it }
return FirestoreState.Success(successModel) <--- I cant event get to this line with debugger on
}
This is my first function (which is working fine):
suspend fun getMatchesFromBackend(model: FirestoreModel, successData: SuccessData): FirestoreState {
return try {
val querySnapshot: QuerySnapshot? = db.collection("matches").get().await()
querySnapshot?.toObjects(Match::class.java).let { list ->
val matchList = mutableListOf<Match>()
list?.let {
for (document in it) {
matchList.add(Match(document.away_score,
document.away_team,
document.date,
document.home_score,
document.home_team,
document.match_id,
document.matchpoints,
document.played,
document.round,
document.tournament))
}
successData.matchList = matchList <--- where list gets stored
}
}
FirestoreState.Continue
} catch (e : Exception){
when (e) {
is RuntimeException -> FirestoreState.MatchesFailure
is ConnectException -> FirestoreState.MatchesFailure
is CancellationException -> FirestoreState.MatchesFailure
else -> FirestoreState.MatchesFailure
}
}
}
My hypothesis is that the suspend fun gets cancelled and the continuation of the scope gets blocked. I have tried using runBlocking { }, to no avail. If someone has an idea of how to circumvent this issue, I'd be very grateful.
A Firebase method runs on a worker thread automatically, but I have used coroutines and callbackFlow to implement my Firebase listener code synchronously, or to get a return value from the listener.
Below is the code I'm describing.
Coroutine await() with Firebase for a one-shot read:
override suspend fun checkNickName(nickName: String): Results<Int> {
lateinit var result : Results<Int>
fireStore.collection("database")
.document("user")
.get()
.addOnCompleteListener { document ->
if (document.isSuccessful) {
val list = document.result.data?.get("nickNameList") as List<String>
if (list.contains(nickName))
result = Results.Exist(1)
else
result = Results.No(0)
//document.getResult().get("nickNameList")
}
else {
}
}.await()
return result
}
callbackFlow with a Firebase listener:
override fun getOwnUser(): Flow<UserEntity> = callbackFlow{
val document = fireStore.collection("database/user/userList/")
.document("test!!!!!")
val subscription = document.addSnapshotListener { snapshot,_ ->
if (snapshot!!.exists()) {
val ownUser = snapshot.toObject<UserEntity>()
if (ownUser != null) {
trySend(ownUser)
}
}
}
awaitClose { subscription.remove() }
}
So I really wonder whether these approaches are good or bad practice, and why.
Do not combine addOnCompleteListener with coroutines await(). There is no guarantee that the listener gets called before or after await(), so it is possible the code in the listener won't be called until after the whole suspend function returns. Also, one of the major reasons to use coroutines in the first place is to avoid using callbacks. So your first function should look like:
override suspend fun checkNickName(nickName: String): Results<Int> {
try {
val userList = fireStore.collection("database")
.document("user")
.get()
.await()
.get("nickNameList") as List<String>
return if (userList.contains(nickName)) Results.Exist(1) else Results.No(0)
} catch (e: Exception) {
// return a failure result here
}
}
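The catch branch above still needs to return something for the function to compile. The question only shows the Exist and No variants of Results, so this is just a minimal sketch assuming a hypothetical Results.Error variant is added for failures:
    } catch (e: Exception) {
        // Results.Error is a hypothetical failure variant, not part of the Results type shown in the question
        return Results.Error(e)
    }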
Your use of callbackFlow looks fine, except you should add a buffer() call to the flow you're returning so you can specify how to handle backpressure. However, it's possible you will want to handle that downstream instead.
override fun getOwnUser(): Flow<UserEntity> = callbackFlow {
//...
}.buffer(/* Customize backpressure behavior here */)
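For illustration, here is one way that buffer() call could be configured (my example; the capacity and onBufferOverflow parameters are available in recent kotlinx.coroutines releases and need an import of kotlinx.coroutines.channels.BufferOverflow):
override fun getOwnUser(): Flow<UserEntity> = callbackFlow {
    //...
}.buffer(
    capacity = 64,                                 // keep up to 64 not-yet-collected snapshots
    onBufferOverflow = BufferOverflow.DROP_OLDEST  // drop stale snapshots if the collector is slow
)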
I'm trying to run a Firebase Transaction inside a suspend function in Kotlin, and I see no documentation about it.
I'm using
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.5.2'
for coroutines with Firebase (e.g. setValue(*).await()), but there seems to be no await() function for runTransaction(*).
override suspend fun modifyProductStock(
product: ProductModel,
valueToModify: Long,
replace: Boolean
) {
CoroutineScope(Dispatchers.Main).launch {
val restaurantId = authRepository.restaurantId.value ?: throw Exception("No restaurant!")
val productId = product.id ?: throw Exception("No Product ID!")
val reference = FirebaseDatabase.getInstance().getReference("database/$restaurantId").child("products")
if (replace) {
reference.child(productId).child("stock").setValue(valueToModify).await()
} else {
reference.child(productId).child("stock")
.runTransaction(object : Transaction.Handler {
override fun doTransaction(p0: MutableData): Transaction.Result {
//any operation
return Transaction.success(p0)
}
override fun onComplete(p0: DatabaseError?, p1: Boolean, p2: DataSnapshot?) {
}
})
}
}
}
You could wrap it in suspendCoroutine:
val result: DataSnapshot? = suspendCoroutine { c ->
reference.child(productId).child("stock")
.runTransaction(object : Transaction.Handler {
override fun doTransaction(p0: MutableData): Transaction.Result {
//any operation
return Transaction.success(p0)
}
override fun onComplete(error: DatabaseError?, p1: Boolean, snapshot: DataSnapshot?) {
c.resume(snapshot)
}
})
}
suspendCoroutine
Obtains the current continuation instance inside suspend functions and suspends the currently running coroutine.
In this function both Continuation.resume and Continuation.resumeWithException can be used either synchronously in the same stack-frame where the suspension function is run or asynchronously later in the same thread or from a different thread of execution.
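Building on that, if you also want a failed transaction to surface as an exception to the caller, the error passed to onComplete can be forwarded with resumeWithException. A minimal sketch under that assumption (it relies on DatabaseError.toException() from the Realtime Database SDK and the kotlin.coroutines.resume / resumeWithException extensions):
val result: DataSnapshot? = suspendCoroutine { c ->
    reference.child(productId).child("stock")
        .runTransaction(object : Transaction.Handler {
            override fun doTransaction(currentData: MutableData): Transaction.Result {
                //any operation
                return Transaction.success(currentData)
            }
            override fun onComplete(error: DatabaseError?, committed: Boolean, snapshot: DataSnapshot?) {
                if (error != null) {
                    c.resumeWithException(error.toException()) // propagate the failure to the suspending caller
                } else {
                    c.resume(snapshot)                         // resume with the final snapshot
                }
            }
        })
}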
Given that the Kotlin example in the Firebase documentation on transactions uses the same callback style that you have, it does seem that there is no specific support for coroutines there.
It might be worth posting an issue on the Android SDK repo to get it added, or to hear why it wasn't added.
Sorry if that title is not clear enough but I didn't know how to sum it up in one sentence.
I have a webservice that returns an ArrayList of objects named Father.
The Father object is structured like this:
class Father {
ArrayList<Child> children;
}
I have another webservice that returns me the detail of the object Child.
How can I chain the first call, which returns the ArrayList of Father, with the multiple calls for the individual Child objects?
So far I can make the calls separately, like this:
Call for ArrayList of Father
myRepository.getFathers().subscribeOn(Schedulers.io())
.observeOn(Schedulers.io()).subscribeWith(new DisposableSingleObserver<List<Father>>() {
})
Multiple calls for the ArrayList of Child
childListObservable
.subscribeOn(Schedulers.io())
.observeOn(Schedulers.io())
.flatMap((Function<List<Child>, ObservableSource<Child>>) Observable::fromIterable)
.flatMap((Function<Child, ObservableSource<Child>>) this::getChildDetailObservable)
.subscribeWith(new DisposableObserver<Child>() {
// do whatever action after the result of each Child
}))
Prerequisite
Gradle
implementation("io.reactivex.rxjava2:rxjava:2.2.10")
testImplementation("io.mockk:mockk:1.10.0")
testImplementation("org.assertj:assertj-core:3.11.1")
testImplementation("org.junit.jupiter:junit-jupiter-api:5.3.1")
testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.3.1")
Classes / Interfaces
interface Api {
fun getFather(): Single<List<Father>>
fun childDetailInfo(child: Child): Single<ChildDetailInfo>
}
interface Store {
fun store(father: Father): Completable
fun store(child: ChildDetailInfo): Completable
}
class ApiImpl : Api {
override fun getFather(): Single<List<Father>> {
val child = Child("1")
val child1 = Child("2")
return Single.just(listOf(Father(listOf(child, child1)), Father(listOf(child))))
}
override fun childDetailInfo(child: Child): Single<ChildDetailInfo> {
return Single.just(ChildDetailInfo(child.name))
}
}
data class Father(val childes: List<Child>)
data class Child(val name: String)
data class ChildDetailInfo(val name: String)
Solution
val fathersStore = api.getFather()
.flatMapObservable {
Observable.fromIterable(it)
}.flatMapCompletable {
val detailInfos = it.childes.map { child ->
api.childDetailInfo(child)
.flatMapCompletable { detail -> store.store(detail) }
}
store.store(it)
.andThen(Completable.concat(detailInfos))
}
On each emission of a List of fathers, the list is flattened. The next operator (flatMapCompletable) takes a Father. The Completable gets the details of each Child with Api#childDetailInfo. The result is built by calling the API one by one; there is no concurrency happening because of concat. When the father is stored successfully, the children are stored as well, once they are retrieved successfully. If one of the API calls fails (e.g. network), everything fails, because the onError will be propagated to the subscriber.
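If the child-detail calls don't have to run strictly one after another, a small variation (my suggestion, not part of the answer above) is to use Completable.merge instead of Completable.concat, which subscribes to all of the detail Completables concurrently:
store.store(it)
    // merge subscribes to all child-detail Completables at once instead of one by one
    .andThen(Completable.merge(detailInfos))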
Test
@Test
fun so62299778() {
val api = ApiImpl()
val store = mockk<Store>()
every { store.store(any<Father>()) } returns Completable.complete()
every { store.store(any<ChildDetailInfo>()) } returns Completable.complete()
val fathersStore = api.getFather()
.flatMapObservable {
Observable.fromIterable(it)
}.flatMapCompletable {
val detailInfos = it.childes.map { child ->
api.childDetailInfo(child)
.flatMapCompletable { detail -> store.store(detail) }
}
store.store(it)
.andThen(Completable.concat(detailInfos))
}
fathersStore.test()
.assertComplete()
verify { store.store(eq(Father(listOf(Child("1"), Child("2"))))) }
verify { store.store(eq(Father(listOf(Child("1"))))) }
verify(atLeast = 2) { store.store(eq(ChildDetailInfo("1"))) }
verify(atLeast = 1) { store.store(eq(ChildDetailInfo("2"))) }
}
Next time, please provide some classes/interfaces. When your question contains all the vital information, you will get an answer more quickly.
Let's say I have a list of repos. I want to iterate through all of them, and as each repo returns its result, I want to pass it on.
val repos = listOf(repo1, repo2, repo3)
val deferredItems = mutableListOf<Deferred<List<result>>>()
repos.forEach { repo ->
deferredItems.add(async { getResult(repo) })
}
val results = mutableListOf<Any>()
deferredItems.forEach { deferredItem ->
results.add(deferredItem.await())
}
println("results :: $results")
In the above case, it waits for each repo to return its result. It fills the results in sequence: the result of repo1 followed by the result of repo2. If repo1 takes more time than repo2 to return its result, we will be waiting for repo1's result even though we already have repo2's.
Is there any way to pass the result of repo2 as soon as we have the result?
The Flow API supports this almost directly:
repos.asFlow()
.flatMapMerge { flow { emit(getResult(it)) } }
.collect { println(it) }
flatMapMerge applies the lambda you pass to it to each item to produce a Flow, then concurrently collects those Flows and sends their values downstream as soon as each becomes available.
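To make this concrete, here is a small self-contained sketch (my own example, with a hypothetical getResult that just simulates work; depending on your kotlinx.coroutines version, flatMapMerge may ask for an opt-in annotation):
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Hypothetical stand-in for a repository call; repo2 and repo3 finish before repo1 here.
suspend fun getResult(repo: String): String {
    delay(if (repo == "repo1") 300 else 100)
    return "result of $repo"
}

fun main() = runBlocking {
    listOf("repo1", "repo2", "repo3").asFlow()
        .flatMapMerge { repo -> flow { emit(getResult(repo)) } }
        // results are printed in completion order, not in list order
        .collect { println(it) }
}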
That's what channels are for:
val repos = listOf("repo1", "repo2", "repo3")
val results = Channel<Result>()
repos.forEach { repo ->
launch {
val res = getResult(repo)
results.send(res)
}
}
for (r in results) {
println(r)
}
This example is incomplete, as I don't close the channel, so the resulting code will be forever suspended. Make sure that in your real code you close the channel once all results are received:
val count = AtomicInteger()
for (r in results) {
println(r)
if (count.incrementAndGet() == repos.size) {
results.close()
}
}
You should use Channels.
suspend fun loadReposConcurrent() = coroutineScope {
val repos = listOf(repo1, repo2, repo3)
val channel = Channel<List<YourResultType>>()
for (repo in repos) {
launch {
val result = getResult(repo)
channel.send(result)
}
}
var allResults = emptyList<YourResultType>()
repeat(repos.size) {
val result = channel.receive()
allResults = allResults + result
println("results :: $result")
//updateUi(allResults)
}
}
In the code above, in the for (repo in repos) {...} loop, all the requests are computed in separate coroutines with launch, and as soon as each result is ready it is sent to the channel.
In repeat(repos.size) {...}, channel.receive() waits for new values from all the coroutines and consumes them.
I did some tests comparing the speed of using async as a way of deferring results against using CompletableDeferred in combination with a Job or with startCoroutine to do the same job.
In summary there are 3 use cases:
async with default type of start (right away) [async]
CompletableDeferred + launch (basically Job) [cdl]
CompletableDeferred + startCoroutine [ccdl]
results are presented here:
In short, every iteration of each use-case test generates 10000 async / cdl / ccdl requests and waits for them to complete. This is repeated 225 times, with 25 iterations as a warm-up (not included in the results), and data points are collected over 100 iterations of the process above (as min, max, avg).
Here is the code:
import com.source.log.log
import kotlinx.coroutines.*
import kotlin.coroutines.Continuation
import kotlin.coroutines.startCoroutine
import kotlin.system.measureNanoTime
import kotlin.system.measureTimeMillis
/**
* @project Bricks
* @author SourceOne on 28.11.2019
*/
/*I know that there are better ways to benchmark speed
* but given the produced results this method is fine enough
* */
fun benchmark(warmUp: Int, repeat: Int, action: suspend () -> Unit): Pair<List<Long>, List<Long>> {
val warmUpResults = List(warmUp) {
measureNanoTime {
runBlocking {
action()
}
}
}
val benchmarkResults = List(repeat) {
measureNanoTime {
runBlocking {
action()
}
}
}
return warmUpResults to benchmarkResults
}
/* find way to cancel startedCoroutine when deferred is
* canceled (currently you have to cancel whole context)
* */
fun <T> CoroutineScope.completable(provider: suspend () -> T): Deferred<T> {
return CompletableDeferred<T>().also { completable ->
provider.startCoroutine(
Continuation(coroutineContext) { result ->
completable.completeWith(result)
}
)
}
}
suspend fun calculateAsyncStep() = coroutineScope {
val list = List(10000) {
async { "i'm a robot" }
}
awaitAll(*list.toTypedArray())
}
suspend fun calculateCDLStep() = coroutineScope {
val list = List(10000) {
CompletableDeferred<String>().also {
launch {
it.complete("i'm a robot")
}
}
}
awaitAll(*list.toTypedArray())
}
suspend fun calculateCCDLStep() = coroutineScope {
val list = List(10000) {
completable { "i'm a robot" }
}
awaitAll(*list.toTypedArray())
}
fun main() {
val labels = listOf("async", "cdl", "ccdl")
val collectedResults = listOf(
mutableListOf<Pair<List<Long>, List<Long>>>(),
mutableListOf(),
mutableListOf()
)
"stabilizing runs".log()
repeat(2) {
println("async $it")
benchmark(warmUp = 25, repeat = 200) {
calculateAsyncStep()
}
println("CDL $it")
benchmark(warmUp = 25, repeat = 200) {
calculateCDLStep()
}
println("CCDL $it")
benchmark(warmUp = 25, repeat = 200) {
calculateCCDLStep()
}
}
"\n#Benchmark start".log()
val benchmarkTime = measureTimeMillis {
repeat(100) {
println("async $it")
collectedResults[0] += benchmark(warmUp = 25, repeat = 200) {
calculateAsyncStep()
}
println("CDL $it")
collectedResults[1] += benchmark(warmUp = 25, repeat = 200) {
calculateCDLStep()
}
println("CCDL $it")
collectedResults[2] += benchmark(warmUp = 25, repeat = 200) {
calculateCCDLStep()
}
}
}
"\n#Benchmark completed in ${benchmarkTime}ms".log()
"#Benchmark results:".log()
val minMaxAvg = collectedResults.map { stageResults ->
stageResults.map { (_, benchmark) ->
arrayOf(
benchmark.minBy { it }!!, benchmark.maxBy { it }!!, benchmark.average().toLong()
)
}
}
minMaxAvg.forEachIndexed { index, list ->
"results for: ${labels[index]} [min, max, avg]".log()
list.forEach { results ->
"${results[0]}\t${results[1]}\t${results[2]}".log()
}
}
}
It is no surprise that the first two use cases (async and cdl) are very close to each other and that async is always better (because you don't have the overhead of creating a Job to complete the deferred object), but comparing async vs CompletableDeferred + startCoroutine there is a huge gap between them (almost 2x) in favor of the latter. Why is there such a big difference, and, if anyone knows, why shouldn't we just be using a CompletableDeferred + startCoroutine wrapper (like completable() here) instead of async?
Addition1:
Here is a sample for 1000 points:
There are constant spikes in the async and cdl results and some in ccdl (maybe GC?), but there are still far fewer with ccdl. I will rerun these tests with the order of the interleaved tests changed, but it seems to be related to something inside the coroutines machinery.
Edit1:
I've accepted Marko Topolnik's answer, but in addition to it, you can still use this (as he called it) bare launch method if you await the result within the scope in which you launched it.
For example, if you launch a few deferred coroutines (async) and await them all at the end of that scope, then the ccdl method will work as expected (at least from what I've seen in my tests).
Since launch and async are built as a layer on top of the low-level primitive createCoroutineUnintercepted(), whereas startCoroutine is practically a direct call into it, there aren't any surprises in your benchmark results.
why shouldn't we just be using CompletableDeferred + startCoroutine wrapper (like completable() here) instead of async?
A comment in your code already hints to the answer:
/*
* find way to cancel startedCoroutine when deferred is
* canceled (currently you have to cancel whole context)
*/
The layer you short-circuited with startCoroutine is precisely the layer that handles things such as cancellation, coroutine hierarchy, exception handling and propagation, and so on.
Here's a simple example that shows you one of the things that break when you replace launch with a bare coroutine:
fun main() = runBlocking {
bareLaunch {
try {
delay(1000)
println("Coroutine done")
} catch (e: CancellationException) {
println("Coroutine cancelled, the exception is: $e")
}
}
delay(10)
}
fun CoroutineScope.bareLaunch(block: suspend () -> Unit) =
block.startCoroutine(Continuation(coroutineContext) { Unit })
fun <T> CoroutineScope.bareAsync(block: suspend () -> T) =
CompletableDeferred<T>().also { deferred ->
block.startCoroutine(Continuation(coroutineContext) { result ->
result.exceptionOrNull()?.also {
deferred.completeExceptionally(it)
} ?: run {
deferred.complete(result.getOrThrow())
}
})
}
When you run this, you'll see the bare coroutine got cancelled after 10 milliseconds. The runBlocking builder didn't realize it had to wait for it to complete. If you replace bareLaunch { with launch {, you'll restore the designed behavior where the child coroutine completes normally. The same thing happens with bareAsync.