Connect Objectify caching with Google Cloud Memorystore

I am trying to implement Objectify's MemcacheService interface backed by Google Cloud Memorystore.
I managed to implement the basic methods like put and get, but I have no idea how to implement
Set<String> putIfUntouched(final Map<String, CasPut> values);
and
Map<String, IdentifiableValue> getIdentifiables(final Collection<String> keys);
I cannot find a way to get the necessary CAS information from Memorystore.
I have already tried connecting Objectify to a running memcached instance, but I feel I do not have as much control as I would with Memorystore (in terms of monitoring, flushing, etc.).
Any tips on how to implement the MemcacheService are very much appreciated.
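For reference, the closest thing to CAS semantics I can see on the Redis side is optimistic locking with WATCH/MULTI/EXEC. Below is only a minimal sketch of that idea using the Jedis client; it does not implement Objectify's actual MemcacheService interface (the CasPut and IdentifiableValue types are glossed over), so all names are placeholders.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

import java.util.List;
import java.util.Objects;

public class RedisCasSketch {

    // Emulates "put if untouched": WATCH the key, verify the value we read earlier
    // is still the current one, then commit the SET inside MULTI/EXEC. If another
    // client touched the key in between, EXEC aborts and nothing is written.
    public static boolean putIfUntouched(Jedis jedis, String key,
                                         String expectedCurrent, String newValue) {
        jedis.watch(key);
        String actual = jedis.get(key);
        if (!Objects.equals(actual, expectedCurrent)) {
            jedis.unwatch();
            return false; // someone else already changed the value
        }
        Transaction tx = jedis.multi();
        tx.set(key, newValue);
        List<Object> result = tx.exec();
        return result != null && !result.isEmpty(); // aborted EXEC comes back null/empty
    }
}

Getting getIdentifiables on top of this is the part I am unsure about, since Redis has no per-key CAS token to hand back the way App Engine memcache does.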

Spring Kafka Non-Blocking retries

I have a batch @KafkaListener as follows:
@KafkaListener(
        topicPattern = "ProductTopic",
        containerFactory = "kafkaBatchListenerFactory")
public void onBatch(List<Message<String>> messages, Acknowledgment acknowledgment) {
    consume(messages); // goes to DB
    acknowledgment.acknowledge();
}
I also have 3 more topics created: ProductTopic.Retry-1, ProductTopic.Retry-2 and ProductTopic.Retry-DLT. The idea is to consume a batch of messages from ProductTopic and to do non-blocking exponential retries if the DB bulk insert fails. I would like to publish the messages to ProductTopic.Retry-# each time a retry fails, and finally send them to ProductTopic.Retry-DLT. Also, let's assume that because of some other limitations, I cannot let the framework create the retry and DLT topics for me.
What's the best approach for such a situation? Should I use RetryTopicConfigurer to configure such logic? How can I manually define the names of my retry and dead-letter topics? Should I create a @KafkaListener for each of the retry and DLT topics?
Or is the best approach to use RecoveringBatchErrorHandler?
Please share any examples and good practices on this. I came across lots of comments and support on such topics, but some of the comments are old now and as such relate to older versions of spring-kafka. I can see there are a few modern approaches to working with batch listeners, but I would also like to ask @Garry Russell and the team to point me in the right direction. Thanks!
The framework's non-blocking retry mechanism does not support batch listeners.
EDIT
The built-in infrastructure is strongly tied to the KafkaBackoffAwareMessageListenerAdapter; you would need to create a version of it that implements BatchAcknowledgingConsumerAwareMessageListener.
It should then be possible to wrap your existing listener with that, but you would also need a custom error handler to send the whole batch to the next retry topic.
It would not be trivial.
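For completeness, the batch-capable path that does exist is blocking retry plus a recoverer: the listener throws BatchListenerFailedException to tell the error handler which record in the batch failed, and the error handler republishes that record to a manually named topic. This is only a rough sketch, assuming spring-kafka 2.8+ (on earlier versions RecoveringBatchErrorHandler plays the same role) and reusing the topic names from your question:

import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

public class BatchRetryConfigSketch {

    public DefaultErrorHandler batchErrorHandler(KafkaTemplate<Object, Object> template) {
        // Publish the record that made the batch fail to a manually named topic,
        // keeping the original partition.
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
                template,
                (record, ex) -> new TopicPartition("ProductTopic.Retry-DLT", record.partition()));

        // Retry the failed batch twice (blocking) before handing it to the recoverer.
        return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    }
}

Wire it into your kafkaBatchListenerFactory with factory.setCommonErrorHandler(...); true non-blocking retry topics for a batch listener would still need the custom adapter described above.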

How do I check whether an e-mail exists in the Firebase database using Kotlin?

I have seen some answers to this question using a Java implementation; however, I wasn't able to convert it successfully. The Java implementation seems to use a try/catch approach to determine whether a FirebaseAuthUserCollisionException is thrown or not. But from researching online, it seems try/catch is obsolete in Kotlin.
FirebaseAuth.getInstance().createUserWithEmailAndPassword(email, password)
    .addOnCompleteListener { task: Task<AuthResult> ->
        if (!task.isSuccessful) {
            // TODO: detect whether the e-mail already exists
        }
    }
This is the only piece of code I have so far. I appreciate all help! Thank you in advance.
try/catch is not obsolete in Kotlin. It still works essentially the same way. It's just not very useful when working with the asynchronous APIs provided by Firebase and Play services.
The Firebase APIs are all asynchronous, so you need to check for errors in the callbacks that you attach to the Task object returned from the call. This is the same for both Java and Kotlin. You might want to learn more about the Task API in order to better deal with Firebase APIs that return a Task to represent asynchronous work. In the code you've shown, you check for errors in an onComplete listener by looking at the exception object in the result if it wasn't successful:
FirebaseAuth.getInstance().createUserWithEmailAndPassword(email, password)
    .addOnCompleteListener { task: Task<AuthResult> ->
        if (!task.isSuccessful) {
            val exception = task.exception
            // e.g. a FirebaseAuthUserCollisionException here means the e-mail is
            // already in use; handle it according to the Firebase Auth API docs
        }
    }

HttpClient seems so hard to use correctly

Just another question about the correct usage of HttpClient: unless I am missing something, I find contradictory information about HttpClient in the Microsoft docs. These two links are the source of my confusion:
https://learn.microsoft.com/en-us/azure/architecture/antipatterns/improper-instantiation/#how-to-fix-the-problem
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-3.1#typed-clients
The first one states that the best approach is a shared singleton HttpClient instance, while the second states that AddHttpClient<TypedClient>() registers the service as transient, and more specifically (copied from that URL):
The typed client is registered as transient with DI. In the preceding
code, AddHttpClient registers GitHubService as a transient service.
This registration uses a factory method to:
Create an instance of HttpClient.
Create an instance of GitHubService, passing in the instance of HttpClient to its constructor.
I was always using AddHttpClient<TypedClient>() and feeling safe, but now I am puzzled again... And making things worse, I found this GitHub issue comment by @rynowak, which states:
If you are building a library that you plan to distribute, I would
strongly suggest that you don't take a dependency on
IHttpClientFactory at all, and have your consumers pass in an
HttpClient instance.
Why is this important to me? Because I am in the process of creating a library that mainly does two things:
Retrieve an access token from a token service (IdentityServer4)
Use that token to access a protected resource
And I am following the typed clients approach described in the link 2 above:
//from https://github.com/georgekosmidis/IdentityServer4.Contrib.HttpClientService/blob/master/src/IdentityServer4.Contrib.HttpClientService/Extensions/ServiceCollectionExtensions.cs
services.AddHttpClient<IIdentityServerHttpClient, IdentityServerHttpClient>()
.SetHandlerLifetime(TimeSpan.FromMinutes(5));
Any advice or examples of what a concrete implementation based on HttpClient looks like will be very welcome.
Thank you!

Spinnaker custom clouddriver

I'm trying to use Spinnaker to deploy applications to Mesos/Marathon. As this cloud driver does not exist, I'm looking at coding it myself.
I looked at spinnaker-clouddriver and tried to get inspiration from the Azure, CF and Google ones, but I think I'm missing some information about how it is supposed to work.
Do you know of any documentation about contributing to spinnaker-clouddriver? Or could someone explain the steps to create my own custom driver?
Thanks.
So far I have created:
@Component
class MarathonCloudProvider implements CloudProvider
@Component
class MarathonApplicationProvider implements ApplicationProvider
But I really don't understand what to put in them.
Kubernetes has a nice commit stream (https://github.com/spinnaker/clouddriver/pulls?utf8=%E2%9C%93&q=kubernetes) that you can follow as an example.
This is the initial PR that introduced the cloud provider: https://github.com/spinnaker/clouddriver/pull/214/files
From there, you would need to implement all the operations and descriptions to fit your provider.
Essentially, to create a new cloud provider, you would need to do the following:
Sort out how you would map the concepts in your cloud provider to the Spinnaker concepts of Server Groups, Security Groups, Load Balancers and Jobs. Some cloud providers won't have all of these, but you would at the very least have the notion of a server group you would like to index.
Implement caching agents and providers to get an internal cache of your infrastructure. This is where you map the existing infrastructure to Spinnaker concepts.
Implement cloud operations (such as deploy, enable/disable); a rough sketch of one follows at the end of this answer.
Provide a UI.
Adding a new cloud provider is not really trivial; I wouldn't recommend it as an individual undertaking.
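To make step 3 a bit more concrete, here is a minimal sketch of what a single operation could look like. The AtomicOperation interface is clouddriver's, as far as I remember it; the description class and the Marathon call are purely hypothetical placeholders, so treat this as an assumption-laden outline rather than working provider code.

import com.netflix.spinnaker.clouddriver.orchestration.AtomicOperation;

import java.util.List;

// Hypothetical description object holding the user's deploy request.
class DeployMarathonDescription {
    String application;
    String dockerImage;
    int instances;
}

class DeployMarathonAtomicOperation implements AtomicOperation<Void> {

    private final DeployMarathonDescription description;

    DeployMarathonAtomicOperation(DeployMarathonDescription description) {
        this.description = description;
    }

    @Override
    public Void operate(List priorOutputs) {
        // Placeholder: call a Marathon client here to create/scale the app,
        // e.g. POST /v2/apps with the image and instance count from the description.
        return null;
    }
}

Each operation is normally paired with a converter and a description validator and registered for the provider; that wiring is where most of the work in the linked Kubernetes PR happens.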

Asynchronous Database Access Layer in PureMVC

I'm trying to refactor an existing project into PureMVC. This is an Adobe AIR desktop app taking advantage of the SQLite library included with AIR and building upon it with a few other libraries:
Paul Robertson's excellent async SQLRunner
promise-as3 implementation of asynchronous promises
websql-js documentation for good measure
I made my current implementation of the database similar to websql-js's promise-based SQL access layer and it works pretty well; however, I am struggling to see how it can work in PureMVC.
Currently, I have my VOs that will be paired with DAOs (data access objects) for database access. Where I'm stuck is how to track the dbFile and sqlRunner instances across the entire program. The DAOs will need to know about the sqlRunner, or at the very least, the dbFile. Should the sqlRunner be treated as singleton-esque? Or created for every database query?
Finally, how do I expose the dbFile or sqlRunner to the DAOs? In my head right now I see keeping these in a DatabaseProxy that would be exposed to other proxies and would instantiate DAOs when needed. What about a DAO factory pattern?
I'm very new to PureMVC but I really like the structure and separation of roles. Please don't hesitate to tell me if this implementation simply will not work.
Typically in PureMVC you would use a Proxy to fetch remote data and populate the VOs used by your View, so in that respect your proposed architecture sounds fine.
DAOs are not a pattern I've ever seen used in conjunction with PureMVC (which is not to say that nobody does or should). However, if I were setting out to write a CRUD application in PureMVC, I would probably think in terms of a Proxy (or proxies) to read information from the database, and Commands to write it back.
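To make that concrete, here is roughly the shape I would give your DatabaseProxy idea. It is sketched in Java for brevity rather than ActionScript and leaves out the PureMVC base classes; DatabaseProxy and sqlRunner come from your description, while the SqlRunner interface and ProductDao are invented placeholders, so treat it as a shape rather than a drop-in.

import java.util.List;
import java.util.concurrent.CompletableFuture;

// In a real PureMVC app this would extend the framework's Proxy class and be
// registered with the Facade; the base class is deliberately omitted here.
public class DatabaseProxy {

    public static final String NAME = "DatabaseProxy";

    private final SqlRunner sqlRunner; // one shared async runner, owned by the proxy

    public DatabaseProxy(SqlRunner sqlRunner) {
        this.sqlRunner = sqlRunner;
    }

    // DAOs are created on demand and handed the shared runner, so nothing outside
    // the proxy ever needs to see dbFile or sqlRunner directly.
    public ProductDao productDao() {
        return new ProductDao(sqlRunner);
    }

    // Stand-in for the question's async SQLRunner.
    public interface SqlRunner {
        CompletableFuture<List<Object[]>> query(String sql, Object... params);
    }

    // A DAO that turns rows into whatever VOs the View needs.
    public static class ProductDao {
        private final SqlRunner runner;

        ProductDao(SqlRunner runner) {
            this.runner = runner;
        }

        public CompletableFuture<List<Object[]>> findAll() {
            return runner.query("SELECT * FROM product");
        }
    }
}

Commands would then retrieve the DatabaseProxy from the Facade, ask it for the DAO they need, and send a notification once the returned future completes.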
