Dagger Hilt with Room and JobScheduler - Retrofit

I want to store data locally when there is no internet connection, so I am using JobScheduler to schedule my syncing. My service needs access to a DAO, and I am not sure which components need to be defined for Dagger to correctly inject the DAO into the service. I also do not know how to constructor-inject into a service; I suspect it should not (and cannot) be constructor-injected. What is the proper approach? Which coroutine scope should I use to access the database from the service? And lastly, I need a Retrofit API to make network calls; how should I inject that into my JobService?
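For reference, here is a minimal sketch of one common approach with Hilt, written in Java with made-up names (SyncJobService, SyncApi, and the Room classes Note, NoteDao, AppDatabase, which are assumed to exist and are not shown). Since the framework instantiates services, constructor injection is not available; the service is annotated with @AndroidEntryPoint and receives its dependencies via field injection, the DAO and Retrofit API are bound in a SingletonComponent module, and a plain background executor stands in for a coroutine scope:

    import android.app.job.JobParameters;
    import android.app.job.JobService;
    import android.content.Context;
    import java.io.IOException;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.inject.Inject;
    import javax.inject.Singleton;
    import androidx.room.Room;
    import dagger.Module;
    import dagger.Provides;
    import dagger.hilt.InstallIn;
    import dagger.hilt.android.AndroidEntryPoint;
    import dagger.hilt.android.qualifiers.ApplicationContext;
    import dagger.hilt.components.SingletonComponent;
    import retrofit2.Call;
    import retrofit2.Retrofit;
    import retrofit2.converter.gson.GsonConverterFactory;
    import retrofit2.http.Body;
    import retrofit2.http.POST;

    // Hypothetical Retrofit API; Note, NoteDao and AppDatabase are assumed Room classes.
    interface SyncApi {
        @POST("notes/sync")
        Call<Void> upload(@Body List<Note> notes);
    }

    // Framework-instantiated classes cannot be constructor-injected;
    // @AndroidEntryPoint enables field injection instead.
    @AndroidEntryPoint
    public class SyncJobService extends JobService {

        @Inject NoteDao noteDao;   // available once the service's onCreate() has run
        @Inject SyncApi syncApi;

        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        @Override
        public boolean onStartJob(JobParameters params) {
            // onStartJob runs on the main thread, so Room/Retrofit work is moved off it.
            executor.execute(() -> {
                boolean reschedule = false;
                try {
                    List<Note> pending = noteDao.getUnsynced();
                    syncApi.upload(pending).execute(); // synchronous call on the worker thread
                } catch (IOException e) {
                    reschedule = true;                 // retry later when the network is back
                }
                jobFinished(params, reschedule);
            });
            return true;                               // true = work continues in the background
        }

        @Override
        public boolean onStopJob(JobParameters params) {
            executor.shutdownNow();
            return true;                               // ask the scheduler to retry
        }
    }

    // Bindings shared by the whole app, including the service.
    @Module
    @InstallIn(SingletonComponent.class)
    class DataModule {

        @Provides
        @Singleton
        static AppDatabase provideDatabase(@ApplicationContext Context context) {
            return Room.databaseBuilder(context, AppDatabase.class, "app.db").build();
        }

        @Provides
        static NoteDao provideNoteDao(AppDatabase db) {
            return db.noteDao();
        }

        @Provides
        @Singleton
        static SyncApi provideSyncApi() {
            return new Retrofit.Builder()
                    .baseUrl("https://example.com/")   // placeholder base URL
                    .addConverterFactory(GsonConverterFactory.create())
                    .build()
                    .create(SyncApi.class);
        }
    }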

Related

Adding context to DI CreateScope

In a console application I'm using MassTransit to process messages coming from a RabbitMQ queue. Every message deals with a specific customer and contains a customerId (this could be in the header or the message body, TBD).
I'm using the standard Microsoft Dependency Injection container, and I have several services I want scoped to this customer.
I can call IServiceProvider.CreateScope in the MassTransit consumer, but I cannot seem to find the proper way to attach a specific "value" to a scope: something every service resolved from that scope can use to determine the customer for which the scope was created, i.e. its "context".
My feeling is that I am looking for something like HttpContext (which of course is not available in a console application).
Do I have to do this manually, by setting properties on a scoped service on which all other (scoped) services have a dependency? That does not feel very thread-safe to me (what if a service is instantiated in the scope before I can set the customer property?).
I know this is a somewhat open-ended question, but maybe someone can still provide an answer.
I would provide code, but this is more an architectural question about the internal workings of MSDI.
MassTransit creates a scope for every consumer, so creating another scope is a bad idea. Any scoped dependencies will be resolved through the consumer's constructor automatically by the container.

What's the lifetime of an object using SignalR with Unity Dependency Resolver?

Let me start with a little setup info... I am using the repository pattern and dependency injection via Unity. The repository is implemented via Linq-To-Sql. I inject my repositories into class constructors in my web project. The repositories have a PerWebRequest lifetime.
I have implemented a few SignalR hubs and have setup a Unity dependency resolver for SignalR. I'm injecting the same repositories into the hubs using the same Unity config file, which specifies these repositories are PerWebRequest also.
Now the punchline... I ran into a problem where the web project would update a Linq-To-Sql entity and the SignalR hub would read that entity and not see the updates. I have "solved" this problem by clearing the Linq-To-Sql cache before reading the entity in the SignalR hub; DataContext.Refresh() didn't update the entire object graph.
The DataContext behind the repositories used in the hubs is also registered PerWebRequest, but it seems the SignalR hubs are using a separate DataContext that does not get destroyed when the web request completes. It appears to act like a singleton instance instead.
Do SignalR apps run in their own process, so that the DataContext accessed from the hubs is a separate DataContext in that separate process?
How could the DataContext in the SignalR hub be instantiated with a PerWebRequest lifetime if it lives apart from the web request lifecycle? And how does it end up behaving like a singleton?
It's been a while since I used stuff like Linq2Sql or concepts like PerWebRequest, so I'm not 100% sure, but if I'm correct that PerWebRequest is tied to the lifetime of the underlying HTTP request, then it will hardly work with SignalR, because SignalR's behavior can change a lot according to the chosen transport strategy. With WebSockets you might have several hub instantiations/method calls over the same connection, while with Long Polling you would probably have one (or zero) per HTTP request. Check here and here.
Given that the code you write with SignalR should be the same regardless of the transport, I think for hubs you'd always have to handle repositories in a specific way, maybe with an ad-hoc factory always clearing the cache each time a repository has to be supplied to a SignalR hub (you could try to be smart and check the transport strategy used, but those could be muddy waters).

How to know when an application is available?

When I use Cloudify (2.7) to deploy an application (e.g. an application that includes two services, A and B), I try to use Admin.addEventListener() to add some event listeners, but it doesn't work.
I tried adding a ProcessingUnitStatusChangedEventListener; when I debug the code, the value of (ProcessingUnitStatusChangedEvent) event.getNewStatus() changes from SCHEDULED to INTACT, then SCHEDULED, then INTACT again.
I also tried adding a ProcessingUnitInstanceLifecycleEventListener; when I debug the code the status is INTACT, but the service is not available.
Is there any other listener or method to know that the application (not just the services) is available, or am I using the listeners in the wrong way?
First, the Admin API is internal - use it at your own risk. And you should not be using it the way you are - Cloudify adds a lot of logic on top of the internal Admin API.
Second, it is not exactly clear where you are executing your code from.
You can always use the rest client to get an accurate state of the application. Look at https://github.com/CloudifySource/cloudify/blob/master/rest-client/src/main/java/org/cloudifysource/restclient/RestClient.java#L388
In addition, if you are running this code in a service lifecycle event handler, the easiest way to implement this is to have your 'top' level service, the one that should be available last, write an application entry to the shared attributes store in its 'postStart' event. Everyone else can just periodically poll on this entry. The polling itself is very fast, all in-memory operations.
If you do not have a top-level service, or your logic is more complicated than that, you would need to use the Service Context API to scan each service and its instances to see whether they are up. An explanation of getting service instance state is available here:
cloudify service dependsOn other service

Symfony2 service loading

I am currently designing an application in Symfony2 and had a question around when services are instantiated. Basically, are all services instantiated when the container is configured in the application load cycle or at the point when the service is requested from the container?
I understand you can flag services to be lazy loaded through the proxy manager but I just wanted to know what happens by default.
To add some context, I want to create a factory method that returns different services and am unsure whether to define the services in the service config and fetch them from the container when requested or simply instantiate them in the factory itself.
If Symfony loads all the services when the container is loaded, then that's a lot of excessive overhead for what I'm trying to do. Also, I'd rather not define concrete classes in the factory method.
Thanks for your help.

Singleton vs Single Thread

Normally servlets are instantiated just once and the web container simply spawns a new thread for every user request. Let's say I create my own web container from scratch and, instead of spawning threads, I simply create the servlets as singletons. Will I be missing anything here? I guess that, in this case, a singleton can only service one user request at a time and not multiple.
Normally servlets are instantiated just once and the web container simply spawns a new thread for every user request.
The first statement is true, but the second is actually not. Normally, threads are created once during the application's startup and kept in a thread pool. When a thread has finished its request-response processing job, it is returned to the pool. That's also why using ThreadLocal in a servlet container must be done with great care.
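That is easy to demonstrate with a plain executor standing in for the container's worker pool (hypothetical ThreadLocalLeakDemo class):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadLocalLeakDemo {

        // Stands in for per-request state that some code keeps in a ThreadLocal.
        private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

        public static void main(String[] args) {
            // A one-thread pool guarantees both "requests" reuse the same thread,
            // just like a container reusing pooled worker threads.
            ExecutorService pool = Executors.newFixedThreadPool(1);

            pool.execute(() -> {
                CURRENT_USER.set("alice");
                System.out.println("request 1 sees: " + CURRENT_USER.get()); // alice
                // Missing CURRENT_USER.remove(): the value stays attached to the pooled thread.
            });

            pool.execute(() -> {
                // This "request" never set a user, yet the stale value leaks through.
                System.out.println("request 2 sees: " + CURRENT_USER.get()); // alice again
            });

            pool.shutdown();
        }
    }

The second task prints the first task's value because the thread, and therefore its ThreadLocal slot, was reused.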
Let's say I create my own web container from scratch and, instead of spawning threads, I simply create the servlets as singletons. Will I be missing anything here?
They do not necessarily need to follow the singleton pattern. Just create one instance of each servlet during the application's startup, keep it in memory throughout the application's lifetime, and let all threads access that same instance.
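A minimal sketch of that model, using made-up MiniContainer and Handler types rather than the real Servlet API:

    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Each handler is created once at startup and must therefore be thread-safe.
    interface Handler {
        String handle(String request);
    }

    class MiniContainer {

        private final Map<String, Handler> handlers;      // one shared instance per path
        private final ExecutorService workers = Executors.newFixedThreadPool(8); // pooled threads

        MiniContainer(Map<String, Handler> handlers) {
            this.handlers = handlers;
        }

        Future<String> dispatch(String path, String request) {
            Handler handler = handlers.get(path);          // same instance for every request
            return workers.submit(() -> handler.handle(request)); // many threads, one handler
        }
    }

A call like new MiniContainer(Map.of("/hello", req -> "Hello, " + req)) then serves every request for /hello from that single handler instance.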
I guess that, in this case, a singleton can only service one user request at a time and not multiple.
This is not true. That will only happen when you synchronize access to the singleton's methods on an application-wide lock, for example by adding the synchronized modifier to your servlet's method, or a synchronized(this) block in the manager method that delegates the requests to the servlets.
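To make that concrete, a short sketch with a hypothetical CounterHandler: with the synchronized modifier, the single shared instance handles requests strictly one at a time; remove it and the increment below races instead:

    class CounterHandler {

        private int hits;

        // The instance lock serializes all callers: only one request at a time.
        public synchronized String handle(String request) {
            hits++;            // safe only because of the lock; without synchronized,
                               // this read-modify-write would race across threads
            return "hit #" + hits;
        }
    }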
JavaEE used to have a mechanism for this - a marker interface called SingleThreadModel that your servlet could implement:
Ensures that servlets handle only one request at a time. This interface has no methods.
If a servlet implements this interface, you are guaranteed that no two threads will execute concurrently in the servlet's service method. The servlet container can make this guarantee by synchronizing access to a single instance of the servlet, or by maintaining a pool of servlet instances and dispatching each new request to a free servlet.
Note that SingleThreadModel does not solve all thread safety issues. For example, session attributes and static variables can still be accessed by multiple requests on multiple threads at the same time, even when SingleThreadModel servlets are used. It is recommended that a developer take other means to resolve those issues instead of implementing this interface, such as avoiding the usage of an instance variable or synchronizing the block of the code accessing those resources. This interface is deprecated in Servlet API version 2.4.
Containers could use this to instantiate a new servlet for each request, or maintain a pool of them, if they chose to.
This was deprecated in Servlet 2.4, for the reasons documented above. Those same reasons still apply to your question.
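Purely for illustration, a sketch of what the deprecated marker looked like in use (hypothetical LegacyCounterServlet):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.SingleThreadModel;            // deprecated since Servlet 2.4
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The marker interface tells the container that no two threads may run
    // service() on the same instance at once; the container may synchronize
    // access or pool several instances to honour that.
    public class LegacyCounterServlet extends HttpServlet implements SingleThreadModel {

        private int hits;  // no data race on this field, but a pooling container
                           // may spread the count across several instances

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            hits++;
            resp.getWriter().println("hit #" + hits);
        }
    }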
That's basically it.
I would question the motivations for creating your own container, with so many available for a wide range of purposes.

Resources