How to use the Symfony Lock component across multiple server instances?

From the documentation of the Symfony Lock component, I found that:
Unlike other implementations, the Lock Component distinguishes lock instances even when they are created for the same resource. If a lock has to be used by several services, they should share the same Lock instance returned by the Factory::createLock method.
Code example:
public function tryLock($job): bool
{
    $lock = $this->factory->createLock($job->getUniqueId());
    $lock->acquire(); // non-blocking by default, returns bool

    return $lock->isAcquired();
}
If I run this function twice from different classes with the same job, the two calls block each other.
How can I prevent this while running on the same server?
And the other problem: how can I use this if I run multiple server instances?
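On a single server, the documentation quoted above already gives the answer: share the one Lock instance returned by createLock between the services that need it. For multiple servers, the usual approach is to back the factory with a store that all instances share (for example Symfony's RedisStore); local stores such as FlockStore or SemaphoreStore only work within one machine. To make the underlying idea concrete, here is a minimal sketch of such a shared lock in Java with the Jedis client; the key prefix, TTL, and class names are illustrative assumptions, not Symfony's implementation:

import java.util.UUID;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

// Illustrative sketch of a lock backed by a store shared by all servers.
// This is the idea behind Symfony's remote stores, not its actual code.
public class SharedRedisLock {
    private final Jedis jedis;
    private final String key;
    private final String token = UUID.randomUUID().toString(); // identifies this holder

    public SharedRedisLock(Jedis jedis, String resource) {
        this.jedis = jedis;
        this.key = "lock:" + resource; // hypothetical key prefix
    }

    // Non-blocking acquire: SET NX PX succeeds only for the first caller,
    // on whichever server it runs, until the key expires or is released.
    public boolean tryAcquire(long ttlMillis) {
        return "OK".equals(jedis.set(key, token, SetParams.setParams().nx().px(ttlMillis)));
    }

    // Release only if we still hold the lock. A real implementation would
    // do this check-and-delete atomically (e.g. with a Lua script).
    public void release() {
        if (token.equals(jedis.get(key))) {
            jedis.del(key);
        }
    }
}

With a store like this, a tryLock call on any server returns true for exactly one caller per resource, which is the behavior the snippet above is after.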

Related

Is it possible to determine whether a Thrift TBaseClient is currently busy or available?

In our system, we have one C++ component acting as a Thrift server and one .NET Core/C# component as a client.
So far I was managing a single connection, using a singleton to create my ThriftPushClientWrapper, which implements TBaseClient (via the object generated from the Thrift interface).
.AddSingleton<IThriftPushClientWrapper>(sp =>
{
    var localIpAddress = IPAddress.Parse(serverIp);
    var transport = new TSocketTransport(localIpAddress, dataPort);
    var protocol = new TBinaryProtocol(transport);
    return new ThriftPushClientWrapper(protocol);
});
(So far I am using version 0.13 of the Thrift library; I need to update to 0.14.1 soon, but wonder whether the server part must be updated too, or first.)
This is working great.
Now I want multiple clients that can connect to the server simultaneously, all on the same ip:port.
So I am starting a ClientFactory, but wonder how to deal with the creation of the client.
To be more precise, the server part is configured for 5 threads, so I need 5 clients.
One simple approach would be to create a new client each time, but probably inefficient.
A better approach is to keep a collection of 5 clients and use the next available free one.
So I started with the following factory, where I should get the index from outside.
private readonly ConcurrentDictionary<int, IThriftPushClientWrapper> _clientDict;

public IThriftPushClientWrapper GetNextAvailablePushClient(int index)
{
    // Return the existing client for this index, if one was already created.
    if (_clientDict.TryGetValue(index, out var client) && client != null)
        return client;

    // Otherwise create and register a new client for the expected index.
    client = CreateNewPushClient();
    _clientDict.TryAdd(index, client);
    return client;
}
private IThriftPushClientWrapper CreateNewPushClient()
{
    var localIpAddress = IPAddress.Parse(serverIp);
    var transport = new TSocketTransport(localIpAddress, dataPort);
    var protocol = new TBinaryProtocol(transport);
    return new ThriftPushClientWrapper(protocol);
}
My next issue is how to set the index from outside.
I started with a SemaphoreSlim(5, 5), using semaphore.CurrentCount as the index, but that is probably not the best idea. I also tried a rolling index from 0 to 5. But apparently a CancellationToken is used to cancel further processing; I am not sure of the root cause yet.
Is it possible to determine whether a TBaseClient is currently busy or available?
What is the recommended strategy to deal with a pool of clients?
The easiest way to solve this is to do it right: if you are going to use a resource from a pool of resources, either take it off the pool or mark it as used in some suitable way for that time.
Notably, the question has nothing to do with Thrift in particular. You are trying to work around a weak resource-management approach by leveraging other people's code that was never intended to work in such a context.
Regarding how to implement object pooling, this other question can provide further advice. Also keep in mind that, especially on Windows platforms, not all system resources can be shared freely across threads.
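To make the "take it off the pool, put it back when done" idea concrete, here is a minimal sketch of such a pool. It is written in Java here, but the same structure works in C# with BlockingCollection; the createClient factory is a stand-in for your CreateNewPushClient:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Fixed-size pool: a client is "busy" exactly while it is checked out,
// so nobody ever has to ask the client itself whether it is available.
public class ClientPool<T> {
    private final BlockingQueue<T> idle;

    public ClientPool(int size, Supplier<T> createClient) {
        this.idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(createClient.get()); // pre-create the fixed set of clients
        }
    }

    // Blocks until a client is free; no two threads can hold the same one.
    public T acquire() throws InterruptedException {
        return idle.take();
    }

    // Hand the client back so the next waiter can use it.
    public void release(T client) {
        idle.add(client);
    }
}

Callers acquire() a client, use it, and release() it in a finally block. This also answers the busy/available question: a client taken from the queue is busy, everything still in the queue is available, and no per-client flag or external index is needed.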

Implementing (Abstract) Nameko Service Inheritance

I have a nameko service that deals with lots of entities, and having all the entrypoints in a single service.py module would render the module highly unreadable and hard to maintain.
So I've decided to split the module up into multiple services, which are then used to extend the main service. I am somewhat worried about dependency injection, thinking that a dependency like the db might end up with multiple instances due to this approach. This is what I have so far:
The customer service module with all customer-related endpoints:
# app/customer/service.py
class HTTPCustomerService:
    """HTTP endpoints for customer module"""

    name = "http_customer_service"
    db = None
    context = None
    dispatch = None

    @http("GET,POST", "/customers")
    def customer_listing(self, request):
        session = self.db.get_session()
        return CustomerListController.as_view(session, request)

    @http("GET,PUT,DELETE", "/customers/<uuid:pk>")
    def customer_detail(self, request, pk):
        session = self.db.get_session()
        return CustomerDetailController.as_view(session, request, pk)
and the main service module, which inherits from the customer service and possibly other abstract services:
# app/service.py
class HTTPSalesService(HTTPCustomerService):
    """Nameko http service."""

    name = "http_sales_service"
    db = Database(Base)
    context = ContextData()
    dispatch = EventDispatcher()
and finally I run it with:
nameko run app.service
This works well, but is the approach right, especially with regard to dependency injection?
Yep, this approach works well.
Nameko doesn't introspect the service class until run-time, so it sees whatever standard Python class inheritance produces.
One thing to note is that your base class is not "abstract": if you point nameko run at app/customer/service.py, it will attempt to run it. Relatedly, if you put your "concrete" subclass in the same module, nameko run will try to run both of them. You can mitigate this by specifying the service class, i.e. nameko run app.service:HTTPSalesService

Where to place domain services in AxonIQ

I have a user aggregate which is created with a CreateUser command consisting of an aggregate identifier and a username.
Along with that I have a domain service that talks to MongoDB and checks whether the username exists; if not, it puts it there,
e.g. registerUsername(username) -> true / false depending on whether it registered the name.
My question is: would it be a good idea to create a command handler on top of the user aggregate that handles the CreateUser command and, depending on whether the username is taken, dispatches the proper commands/events? Like so:
@Component
class UserCommandHandler(
    @Autowired private val repository: Repository<User>,
    @Autowired private val eventBus: EventBus,
    @Autowired private val service: UniqueUserService
) {
    @CommandHandler
    fun createUser(cmd: CreateUser) {
        if (this.service.registerUsername(cmd.username)) {
            this.repository.newInstance { User(cmd.id) }
                .handle(GenericCommandMessage(cmd))
        } else {
            // EventBus has no publishEvent method; wrap the payload instead.
            this.eventBus.publish(GenericEventMessage(UserCreateFailed(cmd.id, cmd.username)))
        }
    }
}
This question is not necessarily about set-based uniqueness in DDD, but more about where to put the dependency on domain services. I could probably create a user registration saga and inject the service into the saga, but I think a saga should only rely on command dispatching and not contain any if/else logic.
I think the place to put your domain service depends on the use case at hand.
I typically try to have a domain service make virtually no outbound calls to other services or databases at all.
The domain service you're now conceiving, however, does exactly that in order to, as you point out, solve the uniqueness issue.
In this situation, you could likely get by with the suggested approach.
You could also think of introducing a MessageHandlerInterceptor (or, even fancier, a HandlerEnhancerDefinition as described here), specifically triggering on the create command and performing the desired check.
If it is a domain service like the one I just depicted (i.e. zero outbound calls from the domain service), then you can safely wire it into your command handling functions to perform some action.
If you're in a Spring environment, simply having your domain service as a bean and providing it as a parameter to your message handling function is sufficient for Axon to resolve it for you (by means of ParameterResolvers, as described here). A sketch of that wiring follows below.
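For illustration, here is a minimal sketch of that wiring in Java, assuming the UniqueUserService bean from the question; the aggregate shape and the UserCreated event are illustrative assumptions, not code from the question:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class User {

    @AggregateIdentifier
    private String userId;

    protected User() {
        // Required by Axon for event-sourced reconstruction.
    }

    // Axon resolves the UniqueUserService parameter from the Spring
    // context through its ParameterResolvers; no external handler needed.
    @CommandHandler
    public User(CreateUser cmd, UniqueUserService service) {
        if (!service.registerUsername(cmd.getUsername())) {
            throw new IllegalStateException("Username already taken: " + cmd.getUsername());
        }
        // UserCreated is a hypothetical event type for this sketch.
        AggregateLifecycle.apply(new UserCreated(cmd.getId(), cmd.getUsername()));
    }

    @EventSourcingHandler
    public void on(UserCreated event) {
        this.userId = event.getId();
    }
}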
Hope this helps you out, @PolishCivil!

Any disadvantages to calling keepSynced on same ref multiple times?

Due to the flow of my app I'm forced to call keepSynced(true) on the same ref every time the user opens the app. I was wondering if it's bad to do so or if Firebase just ignores any redundant keepSynced() calls on the same ref.
How about calling keepSynced(true) on a sub-ref of a ref you already called keepSynced(true) on; are those calls ignored too?
I'm really looking for a conclusive answer.
keepSynced is either on or off for a path given by a reference. There is no "multiple keepSynced" state; that would be pointless to implement inside the SDK, since there is no advantage to doing so.
You only need to call keepSynced(true) once. The way I implement it is to extend the Application class.
public class GlobalApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        FirebaseDatabase.getInstance().setPersistenceEnabled(true);
        FirebaseDatabase.getInstance().getReference().keepSynced(true);
    }
}
Calling keepSynced(true) on a node ensures that the Firebase Database client will synchronize that node whenever it has a connection to the database servers. There is no built-in API to keep a node synchronized when there is no such connection.
keepSynced(true) is useful if we enable offline support:
FirebaseDatabase.getInstance().setPersistenceEnabled(true);
If we set keepSynced(true), then whenever the user's internet connection is back online, the client updates that node's data. More explanation can be read here.
For example: if one user deletes a node while another user is offline, the offline user's cached copy of the data will still exist if keepSynced(true) was not set, and in some cases that stale data can cause a force close.
So my conclusion is: either don't support an offline database, or support offline together with keepSynced(true). There is also a third option: toggle keepSynced between true and false as needed, as sketched below.
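A minimal sketch of that toggle, tied to a screen's lifecycle; the ScoresActivity name and the "scores" path are illustrative assumptions:

import android.app.Activity;

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class ScoresActivity extends Activity {
    private DatabaseReference scoresRef;

    @Override
    protected void onStart() {
        super.onStart();
        scoresRef = FirebaseDatabase.getInstance().getReference("scores");
        scoresRef.keepSynced(true);  // keep this subtree fresh while visible
    }

    @Override
    protected void onStop() {
        super.onStop();
        scoresRef.keepSynced(false); // stop syncing when the screen goes away
    }
}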

Oracle Coherence

I'm new to Oracle Coherence. I read the documentation and did the hands-on exercises using the command prompt; I had no issues understanding them. Then I downloaded Eclipse with the Oracle Coherence tools and created the client application for Oracle Coherence as described here:
http://docs.oracle.com/cd/E18686_01/coh.37/e18692/installjdev.htm
I ran it, and it worked fine, just as in my console application. Then I created a new project in the same workspace, created a main class that accessed the named cache, and put and retrieved some values using the code below:
package coherenceClient;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class Main {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("myCache");
        cache.put("MyFirstCacheObject", "This is my first Cache Object");
        System.out.println(cache.get("MyFirstCacheObject"));
    }
}
It retrieved the same value. Then I created another class and tried to retrieve the same value, but it returned null. Are there any mistakes in the code?
package coherenceClient;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class Recevier {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("myCache");
        System.out.println(cache.get("MyFirstCacheObject"));
    }
}
If the Coherence cache resides in your own JVM (it is not run as a standalone server), then all the data gets discarded when your program finishes (you are using in-memory storage). Try putting Thread.sleep(200000); at the end of the first program and then run the second instance within that window, for example:
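Concretely, the first program with that sleep added would look like this (the same code as above, plus the sleep), keeping its JVM, and the cache it holds, alive long enough for the second program to run:

package coherenceClient;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        NamedCache cache = CacheFactory.getCache("myCache");
        cache.put("MyFirstCacheObject", "This is my first Cache Object");
        System.out.println(cache.get("MyFirstCacheObject"));
        // Keep this JVM alive so a second JVM can join the cluster
        // and read the value before the in-memory cache disappears.
        Thread.sleep(200000);
    }
}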
In the command prompt you started the server (standalone) and the clients joined that server, so all the data in the cache remains available until the server stops, even if the client which inserted the data leaves the session.
But in the case above, the Coherence cache resides in the JVM (Eclipse) itself and not in a standalone server, so you get a null value once the first program exits.
When you run the second JVM, check the stdout of the original Coherence cache server node to see whether the new member actually joins the cluster (check the MemberSet). You might just be running two separate JVMs which are completely unaware of each other; hence CacheFactory.getCache("myCache") creates a separate cache in each JVM.
The way around this is to use cache-server.cmd to start a Coherence cache server and then run your Eclipse programs with a distributed/partitioned or replicated scheme. That way, even when your program exits, the actual data lives on in the Coherence cache server for the second JVM to retrieve when it joins the same cluster. For example:
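Assuming the default cache configuration (which maps cache names such as myCache to a distributed scheme), the flow would be roughly: start the server with cache-server.cmd, then run each client JVM as a storage-disabled member. tangosol.coherence.distributed.localstorage is the standard Coherence system property for that; classpath details are omitted here:

cache-server.cmd
java -Dtangosol.coherence.distributed.localstorage=false coherenceClient.Main
java -Dtangosol.coherence.distributed.localstorage=false coherenceClient.Recevier

With localstorage=false, the data written by Main is held by the cache server, so Recevier can read it even after Main exits.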
