Implement own transaction management by removing EJBs

I have old EJB transaction management implemented with CMP. I need to remove these EJBs and build custom transaction management, and I can't use Spring for this. So please suggest how to custom-implement transaction management that supports NotSupported, Required, and RequiresNew.
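One way to approach this without Spring is to track the current transaction in a ThreadLocal and implement each propagation attribute as a decision about whether to join, suspend, or start a transaction. The sketch below is a minimal, hypothetical illustration (the names Propagation, Transaction, and TxManager are made up, and the commit/rollback bodies would delegate to your real resource, e.g. a JDBC connection):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// EJB-style propagation attributes we want to support.
enum Propagation { REQUIRED, REQUIRES_NEW, NOT_SUPPORTED }

final class Transaction {
    private static final AtomicInteger SEQ = new AtomicInteger();
    final int id = SEQ.incrementAndGet();
    void commit()   { /* commit the underlying resource (e.g. JDBC connection) */ }
    void rollback() { /* roll back the underlying resource */ }
}

final class TxManager {
    // The transaction bound to the current thread, if any.
    private static final ThreadLocal<Transaction> CURRENT = new ThreadLocal<>();

    static Transaction current() { return CURRENT.get(); }

    static <T> T execute(Propagation propagation, Supplier<T> work) {
        Transaction outer = CURRENT.get();
        switch (propagation) {
            case REQUIRED:
                // Join the caller's transaction if one exists, otherwise start one.
                if (outer != null) return work.get();
                return runInNewTx(work);
            case REQUIRES_NEW:
                // Suspend the caller's transaction and run in a fresh one.
                try { CURRENT.remove(); return runInNewTx(work); }
                finally { CURRENT.set(outer); }
            case NOT_SUPPORTED:
                // Suspend the caller's transaction and run non-transactionally.
                try { CURRENT.remove(); return work.get(); }
                finally { CURRENT.set(outer); }
            default:
                throw new IllegalArgumentException("unknown propagation");
        }
    }

    private static <T> T runInNewTx(Supplier<T> work) {
        Transaction tx = new Transaction();
        CURRENT.set(tx);
        try { T result = work.get(); tx.commit(); return result; }
        catch (RuntimeException e) { tx.rollback(); throw e; }
        finally { CURRENT.remove(); }
    }
}
```

A real implementation would also have to decide how the Transaction maps onto resources (one JDBC connection per transaction is the usual choice) and how to mark transactions rollback-only, but the suspend/resume mechanics above are the core of the three attributes.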

Pact flow for Event Driven Applications

Although Pact supports testing of messages, I find that the recommended flow in the "Pact Nirvana" guide doesn't quite match the flow that I understand an Event Driven Application needs.
Let's say we have an Order management service and a Shipping management service.
The Shipping service emits ShippingPreparedEvents that are received by the Order service.
If we deleted a field inside the ShippingPreparedEvent, I'd expect first to make a change to the Order service so that it stops reading the old field. Deploy it. And then make the change in the Shipping service and deploy it.
That way, there wouldn't be any downtime on the services.
However, I believe Pact would expect to deploy the Shipping service first (it's the provider of the event) so that the contract can be verified before deploying the consumer. In this case, deploying the provider first will break my consumer.
Can this situation be avoided somehow? Am I missing anything?
Just to provide more context, we can see in this link that different changes would require different orders of deployment: https://docs.confluent.io/current/schema-registry/avro.html#summary
I won't be using Kafka nor Avro, but I believe my flow would be similar.
Thanks a lot.
If we deleted a field inside the ShippingPreparedEvent, I'd expect first to make a change to the Order service so that it stops reading the old field. Deploy it. And then make the change in the Shipping service and deploy it. That way, there wouldn't be any downtime on the services.
I agree. What specifically in the Pact Nirvana guide gives you the impression this isn't the way to go? Pact (and the Pact Broker) don't actually care about the order of deployments.
In your case, removing the field would cause a failure of a can-i-deploy check, because removing the field would break the Order Management Service. The only approach would be to remove the field usage from the consumer, publish a new version of that contract and deploy to Production first.
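The consumer-side change can be very small. Here is a hypothetical sketch of the Order service's handler after it has stopped reading the field to be deleted (the handler class, the orderId field, and the to-be-deleted carrierCode field are all made up for illustration); because the deleted field is simply never touched, events published by both the old and the new Shipping service versions are accepted:

```java
import java.util.Map;

// Hypothetical Order-service handler after the consumer-first change:
// it reads only the fields the new contract keeps, so the provider can
// later delete "carrierCode" without breaking this consumer.
final class ShippingPreparedHandler {
    String handle(Map<String, Object> event) {
        String orderId = (String) event.get("orderId");
        // "carrierCode" is deliberately never read any more.
        return "prepared:" + orderId;
    }
}
```

Once this version of the consumer has published its new (smaller) contract and is verified and deployed, the provider's deletion becomes a backwards-compatible change from the broker's point of view.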

Invoke web service on creating a web service proxy on DataPower

We have a SOA Service Registry within our organization. This is a custom-built web application. We ask different teams to register their developed services in the Service Registry, but we are not able to ensure that every team is registering all of their services. To enable better SOA governance, we want to enforce automatic service registration in the service repository by application teams: whenever they create a web service proxy on the DataPower XG45 appliance, we want to invoke a web service call that will automatically create the service in the custom registry.
Our team is using IBM DataPower XG45.
Is it possible to integrate IBM DataPower XG45 with the custom registry?
There are a few management interfaces to DataPower. What you can do is poll those interfaces on a regular basis to extract any info regarding deployed web service proxies. It's pretty easy to set up a web service call from whatever app you have. If you poll the management interface often, it's pretty much the same as if DataPower had created the registry entry itself.
A good e-book (although a bit old) is: http://www.redbooks.ibm.com/redpapers/pdfs/redp4446.pdf
For instance, the AMP interface can query available WSGateway services in a domain (called RIV below). The response will include info about referenced WSDL files, service names and referenced HTTP protocol handlers. For some details you may need to query further, and for others you may be able to figure out what to enter into the registry from the export.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns="http://www.datapower.com/schemas/appliance/management/3.0">
  <soapenv:Header/>
  <soapenv:Body>
    <ns:GetReferencedObjectsRequest>
      <ns:Domain>RIV</ns:Domain>
      <ns:ObjectClass>WSGateway</ns:ObjectClass>
      <ns:ObjectName></ns:ObjectName>
    </ns:GetReferencedObjectsRequest>
  </soapenv:Body>
</soapenv:Envelope>
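A registry-sync job could post exactly this envelope on a schedule. Below is a minimal sketch using java.net.http (Java 11+); the class name is made up, and the endpoint (commonly https://&lt;appliance&gt;:5550/service/mgmt/amp) and credentials are assumptions you'd replace with your appliance's values:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hypothetical poller for the AMP interface; endpoint and credentials
// are placeholders for your appliance's management interface.
final class AmpPoller {

    // Builds the GetReferencedObjectsRequest envelope shown above.
    static String requestBody(String domain, String objectClass) {
        return "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\""
             + " xmlns:ns=\"http://www.datapower.com/schemas/appliance/management/3.0\">"
             + "<soapenv:Header/><soapenv:Body>"
             + "<ns:GetReferencedObjectsRequest>"
             + "<ns:Domain>" + domain + "</ns:Domain>"
             + "<ns:ObjectClass>" + objectClass + "</ns:ObjectClass>"
             + "<ns:ObjectName></ns:ObjectName>"
             + "</ns:GetReferencedObjectsRequest>"
             + "</soapenv:Body></soapenv:Envelope>";
    }

    // Posts the request with HTTP basic auth and returns the raw SOAP response,
    // ready to be parsed and pushed into the custom registry.
    static String poll(String endpoint, String user, String password) throws Exception {
        String auth = Base64.getEncoder().encodeToString((user + ":" + password).getBytes());
        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Content-Type", "text/xml")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(requestBody("RIV", "WSGateway")))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}
```

Parsing the response and diffing it against what is already in the registry is then an ordinary XML-processing task on your side.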

Is there a direct way to query and update App data from within a proxy or do I have to use the management API?

I have a need to change Attributes of an App and I understand I can do it with management server API calls.
The two issues with using the management server APIs are:
- performance: it's making calls to the management server when it might be possible to do this directly in the message processor. Performance issues can probably be mitigated with caching.
- availability: having to use management server APIs means the system depends on the management server being available, whereas doing it directly in the proxy itself would reduce the number of failure points.
Any recommended alternatives?
Finally, all entities are stored in Cassandra (for the runtime).
Your best choice is using an AccessEntity policy for getting any info about an entity; that would not hit the MS. But just for your information: most of the time you do not even need an AccessEntity policy. When you use a verify API key or verify access token policy, all the related entity details are made available as flow variables by the MP, so no additional AccessEntity calls should be required.
When you are updating any entity (like a developer or application), I really assume it is a management-type use case and not a runtime use case. Hence using management APIs should be fine.
If your use case requires a runtime API call to in turn update an attribute on the application, then possibly that attribute should not be part of the application. Think about how you can move it out to a cache, KVM or some other place where you can access it from the MP (just a thought, without completely knowing the use cases).
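If you do move the attribute out to a KVM, reading it back at runtime is a single policy. Below is a sketch of a KeyValueMapOperations policy; the map identifier, target variable, and the flow variable used as the key are all assumptions for illustration:

```xml
<!-- Hypothetical map identifier, key source and target variable. -->
<KeyValueMapOperations name="KVM-Get-AppSetting" mapIdentifier="app-settings">
  <Scope>environment</Scope>
  <Get assignTo="flow.app.setting">
    <Key>
      <!-- Assumes a flow variable holding the app name, e.g. one
           populated by the verify API key policy earlier in the flow. -->
      <Parameter ref="developer.app.name"/>
    </Key>
  </Get>
</KeyValueMapOperations>
```

Writes to the KVM can then happen out-of-band through the management API without putting the management server in the runtime path.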
The design of the system is that all entity editing goes through the Management Server, which in turn is responsible for performing the edits in a performant and scalable way. The Management Server is also responsible for knowing which message processors need to be informed of the changes via zookeeper registration. This also ensures that if a given Message Processor is unavailable because it, for example, is being upgraded, it will get the updates whenever it becomes available. The Management Server is the source of truth.
In the case of Developer App Attributes, (or really any App meta-data) the values are cached for 3 minutes (I think), so that the Message Processor may not see the new values for up to 3 minutes.
As far as availability, the Management Server is designed to be highly available, relying on the same underlying architecture as the message processor design.

Experiences with single-instance multi-tenant web application in Seam?

Any experiences with Seam in a one-instance multi-tenant setup? Is Seam suited for that setup? How did you realise it? What were the costs involved?
Our situation: A Seam 2.1 SaaS web-app (POJO, no EJB). Available development budget forced us towards a simplistic one-instance per tenant design. The application is not in production yet but nearly finished.
I expect our customer might reconsider a one-instance multi-tenant setup if it lowers the projected hosting costs.
We've developed a multi-tenant SaaS application with Seam. I don't think that Seam has any advantages or disadvantages for this sort of thing.
The only piece of functionality that is possibly useful is Hibernate filters (e.g. have a company id on every table and set a Hibernate filter for it). This means every query will have this ID automatically appended.
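Concretely, the filter approach looks roughly like this. This is a sketch assuming the Hibernate 3.x annotations available in the Seam 2.x era; the entity, column and filter names are made up:

```java
import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;
import javax.persistence.Entity;
import javax.persistence.Id;

// Every tenant-owned table carries a company_id column; the filter
// appends "company_id = :companyId" to every query against this entity.
@Entity
@FilterDef(name = "companyFilter",
           parameters = @ParamDef(name = "companyId", type = "long"))
@Filter(name = "companyFilter", condition = "company_id = :companyId")
public class Task {
    @Id
    private Long id;
    private Long companyId;
}

// Enabled once per request, before any queries run, e.g. in a Seam
// component that observes login:
//   Session session = (Session) entityManager.getDelegate();
//   session.enableFilter("companyFilter").setParameter("companyId", currentCompanyId);
```

The main design caveat is that the filter only applies where it is enabled, so it is worth centralizing the enableFilter call rather than scattering it across components.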
I have a class called User, and it has as its members all of that user's data. So there's a one-to-many relationship from User to Task, for instance. Then my query for all of a user's tasks is simply: select task from Task task, User user where user.id = #{user.id} and task member of user.taskList. I could also have used filters, as another answer mentioned. However, since the #{user} object is created on login, it's available via Seam's parsing of the EL string. Quite handy.
So, while there is nothing in Seam to support multi-tenant, it's fairly easy to do.

In which layer would you implement transactions using asp.NET TransactionScope?

I have a service, business, and data access layer. In which layer should I implement transactions using asp.NET TransactionScope? Also, is nesting transactions a good thing? I have had problems with that.
TransactionScope is part of .NET, not specific to ASP.NET.
We would place the transaction scope in the business layer. The service layer is more of a facade; if something requires a transaction, it should be within a single business operation.
