As described in the OpenEJB docs, we can configure a JMS connection factory and queues, and they will appear in JNDI as:
openejb:Resource/MyJmsConnectionFactory
openejb:Resource/MyQueue
Given those JNDI entries, how can I tell an MDB to use them?
Is it also possible to change the JNDI name, for example so that the connection factory appears as java:/ConnectionFactory or simply ConnectionFactory?
Things work differently than you may be imagining. Specifying that an MDB is tied to a javax.jms.Queue, and the name of that queue, is part of the EJB specification and is done via the ActivationConfig, like so:
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(
        propertyName = "destinationType",
        propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(
        propertyName = "destination",
        propertyValue = "FooQueue")})
public static class JmsBean implements MessageListener {
    public void onMessage(Message message) {
    }
}
The MDB container itself is not actually JMS-aware at all. It simply understands that it should hook the bean up to a specific Resource Adapter.
<openejb>
    <Resource id="MyJmsResourceAdapter" type="ActiveMQResourceAdapter">
        ServerUrl tcp://someHostName:61616
    </Resource>

    <Container id="MyJmsMdbContainer" ctype="MESSAGE">
        ResourceAdapter MyJmsResourceAdapter
    </Container>
</openejb>
The above shows an MDB Container hooked up to a Resource Adapter that uses JMS via ActiveMQ.
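To connect this back to the JNDI names from the question: the destination value in the ActivationConfig is resolved against Resource entries of the matching type, so the connection factory and queue can be declared alongside the resource adapter. A hedged sketch of what that might look like in the same openejb.xml (the ids are taken from the question, and the exact properties can vary by OpenEJB version):
<!-- Sketch only: declares the resources that would show up as
     openejb:Resource/MyJmsConnectionFactory and openejb:Resource/MyQueue -->
<Resource id="MyJmsConnectionFactory" type="javax.jms.ConnectionFactory">
    ResourceAdapter MyJmsResourceAdapter
</Resource>

<Resource id="MyQueue" type="javax.jms.Queue"/>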
Here is an example that shows an MDB Container hooked up to a Quartz Resource Adapter.
It isn't possible to tell the MDB Container about JMS-specific things; per the specification, the relationship is much more generic than that. This blog post gives some insight into how things work.
I have the gRPC server code as below:
public void buildServer() {
    List<BindableService> theServiceList = new ArrayList<BindableService>();
    theServiceList.add(new CreateModuleContentService());
    theServiceList.add(new RemoveModuleContentService());

    ServerBuilder<?> sb = ServerBuilder.forPort(m_port);
    for (BindableService aService : theServiceList) {
        sb.addService(aService);
    }
    m_server = sb.build();
}
and client code as below:
public class JavaMainClass {
    public static void main(String[] args) {
        CreateModuleService createModuleService = new CreateModuleService();
        ESDStandardResponse esdReponse = createModuleService.createAtomicBlock("8601934885970354030", "atm1");

        RemoveModuleService moduleService = new RemoveModuleService();
        moduleService.removeAtomicBlock("8601934885970354030", esdReponse.getId());
    }
}
While running the client, I am getting the exception below:
Exception in thread "main" io.grpc.StatusRuntimeException: UNIMPLEMENTED: Method grpc.blocks.operations.ModuleContentServices/createAtomicBlock is unimplemented
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:214)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:139)
In the server class above, if I comment out the line theServiceList.add(new RemoveModuleContentService());, the CreateModuleContentService service works fine. Likewise, without commenting anything out, all the services of the RemoveModuleContentService class work as expected. So the problem is with the first service whenever another one is added.
Can someone please suggest how I can add two services to the ServerBuilder?
A particular gRPC service can only be implemented once per server. Since the name of the gRPC service in the error message is ModuleContentServices, I'm assuming CreateModuleContentService and RemoveModuleContentService both extend ModuleContentServicesImplBase.
When you add the same service multiple times, the last one wins. The way the generated code works, every method of a service is registered even if you don't implement that particular method. Every service method defaults to a handler that simply returns "UNIMPLEMENTED: Method X is unimplemented". createAtomicBlock isn't implemented in RemoveModuleContentService, so it returns that error.
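A minimal sketch of the straightforward fix, assuming the generated base class is ModuleContentServicesGrpc.ModuleContentServicesImplBase (as the error message suggests); the request/response message types are placeholders, since the .proto file is not shown:
import io.grpc.stub.StreamObserver;

// Sketch only: one implementation class that overrides both RPC methods and is registered once.
// CreateAtomicBlockRequest, RemoveAtomicBlockRequest and ESDStandardResponse are assumed names.
public class ModuleContentServicesImpl
        extends ModuleContentServicesGrpc.ModuleContentServicesImplBase {

    @Override
    public void createAtomicBlock(CreateAtomicBlockRequest request,
                                  StreamObserver<ESDStandardResponse> responseObserver) {
        // ... create the block ...
        responseObserver.onNext(ESDStandardResponse.newBuilder().build());
        responseObserver.onCompleted();
    }

    @Override
    public void removeAtomicBlock(RemoveAtomicBlockRequest request,
                                  StreamObserver<ESDStandardResponse> responseObserver) {
        // ... remove the block ...
        responseObserver.onNext(ESDStandardResponse.newBuilder().build());
        responseObserver.onCompleted();
    }
}
In buildServer() you would then register this single service once, e.g. sb.addService(new ModuleContentServicesImpl());, instead of adding the two partial implementations.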
If you interact with the ServerServiceDefinition returned by bindService(), you can mix-and-match methods a bit more, but this is a more advanced API and is intended more for frameworks to use because it can become verbose to compose every application service individually.
public class Startup
{
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // loggerFactory, logFilePath1 and logFilePath2 are created/defined elsewhere (simplified)
        loggerFactory.AddFile(logFilePath1);
        services.AddSingleton<ILoggerFactory>(loggerFactory);

        loggerFactory.AddFile(logFilePath2);
        services.AddSingleton<ILoggerFactory>(loggerFactory);
    }
}
Within the Startup.cs class I create two loggers. Since there are two loggers, how can I get the ILogger into the controller? Can it be done the normal way, or is there a different way to pass the logger file name when logging within the controller?
OK, so you want to have two different loggers in a single controller, and you want these two loggers to log to different files. .NET Core logging does not have good support for this scenario, so it requires a bit of hacking to achieve it. Whenever I find myself getting a lot of resistance from the framework I'm using, I reconsider whether what I'm trying to do is a good idea and, if it is, whether I should use another framework, so you might want to do the same. With that in mind, here is a way to achieve what you want.
Loggers can be identified by a category. In your case you want a single controller to have two different loggers, so you have to use ILoggerFactory to create the loggers (you could use the generic ILogger<T> interface, but that becomes a bit weird because you would need two different types for T):
public class MyController : Controller
{
    private readonly ILogger logger1;
    private readonly ILogger logger2;

    public MyController(ILoggerFactory loggerFactory)
    {
        logger1 = loggerFactory.CreateLogger("Logger1");
        logger2 = loggerFactory.CreateLogger("Logger2");
    }
}
The categories of the loggers are Logger1 and Logger2.
Each logger will by default log to all the configured providers. You want a logger with one category to log to one provider and a logger with another category to log to another provider.
While you can create filters based on category, provider and log level, the problem is that you want to use the same provider for both categories. Providers are identified by their type, so you cannot create a rule that targets a specific instance of a provider. If you create a rule for the file provider, it will affect all configured file providers.
So this is where the hacking starts: You have to create your own provider types that are linked to the files to be able to filter on each file.
.NET Core does not have built-in support for logging to files, so you need a third-party provider. You have not specified which provider you use, so for this example I will use the Serilog file sink together with the Serilog provider that allows you to plug a Serilog logger into the .NET Core logging framework.
To be able to filter on the provider, you have to create your own provider types. Luckily, that is easily done by deriving from SerilogLoggerProvider:
class SerilogLoggerProvider1 : SerilogLoggerProvider
{
    public SerilogLoggerProvider1(Serilog.ILogger logger) : base(logger) { }
}

class SerilogLoggerProvider2 : SerilogLoggerProvider
{
    public SerilogLoggerProvider2(Serilog.ILogger logger) : base(logger) { }
}
These two providers do not add any functionality, but they allow you to create filters that target a specific provider.
The next step is creating two different Serilog loggers that log to different files:
var loggerConfiguration1 = new LoggerConfiguration()
    .WriteTo.File(@"...\1.log");

var loggerConfiguration2 = new LoggerConfiguration()
    .WriteTo.File(@"...\2.log");

var logger1 = loggerConfiguration1.CreateLogger();
var logger2 = loggerConfiguration2.CreateLogger();
You configure your logging in Main by calling the extension method .ConfigureLogging:
.ConfigureLogging((hostingContext, loggingBuilder) =>
{
    loggingBuilder
        .AddProvider(new SerilogLoggerProvider1(logger1))
        .AddFilter("Logger1", LogLevel.None)
        .AddFilter<SerilogLoggerProvider1>("Logger1", LogLevel.Information)
        .AddProvider(new SerilogLoggerProvider2(logger2))
        .AddFilter("Logger2", LogLevel.None)
        .AddFilter<SerilogLoggerProvider2>("Logger2", LogLevel.Information);
})
Each provider (which is associated with a specific file) is added, and then two filters are configured for it. I find the filter evaluation rules hard to reason about, but the two filters added for each provider - one with LogLevel.None and one with LogLevel.Information - achieve the desired result of ensuring that log messages for the two different categories are routed to the two different providers. If a third provider is added, it will not be affected by these filters, and messages from both categories will be logged by that third provider.
Despite what is claimed here:
for applications not working because of missing @Path at class level
-> it should work now
I still have to annotate my endpoint implementations, as annotations on interfaces are not being picked up.
Is it related to the way I configure JAX-RS, or is it a bug still present in TomEE?
interface:
#Path("myPath") public interface MyEndpoint {
#Path("{id}") String getById(#PathParam("id") long id);
}
implementation:
@Stateless
public class EJBBackedMyEndpoint implements MyEndpoint {
    public String getById(long id) { return "foo"; }
}
openejb-jar.xml
<openejb-jar xmlns="http://www.openejb.org/openejb-jar/1.1">
    <ejb-deployment ejb-name="EJBBackedMyEndpoint">
        <properties>cxf.jaxrs.providers = exceptionMapper</properties>
    </ejb-deployment>
</openejb-jar>
resources.xml
<resources>
<Service id="exceptionMapper" class-name="package.MyExceptionMapper"/>
</resources>
beans.xml is present with just an empty root element.
Update:
The JAX-RS spec apparently doesn't mention class-level annotations at all:
@Consumes and @Produces work when applied on the interface,
@Path (class level) doesn't work when applied on the interface,
@Path on the method level is honoured when routing requests; however, the UriBuilder fails:
UriBuilder.path(EJBBackedMyEndpoint.class, "getById") throws IllegalArgumentException: No Path annotation for 'retrieve' method.
That blog post is perhaps misleading. Putting @Path, @GET, @PathParam or other JAX-RS annotations on an interface is not supported by JAX-RS. Per the spec, all of these need to be on the "Resource Class", which is the @Stateless bean class in this situation.
If you move @Path from the interface to the bean class it should work. At least it should get further.
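A hedged sketch of what that could look like, reusing the classes from the question (the @GET annotation is an assumption, since the original snippet did not show an HTTP method annotation):
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Sketch only: the JAX-RS annotations live on the resource (bean) class itself.
@Stateless
@Path("myPath")
public class EJBBackedMyEndpoint implements MyEndpoint {

    @GET
    @Path("{id}")
    public String getById(@PathParam("id") long id) {
        return "foo";
    }
}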
I read this article on SO and had some clarifying questions.
I put my config.properties under src/main/resources.
In the spring-servlet.xml config file I added the following:
<context:property-placeholder location="classpath:config.properties"/>
In my business layer, I am trying to access it via
#Value("${upload.file.path}")
private String uploadFilePath;
Eclipse shows the error:
The attribute value is undefined for the annotation type Value
Can I not access the property in the business layer, or are property files only read in the controller?
UPDATE:
src/main/java/com.companyname.controllers/homecontroller.java
public String home(Locale locale, Model model) {
    MyServiceObject myObj = new MyServiceObject();
    System.out.println("Property from my service object: = " + myObj.PropertyValue());
    if (myObj.PerformService()) {
        // ...
    }
}
src/main/java/com.companyname.services/MyService.java
public class MyServiceObject {

    @Value("${db.server.ip}")
    private String _dbServerIP;

    public String PropertyValue() {
        return _dbServerIP;
    }
}
Another site where I found the explanation
Please check that you import Value from the org.springframework.beans.factory.annotation package:
import org.springframework.beans.factory.annotation.Value;
Also, the property placeholder must be declared in the respective context configuration file; in the case of a controller, that is most likely the Spring dispatcher servlet configuration file.
Update: You are confusing the property placeholder, which post-processes bean values containing a dollar sign (${<property name>}), with the Spring Expression Language container extension, which processes values containing a hash sign (#{<SpEL expression>}). In the link you have shown, the latter approach is used.
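A short illustration of the difference (the property key and the SpEL expression here are just examples):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class PlaceholderVsSpelExample {

    // Resolved by the property placeholder (<context:property-placeholder .../>):
    @Value("${db.server.ip}")
    private String dbServerIp;

    // Resolved by the Spring Expression Language (SpEL) evaluator:
    @Value("#{systemProperties['user.home']}")
    private String userHome;
}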
Regarding the instantiation of MyServiceObject myObj:
If you want the object to be managed by Spring, you should delegate its creation to the container.
If MyServiceObject is a stateless service, then it can be a singleton with the singleton bean scope; you should register it in your application context, for example with the following XML configuration:
<bean class="my.package.MyServiceObject"/>
and inject it into your controller:
private MyServiceObject myServiceObject;

@Autowired
public void setMyServiceObject(MyServiceObject myServiceObject) {
    this.myServiceObject = myServiceObject;
}
If many instances of MyServiceObject are required, you can declare it as a bean with some other (non-singleton) bean scope (prototype or request, for example).
However, as there is only one instance of the controller, you can't merely let the Spring container autowire a MyServiceObject instance into a controller field, because there would be only one field and many instances of the MyServiceObject class. You can read about the different approaches (for the different bean scopes) to resolving this issue in the respective section of the documentation.
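For instance, one common pattern for prototype-scoped beans (a sketch under that assumption, not taken verbatim from the documentation) is to inject an ObjectFactory and ask it for a fresh instance where needed:
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;

@Controller
public class HomeController {

    // With MyServiceObject declared as a prototype-scoped bean,
    // each getObject() call returns a new, fully wired instance.
    @Autowired
    private ObjectFactory<MyServiceObject> myServiceObjectFactory;

    public String home() {
        MyServiceObject myObj = myServiceObjectFactory.getObject();
        System.out.println("Property from my service object: = " + myObj.PropertyValue());
        return "home";
    }
}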
Here is a method that allows us to fetch whatever values are needed from a property file. This can be done in Java code (a controller class) or in a JSP.
Create a property file WEB-INF/classes/messageSource.properties. It will be on the classpath and accessible from both controllers and JSTL.
Like any property file, this one consists of key/value pairs.
For example:
hello=Hello JSTL, Hello Controller
For the Controllers
Locate the XML file that you use to define your Spring beans. In my case it is named servlet-context.xml. You may need to determine this by looking in your web.xml, in cases where you are using the servlet contextConfigLocation property.
Add a new Spring bean definition with id="messageSource". This bean will be loaded at runtime by Spring with the property file's key/value pairs. Create the bean with the following properties:
bean id="messageSource"
class = org.springframework.context.support.ReloadableResourceBundleMessageSource
property name="basename" value="WEB-INF/classes/messageSource
In the bean definition file for the controller class (testController) add the messageSource as a property. This will inject the messageSource bean into the controller.
bean id="testController" class="com.app.springhr.TestController"
beans:property name="messageSource" ref="messageSource"
In the controller JAVA class, add the messageSource Spring Bean field and its getters and setters. Note the field type is ReloadableResourceBundleMessageSource.
private org.springframework.context.support.ReloadableResourceBundleMessageSource messageSource;

public org.springframework.context.support.ReloadableResourceBundleMessageSource getMessageSource() {
    return messageSource;
}

public void setMessageSource(
        org.springframework.context.support.ReloadableResourceBundleMessageSource messageSource) {
    this.messageSource = messageSource;
}
In your controller code, you can now fetch any known property value from the bundle.
String propValue = getMessageSource().getMessage("hello", objArray, null);
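As a side note (a sketch, not one of the original steps), with annotation-driven configuration the same lookup can be done by injecting the MessageSource interface instead of wiring the concrete class through XML:
import java.util.Locale;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.MessageSource;
import org.springframework.stereotype.Controller;

@Controller
public class TestController {

    // Spring injects the bean registered under the id "messageSource".
    @Autowired
    private MessageSource messageSource;

    public String hello() {
        // null args, default locale; returns "Hello JSTL, Hello Controller" for the example file above
        return messageSource.getMessage("hello", null, Locale.getDefault());
    }
}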
Using JSTL
Since the property file messageSource.properties is in the classpath, JSTL will be able to find it and fetch the value for the given key.
Add the import of the fmt taglib
taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt"
Use the fmt: tags in the JSP to fetch a value from the property file:
<fmt:bundle basename="messageSource">
    <fmt:message key="hello"/>
</fmt:bundle>
Hope this helps others
Long-time ASP.NET interface developer being asked to learn WCF here, looking for some education on the more architecture-related fronts, as it's not my strong suit but I'm having to deal with it.
In our current ASMX world we adopted a model of creating static ServiceManager classes for our interaction with web services. We're starting to migrate to WCF and attempting to follow the same model. At first I was dealing with performance problems, but I've tweaked a bit and we're running smoothly now; still, I'm questioning my tactics. Here's a simplified version (error handling, caching and object manipulation removed) of what we're doing:
public static class ContentManager
{
    private static StoryManagerClient _clientProxy = null;
    const string _contentServiceResourceCode = "StorySvc";

    // FOR CACHING
    const int _getStoriesTTL = 300;
    private static Dictionary<string, GetStoriesCacheItem> _getStoriesCache = new Dictionary<string, GetStoriesCacheItem>();
    private static ReaderWriterLockSlim _cacheLockStories = new ReaderWriterLockSlim();

    public static Story[] GetStories(string categoryGuid)
    {
        // OMITTED - if category is cached and not expired, return from cache

        // get endpoint address from FinderClient (ResourceManagement SVC)
        UrlResource ur = FinderClient.GetUrlResource(_contentServiceResourceCode);

        // Get proxy
        StoryManagerClient svc = GetStoryServiceClient(ur.Url);

        // create request params
        GetStoriesRequest request = new GetStoriesRequest { }; // SIMPLIFIED
        Manifest manifest = new Manifest { }; // SIMPLIFIED

        // execute GetStories at WCF service
        GetStoriesResponse response;
        try
        {
            response = svc.GetStories(manifest, request);
        }
        catch (Exception)
        {
            if (svc.State == CommunicationState.Faulted)
            {
                svc.Abort();
            }
            throw;
        }

        // OMITTED - do stuff with response, cache if needed
        // return....
    }

    internal static StoryManagerClient GetStoryServiceClient(string endpointAddress)
    {
        if (_clientProxy == null)
            _clientProxy = new StoryManagerClient(GetServiceBinding(_contentServiceResourceCode), new EndpointAddress(endpointAddress));
        return _clientProxy;
    }

    public static Binding GetServiceBinding(string bindingSettingName)
    {
        // uses Finder service to load a binding object - our alternative to definition in web.config
    }

    public static void PreloadContentServiceClient()
    {
        // get finder location
        UrlResource ur = FinderClient.GetUrlResource(_contentServiceResourceCode);

        // preload proxy
        GetStoryServiceClient(ur.Url);
    }
}
We're running smoothly now, with round-trip calls completing in the 100ms range. Creating the PreloadContentServiceClient() method and adding it to our global.asax got that "first call" performance down to the same level. You might also want to know that we're using the DataContractSerializer and the "Add Service Reference" method.
I've done a lot of reading on static classes, singletons, shared data contract assemblies, how to use the ChannelFactory pattern and a whole bunch of other things I could do to our usage model... admittedly, some of it has gone over my head. And, like I said, we seem to be running smoothly. I know I'm not seeing the big picture, though. Can someone tell me what I've ended up with here with regard to channel pooling, proxy failures, etc., and why I should head down the ChannelFactory path? My gut says to just do it, but my head can't comprehend why...
Thanks!
ChannelFactory is typically used when you aren't using Add Service Reference - you have the contract via a shared assembly, not generated via a WSDL. Add Service Reference uses ClientBase, which essentially creates the WCF channel for you behind the scenes.
When you are dealing with RESTful services, WebChannelFactory provides a service-client-like interface based on the shared assembly contract. You can't use Add Service Reference if your service only supports a RESTful endpoint binding.
The only difference to you is preference - do you need full access to the channel for custom behaviors, bindings, etc., or does Add Service Reference + SOAP give you enough of an interface for your needs?