Confluent Kafka consumer consumes messages only after changing groupId - .net-core

I have a .NET Core console application that uses Confluent.Kafka.
I build a consumer for consuming messages from a specific topic.
The app is intended to run a few times every day, consume the messages on the specified topic, and process them.
It took me a while to understand the consumer's behavior: it will consume messages only if its groupId is one that was never used before.
Every time I change the consumer's groupId, the consumer fetches the messages in the subscribed topic. But on the next runs with the same groupId, consumer.Consume returns null.
This behavior seems related to rebalancing between consumers in the same group. But I don't understand why, since the consumer should exist only for the application's lifetime. Before leaving the app, I call consumer.Close() and consumer.Dispose(). These should destroy the consumer, so that on the next run, when I create the consumer again, it will be the first and only consumer in the specified groupId. But as I said, this is not what happens in fact.
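From what I have read (my assumption, to be confirmed): committed offsets are stored on the broker per groupId, not inside the consumer object, so Close()/Dispose() do not reset them, and AutoOffsetReset applies only when the group has no committed offset yet. A minimal sketch of forcing a full re-read on every run with a fresh groupId, just to illustrate the mechanism:

var cConfig = new ConsumerConfig
{
    BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers"),
    // Fresh group on every run => no committed offset => Earliest takes effect
    GroupId = $"smtp-consumer-{Guid.NewGuid()}",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnablePartitionEof = true
};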
I know there are messages on the topic - I checked via the command line. I also made sure the topic has only one partition.
The weirdest thing is that I have another .NET Core console app that does the same process, with no issue at all.
I attach the code of the two apps.
Working app - always consuming:
class Program
{
    ...
    static void Main(string[] args)
    {
        if (args.Length != 2)
        {
            Console.WriteLine("Please provide topic name to read and SMTP topic name");
        }
        else
        {
            var services = new ServiceCollection();
            services.AddSingleton<ConsumerConfig, ConsumerConfig>();
            services.AddSingleton<ProducerConfig, ProducerConfig>();
            var serviceProvider = services.BuildServiceProvider();
            var cConfig = serviceProvider.GetService<ConsumerConfig>();
            var pConfig = serviceProvider.GetService<ProducerConfig>();
            cConfig.BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers");
            cConfig.GroupId = "confluence-consumer";
            cConfig.EnableAutoCommit = true;
            cConfig.StatisticsIntervalMs = 5000;
            cConfig.SessionTimeoutMs = 6000;
            cConfig.AutoOffsetReset = AutoOffsetReset.Earliest;
            cConfig.EnablePartitionEof = true;
            pConfig.BootstrapServers = Environment.GetEnvironmentVariable("producer_bootstrap_servers");
            var consumer = new ConsumerHelper(cConfig, args[0]);
            messages = new Dictionary<string, Dictionary<string, UserMsg>>();
            var result = consumer.ReadMessage();
            while (result != null && !result.IsPartitionEOF)
            {
                Console.WriteLine($"Current consumed msg-json: {result.Message.Value}");
                ...
                result = consumer.ReadMessage();
            }
            consumer.Close();
            Console.WriteLine($"Done consuming messages from topic {args[0]}");
        }
    }
}
ConsumerHelper.cs:
namespace AggregateMailing
{
    using System;
    using Confluent.Kafka;

    public class ConsumerHelper
    {
        private string _topicName;
        private ConsumerConfig _consumerConfig;
        private IConsumer<string, string> _consumer;

        public ConsumerHelper(ConsumerConfig consumerConfig, string topicName)
        {
            try
            {
                _topicName = topicName;
                _consumerConfig = consumerConfig;
                var builder = new ConsumerBuilder<string, string>(_consumerConfig);
                _consumer = builder.Build();
                _consumer.Subscribe(_topicName);
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumerHelper: {exc.ToString()}");
            }
        }

        public ConsumeResult<string, string> ReadMessage()
        {
            Console.WriteLine("ReadMessage: start");
            try
            {
                return _consumer.Consume();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ReadMessage: {exc.ToString()}");
                return null;
            }
        }

        public void Close()
        {
            Console.WriteLine("Close: start");
            try
            {
                _consumer.Close();
                _consumer.Dispose();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on Close: {exc.ToString()}");
            }
        }
    }
}
Not working app - consuming only on the first run after changing the consumer groupId to one never used before:
Program.cs:
class Program
{
    private static SmtpClient smtpClient;
    private static Random random = new Random();

    static void Main(string[] args)
    {
        try
        {
            var services = new ServiceCollection();
            services.AddSingleton<ConsumerConfig, ConsumerConfig>();
            services.AddSingleton<SmtpClient>(new SmtpClient("smtp.gmail.com"));
            var serviceProvider = services.BuildServiceProvider();
            var cConfig = serviceProvider.GetService<ConsumerConfig>();
            cConfig.BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers");
            cConfig.GroupId = "smtp-consumer";
            cConfig.EnableAutoCommit = true;
            cConfig.StatisticsIntervalMs = 5000;
            cConfig.SessionTimeoutMs = 6000;
            cConfig.AutoOffsetReset = AutoOffsetReset.Earliest;
            cConfig.EnablePartitionEof = true;
            var consumer = new ConsumerHelper(cConfig, args[0]);
            ...
            var result = consumer.ReadMessage();
            while (result != null && !result.IsPartitionEOF)
            {
                Console.WriteLine($"current consumed message: {result.Message.Value}");
                var msg = JsonConvert.DeserializeObject<EmailMsg>(result.Message.Value);
                result = consumer.ReadMessage();
            }
            Console.WriteLine("Done sending emails consumed from SMTP topic");
            consumer.Close();
        }
        catch (System.Exception exc)
        {
            Console.WriteLine($"Error on Main: {exc.ToString()}");
        }
    }
}
ConsumerHelper.cs:
using Confluent.Kafka;
using System;
using System.Collections.Generic;

namespace Mailer
{
    public class ConsumerHelper
    {
        private string _topicName;
        private ConsumerConfig _consumerConfig;
        private IConsumer<string, string> _consumer;

        public ConsumerHelper(ConsumerConfig consumerConfig, string topicName)
        {
            try
            {
                _topicName = topicName;
                _consumerConfig = consumerConfig;
                var builder = new ConsumerBuilder<string, string>(_consumerConfig);
                _consumer = builder.Build();
                _consumer.Subscribe(_topicName);
                //_consumer.Assign(new TopicPartition(_topicName, 0));
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumerHelper: {exc.ToString()}");
            }
        }

        public ConsumeResult<string, string> ReadMessage()
        {
            Console.WriteLine("ConsumeResult: start");
            try
            {
                return _consumer.Consume();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumeResult: {exc.ToString()}");
                return null;
            }
        }

        public void Close()
        {
            Console.WriteLine("Close: start");
            try
            {
                _consumer.Close();
                _consumer.Dispose();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on Close: {exc.ToString()}");
            }
            Console.WriteLine("Close: end");
        }
    }
}
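For debugging, a ReadMessage variant with a timeout might help (a sketch; Consume(TimeSpan) returns null when nothing arrives in time, which would distinguish "the group is already past the last message" from a real failure):

public ConsumeResult<string, string> ReadMessage(TimeSpan timeout)
{
    // Consume(timeout) returns null on timeout instead of blocking indefinitely
    var result = _consumer.Consume(timeout);
    if (result == null)
        Console.WriteLine("No message within timeout - committed offset may already be at the end");
    else if (result.IsPartitionEOF)
        Console.WriteLine($"Reached end of partition {result.Partition} at offset {result.Offset}");
    return result;
}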

Related

How to manage WebSocket objects that are no longer needed ASP.Net Core

I am using ASP.NET Core 3.1. I want to create a WebSockets backend, for example for a chat app, so I need to store all the related WebSocket objects for broadcasting events. My question is: what is the best way to manage removing objects that are no longer useful (disconnected or no longer open), keeping in mind that I want other parts of the application to be able to access the WebSocket groups to broadcast events as well?
I store the related connections in a ConnectionNode, which is the nearest layer to the WebSocket objects. A class called WebSocketsManager manages these nodes, and a background service clears the unused objects every timeout period. Since I want the group (related connections) to be accessible to the rest of the application (for example, other endpoints), and to avoid concurrent-modification errors, a broadcast requested during the cleaning process has to wait for the cleaning to finish. That is why the WebSocketsManager, when the related connections grow past a certain limit, divides them into multiple ConnectionNodes; that way the cleaning process can continue partially for related connections while broadcasting if needed. I want to know how well this solution will behave, or what the best way to do it is. Any help would be really appreciated.
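For comparison, a minimal lock-based version of the same node idea (a sketch only, not the code I run; the BroadcastQueue is omitted and names are illustrative):

public class LockingConnectionNode
{
    private readonly object _sync = new object();
    private readonly List<WebSocket> _connections = new List<WebSocket>();

    public void AddConnection(WebSocket socket)
    {
        lock (_sync) { _connections.Add(socket); }
    }

    public void Broadcast(Broadcast broadcast)
    {
        lock (_sync)
        {
            foreach (var ws in _connections)
                broadcast.Dispatch(ws);
        }
    }

    public int CleanUnusedConnections()
    {
        // RemoveAll cannot interleave with Broadcast, so no busy-wait flags are needed
        lock (_sync) { return _connections.RemoveAll(s => s.State != WebSocketState.Open); }
    }
}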
ConnectionNode
public class ConnectionNode
{
    private List<WebSocket> connections;
    private BroadcastQueue BroadcastQueue = new BroadcastQueue();
    private bool isBroadCasting = false;
    private bool isCleaning = false;

    public void AddConnection(WebSocket socket)
    {
        if (connections == null)
            connections = new List<WebSocket>();
        connections.Add(socket);
    }

    public void Broadcast(Broadcast broadCast)
    {
        while (isCleaning)
        {
        }
        BroadcastQueue.QueueBroadcast(broadCast);
        if (isBroadCasting)
        {
            return;
        }
        isBroadCasting = true;
        var broadcast = BroadcastQueue.GetNext();
        while (broadCast != null)
        {
            foreach (var ws in connections)
            {
                broadCast.Dispatch(ws);
            }
            broadCast = BroadcastQueue.GetNext();
        }
        isBroadCasting = false;
    }

    public int CleanUnUsedConnections()
    {
        if (isBroadCasting)
            return 0;
        isCleaning = true;
        var i = connections.RemoveAll(s => s.State != WebSocketState.Open);
        isCleaning = false;
        return i;
    }

    public int ConnectionsCount()
    {
        return connections.Count;
    }
}
Manager class
public class WebSocketsManager
{
    static int ConnectionNodesDividerLimit = 1000;
    private ConcurrentDictionary<String, List<ConnectionNode>> mConnectionNodes;
    private readonly ILogger<WebSocketsManager> logger;

    public WebSocketsManager(ILogger<WebSocketsManager> logger)
    {
        this.logger = logger;
    }

    public ConnectionNode RequireNode(string Id)
    {
        if (mConnectionNodes == null)
            mConnectionNodes = new ConcurrentDictionary<String, List<ConnectionNode>>();
        var node = mConnectionNodes.GetValueOrDefault(Id);
        if (node == null)
        {
            node = new List<ConnectionNode>();
            node.Add(new ConnectionNode());
            mConnectionNodes.TryAdd(Id, node);
            return node[0];
        }
        if (ConnectionNodesDividerLimit != 0)
        {
            if (node[0].ConnectionsCount() == ConnectionNodesDividerLimit)
            {
                node.Insert(0, new ConnectionNode());
            }
        }
        return node[0];
    }

    public void ClearUnusedConnections()
    {
        logger.LogInformation("Manager is Clearing ..");
        if (mConnectionNodes == null)
            return;
        if (mConnectionNodes.IsEmpty)
        {
            logger.LogInformation("Empty ## Nothing to clear ..");
            return;
        }
        Dictionary<String, ConnectionNode> ToBeRemovedNodes = new Dictionary<String, ConnectionNode>();
        foreach (var pair in mConnectionNodes)
        {
            bool shoudlRemoveStack = true;
            foreach (var node in pair.Value)
            {
                int i = node.CleanUnUsedConnections();
                logger.LogInformation($"Removed ${i} from connection node(s){pair.Key}");
                if (node.ConnectionsCount() == 0)
                {
                    ToBeRemovedNodes[pair.Key] = node;
                    logger.LogInformation($"To be Removed A node From ..{pair.Key}");
                }
                else
                {
                    shoudlRemoveStack = false;
                }
            }
            if (shoudlRemoveStack)
            {
                ToBeRemovedNodes.Remove(pair.Key);
                List<ConnectionNode> v = null;
                var b = mConnectionNodes.TryRemove(pair.Key, out v);
                logger.LogInformation($"Removing the Stack ..{pair.Key} Removed ${b}");
            }
        }
        foreach (var pair in ToBeRemovedNodes)
        {
            mConnectionNodes[pair.Key].Remove(pair.Value);
            logger.LogInformation($"Clearing Nodes : Clearing Nodes from stack #{pair.Key}");
        }
    }

    public void Broadcast(string id, Broadcast broadcast)
    {
        var c = mConnectionNodes.GetValueOrDefault(id);
        foreach (var node in c)
        {
            node.Broadcast(broadcast);
        }
    }
}
The service
public class SocketsConnectionsCleaningService : BackgroundService
{
    private readonly IServiceProvider Povider;
    private Timer Timer = null;
    private bool isRunning = false;
    private readonly ILogger Logger;

    public SocketsConnectionsCleaningService(IServiceProvider Provider, ILogger<SocketsConnectionsCleaningService> Logger)
    {
        this.Povider = Provider;
        this.Logger = Logger;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        Logger.LogInformation("Execute Sync is called ");
        Timer = new Timer(DeleteClosedConnections, null, TimeSpan.FromMinutes(0), TimeSpan.FromMinutes(2));
        return Task.CompletedTask;
    }

    private void DeleteClosedConnections(object state)
    {
        Logger.LogInformation("Clearing ");
        if (isRunning)
        {
            Logger.LogInformation("A Task is Running Return ");
            return;
        }
        isRunning = true;
        var connectionManager = Povider.GetService(typeof(WebSocketsManager)) as WebSocketsManager;
        connectionManager.ClearUnusedConnections();
        isRunning = false;
        Logger.LogInformation($"Finished Cleaning !");
    }
}
Usage in a controller:
[HttpGet("ws")]
public async Task SomeRealtimeFunction()
{
if (HttpContext.IsWebSocketsRequest())
{
using var socket = await HttpContext.AcceptSocketRequest();
try
{
await socket.SendString(" Connected! ");
webSocketsManager.RequireNode("Chat Room")
.AddConnection(socket);
var RecieverHelper = socket.GetRecieveResultsHelper();
string str = await RecieverHelper.ReceiveString();
while (!RecieverHelper.Result.CloseStatus.HasValue)
{
webSocketsManager
.Broadcast("Chat Room", new StringBroadcast(str));
str = await RecieverHelper.ReceiveString();
}
}
catch (Exception e)
{
await socket.SendString("Error!");
await socket.SendString(e.Message);
await socket.SendString(e.ToString());
}
}
else
{
HttpContext.Response.StatusCode = 400;
}
}

How to implement asynchronous data streaming in .Net Core Service Bus triggered Azure Function processing huge data not to get OutOfMemoryException?

I have a Service Bus triggered Azure Function which listens for messages containing just blob URL strings of JSON data, each of which is at least 10 MB.
The message queue is near real-time (if I use the correct term), so producers keep putting messages into the queue frequently enough that there is always data to be processed.
I have designed a solution, but it gets an OutOfMemoryException most of the time. The steps involved in the current solution, sequentially, are:
Consume a message
Download the file from the URL within the consumed message to a temporary folder
Read the whole file as a string
Deserialize it to an object
Partition into the chunks to supply Mongo bulk upsert limit
Bulk upsert to Mongo
I have tried to solve the OutOfMemoryException, and I think it occurs because my function/consumer doesn't keep the same pace as the producer: at time t1 it gets the first message and processes it, and while it's upserting to Mongo, the function keeps receiving messages, which accumulate in memory waiting to be upserted.
Is my reasoning right?
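If it is, one mitigation might be capping the trigger's concurrency in host.json (a sketch, assuming the Functions v2 runtime and the Service Bus extension; I have not verified how much this alone helps memory):

{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 1
      }
    }
  }
}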
Thus I think that if I could implement a streaming solution starting from step 3, reading from the file in chunks and feeding them into a stream, I would prevent the memory from growing and also reduce time. I have mostly a Java background, and I know that with a custom iterator/spliterator/iterable it is possible to do streaming and asynchronous processing there.
How can I do asynchronous data streaming with .Net Core in an Azure Function?
Are there other approaches to solve this problem?
namespace x.y.Z
{
    public class MyFunction
    {
        //...
        [FunctionName("my-func")]
        public async Task Run([ServiceBusTrigger("my-topic", "my-subscription", Connection = "AzureServiceBus")] string message, ILogger log, ExecutionContext context)
        {
            var data = new PredictionMessage();
            try
            {
                data = myPredictionService.genericDeserialize(message);
                await myPredictionService.ValidateAsync(data);
                await myPredictionService.AddAsync(data);
            }
            catch (Exception ex)
            {
                //...
            }
        }
    }
}

public class PredictionMessage
{
    public string BlobURL { get; set; }
}
namespace x.y.z.Prediction
{
    public abstract class BasePredictionService<T> : IBasePredictionService<T> where T : PredictionMessage, new()
    {
        protected readonly ILogger log;
        private static JsonSerializer serializer;

        public BasePredictionService(ILogger<BasePredictionService<T>> log)
        {
            this.log = log;
            serializer = new JsonSerializer();
        }

        public async Task ValidateAsync(T message)
        {
            //...
        }

        public T genericDeserialize(string message)
        {
            return JsonConvert.DeserializeObject<T>(message);
        }

        public virtual Task AddAsync(T message)
        {
            throw new System.NotImplementedException();
        }

        public async Task<string> SerializePredictionResult(T message)
        {
            var result = string.Empty;
            using (WebClient client = new WebClient())
            {
                var tempPath = Path.Combine(Path.GetTempPath(), DateTime.Now.Ticks + ".json");
                Uri srcPath = new Uri(message.BlobURL);
                await client.DownloadFileTaskAsync(srcPath, tempPath);
                using (FileStream fs = File.Open(tempPath, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    using (BufferedStream bs = new BufferedStream(fs))
                    using (StreamReader sr = new StreamReader(bs))
                    {
                        result = sr.ReadToEnd();
                    }
                }
                Task.Run(() =>
                {
                    File.Delete(tempPath);
                });
                return result;
            }
        }

        protected TType StreamDataDeserialize<TType>(string streamResult)
        {
            var body = default(TType);
            using (MemoryStream stream = new MemoryStream(Encoding.Default.GetBytes(streamResult)))
            {
                using (StreamReader streamReader = new StreamReader(stream))
                {
                    body = (TType)serializer.Deserialize(streamReader, typeof(TType));
                }
            }
            return body;
        }

        protected List<List<TType>> Split<TType>(List<TType> list, int chunkSize = 1000)
        {
            List<List<TType>> retVal = new List<List<TType>>();
            while (list.Count > 0)
            {
                int count = list.Count > chunkSize ? chunkSize : list.Count;
                retVal.Add(list.GetRange(0, count));
                list.RemoveRange(0, count);
            }
            return retVal;
        }
    }
}
namespace x.y.z.Prediction
{
    public class MyPredictionService : BasePredictionService<PredictionMessage>, IMyPredictionService
    {
        private readonly IMongoDBRepository<MyPrediction> repository;

        public MyPredictionService(IMongoDBRepoFactory mongoDBRepoFactory, ILogger<MyPredictionService> log) : base(log)
        {
            repository = mongoDBRepoFactory.GetRepo<MyPrediction>();
        }

        public override async Task AddAsync(PredictionMessage message)
        {
            string streamResult = await base.SerializePredictionResult(message);
            var body = base.StreamDataDeserialize<List<MyPrediction>>(streamResult);
            if (body != null && body.Count > 0)
            {
                var chunkList = base.Split(body);
                await BulkUpsertProcess(chunkList);
            }
        }

        private async Task BulkUpsertProcess(List<List<MyPrediction>> chunkList)
        {
            foreach (var perChunk in chunkList)
            {
                var filterContainers = new List<IDictionary<string, object>>();
                var updateContainer = new List<IDictionary<string, object>>();
                foreach (var item in perChunk)
                {
                    var filter = new Dictionary<string, object>();
                    var update = new Dictionary<string, object>();
                    filter.Add(/*...*/);
                    filterContainers.Add(filter);
                    update.Add(/*...*/);
                    updateContainer.Add(update);
                }
                await Task.Run(async () =>
                {
                    await repository.BulkUpsertAsync(filterContainers, updateContainer);
                });
            }
        }
    }
}
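The streaming version I have in mind would replace SerializePredictionResult and StreamDataDeserialize with something roughly like this (a sketch under assumptions: Newtonsoft.Json's JsonTextReader, and an illustrative single-list BulkUpsertAsync overload that my repository does not actually have yet):

public override async Task AddAsync(PredictionMessage message)
{
    using (var client = new HttpClient())
    using (var response = await client.GetAsync(message.BlobURL, HttpCompletionOption.ResponseHeadersRead))
    using (var stream = await response.Content.ReadAsStreamAsync())
    using (var streamReader = new StreamReader(stream))
    using (var jsonReader = new JsonTextReader(streamReader))
    {
        var serializer = new JsonSerializer();
        var batch = new List<MyPrediction>(1000);
        while (await jsonReader.ReadAsync())
        {
            // Each StartObject token is one array element; deserialize it on its own
            if (jsonReader.TokenType != JsonToken.StartObject)
                continue;
            batch.Add(serializer.Deserialize<MyPrediction>(jsonReader));
            if (batch.Count == 1000)
            {
                await repository.BulkUpsertAsync(batch); // illustrative overload
                batch.Clear();
            }
        }
        if (batch.Count > 0)
            await repository.BulkUpsertAsync(batch); // illustrative overload
    }
}

This way only one 1000-element batch is in memory at a time instead of the whole 10 MB+ document.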

Circuit Breaker with gRPC

In a REST service, adding a circuit breaker with Hystrix, I could do the following:
@HystrixCommand(fallbackMethod = "getBackupResult")
@GetMapping(value = "/result")
public ResponseEntity<ResultDto> getResult(@RequestParam("request") String someRequest) {
    ResultDto resultDto = service.parserRequest(someRequest);
    return new ResponseEntity<>(resultDto, HttpStatus.OK);
}

public ResponseEntity<ResultDto> getBackupResult(@RequestParam("request") String someRequest) {
    ResultDto resultDto = new ResultDto();
    return new ResponseEntity<>(resultDto, HttpStatus.OK);
}
Is there something similar I can do for the gRPC call?
public void parseRequest(ParseRequest request, StreamObserver<ParseResponse> responseObserver) {
    try {
        ParseResponse parseResponse = service.parseRequest(request.getSomeRequest());
        responseObserver.onNext(parseResponse);
        responseObserver.onCompleted();
    } catch (Exception e) {
        logger.error("Failed to execute parse request.", e);
        responseObserver.onError(new StatusException(Status.INTERNAL));
    }
}
I solved my problem by implementing the circuit breaker on my client, using the Sentinel library.
To react to the exception ratio, for example, I added this rule:
private static final String KEY = "callGRPC";

private void callGRPC(List<String> userAgents) {
    initDegradeRule();
    ManagedChannel channel = ManagedChannelBuilder.forAddress(grpcHost, grpcPort).usePlaintext()
            .build();
    for (String userAgent : userAgents) {
        Entry entry = null;
        try {
            entry = SphU.entry(KEY);
            UserAgentServiceGrpc.UserAgentServiceBlockingStub stub
                    = UserAgentServiceGrpc.newBlockingStub(channel);
            UserAgentParseRequest request = UserAgentRequest.newBuilder().setUserAgent(userAgent).build();
            UserAgentResponse userAgentResponse = stub.getUserAgentDetails(request);
        } catch (BlockException e) {
            logger.error("Circuit-breaker is on and the call has been blocked");
        } catch (Throwable t) {
            logger.error("Exception was thrown", t);
        } finally {
            if (entry != null) {
                entry.exit();
            }
        }
    }
    channel.shutdown();
}

private void initDegradeRule() {
    List<DegradeRule> rules = new ArrayList<DegradeRule>();
    DegradeRule rule = new DegradeRule();
    rule.setResource(KEY);
    rule.setCount(0.5);
    rule.setGrade(RuleConstant.DEGRADE_GRADE_EXCEPTION_RATIO);
    rule.setTimeWindow(60);
    rules.add(rule);
    DegradeRuleManager.loadRules(rules);
}

App Crash when comes back from Sleep

I am using Xamarin for cross-platform app development. We have used Azure Mobile Services to connect to the database. The application is for chat, so we have used SignalR with the .NET Framework. When the app comes back from sleep after some duration, like 60 seconds, it crashes. Is there any way to reconnect using SignalR? The issue seems to be with SignalR. Where exactly do I need to update the code, on the client side or the server side?
Client side code
public class SignalRClient
{
    private static string CONNECTION_URL = "http://";
    private static TimeSpan CONNECT_TIMEOUT = new TimeSpan(0, 0, 30);
    private readonly HubConnection _hubConnection;
    private readonly IHubProxy _chatHubProxy;
    private static string AuthToken = "";
    public string UserID;
    public event SignalRConnectionStateChangedDelegate SignalRConnectionStateChangedEvent;

    public SignalRClient(string authToken)
    {
        AuthToken = authToken;
        _hubConnection = new HubConnection(CONNECTION_URL);
        _hubConnection.Headers["xauth"] = AuthToken;
        _hubConnection.TransportConnectTimeout = CONNECT_TIMEOUT;
        _hubConnection.Error += ex =>
        {
            if (SignalRConnectionStateChangedEvent != null)
            {
                _hubConnection.Stop();
                SignalRConnectionStateChangedEvent("Error :" + ex.Message);
            }
        };
        _chatHubProxy = _hubConnection.CreateHubProxy("ChatServer");
    }

    public async Task Connect(string UserID)
    {
        if (_hubConnection.State != ConnectionState.Connected)
        {
            try
            {
                await _hubConnection.Start();
                _hubConnection.StateChanged += (connectionState) =>
                {
                    if (SignalRConnectionStateChangedEvent != null)
                    {
                        SignalRConnectionStateChangedEvent(connectionState.NewState.ToString().ToLower());
                    }
                };
            }
            catch (Exception ex)
            {
                var message = ex.Message;
            }
        }
    }
}
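What I am considering on the client side is restarting the connection when it drops (a sketch, assuming the Microsoft.AspNet.SignalR.Client package, where HubConnection exposes a Closed event; the delay is arbitrary):

// In the SignalRClient constructor, after creating _hubConnection:
_hubConnection.Closed += async () =>
{
    await Task.Delay(TimeSpan.FromSeconds(5)); // arbitrary back-off before reconnecting
    try
    {
        await _hubConnection.Start();
    }
    catch (Exception)
    {
        // Still offline; a real implementation would retry with growing back-off
    }
};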

How to call EJB from another app on the same server?

I have a Java SE sample client which runs on a desktop (code below). But I have access to the WebSphere server where the called EJB is deployed. How do I rewrite the code below to work on WebSphere? (When I leave this code just as it is, the program works, but I think this can be done more simply and clearly.)
Main method:
WSConn connection = new WSConn();
final Plan plan = connection.getPlanBean();
com.ibm.websphere.security.auth.WSSubject.doAs(connection.getSubject(), new java.security.PrivilegedAction<Object>() {
    public Object run() {
        try {
            // App logic
        } catch (Throwable t) {
            System.err.println("PrivilegedAction - Error calling EJB: " + t);
            t.printStackTrace();
        }
        return null;
    }
}); // end doAs
WSConn class:
public class WSConn {
    private static final String INITIAL_CONTEXT_FACTORY = "com.ibm.websphere.naming.WsnInitialContextFactory";
    private static final String JAAS_MODULE = "WSLogin";
    private static final String MODEL_EJB_NAME_LONG = "ejb/com/ibm/ModelHome";
    private static final String PLAN_EJB_NAME_LONG = "ejb/com/ibm/PlanHome";
    private Subject subject;
    private InitialContext initialContext;
    private String serverName;
    private String serverPort;
    private String uid;
    private String pwd;
    private String remoteServerName;
    private Model modelBean;
    private Plan planBean;

    public WSConn() {
        Properties props = new Properties();
        try {
            props.load(WSConn.class.getClassLoader().getResourceAsStream("WSConn.properties"));
        } catch (IOException e) {
            e.printStackTrace();
        }
        serverName = props.getProperty("WSConn.serverName");
        serverPort = props.getProperty("WSConn.serverPort");
        uid = props.getProperty("WSConn.userID");
        pwd = props.getProperty("WSConn.password");
        remoteServerName = props.getProperty("WSConn.remoteServerName");
    }

    private void init() {
        if (subject == null || initialContext == null) {
            subject = login();
        }
    }

    private Subject login() {
        Subject subject = null;
        try {
            LoginContext lc = null;
            // CREATE LOGIN CONTEXT
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY);
            env.put(Context.PROVIDER_URL, "corbaloc:iiop:" + serverName + ":" + serverPort);
            initialContext = new InitialContext(env);
            // Just to test the connection
            initialContext.lookup("");
            lc = new LoginContext(JAAS_MODULE, new WSCallbackHandlerImpl(uid, pwd));
            lc.login();
            subject = lc.getSubject();
        } catch (javax.naming.NoPermissionException exc) {
            System.err.println("[WSConn] - Login Error: " + exc);
        } catch (Exception exc) {
            System.err.println("[WSConn] - Error: " + exc);
        }
        return subject;
    }

    public wModel getModelBean() {
        if (modelBean == null) {
            init();
            modelBean = (wModel) com.ibm.websphere.security.auth.WSSubject.doAs(subject,
                    new java.security.PrivilegedAction<wModel>() {
                        public wModel run() {
                            wModel session = null;
                            try {
                                Object o = initialContext.lookup(MODEL_EJB_NAME_LONG);
                                wModelHome home = (wModelHome) PortableRemoteObject.narrow(o, wModelHome.class);
                                if (home != null) {
                                    session = home.create(remoteServerName);
                                }
                            } catch (Exception exc) {
                                System.err.println("Error getting model bean: " + exc);
                            }
                            return session;
                        }
                    }); // end doAs
        }
        return modelBean;
    }

    public wPlan getPlanBean() {
        if (planBean == null) {
            init();
            planBean = (wPlan) com.ibm.websphere.security.auth.WSSubject.doAs(subject,
                    new java.security.PrivilegedAction<wPlan>() {
                        public wPlan run() {
                            wPlan session = null;
                            try {
                                Object o = initialContext.lookup(PLAN_EJB_NAME_LONG);
                                wPlanHome home = (wPlanHome) PortableRemoteObject.narrow(o, wPlanHome.class);
                                if (home != null) {
                                    session = home.create(remoteServerName);
                                }
                            } catch (Exception exc) {
                                System.err.println("Error getting plan bean: " + exc);
                            }
                            return session;
                        }
                    }); // end doAs
        }
        return planBean;
    }

    public Subject getSubject() {
        if (subject == null) {
            init();
        }
        return subject;
    }
}
As indicated in another answer, the classic mechanism is to look up and narrow the home interface.
Get the initial context
final InitialContext initialContext = new InitialContext();
Look up the home by JNDI name, specifying the full JNDI name
Object obj = initialContext.lookup("ejb/com/ibm/tws/conn/plan/ConnPlanHome");
or you can create a reference in your WAR and use java:comp/env/yourname
Then narrow the home to the home interface class
ConnPlanHome planHome = (ConnPlanHome)PortableRemoteObject.narrow(obj, ConnPlanHome.class);
and then create the EJB remote interface
ConnPlan plan = planHome.create();
The above calls should work for IBM Workload Scheduler distributed.
For IBM Workload Scheduler z/OS the JNDI name and the class names are different:
final InitialContext initialContext = new InitialContext();
String engineName = "XXXX";
Object obj = initialContext.lookup("ejb/com/ibm/tws/zconn/plan/ZConnPlanHome");
ZConnPlanHome planHome = (ZConnPlanHome)PortableRemoteObject.narrow(obj, ZConnPlanHome.class);
ZConnPlan plan = planHome.create(engineName);
User credentials are propagated from the client to the engine; the client needs to be authenticated, otherwise the engine will reject the request.
If you're trying to access an EJB from a POJO class, then there is nothing simpler than lookup+narrow. However, if the POJO is included in an application (EAR or WAR), then you could declare and look up an EJB reference (java:comp/ejb/myEJB), and then the container would perform the narrow rather than your code. If you change your code to be a managed class like a servlet, another EJB, or a CDI bean, then you could use @EJB injection, and then you would not even need a lookup.
