I want to list all my Kafka consumers and inspect their state, group, etc.
Before, I was using plain spring-kafka, so I did the following, and it works:
private final KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

var listenerContainers = kafkaListenerEndpointRegistry.getAllListenerContainers();
listenerContainers.forEach(mlc -> {
    var isRunning = mlc.isRunning();
    var group = mlc.getGroupId();
    // other checks
});
But now that I am using spring-cloud-stream with Kafka, listenerContainers is an empty list!
How can I do the same with Spring Cloud Stream?
With Spring Cloud Stream, add a ListenerContainerCustomizer bean and capture each listener container as it is created (e.g. store them in a list):
private final List<AbstractMessageListenerContainer<?, ?>> containers = new ArrayList<>();

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    return (container, dest, group) -> this.containers.add(container);
}

You can then iterate this list and call isRunning(), getGroupId(), etc., exactly as you did with the registry.
I am trying to use Serilog with the Application Insights sink for logging purposes. I can see the logs via Search in the Azure Portal (Application Insights), but the same logs are not visible in the timeline of events in the Failures or Performance tabs. Thanks.
Below is the code I am using for registering the logger in the function Startup, which is then injected into the function for logging:
var logger = new LoggerConfiguration()
.Enrich.FromLogContext()
.Enrich.WithProperty("ApplicationName", "testApp")
.Enrich.WithProperty("Environment", "Dev")
.WriteTo.ApplicationInsights(GetTelemetryClient("Instrumentationkey"), TelemetryConverter.Traces)
.CreateLogger();
builder.Services.AddSingleton<ILogger>(logger);
The TelemetryClient is fetched from a helper method:
public static TelemetryClient GetTelemetryClient(string key)
{
var teleConfig = new TelemetryConfiguration { InstrumentationKey = key };
var teleClient = new TelemetryClient(teleConfig);
return teleClient;
}
host.json
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingExcludedTypes": "Request",
"samplingSettings": {
"isEnabled": true
}
}
}
}
I get your point, and please allow me to sum up my testing results here.
First, the Failures blade is not designed to provide a timeline for tracing details (what happened before the exception took place), but to show all the exceptions: how often each error happened, how many users were affected, etc. It stands at a high level to survey the whole program.
To achieve your goal, I think you can use this KQL query in the Logs blade, or watch it in the end-to-end transaction blade:
union traces, requests,exceptions
| where operation_Id == "178845c426975d4eb96ba5f7b5f376e1"
Basically, we may add many logs along the executing chain, e.g. in the controller: log the input parameter, then log the result of data combining or formatting, and log the exception information in the catch block. Here's my testing code. Like you, I can't see any extra information in the Failures blade, but in the transaction blade I can see the timeline:
public class HelloController : Controller
{
public string greet(string name)
{
Log.Verbose("come to greet function");
Log.Debug("serilog_debug_info");
Log.Information("greet name input " + name);
int count = int.Parse(name); // deliberately throws for non-numeric input
Log.Warning("enter greet name is : {0}", count);
return "hello " + name;
}
}
And we can easily see that the whole chain shares the same operation_Id, and via all these logs we can pinpoint the offending line of code. By the way, if I surround the code with try/catch, the exception won't be captured in the Failures blade.
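For example, a minimal sketch of that try/catch variant inside greet (the catch block and message are my illustration, not from the original test):
// With the exception swallowed here, nothing reaches the Failures blade;
// only the Serilog trace is recorded in the transaction search.
try
{
    int count = int.Parse(name);
}
catch (FormatException ex)
{
    Log.Error(ex, "failed to parse name {Name}", name);
}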
==================================
When integrating Serilog with Application Insights, we send the Serilog output to Application Insights and will see lots of traces in the transaction search, so it's better to set the MinimumLevel to Information or higher. We can also use a KQL query by operation_Id to see the whole chain.
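For instance, a minimal sketch of raising the minimum level, reusing the sink registration shown earlier:
var logger = new LoggerConfiguration()
    .MinimumLevel.Information() // Verbose/Debug events never reach the sink
    .Enrich.FromLogContext()
    .WriteTo.ApplicationInsights(GetTelemetryClient("Instrumentationkey"), TelemetryConverter.Traces)
    .CreateLogger();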
You can solve this by following the solution provided by Azure Application Insights on their GitHub repo. As per this GitHub issue, you can either use DI to configure TelemetryConfiguration, i.e.
services.Configure<TelemetryConfiguration>(
(o) => {
o.InstrumentationKey = "123";
o.TelemetryInitializers.Add(new OperationCorrelationTelemetryInitializer());
});
or you can configure it manually like this:
var config = TelemetryConfiguration.CreateDefault();
var client = new TelemetryClient(config);
So in your code, you have to change your GetTelemetryClient from
public static TelemetryClient GetTelemetryClient(string key)
{
var teleConfig = new TelemetryConfiguration { InstrumentationKey = key };
var teleClient = new TelemetryClient(teleConfig);
return teleClient;
}
to this
public static TelemetryClient GetTelemetryClient(string key)
{
var teleConfig = TelemetryConfiguration.CreateDefault();
var teleClient = new TelemetryClient(teleConfig);
return teleClient;
}
In order to use logging with TelemetryConfiguration, as mentioned in the answer above, for Azure Functions we just need to update the function as in the snippet below, and on deployment it should fetch the instrumentation key itself:
public static TelemetryClient GetTelemetryClient()
{
var teleConfig = TelemetryConfiguration.CreateDefault();
var teleClient = new TelemetryClient(teleConfig);
return teleClient;
}
But to run both locally and after deployment to Azure, we need to add something like this in the function Startup and get rid of the function above:
builder.Services.Configure<TelemetryConfiguration>((o) =>
{
o.InstrumentationKey = "KEY";
o.TelemetryInitializers.Add(new OperationCorrelationTelemetryInitializer());
});
builder.Services.AddSingleton<ILogger>(sp =>
{
var logger = new LoggerConfiguration()
.Enrich.FromLogContext()
.Enrich.WithProperty("ApplicationName", "TEST")
.Enrich.WithProperty("Environment", "DEV")
.WriteTo.ApplicationInsights(
sp.GetRequiredService<TelemetryConfiguration>(), TelemetryConverter.Traces).CreateLogger();
return logger;
});
Afterwards we just need typical DI in our classes/Azure Functions to use the ILogger:
public class Test
{
    private readonly ILogger _log;

    public Test(ILogger log)
    {
        _log = log;
    }
}
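As a hypothetical usage sketch, a timer-triggered function with the injected Serilog logger (the function name and schedule are illustrative, not from the original code):
using System;
using Microsoft.Azure.WebJobs;
using Serilog;

public class LogTestFunction
{
    private readonly ILogger _log;

    public LogTestFunction(ILogger log)
    {
        _log = log;
    }

    [FunctionName("LogTestFunction")]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // Serilog message template; appears as a trace in Application Insights
        _log.Information("LogTestFunction executed at {Time}", DateTime.UtcNow);
    }
}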
How can I initialize a node with data?
Let's take the bootcamp's application as an example. There you can issue tokens to other parties.
I want to extend that and check whether the sending node has the tokens in the first place. Only if it has the tokens can it give them to another party.
The problem is that the sender doesn't have any tokens. How can I set a specific amount of tokens to the sender? Is there any other method besides self-issuing the tokens first?
There is no built-in way to initialise the node with certain transactions already completed.
Instead, you'd have to write a small client that you'd execute after creating the node to automatically perform the transaction(s) you want. In the case of the Bootcamp CorDapp, you might write something like:
import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.identity.Party;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.ExecutionException;
// ...plus the import for the Bootcamp CorDapp's TokenIssueFlow.

public class Client {
    private static final Logger logger = LoggerFactory.getLogger(Client.class);

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        if (args.length != 3) throw new IllegalArgumentException("Usage: Client <node address> <rpc username> <rpc password>");
        final NetworkHostAndPort nodeAddress = NetworkHostAndPort.parse(args[0]);
        final String rpcUsername = args[1];
        final String rpcPassword = args[2];

        // Create an RPC connection to the node.
        final CordaRPCClient client = new CordaRPCClient(nodeAddress);
        final CordaRPCOps proxy = client.start(rpcUsername, rpcPassword).getProxy();

        // Issue the tokens to the node's own first legal identity.
        final Party owner = proxy.nodeInfo().getLegalIdentities().get(0);
        final int amount = 100;
        proxy.startFlowDynamic(TokenIssueFlow.class, owner, amount).getReturnValue().get();
    }
}
I'm starting with Redis and the StackExchange.Redis client. I'm wondering whether I'm getting the best performance when fetching multiple items at once from Redis.
Situation:
I have an ASP.NET MVC web application that shows a personal calendar on the user's dashboard. Because the dashboard is the landing page, it's heavily used.
To show the calendar items, I first get all calendar item IDs for that particular month:
RedisManager.RedisDb.StringGet("calendaritems_2016_8");
// this returns JSON Serialized List<int>
Then, for each calendar item id I build a list of corresponding cache keys:
"CalendarItemCache_1"
"CalendarItemCache_2"
"CalendarItemCache_3"
etc.
With this collection I reach out to REDIS with a generic function:
var multipleItems = CacheHelper.GetMultiple<CalendarItemCache>(cacheKeys);
That's implemented like:
public List<T> GetMultiple<T>(List<string> keys) where T : class
{
var taskList = new List<Task>();
var returnList = new ConcurrentBag<T>();
foreach (var key in keys)
{
Task<T> stringGetAsync = RedisManager.RedisDb.StringGetAsync(key)
.ContinueWith(task =>
{
if (!string.IsNullOrWhiteSpace(task.Result))
{
var deserializeFromJson = CurrentSerializer.Serializer.DeserializeFromJson<T>(task.Result);
returnList.Add(deserializeFromJson);
return deserializeFromJson;
}
else
{
return null;
}
});
taskList.Add(stringGetAsync);
}
Task[] tasks = taskList.ToArray();
Task.WaitAll(tasks);
return returnList.ToList();
}
Am I implementing pipelining correctly? The redis-cli monitor shows:
1472728336.718370 [0 127.0.0.1:50335] "GET" "CalendarItemCache_1"
1472728336.718389 [0 127.0.0.1:50335] "GET" "CalendarItemCache_2"
etc.
I'm expecting some kind of MGET command.
Many thanks in advance.
I noticed an overload of StringGet that accepts a RedisKey[]. Using it, I see an MGET command in the monitor:
public List<T> GetMultiple<T>(List<string> keys) where T : class
{
    // StringGet with a RedisKey[] is sent to the server as a single MGET.
    RedisKey[] redisKeys = keys.Select(key => (RedisKey)key).ToArray();
    RedisValue[] result = RedisManager.RedisDb.StringGet(redisKeys);
    return result
        .Where(value => value.HasValue)
        .Select(value => DeserializeFromJson<T>(value))
        .ToList();
}
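If you ever need to pipeline commands that can't be expressed as one MGET (mixed operations, different key types), StackExchange.Redis also exposes explicit batches. A minimal sketch, assuming the same connection your RedisManager.RedisDb wraps:
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

public static async Task<RedisValue[]> GetManyPipelined(IDatabase db, IEnumerable<string> keys)
{
    // Commands created from a batch are buffered client-side and
    // written to the socket as one contiguous pipeline.
    IBatch batch = db.CreateBatch();
    Task<RedisValue>[] pending = keys.Select(key => batch.StringGetAsync((RedisKey)key)).ToArray();
    batch.Execute(); // flush the buffered commands
    return await Task.WhenAll(pending);
}
This still issues individual GETs, but without a round trip per key; for a pure multi-GET the RedisKey[] overload above remains the simplest option.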
I've been working on coming up with a SignalR unit testing framework using Moq.
I have been able to get things working reasonably well with the one group / one client (connection) scenario.
How do I set up Moq so I can:
1) Add/remove multiple clients from the same group?
2) Add/remove multiple groups on the same mocked hub?
I'm relatively new to the combination of Moq and SignalR.
Thanks in advance,
JohnB
Here is an example testing adding a client to multiple groups using Moq and xUnit.net:
[Fact]
public async Task MyHubAddsConnectionToTheCorrectGroups()
{
// Arrange
var groupManagerMock = new Mock<IGroupManager>();
var connectionId = Guid.NewGuid().ToString();
var groupsJoined = new List<string>();
groupManagerMock.Setup(g => g.Add(connectionId, It.IsAny<string>()))
.Returns(Task.FromResult<object>(null))
.Callback<string, string>((cid, groupToJoin) =>
groupsJoined.Add(groupToJoin));
var myHub = new MyHub();
myHub.Groups = groupManagerMock.Object;
myHub.Context = new HubCallerContext(request: null,
connectionId: connectionId);
// Act
await myHub.AddToGroups();
// Assert
groupManagerMock.VerifyAll();
Assert.Equal(3, groupsJoined.Count);
Assert.Contains("group1", groupsJoined);
Assert.Contains("group2", groupsJoined);
Assert.Contains("group3", groupsJoined);
}
public class MyHub : Hub
{
public async Task AddToGroups()
{
await Groups.Add(Context.ConnectionId, "group1");
await Groups.Add(Context.ConnectionId, "group2");
await Groups.Add(Context.ConnectionId, "group3");
}
}
The basic idea is to define a Callback along with your Setup that stores the arguments important to your test in a collection. You can then use the collection to verify that the mocked method was called the right number of times with the right arguments. I don't verify the order of the calls to Groups.Add in my example test, but you could test that as well.
This pattern extends pretty trivially to testing the adding/removing of multiple clients: you would just need a second collection to store the connectionId arguments passed to Groups.Add, as in the sketch below.
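For instance (a hedged sketch; the connectionIds list and the commented assertion are my illustration, not part of the original test):
var groupsJoined = new List<string>();
var connectionIds = new List<string>();
groupManagerMock.Setup(g => g.Add(It.IsAny<string>(), It.IsAny<string>()))
    .Returns(Task.FromResult<object>(null))
    .Callback<string, string>((cid, groupToJoin) =>
    {
        connectionIds.Add(cid);        // which client was added...
        groupsJoined.Add(groupToJoin); // ...and to which group
    });
// Act with several hub instances, each given its own connectionId,
// then assert on both collections, e.g.:
// Assert.Equal(2, connectionIds.Distinct().Count());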
I am attempting to write some unit tests for a class I am writing in Flex 4.5.1, using FlexUnit 4 and Mockolate as my testing and mocking frameworks respectively. I am using as3-signals for my custom events.
The functionality I am writing and testing is a wrapper class (QueryQueue) around the QueryTask class from the ArcGIS API for Flex, which lets me easily queue up multiple query tasks for execution. My wrapper, QueryQueue, will dispatch a completed event when all the query responses have been processed.
The interface is very simple.
public interface IQueryQueue
{
function get inProgress():Boolean;
function get count():int;
function get completed():ISignal;
function get canceled():ISignal;
function add(query:Query, url:String, token:Object = null):void;
function cancel():void;
function execute():void;
}
Here is an example usage:
public function exampleUsage():void
{
var queryQueue:IQueryQueue = new QueryQueue(new QueryTaskFactory());
queryQueue.completed.add(onCompleted);
queryQueue.canceled.add(onCanceled);
var query1:Query = new Query();
var query2:Query = new Query();
// set query parameters
queryQueue.add(query1, url1);
queryQueue.add(query2, url2);
queryQueue.execute();
}
public function onCompleted(sender:Object, event:QueryQueueCompletedEventArgs):void
{
// do stuff with the the processed results
}
public function onCanceled(sender:Object, event:QueryQueueCanceledEventArgs):void
{
// handle the canceled event
}
For my tests I am currently mocking the QueryTaskFactory and QueryTask objects. Simple tests, such as ensuring that queries are added to the queue, are relatively straightforward.
[Test(Description="Tests adding valid QueryTasks to the QueryQueue.")]
public function addsQuerys():void
{
var queryTaskFactory:QueryTaskFactory = nice(QueryTaskFactory);
var queryQueue:IQueryQueue = new QueryQueue(queryTaskFactory);
assertThat(queryQueue.inProgress, isFalse());
assertThat(queryQueue.count, equalTo(0));
var query1:Query = new Query();
queryQueue.add(query1, "http://gisinc.com");
assertThat(queryQueue.inProgress, isFalse());
assertThat(queryQueue.count, equalTo(1));
var query2:Query = new Query();
queryQueue.add(query2, "http://gisinc.com");
assertThat(queryQueue.inProgress, isFalse());
assertThat(queryQueue.count, equalTo(2));
var query3:Query = new Query();
queryQueue.add(query3, "http://gisinc.com");
assertThat(queryQueue.inProgress, isFalse());
assertThat(queryQueue.count, equalTo(3));
}
However, I want to be able to test the execute method as well. This method should execute all the queries added to the queue. When all the query results have been processed the completed event is dispatched. The test should ensure that:
execute is called on each query once and only once
inProgress = true while the results have not been processed
inProgress = false when the results have been processed
completed is dispatched when the results have been processed
canceled is never called (for valid queries)
The processing done within the queue correctly processes and packages the query results
So far I can write tests for items 1 through 5, thanks in large part to the answer provided by weltraumpirat. My execute test currently looks like this:
[Test(async, description="Tests that all queries in the queue are executed and the completed signal is fired")]
public function executesAllQueriesInQueue():void
{
// Setup test objects and mocks
var query:Query = new Query();
var mockedQueryTask:QueryTask = nice(QueryTask);
var mockedQueryTaskFactory:QueryTaskFactory = nice(QueryTaskFactory);
// Setup expectations
expect(mockedQueryTaskFactory.createQueryTask("http://test.com")).returns(mockedQueryTask);
expect(mockedQueryTask.execute(query, null)).once();
// Setup handlers for expected and not expected signals (events)
var queryQueue:IQueryQueue = new QueryQueue(mockedQueryTaskFactory);
handleSignal(this, queryQueue.completed, verifyOnCompleted, 500, null);
registerFailureSignal(this, queryQueue.canceled);
// Do it
queryQueue.add(query, "http://test.com");
queryQueue.execute();
// Test that things went according to plan
assertThat(queryQueue.inProgress, isTrue());
verify(mockedQueryTask);
verify(mockedQueryTaskFactory);
function verifyOnCompleted(event:SignalAsyncEvent, passThroughData:Object):void
{
assertThat(queryQueue.inProgress, isFalse());
}
}
The QueryQueue.execute method looks like this.
public function execute():void
{
_inProgress = true;
for each(var queryObject:QueryObject in _queryTasks)
{
var queryTask:QueryTask = _queryTaskFactory.createQueryTask(queryObject.url);
var asyncToken:AsyncToken = queryTask.execute(queryObject.query);
var asyncResponder:AsyncResponder = new AsyncResponder(queryTaskResultHandler, queryTaskFaultHandler, queryObject.token);
asyncToken.addResponder(asyncResponder);
}
}
private function queryTaskResultHandler(result:Object, token:Object = null):void
{
// For each result collect the data and stuff it into a result collection
// to be sent via the completed signal when all querytask responses
// have been processed.
}
private function queryTaskFaultHandler(error:FaultEvent, token:Object = null):void
{
// For each error collect the error and stuff it into an error collection
// to be sent via the completed signal when all querytask responses
// have been processed.
}
For test #6 above, what I want to be able to do is test that the data returned to the queryTaskResultHandler and the queryTaskFaultHandler is properly processed.
That is, I do not dispatch a completed event until all the query responses have returned, including successful and failed results.
To test this process, I think I need to mock the data coming back in the result and fault handlers for each mocked query task.
So, how do I mock the data passed to a result handler created via an AsyncResponder, using FlexUnit and Mockolate?
You can mock any object or interface with mockolate. In my experience, it is best to set up a rule and mock like this:
[Rule]
public var rule : MockolateRule = new MockolateRule();
[Mock]
public var task : QueryTask;
Notice that you must instantiate the rule, but not the mock object.
You can then specify your expectations:
[Test]
public function myTest () : void {
mock( task ).method( "execute" ); // expects that the execute method be called
}
You can expect a bunch of things, such as parameters:
var responder:AsyncResponder = new AsyncResponder(resultHandler, faultHandler);
mock( task ).method( "execute" ).args( responder ); // expects a specific argument
Or make the object return specific values:
mock( queue ).method( "execute" ).returns( myReturnValue ); // actually returns the value(!)
Sending events from the mock object is as simple as calling dispatchEvent on it - since you're mocking the original class, it inherits all of its features, including EventDispatcher.
Now for your special case, it would to my mind be best to mock all three external dependencies: Query, QueryTask and AsyncResponder, since it is not their functionality you are testing, but that of your queue.
Since you are creating these objects within your queue, they are hard to mock. In fact, you shouldn't really create anything directly in any class unless it has no external dependencies! Instead, pass in a factory for each of the objects you must create (you might want to use a dependency injection framework); you can then mock that factory in your test case and have it return mock objects as needed:
public class QueryFactory {
public function createQuery (...args:*) : Query {
var query:Query = new Query();
(...) // use args array to configure query
return query;
}
}
public class AsyncResponderFactory {
public function createResponder( resultHandler:Function, faultHandler:Function ) : AsyncResponder {
return new AsyncResponder(resultHandler, faultHandler);
}
}
public class QueryTaskFactory {
public function createTask (url:String) : QueryTask {
return new QueryTask(url);
}
}
... in Queue:
(...)
public var queryFactory:QueryFactory;
public var responderFactory : AsyncResponderFactory;
public var taskFactory:QueryTaskFactory;
(...)
var query:Query = queryFactory.createQuery ( myArgs );
var responder:AsyncResponder = responderFactory.createResponder (resultHandler, faultHandler);
var task:QueryTask = taskFactory.createTask (url);
task.execute (query, responder);
(...)
...in your Test:
[Rule]
public var rule : MockolateRule = new MockolateRule();
[Mock]
public var queryFactory:QueryFactory;
public var query:Query; // no need to mock this - you are not calling any of its methods in Queue.
[Mock]
public var responderFactory:AsyncResponderFactory;
public var responder:AsyncResponder;
[Mock]
public var taskFactory:QueryTaskFactory;
[Mock]
public var task:QueryTask;
[Test]
public function myTest () : void {
query = new Query();
mock( queryFactory ).method( "createQuery" ).args( (...) ).returns( query ); // specify arguments instead of (...)!
responder = new AsyncResponder ();
mock( responderFactory ).method( "createResponder" ).args( isA(Function) , isA(Function) ).returns( responder ); // this will ensure that the handlers are really functions
queue.responderFactory = responderFactory;
mock( task ).method( "execute" ).args( query, responder );
mock( taskFactory ).method( "createTask" ).args( "http://myurl.com/" ).returns( task );
queue.taskFactory = taskFactory;
queue.doStuff(); // execute whatever the queue should actually do
}
Note that you must declare all mocks as public, and all of the expectations must be added before passing the mock object to its host; otherwise, mockolate cannot configure the proxy objects correctly.