How to make logs in Application Insights without using the Task.Delay method? - azure-application-insights

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApp8
{
    class Program
    {
        static IServiceCollection services = new ServiceCollection()
            .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
            .AddApplicationInsightsTelemetryWorkerService("Application_Key");
        static IServiceProvider serviceProvider = services.BuildServiceProvider();
        static ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
        static TelemetryClient telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();

        static void Main(string[] args)
        {
            using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
            {
                logger.LogInformation("1st");
                hero();
                logger.LogError("2nd");
                telemetryClient.TrackTrace("Here is the error");
                telemetryClient.Flush();
            }
        }

        static void hero()
        {
            using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
            {
                logger.LogInformation("2nd");
                telemetryClient.Flush();
            }
        }
    }
}
I am uploading this console application as a WebJob to write logs to Application Insights. I am trying to avoid using Task.Delay() so that I can get real-time logging at the right moment. The WebJob is triggered manually, but I see no entries in Application Insights. Could anyone help me out with this one?

Telemetry is not sent instantly; telemetry items are batched and sent by the Application Insights SDK. In console apps, which exit right after calling the Track() methods, telemetry may not be sent unless Flush() and a sleep/delay are done before the app exits, as shown in the full example in that article. The sleep is not required if you are using InMemoryChannel. There is an active issue regarding the need for the sleep, which is tracked here: link.
So there are two types of channels: InMemoryChannel and ServerTelemetryChannel.
For more details about both channels, click on this link.
To deal with the issue, I used InMemoryChannel in my program. The code below shows the portion where I registered it.
static IServiceCollection services = new ServiceCollection()
    .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
    .AddSingleton(typeof(ITelemetryChannel), new InMemoryChannel())
    .AddApplicationInsightsTelemetryWorkerService("Application_Key");
The namespace I am using for ITelemetryChannel and InMemoryChannel is Microsoft.ApplicationInsights.Channel, which ships with the Microsoft.ApplicationInsights NuGet package.
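If you would rather use ServerTelemetryChannel, which buffers to local disk and retries failed transmissions, the registration should look roughly like the sketch below. This is an assumption based on the same pattern, not code from the original post; ServerTelemetryChannel comes from the Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel namespace.

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

// Rough sketch: swap InMemoryChannel for ServerTelemetryChannel.
// A short-lived process still needs Flush() plus a brief wait before exiting.
static IServiceCollection services = new ServiceCollection()
    .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
    .AddSingleton(typeof(ITelemetryChannel), new ServerTelemetryChannel())
    .AddApplicationInsightsTelemetryWorkerService("Application_Key");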

Thanks to @Peter Bons for your comment, which helped to fix the problem.
The Flush() method on the TelemetryClient is used to flush the in-memory buffer, typically when the application is shutting down. Normally the SDK sends data every 30 seconds, or whenever the buffer is full (500 items), so in web applications there is usually no need to invoke Flush() manually unless the application is about to shut down.
The TelemetryClient's Flush() method sends all of the data it currently holds in its buffer to the Application Insights service.
Application Insights transfers your data in batches in the background to make better use of the network.
In most cases you won't need to call Flush(). However, if you know the process will exit after that point, you should call Flush() to ensure that all of the data gets transmitted.
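As a side note, newer versions of the Application Insights SDK also expose TelemetryClient.FlushAsync(), which can be awaited instead of guessing a delay; whether it is available depends on your SDK version. A minimal sketch, reusing the telemetryClient and logger fields from the question:

using System.Threading;
using System.Threading.Tasks;

// Hedged sketch: FlushAsync only exists in newer SDK versions.
// Awaiting it avoids a hard-coded Thread.Sleep/Task.Delay before the process exits.
static async Task Main(string[] args)
{
    using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
    {
        logger.LogInformation("1st");
        telemetryClient.TrackTrace("Here is the error");
    }
    await telemetryClient.FlushAsync(CancellationToken.None);
}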
In my own program, though, I stuck with adding a Thread.Sleep() call after the Flush() statement:
static void Main(string[] args)
{
    using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
    {
        logger.LogInformation("1st");
        hero();
        logger.LogError("2nd");
        telemetryClient.TrackTrace("Here is the error");
        // Flush only hands the buffered telemetry to the channel; sending still takes a moment.
        telemetryClient.Flush();
        // Give the channel time to transmit before the process exits (requires using System.Threading;).
        Thread.Sleep(5000);
    }
}
static void hero()
{
    using (telemetryClient.StartOperation<RequestTelemetry>("AppointmentPatientCommunication"))
    {
        logger.LogInformation("2nd");
        // Flush only hands the buffered telemetry to the channel; sending still takes a moment.
        telemetryClient.Flush();
        // Give the channel time to transmit before the process exits.
        Thread.Sleep(5000);
    }
}
Results in AI:

Related

How to add multiple BindableServices to a gRPC server builder?

I have the gRPC server code as below:
public void buildServer() {
    List<BindableService> theServiceList = new ArrayList<BindableService>();
    theServiceList.add(new CreateModuleContentService());
    theServiceList.add(new RemoveModuleContentService());

    ServerBuilder<?> sb = ServerBuilder.forPort(m_port);
    for (BindableService aService : theServiceList) {
        sb.addService(aService);
    }
    m_server = sb.build();
}
and client code as below:
public class JavaMainClass {
    public static void main(String[] args) {
        CreateModuleService createModuleService = new CreateModuleService();
        ESDStandardResponse esdReponse = createModuleService.createAtomicBlock("8601934885970354030", "atm1");

        RemoveModuleService moduleService = new RemoveModuleService();
        moduleService.removeAtomicBlock("8601934885970354030", esdReponse.getId());
    }
}
While I am running the client I am getting an exception as below:
Exception in thread "main" io.grpc.StatusRuntimeException: UNIMPLEMENTED: Method grpc.blocks.operations.ModuleContentServices/createAtomicBlock is unimplemented
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:214)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:139)
In the server class above, if I comment out the line theServiceList.add(new RemoveModuleContentService()); then the CreateModuleContentService service works fine. Likewise, with nothing commented out, all of the methods of the RemoveModuleContentService class work as expected, which means the problem is with the first service once another one gets added.
Can someone please suggest how I can add two services to the ServerBuilder?
A particular gRPC service can only be implemented once per server. Since the name of the gRPC service in the error message is ModuleContentServices, I'm assuming CreateModuleContentService and RemoveModuleContentService both extend ModuleContentServicesImplBase.
When you add the same service multiple times, the last one wins. The way the generated code works, every method of a service is registered even if you don't implement that particular method. Every service method defaults to a handler that simply returns "UNIMPLEMENTED: Method X is unimplemented". createAtomicBlock isn't implemented in RemoveModuleContentService, so it returns that error.
If you interact with the ServerServiceDefinition returned by bindService(), you can mix-and-match methods a bit more, but this is a more advanced API and is intended more for frameworks to use because it can become verbose to compose every application service individually.

How do I reference a SignalR hub in an external assembly without creating a circular reference?

So, I'm trying to create a sample where there are the following components/features:
A hangfire server OWIN self-hosted from a Windows Service
SignalR notifications when jobs are completed
Github Project
I can get the tasks queued and performed, but I'm having a hard time sorting out how to then notify the clients (all currently, just until I get it working well) of when the task/job is completed.
My current issue is that I want the SignalR hub to be located in the "core" library SampleCore, but I don't see how to "register it" when starting the webapp SampleWeb. One way I've gotten around that is to create a hub class NotificationHubProxy that inherits the actual hub and that works fine for simple stuff (sending messages from one client to all).
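Roughly, that workaround looks like this (a simplified sketch of what I mean, not the exact code):

// In SampleWeb: an empty subclass so SignalR can discover the hub from the web project.
public class NotificationHubProxy : SampleCore.NotificationHub
{
}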
In NotifyTaskComplete, I believe I can get the hub context and then send the message like so:
private void NotifyTaskComplete(int taskId)
{
    try
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
        if (hubContext != null)
        {
            hubContext.Clients.All.sendMessage(string.Format("Task {0} completed.", taskId));
        }
    }
    catch (Exception ex)
    {
    }
}
BUT, I can't do that if NotificationHubProxy is the class being used as it's part of the SampleWeb library and referencing it from SampleCore would lead to a circular reference.
I know the major issue is the hub being in the external assembly, but I can't for the life of me find a relevant sample that uses SignalR with MVC5 or is set up in this particular way.
Any ideas?
So, the solution was to do the following two things:
I had to use the SignalR .NET client from the SampleCore assembly to create a HubConnection, create a hub proxy for "NotificationHub", and use that to invoke the "SendMessage" method, like so:
private void NotifyTaskComplete(string hostUrl, int taskId)
{
    var hubConnection = new HubConnection(hostUrl);
    var hub = hubConnection.CreateHubProxy("NotificationHub");

    hubConnection.Start().Wait();
    hub.Invoke("SendMessage", taskId.ToString()).Wait();
}
BUT, as part of creating that HubConnection, I needed to know the URL of the OWIN instance. I decided to pass that as a parameter to the task, retrieving it like:
private string GetHostAddress()
{
    var request = this.HttpContext.Request;
    return string.Format("{0}://{1}", request.Url.Scheme, request.Url.Authority);
}
The solution to having a Hub located in an external assembly is that the assembly needs to be loaded before the SignalR routing is set up, like so:
AppDomain.CurrentDomain.Load(typeof(SampleCore.NotificationHub).Assembly.FullName);
app.MapSignalR();
This solution for this part came from here.
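For completeness, the receiving side (any SignalR .NET client) can subscribe to the broadcast before starting the connection. This is a minimal sketch, assuming the hub pushes via sendMessage as in the NotifyTaskComplete code above:

using System;
using Microsoft.AspNet.SignalR.Client;

public static void ListenForNotifications(string hostUrl)
{
    var hubConnection = new HubConnection(hostUrl);
    var hub = hubConnection.CreateHubProxy("NotificationHub");

    // Register the handler before Start() so no broadcasts are missed.
    hub.On<string>("sendMessage", message => Console.WriteLine(message));

    hubConnection.Start().Wait();
}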

How to specify credentials from a Java Web Service in PTC Windchill PDMLink

I am currently investigating the possibility of using a Java Web Service (as described by the Info*Engine documentation of Windchill) in order to retrieve information regarding parts. I am using Windchill version 10.1.
I have successfully deployed a web service, which I consume in a .Net application. Calls which do not try to access Windchill information complete successfully. However, when trying to retrieve part information, I get a wt.method.AuthenticationException.
Here is the code that runs within the web service (the web service method simply calls this method):
public static String GetOnePart(String partNumber) throws WTException
{
    String partName = null;
    WTPart part = null;

    RemoteMethodServer server = RemoteMethodServer.getDefault();
    server.setUserName("theUsername");
    server.setPassword("thePassword");

    try {
        QuerySpec qspec = new QuerySpec(WTPart.class);
        qspec.appendWhere(new SearchCondition(WTPart.class, WTPart.NUMBER, SearchCondition.LIKE, partNumber), new int[]{0, 1});
        // This fails.
        QueryResult qr = PersistenceHelper.manager.find((StatementSpec) qspec);
        while (qr.hasMoreElements())
        {
            part = (WTPart) qr.nextElement();
            partName = part.getName();
        }
    } catch (AuthenticationException e) {
        // Exception caught here.
        partName = e.toString();
    }
    return partName;
}
This code works in a command line application deployed on the server, but fails with a wt.method.AuthenticationException when performed from within the web service. I feel it fails because the use of RemoteMethodServer is not what I should be doing since the web service is within the MethodServer.
Anyhow, if anyone knows how to do this, it would be awesome.
A bonus question would be how to log from within the web service, and how to configure this logging.
Thank you.
You don't need to authenticate on the server side with this code
RemoteMethodServer server = RemoteMethodServer.getDefault();
server.setUserName("theUsername");
server.setPassword("thePassword");
If you have followed the documentation (Windchill Help Center), your web service should be a class annotated with @WebService and @WebMethod(operationName="getOnePart") that inherits com.ptc.jws.servlet.JaxWsService.
Also, you have to pay attention to the security policy used during deployment.
The default ant script is configured with
security.policy=userNameAuthSymmetricKeys
So you need to handle that when you consume your web service from .NET.
For logging events, you just need to call the log4j logger instantiated by default with $log.debug("Hello")
You can't pre-authenticate on the server side.
You can write the auth into your client, though. Not sure what the .NET equivalent is, but this works for Java clients:
private static final String USERNAME = "admin";
private static final String PASSWORD = "password";

static {
    java.net.Authenticator.setDefault(new java.net.Authenticator() {
        @Override
        protected java.net.PasswordAuthentication getPasswordAuthentication() {
            return new java.net.PasswordAuthentication(USERNAME, PASSWORD.toCharArray());
        }
    });
}

Execute a server side program asynchronously with an asp.net mvc4 app

Is it possible to execute a server-side program and get its output asynchronously?
I have this code that does the job, but synchronously.
Suppose a C# program "program.exe" like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace testconsole
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int k = 0; k < 10; k++) Console.WriteLine(k);
        }
    }
}
and some view in the ASP.NET app like this:
<script>
    function go()
    {
        var options = {
            url: '/execute',
            type: 'GET',
            dataType: 'json'
        };
        // make the call
        $.ajax(options)
            .then(function (data) {
                console.log(data);
            });
    }
</script>
<input type="submit" onclick="go();" value="Go">
and the Execute controller looks like this:
namespace myApp.Controllers
{
    public class ExecuteController : Controller
    {
        //
        // GET: /Execute
        [HttpGet]
        public JsonResult Index()
        {
            Process p = new Process();
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.FileName = "program.exe";
            p.Start();

            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();

            return Json(new { op = output }, JsonRequestBehavior.AllowGet);
        }
    }
}
All this is working fine, but the client has to wait until the end of the program to display its output. Is there any way to get that output as soon as it is produced?
I'm sure I need to make some changes in the controller to make this possible, but how?
ASP.NET MVC has the concept of an async controller that is suited to long-running tasks. It will help you by not blocking a thread while you wait for your program to execute.
But to do what you are after, I think you need to create your own HTTP handler (probably by implementing the IHttpHandler interface) that wraps the process and returns the results incrementally. This will not be trivial to do, but it should be possible.
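As a rough illustration of that handler idea (my own sketch, not the poster's code; the class name and streaming details are assumptions), each line of output could be written and flushed to the response as it is read:

using System.Diagnostics;
using System.Web;

public class StreamedProcessHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.BufferOutput = false; // send output as soon as it is written

        var startInfo = new ProcessStartInfo("program.exe")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var p = Process.Start(startInfo))
        {
            string line;
            while ((line = p.StandardOutput.ReadLine()) != null)
            {
                context.Response.Write(line + "\n");
                context.Response.Flush(); // push this line to the client now
            }
            p.WaitForExit();
        }
    }
}

The handler would still need to be registered in web.config, and the client would have to read the response as a stream rather than waiting for a single JSON payload.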
A third viable alternative might be to use SignalR. That would be a fun project, but would still require much work I think.
The problem is primarily with communication between the IIS host process and your external process. You would need to set up some sort of channel of communication to send "progress" events from the console application into the ASP.NET application.
A WCF client sending information via named pipes to a service hosted in the ASP.NET application would let you send messages into the application. You would host the service when the request is made and dynamically generate the name of the pipe as a way to correlate it to the initial request.
Once you get the updates in the application, you could then use something like SignalR to push the information back up to the client.
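To make that last step concrete, here is a simplified sketch of forwarding the process output through SignalR. The ProgressHub class and the addLine client method are assumed names, not something from the original post:

using System.Diagnostics;
using Microsoft.AspNet.SignalR;

// Hypothetical hub; clients subscribe to "addLine" to receive output lines.
public class ProgressHub : Hub
{
}

public static class ProcessRunner
{
    public static void RunAndStream(string exePath)
    {
        var p = new Process();
        p.StartInfo.FileName = exePath;
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.RedirectStandardOutput = true;

        // Forward each line of output to all connected clients as it arrives.
        p.OutputDataReceived += (sender, e) =>
        {
            if (e.Data != null)
            {
                var hubContext = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
                hubContext.Clients.All.addLine(e.Data);
            }
        };

        p.Start();
        p.BeginOutputReadLine(); // raises OutputDataReceived asynchronously per line
        p.WaitForExit();
    }
}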
I'm back, finally, with an answer (not perfect, I suppose). I used SignalR to get this done.
I created a messenger program (in C#) that acts as the bridge between an ASP.NET MVC4 application and any console program that produces output.
The messenger executes the program, then redirects its output to be sent through SignalR to the client.
If you are interested, I've created a repo on GitHub for this; check the code here. I hope it will help someone one day.
I will be happy to talk about this code with you.

Advantages/Disadvantages of increasing AppPool Timeout on Azure

I am just about to launch my ASP.NET MVC3 web app to production; however, as a complex app, it takes a LONG time to start up. Obviously, I don't want my users waiting over a minute for their first request to go through after the AppPool has timed out.
From my research, I've found that there are two ways to combat this:
Run a worker role or other process that polls the website every 19 minutes, keeping the app pool from idling out.
Change the timeout from the default 20 minutes to something much larger.
As solution 2 seems like the better idea, I just wondered what the disadvantages of this would be; will I run out of memory, etc.?
Thanks.
Could you use the auto-start feature of IIS? There is a post here that presents this idea.
You'd have IIS 7.5 and Win2k8 R2 with Azure OS family 2. You'd just need to be able to script/automate any setup steps and configuration.
I do this with a background thread that requests a keepalive URL every 15 minutes. Not only does this keep the app from going idle, but it also warms up the app right away anytime the web role or virtual machine restarts or is rebuilt.
This is all possible because Web Roles really are just Worker Roles that also do IIS stuff. So you can still use all the standard Worker Role startup hooks in a Web Role.
I got the idea from this blog post but tweaked the code to do a few extra warmup tasks.
First, I have a class that inherits from RoleEntryPoint (it does some other things besides this warm up task and I removed them for simplicity):
public class WebRole : RoleEntryPoint
{
    // other unrelated member variables appear here...
    private WarmUp _warmUp;

    public override bool OnStart()
    {
        // other startup stuff appears here...
        _warmUp = new WarmUp();
        _warmUp.Start();

        return base.OnStart();
    }
}
All the actual warm up logic is in this WarmUp class. When it first runs it hits a handful of URLs on the local instance IP address (vs the public, load balanced hostname) to get things in memory so that the first people to use it get the fastest possible response time. Then, it loops and hits a single keepalive URL (again on the local role instance) that doesn't do any work and just serves to make sure that IIS doesn't shut down the application pool as idle.
public class WarmUp
{
    private Thread worker;

    public void Start()
    {
        worker = new Thread(new ThreadStart(Run));
        worker.IsBackground = true;
        worker.Start();
    }

    private void Run()
    {
        // "http" has to match the endpointName in your ServiceDefinition.csdef file.
        var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["http"];

        var pages = new string[]
        {
            "/",
            "/help",
            "/signin",
            "/register",
            "/faqs"
        };

        foreach (var page in pages)
        {
            try
            {
                var address = String.Format("{0}://{1}:{2}{3}",
                                            endpoint.Protocol,
                                            endpoint.IPEndpoint.Address,
                                            endpoint.IPEndpoint.Port,
                                            page);
                var webClient = new WebClient();
                webClient.DownloadString(address);

                Debug.WriteLine(string.Format("Warmed {0}", address));
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.ToString());
            }
        }

        var keepalive = String.Format("{0}://{1}:{2}{3}",
                                      endpoint.Protocol,
                                      endpoint.IPEndpoint.Address,
                                      endpoint.IPEndpoint.Port,
                                      "/keepalive");

        while (true)
        {
            try
            {
                var webClient = new WebClient();
                webClient.DownloadString(keepalive);

                Debug.WriteLine(string.Format("Pinged {0}", keepalive));
            }
            catch (Exception ex)
            {
                // absorb
            }

            Thread.Sleep(900000); // 15 minutes
        }
    }
}
Personally I'd change the timeout, but both should work: effectively they would both have the same effect of preventing the worker processes from shutting down.
I believe the timeout is there to avoid IIS retaining resources that aren't needed for servers with lots of Web sites that are lightly used. Given that heavily used sites (like this one!) don't shut down their worker processes I don't think you'll see any memory issues.
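For reference, one way to change the idle timeout on a web role is from the role's startup code. This is my own sketch, not something from the answer above; it assumes the role runs with elevated privileges and references Microsoft.Web.Administration:

using System;
using Microsoft.Web.Administration;

public static class AppPoolConfig
{
    public static void DisableIdleTimeout()
    {
        using (var serverManager = new ServerManager())
        {
            // TimeSpan.Zero means "never time out"; pick a larger finite value if preferred.
            serverManager.ApplicationPoolDefaults.ProcessModel.IdleTimeout = TimeSpan.Zero;
            serverManager.CommitChanges();
        }
    }
}

Calling this from RoleEntryPoint.OnStart (like the WarmUp example above) reapplies the setting whenever the instance is restarted or reimaged.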
