App Insights not logging although developer mode enabled - azure-application-insights

I have a block of code where I have enabled developer mode, but the logs still do not show up in Application Insights.
static void Main(string[] args)
{
    try
    {
        int a = 5;
        int b = 0;
        int c = a / b;
    }
    catch (Exception ex)
    {
        CreateLogAI(ex, "code");
    }
}
public static void CreateLogAI(Exception ex, string CodeBlock)
{
    TelemetryClient TeleClient = new TelemetryClient();
    TelemetryConfiguration.Active.InstrumentationKey = "XXXX";
    try
    {
        TeleClient.TrackException(ex);
        TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;
    }
    catch (Exception exception)
    {
        throw exception;
    }
    finally
    {
        TeleClient.Flush();
    }
}
I have referred to a number of articles; developer mode and Flush() should do the trick. This is the sample code I have. What am I missing here?

Add a sleep after Flush(): Thread.Sleep(5000);

I just copied your code; it works fine and the logs can be seen in the Azure portal.
There are two steps you can use to check whether the logs are being sent to Application Insights:
1. After running your code in Visual Studio, check the Output window and look for a line that starts with "Application Insights Telemetry:".
2. If you cannot see the message described in step 1, add System.Threading.Thread.Sleep(5000) before the Flush() call in the finally block, then check the Output window again.
Please note that it may take a few minutes for the logs to show up in the Azure portal.
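For reference, here is a minimal sketch of the helper with that delay added in the finally block (placed after Flush(), per the first suggestion above, and with the configuration set before the client is created; the instrumentation key stays a placeholder):
public static void CreateLogAI(Exception ex, string codeBlock)
{
    // Configure the key and developer mode up front so the client picks them up.
    TelemetryConfiguration.Active.InstrumentationKey = "XXXX";
    TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;

    var teleClient = new TelemetryClient();
    try
    {
        teleClient.TrackException(ex);
    }
    finally
    {
        teleClient.Flush();
        // Keep the process alive long enough for the channel to send the telemetry
        // before Main returns and the process exits.
        System.Threading.Thread.Sleep(5000);
    }
}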

Related

.NET 6 Async Semaphore Error Under Mild Load

I'm working on a basic (non-DB) connection pool which allows only one connection to be created per project.
The connection pool supports an async-task/threaded environment, and therefore I have made use of a semaphore instead of a regular lock.
I wrote a test, below, which is meant to stress-test the connection pool.
The code works, but under higher loads the semaphore throws an error.
I can overcome this error by decreasing the load, for example by increasing _waitTimeMs to a higher number (e.g. 50 ms, 100 ms, or 1000 ms) or decreasing _numberOfTasks (e.g. to 5 or 3).
I should also mention that sometimes it manages to run higher-load tests without errors.
Is there a mistake or misconception in my code and/or use of semaphores?
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

internal class Program
{
    static int _numberOfTasks = 50;
    static int _waitTimeMs = 1;
    static SemaphoreSlim _dictLock = new SemaphoreSlim(1, 1);
    static ConcurrentDictionary<string, bool> _pool = new ConcurrentDictionary<string, bool>();

    /// <summary>
    /// Only 1 connection allowed per project.
    /// We reuse connections if available in pool, otherwise we create 1 new connection.
    /// </summary>
    static async Task<string> GetConnection(string projId)
    {
        try
        {
            // Enter sema lock to prevent more than 1 connection
            // from being added for the same project.
            if (await _dictLock.WaitAsync(_waitTimeMs))
            {
                // Try retrieve connection from pool
                if (_pool.TryGetValue(projId, out bool value))
                {
                    if (value == false)
                        return "Exists but not connected yet.";
                    else
                        return "Success, exists and connected.";
                }
                // Else add connection to pool
                else
                {
                    _pool.TryAdd(projId, false);
                    // Simulate delay in establishing new connection
                    await Task.Delay(2);
                    _pool.TryUpdate(projId, true, false);
                    return "Created new connection successfully & added to pool.";
                }
            }
            // Report failure to acquire lock in time.
            else
                return "Server busy. Please try again later.";
        }
        catch (Exception ex)
        {
            return "Error " + ex.Message;
        }
        finally
        {
            // Ensure our lock is released.
            _dictLock.Release();
        }
    }

    static async Task Main(string[] args)
    {
        if (true)
        {
            // Create a collection of the same tasks
            List<Task> tasks = new List<Task>();
            for (int i = 0; i < _numberOfTasks; i++)
            {
                // Each task will try to get an existing or create new connection to Project1
                var t = new Task(async () => { Console.WriteLine(await GetConnection("Project1")); });
                tasks.Add(t);
            }
            // Execute these tasks in parallel.
            Parallel.ForEach<Task>(tasks, (t) => { t.Start(); });
            Task.WaitAll(tasks.ToArray());
            Console.WriteLine("Done");
            Console.Read();
        }
    }
}
Is there a mistake or misconception in my code and/or use of semaphores?
There's a bug in your code, yes. If the WaitAsync returns false (indicating that the semaphore was not taken), then the semaphore is still released in the finally block.
If you must use a timeout with WaitAsync (which is highly unusual and questionable), then your code should only call Release if the semaphore was actually taken.
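To make that concrete, here is a sketch of GetConnection from the question with the release paired with a successful wait; the behaviour is otherwise the same, and the catch block from the original is omitted for brevity:
static async Task<string> GetConnection(string projId)
{
    // Report failure to acquire the lock in time; nothing was taken, so nothing is released.
    if (!await _dictLock.WaitAsync(_waitTimeMs))
        return "Server busy. Please try again later.";

    try
    {
        // Try to retrieve the connection from the pool.
        if (_pool.TryGetValue(projId, out bool connected))
            return connected ? "Success, exists and connected." : "Exists but not connected yet.";

        // Otherwise add a new connection to the pool.
        _pool.TryAdd(projId, false);
        await Task.Delay(2);                 // simulate delay in establishing a new connection
        _pool.TryUpdate(projId, true, false);
        return "Created new connection successfully & added to pool.";
    }
    finally
    {
        _dictLock.Release();                 // paired with the successful WaitAsync above
    }
}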

Unable to Access SQLite Data in MvvmCross ViewModel

Hello StackOverflow community,
I know there's a lot of code in this post, but I wanted to give the community as good a picture as possible of what is going on, so that maybe someone can help me figure out what my issue is.
Recently, for a project I'm working on, we've decided to upgrade from MvvmCross 5.7.0 to 6.2.2. I've managed to get our UWP app to successfully complete the initialization and setup process. The first viewmodel, for which we register the app start, also starts initializing. However, I'm finding that my vm initialization hangs at a particular line of code (shown in the code below). The weirdest part is that similar methods called in the app initialization code run perfectly fine without hanging/deadlocking, so I'm not sure what's different. Here's a simplified version of my viewmodel code to illustrate:
public class MyViewModel : BaseAuthenticatedTabBarViewModel, IMvxViewModel<int>
{
    private int? _settingValue;

    public override async Task Initialize()
    {
        // Some irrelevant initialization code
        Exception e = null;
        try
        {
            // This line of code never returns
            _settingValue = _settingValue ?? await AppSettingService.GetSettingValue();
        }
        catch (Exception ex)
        {
            e = ex;
        }
        if (e != null)
        {
            await HandleCatastrophicError(e);
        }
    }
}
The AppSettingService.GetSettingValue() method looks like this:
public async Task<int?> GetSettingValue()
{
    return await GetNullableIntSetting("SettingValue");
}

private static async Task<int?> GetNullableIntSetting(string key)
{
    try
    {
        var setting = await SettingDataService.SettingByName(key);
        if (setting != null)
        {
            return string.IsNullOrEmpty(setting.Value) ? (int?)null : Convert.ToInt32(setting.Value);
        }
    }
    catch (Exception ex)
    {
        // Handle the exception
    }
    return null;
}
All the code for SettingDataService:
public class SettingDataService : DataService<SettingDataModel>, ISettingDataService
{
    public async Task<SettingDataModel> SettingByName(string name)
    {
        try
        {
            var values = await WhereAsync(e => e.Name == name);
            return values.FirstOrDefault();
        }
        catch (Exception ex)
        {
            // Handle the exception
        }
        return null;
    }
}
Finally, the implementation for WhereAsync() is in a class called DataService and is as follows:
public virtual async Task<IEnumerable<T>> WhereAsync(System.Linq.Expressions.Expression<Func<T, bool>> condition, SQLiteAsyncConnection connection = null)
{
    return await (connection ?? await GetConnectionAsync())
        .Table<T>()
        .Where(condition)
        .ToListAsync();
}
Thank you very much for your help in advance
Edit: I forgot to add this crucial bit of code as well:
protected async Task<SQLiteAsyncConnection> GetConnectionAsync()
{
    SQLiteAsyncConnection connection = null;
    while (true)
    {
        try
        {
            connection = Factory.Create(App.DatabaseName);
            // This line of code is the culprit. For some reason this hangs and I can't figure out why.
            await connection.CreateTableAsync<T>();
            break;
        }
        catch (SQLiteException ex)
        {
            if (ex.Result != Result.CannotOpen && ex.Result != Result.Busy && ex.Result != Result.Locked)
            {
                throw;
            }
        }
        await Task.Delay(20);
    }
    return connection;
}
I suspect that you are calling Task.Wait or Task<T>.Result somewhere further up your call stack; or, if you're not doing it, MvvmCross is probably doing it for you. This will cause a deadlock when called from a UI context.
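As a rough illustration of that failure mode (not code from MvvmCross, just the general shape of a sync-over-async deadlock on a UI context):
// Somewhere further up the call stack, on the UI thread:
int? value = AppSettingService.GetSettingValue().Result;   // blocks the UI thread

// Inside GetSettingValue(), every await tries to resume on that same
// (now blocked) UI thread via its SynchronizationContext, so neither side
// can make progress and the call never returns.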
Personally, I prefer the approach that ViewModels should always be constructed synchronously, and cannot have an asynchronous "initialization". That is, they must construct themselves (synchronously) into a "loading" state, and this construction can kick off an asynchronous operation that will later update them into a "loaded" state. The synchronous-initialization pattern means there's never an unnecessary delay when changing views; your users may only see a spinner or a loading message, but at least they'll see something. See my article on async MVVM data binding for a pattern that helps with this, and note that there's a newer version of the helper types in that article.
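A minimal sketch of that idea, reusing names from the question; the base class and MvvmCross interfaces are omitted, and LoadTask/IsLoading are illustrative helpers, not part of any framework:
public class MyViewModel   // base class and MvvmCross interfaces omitted for brevity
{
    private int? _settingValue;

    public bool IsLoading { get; private set; } = true;
    public Task LoadTask { get; }   // exposed so views/tests can observe completion

    public MyViewModel()
    {
        // Construct synchronously into a "loading" state; the asynchronous work is
        // started here but never blocked on, so nothing can deadlock.
        LoadTask = LoadAsync();
    }

    private async Task LoadAsync()
    {
        try
        {
            _settingValue = await AppSettingService.GetSettingValue();
        }
        catch (Exception ex)
        {
            await HandleCatastrophicError(ex);
        }
        finally
        {
            IsLoading = false;   // raise a property-changed notification here so bindings update
        }
    }
}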

Microsoft Custom Speech Service issue when using WebSocket URL

Recently, for a work project, I've been playing around with speech-to-text models, in particular custom speech-to-text models. With a bit of mixing and matching of examples I've managed to get a test application to talk to the normal Bing speech-to-text API. But when I attempt to use it with a Custom Speech instance, only the HTTPS URL works. When I use any of the available long-form WebSocket URLs, I get the error "An unhandled exception of type 'System.NullReferenceException' occurred in SpeechClient.dll". This is a bit of a problem, as the HTTPS endpoint only supports 2 minutes of transcription, whereas the WebSocket endpoint supports up to 10 minutes.
This page, https://learn.microsoft.com/en-us/azure/cognitive-services/custom-speech-service/customspeech-how-to-topics/cognitive-services-custom-speech-use-endpoint, is what I'm going off of. It says that I should use a WebSocket URL when creating the service, but that leads to the error above.
Here is my test bed code:
using System;
using Microsoft.CognitiveServices.SpeechRecognition;
using System.IO;

namespace ConsoleApp1
{
    class Program
    {
        DataRecognitionClient dataClient;

        static void Main(string[] args)
        {
            Program p = new Program();
            p.Run(args);
        }

        void Run(string[] args)
        {
            try
            {
                // Works
                //this.dataClient = SpeechRecognitionServiceFactory.CreateDataClient(SpeechRecognitionMode.LongDictation, "en-US", "Key");

                // Works
                //this.dataClient = SpeechRecognitionServiceFactory.CreateDataClient(SpeechRecognitionMode.LongDictation, "en-US",
                //    "Key", "Key",
                //    "https://Id.api.cris.ai/ws/cris/speech/recognize/continuous");

                // Doesn't work
                this.dataClient = SpeechRecognitionServiceFactory.CreateDataClient(SpeechRecognitionMode.LongDictation, "en-US",
                    "Key", "Key",
                    "wss://Id.api.cris.ai/ws/cris/speech/recognize/continuous");

                this.dataClient.AuthenticationUri = "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken";
                this.dataClient.OnResponseReceived += this.ResponseHandler;
                this.dataClient.OnConversationError += this.ErrorHandler;
                this.dataClient.OnPartialResponseReceived += this.PartialHandler;

                Console.WriteLine("Starting Transcription");
                this.SendAudioHelper("Audio file path");

                (new System.Threading.ManualResetEvent(false)).WaitOne();
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }

        private void SendAudioHelper(string wavFileName)
        {
            using (FileStream fileStream = new FileStream(wavFileName, FileMode.Open, FileAccess.Read))
            {
                // Note: for wave files, we can just send data from the file right to the server.
                // If you do not have an audio file in wave format, and instead you have just
                // raw data (for example audio coming over bluetooth), then before sending up any
                // audio data, you must first send up a SpeechAudioFormat descriptor to describe
                // the layout and format of your raw audio data via DataRecognitionClient's sendAudioFormat() method.
                int bytesRead = 0;
                byte[] buffer = new byte[1024];
                try
                {
                    do
                    {
                        // Get more audio data to send into the byte buffer.
                        bytesRead = fileStream.Read(buffer, 0, buffer.Length);
                        // Send audio data to the service.
                        this.dataClient.SendAudio(buffer, bytesRead);
                    }
                    while (bytesRead > 0);
                }
                finally
                {
                    // We are done sending audio. Final recognition results will arrive in the OnResponseReceived event.
                    this.dataClient.EndAudio();
                }
            }
        }

        void ErrorHandler(object sender, SpeechErrorEventArgs e)
        {
            Console.WriteLine(e.SpeechErrorText);
        }

        void ResponseHandler(object sender, SpeechResponseEventArgs e)
        {
            if (e.PhraseResponse.RecognitionStatus == RecognitionStatus.EndOfDictation || e.PhraseResponse.RecognitionStatus == RecognitionStatus.DictationEndSilenceTimeout)
            {
                Console.WriteLine("Transcription Over");
                Console.ReadKey();
                Environment.Exit(0);
            }
            for (int i = 0; i < e.PhraseResponse.Results.Length; i++)
            {
                Console.Write(e.PhraseResponse.Results[i].DisplayText);
            }
            Console.WriteLine();
        }

        void PartialHandler(object sender, PartialSpeechResponseEventArgs e)
        {
        }
    }
}
Thanks in advance for any help.
So you are probably OK with using HTTPS for now ...
We are revisiting the SDKs right now (restructuring/reorganizing). I expect updates in the next couple of months.
Wolfgang
The new Speech service SDK supports the Custom Speech Service out of the box. Please also check the sample RecognitionUsingCustomizedModelAsync() here for details.
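For orientation, here is a rough sketch of wiring up a custom endpoint with the newer Microsoft.CognitiveServices.Speech SDK; the key, region, endpoint ID, and file path are placeholders, single-shot recognition is shown for brevity, and the linked samples remain the authoritative reference:
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class CustomSpeechExample
{
    static async Task Main()
    {
        // Placeholders: substitute your subscription key, service region and
        // the endpoint ID of your deployed custom model.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "westus");
        config.EndpointId = "YourCustomEndpointId";

        using var audioInput = AudioConfig.FromWavFileInput("Audio file path");
        using var recognizer = new SpeechRecognizer(config, audioInput);

        // Recognize a single utterance from the file and print the result.
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}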

How to convert a WebBrowser control app into a web application, and some events

You might think this question is pretty simple, but I am really stuck here. Could you please give me a proper solution for this?
I have a Windows Forms sample like the following in a button click event:
AuthURI = "http://.......";
Webbrowser1.Navigate(AuthURI);

private void Webbrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    try
    {
        if (e.Url.ToString().Contains("code="))
        {
            string[] responseOauth = Regex.Split(e.Url.ToString(), "&");
            for (int i = 0; i < responseOauth.Count(); i++)
            {
                string[] nvPair = Regex.Split(responseOauth[i], "=");
                drive.AccessCode = nvPair[1];
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
This is Windows application code, but now I want to change this application into a web application.
I am stuck on WebBrowserDocumentCompletedEventArgs e: how do I get the equivalent of e.Url in my web application?
What is the replacement for this event? I searched Google but didn't find an exact answer. Please give me a proper solution for this.
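For context, in a web application there is no DocumentCompleted event: the OAuth provider redirects the browser back to whatever page is registered as the redirect URI, and the code= value arrives in that request's query string. A minimal sketch of that idea, assuming ASP.NET Web Forms (the page name is hypothetical; drive is the same object as in the snippet above):
// Code-behind of the page registered as the OAuth redirect URI (e.g. Callback.aspx.cs).
// The provider redirects the browser here, so the "code" arrives as a query-string
// parameter of the incoming request instead of through a DocumentCompleted event.
protected void Page_Load(object sender, EventArgs e)
{
    string accessCode = Request.QueryString["code"];
    if (!string.IsNullOrEmpty(accessCode))
    {
        drive.AccessCode = accessCode;
    }
}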

Using speech synthesizer in ASP.NET web application gets stuck

In an MVC web application I use the SpeechSynthesizer class to speak some text to a .wav file, inside a function called by a controller action handler that returns a view. The code executes, writes the file, and the action handler returns, but the development server usually (though not always) never comes back with the response page. This is the text-to-speech code:
string threadMessage = null;
bool returnValue = true;
var t = new System.Threading.Thread(() =>
{
    try
    {
        SpeechEngine.SetOutputToWaveFile(wavFilePath);
        SpeechEngine.Speak(text);
        SpeechEngine.SetOutputToNull();
    }
    catch (Exception exception)
    {
        threadMessage = "Error doing text to speech to file: " + exception.Message;
        returnValue = false;
    }
});
t.Start();
t.Join();
if (!returnValue)
{
    message = threadMessage;
    return returnValue;
}
I saw a couple of posts for a similar problem in a service that advised doing the operation in a thread, hence the above thread.
Actually, using the SpeechSynthesizer for other things can hang as well. I had a page that just enumerated the voices, and it would get stuck too. Since there is no user code in any of the threads when I pause the debugger, I have no clue how to debug it.
I've tried disposing the SpeechSynthesizer object afterwards and calling SetOutputToDefaultVoice, to no avail. I've tried it on both Windows 8.1 and Windows 8, running with the development server under the debugger, and running IIS Express separately.
Any ideas? Is there other information I could give that would be helpful?
Thanks.
-John
Try
public static string Speak(string wavFilePath, string text)
{
    using (var synthesizer = new SpeechSynthesizer())
    {
        synthesizer.SetOutputToWaveFile(wavFilePath);
        synthesizer.Speak(text);
        return wavFilePath;
    }
}

var outputFile = Task.Run(() => Speak("path", "text")).Result;
It worked for me in IIS Express
