Create a correct WAV audio format to send to the Azure Cognitive Services Speech/Translation SDK with MediaRecorder and SignalR

Instead of directly using the Azure Cognitive Services JS SDK on my web page, I need to send my recorded sound to my server through SignalR, apply some logic, and then pass the audio to the Translation SDK.
To stream the audio from the client I'm using the JS MediaRecorder:
const blobToBase64 = function (blob, callback) {
    const reader = new FileReader();
    reader.onload = function () {
        const dataUrl = reader.result;
        const base64 = dataUrl.split(',')[1];
        callback(base64);
    };
    reader.readAsDataURL(blob);
};
const startUploading = async () => {
    const subject = new signalR.Subject();
    const connection = new signalR.HubConnectionBuilder().withUrl("https://.../myHub").build();
    await connection.start();
    // stream to the hub function as chunks of bytes
    await connection.send("UploadStream", subject);
    navigator.mediaDevices.getUserMedia({
        audio: {
            // Azure Speech SDK supported format
            channelCount: 1,
            sampleSize: 16,
            sampleRate: 16000
        },
        video: false
    })
    .then(stream => {
        const audioTrack = stream.getAudioTracks()[0];
        // making sure the constraints are in place
        audioTrack.applyConstraints({
            channelCount: 1,
            sampleSize: 16,
            sampleRate: 16000
        });
        const mediaRecorder = new MediaRecorder(stream, {
            mimeType: "audio/webm;codecs=pcm",
        });
        mediaRecorder.addEventListener("dataavailable", e => {
            // convert the blob to base64 to send to SignalR as a string
            // (sending the blob directly with "subject.next(e.data)" didn't work)
            blobToBase64(e.data, base64 => {
                subject.next(base64);
            });
        });
        // timeslice = 1000, send every second
        mediaRecorder.start(1000);
        mediaRecorder.addEventListener("stop", () => {
            subject.complete();
        });
    });
}
On my SignalR hub, after receiving each chunk of data and converting it to an array of bytes, I send the chunk to the Azure Translation SDK using an in-memory audioInputStream:
//SignalR Hub function
public async Task UploadStream(IAsyncEnumerable<string> stream)
{
    await foreach (var base64Str in stream)
    {
        var chunk = Convert.FromBase64String(base64Str);
        TranslationServiceInstance.PushAudio(chunk);
    }
}
//TranslationService Class
public class TranslationService
{
    private PushAudioInputStream audioInputStream { get; set; }
    private AudioConfig audioConfig { get; set; }
    private TranslationRecognizer recognizer { get; set; }
    public event EventHandler<byte[]> AudioReceived;
    public event EventHandler<string> TextReceived;

    public TranslationService()
    {
        var translationConfig = SpeechTranslationConfig.FromSubscription("###", "REGION");
        translationConfig.SetProperty(PropertyId.Speech_LogFilename, @"PATH\log.txt");
        translationConfig.SpeechRecognitionLanguage = "en-US";
        translationConfig.AddTargetLanguage("fa");
        translationConfig.VoiceName = "fa-IR-FaridNeural";
        audioInputStream = AudioInputStream.CreatePushStream();
        //been trying to convert to a correct format, but not sure what format the audio chunks are being sent in
        //audioInputStream = AudioInputStream.CreatePushStream(AudioStreamFormat.GetWaveFormatPCM(16000, 32, 1));
        audioConfig = AudioConfig.FromStreamInput(audioInputStream);
        recognizer = new TranslationRecognizer(translationConfig, audioConfig);
    }

    public async Task Start()
    {
        recognizer.Recognized += Recognizer_Recognized;
        recognizer.Synthesizing += Recognizer_Synthesizing;
        await recognizer.StartContinuousRecognitionAsync();
    }

    public void PushAudio(byte[] audioChunk)
    {
        //To make sure the audio chunks are being sent correctly
        using (var fs = new FileStream(@"PATH\sending.wav", FileMode.Append))
        {
            fs.Write(audioChunk, 0, audioChunk.Length);
        }
        audioInputStream.Write(audioChunk, audioChunk.Length);
    }

    private void Recognizer_Synthesizing(object sender, TranslationSynthesisEventArgs e)
    {
        var bytes = e.Result.GetAudio();
        AudioReceived?.Invoke(this, bytes);
    }

    private void Recognizer_Recognized(object sender, TranslationRecognitionEventArgs e)
    {
        TextReceived?.Invoke(this, $"Recognizer_Recognized: {e.Result.Text}");
        File.AppendAllText(@"PATH\result.txt", e.Result.Text);
    }
}
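For completeness, the service above is started roughly like this (a sketch; the event handlers shown are illustrative, and TranslationServiceInstance in the hub is assumed to be a shared instance of this class):

// Sketch of wiring up the TranslationService used by the hub (illustrative).
var translationService = new TranslationService();
translationService.TextReceived += (s, text) => Console.WriteLine(text);
translationService.AudioReceived += (s, audio) => { /* forward synthesized audio to clients */ };
await translationService.Start();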
The issue is that the recognizer doesn't detect the sound and complains about the audio format.
Questions:
How can I make sure I'm sending the right format from the browser to the SignalR hub?
How do I know what the metadata of the stream is, so I can either reject it or convert it to the format the Speech SDK expects?
My assumption was that when sending raw data from the browser, the recognizer would automatically convert it to the desired format, but it seems I'm missing something.
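One likely culprit, for anyone hitting the same wall: MediaRecorder's audio/webm;codecs=pcm output is PCM inside a WebM container, not raw WAV/PCM, while the default push stream expects 16 kHz, 16-bit, mono PCM. A hedged sketch of two directions, assuming a recent Speech SDK (the compressed-format path requires GStreamer on the server; neither is verified against this exact setup):

// Option 1 (sketch): declare the pushed audio as a containerized stream and let
// the SDK demux it. Requires GStreamer installed on the server host.
audioInputStream = AudioInputStream.CreatePushStream(
    AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.ANY));
audioConfig = AudioConfig.FromStreamInput(audioInputStream);

// Option 2 (sketch): transcode the WebM chunks to raw PCM server-side (e.g. with
// ffmpeg) before pushing, and declare the explicit PCM format. Note 16-bit
// samples, not 32, to match the sampleSize: 16 constraint in the browser.
// audioInputStream = AudioInputStream.CreatePushStream(
//     AudioStreamFormat.GetWaveFormatPCM(16000, 16, 1));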

Related

How to implement asynchronous data streaming in a .NET Core Service Bus-triggered Azure Function processing huge data, so as not to get an OutOfMemoryException?

I have a Service Bus-triggered Azure Function which listens for messages that contain just the blob URL of a JSON file, each of which is at least 10 MB.
The queue is fed in near real time (if I use the correct term), so producers keep putting messages on the queue frequently enough that there is always data waiting to be processed.
I have designed a solution, but it gets an OutOfMemoryException most of the time. The steps involved in the current solution, sequentially, are:
Consume a message
Download the file from the URL within the consumed message to a temporary folder
Read the whole file as a string
Deserialize it to an object
Partition into the chunks to supply Mongo bulk upsert limit
Bulk upsert to Mongo
I have tried to solve the OutOfMemoryException, and I think it happens because my function/consumer doesn't keep the same pace as the producer: at time t1 it gets the first message and processes it, and while it is upserting to Mongo the function keeps getting messages, which accumulate in memory waiting to be upserted.
Is my reasoning right?
Thus I think that if I could implement a streaming solution starting from step 3, reading the file in chunks and feeding them into a stream, I would prevent memory from growing and also reduce processing time. I have mostly a Java background, and I know that with a custom iterator/spliterator/iterable it is possible to do streaming and asynchronous processing there.
How can I do asynchronous data streaming with .NET Core in an Azure Function? (A rough sketch follows the code below.)
Are there other approaches to solve this problem?
namespace x.y.Z
{
    public class MyFunction
    {
        //...
        [FunctionName("my-func")]
        public async Task Run([ServiceBusTrigger("my-topic", "my-subscription", Connection = "AzureServiceBus")] string message, ILogger log, ExecutionContext context)
        {
            var data = new PredictionMessage();
            try
            {
                data = myPredictionService.genericDeserialize(message);
                await myPredictionService.ValidateAsync(data);
                await myPredictionService.AddAsync(data);
            }
            catch (Exception ex)
            {
                //...
            }
        }
    }
}
public class PredictionMessage
{
    public string BlobURL { get; set; }
}
namespace x.y.z.Prediction
{
    public abstract class BasePredictionService<T> : IBasePredictionService<T> where T : PredictionMessage, new()
    {
        protected readonly ILogger log;
        private static JsonSerializer serializer;

        public BasePredictionService(ILogger<BasePredictionService<T>> log)
        {
            this.log = log;
            serializer = new JsonSerializer();
        }

        public async Task ValidateAsync(T message)
        {
            //...
        }

        public T genericDeserialize(string message)
        {
            return JsonConvert.DeserializeObject<T>(message);
        }

        public virtual Task AddAsync(T message)
        {
            throw new System.NotImplementedException();
        }

        public async Task<string> SerializePredictionResult(T message)
        {
            var result = string.Empty;
            using (WebClient client = new WebClient())
            {
                var tempPath = Path.Combine(Path.GetTempPath(), DateTime.Now.Ticks + ".json");
                Uri srcPath = new Uri(message.BlobURL);
                await client.DownloadFileTaskAsync(srcPath, tempPath);
                using (FileStream fs = File.Open(tempPath, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    using (BufferedStream bs = new BufferedStream(fs))
                    using (StreamReader sr = new StreamReader(bs))
                    {
                        result = sr.ReadToEnd();
                    }
                }
                Task.Run(() =>
                {
                    File.Delete(tempPath);
                });
                return result;
            }
        }

        protected TType StreamDataDeserialize<TType>(string streamResult)
        {
            var body = default(TType);
            using (MemoryStream stream = new MemoryStream(Encoding.Default.GetBytes(streamResult)))
            {
                using (StreamReader streamReader = new StreamReader(stream))
                {
                    body = (TType)serializer.Deserialize(streamReader, typeof(TType));
                }
            }
            return body;
        }

        protected List<List<TType>> Split<TType>(List<TType> list, int chunkSize = 1000)
        {
            List<List<TType>> retVal = new List<List<TType>>();
            while (list.Count > 0)
            {
                int count = list.Count > chunkSize ? chunkSize : list.Count;
                retVal.Add(list.GetRange(0, count));
                list.RemoveRange(0, count);
            }
            return retVal;
        }
    }
}
namespace x.y.z.Prediction
{
    public class MyPredictionService : BasePredictionService<PredictionMessage>, IMyPredictionService
    {
        private readonly IMongoDBRepository<MyPrediction> repository;

        public MyPredictionService(IMongoDBRepoFactory mongoDBRepoFactory, ILogger<MyPredictionService> log) : base(log)
        {
            repository = mongoDBRepoFactory.GetRepo<MyPrediction>();
        }

        public override async Task AddAsync(PredictionMessage message)
        {
            string streamResult = await base.SerializePredictionResult(message);
            var body = base.StreamDataDeserialize<List<MyPrediction>>(streamResult);
            if (body != null && body.Count > 0)
            {
                var chunkList = base.Split(body);
                await BulkUpsertProcess(chunkList);
            }
        }

        private async Task BulkUpsertProcess(List<List<MyPrediction>> chunkList)
        {
            foreach (var perChunk in chunkList)
            {
                var filterContainers = new List<IDictionary<string, object>>();
                var updateContainer = new List<IDictionary<string, object>>();
                foreach (var item in perChunk)
                {
                    var filter = new Dictionary<string, object>();
                    var update = new Dictionary<string, object>();
                    filter.Add(/*...*/);
                    filterContainers.Add(filter);
                    update.Add(/*...*/);
                    updateContainer.Add(update);
                }
                await Task.Run(async () =>
                {
                    await repository.BulkUpsertAsync(filterContainers, updateContainer);
                });
            }
        }
    }
}
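A rough sketch of the streaming idea, under stated assumptions rather than a drop-in replacement: instead of downloading the blob to a temp file and reading it into one giant string, stream the HTTP response and deserialize array elements one at a time with Json.NET's JsonTextReader, upserting a fixed-size batch as soon as it fills, so only one batch is ever held in memory. The MyPrediction type and the BulkUpsertAsync-style call are taken from the question; everything else is illustrative:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class StreamingUpserter
{
    private static readonly HttpClient http = new HttpClient();
    private static readonly JsonSerializer serializer = new JsonSerializer();

    public async Task UpsertFromBlobAsync(string blobUrl, Func<List<MyPrediction>, Task> bulkUpsert, int batchSize = 1000)
    {
        // ResponseHeadersRead avoids buffering the whole body before we start reading.
        using (var response = await http.GetAsync(blobUrl, HttpCompletionOption.ResponseHeadersRead))
        using (var stream = await response.Content.ReadAsStreamAsync())
        using (var streamReader = new StreamReader(stream))
        using (var jsonReader = new JsonTextReader(streamReader))
        {
            var batch = new List<MyPrediction>(batchSize);
            while (await jsonReader.ReadAsync())
            {
                // Each StartObject token inside the top-level array is one item.
                if (jsonReader.TokenType == JsonToken.StartObject)
                {
                    batch.Add(serializer.Deserialize<MyPrediction>(jsonReader));
                    if (batch.Count >= batchSize)
                    {
                        await bulkUpsert(batch); // e.g. wraps repository.BulkUpsertAsync(...)
                        batch.Clear();
                    }
                }
            }
            if (batch.Count > 0)
                await bulkUpsert(batch);
        }
    }
}

On the accumulation side, it may also help to cap how much work the function takes on at once; for the Service Bus trigger, the maxConcurrentCalls and prefetchCount settings in host.json limit how many messages are in flight per instance.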

How to save an image in SQLite in Xamarin.Forms?

I have the following two methods that handle taking photos with the camera and picking photos from the library. They're both similar methods: at the end of each, I get an ImageSource back from the Stream and pass it on to another page, which has an ImageSource binding ready to be set. These two methods work perfectly. The next step is to save the image in SQLite so I can show the images in a ListView later on. My question for the XamGods (Xamarin pros =): what is the best way to save an image in SQLite in 2019? I have been in the forums for hours and I still don't have a clear picture of what I want to do. I can either:
Convert the Stream into an array of bytes to save in SQLite.
Convert the ImageSource into an array of bytes (messy/buggy).
Somehow retrieve the actual image selected/taken and convert that into an array of bytes for SQLite.
I'm sorry if my question is general, but Xamarin does not provide a clear-cut solution for saving images in SQLite, and you can only find bits and pieces of solutions throughout the forums listed below.
How to save and retrieve Image from Sqlite
Load Image from byte[] array.
Creating a byte array from a stream
Thank you in advance!
private async Task OnAddPhotoFromCameraSelected()
{
    Console.WriteLine("OnAddPhotoFromCameraSelected");
    var photo = await Plugin.Media.CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions() { });
    var stream = photo.GetStream();
    photo.Dispose();
    if (stream != null)
    {
        ImageSource cameraPhotoImage = ImageSource.FromStream(() => stream);
        var parms = new NavigationParameters();
        parms.Add("image", cameraPhotoImage);
        var result = await NavigationService.NavigateAsync("/AddInspectionPhotoPage?", parameters: parms);
        if (!result.Success)
        {
            throw result.Exception;
        }
    }
}

private async Task OnAddPhotoFromLibrarySelected()
{
    Console.WriteLine("OnAddPhotoFromLibrarySelected");
    Stream stream = await DependencyService.Get<IPhotoPickerService>().GetImageStreamAsync();
    if (stream != null)
    {
        ImageSource selectedImage = ImageSource.FromStream(() => stream);
        var parms = new NavigationParameters();
        parms.Add("image", selectedImage);
        parms.Add("stream", stream);
        var result = await NavigationService.NavigateAsync("/AddInspectionPhotoPage?", parameters: parms);
        if (!result.Success)
        {
            throw result.Exception;
        }
    }
}
As Jason said, you can save the image path in the SQLite database, but if you still want to save a byte[] in SQLite, you first need to convert the stream into a byte[]:
private byte[] GetImageBytes(Stream stream)
{
    byte[] ImageBytes;
    using (var memoryStream = new System.IO.MemoryStream())
    {
        stream.CopyTo(memoryStream);
        ImageBytes = memoryStream.ToArray();
    }
    return ImageBytes;
}
Then load the byte[] from SQLite and convert it back into a stream:
public Stream BytesToStream(byte[] bytes)
{
    Stream stream = new MemoryStream(bytes);
    return stream;
}
For a simple sample, you can take a look.
Inserting a byte[] into SQLite:
private void insertdata()
{
    var path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "sqlite1.db3");
    using (var con = new SQLiteConnection(path))
    {
        Image image = new Image();
        // ConvertStreamtoByte is a stream-to-byte[] helper like GetImageBytes above
        image.Content = ConvertStreamtoByte();
        var result = con.Insert(image);
        sl.Children.Add(new Label() { Text = result > 0 ? "insert successful" : "insert failed" });
    }
}
Loading image from sqlite:
private void getdata()
{
    var path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "sqlite1.db3");
    using (var con = new SQLiteConnection(path))
    {
        var image = con.Query<Image>("SELECT content FROM Image;").FirstOrDefault();
        if (image != null)
        {
            byte[] b = image.Content;
            Stream ms = new MemoryStream(b);
            image1.Source = ImageSource.FromStream(() => ms);
        }
    }
}
Model:
public class Image
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string FileName { get; set; }
    public byte[] Content { get; set; }
}
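And if you go with Jason's suggestion instead, a minimal sketch of the path-based approach (the Guid-based file name and the helper are illustrative assumptions): write the bytes to local app storage and store only the path, e.g. in the FileName column of the model above.

// Sketch: persist the image to local storage and keep only its path in SQLite
// (SaveImageToFile and the file naming scheme are illustrative).
private string SaveImageToFile(byte[] imageBytes)
{
    var fileName = Guid.NewGuid() + ".png";
    var filePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), fileName);
    File.WriteAllBytes(filePath, imageBytes);
    return filePath; // store this string instead of the byte[] Content
}

Later, the stored path loads directly with ImageSource.FromFile(filePath).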

Read ASP.NET Core response body in an ActionFilterAttribute

I'm using ASP.NET Core as a REST API service.
I need access to the request and response in an ActionFilter. Actually, I found the request in OnActionExecuted, but I can't read the response result.
I'm trying to return a value as follows:
[HttpGet]
[ProducesResponseType(typeof(ResponseType), (int)HttpStatusCode.OK)]
[Route("[action]")]
public async Task<IActionResult> Get(CancellationToken cancellationToken)
{
    var model = await _responseServices.Get(cancellationToken);
    return Ok(model);
}
And in the ActionFilter's OnActionExecuted method:
_request = context.HttpContext.Request.ReadAsString().Result;
_response = context.HttpContext.Response.ReadAsString().Result; //?
I'm trying to read the response in a ReadAsString extension method, as follows:
public static async Task<string> ReadAsString(this HttpResponse response)
{
    var initialBody = response.Body;
    var buffer = new byte[Convert.ToInt32(response.ContentLength)];
    await response.Body.ReadAsync(buffer, 0, buffer.Length);
    var body = Encoding.UTF8.GetString(buffer);
    response.Body = initialBody;
    return body;
}
But there is no result!
How can I get the response in OnActionExecuted?
Thanks, everyone, for taking the time to try and help explain.
If you're logging a JSON result or view result, you don't need to read the whole response stream. Simply serialize context.Result:
public class MyFilterAttribute : ActionFilterAttribute
{
    private ILogger<MyFilterAttribute> logger;

    public MyFilterAttribute(ILogger<MyFilterAttribute> logger)
    {
        this.logger = logger;
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        var result = context.Result;
        if (result is JsonResult json)
        {
            var x = json.Value;
            var status = json.StatusCode;
            this.logger.LogInformation(JsonConvert.SerializeObject(x));
        }
        if (result is ViewResult view)
        {
            // I think it's better to log ViewData instead of the final rendered template string
            var status = view.StatusCode;
            var x = view.ViewData;
            var name = view.ViewName;
            this.logger.LogInformation(JsonConvert.SerializeObject(x));
        }
        else
        {
            this.logger.LogInformation("...");
        }
    }
}
I know there is already an answer, but I want to add that the problem is that the MVC pipeline has not yet populated Response.Body when an ActionFilter runs, so you cannot access it. Response.Body is populated by the MVC middleware.
If you want to read Response.Body, you need to create your own custom middleware to intercept the call once the Response object has been populated. There are numerous websites that can show you how to do this. One example is here.
As discussed in the other answer, if you want to do it in an ActionFilter you can use context.Result to access the information.
For logging the whole request and response in the ASP.NET Core filter pipeline, you can use a result filter attribute:
public class LogRequestResponseAttribute : TypeFilterAttribute
{
    public LogRequestResponseAttribute() : base(typeof(LogRequestResponseImplementation)) { }

    private class LogRequestResponseImplementation : IAsyncResultFilter
    {
        public async Task OnResultExecutionAsync(ResultExecutingContext context, ResultExecutionDelegate next)
        {
            var requestHeadersText = CommonLoggingTools.SerializeHeaders(context.HttpContext.Request.Headers);
            Log.Information("requestHeaders: " + requestHeadersText);
            var requestBodyText = await CommonLoggingTools.FormatRequestBody(context.HttpContext.Request);
            Log.Information("requestBody: " + requestBodyText);
            await next();
            var responseHeadersText = CommonLoggingTools.SerializeHeaders(context.HttpContext.Response.Headers);
            Log.Information("responseHeaders: " + responseHeadersText);
            var responseBodyText = await CommonLoggingTools.FormatResponseBody(context.HttpContext.Response);
            Log.Information("responseBody: " + responseBodyText);
        }
    }
}
In Startup.cs add
app.UseMiddleware<ResponseRewindMiddleware>();
services.AddScoped<LogRequestResponseAttribute>();
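Then apply the attribute to any controller or action you want logged, for example (the controller name here is illustrative):

// Applying the filter to a controller (ValuesController is an illustrative name).
[LogRequestResponse]
public class ValuesController : ControllerBase
{
    // ...
}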
Somewhere, add this static class:
public static class CommonLoggingTools
{
    public static async Task<string> FormatRequestBody(HttpRequest request)
    {
        //This line allows us to set the reader for the request back to the beginning of its stream.
        request.EnableRewind();
        //We now need to read the request stream. First, we create a new byte[] with the same length as the request stream...
        var buffer = new byte[Convert.ToInt32(request.ContentLength)];
        //...Then we copy the entire request stream into the new buffer.
        await request.Body.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false);
        //We convert the byte[] into a string using UTF8 encoding...
        var bodyAsText = Encoding.UTF8.GetString(buffer);
        //...and finally, rewind the request body, which is allowed because of EnableRewind()
        request.Body.Position = 0;
        return $"{request.Scheme} {request.Host}{request.Path} {request.QueryString} {bodyAsText}";
    }

    public static async Task<string> FormatResponseBody(HttpResponse response)
    {
        //We need to read the response stream from the beginning...
        response.Body.Seek(0, SeekOrigin.Begin);
        //...and copy it into a string
        string text = await new StreamReader(response.Body).ReadToEndAsync();
        //We need to reset the reader for the response so that the client can read it.
        response.Body.Seek(0, SeekOrigin.Begin);
        //Return the string for the response, including the status code (e.g. 200, 404, 401, etc.)
        return $"{response.StatusCode}: {text}";
    }

    public static string SerializeHeaders(IHeaderDictionary headers)
    {
        var dict = new Dictionary<string, string>();
        foreach (var item in headers.ToList())
        {
            var header = string.Empty;
            foreach (var value in item.Value)
            {
                header += value + " ";
            }
            // Trim the trailing space and add the item to the dictionary
            header = header.TrimEnd(" ".ToCharArray());
            dict.Add(item.Key, header);
        }
        return JsonConvert.SerializeObject(dict, Formatting.Indented);
    }
}
public class ResponseRewindMiddleware
{
    private readonly RequestDelegate next;

    public ResponseRewindMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        Stream originalBody = context.Response.Body;
        try
        {
            using (var memStream = new MemoryStream())
            {
                context.Response.Body = memStream;
                await next(context);
                //memStream.Position = 0;
                //string responseBody = new StreamReader(memStream).ReadToEnd();
                memStream.Position = 0;
                await memStream.CopyToAsync(originalBody);
            }
        }
        finally
        {
            context.Response.Body = originalBody;
        }
    }
}
You can also do...
string response = "Hello";
if (result is ObjectResult objectResult)
{
    var status = objectResult.StatusCode;
    var value = objectResult.Value;
    var stringResult = objectResult.ToString();
    response = JsonConvert.SerializeObject(value);
}
I used this in a .NET Core app.
Hope it helps.

Making a screen capture in Xamarin.Forms

Is there a package that does screen capture in Xamarin.Forms?
I also need to capture Google Maps screenshots.
Check out this blog post by Daniel Hindrikes.
I'm going to assume that you use a PCL for your shared code.
You will need to create an interface in your PCL. He calls it IScreenshotManager. The declaration looks like this:
public interface IScreenshotManager
{
    Task<byte[]> CaptureAsync();
}
Now all platforms will have their own implementation of it.
For iOS:
public class ScreenshotManager : IScreenshotManager
{
    public async System.Threading.Tasks.Task<byte[]> CaptureAsync()
    {
        var view = UIApplication.SharedApplication.KeyWindow.RootViewController.View;
        UIGraphics.BeginImageContext(view.Frame.Size);
        view.DrawViewHierarchy(view.Frame, true);
        var image = UIGraphics.GetImageFromCurrentImageContext();
        UIGraphics.EndImageContext();
        using (var imageData = image.AsPNG())
        {
            var bytes = new byte[imageData.Length];
            System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, bytes, 0, Convert.ToInt32(imageData.Length));
            return bytes;
        }
    }
}
For Android:
public class ScreenshotManager : IScreenshotManager
{
    public static Activity Activity { get; set; }

    public async System.Threading.Tasks.Task<byte[]> CaptureAsync()
    {
        if (Activity == null)
        {
            throw new Exception("You have to set ScreenshotManager.Activity in your Android project");
        }
        var view = Activity.Window.DecorView;
        view.DrawingCacheEnabled = true;
        Bitmap bitmap = view.GetDrawingCache(true);
        byte[] bitmapData;
        using (var stream = new MemoryStream())
        {
            bitmap.Compress(Bitmap.CompressFormat.Png, 0, stream);
            bitmapData = stream.ToArray();
        }
        return bitmapData;
    }
}
And for Windows Phone:
public class ScreenshotManager : IScreenshotManager
{
    public async Task<byte[]> CaptureAsync()
    {
        var rootFrame = Application.Current.RootVisual as PhoneApplicationFrame;
        var screenImage = new WriteableBitmap((int)rootFrame.ActualWidth, (int)rootFrame.ActualHeight);
        screenImage.Render(rootFrame, new MatrixTransform());
        screenImage.Invalidate();
        using (var stream = new MemoryStream())
        {
            screenImage.SaveJpeg(stream, screenImage.PixelWidth, screenImage.PixelHeight, 0, 100);
            var bytes = stream.ToArray();
            return bytes;
        }
    }
}
Don't forget to register your platform-specific implementations with the attribute that registers them with the DependencyService, like this:
[assembly: Xamarin.Forms.Dependency (typeof (ScreenshotManager))]
It goes above the namespace declaration.
Now from your shared code you can get the byte[] of a screenshot with a call like this:
var screenshotBytes = await DependencyService.Get<IScreenshotManager>().CaptureAsync();
You probably want to check that DependencyService.Get<IScreenshotManager>() isn't null before using it.
After that you can turn your byte[] into an image and do whatever you like with it!
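For example, displaying the captured bytes in a Xamarin.Forms Image could look like this (a sketch; the screenshotImage element name is an assumption):

// Display the captured screenshot bytes (screenshotImage is an assumed Image element).
var screenshotBytes = await DependencyService.Get<IScreenshotManager>().CaptureAsync();
screenshotImage.Source = ImageSource.FromStream(() => new MemoryStream(screenshotBytes));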
Implementation for UWP
public async Task<byte[]> CaptureAsync()
{
    //create and capture the Window
    var renderTargetBitmap = new RenderTargetBitmap();
    await renderTargetBitmap.RenderAsync(Window.Current.Content);
    var pixelBuffer = await renderTargetBitmap.GetPixelsAsync();
    var logicalDpi = DisplayInformation.GetForCurrentView().LogicalDpi;
    IRandomAccessStream stream = new InMemoryRandomAccessStream();
    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
    encoder.BitmapTransform.InterpolationMode = BitmapInterpolationMode.Fant;
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Straight, (uint)renderTargetBitmap.PixelWidth, (uint)renderTargetBitmap.PixelHeight, logicalDpi, logicalDpi, pixelBuffer.ToArray());
    await encoder.FlushAsync();
    //rewind the stream before reading the encoded PNG back out
    stream.Seek(0);
    byte[] resultingBuffer = new byte[stream.Size];
    await stream.ReadAsync(resultingBuffer.AsBuffer(), (uint)resultingBuffer.Length, InputStreamOptions.None);
    return resultingBuffer;
}

'Server-sent events' sent with ASP.NET Web API do not arrive?

I created a test source which should send a message to the client at a regular interval. This is the ApiController:
public class TestSourceController : ApiController
{
    private static readonly ConcurrentQueue<StreamWriter> ConnectedClients = new ConcurrentQueue<StreamWriter>();

    [AllowAnonymous]
    [Route("api/sources/test")]
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse();
        response.Content = new PushStreamContent((Action<Stream, HttpContent, TransportContext>)OnStreamAvailable,
            "text/event-stream");
        return response;
    }

    private static void OnStreamAvailable(Stream stream, HttpContent headers, TransportContext context)
    {
        var clientStream = new StreamWriter(stream);
        ConnectedClients.Enqueue(clientStream);
    }

    private static void DoThings()
    {
        const string outboundMessage = "Test";
        foreach (var clientStream in ConnectedClients)
        {
            clientStream.WriteLine("data:" + JsonConvert.SerializeObject(outboundMessage));
            clientStream.Flush();
        }
    }
}
clientStream.Flush() is called as expected and without exceptions.
I handle it in AngularJS like this:
$scope.handleServerCallback = function (data) {
    console.log(data);
    $scope.$apply(function () {
        $scope.serverData = data;
    });
};
$scope.listen = function () {
    $scope.eventSource = new window.EventSource("http://localhost:18270/api/sources/test");
    $scope.eventSource.onmessage = $scope.handleServerCallback;
    $scope.eventSource.onopen = function () { console.log("Opened source"); };
    $scope.eventSource.onerror = function (e) { console.error(e); };
};
$scope.listen();
My guess is that it's a problem with the server, since in the Chrome debugger I can see that the "EventStream" from the test call is empty.
Does anyone know how to make sure the messages arrive at the client?
The solution was quite easy: according to the spec, every line has to end with "\n", and the very last line of an event with "\n\n".
So:
clientStream.WriteLine("data:" + JsonConvert.SerializeObject(outboundMessage) + "\n\n");
Solves it.
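For reference, a minimal helper that writes one well-formed event (a sketch; the helper name is illustrative, and the spec also allows optional event: and id: fields):

// Writes one well-formed SSE event: the data line plus a blank-line terminator,
// which is why the payload is followed by "\n\n". (WriteSseEvent is an
// illustrative helper name, not part of the original code.)
private static void WriteSseEvent(StreamWriter clientStream, object payload)
{
    clientStream.Write("data: " + JsonConvert.SerializeObject(payload) + "\n\n");
    clientStream.Flush();
}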
