Pause and Resume in Microphone: Windows Phone 7 - ASP.NET

I am a beginner in Windows Phone. I created the recorder example and ran it successfully on Windows Phone 7, but now I have to add pause and resume functionality to my application.
Note: I used the Microphone class for the recording.
How can I add pause and resume functionality to the Microphone while recording?
Or suggest any alternative solution for recording on Windows Phone.
Here is my code:
Microphone mphone;
List<byte[]> memobuffercollection = new List<byte[]>();
DynamicSoundEffectInstance playback;
private void BtnRecords_Click(object sender, RoutedEventArgs e)
{
// Clear the collection for storing the buffers
memobuffercollection.Clear();
// Stop any playback in Progress
playback.Stop();
// Start Recording
mphone.Start();
BtnStop.Opacity = 1;
BtnRecords.Opacity = 0;
}
private void BtnStop_Click(object sender, RoutedEventArgs e)
{
StopRecording();
BtnStop.Opacity = 0;
BtnRecords.Opacity = 1;
}
void StopRecording()
{
// Get the last partial buffer
int sampleSize = mphone.GetSampleSizeInBytes(mphone.BufferDuration);
byte[] extraBuffer = new byte[sampleSize];
int extraBytes = mphone.GetData(extraBuffer);
// Stop Recording
mphone.Stop();
// Create MemoInfo object and add at top of collection
int totalSize = memobuffercollection.Count * sampleSize + extraBytes;
TimeSpan duration = mphone.GetSampleDuration(totalSize);
MemoInfo memoInfo = new MemoInfo(DateTime.UtcNow, totalSize, duration);
memofiles.Insert(0, memoInfo);
// Save Data in IsolatedStorage
using (IsolatedStorageFile storage = IsolatedStorageFile.GetUserStoreForApplication())
{
using (IsolatedStorageFileStream stream = storage.CreateFile(memoInfo.FileName))
{
// Write buffers from collection
foreach (byte[] buffer in memobuffercollection)
stream.Write(buffer, 0, buffer.Length);
// Write partial buffer
stream.Write(extraBuffer, 0, extraBytes);
}
}
memosListBox.UpdateLayout();
memosListBox.ScrollIntoView(memoInfo);
}
And MemoInfo is my class, which is used to give a title to the recorded audio.

If you look into the documentation of Microphone, you will find that there is no Pause() or Resume() method in the Microphone class. Only playback has pause and resume features (read this).
The only way to pause and resume is to stop recording, save the audio recorded so far, and start recording a new segment when you "resume". Finally, combine the audio segments into one file.
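A minimal sketch of that approach on top of the code above, assuming the usual BufferReady handler (not shown in the question) that copies each filled buffer into memobuffercollection; the BtnPause_Click/BtnResume_Click handlers and their names are hypothetical:
// Hypothetical pause/resume: "pause" stops the Microphone but keeps the
// buffers collected so far, "resume" starts it again so new buffers are
// appended, and the existing StopRecording() still writes everything out
// as a single file.
void mphone_BufferReady(object sender, EventArgs e)
{
// Assumed handler: copy each filled buffer into the collection
byte[] buffer = new byte[mphone.GetSampleSizeInBytes(mphone.BufferDuration)];
mphone.GetData(buffer);
memobuffercollection.Add(buffer);
}
private void BtnPause_Click(object sender, RoutedEventArgs e)
{
// Note: any partial buffer at the moment of pausing is discarded in this sketch
mphone.Stop();
}
private void BtnResume_Click(object sender, RoutedEventArgs e)
{
// Do NOT clear memobuffercollection; new buffers are appended after the old ones
mphone.Start();
}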
Related question (though it's for Windows Phone 8): how to enable pause and resume in audio recorder in windows phone 8? (Details Inside)

Related

Firebase Dynamic Links in Unity

Where should I put this chunk of code so that the listener function works every time I enter the app from the deep link?
Right now my Unity mobile app has this code in the initial load, however it does not work well.
The case of entering the app from the deep link for the first time is not handled. The listener function only works when I click the deep link after the initial load (since the listener is already set by then).
Is there any solution to this issue?
void Start()
{
DynamicLinks.DynamicLinkReceived += OnDynamicLink;
}
// Display the dynamic link received by the application.
void OnDynamicLink(object sender, EventArgs args)
{
var dynamicLinkEventArgs = args as ReceivedDynamicLinkEventArgs;
Debug.LogFormat("Received dynamic link {0}", dynamicLinkEventArgs.ReceivedDynamicLink.Url.OriginalString);
}
Test this, feel free to edit.
void Start()
{
StartCoroutine(WaitLoader()); //Loader
}
public void trueAwoken()
{
DynamicLinks.DynamicLinkReceived += OnDynamicLink;
}
public IEnumerator WaitLoader()
{
int i = 0;
while (i < 5) // Potential for true load state 5 (increase this 0-1000+?, it depends on your build?)
{
i++;
//Debug.Log("Waiting: " + i + "/" + Time.deltaTime);
yield return new WaitForFixedUpdate();
}
trueAwoken();
}

Serial Connection (Arduino --> Java)

This will be my first post, and I will do my best to be clear and concise. I've checked some of the other posts on this forum but was unable to find a satisfactory answer.
My question pertains to the use of JavaFX and the jSSC (Java Simple Serial Connector) library. I've designed a very simple GUI application that will host four different charts. Two of the charts will display readings from temperature and solar sensors for the past hour, while the other two display that data over an extended period -- a 14-hour window. Eventually I would like to make that more flexible and set the application to "sleep" when the readings become roughly zero (night).
How can I stream the data so that it is displayed in real time?
After referencing several sources online and "JavaFX 8: Introduction by Example", I've been able to construct most of the serial connection class. I'm having trouble processing the data readings so that they can be displayed on the charts.
public class SerialComm implements SerialPortEventListener {
Date time = new Date();
SimpleDateFormat sdf = new SimpleDateFormat("mm");
boolean connected;
StringBuilder sb;
private SerialPort serialPort;
final StringProperty line = new SimpleStringProperty("");
//Not sure this is necessary
private static final String [] PORT_NAMES = {
"/dev/tty.usbmodem1411", // Mac OS X
"COM11", // Windows
};
//Baud rate of communication transfer with serial device
public static final int DATA_RATE = 9600;
//Create a connection with the serial device
public boolean connect() {
String [] ports = SerialPortList.getPortNames();
//First, Find an instance of serial port as set in PORT_NAMES.
for (String port : ports) {
System.out.print("Ports: " + port);
serialPort = new SerialPort(port);
}
if (serialPort == null) {
System.out.println("Could not find device.");
return false;
}
//Operation to perform if a port is found
try {
// open serial port
if(serialPort.openPort()) {
System.out.println("Connected");
// set port parameters
serialPort.setParams(DATA_RATE,
SerialPort.DATABITS_8,
SerialPort.STOPBITS_1,
SerialPort.PARITY_NONE);
serialPort.setEventsMask(SerialPort.MASK_RXCHAR);
serialPort.addEventListener(event -> {
if(event.isRXCHAR()) {
try {
sb.append(serialPort.readString(event.getEventValue()));
String str = sb.toString();
if(str.endsWith("\r\n")) {
line.set(Long.toString(time.getTime()).concat(":").concat(
str.substring(0, str.indexOf("\r\n"))));
System.out.println("line" + line);
sb = new StringBuilder();
}
} catch (SerialPortException ex) {
Logger.getLogger(SerialComm.class.getName()).log(Level.SEVERE, null, ex); }
}
});
}
} catch (Exception e) {
System.out.println("ErrOr");
e.printStackTrace();
System.err.println(e.toString());
}
return serialPort != null;
}
@Override
public void serialEvent(SerialPortEvent spe) {
throw new UnsupportedOperationException("Not supported yet.");
}
public StringProperty getLine() {
return line;
}
}
Within the try block I understand the port parameters, but the event listener is where I am having difficulty. The purpose of the StringBuilder is to append the new data as it is read from the device.
How will I account for the two sensor readings? Would I do that by creating separate data rates to differentiate between the incoming data from each sensor?
I hope that this is clear and that I've provided enough information but not too much. Thank you for any assistance.
-------------------------------UPDATE--------------------------
Since your reply, Jose, I've started to make the additions to my code. Adding the listener within the JavaFX class, I'm running into some issues: I keep getting a NullPointerException, which I believe is the String[] data not being initialized with any data from the SerialCommunication class.
serialPort.addEventListener(event -> {
if(event.isRXCHAR()) {
try {
sb.append(serialPort.readString(event.getEventValue()));
String str = sb.toString();
if(str.endsWith("\r\n")) {
line.set(Long.toString(time.getTime()).concat(":").concat(
str.substring(0, str.indexOf("\r\n"))));
System.out.println("line" + line);
sb = new StringBuilder();
}
} catch (SerialPortException ex) {
Logger.getLogger(SerialComm.class.getName()).log(Level.SEVERE, null, ex);
}
}
});
}
} catch (Exception e) {
System.err.println(e.toString());
}
I'm adding the time to the data being read. As Jose mentioned below, I've added tags to the data variables within the Arduino code; I'm using: Serial.print("Solar:"); Serial.println(solarData);
Rough code for the JavaFX listener:
serialPort.getLine().addListener((ov, t, t1) -> {
Platform.runLater(()-> {
String [] data = t1.split(":");
try {
//data[0] is the timestamp
//data[1] will contain the label printed by arduino "Solar: data"
switch (data[1]) {
case "Solar":
data[0].replace("Solar:" , "");
solarSeries.getData().add(new XYChart.Data(data[0], data[1]));
break;
case "Temperature":
temperatureSeries.getData().add(new XYChart.Data(data[0], data[1]));
break;
}
Is the NullPointerException in this code a result of the String[] data array being uninitialized?
Exception output:
Ports: /dev/tty.usbmodem1411Connected
Exception in thread "EventThread /dev/tty.usbmodem1411" java.lang.NullPointerException
at SerialComm.lambda$connect$0(SerialComm.java:61)
at SerialComm$$Lambda$1/1661773475.serialEvent(Unknown Source)
at jssc.SerialPort$LinuxEventThread.run(SerialPort.java:1299)
The SerialPortEventListener defined in the jSSC library allows listening for serial port events. One of those events is the RXCHAR event, which occurs when the Arduino board sends some data and bytes arrive in the input buffer.
event.getEventValue() returns an int with the byte count, and serialPort.readString(event.getEventValue()) gets the String form of those bytes.
Note that this method does not return a full line, so you need to watch for the carriage return and line feed characters. Once you find "\r\n", you can take the line and reset the StringBuilder for the next one:
sb.append(serialPort.readString(event.getEventValue()));
String str=sb.toString();
if(str.endsWith("\r\n")){
line.set(str.substring(0,str.indexOf("\r\n")));
sb=new StringBuilder();
}
where line is an observable String:
final StringProperty line=new SimpleStringProperty("");
On the Arduino side, if you want to send values from different sensors at different rates, I suggest you define in the Arduino sketch an identification string for each sensor, and print each value together with the id of its sensor.
For instance, these will be the readings you will get with the serial event listener:
ID1,val1
ID1,val2
ID2,val3
ID1,val4
ID3,val5
...
Finally, on the JavaFX thread, define a listener for changes in line and process the String to get the sensor and the value. Something like this:
serial.getLine().addListener(
(ObservableValue<? extends String> observable, String oldValue, String newValue) -> {
Platform.runLater(()->{
String[] data=newValue.split("\\,");
if(data[0].equals("ID1"){
// add to chart from sensor 1, value data[1];
} else if(data[0].equals("ID2"){
// add to chart from sensor 2, value data[1];
} else if(data[0].equals("ID3"){
// add to chart from sensor 3, value data[1];
}
});
});
Note you need to use Platform.runLater(), since the thread that gets the data from the serial port and updates line is not the JavaFX thread.
From my experience, on the Arduino side, add a comma or some other separator between the different values when you print them; when you receive that string in Java, simply split it by commas.
String[] stringSeparate = str.split(",");

Custom notification sound is playing only if I have a breakpoint - Windows Universal 8.1

Hi, when a push notification is received I display a toast notification. The issue is that the custom notification sound plays only if I have a breakpoint in the function below; otherwise the notification sound does not play. I thought the audio might not be loading, so I added Task.Delay for 2 sec/5 sec, but no luck. What could be the issue?
public static void AddTostNotification(String xmlDocument)
{
List<string> messageSection = PushNotificationHelper.GetMessageAndLandingPage(xmlDocument);
ToastTemplateType toastTemplate = ToastTemplateType.ToastText01;
XmlDocument toastXml = ToastNotificationManager.GetTemplateContent(toastTemplate);
XmlNodeList toastTextElements = toastXml.GetElementsByTagName("text");
toastTextElements[0].AppendChild(toastXml.CreateTextNode(messageSection[0]));
// toastTextElements[1].AppendChild(toastXml.CreateTextNode(message));
IXmlNode toastNode = toastXml.SelectSingleNode("/toast");
((XmlElement)toastNode).SetAttribute("launch", messageSection[1]);
XmlElement audio = toastXml.CreateElement("audio");
audio.SetAttribute("src", "ms-appx:///Assets/Guitar.wav");
toastNode.AppendChild(audio);
//launch toast immediately
ToastNotification toast = new ToastNotification(toastXml);
ToastNotificationManager.CreateToastNotifier().Show(toast);
}
I got it. It was a threading issue; calling the helper directly instead of wrapping it in dispatcher.RunAsync fixed it:
//var ignored = dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
//{
PushNotificationHelper.AddTostNotification(notificationContent);
//});

Encrypted mp3 streaming to Flash SWF

I have an asp.net web site that serves sample MP3 files to client Flash Players (SWF).
These files are downloadable by tons of download tools.
Although only registered members can access the high-quality MP3 samples, my client wants to prevent these low-quality MP3 files from being downloaded by download tools.
So I thought about this solution:
1. Convert these MP3 files to byte arrays on the server side (ASP.NET)
2. Do some bitwise XOR operations (simple encryption)
3. Write this array to the ASPX page's response stream
4. Modify the Flash (.fla) to request this new file/page/aspx
5. Do some bitwise XOR operations in Flash and convert it back to the original MP3 as a byte array (simple decryption)
6. Play the MP3
I was able to succeed up to step 6: I cannot convert this byte array to a Sound object that Flash can play. I did a bit-by-bit comparison of the resulting array on the Flash side and the source array on the ASP.NET side. They are equal.
I'm open to completely different approaches, but I cannot use Flash Media Server. I need to be using Flash AS3 and ASP.NET.
Also very important: the .mp3 must be downloaded/decrypted and played asynchronously (which I could not manage to do).
I agree with Peter Elliot that authentication probably is the easiest way to restrict access to the files. However, if you still need to explore the route of encrypting the files, I thought I'd expand a bit on Alex Vlad's answer.
What you need to do in order to stream the audio file, decrypt it on the fly, and play it asynchronously is to use the URLStream class (docs) in conjunction with the Sound class (docs), while keeping a buffer of the partially downloaded content.
Some pseudocode to illustrate:
class AsyncEncryptedSoundPlayer extends Sound {
var buffer:ByteArray;
var stream:URLStream;
var currSoundPosition:uint = 0;
public function AsyncEncryptedSoundPlayer(url:String) {
buffer = new ByteArray();
stream = new URLStream();
stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
stream.load(new URLRequest(url));
addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleDataRequested);
}
function onProgress(e:ProgressEvent):void {
var tmpData:ByteArray = new ByteArray();
stream.readBytes(tmpData, buffer.length, stream.bytesAvailable - buffer.length);
var decryptedData:ByteArray = decryptData(tmpData); // Decrypt loaded data
buffer.writeBytes(decryptedData, buffer.length, decryptedData.length); // Add decrypted data to buffer
}
function onSampleDataRequested(e:SampleDataEvent):void {
// Feed samples from the buffer to the Sound instance
// You may have to pause the audio to let the buffer grow if the download speed isn't high enough
e.data.writeBytes(buffer, currSoundPosition, 2048);
currSoundPosition += 2048;
}
function decryptData(data:ByteArray):ByteArray {
// Returns decrypted data
}
}
This is obviously a very rough outline of a class, but I hope it will point you in the right direction.
@walkietokyo, thanks a lot for pointing me in the right direction. I succeeded in doing what I wanted. The key here was the loadCompressedDataFromByteArray function.
After tens of trials and errors I found out that loadCompressedDataFromByteArray works in a differential manner:
it appends whatever it converts to the end of the Sound object's data.
Another issue: a Sound object doesn't continue playing the parts appended by loadCompressedDataFromByteArray after its play function has been called.
So I implemented a sort of double buffering, where I use two Sound objects interchangeably.
My final (test) version is listed below. With the encryption (obfuscation) method I used (a simple XOR), no download manager, grabber, or sniffer that I tested was able to play the MP3s.
Flash (Client) side:
import flash.events.DataEvent;
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.events.OutputProgressEvent;
import flash.events.ProgressEvent;
import flash.net.URLRequest;
import flash.net.URLStream;
import flash.utils.ByteArray;
import flashx.textLayout.formats.Float;
var buffer:ByteArray;
var stream:URLStream;
var bufferReadPosition:uint = 0;
var bufferWritePosition:uint = 0;
var url:String = "http://www.blablabla.com/MusicServer.aspx?" + (new Date());
buffer = new ByteArray();
stream = new URLStream();
stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
stream.load(new URLRequest(url));
var s1:Sound = new Sound();
var s2:Sound = new Sound();
var channel1:SoundChannel;
var channel2:SoundChannel;
var pausePosition:int = 0;
var aSoundIsPlaying:Boolean = false;
var lastLoadedS1:Boolean = false;
var lastS1Length:int = 0;
var lastS2Length:int = 0;
function onProgress(e:ProgressEvent):void {
var tmpData:ByteArray = new ByteArray();
stream.readBytes(tmpData, 0, stream.bytesAvailable);
var decryptedData:ByteArray = decryptData(tmpData); // Decrypt loaded data
buffer.position = bufferWritePosition;
buffer.writeBytes(decryptedData, 0, decryptedData.length); // Add decrypted data to buffer
bufferWritePosition += decryptedData.length;
if(lastLoadedS1)
{
buffer.position = lastS2Length;
s2.loadCompressedDataFromByteArray(buffer, buffer.length - lastS2Length);
lastS2Length = buffer.length;
}
else
{
buffer.position = lastS1Length;
s1.loadCompressedDataFromByteArray(buffer, buffer.length - lastS1Length);
lastS1Length = buffer.length;
}
if(!aSoundIsPlaying)
{
DecidePlay();
}
}
function channel1Completed(e:Event):void
{
DecidePlay();
}
function channel2Completed(e:Event):void
{
DecidePlay();
}
function DecidePlay():void
{
aSoundIsPlaying = false;
if(lastLoadedS1)
{
channel1.stop();
if(s2.length - s1.length > 10000)
{
//At least a 10 second buffer
channel2 = s2.play(s1.length);
channel2.addEventListener(Event.SOUND_COMPLETE, channel2Completed);
lastLoadedS1 = false;
aSoundIsPlaying = true;
}
}
else
{
if(channel2 != null)
{
channel2.stop();
}
if(s1.length - s2.length > 10000)
{
//At least a 10 second buffer
channel1 = s1.play(s2.length);
channel1.addEventListener(Event.SOUND_COMPLETE, channel1Completed);
lastLoadedS1 = true;
aSoundIsPlaying = true;
}
}
}
function decryptData(data:ByteArray):ByteArray {
for(var i:int = 0;i<data.length;i++)
{
//Here put in your bitwise decryption code
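// Example only -- a hypothetical single-byte XOR key, not from the original post;
// it must undo exactly what the server applies in CopyStream below.
data[i] ^= 0x5A;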
}
return data;
}
ASP.NET server side (MusicServer.aspx):
protected void Page_Load(object sender, EventArgs e)
{
CopyStream(Mp3ToStream(Server.MapPath("blabla.mp3")), Response.OutputStream);
this.Response.AddHeader("Content-Disposition", "blabla.mp3");
this.Response.ContentType = "audio/mpeg";
this.Response.End();
}
public static void CopyStream(Stream input, Stream output)
{
byte[] buffer = new byte[32768];
int read;
while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
{
for (int i = 0; i < read; i++)
{
//Here put in your bitwise encryption code
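// Example only -- the same hypothetical single-byte XOR key as in the Flash
// decryptData() above; not from the original post.
buffer[i] ^= 0x5A;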
}
output.Write(buffer, 0, read);
}
}
public Stream Mp3ToStream(string filePath)
{
using (FileStream fileStream = File.OpenRead(filePath))
{
MemoryStream memStream = new MemoryStream();
memStream.SetLength(fileStream.Length);
fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
return memStream;
}
}
What might be simpler than encrypting the data coming back from your service is authenticating requests, so that only your SWF can request the files.
You can accomplish this in the same way that, say, the Amazon APIs work: build a request that includes a number of parameters, including a timestamp. Hash all of these arguments together in an HMAC (HMAC-SHA256 is available in the as3crypto library) along with a private key embedded in your SWF. Your server end authenticates the request, ensuring that the hash is valid and that the timestamp is close enough to the current time. Any request with a bad hash, or with a timestamp too far in the past (a replay attack), is denied.
This is certainly not perfect security. Any sufficiently motivated user could disassemble your SWF and pull out your auth key, or grab the MP3 from their browser cache. But then again, any mechanism you are going to use will have those issues. This removes the overhead of having to encrypt and decrypt all of your files, instead moving the work over to the request-generation phase.
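A minimal sketch of the server-side check, assuming (purely for illustration) that the SWF sends trackId, ts and sig parameters, hashes them as "trackId|ts" into a hex digest, and shares the secret string below; none of these names or the five-minute window come from the original answer:
using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;
public static class RequestValidator
{
// Assumption: the same secret string is embedded in the SWF
private const string SharedSecret = "same-secret-embedded-in-the-swf";
public static bool IsValid(string trackId, string ts, string sig)
{
// Replay protection: reject timestamps too far from the current time
DateTime requestTime;
if (!DateTime.TryParse(ts, null, DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal, out requestTime))
return false;
if (Math.Abs((DateTime.UtcNow - requestTime).TotalMinutes) > 5)
return false;
// Recompute HMAC-SHA256 over the same string the client hashed
string message = trackId + "|" + ts;
using (HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(SharedSecret)))
{
string expected = BitConverter.ToString(hmac.ComputeHash(Encoding.UTF8.GetBytes(message))).Replace("-", "").ToLowerInvariant();
return string.Equals(expected, sig, StringComparison.OrdinalIgnoreCase);
}
}
}
Whatever exact string the SWF hashes (and whether it sends the digest as hex or Base64) just has to match on both ends.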
Flash Sound supports only streaming MP3 playback, that is, you can only play an MP3 from a direct link. But you can serve a SWF file with the MP3 embedded in it, and this SWF can be encrypted in the same way you encrypt the MP3.
AS3 code for embedding and using the MP3:
public class Sounds
{
[Embed(source="/../assets/sounds/sound1.mp3")]
private static const sound1:Class;
}
After loading this SWF with the Loader you can access the sound in this way:
var domain:ApplicationDomain = ApplicationDomain.currentDomain; // <-- ApplicationDomain where you load sounds.swf
var soundClass:Class = domain.getDefinition("Sounds_sound1");
var sound:Sound = new soundClass();
sound.play();
Be sure that you do at least one of the following:
give different names to the sound classes (sound1)
give a different name to the holder class (Sounds)
or load sound.swf into different application domains
to prevent class name collisions.
Unfortunately this approach doesn't allow you to stream the sound: you have to load the whole SWF and decrypt it, and only after that will you be able to play the sound.
Please have a look here:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/events/SampleDataEvent.html
and here:
http://help.adobe.com/en_US/as3/dev/WSE523B839-C626-4983-B9C0-07CF1A087ED7.html

Microphone playback with SPEEX codec in Flex

I'm working on a project where the user has to record his/her voice and submit it to the server. But before submitting, the user might need to play back the recorded sound.
The application has recording and playback capabilities with the Speex codec. But what I found strange and difficult is that when the user plays back the recorded audio, the playback speed is so much faster or slower than normal that it cannot be understood, as if it were fast-forwarding.
Here is the sample code:
private var mic:Microphone;
private var rec:ByteArray;
private var snd:Sound;
private var channel:SoundChannel;
protected function recBtn_clickHandler(event:MouseEvent):void
{
rec = new ByteArray();
mic = Microphone.getMicrophone();
mic.setLoopBack(false);
mic.setUseEchoSuppression(true);
mic.gain = 50;
mic.setSilenceLevel(5, 1000);
mic.codec = SoundCodec.SPEEX;
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, getMicAudio);
}
protected function plyBtn_clickHandler(event:MouseEvent):void
{
snd.addEventListener(SampleDataEvent.SAMPLE_DATA, playRecorded);
channel = snd.play();
}
private function getMicAudio(e:SampleDataEvent): void
{
rec.writeBytes(e.data);
}
private function playRecorded(e:SampleDataEvent): void
{
if (!rec.bytesAvailable > 0) return;
for (var i:int = 0; i < 2048; i++){
var sample:Number = 0;
if (rec.bytesAvailable > 0) sample = rec.readFloat();
for (var j:uint = 0; j < 6; j++) {
e.data.writeFloat(sample);
}
}
}
This scenario only happens when:
mic.codec = SoundCodec.SPEEX;
mic.rate = 16
I went through a lot of forums, but could not find any solution for Microphone playback with the Speex codec or microphone.rate = 16.
In Flash, a Sound object plays back at 44.1 kHz. Since you're sampling at 16 kHz, the SampleDataEvent handler consumes your data about 2.75 times faster than you are recording it.
That is, if you were writing each sample twice (one stereo frame).
But you're actually attempting to solve this by writing three times as many frames as you're recording. This is still not optimal: you'll get a slightly slowed-down version of the recording, because you're now sending data as if it had been recorded at 48 kHz, yet it is played back at 44.1 kHz.
There are only two things you can do, and I think you're doing them already:
either adjust how many writes you do per iteration in that for loop, or raise the maximum sample count (2048) to a higher number, though it can't exceed 8192, I believe.
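For reference, the arithmetic behind those figures, assuming the standard 44.1 kHz two-channel output that the sampleData playback event expects: 44100 / 16000 = 2.75625 output frames per recorded sample, and with two writeFloat() calls per frame that is roughly 5.5 writes per recorded sample. The six writes per sample in the question's inner loop slightly overshoot that, which is why playback comes out a little slow.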
I had the same problem when I recorded in Speex. This write pattern (about 5.3 writes per sample on average) worked for me:
e.data.writeFloat(sample);
e.data.writeFloat(sample);
e.data.writeFloat(sample);
e.data.writeFloat(sample);
if (i%3)
{
e.data.writeFloat(sample);
e.data.writeFloat(sample);
}
