How do I use the MediaRecorder API on processed audio?

I am taking microphone input and processing it to do an FFT on the data, but the specifics of that are irrelevant for this question. A rough overview of my current code:
const microphone = await navigator.mediaDevices.getUserMedia({video: false, audio: true});
const context = new AudioContext();
const stream = context.createMediaStreamSource(microphone);
const processor = context.createScriptProcessor(BUFFER_BYTES, 1, 1);
const analyser = context.createAnalyser();
// ...
stream.connect(analyser);
analyser.connect(processor);
processor.connect(context.destination);
I would also like to take this audio and record it into a .wav file. How can I do this? Is it possible to duplicate my microphone input stream such that I can process it via the nodes I am currently using, and record it via a MediaRecorder as well? Or can I simply add a MediaRecorder as a node in my audio pipeline?

You can use a MediaStreamAudioDestinationNode to get the processed audio as MediaStream again. It can be created like this:
const mediaStreamDestination = context.createMediaStreamDestination();
You can then connect your processor to the mediaStreamDestination as well.
processor.connect(mediaStreamDestination);
The stream provided by the mediaStreamDestination can then be used to create a MediaRecorder.
const mediaRecorder = new MediaRecorder(mediaStreamDestination.stream);
Unfortunately, no browser supports recording wav files out of the box. But I created a package which can be used to "extend" the native MediaRecorder with custom codecs. It's called extendable-media-recorder. There is an example in the readme which shows how it can be used to record wav files.
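Putting the pieces together, here is a minimal sketch of a wav recording based on the example in the extendable-media-recorder readme (the exact import names may vary between versions of the package, and this assumes an ES module with top-level await):

import { MediaRecorder, register } from 'extendable-media-recorder';
import { connect } from 'extendable-media-recorder-wav-encoder';

// Register the wav encoder once, before constructing any recorder.
await register(await connect());

// Record the processed audio coming out of the mediaStreamDestination.
const mediaRecorder = new MediaRecorder(mediaStreamDestination.stream, {
  mimeType: 'audio/wav'
});

// Collect the encoded chunks and assemble a Blob when recording stops.
const chunks = [];
mediaRecorder.ondataavailable = ({ data }) => chunks.push(data);
mediaRecorder.onstop = () => {
  const wavBlob = new Blob(chunks, { type: 'audio/wav' });
  // ...download or upload the Blob from here.
};

mediaRecorder.start();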

Related

Difference between "Download to a stream" and "Download from a stream" in Azure blob storage [duplicate]

What is the difference between the OpenReadAsync and DownloadToStreamAsync functions of CloudBlockBlob in Azure blob storage? I searched on Google but could not find an answer.
Both OpenReadAsync and DownloadToStreamAsync initiate an asynchronous operation to retrieve the blob stream. Based on my testing, the following sections should give you a better understanding of them:
 
Basic Concepts
DownloadToStreamAsync: Initiates an asynchronous operation to download the contents of a blob to a stream.
OpenReadAsync: Initiates an asynchronous operation to download the contents of a blob to a stream.
(Yes, the documentation describes both methods with the same sentence, which is exactly why the sections below compare their actual behavior.)
 
Usage
a) DownloadToStreamAsync
Sample Code:
using (var fs = new FileStream(<yourLocalFilePath>, FileMode.Create))
{
    await blob.DownloadToStreamAsync(fs);
}
 
b) OpenReadAsync
Sample Code:
// Set the buffer for reading from a blob stream; the default value is 4 MB.
blob.StreamMinimumReadSizeInBytes = 10 * 1024 * 1024; // 10 MB
using (var blobStream = await blob.OpenReadAsync())
{
    using (var fs = new FileStream(localFile, FileMode.Create))
    {
        await blobStream.CopyToAsync(fs);
    }
}
Capturing Network requests via Fiddler
(The original answer included Fiddler screenshots for both DownloadToStreamAsync and OpenReadAsync; they are omitted here.)
According to the captures above, DownloadToStreamAsync sends just one GET request to retrieve the blob stream, while OpenReadAsync sends multiple requests, sized according to the Blob.StreamMinimumReadSizeInBytes you have set or its default value.
The difference between DownloadToStreamAsync and OpenReadAsync is that DownloadToStreamAsync will download the contents of the blob to the stream before returning, but OpenReadAsync will not trigger a download until the stream is consumed.
For example, if using this to return a file stream from an ASP.NET core service, you should use OpenReadAsync and not DownloadToStreamAsync:
Example with DownloadToStreamAsync (not recommended in this case):
Stream target = new MemoryStream(); // Could be FileStream
await blob.DownloadToStreamAsync(target); // Returns when streaming (downloading) is finished. This requires the whole blob to be kept in memory before returning!
_logger.Log(LogLevel.Debug, $"DownloadToStreamAsync: Length: {target.Length} Position: {target.Position}"); // Output: DownloadToStreamAsync: Length: 517000 Position: 517000
target.Position = 0; // Rewind before returning Stream:
return File(target, contentType: blob.Properties.ContentType, fileDownloadName: blob.Name, lastModified: blob.Properties.LastModified, entityTag: null);
Example with OpenReadAsync (recommended in this case):
// Do NOT put the stream in a using (or close it), as this will close the stream before ASP.NET finish consuming it.
Stream blobStream = await blob.OpenReadAsync(); // Returns when the stream has been opened
_logger.Log(LogLevel.Debug, $"OpenReadAsync: Length: {blobStream.Length} Position: {blobStream.Position}"); // Output: OpenReadAsync: Length: 517000 Position: 0
return File(blobStream, contentType: blob.Properties.ContentType, fileDownloadName: blob.Name, lastModified: blob.Properties.LastModified, entityTag: null);
Answer from a member of Microsoft Azure (here):
The difference between DownloadStreamingAsync and OpenReadAsync is that the former gives you a network stream (wrapped with a few layers, but effectively think about it as a network stream) which holds on to a single connection; the latter, on the other hand, fetches the payload in chunks and buffers, issuing multiple requests to fetch the content. Picking one over the other depends on the scenario: if the consuming code is fast and you have a good, broad network link to the storage account, then the former might be the better choice, as you avoid multiple request-response exchanges; but if the consumer is slow, then the latter might be a good idea, as it releases a connection back to the pool right after reading and buffering the next chunk. We recommend perf-testing your app with both to reveal which is the best choice if it's not obvious.
OpenReadAsync returns a Task<Stream> and you use it with an await.
Sample test method:
CloudBlobContainer container = GetRandomContainerReference();
try
{
    await container.CreateAsync();
    CloudBlockBlob blob = container.GetBlockBlobReference("blob1");
    using (MemoryStream wholeBlob = new MemoryStream(buffer))
    {
        await blob.UploadFromStreamAsync(wholeBlob);
    }
    using (MemoryStream wholeBlob = new MemoryStream(buffer))
    {
        using (var blobStream = await blob.OpenReadAsync())
        {
            await TestHelper.AssertStreamsAreEqualAsync(wholeBlob, blobStream);
        }
    }
}
finally
{
    // The original snippet ended mid-try; clean up so the block is complete.
    await container.DeleteIfExistsAsync();
}
DownloadToStreamAsync is a virtual (overridable) method that returns a Task and takes a Stream object as input.
Sample usage:
await blob.DownloadToStreamAsync(memoryStream);

How do I set Custom time on a file using Firebase?

I'm trying to set the Custom time attribute in Firebase on the front end. Everything else is possible to set, like contentDisposition, custom metadata, etc.; I just can't find any way, or any info, about setting Custom time.
You can see it referenced here: https://cloud.google.com/storage/docs/metadata#custom-time
You can set the custom time on the file manually in the Cloud Storage console, but even when you do, and you load the file in Firebase on the front end, it's missing from the returned object (which makes me feel like it's not possible to achieve this).
var storage = this.$firebase.app().storage("gs://my-files");
var storage2 = storage.ref().child(this.file);

// Tried this
var md = {
  customTime: this.$firebase.firestore.FieldValue.serverTimestamp()
};
// ...and tried this
var md = {
  'Custom-Time': this.$firebase.firestore.FieldValue.serverTimestamp()
};

storage2.updateMetadata(md).then((metadata) => {
  console.log(metadata);
}).catch((err) => {
  console.log(err);
});
The reason I ask is I'm trying to push back the lifecycle delete date (which will be based on the custom time) every time the file is loaded. Does anyone know the answer or an alternative way of doing it?
Thanks in advance
The CustomTime metadata cannot be updated using the Firebase JavaScript SDK, since it is not included in the file metadata properties list in the documentation. So even if you specify it as customTime: or 'Custom-Time':, the updateMetadata() method does not perform any changes.
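For contrast, here is a quick sketch of the kind of update that does take effect with the Firebase JS SDK, reusing the storage2 reference from the question (only documented properties such as contentDisposition and customMetadata are applied):

var md = {
  contentDisposition: 'attachment',
  customMetadata: {
    lastLoaded: new Date().toISOString() // custom key/value pairs must be strings
  }
};
storage2.updateMetadata(md).then((metadata) => {
  console.log(metadata); // customTime still won't appear here
});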
As a better practice, I suggest you set the CustomTime metadata from the Cloud console, and push back the CustomTimeBefore lifecycle condition from the back end each time you load the file, using the addLifecycleRule method of the GCP Node.js client.
// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// References your Cloud Storage bucket
const myBucket = storage.bucket('my_bucket');

// Delete objects whose customTime is before 2021-05-25.
myBucket.addLifecycleRule({
  action: 'delete',
  condition: {
    customTimeBefore: new Date('2021-05-25')
  }
}, function(err, apiResponse) {});
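If you would rather move CustomTime itself forward from a trusted back end instead of the Cloud console, the same Node.js client should also be able to do that through setMetadata. A sketch, with a hypothetical object name, bearing in mind that the API only allows CustomTime to increase once set:

const file = myBucket.file('my-file.jpg'); // hypothetical object name

// customTime must be an RFC 3339 timestamp and can only be moved forward.
file.setMetadata({customTime: new Date().toISOString()})
  .then(([metadata]) => console.log(metadata.customTime))
  .catch((err) => console.error(err));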

Saving google sheets data into Firebase using Google Apps script

I'm trying to keep my Google Sheet synced with my Firebase database. I'm not very experienced with JavaScript, so is it possible using the method below? The idea is that it would automatically sync every time a row gets created/updated/deleted. I know that I need the script files, but I'm not sure how to import them in the .gs file, which is why they're in the HTML.
Many thanks!
translate.gs
function saveToFirebase() {
  var config = {
    apiKey: "MY_API_KEY",
    authDomain: "MY_DOMAIN.firebaseapp.com",
    databaseURL: "MY_DOMAIN.firebaseio.com",
    projectId: "MY_DOMAIN",
    storageBucket: "MY_DOMAIN.appspot.com",
    messagingSenderId: "MESSAGE_ID"
  };
  firebase.initializeApp(config);
  var database = firebase.database();
  database.ref('food/' + MY_USER_UID).set({
    name: "pizza funghi",
  });
}
sidebar.html
<!DOCTYPE html>
<html>
  <head>
    <script src="https://www.gstatic.com/firebasejs/4.12.0/firebase-app.js"></script>
    <script src="https://www.gstatic.com/firebasejs/4.12.0/firebase-auth.js"></script>
    <script src="https://www.gstatic.com/firebasejs/4.12.0/firebase-database.js"></script>
  </head>
  <body>
  </body>
</html>
There is a third-party library which integrates with Firebase's REST API. If you're comfortable using it, this becomes pretty straightforward.
First we'll need to create a tab to track changes. We need the identity of those who make changes, so we have to break this into two parts - a simple onEdit trigger which runs as the modifying user, and an installable trigger which I'll call uploadChanges. The latter is what talks to Firebase.
Create a tab called changes
Add a frozen row with the following headers:
Uploaded
User
Value
Install the third party Firebase library
Begin by clicking Resources > Libraries in the script editor, then pasting MYeP8ZEEt1ylVDxS7uyg9plDOcoke7-2l in the "Find a Library" box. Hit Save.
Opt for stability by choosing the latest public release, or choose the latest release (I chose latest while writing this).
Click OK
Now would be a good time to peruse the reference docs so you know what I'm up to in the below instructions :-)
Set up security (I'm assuming you want this script to run as you)
Make your Google account (which runs the script) be at least an Editor for your Firebase project.
Set the appropriate authorization scopes for your App Script project:
Go to File > Project Properties > Scopes in the App Script editor
Select View > Show manifest file (the manifest file is usually hidden by default)
Add https://www.googleapis.com/auth/userinfo.email and https://www.googleapis.com/auth/firebase.database to the oauthScopes array (adding the array itself if it's not already there)
Save the manifest file. Next time you run the script you'll get a pop-up asking about permissions.
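For reference, the manifest's scope section would then look roughly like this (a sketch; the spreadsheet scope is just an example of one that may already be present, so keep whatever other scopes your project lists):

{
  "oauthScopes": [
    "https://www.googleapis.com/auth/spreadsheets.currentonly",
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/firebase.database"
  ]
}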
The equivalent of your translate.gs above, which always just sets your food to 'pizza funghi', would look like this:
function saveToFirebase() {
  var dbUrl = "https://MY_DOMAIN.firebaseio.com"; // Set appropriately; this is the database URL
  var token = ScriptApp.getOAuthToken(); // Depends on security setup above
  var firebase = FirebaseApp.getDatabaseByUrl(dbUrl, token);
  var newData = {
    name: "pizza funghi",
  };
  firebase.setData('food/' + MY_USER_UID, newData);
}
But you said you wanted to update Firebase on every save. To do this you really just want to rip off one of the various onEdit tutorials floating around the net. The resulting onEdit should look something like this:
function onEdit(e) {
  // First get stuff about the edit.
  // This approach only gets the top left cell of a multi-cell edit.
  var editRange = e.range; // The edited range
  var newValue = editRange.getValue();

  // Next, who is the editor? Remove the `split` for the full email.
  var username = Session.getActiveUser().getEmail().split('@')[0];
  if (username == '') {
    username = SOME_REASONABLE_DEFAULT; // Or give up if you wish
  }

  // Finally save the change
  SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName('changes')
      .appendRow([false, username, newValue]);
}
function uploadChanges() {
  // Attach to Firebase
  var dbUrl = "https://MY_DOMAIN.firebaseio.com"; // Set appropriately
  var token = ScriptApp.getOAuthToken(); // Depends on security setup above
  var firebase = FirebaseApp.getDatabaseByUrl(dbUrl, token);

  // Get the content of the changes tab
  var changeSheet = SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName('changes');
  var changeData = changeSheet.getDataRange().getValues();

  // Upload all new-to-us changes
  for (var i = 1; i < changeData.length; i++) {
    if (changeData[i][0]) {
      continue; // We already uploaded this one
    }
    changeData[i][0] = true; // Optimistically assume we'll succeed
    var newData = {
      name: changeData[i][2]
    };
    var username = changeData[i][1];
    firebase.setData('food/' + username, newData);
  }

  // Blanket update of the changes sheet to record upload status
  changeSheet.getRange(1, 1, changeData.length, changeData[0].length)
      .setValues(changeData);
}
Lastly, set up some triggers.
Choose Edit > Current Project's Triggers in the script editor
Add a new trigger for onEdit
Choose onEdit from the leftmost Run dropdown
Choose From spreadsheet in the Events dropdown
Then choose On edit in the rightmost dropdown
Add a new trigger for uploadChanges
Choose uploadChanges from the leftmost Run dropdown
Choose Time-driven from the Run dropdown
Set up a schedule that's appropriate to your needs
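If you prefer creating the triggers in code rather than through the UI, a one-time setup function along these lines should also work (a sketch using the standard ScriptApp trigger builder):

function createTriggers() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  // Fire onEdit whenever this spreadsheet is edited.
  ScriptApp.newTrigger('onEdit')
      .forSpreadsheet(ss)
      .onEdit()
      .create();
  // Run uploadChanges every 5 minutes; adjust to your needs.
  ScriptApp.newTrigger('uploadChanges')
      .timeBased()
      .everyMinutes(5)
      .create();
}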
EDIT: My original script had you doing everything in onEdit, which tehhowch correctly points out won't work, since we're talking to another service. I've updated it to stage changes to a "changes" tab, which I've included in the setup. My new approach maintains a perpetual record of old uploads; for performance you might instead choose to simply clear the changes sheet once you've done the upload.

How do I properly download and save a list of images?

I have a list of image URLs and would like to download and save each image. Unfortunately, I keep receiving an out-of-memory exception due to exhausted heap space. The last attempt saved two images and then threw "Exhausted heap space, trying to allocate 33554464 bytes".
My code is shown below. The logic seems correct, but I believe the asynchronous calls may be at fault. Is there some adjustment I should make so that downloading happens sequentially? Or is there another method I should be using?
import 'package:http/http.dart' as http;
import 'dart:io';

main() {
  // loc is a Set of valid URLs
  // ...
  loc.forEach(retrieveImage);
}

void retrieveImage(String location) {
  Uri uri = Uri.parse(location);
  String name = uri.pathSegments.last;
  print("Reading $location");
  http.readBytes(location).then((image) => saveImage(name, image));
}

void saveImage(String name, var data) {
  new File(name).writeAsBytesSync(data);
  print(name);
}
If you want to download them sequentially, you can switch to Future.forEach. This enumerates through a collection, executing a function for each element, but waiting for the Future that the function returns to complete before moving on to the next. It, in turn, returns a Future that completes once the final iteration has completed.
Instead of
loc.forEach(retrieveImage);
use
Future.forEach(loc, retrieveImage);
and then ensure retrieveImage returns the future:
Future retrieveImage(String location) {
  Uri uri = Uri.parse(location);
  String name = uri.pathSegments.last;
  print("Reading $location");
  return http.readBytes(location).then((image) => saveImage(name, image));
}
If @DannyTuppeny's answer doesn't solve your problem, you can increase the heap size.
I think this is the flag that does it:
old_gen_heap_size: Max size of the old gen heap in MB (e.g. --old_gen_heap_size=1024 allows up to 1024 MB of old gen heap)
dart --old_gen_heap_size=1024 somefile.dart
or
export DART_VM_OPTIONS="--old_gen_heap_size=1024"
http://dartbug.com/13744 also mentions --new_gen_heap_size, but dart --print-flags doesn't list it, so I have no idea whether it is supported or what it does.
The problem I see in your code is that all image downloads start almost at once, and while the responses are being received they consume heap memory. @DannyTuppeny's code won't change that either.
You can either limit the number of files downloaded concurrently, by only issuing new requests when previous ones have finished, or use streams to write the data to the file as it is received, so it doesn't need to be buffered in memory entirely.
I haven't done this myself yet and won't have time to look into it until at least Sunday, but maybe someone else can provide more details on such an approach.
To redirect incoming data directly to a file without buffering the entire file in memory, the following should work, but I wasn't able to reproduce the out-of-memory problem, so I can't say for sure:
import 'dart:io' as io;
import 'dart:async' as async;
import 'package:path/path.dart' as path;
import 'package:http/http.dart' as http;

var images = [
  "https://c4.staticflickr.com/4/3880/15283361621_bc72a1fb29_z.jpg",
  "https://c2.staticflickr.com/4/3923/15101476099_6e1087b76c_h.jpg",
  "https://c2.staticflickr.com/4/3899/15288834802_073d2af478_z.jpg",
  "https://c4.staticflickr.com/4/3880/15283361621_bc72a1fb29_z.jpg",
  "https://c2.staticflickr.com/6/5575/15101869429_fa44a80e87_z.jpg",
  "https://c1.staticflickr.com/3/2941/15100232360_03f3631c44_z.jpg",
  "https://c1.staticflickr.com/3/2941/15269480156_a28e1c0dbb_b.jpg",
  "https://c2.staticflickr.com/4/3907/15103503127_195ffcd5c0_z.jpg",
  "https://c2.staticflickr.com/6/5595/15265903986_a3210505f4_c.jpg",
  "https://c2.staticflickr.com/6/5567/15100857617_9926f2a189_z.jpg",
  "https://c1.staticflickr.com/3/2941/15100542247_6e9c3f13ae_z.jpg",
  "https://c2.staticflickr.com/4/3852/15099895539_cf43a904a5_z.jpg"
];

main() {
  var futures = <async.Future>[];
  images.forEach((url) {
    futures.add(new http.Request('GET', Uri.parse(url)).send().then((response) {
      var f = new io.File(path.basename(url));
      var sink = f.openWrite();
      // Return the chained future so Future.wait below also waits for the
      // file to be fully written, not just for the response to arrive.
      return sink.addStream(response.stream).then((_) => sink.close());
    }));
  });
  async.Future.wait(futures) // wait for all image downloads to be finished
      .then((_) => print('done'));
}

Web Audio API: change the sample rate

Is it possible to change the sampling rate of a recorded wave file in JS, without using third-party software or websites? If recorder.js is set to a frequency of 44100:
worker.postMessage({
  command: 'init',
  config: {
    sampleRate: 44100
  }
});
the file is written at that same frequency. If you reduce the setting to 22050, the recorded file becomes twice as long and plays back too slowly, while increasing the playback speed makes the recording sound fine. So the actual question is: is it possible to change the sample rate of already-recorded files, and how?
The only way I have found so far is a small resampling library, xaudio.js, part of the speex.js library. It works pretty nicely; I use it to convert audio from its native format to 8 kHz mono.
For anyone interested... Because typed arrays are transferable, you can send them to a web worker, downsample there, and then send the result back, or on to a server, or wherever.
// Get audio from the user and send it to a web worker
function recordUser(argument) {
  var audioCtx = new AudioContext();
  var worker = new Worker('downsampler.js');

  // Create a ScriptProcessorNode with a bufferSize of 512, a single input and no output channel
  var scriptNode = audioCtx.createScriptProcessor(512, 1, 0);
  console.log(scriptNode.bufferSize);

  // Give the node a function to process audio events
  scriptNode.onaudioprocess = function(audioProcessingEvent) {
    var inputBuffer = audioProcessingEvent.inputBuffer;
    console.log(inputBuffer.getChannelData(0));
    worker.postMessage(inputBuffer.getChannelData(0));
  };

  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function(mediaStream) {
      var mediaStreamSource = audioCtx.createMediaStreamSource(mediaStream);
      mediaStreamSource.connect(scriptNode);
    })
    .catch(function(err) { console.log(err.name + ": " + err.message); });
}
The web worker looks something like this. If you want to send the data to a server, use a WebSocket; otherwise, use postMessage to transfer it back to the client. You'll need to add an event listener on the client side as well, so search "mdn WebWorker" to read up on that.
// Example worker that sends the data both to a web socket and back to the user
var ws = new WebSocket('ws://localhost:4321');
ws.binaryType = 'arraybuffer';

self.addEventListener('message', function(e) {
  var data = e.data;
  // Naive decimation: keep every 16th sample (no anti-aliasing filter)
  var sendMe = new Float32Array(data.length / 16);
  for (var i = 0; i * 16 < data.length; i++) {
    sendMe[i] = data[i * 16];
  }
  // Send to the server
  ws.send(sendMe);
  // ...or send back to the user
  self.postMessage(sendMe);
}, false);
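On the client side, the matching listener is the mirror image of the worker's. A minimal sketch, assuming it is added inside recordUser right after the worker is created:

// Receive the downsampled samples back from the worker.
worker.addEventListener('message', function(e) {
  var downsampled = e.data; // Float32Array produced by the worker
  console.log('Received ' + downsampled.length + ' samples');
}, false);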
