Web Audio API: change the sample rate - audio-recording

Is it possible to change the sample rate of a recorded WAV file purely in JavaScript, without third-party software or websites?
In recorder.js the sample rate is set to 44100:
worker.postMessage({
  command: 'init',
  config: {
    sampleRate: 44100
  }
});
The file is written at that same rate. If I reduce the setting to 22050, the recorded file comes out twice as long and plays back slowed down, although if I increase the playback speed the recording sounds fine. So the actual question: is it possible to change the sample rate of an already recorded file, and how?

The only way I have found so far is a small resampling library, xaudio.js, part of the speex.js library. It works pretty nicely; I use it to convert audio from the native format to 8 kHz mono.
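For the original question (resampling an already recorded buffer in the browser), another option is rendering through an OfflineAudioContext at the target rate. This is a minimal sketch, assuming you have already decoded the recording into an AudioBuffer (for example via decodeAudioData); resampleBuffer is just an illustrative name:

// Render `sourceBuffer` into a new AudioBuffer at `targetSampleRate`.
function resampleBuffer(sourceBuffer, targetSampleRate) {
  var frameCount = Math.ceil(sourceBuffer.duration * targetSampleRate);
  var offlineCtx = new OfflineAudioContext(
      sourceBuffer.numberOfChannels, frameCount, targetSampleRate);
  var source = offlineCtx.createBufferSource();
  source.buffer = sourceBuffer;
  source.connect(offlineCtx.destination);
  source.start(0);
  // Resolves with the resampled AudioBuffer, which can then be re-encoded to WAV.
  return offlineCtx.startRendering();
}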

For anyone interested... Because typed arrays are transferable, you can send them to a web worker, downsample there, then send the result back, to a server, or wherever.
//get audio from user and send it to a web worker
function recordUser() {
  var audioCtx = new AudioContext();
  var worker = new Worker('downsampler.js');

  // Create a ScriptProcessorNode with a bufferSize of 512, a single input channel and no output channel
  var scriptNode = audioCtx.createScriptProcessor(512, 1, 0);
  console.log(scriptNode.bufferSize);

  // Give the node a function to process audio events
  scriptNode.onaudioprocess = function(audioProcessingEvent) {
    var inputBuffer = audioProcessingEvent.inputBuffer;
    console.log(inputBuffer.getChannelData(0));
    worker.postMessage(inputBuffer.getChannelData(0));
  };

  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function(mediaStream) {
      var mediaStreamSource = audioCtx.createMediaStreamSource(mediaStream);
      mediaStreamSource.connect(scriptNode);
    })
    .catch(function(err) { console.log(err.name + ": " + err.message); });
}
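Incidentally (hedged, since browser support varies): newer browsers accept a sampleRate hint in the AudioContext constructor, which can make manual downsampling unnecessary where it is supported.

// Ask the browser for a 22050 Hz context up front (where supported).
var audioCtx = new AudioContext({ sampleRate: 22050 });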
The web worker looks something like this. If you want to send the data to a server, use a WebSocket; otherwise, use postMessage to transfer it back to the client. You'll need to add an event listener on the client side as well, so search "mdn WebWorker" to read up on that.
//example worker that sends the data to both a web socket and back to the user
var ws = new WebSocket('ws://localhost:4321');
ws.binaryType = 'arraybuffer';

self.addEventListener('message', function(e) {
  var data = e.data;
  // Naive decimation: keep every 16th sample. Note there is no low-pass
  // filter applied first, so high frequencies will alias.
  var sendMe = new Float32Array(data.length / 16);
  for (var i = 0; i * 16 < data.length; i++) {
    sendMe[i] = data[i * 16];
  }
  //send to server
  ws.send(sendMe);
  //or send back to user
  self.postMessage(sendMe);
}, false);
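One caveat worth adding to the above: postMessage copies a typed array unless you pass its underlying ArrayBuffer in the transfer list. A sketch of an actual zero-copy transfer from the audio callback:

// getChannelData may return a view over a buffer the audio graph reuses,
// so copy into a fresh Float32Array, then transfer its buffer.
var chunk = new Float32Array(inputBuffer.getChannelData(0));
worker.postMessage(chunk, [chunk.buffer]); // chunk is unusable here afterwards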

Related

Server performance question about streaming from Cosmos DB

I read an article about IAsyncEnumerable, more specifically in the context of a Cosmos DB data source:
public async IAsyncEnumerable<T> Get<T>(string containerName, string sqlQuery)
{
    var container = GetContainer(containerName);
    using FeedIterator<T> iterator = container.GetItemQueryIterator<T>(sqlQuery);
    while (iterator.HasMoreResults)
    {
        foreach (var item in await iterator.ReadNextAsync())
        {
            yield return item;
        }
    }
}
I am wondering how Cosmos DB handles this compared to paging, let's say 100 documents at a time. We have had some "429 - Request rate too large" errors in the past and I don't want to create new ones.
So, how will this affect server load/performance?
From the server's perspective I don't see a big difference between the client streaming (while doing some quick checks) and the old way: fetching all documents with while (iterator.HasMoreResults) and collecting the items in a list.
The SDK retrieves batches of documents whose size can be adjusted through QueryRequestOptions by changing MaxItemCount (which defaults to 100 if not set). It has no option to throttle RU usage, though, apart from running into the 429 error and using the retry mechanism the SDK offers to try again a while later. Depending on how generously you configure that retry mechanism, it will retry often and long enough to eventually get a proper response.
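For instance, a minimal sketch of lowering the page size (the container variable and sqlQuery are assumed from the question):

var iterator = container.GetItemQueryIterator<T>(
    sqlQuery,
    requestOptions: new QueryRequestOptions { MaxItemCount = 50 }); // smaller pages cost fewer RUs per individual request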
If you have a situation where you want to limit RU usage, for example when multiple processes share your Cosmos account and you don't want them to cause 429 errors, you have to write that logic yourself.
An example of how something like that could look:
var qry = container
    .GetItemLinqQueryable<Item>(requestOptions: new() { MaxItemCount = 2000 })
    .ToFeedIterator();

var results = new List<Item>();
var stopwatch = new Stopwatch();
var targetRuMsRate = 200d / 1000; //target 200 RU/s
var previousElapsed = 0L;
var delay = 0;

stopwatch.Start();
var totalCharge = 0d;
while (qry.HasMoreResults)
{
    if (delay > 0)
    {
        await Task.Delay(delay);
    }
    previousElapsed = stopwatch.ElapsedMilliseconds;
    var response = await qry.ReadNextAsync();
    var charge = response.RequestCharge;
    var elapsed = stopwatch.ElapsedMilliseconds;
    var delta = elapsed - previousElapsed;
    delay = (int)((charge - targetRuMsRate * delta) / targetRuMsRate);
    foreach (var item in response)
    {
        results.Add(item);
    }
}
Edit:
Internally the SDK calls the underlying Cosmos REST API. Once your code reaches iterator.ReadNextAsync(), it calls the query-documents method in the background. If you dig into the source code, or intercept the message sent to HttpClient, you can observe that the resulting request lacks the x-ms-max-item-count header that determines the number of documents to retrieve (unless you have specified a MaxItemCount yourself). According to the Microsoft docs it defaults to 100 if not set:
Query requests support pagination through the x-ms-max-item-count and x-ms-continuation request headers. The x-ms-max-item-count header specifies the maximum number of values that can be returned by the query execution. This can be between 1 and 1000, and is configured with a default of 100.

IngestFromStreamAsync method does not work

I managed to ingest data successfully using the code below:
var kcsbDM = new KustoConnectionStringBuilder(
        "https://test123.southeastasia.kusto.windows.net",
        "testdb")
    .WithAadApplicationTokenAuthentication(acquireTokenTask.AccessToken);
using (var ingestClient = KustoIngestFactory.CreateDirectIngestClient(kcsbDM))
{
    var ingestProps = new KustoQueuedIngestionProperties("testdb", "TraceLog");
    ingestProps.ReportLevel = IngestionReportLevel.FailuresOnly;
    ingestProps.ReportMethod = IngestionReportMethod.Queue;
    ingestProps.Format = DataSourceFormat.json;
    //generate datastream and columnmapping
    ingestProps.IngestionMapping = new IngestionMapping() {
        IngestionMappings = columnMappings };
    var ingestionResult = ingestClient.IngestFromStream(memStream, ingestProps);
}
When I try to use the queued client and IngestFromStreamAsync, the code executes successfully but no data at all is ingested into the database, even after 30 minutes:
var kcsbDM = new KustoConnectionStringBuilder(
        "https://ingest-test123.southeastasia.kusto.windows.net",
        "testdb")
    .WithAadApplicationTokenAuthentication(acquireTokenTask.AccessToken);
using (var ingestClient = KustoIngestFactory.CreateQueuedIngestClient(kcsbDM))
{
    var ingestProps = new KustoQueuedIngestionProperties("testdb", "TraceLog");
    ingestProps.ReportLevel = IngestionReportLevel.FailuresOnly;
    ingestProps.ReportMethod = IngestionReportMethod.Queue;
    ingestProps.Format = DataSourceFormat.json;
    //generate datastream and columnmapping
    ingestProps.IngestionMapping = new IngestionMapping() {
        IngestionMappings = columnMappings };
    var ingestionResult = ingestClient.IngestFromStreamAsync(memStream, ingestProps);
}
Try running .show ingestion failures on the "https://test123.southeastasia.kusto.windows.net" endpoint and see if there are ingestion errors.
Also, since you set the Queue reporting method, you can get the detailed result by reading from the queue:
ingestProps.ReportLevel = IngestionReportLevel.FailuresOnly;
ingestProps.ReportMethod = IngestionReportMethod.Queue;
(In the first example you used KustoQueuedIngestionProperties, but with the direct ingest client you should use KustoIngestionProperties; KustoQueuedIngestionProperties has additional properties, such as ReportLevel and ReportMethod, that are ignored by the direct ingest client.)
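A minimal sketch of what the direct-ingest variant could look like (the database, table, and format are taken from the question):

var ingestProps = new KustoIngestionProperties("testdb", "TraceLog")
{
    Format = DataSourceFormat.json,
};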
Could you please change the line to:
var ingestionResult = await ingestClient.IngestFromStreamAsync(memStream, ingestProps);
Also please note that queued ingestion has a batching stage of up to 5 minutes before the data is actually ingested:
IngestionBatching policy
.show table ingestion batching policy
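If that default latency is too high while testing, a hedged example of tightening the batching window for this table (the 30-second value is illustrative):

.alter table TraceLog policy ingestionbatching '{ "MaximumBatchingTimeSpan": "00:00:30" }'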
I finally found the reason: stream ingestion needs to be enabled on the table:
.alter table TraceLog policy streamingingestion enable
See the Azure documentation for details.
Enabling the streamingingestion policy is actually only needed if:
stream ingestion is turned on in the cluster (Azure portal)
the code is using CreateManagedStreamingIngestClient
The ManagedStreamingIngestClient will first try stream-ingesting the data; if that fails a few times, it falls back to the queued client. If the data being ingested is small (under 4 MB), this client is the recommended one.
If you are using the queued client, you can try:
.show commands-and-queries | where StartedOn > ago(20m) and Text contains "{YourTableName}" and CommandType == "DataIngestPull"
This gives you the command that was executed, though it can have latency of more than 5 minutes.
Finally, with any client you use, you can check the status of the ingestion. Build a stream description:
StreamDescription description = new StreamDescription
{
    SourceId = Guid.NewGuid(),
    Stream = dataStream
};
That gives you the source id. Ingest by calling:
var checker = await client.IngestFromStreamAsync(description, ingestProps);
After that, call:
var statusCheck = checker.GetIngestionStatusBySourceId(description.SourceId.Value);
to figure out the status of this ingestion job. This is better wrapped in a separate task, so you can keep polling every few seconds.

Auto Sync google sheets to firebase without button

I used a tutorial to help me sync my sheet to Firebase using a SYNC button that runs the script; the button currently sits in the middle of the spreadsheet. I want to sync the data from Sheets to Firebase automatically whenever changes are made.
function getFirebaseUrl(jsonPath) {
  return (
    'https://no-excusas.firebaseio.com/' +
    jsonPath +
    '.json?auth=' +
    secret
  )
}

function syncMasterSheet(sheetHeaders, sheetData) {
  /*
    We make a PUT (update) request and send a JSON payload.
    More info on the REST API here: https://firebase.google.com/docs/database/rest/start
  */
  const outputData = [];
  for (var i = 0; i < sheetData.length; i++) {
    var row = sheetData[i];
    var newRow = {};
    for (var j = 0; j < row.length; j++) {
      newRow[sheetHeaders[j]] = row[j];
    }
    outputData.push(newRow);
  }
  var options = {
    method: 'put',
    contentType: 'application/json',
    payload: JSON.stringify(outputData)
  }
  var fireBaseUrl = getFirebaseUrl("UsersSheets")
  UrlFetchApp.fetch(fireBaseUrl, options)
}
function startSync() {
  // Get the currently active sheet
  var sheet = SpreadsheetApp.getActiveSheet()
  // Get the number of rows and columns which contain some content
  var [rows, columns] = [sheet.getLastRow(), sheet.getLastColumn()]
  // Get the data contained in those rows and columns as a 2 dimensional array.
  // Get the headers in a separate array.
  var headers = sheet.getRange(1, 1, 1, columns).getValues()[0]; // [0] to unwrap the outer array
  var data = sheet.getRange(2, 1, rows - 1, columns).getValues(); // skipping the header row means we need to reduce rows by 1
  // Use the syncMasterSheet function defined before to push this data to the
  // "masterSheet" key in the firebase database
  syncMasterSheet(headers, data)
}
Normally, it would be ok to just define an onEdit function in your code, like this:
function onEdit(event) {
  startSync();
}
However, because you are making external requests via UrlFetchApp.fetch(), this will fail with an error about missing the https://www.googleapis.com/auth/script.external_request permission (the docs have gobs more detail about trigger authorization).
Instead, you need to manually create an installable trigger.
This is reasonably straightforward: in the script editor, open your project's triggers, select "Add Trigger", and create the on-edit trigger.
You should think about whether you really want this running on every edit, however: the requests could be quite large (each sync sends the entire sheet) and they will run frequently (as you edit).
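Equivalently, a small sketch using the documented ScriptApp API to create the same installable trigger from code (run it once manually):

function createSyncTrigger() {
  ScriptApp.newTrigger('startSync')
      .forSpreadsheet(SpreadsheetApp.getActive())
      .onEdit()
      .create();
}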
When you make a change to a spreadsheet, its onEdit event fires. So that's where you'd trigger the save, with something like this:
function onEdit(event) {
  startSync();
}
But since onEdit fires for each edit, this may end up saving a lot more often than really necessary, so you may want to debounce and only save after some inactivity.
Something like this:
var timer;
function onEdit(event) {
  // if we're counting down, stop the timer
  if (timer) clearTimeout(timer);
  // start syncing after 2 seconds of inactivity
  timer = setTimeout(function() {
    startSync();
  }, 2000);
}
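One caveat to hedge here: setTimeout/clearTimeout are not available in Apps Script's server-side runtime, so the pattern above only applies where a browser-like environment exists. A server-side sketch of the same debounce idea, using PropertiesService plus a time-driven installable trigger (the function names are illustrative):

// Installable on-edit trigger: just record when the last edit happened.
function onSheetEdit(event) {
  PropertiesService.getScriptProperties()
      .setProperty('lastEdit', String(Date.now()));
}

// Time-driven trigger (e.g. every minute): sync once edits go quiet.
function syncIfQuiet() {
  var props = PropertiesService.getScriptProperties();
  var lastEdit = Number(props.getProperty('lastEdit') || 0);
  if (lastEdit && Date.now() - lastEdit > 2000) {
    props.deleteProperty('lastEdit');
    startSync();
  }
}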

Flutter connecting to multiple BLE devices Synchronously

I'm using Flutter to work on a Bluetooth Low Energy app, via the flutter_blue library, in which we potentially connect to multiple peripherals at the same time.
I am able to connect to multiple peripherals if I connect to them individually and send commands to all of them simultaneously.
For state management, my BluetoothHelper is the Model for my ScopedModel.
class BluetoothHelper extends Model {
  bool isProcessing = false;
  int val = 0;
  FlutterBlue flutterBlue = FlutterBlue.instance; // bluetooth library instance
  StreamSubscription scanSubscription;
  Map<DeviceIdentifier, ScanResult> scanResults = new Map();

  /// State
  StreamSubscription stateSubscription;
  BluetoothState state = BluetoothState.unknown;

  /// Device
  BluetoothDevice device; // the device currently being connected
  List<BluetoothDevice> devicesList = new List(); //todo
  bool get isConnected => devicesList.isNotEmpty;
  StreamSubscription deviceConnection;
  StreamSubscription deviceStateSubscription;
  List<BluetoothService> services = new List();
  Map<Guid, StreamSubscription> valueChangedSubscriptions = {};
  BluetoothDeviceState deviceState = BluetoothDeviceState.disconnected;

  Future startScan(String uuid) async {
    isProcessing = true;
    if (val == 0) {
      Future.delayed(Duration(milliseconds: 25), () => scanAndConnect(uuid));
      val++;
    } else {
      Future.delayed(Duration(seconds: 4), () => scanAndConnect(uuid));
    }
  }

  scanAndConnect(String uuid) {
    scanSubscription =
        flutterBlue.scan(timeout: const Duration(seconds: 120), withServices: [
      //new Guid('FB755D40-8DE5-481E-A369-21C0B3F39664')
    ]).listen((scanResult) {
      if (scanResult.device.id.toString() == uuid) {
        scanResults[scanResult.device.id] = scanResult;
        print("found! Attempting to connect " + scanResult.device.id.toString());
        device = scanResult.device;
        connect(device);
      }
    }, onDone: stopScan);
  }

  Future connect(BluetoothDevice d) {
    deviceConnection = flutterBlue.connect(d).listen(null);
    deviceStateSubscription = d.onStateChanged().listen((s) {
      if (s == BluetoothDeviceState.connected) {
        stopScan();
        d.discoverServices().then((s) {
          print("connected to ${device.id.toString()}");
          services = s;
          services.forEach((service) {
            var characteristics = service.characteristics;
            for (BluetoothCharacteristic c in characteristics) {
              if (c.uuid.toString() == '') { // we look for the uuid we want to write to
                String handshakeValue; // value is initialized here in code
                List<int> bytes = utf8.encode(handshakeValue);
                d.writeCharacteristic(c, bytes,
                    type: CharacteristicWriteType.withResponse);
                devicesList.add(d);
              }
            }
          });
        });
      }
    });
  }
}
I am trying to loop through all the peripheral unique identifiers (UIDs) and have them connect one after the other programmatically.
This wasn't working out well: it would always end up connecting to the very last peripheral. It seems the FlutterBlue instance can only scan for one UID at a time, and if it receives another request it immediately drops the previous one and moves on to the new one.
I confirmed the same behavior with individual connections: if I tap one peripheral and then a second one immediately, it connects only to the second. (I'm not currently blocking the UI or anything while the connection process takes place.)
I need to wait until the first peripheral is connected before moving on to the next one.
The code above is the only way I've gotten my peripherals to connect, but it has big problems: it can currently only connect to 2 devices, and it uses delays instead of callbacks, simply allowing enough time for the scan and connect to happen before moving on to the second peripheral.
My first instinct was to convert the startScan and connect methods into async methods, but that isn't working out as well as I'd hoped.
{ await connect(device); } gives "The built-in identifier 'await' can't be used as a type", which usually indicates the enclosing function isn't marked async. I could just be setting the asyncs up incorrectly.
I have looked around for alternatives and have come across Completers and Isolates; I'm not sure how relevant they might be.
UI SIDE:
I have the following method set as the onTap of a button wrapped within a ScopedModelDescendant. It reliably loads the peripheralUIDs list with a few UIDs and then tries to connect to them one after the other.
connectAllPeripherals(BluetoothHelper model, List<String> peripheralUIDs) {
  // list of strings containing the uuids for the peripherals I want to connect to
  for (var uuid in peripheralUIDs) {
    model.startScan(uuid);
  }
}
Don't know if this point is still an issue.
Assuming your issue hasn't since been fixed: I think the problem is trying to maintain the connections within Flutter, rather than just connecting to multiple devices and letting flutter_blue/the hardware manage the connections.
I've got it happily connecting to multiple devices once the instance maintains a list of per-device attributes.
i.e. I made a ble-device class which contained each of the following:
StreamSubscription deviceConnection;
StreamSubscription deviceStateSubscription;
List<BluetoothService> services = new List();
Map<Guid, StreamSubscription> valueChangedSubscriptions = {};
BluetoothDeviceState deviceState = BluetoothDeviceState.disconnected;
Maintaining a LinkedHashMap, with a new object initialised from the class above for each connected device, works nicely.
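For instance, a sketch of that bookkeeping (BleDevice is the hypothetical class holding the fields listed above):

import 'dart:collection';

// One state object per connected device, keyed by its identifier.
final LinkedHashMap<DeviceIdentifier, BleDevice> connectedDevices = LinkedHashMap();

void trackDevice(BluetoothDevice d) {
  connectedDevices[d.id] = BleDevice(); // fresh per-device state
}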
Other than that - flutter_blue will only allow 1 concurrent request call at a time (like reading a characteristic), but you can queue them pretty easily with await.
With the above, I'm able to poll multiple devices within a few milliseconds of each other.
Don't know if that helps - but with any luck, someone also coming across my problem will hit this and save some time.
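To make the question's loop strictly sequential, here is a hedged sketch of the Completer approach the asker mentioned (connectAndAwait and _connected are illustrative names layered on the question's own scanAndConnect flow):

import 'dart:async';

// Inside BluetoothHelper: expose a Future that completes once the
// current device reports connected.
Completer<void> _connected;

Future<void> connectAndAwait(String uuid) {
  _connected = Completer<void>();
  scanAndConnect(uuid); // existing method from the question
  return _connected.future;
}

// ...and in the existing onStateChanged listener, right after
// devicesList.add(d), signal completion:
//   if (_connected != null && !_connected.isCompleted) _connected.complete();

// The UI-side loop then waits for each connection before starting the next:
Future<void> connectAllPeripherals(
    BluetoothHelper model, List<String> peripheralUIDs) async {
  for (var uuid in peripheralUIDs) {
    await model.connectAndAwait(uuid);
  }
}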

WebRTC Peerconnection: Which IP flow of candidates set is used?

I am currently working on a monitoring tool for WebRTC sessions, looking into the SDP transferred from caller to callee and vice versa. Unfortunately I cannot figure out which IP flow is actually used, since there are more than 10 candidate lines per session establishment and the session is established somewhere along the way as candidates are pushed into the peer connection.
Is there any way to figure out which flow is being used out of the set of candidate flows?
I solved the issue by myself! :)
There is a function called peerConnection.getStats(callback);
This will give a lot of information about the ongoing peer connection.
Example: http://webrtc.googlecode.com/svn/trunk/samples/js/demos/html/constraints-and-stats.html
W3C Standard Description: http://dev.w3.org/2011/webrtc/editor/webrtc.html#statistics-model
Bye
I wanted to find out the same thing, so I wrote a small function which returns a promise that resolves to the candidate details:
function getConnectionDetails(peerConnection) {
  var connectionDetails = {}; // the final result object

  if (window.chrome) { // checking if chrome
    var reqFields = [
      'googLocalAddress',
      'googLocalCandidateType',
      'googRemoteAddress',
      'googRemoteCandidateType'
    ];
    return new Promise(function(resolve, reject) {
      peerConnection.getStats(function(stats) {
        var filtered = stats.result().filter(function(e) {
          return e.id.indexOf('Conn-audio') == 0 && e.stat('googActiveConnection') == 'true';
        })[0];
        if (!filtered) return reject('Something is wrong...');
        reqFields.forEach(function(e) {
          connectionDetails[e.replace('goog', '')] = filtered.stat(e);
        });
        resolve(connectionDetails);
      });
    });
  } else { // assuming it is firefox
    var stream = peerConnection.getLocalStreams()[0];
    if (!stream || !stream.getTracks()[0]) stream = peerConnection.getRemoteStreams()[0];
    if (!stream) return Promise.reject('no stream found');
    var track = stream.getTracks()[0];
    if (!track) return Promise.reject('No Media Tracks Found');
    return peerConnection.getStats(track).then(function(stats) {
      var selectedCandidatePair = stats[Object.keys(stats).filter(function(key) {
            return stats[key].selected;
          })[0]],
          localICE = stats[selectedCandidatePair.localCandidateId],
          remoteICE = stats[selectedCandidatePair.remoteCandidateId];
      connectionDetails.LocalAddress = [localICE.ipAddress, localICE.portNumber].join(':');
      connectionDetails.RemoteAddress = [remoteICE.ipAddress, remoteICE.portNumber].join(':');
      connectionDetails.LocalCandidateType = localICE.candidateType;
      connectionDetails.RemoteCandidateType = remoteICE.candidateType;
      return connectionDetails;
    });
  }
}
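For what it's worth, the stats API has since converged on a standard shape across browsers. A minimal sketch of the modern equivalent (assuming a connected RTCPeerConnection):

// Find the selected candidate pair via the standard promise-based getStats().
async function getSelectedCandidatePair(peerConnection) {
  const stats = await peerConnection.getStats();
  let selectedPair = null;
  stats.forEach(function(report) {
    // The selected pair is referenced from the transport report...
    if (report.type === 'transport' && report.selectedCandidatePairId) {
      selectedPair = stats.get(report.selectedCandidatePairId);
    }
  });
  // ...with a fallback to the candidate-pair 'selected' flag (older Firefox).
  if (!selectedPair) {
    stats.forEach(function(report) {
      if (report.type === 'candidate-pair' && report.selected) {
        selectedPair = report;
      }
    });
  }
  if (!selectedPair) return null;
  return {
    local: stats.get(selectedPair.localCandidateId),
    remote: stats.get(selectedPair.remoteCandidateId)
  };
}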
