I use AudioContext's decodeAudioData on an mp3 file, which gives me an AudioBuffer. With this audio buffer, I go on to draw a waveform of this mp3 file on a canvas using the data returned by getChannelData().
Now I want to use the same code to draw a waveform of the audio data of a MediaStream, which means I need the same kind of input/data. I know a MediaStream contains real-time information, but there must be a way to access each new chunk of data from the MediaStream as
a Float32Array containing the PCM data,
which is what AudioBuffer's getChannelData() returns.
I tried to wrap the MediaStream with a MediaStreamAudioSourceNode and feed it into an AnalyserNode to use getFloatFrequencyData() (which returns a Float32Array), but I can tell the data is different from the data I get from getChannelData(). Maybe it isn't "PCM" data? How can I get "PCM" data?
Hopefully this is clear. Thanks for the help!
First, a note: an AnalyserNode only samples the data occasionally; it won't process all of it. I think that matches your scenario well, but just know that if you need all the data (e.g. you're buffering up the audio), you will need to use a ScriptProcessorNode instead today.
Presuming you just want samples of the data, you can use AnalyserNode, but you should call getFloatTimeDomainData(), not getFloatFrequencyData(). That will give you the PCM data (FrequencyData is giving you the FFT of the PCM data).
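For example, a minimal sketch (assuming an existing MediaStream called stream, e.g. from getUserMedia) might look like this:

const audioCtx = new AudioContext();
const source = audioCtx.createMediaStreamSource(stream);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;                    // analysis window size in samples
source.connect(analyser);

const samples = new Float32Array(analyser.fftSize);

function draw() {
  analyser.getFloatTimeDomainData(samples); // PCM samples in the range [-1, 1]
  // ... draw `samples` on the canvas, just as you did with getChannelData() ...
  requestAnimationFrame(draw);
}
draw();

Each call to getFloatTimeDomainData() fills the array with the most recent waveform samples, so you only ever see a sliding window of the stream, not its full history.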
Alternatively, create a MediaStreamDestination with the AudioContext (createMediaStreamDestination()) and then a new MediaRecorder from its stream:
var options = { mimeType: 'audio/webm;codecs=pcm' }; // note: not every browser supports this mimeType
var mediaRecorder = new MediaRecorder(stream, options);
In R, how can a data.frame be written to an in-memory raw byte vector in the feather format?
The arrow package has a write_feather() function whose destination can be a BufferOutputStream, but the documentation doesn't describe how to create such a stream or access its underlying buffer.
Other than that, most other packages assume use of a local file system rather than in-memory storage.
Thank you in advance for your consideration and response.
BufferOutputStream$create() is how you create one. You can pass that to write_feather(). If you want a raw R vector back, you can use write_to_raw(), which wraps that. See https://arrow.apache.org/docs/r/reference/write_to_raw.html for docs; there's a link there to the source if you want to see exactly what it's doing, in case you want to do something slightly differently.
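For instance, a minimal sketch in R (using mtcars as a stand-in for your data.frame):

library(arrow)

# write_to_raw() wraps a BufferOutputStream internally and returns the
# serialized table as a raw vector
raw_vec <- write_to_raw(mtcars)

# The manual route: pass a BufferOutputStream as the sink to write_feather()
sink <- BufferOutputStream$create()
write_feather(mtcars, sink)

If write_to_raw() does what you need, it is the simpler option; the manual route is only worth it if you want control over the stream itself.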
I was able to create a network, train it, and evaluate it using EncogModel. However, I would like to be able to save the network, training, and weights, so that I don't have to train it every time I run it. I found Encog persistence, but I'm having a hard time putting EncogModel and persistence together. Are there any sample codes available? If not, how could this be done?
I used:
SerializeObject.Save(path, network);            // serializes the trained network to disk
and
EncogUtility.SaveEGB(new FileInfo(path), data); // saves the training data in EGB (Encog binary) format
EncogDirectoryPersistence.SaveObject(new FileInfo(f), network) seemed not to support NEAT; it kept returning an error.
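As a rough sketch of the full save/load round trip in C# (the Load counterparts and the namespaces are assumptions based on the Save calls above; adjust to your Encog version):

using System.IO;
using Encog.Neural.NEAT;   // assumed: NEATNetwork
using Encog.Util.Obj;      // assumed: SerializeObject
using Encog.Util.Simple;   // assumed: EncogUtility

// First run: train, then save the network and the training data.
SerializeObject.Save("network.bin", trainedNetwork);
EncogUtility.SaveEGB(new FileInfo("training.egb"), trainingSet);

// Later runs: load them back instead of retraining.
var restoredNetwork = (NEATNetwork)SerializeObject.Load("network.bin");
var restoredData = EncogUtility.LoadEGB2Memory(new FileInfo("training.egb"));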
I have 10+ files that I want to add to ArcMap and then do some spatial analysis on in an automated fashion. The files are in CSV format, located in one folder, and named in order from "TTS11_path_points_1" to "TTS11_path_points_13". The steps are as follows:
1. Make an XY event layer
2. Export the XY table to a point shapefile using the Feature Class To Feature Class tool
3. Project the shapefiles
4. Snap the points to another line shapefile
5. Make a Route layer (Network Analyst)
6. Add locations to the stops using the output of step 4
7. Solve to get routes between points based on a RouteName field
I tried to attach a snapshot of the ModelBuilder model to show the steps visually, but I don't have enough points to do so.
I have two problems:
How do I iterate this procedure over the number of files that I have?
How do I make sure that each iteration's output has a different name so it doesn't overwrite the one from the previous iteration?
Your help is much appreciated.
Once you're satisfied with the way the model works on a single input CSV, you can batch the operation 10+ times, manually adjusting the input/output files. This easily addresses your second problem, since you're controlling the output name.
You can use an iterator in your ModelBuilder model -- specifically, Iterate Files. The iterator would be the first input to the model, and has two outputs: File (which you link to other tools), and Name. The latter is a variable which you can use in other tools to control their output -- for example, you can set the final output to C:\temp\out%Name% instead of just C:\temp\output. This can be a little trickier, but once it's in place it tends to work well.
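If you'd rather script the loop outside ModelBuilder, here is a rough Python/arcpy sketch of the same pattern (paths, field names, and the spatial reference are placeholders, and only the first two steps are shown):

import os
import arcpy

in_folder = r"C:\data\csv"          # placeholder: folder containing the CSV files
out_gdb = r"C:\data\outputs.gdb"    # placeholder: output workspace
sr = arcpy.SpatialReference(4326)   # placeholder: coordinate system of the XY values

for i in range(1, 14):              # TTS11_path_points_1 ... TTS11_path_points_13
    csv_path = os.path.join(in_folder, "TTS11_path_points_{}.csv".format(i))
    layer_name = "points_{}".format(i)

    # 1. Make XY event layer (the X/Y field names are assumptions)
    arcpy.management.MakeXYEventLayer(csv_path, "X", "Y", layer_name, sr)

    # 2. Export to a feature class with a unique, per-iteration name
    arcpy.conversion.FeatureClassToFeatureClass(layer_name, out_gdb, "path_points_{}".format(i))

    # ... continue with Project, Snap, Make Route Layer, Add Locations, and Solve,
    # appending "_{}".format(i) to every output name so nothing is overwritten.

Either way, the key is that the loop index (or the iterator's Name variable) ends up in every output path.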
For future reference, gis.stackexchange.com is likely to get you a faster response.
I'm in the process of evaluating how successful a script I wrote is, and a quick-and-dirty method I've employed is to look at the first few and last few values of a single variable and do a few calculations with them based on the same values in another netCDF file.
I know there are better ways to approach this, but again, this is a really quick-and-dirty method that has worked for me so far. My question, though, is: by looking at the raw data through ncdump, is there a way to tell which vertical layer the data belongs to? In my example, the file has 14 layers. I'm assuming that the first few values are part of the surface layer and the last few values are part of the top layer, but I suspect that this assumption is wrong, at least in part.
As a follow-up question, what would then be the easiest 'proper' way to tell what layer data belongs to? Thank you in advance!
ncview and NCO are both very powerful and quick command-line tools for viewing data inside a netCDF file.
ncview: http://meteora.ucsd.edu/~pierce/ncview_home_page.html
NCO: http://nco.sourceforge.net/
You can easily show a variable over all layers, for example with:
ncks -d layer,0,13 some_infile.nc
ncdump dumps the data with the last dimension varying fastest (http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/CDL-Syntax.html) so if 'layer' is the slowest/first dimension, the earlier values are all in the first layer, while the last few values are in the last layer.
As to whether the first layer is the top or bottom layer, you'd have to look at the 'layer' coordinate variable and its data.
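For example, assuming the vertical dimension is actually named layer, you can check both the dimension order and the layer coordinate values with ncdump alone:

ncdump -h some_infile.nc        # header only: variable shapes and dimension order
ncdump -v layer some_infile.nc  # values of the 'layer' coordinate variable

The -h output shows where 'layer' sits in each variable's dimension order (and therefore how ncdump interleaves the values), and the -v output tells you whether the layer axis starts at the surface or at the top.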
I have a bunch of binary data in N-byte chunks, where each chunk corresponds exactly to one row of a PyTables table.
Right now I am parsing each chunk into fields, writing them to the various fields in the table row, and appending them to the table.
But this seems a little silly since PyTables is going to convert my structured data back into a flat binary form for inclusion in an HDF5 file.
If I need to optimize the CPU time necessary to do this (my data comes in large bursts), is there a more efficient way to load the data into PyTables directly?
PyTables does not currently expose a 'raw' dump mechanism like you describe. However, you can fake it by using UInt8Atom and UInt8Col. You would do something like:
import tables as tb

f = tb.open_file('my_file.h5', 'w')
# one column holding N raw bytes per row (N = your chunk size)
mytable = f.create_table('/', 'mytable', {'mycol': tb.UInt8Col(shape=(N,))})
mytable.append(myrow)  # myrow: a sequence of rows whose 'mycol' values are length-N uint8 arrays
f.close()
This would likely get you the fastest I/O performance. However, you will miss out on the meaning of the various fields that are part of this binary chunk.
Arguably, raw dumping of the chunks/rows is not what you want to do anyway, which is why it is not explicitly supported. Internally, HDF5 and PyTables handle many kinds of conversion for you. This includes, but is not limited to, endianness and other platform-specific details. By managing the data types for you, they keep the resulting HDF5 file and data set portable across platforms. When you dump raw bytes in the manner you describe, you short-circuit one of the main advantages of using HDF5/PyTables, and there is a high probability that the resulting file will look like garbage on anything but the original system that produced it.
So in summary, you should convert the chunks to the appropriate data types in memory and then write them out. Yes, this takes more processing power and time, but in addition to being the right thing to do, it will ultimately save you huge headaches down the road.
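If the chunk layout is fixed, one way to keep most of the speed while still getting properly typed columns is to reinterpret each burst of bytes as a structured NumPy array and append it in a single call, rather than parsing row by row in Python. A rough sketch (the field layout and raw_bytes are placeholders for your actual record format and incoming data):

import numpy as np
import tables as tb

# placeholder record layout for one chunk; make it match your binary format exactly
dt = np.dtype([('timestamp', '<u8'), ('value', '<f4'), ('flags', '<u2')])

f = tb.open_file('my_file.h5', 'w')
table = f.create_table('/', 'mytable', description=dt)

# reinterpret a whole burst of raw bytes as structured records at once,
# then append them in one call instead of one row at a time
records = np.frombuffer(raw_bytes, dtype=dt)
table.append(records)

f.close()

This keeps the per-row work inside NumPy/PyTables rather than in a Python loop, while still letting HDF5 store each field with a real data type.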