I am recording stereo audio from the line-in of my desktop using Microsoft's Core Audio API. It records at 44,100 Hz, 32 bit. I want to know how the stereo data is laid out in the buffer: is the first 32-bit sample from one channel (microphone) and the next 32-bit sample from the other, or something else? Here is the code I used to record the audio.
Typically the channels are interleaved, so with your 32-bit format the buffer should look like this: [left 32-bit sample][right 32-bit sample][left 32-bit sample][right 32-bit sample] ... [left 32-bit sample][right 32-bit sample].
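For illustration, here is a minimal sketch of how you could split such an interleaved buffer once you have captured it. It assumes the 32-bit samples are IEEE floats, which is what WASAPI's shared-mode mix format usually delivers; for 32-bit integer PCM you would swap `float` for `int32_t`. The function name and buffer handling are just an example, not part of the Core Audio API.

```c
/* Minimal sketch: split an interleaved stereo capture buffer into two channel
 * buffers. Assumes 32-bit IEEE float samples; frame_count is the number of
 * stereo frames (left/right pairs), not the number of individual samples. */
#include <stddef.h>

void deinterleave_stereo(const float *interleaved, size_t frame_count,
                         float *left, float *right)
{
    for (size_t i = 0; i < frame_count; ++i) {
        left[i]  = interleaved[2 * i];      /* channel 0 (left)  */
        right[i] = interleaved[2 * i + 1];  /* channel 1 (right) */
    }
}
```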
I have an RDM6300 RFID writer/reader. It can read RFID tags and sends the data via UART to a microcontroller. So far I have worked with several microcontrollers, of which the STM32F4 had the most UART ports (8 transmitters and receivers). The Arduino has a few, but that is not enough.
I want to have 25 RFID readers (that are reading almost at the same time), but I can't find a way to send data from all the readers to one microcontroller.
Is there a way to connect 25 readers to ONE microcontroller?
You have 25 things transmitting at 9600 bps. You have an MCU running at 180 MHz with 8 UARTs and lots of timer capture channels (32 channels, 30 of them usable on the 100-pin STM32F427VITx). 8 of the 25 inputs are taken care of by the UARTs; the other 17 need to be handled by other means. Connect them to timer capture channels.
The MCU runs at 180 MHz and the inputs change state at most 9600 times per second, which means 18750 clock cycles between events. That should be way more than enough to process all of them, if you don't use HAL.
1. Read the timer status register, check for capture events and clear them.
2. Check the pin state; a low level means the start of a frame.
3. Store the capture register value for that channel.
4. Keep checking for capture events.
5. If there is one, clear it.
6. Read the capture timestamp and subtract the stored value from step 3 from it.
7. Calculate the number of bits received with identical state.
8. Keep doing this until you have 9 bits (start bit + 8 data bits) and a high level on the pin.
Do the above in parallel for all 17 channels. You need a suitable prescaler for the timers so they won't overflow while receiving a full frame (9 * 18750 = 168750 cycles). A sketch of the per-channel logic is shown below.
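Here is a minimal per-channel sketch in C of the state machine described above. It assumes the capture timestamp and pin level have already been read from the timer and GPIO registers (steps 1, 2, 4 and 5 are chip specific, so they are left out), that the capture value does not wrap within a frame, and that `BIT_TICKS`, the struct, and the function name are illustrative rather than taken from any library.

```c
/* Per-channel soft-UART receive state machine, fed with one (timestamp, pin
 * level) pair per capture event. BIT_TICKS is the number of timer ticks per
 * bit at 9600 bps and must be set from your timer clock and prescaler. */
#include <stdbool.h>
#include <stdint.h>

#define BIT_TICKS 18750u   /* e.g. 180 MHz timer clock / 9600 bps */

typedef struct {
    bool     receiving;     /* inside a frame? */
    uint32_t last_edge;     /* capture value of the previous edge (step 3) */
    uint8_t  bits_done;     /* bits accounted for so far (start + data) */
    uint8_t  shift;         /* assembled data bits, LSB first */
    uint8_t  last_level;    /* line level after the previous edge */
} soft_uart_rx_t;

/* Call for every capture event on the channel. Returns true and stores a byte
 * in *out when a full frame (start bit + 8 data bits) has been received. */
bool soft_uart_edge(soft_uart_rx_t *ch, uint32_t capture, uint8_t pin_level,
                    uint8_t *out)
{
    bool byte_ready = false;

    if (!ch->receiving) {
        if (pin_level == 0) {          /* step 2: low level marks a start bit */
            ch->receiving  = true;
            ch->last_edge  = capture;  /* step 3: remember this capture value */
            ch->bits_done  = 0;
            ch->shift      = 0;
            ch->last_level = 0;
        }
        return false;
    }

    /* Steps 6-7: the line sat at ch->last_level since the previous edge; work
     * out how many whole bit periods that covers and shift those bits in. */
    uint32_t elapsed = capture - ch->last_edge;
    uint32_t nbits   = (elapsed + BIT_TICKS / 2) / BIT_TICKS; /* round to nearest */

    for (uint32_t i = 0; i < nbits && ch->bits_done < 9; ++i) {
        if (ch->bits_done > 0)              /* bit 0 is the start bit, skip it */
            ch->shift |= (uint8_t)(ch->last_level << (ch->bits_done - 1));
        ch->bits_done++;
    }

    /* Step 8: start bit + 8 data bits collected means the byte is complete.
     * If the byte ends in 1-bits, this only happens at the next falling edge,
     * which is then treated as the start bit of the following frame. */
    if (ch->bits_done >= 9) {
        *out = ch->shift;
        byte_ready    = true;
        ch->receiving = false;
        if (pin_level == 0) {          /* this edge is already the next start */
            ch->receiving = true;
            ch->bits_done = 0;
            ch->shift     = 0;
        }
    }

    ch->last_edge  = capture;
    ch->last_level = pin_level;
    return byte_ready;
}
```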
I need to generate a 10 Mbit/s RS485 (UART) data stream with a USB device. Until now I thought the FT2232H would be a perfect fit, but as far as I understand, the FT2232H can only generate 12 Mbit/s or 8 Mbit/s, as the sub-dividers are limited for the main dividers 0 and 1.
The USB device should enumerate on the PC as a serial interface that supports a baud rate of 10 Mbit/s.
So, any idea which chip is suited for this specific task?
Thanks.
OK, I finally implemented a suitable solution:
1. Set the baud rate to 12 Mbit/s and use 2 stop bits and 1 parity bit.
2. Implement a 12 Mbit/s RS422 receiver in a CPLD or FPGA and connect it directly to the FT2232H. The CPLD/FPGA removes one stop bit and the parity bit.
3. Output the received data from the CPLD/FPGA with a regular 8N1 10 Mbit/s RS422 transmitter. In total, this must be done twice (once per direction), with the 12 Mbit/s and 10 Mbit/s sides swapped.
Works like a charm, and the CPLD can even be a QFN32 Lattice iCE40.
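To see why the rates match up: a 12 Mbit/s frame with 1 start, 8 data, 1 parity and 2 stop bits takes 12 bit times, while a 10 Mbit/s 8N1 frame takes 10 bit times, so both sides move exactly one million frames per second. The tiny C snippet below just checks that arithmetic; it is an illustration, not part of the actual CPLD design.

```c
/* Sanity check of the rate-matching trick: both framings yield 1e6 frames/s. */
#include <stdio.h>

int main(void)
{
    const double ft2232h_baud = 12e6, rs485_baud = 10e6;
    const double ft2232h_frame_bits = 1 + 8 + 1 + 2;  /* start, 8 data, parity, 2 stop */
    const double rs485_frame_bits   = 1 + 8 + 1;      /* plain 8N1 */

    printf("FT2232H side: %.0f frames/s\n", ft2232h_baud / ft2232h_frame_bits);
    printf("RS485 side:   %.0f frames/s\n", rs485_baud / rs485_frame_bits);
    return 0;
}
```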
I'm trying to understand how MPEG-2 works, so I can finish a project that is based on it. First of all I want to understand MPEG-2 itself. I want to know why the total bitrate decreased so quickly between 1995 and 2005: it went from 6 Mbit/s to 2 Mbit/s. How was this possible?
I'm learning about computer architecture and I know how a computer works when it executes a program. What confuses me is the case where the instruction length is longer than the width of the bus AND the instruction length is NOT double the bus width. Let's say we have 12-bit instructions and an 8-bit bus. What does the computer do? Does it:
1. Analyse the PC
2. Go to the address in the PC
3. Fetch 8 bits of the instruction
4. Store those 8 bits in the instruction register
5. Increase the PC by 8 bits (???)
6. Fetch the remaining 4 bits
7. Fill the instruction register (which is 12 bits long?)
Well, as you can see, I'm confused here. I suppose it's not like this, but I need to know in detail how it works and what the PC is after every step.
Would be very grateful for some help! Thanks in advance.
Normally, the smallest amount of memory that can be read or written is 1 byte, i.e. 8 bits. So if the CPU needs only 12 bits, it has to read two 8-bit bytes. From those 16 bits, the required 12 bits are extracted by hardware, and the remaining 4 bits are not used.
Since this is not very memory efficient, the instruction length of a CPU is normally a multiple of 8 bits, e.g. achieved by packing operands directly into the instruction.
So the 7 steps in your example are right except step 6: there, a full 8 bits are fetched, of which only 4 are used.
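As a purely illustrative sketch (the byte order, the 12-bit mask and the function name are assumptions, not taken from any real CPU), the fetch described above boils down to something like this:

```c
/* Hypothetical CPU with 12-bit instructions on an 8-bit bus: read two bytes,
 * combine them, and keep only the low 12 bits; the top 4 bits of the second
 * byte are simply ignored. */
#include <stdint.h>

uint16_t fetch_instruction(const uint8_t *memory, uint16_t pc)
{
    uint8_t  lo = memory[pc];            /* first fetch: 8 bits */
    uint8_t  hi = memory[pc + 1];        /* second fetch: 8 more bits */
    uint16_t instr = (uint16_t)((hi << 8) | lo) & 0x0FFF;  /* keep 12 of 16 bits */
    return instr;                        /* the PC would then advance by 2 bytes */
}
```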
While running some tests with an FT232R USB-to-RS232 chip, which should be able to manage speeds up to 3 Mbaud, I have the problem that my actual speed is only around 38 kbaud, or 3.8 kB/s.
I've searched the web, but I could not find any comparable data to prove or disprove this limitation.
While I look further into this, I would like to know if someone here has comparable data.
I tested with my own code and with this tool here:
http://www.aggsoft.com/com-port-stress-test.htm
Settings were 115,200 baud, 8N1, and 64-byte data packets.
I would have expected results like these:
At 115,200 baud -> effectively 11,520 bytes/s, i.e. 11.52 kB/s
At 921,600 baud -> 92.16 kB/s
I need to confirm a minimum speed of 11.2 kB/s, ideally speeds around 15-60 kB/s.
Based on the datasheet this should be no problem; based on reality, I am stuck at 3.8 kB/s, for now at least.
Oh my, I found quite a good hint: my transfer rate is highly dependent on the size of the packets. While using 64-byte packets I end up with 3.8 kB/s, with 180-byte packets it averages around 11.26 kB/s, and the light really went on when I checked the speed for 1-byte packets: around 64 bytes/s!
Adding some math to it: 11.52 kB/s divided by 180 equals 64 bytes/s. So basically the speed scales with the packet size. Is this right? And why is that?
The results that you observe are because of the way serial over USB works. This is a USB 1.1 chip. USB does transfers using packets, not a continuous stream like a plain serial line.
So your device gets a time-sliced window, and it is up to the driver to utilize this window effectively. When you set the packet size to 1, you can only transmit one byte per USB transfer; to transmit the next byte you have to wait for your turn again.
Usually a USB device has a buffer on the device end where it can hold data between transfers and thus keep the output rate constant. You under-flow this buffer when you set the packet size too low. The time slice on USB 1.1 is 10 ms, which only gives you 100 transfers per second to be shared between all of the devices.
When you make a "send" call, all of your data goes out in one transfer, to keep interactive applications working right. It is best to use the maximum transfer size to achieve the best performance on USB devices. This is not always possible if you have an interactive application, but it usually is when you have a data-transfer application.
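The scaling you measured fits that explanation: a roughly fixed number of transfer opportunities per second, each carrying at most one of your packets, capped by what the baud rate itself allows. The snippet below is only a back-of-the-envelope model; the 64 transfers/s figure is taken from your own 1-byte measurement, not from any FTDI specification.

```c
/* Rough model: throughput = transfers per second * packet size, capped by the
 * raw line rate (8N1 means 10 bit times per byte at 115,200 baud). */
#include <stdio.h>

int main(void)
{
    const double transfers_per_s = 64.0;       /* from the 1-byte measurement */
    const double line_limit = 115200.0 / 10.0; /* 11,520 bytes/s at 8N1 */
    const int    packet_sizes[] = { 1, 64, 180, 256 };

    for (size_t i = 0; i < sizeof packet_sizes / sizeof packet_sizes[0]; ++i) {
        double rate = transfers_per_s * packet_sizes[i];
        if (rate > line_limit)
            rate = line_limit;                 /* can't beat the baud rate */
        printf("%3d-byte packets -> %.0f bytes/s\n", packet_sizes[i], rate);
    }
    return 0;
}
```

With these numbers the model gives 64 bytes/s, about 4 kB/s, and 11.52 kB/s for 1-, 64- and 180-byte packets respectively, which is close to what you measured.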