Cannot record bitrate-based audio using Windows Media Encoder - audio-recording

I want to record video from webcam and sound from microphone using Windows Media Encoder.
I set a profile to record audio using bitrate-based mode and video using CBR.
this.newProfile = new WMEncProfile2();
this.newProfile.ValidateMode = true;
this.newProfile.ContentType = 17;
this.newProfile.set_VBRMode(WMENC_SOURCE_TYPE.WMENC_AUDIO, 0, WMENC_PROFILE_VBR_MODE.WMENC_PVM_BITRATE_BASED);
this.newProfile.set_VBRMode(WMENC_SOURCE_TYPE.WMENC_VIDEO, 0, WMENC_PROFILE_VBR_MODE.WMENC_PVM_NONE);
newProfile.AddAudience(400000);
IWMEncAudienceObj audience = newProfile.get_Audience(0);
audience.set_VideoCodec(0, 3);
audience.set_VideoBitrate(0, 700000);
audience.set_VideoFPS(0, 25000);
audience.set_VideoKeyFrameDistance(0, 5000);
audience.set_VideoBufferSize(0, 3000);
audience.set_VideoWidth(0,800);
audience.set_VideoHeight(0, 600);
audience.set_AudioCodec(0, 1);
audience.SetAudioConfig(0, 2, 44100, 192000, 16);
this.newProfile.Validate();
However, the video output has no sound. It only works in CBR mode.

I just found out that with bitrate-based VBR the encoder must be started (encoder.Start();) twice. Now I can record the sound.
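For reference, a minimal sketch of that call order, written against the encoder's C++/COM interface (the question uses the C# interop, but the members are the same; pEncoder is an IWMEncoder* obtained from CoCreateInstance(CLSID_WMEncoder, ...)). The polling loop between the two Start() calls is my assumption about how to wait for the first pass to finish, not something from the original code:
// Sketch only: with a bitrate-based (two-pass) VBR profile the first Start()
// runs the analysis pass; when the encoder drops back to the stopped state,
// a second Start() performs the actual encode and the audio is recorded.
hr = pEncoder->PrepareToEncode(VARIANT_TRUE);
hr = pEncoder->Start();                         // pass 1: analysis
WMENC_ENCODER_STATE state;
do {
    Sleep(200);
    pEncoder->get_RunState(&state);             // poll until pass 1 has finished
} while (state != WMENC_ENCODER_STOPPED);
hr = pEncoder->Start();                         // pass 2: the real recording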

Related

How to combine XT_DAC_Audio and A2DP BT-Sink on an ESP32?

I am trying to combine two libraries on an ESP32.
I can run each of them alone, but not together.
I want to make a BT speaker as a gift for my gf.
I want the speaker to play an audio file and afterwards run the BT sink.
But every time I try to enable both at once (or one after the other), the audio file does not play and the BT script starts running immediately.
When I comment one of them out, the other one works.
I hope someone can help. I am sure there is an easy fix.
Here is my .ino file:
// Playing a digital WAV recording repeatedly using the XTronical DAC Audio library
// prints out to the serial monitor numbers counting up showing that the sound plays
// independently of the main loop
// See www.xtronical.com for write ups on sound, the hardware required and how to make
// the wav files and include them in your code
#include "BluetoothA2DPSink.h"
BluetoothA2DPSink a2dp_sink;
#include "SoundData.h"
#include "XT_DAC_Audio.h"
#include "Darthatmet.h"
XT_Wav_Class Darthatmet(Darth); // link to Darthatmet.h
XT_Wav_Class ForceWithYou(Force);
XT_Sequence_Class Sequence;
XT_DAC_Audio_Class DacAudio(25,0); // Create the main player class object. Use GPIO 25, one of the 2 DAC pins and timer 0
void setup() {
Sequence.RepeatForever=false;
Sequence.AddPlayItem(&Darthatmet);
Sequence.AddPlayItem(&ForceWithYou);
DacAudio.Play(&Sequence);
const i2s_config_t i2s_config = {
.mode = (i2s_mode_t) (I2S_MODE_MASTER | I2S_MODE_TX | I2S_MODE_DAC_BUILT_IN),
.sample_rate = 44100, // corrected by info from bluetooth
.bits_per_sample = (i2s_bits_per_sample_t) 16, // the DAC module will only take the 8bits from MSB
.channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
.communication_format = (i2s_comm_format_t)I2S_COMM_FORMAT_STAND_MSB,
.intr_alloc_flags = 0, // default interrupt priority
.dma_buf_count = 8,
.dma_buf_len = 64,
.use_apll = false
};
a2dp_sink.set_i2s_config(i2s_config);
a2dp_sink.start("DarthVader-BT");
}
void loop() {
DacAudio.FillBuffer(); // Fill the sound buffer with data
}
I already tried putting stuff inside the loop or using a true/false statement.
But it didn't work, or I am just not educated enough with Arduino code to do it :(
Ok, I found the solution :)
After some hours of testing I did it with "millis()".
It can be used to have a timer which runs in the "background".
I did it like this and it works as intended!
It powers the ESP32 and plays the "startup sound", which is made out of 2 audio snippets in HEX format.
While doing that it checks how much time in milliseconds has passed since power-up.
I know my startup sound is roughly 5 seconds long, so I set the "timer" to 5.5 seconds.
After it reaches 5.5 seconds it starts the BT sink AND sets a flag named "btstarted". The flag needs to be false to trigger the BT sink start.
At first I did not use this flag. The result was that I could see the BT device but couldn't connect, because it kept restarting after 5.5 seconds.
With this flag, it will only start once per power-up. Which is nice :)
Next steps are programmable LEDs (ws2812b) and 3D design/printing of the housing :)
Here is my code:
#include "BluetoothA2DPSink.h"
#include "SoundData.h"
#include "XT_DAC_Audio.h"
#include "Darthatmet.h"
BluetoothA2DPSink a2dp_sink;
XT_Wav_Class Darthatmet(Darth); // link to Darthatmet.h
XT_Wav_Class ForceWithYou(Force);
XT_Sequence_Class Sequence;
XT_DAC_Audio_Class DacAudio(25,0); // Create the main player class object. Use GPIO 25, one of the 2 DAC pins and timer 0
bool btstarted = false;
void setup() {
Sequence.RepeatForever=false;
Sequence.AddPlayItem(&Darthatmet);
Sequence.AddPlayItem(&ForceWithYou);
DacAudio.Play(&Sequence);
const i2s_config_t i2s_config = {
.mode = (i2s_mode_t) (I2S_MODE_MASTER | I2S_MODE_TX | I2S_MODE_DAC_BUILT_IN),
.sample_rate = 44100, // corrected by info from bluetooth
.bits_per_sample = (i2s_bits_per_sample_t) 16, // the DAC module will only take the 8bits from MSB
.channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
.communication_format = (i2s_comm_format_t)I2S_COMM_FORMAT_STAND_MSB,
.intr_alloc_flags = 0, // default interrupt priority
.dma_buf_count = 8,
.dma_buf_len = 64,
.use_apll = false
};
a2dp_sink.set_i2s_config(i2s_config);
//a2dp_sink.set_mono_downmix(true);
}
void loop() {
DacAudio.FillBuffer(); // Fill the sound buffer with data, start Playback
unsigned long zeit = millis();
if (zeit >= 5500 and btstarted == false) {
a2dp_sink.start("DarthVaderrr");
btstarted = true;
}
}

Video broadcast using NDI SDK 4.5 in iOS 13 not working. Receiver in LAN does not receive any video packets

I have been trying to use NDI SDK 4.5, in an Objective-C iOS 13 app, to broadcast camera capture from an iPhone device.
My sample code is in public Github repo: https://github.com/bharatbiswal/CameraExampleObjectiveC
Following is how I send CMSampleBufferRef sampleBuffer:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = VIDEO_CAPTURE_WIDTH;
video_frame.yres = VIDEO_CAPTURE_HEIGHT;
video_frame.FourCC = NDIlib_FourCC_type_UYVY; // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
video_frame.line_stride_in_bytes = VIDEO_CAPTURE_WIDTH * VIDEO_CAPTURE_PIXEL_SIZE;
video_frame.p_data = CVPixelBufferGetBaseAddress(pixelBuffer);
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame);
I have been using "NewTek NDI Video Monitor" to receive the video from the network. However, even though the device shows up as a source, the video does not play.
Has anyone used the NDI SDK on iOS to build broadcast sender or receiver functionality? Please help.
You should use kCVPixelFormatType_32BGRA in the video settings, and NDIlib_FourCC_type_BGRA as the FourCC in NDIlib_video_frame_v2_t.
Are you sure about your VIDEO_CAPTURE_PIXEL_SIZE?
When I worked with NDI on macOS I had the same black-screen problem, and it was due to a wrong line stride.
Maybe this can help: https://developer.apple.com/documentation/corevideo/1456964-cvpixelbuffergetbytesperrow?language=objc
Also it seems the pixel formats from core video and NDI don't match.
On the core video side you are using Bi-Planar Y'CbCr 8-bit 4:2:0, and on the NDI side you are using NDIlib_FourCC_type_UYVY which is Y'CbCr 4:2:2.
I cannot find any Bi-Planar Y'CbCr 8-bit 4:2:0 pixel format on the NDI side.
You may have more luck using the following combination:
core video: https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers/kcvpixelformattype_420ypcbcr8planarfullrange?language=objc
NDI: NDIlib_FourCC_type_YV12
Hope this helps!
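Regarding the stride point above, here is a small sketch (plain CoreVideo C calls, not code from the question) of taking the geometry from the pixel buffer itself instead of the VIDEO_CAPTURE_* constants, assuming the capture output has been switched to the packed kCVPixelFormatType_32BGRA format so there is a single plane:
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
video_frame.xres = (int)CVPixelBufferGetWidth(pixelBuffer);
video_frame.yres = (int)CVPixelBufferGetHeight(pixelBuffer);
video_frame.FourCC = NDIlib_FourCC_type_BGRA;                                      // matches 32BGRA capture
video_frame.line_stride_in_bytes = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);  // real stride, including row padding
video_frame.p_data = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer);
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);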
In my experience, you have two mistakes. To use CVPixelBufferGetBaseAddress on a CVPixelBuffer, CVPixelBufferLockBaseAddress must be called first; otherwise it returns a null pointer.
https://developer.apple.com/documentation/corevideo/1457128-cvpixelbufferlockbaseaddress?language=objc
Secondly, NDI does not support bi-planar YUV 4:2:0 (the default format for iOS cameras). More precisely, NDI only accepts a single data pointer, so you have to merge the two planes into one contiguous buffer and pass it in NV12 format. See the NDI documentation for details.
So your code should look like the following. Also, if you send asynchronously instead of using NDIlib_send_send_video_v2, a strong reference to the transferred memory must be kept until the NDI library has finished the transfer.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
NDIlib_FourCC_video_type_e ndiVideoFormat;
uint8_t* pixelData;
int stride;
if (pixelFormat == kCVPixelFormatType_32BGRA) {
ndiVideoFormat = NDIlib_FourCC_type_BGRA;
pixelData = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer); // Or copy for asynchronous transmit.
stride = width * 4;
} else if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
ndiVideoFormat = NDIlib_FourCC_type_NV12;
uint8_t* yPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
int yPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
int ySize = yPlaneBytesPerRow * height;
uint8_t* uvPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
int uvPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
int uvSize = uvPlaneBytesPerRow * (height / 2); // the UV plane of NV12 has half the rows
stride = yPlaneBytesPerRow;
pixelData = (uint8_t*)malloc(ySize + uvSize);   // merge both planes into one contiguous buffer
memcpy(pixelData, yPlane, ySize);
memcpy(pixelData + ySize, uvPlane, uvSize);
} else {
return;
}
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = width;
video_frame.yres = height;
video_frame.FourCC = ndiVideoFormat;
video_frame.line_stride_in_bytes = stride;
video_frame.p_data = pixelData;
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame); // synchronous sending.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// For the synchronous sending case: free the data or use a pre-allocated buffer.
if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
free(pixelData);
}
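As a follow-up to the note about asynchronous sending above: NDIlib_send_send_video_async_v2 returns before the frame has actually been consumed, and the next send call is what releases the previous buffer. A minimal sketch of that idea (the s_previousFrame name is mine, and it assumes the copied NV12 buffer rather than the locked CVPixelBuffer):
static uint8_t* s_previousFrame = NULL;
NDIlib_send_send_video_async_v2(self.my_ndi_send, &video_frame); // returns immediately
free(s_previousFrame);        // safe now: this call has synchronized the previous async send
s_previousFrame = pixelData;  // keep the current frame alive until the next call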

JavaFx media bytes

I want to create a sound wave in my Java program from an MP3 file. I researched and found out that for WAV files I need to use an AudioInputStream and calculate a byte array... For MP3 files I am using JavaFX Media and MediaPlayer. Are the bytes from the InputStream the same as the ones from media.getSource().getBytes();? An AudioInputStream can't read MP3...
Or how am I supposed to get the values of an MP3 file for the sound wave?
Bytes from AudioInputStream:
AudioInputStream audioInputStream;
try {
audioInputStream = AudioSystem.getAudioInputStream(next);
int frameLength = (int) audioInputStream.getFrameLength();
int frameSize = (int) audioInputStream.getFormat().getFrameSize();
byte[] bytes = new byte[frameLength * frameSize];
audioInputStream.read(bytes); // actually read the sample data into the array
g2.setColor(Color.MAGENTA);
for(int p = 0; p < bytes.length; p++){
g2.fillRect(20 + (p * 3), 50, 2, bytes[p]);
}
} catch (UnsupportedAudioFileException | IOException e) {
e.printStackTrace();
}
And from JavaFX:
Media media;
MediaPlayer player;
media = new Media("blablafile");
player = new MediaPlayer(media);
byte[] bytes = media.getSource().getBytes();
The JavaFX Media API does not provide much low-level support as of Java 10. It seems to be designed with only the necessary features to play media, not manipulate it significantly.
That being said, you might want to look at AudioSpectrumListener. I can't promise it will give you what you want (I'm not familiar with computer-audio concepts) but it may allow you to create your sound-wave; at least a crude representation.
You use an AudioSpectrumListener with a MediaPlayer using the corresponding property.
If your calculations don't have to be in real time then you can do them ahead of time using:
byte[] bytes = URI.create(media.getSource()).toURL().openStream().readAllBytes();
Note, however, that if the media is remote you will end up downloading the bytes twice: once to get the bytes for your sound wave and again when actually playing the media with a MediaPlayer.
Also, you'll want to do the above on a background thread and not the JavaFX Application thread to avoid the possibility of freezing the UI.

Add two media sources in topology and display mix video - Windows Media Foundation

I am exploring Windows Media Foundation.
I want to mix and display two video streams in one window.
I am following a few of the samples provided by MS.
I am trying to add multiple media sources to a topology; I want to add two media files to the topology.
As per the link below, I am following this code to add a media source to the topology:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms701605(v=vs.85).aspx
Below is the code to add source node to the topology:
HRESULT hr = pPD->GetStreamDescriptorByIndex(iStream, &fSelected, &pSD);
if (FAILED(hr))
{
goto done;
}
if (fSelected)
{
// Create the media sink activation object.
hr = CreateMediaSinkActivate(pSD, hVideoWnd, &pSinkActivate);
if (FAILED(hr))
{
goto done;
}
// Add a source node for this stream.
hr = AddSourceNode(pTopology, pSource, pPD, pSD, &pSourceNode);
if (FAILED(hr))
{
goto done;
}
// Create the output node for the renderer.
hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode);
if (FAILED(hr))
{
goto done;
}
// Connect the source node to the output node.
hr = pSourceNode->ConnectOutput(0, pOutputNode, 0);
}
But I am not able to add multiple media sources to my topology.
My single-file playback works properly, but I am not able to mix and display two files.
I recommend splitting your task into five steps:
1. Write code for playing ONE video file. On MSDN there is example code: How to Play Media Files with Media Foundation.
2. Find the point in that working player code where the MediaSource is created from the video file path (URL).
3. Create TWO MediaSources from the two video file paths (URLs).
4. Using the function MFCreateAggregateSource, create ONE MediaSource from the TWO MediaSources, and return that MediaSource from the player method HRESULT CreateMediaSource(PCWSTR sURL, IMFMediaSource **ppSource) (see the sketch after this list).
5. Call hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode); twice: hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode); for the first video stream and hr = AddOutputNode(pTopology, pSinkActivate, 1, &pOutputNode); for the second video stream.
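A minimal sketch of step 4 (error handling trimmed; pSource1 and pSource2 stand for the two sources created in step 3, they are not variables from the question):
IMFCollection *pCollection = NULL;
IMFMediaSource *pAggSource = NULL;
HRESULT hr = MFCreateCollection(&pCollection);
if (SUCCEEDED(hr)) hr = pCollection->AddElement(pSource1);              // first file's media source
if (SUCCEEDED(hr)) hr = pCollection->AddElement(pSource2);              // second file's media source
if (SUCCEEDED(hr)) hr = MFCreateAggregateSource(pCollection, &pAggSource);
// pAggSource now exposes the streams of both files through one presentation descriptor.
SafeRelease(&pCollection);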
Regards.
P.S. If you use two videos that also have audio streams, you will have FOUR streams in the aggregate MediaSource - you may need to FIND the stream IDs of the video streams.
P.P.S. It is not easy to give recommendations from viewing only the demo code, but in CreateMediaSinkActivate you will find the code hr = MFCreateVideoRendererActivate(hVideoWindow, &pActivate);. In your code, you must create this Activate beforehand:
// For each stream, create the topology nodes and add them to the topology.
for (DWORD i = 0; i < cSourceStreams; i++)
{
hr = AddBranchToPartialTopology(pTopology, pSource, pPD, i, hVideoWnd);
if (FAILED(hr))
{
goto done;
}
}
then pass this created Activate as an argument to AddBranchToPartialTopology,
for example:
hr = MFCreateVideoRendererActivate(hVideoWindow, &pVideoRendererActivate);
// For each stream, create the topology nodes and add them to the topology.
for (DWORD i = 0; i < cSourceStreams; i++)
{
hr = AddBranchToPartialTopology(pTopology, pSource, pPD, i, pVideoRendererActivate);
if (FAILED(hr))
{
goto done;
}
}
In AddBranchToPartialTopology you must write something like this:
HRESULT AddBranchToPartialTopology(
IMFTopology *pTopology, // Topology.
IMFMediaSource *pSource, // Media source.
IMFPresentationDescriptor *pPD, // Presentation descriptor.
DWORD iStream, // Stream index.
IMFActivate* aVideoRendererActivate) // VideoRenderer for video playback.
{
IMFStreamDescriptor *pSD = NULL;
IMFActivate *pSinkActivate = NULL;
IMFTopologyNode *pSourceNode = NULL;
IMFTopologyNode *pOutputNode = NULL;
BOOL fSelected = FALSE;
HRESULT hr = pPD->GetStreamDescriptorByIndex(iStream, &fSelected, &pSD);
if (FAILED(hr))
{
goto done;
}
DWORD iStreamID = 0;
if (fSelected)
{
// Create the media sink activation object.
hr = CreateMediaSinkActivate(pSD, iStreamID, aVideoRendererActivate, &pSinkActivate);
In 'CreateMediaSinkActivate' you must write something like this:
DWORD globalVideoIndex = 0;
HRESULT CreateMediaSinkActivate(
IMFStreamDescriptor *pSourceSD, // Pointer to the stream descriptor.
DWORD& iStreamID, // stream index
IMFActivate *pVideoRendererActivate, // Handle to the video renderer activate.
IMFActivate **ppActivate
)
{
IMFMediaTypeHandler *pHandler = NULL;
IMFActivate *pActivate = NULL;
// Get the media type handler for the stream.
HRESULT hr = pSourceSD->GetMediaTypeHandler(&pHandler);
if (FAILED(hr))
{
goto done;
}
// Get the major media type.
GUID guidMajorType;
hr = pHandler->GetMajorType(&guidMajorType);
if (FAILED(hr))
{
goto done;
}
// Create an IMFActivate object for the renderer, based on the media type.
if (MFMediaType_Audio == guidMajorType)
{
// Create the audio renderer.
hr = MFCreateAudioRendererActivate(&pActivate);
}
else if (MFMediaType_Video == guidMajorType)
{
// Share the video renderer.
hr = pVideoRendererActivate->QueryInterface(IID_PPV_ARGS(&pActivate));
iStreamID = globalVideoIndex++;
}
else
{
// Unknown stream type.
hr = E_FAIL;
// Optionally, you could deselect this stream instead of failing.
}
if (FAILED(hr))
{
goto done;
}
// Return IMFActivate pointer to caller.
*ppActivate = pActivate;
(*ppActivate)->AddRef();
done:
SafeRelease(&pHandler);
SafeRelease(&pActivate);
return hr;
}
In AddBranchToPartialTopology you must write:
// Create the output node for the renderer.
hr = AddOutputNode(pTopology, pSinkActivate, iStreamID, &pOutputNode);
if (FAILED(hr))
{
goto done;
}
For audio streams iStreamID will be zero, but for video streams it increments from the global variable globalVideoIndex.
The idea is that the code creates the Activate for the video renderer BEFORE creating the topology - that is OK. Then this ONE video renderer Activate is shared by reference pointer between ALL video streams in the MediaSource, via the check if (MFMediaType_Video == guidMajorType). Each VIDEO stream gets a unique id, starting from 0, by incrementing the global variable globalVideoIndex++ - this id is set in the call hr = AddOutputNode(pTopology, pSinkActivate, iStreamID, &pOutputNode);. As a result, all video streams are drawn by ONE video renderer: the video stream whose iStreamID is 0 is the reference (background) stream, while the other video streams are mixed on top of it.

Sending Bitmap data over winsock? Winapi

I am trying to send a screenshot of the desktop over Winsock.
As such, there are four tasks:
Save bitmap to buffer
Write data across wire using a socket
Read data from wire using a socket
Load a bitmap from a buffer
I have saved the bitmap to a char array using GetDIBits.
Writing the data to the server I have done, but I have questions.
For writing the data from the server to the client, do I need to use only 1 recv() call (I am using TCP), or do I need to split it up into multiple parts? I've read that TCP is a stream concept and that I wouldn't have to worry about packets, because that is abstracted for me.
How would I go about loading the information from GetDIBits back into a bitmap and displaying it in the main window?
I am guessing I have to use SetDIBits, but which device contexts do I use?
The Server screenshot capturer is here:
HDC handle_ScreenDC = GetDC(NULL);
HDC handle_MemoryDC = CreateCompatibleDC(handle_ScreenDC);
BITMAP bitmap;
int x = GetDeviceCaps(handle_ScreenDC, HORZRES);
int y = GetDeviceCaps(handle_ScreenDC, VERTRES);
HBITMAP handle_Bitmap = CreateCompatibleBitmap(handle_ScreenDC, x, y);
SelectObject(handle_MemoryDC, handle_Bitmap);
BitBlt(handle_MemoryDC, 0, 0, x, y, handle_ScreenDC, 0, 0, SRCCOPY);
GetObject(handle_Bitmap, sizeof(BITMAP), &bitmap);
BITMAPFILEHEADER bmfHeader;
BITMAPINFOHEADER bi;
bi.biSize = sizeof(BITMAPINFOHEADER);
bi.biWidth = bitmap.bmWidth;
bi.biHeight = bitmap.bmHeight;
bi.biPlanes = 1;
bi.biBitCount = 16;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 0;
bi.biYPelsPerMeter = 0;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
//std::cout<< bitmap.bmWidth;
DWORD dwBmpSize = ((bitmap.bmWidth * bi.biBitCount + 31) / 32) * 4 * bitmap.bmHeight;
//int i = bitmap.bmWidth;
//DWORD dwBmpSize = 99;
HANDLE hDIB = GlobalAlloc(GHND, dwBmpSize);
char* bufptr = (char *)GlobalLock(hDIB);
GetDIBits(handle_ScreenDC, handle_Bitmap, 0, (UINT)bitmap.bmHeight, bufptr, (BITMAPINFO *)&bi, DIB_RGB_COLORS);
send(clientsock, bufptr , GlobalSize((char *)GlobalLock(hDIB)), 0);
/*Do i need to packetize/split it up? Or 1 send() is good for the matching Recv on the client?*/
/*I am assuming i must send bi structure over winsock also correct?*/
And The receiveing client code:
case WM_PAINT:{
//I'm a GDI beginner, so I don't have a clue what I'm doing here as far as blitting the received bits; this is just some stuff I tried myself before asking for help
PAINTSTRUCT paintstruct;
HDC handle_WindowDC = BeginPaint(hwnd, &paintstruct);
handle_MemoryDC = CreateCompatibleDC(handle_WindowDC);
handle_Bitmap = CreateCompatibleBitmap(handle_WindowDC, 640, 360);
std::cout << SetDIBits(handle_MemoryDC, handle_Bitmap, 0, bi.biHeight, buffer, (BITMAPINFO *)&bi, DIB_RGB_COLORS);
SelectObject(handle_MemoryDC, handle_Bitmap);
StretchBlt(handle_WindowDC, 50, 50, 640, 360, handle_MemoryDC, 0, 0, x, y, SRCCOPY);
EndPaint(hwnd, &paintstruct);
}
Sockets do have limited buffer sizes at both ends, typically around 4000 bytes. So if you dump a large block of data (like a full screen dump) into one call to a non-blocking send, you will likely get errors or partial sends, and you will need to manage your own buffers and call send multiple times. However, if you are using a blocking socket, you should be OK, as send() will simply block until all the data is sent.
On the receiving side it is a similar story: recv() may return with less data than you asked for, so in practice you need to reassemble the data from multiple recv() calls. With a non-blocking socket the data trickles in bit by bit; with a blocking socket each call waits until at least some data is available, but it can still return short.
I have heard of issues with sending really large blocks of data in one hit, so if you are sending 5 megabytes of data in one hit, be aware there might be other issues coming into play as well.
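To make the "multiple sends/receives" point concrete, here is a minimal sketch (the helper names are mine, not from the question) of looping until a whole buffer has been transferred. The sender can first transmit the BITMAPINFOHEADER and dwBmpSize with sendAll and then the pixel data; the client mirrors this with recvAll, so it always knows exactly how many bytes to expect before the next block.
#include <winsock2.h>

// Send exactly len bytes, looping over partial sends.
bool sendAll(SOCKET s, const char* data, int len)
{
    int total = 0;
    while (total < len) {
        int sent = send(s, data + total, len - total, 0);
        if (sent == SOCKET_ERROR) return false;
        total += sent;
    }
    return true;
}

// Receive exactly len bytes, looping over partial reads.
bool recvAll(SOCKET s, char* data, int len)
{
    int total = 0;
    while (total < len) {
        int got = recv(s, data + total, len - total, 0);
        if (got <= 0) return false; // error or connection closed by the peer
        total += got;
    }
    return true;
}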
