Why does Direct2D only run at 30 FPS after unplugging my laptop's external power supply?

I am new to Direct2D and recently I've found a weird problem. When the external power supply is plugged in, my program runs at a steady 60 FPS, which I know may be a result of VSync; but after the external power supply has been unplugged for a while, my program drops to a steady 30 FPS (I output the time interval between every two renderings and it shows 32 ms).
Even if I plug the power supply back in, it stays at 30 FPS until I reboot the computer.
Is it because my laptop shuts something down when powered by battery, which cuts the FPS in half? If so, what can I do about it?
My laptop's OS is Windows 8.1.
Here is some code that may be helpful.
HRESULT hr = S_OK;
if (!m_pRenderTarget)
{
    RECT rc;
    GetClientRect(m_hwnd, &rc);
    D2D1_SIZE_U size = D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top);
    D2D1_HWND_RENDER_TARGET_PROPERTIES render_target_properties = D2D1::HwndRenderTargetProperties(m_hwnd, size);
    //render_target_properties.presentOptions = D2D1_PRESENT_OPTIONS_IMMEDIATELY;
    //↑ Tried this and it doesn't work, don't know why

    // Create a Direct2D render target.
    hr = m_pDirect2dFactory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        render_target_properties,
        &m_pRenderTarget
    );
}
return hr;
The main loop looks like this:
while (msg.message != WM_QUIT)
{
    if (PeekMessage(&msg, 0, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    now_time = timeGetTime();
    if (now_time - last_time >= 1000 / MAX_FPS)
    {
        OutputDebugPrintf("%lf\n", now_time - last_time);
        application->Update(now_time - last_time);
        application->OnRender();
        last_time = now_time;
    }
}
I'm sure Update() takes very little time, and the loop can run at a steady 60 FPS on external power, so there doesn't seem to be a problem in OnRender() either.
Thank you!

I wasn't aware of this myself, but you already guessed right: many (all?) laptops drop their display refresh rate when running on battery. See Google. And if VSync runs slower, so will your loop. There appear to be ways to disable this, depending on your hardware (e.g. for Intel).
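If you want to confirm this from code, here is a minimal sketch (my addition, not from the original answer) that reads the current refresh rate of the display with EnumDisplaySettings; if the behaviour described above applies to your machine, the reported value should drop while you are on battery:

#include <windows.h>
#include <cstdio>

int main()
{
    DEVMODE dm = {};
    dm.dmSize = sizeof(dm);
    // Query the mode that is currently set for the default display device.
    if (EnumDisplaySettings(nullptr, ENUM_CURRENT_SETTINGS, &dm))
    {
        // dmDisplayFrequency is in hertz; 0 or 1 means "hardware default".
        printf("Current refresh rate: %lu Hz\n", dm.dmDisplayFrequency);
    }
    return 0;
}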

A bit late, but I think your problem has nothing to do with the code. Many laptops with an NVIDIA video card have a feature called "Battery Boost". When it is enabled, you can drag a slider to an FPS cap. When charging, games get v-synced to 60 FPS; when not charging, games get v-synced to the FPS indicated by the slider.
You can disable this feature with the toggle in the top-right corner.
This menu can be found in the NVIDIA GeForce Experience program -> Settings -> Games.

Related

HC-SR04P reading errors

In my project I need to measure distances up to 3-4 m, so I am using an HC-SR04P sensor hooked up to an ESP32 dev board.
The code is written in plain C without any third-party library (it was inspired by a very simple HC-SR04 Arduino library, though), within a project created from the ESP32 Eclipse IDF plugin; no extra libraries or Arduino code, just the RTOS.
Everything works fine when the device boots and measurements are pretty accurate, but after a while (I can't say exactly what triggers this), the sensor/dev-board circuit (I can't say which) starts behaving strangely: after the TRIG pulse, the ECHO pin does not go HIGH within a reasonable 1 s timeout, and no measurement is performed.
Once this happens, no new measurement is performed until a reboot/power-on; it looks like something happens and either the sensor or the communication code ends up in a faulty state.
A couple of observations:
The sensor is the right version to be powered at 3.3 V.
The HC-SR04P uses GPIO2 and GPIO4 for TRIG and ECHO.
Measurements are not required to be frequent, hence the 30 s timer for the measurement task.
At power-on, everything works fine.
After a reset via the dev board's micro-switch, everything works correctly again.
When a timeout occurs, I re-init the sensor (setting up GPIOs, etc.), but nothing changes; it still times out.
For reference, the timing function is below (the HCSR04_Info struct holds only pin and measurement data); it is called from a timed task every 30s.
uint32_t hcsr04_timing(HCSR04_Info* pDevice)
{
    // TRIG pulse for 10ms
    gpio_set_level(pDevice->trig, 1);
    ets_delay_us(10);
    gpio_set_level(pDevice->trig, 0);

    pDevice->startMicros = esp_timer_get_time();
    // wait for the echo pin HIGH or timeout
    while ((!gpio_get_level(pDevice->echo)) && (esp_timer_get_time() - pDevice->startMicros) <= pDevice->timeout);
    if (!gpio_get_level(pDevice->echo)) {
        pDevice->status = STATUS_OFFLINE;
        ESP_LOGE(TAG, "hcsr04_timing timeout (1)");
        return 0;
    }

    pDevice->startMicros = esp_timer_get_time();
    // wait for the echo pin LOW or timeout
    while ((gpio_get_level(pDevice->echo)) && (esp_timer_get_time() - pDevice->startMicros) <= pDevice->timeout);
    if (gpio_get_level(pDevice->echo)) {
        pDevice->status = STATUS_OFFLINE;
        ESP_LOGE(TAG, "hcsr04_timing timeout (2)");
        return 0;
    }

    pDevice->status = STATUS_ONLINE;
    pDevice->endMicros = esp_timer_get_time();
    return pDevice->endMicros - pDevice->startMicros;
}
Any help is appreciated. Thank you.
This does not generate a pulse of 10 ms; it's 10 µs. That probably takes your device into an undetermined state eventually.
// TRIG pulse for 10ms
gpio_set_level(pDevice->trig, 1);
ets_delay_us(10);
gpio_set_level(pDevice->trig, 0);
The comment in the header file where ets_delay_us() is defined says: In FreeRTOS task, please call FreeRTOS apis.
Anyway, use delay(10) if you are in Arduino-land, or vTaskDelay(pdMS_TO_TICKS(10)) if you are in FreeRTOS-land.
Following up on campescassiano's suggestions about overflow, the solution finally presented itself. It is not really an overflow in the strict sense, but something closely related.
In the end it's a silly bug in the code, so please close or delete the question if appropriate.
The problem was that pDevice->startMicros was defined as a uint32_t (probably a copy/paste or bad-habit error), while esp_timer_get_time() returns microseconds as a 64-bit integer.
So it 'overflows' (truncates) at about 1 h 11 m 34 s after boot (which is about 2^32 microseconds), and the timeout calculation goes wrong, since (esp_timer_get_time() - pDevice->startMicros) is still evaluated as a 64-bit value.
Because of that, (esp_timer_get_time() - pDevice->startMicros) <= pDevice->timeout is always false after 1 h 11 m 34 s, so the wait loop exits before ever seeing the ECHO input.
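For reference, a minimal sketch of the fix under that diagnosis (the struct layout is my assumption, since only its use appears in the question); the key change is storing the timestamps as int64_t so they match esp_timer_get_time() and never wrap:

#include <stdint.h>
#include "driver/gpio.h"

typedef struct {
    gpio_num_t trig;
    gpio_num_t echo;
    int64_t    timeout;      // microseconds
    int64_t    startMicros;  // was uint32_t, which wrapped after ~1 h 11 m
    int64_t    endMicros;
    int        status;
} HCSR04_Info;

With 64-bit fields, (esp_timer_get_time() - pDevice->startMicros) stays a small, correct difference even hours after boot, and the timeout comparison keeps working.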

flutter audio play delay

I am using the audioplayers package to play my MP3 audio files that are stored in Firebase Cloud Storage. There is a significant delay on both Android and iOS, only slightly shorter on Android. I have since moved all my audio files to local assets.
import 'package:audioplayers/audioplayers.dart';

AudioPlayer audioPlayer = AudioPlayer(mode: PlayerMode.LOW_LATENCY);

play(String url) async {
  int result = await audioPlayer.play(url);
  if (result == 1) {
    // success
    print('success');
  }
}
Just a few days ago, I tested an audio player in iOS Swift and played some audio files from Firebase Cloud Storage, and I didn't encounter any significant delay due to buffering; it was a lot faster.
I need to find a way around this, as I have many audio files and they need to be stored on the network. Has anyone encountered similar issues, and do you have any good suggestions?
Update
Made this second PR that addresses a few shortcomings of the first, original PR. Both are merged into the master branch of audioplayers.
My PR changes are:
playbackRate is always used in playImmediatelyAtRate instead of constant values -- initially set by the library to _defaultPlaybackRate i.e. 1.0
playImmediatelyAtRate is added to the resume method as well, not just play
Original Solution
This is the final code that helped solve the audio play delay for the OP,
in the play & resume methods:
AVPlayer *player = playerInfo[@"player"];
float playbackRate = [playerInfo[@"rate"] floatValue];
if (@available(iOS 10.0, *)) {
    [player playImmediatelyAtRate:playbackRate];
} else {
    [player play];
}
So calling [player playImmediatelyAtRate:playbackRate] instead of [player play]; seems to fix the issue.
At the time of writing this hadn't been merged into the pub release and was still an open PR; since then the first, incomplete PR has been merged, and the second PR as well.
Original comment:
There's this open pull request that should fix the delay on iOS, but it hasn't reached a release version yet. Also, there's this discussion on the big initial lag.

SIM800L lag/delay before incoming calls are visible to Arduino

I use a SIM800L GSM module to detect incoming calls, and generally it works fine. The only problem is that it sometimes takes up to 8 rings before the GSM module tells the Arduino that someone is calling (before RING appears on the serial connection). It looks like GSM network congestion, but I do not have such issues with normal calls (I mean calls between people). It happens too often, so it cannot be network/provider overload. Has anybody else had such a problem?
ISP/Provider: Plus GSM in Poland
I'm not including any code, because I think the problem is in a different layer.
Sorry that I didn't answer earlier. I've tested it, and it turned out that with the bare minimum code it worked OK! I mean, I can see 'RING' on the serial monitor immediately after dialing the number. So it's not a hardware issue!
//bare minimum code:
void loop() {
  if (serialSIM800.available()) {
    Serial.write(serialSIM800.read());
  }
  if (Serial.available()) {
    serialSIM800.write(Serial.read());
  }
}
In my real code I need to compare the calling number with a trusted list. To do that, I saved all trusted numbers in the contact list on the SIM card (with the common prefix name 'mytrusted'). So, in the main loop there's an if statement:
while (mySerial.available()) {
  incomingByte = mySerial.read();
  inputString += incomingByte;
}
if (inputString.indexOf("mytrusted") > 0) {
  isTrusted = 1;
  Serial.println("A TRUSTED NUMBER IS CALLING");
}
After adding this if condition, the Arduino sometimes recognizes a trusted number after the 1st call, and sometimes only after the 4th or 5th. I don't suspect the if statement itself, but rather the preceding while loop, where incoming bytes are combined into one string.
Any ideas what can be improved in this simple code?
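One small, hedged tweak along the lines of that suspicion (my own sketch, not from the original thread): clear inputString once a complete line has been handled, so old bytes from earlier rings don't keep piling up and slowing down the search:

while (mySerial.available()) {
  incomingByte = mySerial.read();
  inputString += incomingByte;
}
if (inputString.indexOf("mytrusted") > 0) {
  isTrusted = 1;
  Serial.println("A TRUSTED NUMBER IS CALLING");
}
// Once the response line is complete, reset the buffer; partial lines are
// kept until the rest of them arrives on the next pass.
if (inputString.endsWith("\r\n")) {
  inputString = "";
}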
It seems I found a workaround for my problem. I just send a simple 'AT' command to the SIM800L every 20 seconds (it replies with 'OK'). I use a timer to count this 20-second interval (instead of a simple delay function):
TimerObject *timer2 = new TimerObject(20000); //AT command interval
....
timer2->setOnTimer(&SendATCMD);
....
void SendATCMD() {
  mySerial.println("AT");
  timer2->Stop();
  timer2->Start();
}
With this simple modification the Arduino always sees the incoming call immediately (after 1 ring).
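For reference, a minimal sketch of the same keep-alive idea using only millis(), in case you prefer not to depend on the TimerObject library (my own illustration; mySerial is the serial connection to the module, as in the snippets above):

unsigned long lastKeepAlive = 0;
const unsigned long KEEPALIVE_INTERVAL = 20000UL; // 20 seconds

void keepAlive() {
  // Send "AT" every 20 s; the module's "OK" reply is read by the normal loop.
  if (millis() - lastKeepAlive >= KEEPALIVE_INTERVAL) {
    mySerial.println("AT");
    lastKeepAlive = millis();
  }
}

// Call keepAlive() once per pass through loop().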

How do I hold a specific position using Zaber Console Script?

I am trying to write a simple script using Zaber Console.
I basically have to move my robot arm to a certain position (i.e. 43.9 mm), hold the position for 10 minutes, and go back to the home position.
I found all the commands for moving (fast/slow and with a certain acceleration), but I can't understand how to tell the machine to stay at the 43.9 mm position for 10 minutes.
Any suggestions?
I am coding in "this language":
if (PortFacade.Port.IsAsciiMode)
{
    Conversation.Request("move abs", 881890);
    Conversation.PollUntilIdle();
}
else
{
    Conversation.Request(Command.MoveAbsolute, 881890);
}
Thanks a lot.
Riccardo
For your reference, if you are coding through the script editor in Zaber Console, we offer a scripting page which covers C#, JavaScript, VB, as well as Python. You can find the scripting page here: http://www.zaber.com/wiki/Software/Zaber_Console/Scripting
Your script is using C#, and a quick program to do what you'd like can be written like this:
#template(simple)
var device1 = PortFacade.GetConversation(1); // This is assuming your device
// is device 1 in the chain.
// The device list in Zaber Console will let you know the device number.
// Alternatively, you can use the renumber command to change the device number.

device1.Request("move abs 100000"); // The data value for 43.9 mm will vary
// from device to device. The formula would be: 43.9 [mm] / microstep size [mm] = data value.
// The microstep size can be found on the product page at www.zaber.com, or
// email Contact@Zaber.com.

Sleep(5000); // Sleep is in milliseconds
device1.Request("move abs 0");
If you have any questions, please don't hesitate to email Contact@Zaber.com.
Regards,
Albert

Get total Latency - UDP Audio Communication

Okay, I am currently trying to make voice chat software using NAudio and C#.
But I currently have a problem: latency seems to get worse and worse the longer the application runs.
Now, I am a total beginner, so I have no idea what could be causing it.
But to troubleshoot, I would like to know if I can get the total latency to see how much it increases over time.
Total latency = input buffer + network latency + output buffer (and more if there is any; I am using UDP).
So if I have something like:
Label.Text = TotalLatency();
it will get updated all the time.
while (!bStop)
{
    byte[] datanbefore = waveStream.GetBuffer();
    autoResetEvent.WaitOne();
    waveStream.Position = 0;
    captureBuffer.Read(offset, waveStream, halfBuffer, LockFlag.None);
    readFirstBufferPart = !readFirstBufferPart;
    offset = readFirstBufferPart ? 0 : halfBuffer;

    //TODO: Fix this ugly way of initializing differently.
    //Mute Mic when button is checked
    if (MuteMic.Checked)
    {
        waveStream = new MemoryStream(halfBuffer);
    }

    byte[] datanaudio = waveStream.GetBuffer();
    udpClient.Send(datanaudio, datanaudio.Length, otherPartyIP.Address.ToString(), 5550);
}
So here is the sending part. I am not really sure how the buffering works, as I started the application from a free sample and have been changing it here and there; some parts still remain, but I think the buffering can be improved.
while (!bStop)
{
    //Receive data.
    byte[] byteData = udpClient.Receive(ref remoteEP);
    waveProvider.AddSamples(byteData, 0, byteData.Length);
}
Here is the receive part, and it's much simpler: it just gets the data from UDP, adds it to a buffer, and plays it.
You can work out roughly the input and output latency by knowing the buffer sizes of WaveIn and WaveOut. By default in NAudio they are each 100ms.
For network latency, you could try timestamping your audio packets, although the clocks of both machines would need to be in sync.
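A rough sketch of that timestamping idea (my own illustration, reusing the variable names from the question): prepend the sender's clock to every packet and subtract on receipt. With unsynchronized clocks the absolute number is meaningless, but if the difference keeps growing while the application runs, you know latency is accumulating somewhere:

// Sender: prepend a timestamp (ticks) to each audio packet before sending.
byte[] payload = waveStream.GetBuffer();
byte[] packet = new byte[8 + payload.Length];
BitConverter.GetBytes(DateTime.UtcNow.Ticks).CopyTo(packet, 0);
payload.CopyTo(packet, 8);
udpClient.Send(packet, packet.Length, otherPartyIP.Address.ToString(), 5550);

// Receiver: strip the timestamp, feed the rest to NAudio, and watch the trend.
byte[] byteData = udpClient.Receive(ref remoteEP);
long sentTicks = BitConverter.ToInt64(byteData, 0);
double deltaMs = (DateTime.UtcNow.Ticks - sentTicks) / (double)TimeSpan.TicksPerMillisecond;
waveProvider.AddSamples(byteData, 8, byteData.Length - 8);
// deltaMs is only a trend (the clocks differ); add the two 100 ms NAudio
// buffers mentioned above for a rough total latency figure.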
