I get "Buffered data was truncated after reaching the output size limit.", my session still running - bert-language-model

I'm running code to predict and display the results. After a number of iterations and some time, I got the message "Buffered data was truncated after reaching the output size limit." and my session is still running in the code cell.
Here is a screenshot of what I got.
I have read that the machine keeps running the program in the background and processes the output without displaying it on the Colab page. This is the first time I've faced this issue, and I hadn't set up any code to save the output to a file. Is there any way to save the output produced in the background after the program finishes running?
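One common workaround is to mirror everything the program prints into a file on disk, so the full output survives even after the displayed buffer is truncated. Below is a minimal Python sketch of that idea (my own, not from the post; the file name predictions.log and the loop are placeholders):

    import sys

    class Tee:
        """Write stream output both to the notebook cell and to a file."""
        def __init__(self, path, stream):
            self.file = open(path, "a", buffering=1)   # line-buffered so data reaches disk promptly
            self.stream = stream

        def write(self, data):
            self.stream.write(data)
            self.file.write(data)

        def flush(self):
            self.stream.flush()
            self.file.flush()

    sys.stdout = Tee("predictions.log", sys.stdout)    # hypothetical log file name

    for i in range(100000):        # stands in for the prediction loop
        print("step", i, "...")    # shown in the cell (until truncated) and saved to disk

The cell output may still be cut off, but predictions.log keeps everything and can be downloaded or copied to Google Drive once the run finishes.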

Related

QTextBrowser insertPlainText() with a lot of data causes NO RESPONSE

I use insertPlainText() to insert data into a QTextBrowser from a slot function, but it seems to cause lag, and eventually no response at all, as the data grows. When I add '\n' at the end of the data to simulate append(), the lag disappears. But I don't want to add a new line, so how can I solve this problem?
I tried calling qApp->processEvents() after insertPlainText(), but it caused a crash.
I tried to start a timer to run qApp->processEvents() to refresh the UI, but it didn't solve the problem.
Should I start a new thread to receive the serial-port data? The inserted data (I mean the received data) is not big; it's the total data in the browser that is big, so receiving the data probably doesn't cost a lot of time.
insertPlainText() does not perform well on my machine (i7, 16 GB). It takes about 100 ms to insert data once the total data length is about 4096 bytes. I tried the open-source QScintilla widget, which is better but still not perfect, so I think using insertPlainText() may be the wrong approach.
I changed my approach. I now use a QByteArray to store all the data and setText() to display only the most recent 4096 bytes, effectively dividing the data into pages and showing the latest one (see the sketch below). This solved the problem of storing so much data. A remaining minor problem is that 4096 bytes cannot fill the screen when I maximize the application. It doesn't look great, but displaying more data would slow the response because the app refreshes at a high frequency.
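For illustration only, here is a rough PyQt5 sketch of that paging idea (the question itself is C++/Qt; the widget, timer, and simulated data below are stand-ins, not the asker's code). The full history stays in one growing buffer, while only the most recent 4096 bytes are ever handed to the widget:

    import sys
    from PyQt5.QtCore import QTimer
    from PyQt5.QtWidgets import QApplication, QPlainTextEdit

    PAGE_SIZE = 4096                  # only this much text is rendered at a time

    app = QApplication(sys.argv)
    view = QPlainTextEdit()
    view.setReadOnly(True)
    view.show()

    history = bytearray()             # complete data, cheap to append to

    def on_data(chunk: bytes):
        history.extend(chunk)
        # Re-render just the tail instead of letting the widget's document grow without bound.
        view.setPlainText(history[-PAGE_SIZE:].decode("ascii", errors="replace"))

    # Simulated input; in the real application this would be the serial-port slot.
    timer = QTimer()
    timer.timeout.connect(lambda: on_data(b"payload without newline "))
    timer.start(50)

    sys.exit(app.exec_())

Because the widget's document is replaced rather than appended to, its size stays bounded no matter how much data arrives.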

Kill turtle which causes runtime error

I'm curious whether there is a way of reporting which turtle causes a runtime error.
I have a model with many agents that will run fine for hours, but sometimes a runtime error occurs. I have tried a few different things to fix it, but an error still crops up eventually, and with deadlines looming I can't spare the time to track it down.
As the occurrence is so rare, the easiest workaround is to type ask turtle X [die] in the command center, after which I click GO and the problem is 'fixed'.
However, I was wondering whether anyone knows of a way to automatically kill the turtle producing the error every time a runtime error occurs, to save me entering this manually.
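The general pattern for this is to wrap each agent's behaviour in an error handler, report which agent failed, and remove it; in NetLogo the carefully [ ... ] [ ... ] primitive provides that error handler. Purely as an illustration of the pattern (not NetLogo code; the Turtle class and failure condition below are made up), here is a small Python sketch:

    import random

    class Turtle:
        """Stand-in agent whose step() occasionally raises a runtime error."""
        def __init__(self, who):
            self.who = who

        def step(self):
            if random.random() < 1e-4:            # rare failure, like the model described above
                raise RuntimeError("runtime error in turtle behaviour")

    turtles = [Turtle(i) for i in range(1000)]

    for tick in range(10000):
        for t in list(turtles):                   # iterate over a copy so removal is safe
            try:
                t.step()
            except RuntimeError as err:
                print("turtle", t.who, "raised:", err, "- removing it")
                turtles.remove(t)                 # the equivalent of `ask turtle X [die]`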

Is this the result of a partial image transfer?

I have code that generates thumbnails from JPEGs. It pulls an image from S3 and then generates the thumbs.
About one in every 3000 files ends up looking like this, and it happens in batches. The high-res original looks like this, and they're all resized down to low res. It does not fail on resize, and I can go to my S3 bucket and see that the original file is indeed intact.
I originally had this code written in Ruby and ported it all over to Clojure hoping that would fix the issue, but it's still happening.
What would result in a JPEG that looks like this?
I'm using standard image-copying code like so:
(with-open [in (clojure.java.io/input-stream uri)
            out (clojure.java.io/output-stream file)]
  (clojure.java.io/copy in out))
Would there be any way to detect that the transfer didn't go well, in Clojure? ImageMagick? Any other command-line tool?
My guess is that it is one of two possible issues (you know your code, so you can probably rule one out quickly):
You are running out of memory. If the whole batch of processing happens at once, the first few images are probably not released until the whole process completes.
You are running out of time. You may be reaching the maximum execution time for the script.
Implementing some logging as the batches are processed could tell you when the issue happens and what the overall state is at that moment.
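On the detection question: a truncated JPEG usually still begins with the SOI marker (FF D8) but is missing the EOI marker (FF D9) at the end, which is exactly what a partial transfer tends to produce, and ImageMagick's identify typically prints a "premature end" warning for such files. A minimal Python sketch of the marker check (my own, with a hypothetical directory name):

    from pathlib import Path

    def looks_truncated(path: Path) -> bool:
        """Heuristic: a complete JPEG starts with SOI (FFD8) and ends with EOI (FFD9)."""
        data = path.read_bytes()
        if data[:2] != b"\xff\xd8":
            return True                            # not even a JPEG header
        # Some writers pad the file with trailing NUL bytes; ignore them before checking.
        return not data.rstrip(b"\x00").endswith(b"\xff\xd9")

    for f in Path("incoming").glob("*.jpg"):       # hypothetical local directory
        if looks_truncated(f):
            print("possibly incomplete:", f)

Another cheap guard is to compare the number of bytes actually copied with the S3 object's Content-Length before generating the thumbnail.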

GetCurrentBuffer: "The operation could not be performed because the filter is in the wrong state"

The operation could not be performed because the filter is in the wrong state
I am getting this error when attempting to run hr = m_pGrabber->GetCurrentBuffer(&cbBuffer, NULL);.
The strange part is that it initially worked when I stopped the graph; now it fails whether the graph is running or stopped.
So what state should it be in?
The sample grabber code I copied from MSDN does not say whether the graph should be stopped or running to get the buffer size, but the way it is presented the graph is running. I assume the graph should be running to fill the buffer, but I am not getting past sizing the buffer.
The graph is OK: all the filters are connected and it renders as required, both in my app and in GraphEdit.
I am trying to save the captured still frame into a bitmap file, so I need the captured data in the buffer.
Buffering and GetCurrentBuffer expose a copy of the last known media sample. Hence, you might hit the conditions "no media sample is available yet to copy from" and "the last known media sample was released due to the transition to the stopped state". In both cases the request in question can fail. Copy the data from SampleCB instead of using buffered mode and this will be one hundred percent reliable.
See also: ISampleGrabber::GetCurrentBuffer() returning VFW_E_WRONG_STATE
Using GetCurrentBuffer is a bad idea in most cases. The proper way to use the sample grabber is to set your callback and receive the data in SampleCB.

FlexPrintJob pauses Flex code execution

When using FlexPrintJob, after calling start(), the OS print dialog appears and Flex code execution is paused; it remains paused until the user finishes interacting with the print dialog. The problem is that I have data coming from the server, and the connection will time out after a certain period. Is there any workaround so that Flex code execution is not paused while the OS print dialog is open? Thanks.
From the doc for FlexPrintJob:
You use the FlexPrintJob class to print a dynamically rendered document that you format specifically for printing.
This makes me wonder if you couldn't (essentially) fork off a second page from the browser that contains your print job and do the printing from there. This would leave your original page still running. In my Flex apps I do this via PHP (creating additional pages for printing and such). Example here.
Otherwise, you should finish downloading all the server data before starting the print job to avoid this issue.
Flex has only recently started to add multi-threading. It is adding worker threads of a sort, but this won't help with what you're asking for.
