QNetworkAccessManager timeout - qt

I am currently working on an application that sends files to and receives files from a remote server, using QNetworkAccessManager for the network operations.
To upload a file I use QNetworkAccessManager::put(), and to download I use QNetworkAccessManager::get().
When uploading a file I start a timer with a timeout of 15 seconds. A small file completes within the timeout period, but uploading a very large file times out. How should I choose the timeout for uploading a large file?
The same thing happens when downloading a large file: I receive the file chunk by chunk through the readyRead() signal, and a large download also times out. How should I choose the timeout for downloading a large file?

Use the QNetworkReply::uploadProgress() (or downloadProgress()) signal to tell you that the operation is progressing, and restart a 15-second timer on every uploadProgress/downloadProgress notification (with the timer first started when the upload/download commences). That way the timeout measures inactivity rather than total transfer time: if the transfer ever stalls, you can cancel the operation 15 seconds after the last update.
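A minimal sketch of that watchdog pattern in Qt/C++ (attachWatchdog is a hypothetical helper name; the 15-second interval and the Qt 5 connect syntax are assumptions):

#include <QNetworkReply>
#include <QTimer>

// Hypothetical helper: abort the reply if no progress is reported for 15 s.
void attachWatchdog(QNetworkReply *reply)
{
    auto *watchdog = new QTimer(reply);   // parented to the reply, cleaned up with it
    watchdog->setSingleShot(true);
    watchdog->setInterval(15000);         // 15 s of inactivity, not 15 s total
    // Every progress report re-arms the timer, so a slow but still-moving
    // transfer never times out.
    QObject::connect(reply, &QNetworkReply::uploadProgress,
                     watchdog, qOverload<>(&QTimer::start));
    QObject::connect(reply, &QNetworkReply::downloadProgress,
                     watchdog, qOverload<>(&QTimer::start));
    // If the timer ever fires, the transfer has stalled: abort it.
    QObject::connect(watchdog, &QTimer::timeout, reply, &QNetworkReply::abort);
    QObject::connect(reply, &QNetworkReply::finished, watchdog, &QTimer::stop);
    watchdog->start();                    // arm it when the transfer starts
}

Aborting makes the reply finish with QNetworkReply::OperationCanceledError, which your finished() handler can distinguish from a successful transfer.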

Related

Download CSV in Shiny app every 24 hours & display download time

I have a CSV that I want to download. I do not want it to download every time a user joins or uses the app.
I want to run the code every 24 hours, and also display any of: 1) the time since the last download, 2) the time until the next download, or 3) the timestamp of the last download.
Below is what I have right now, which works but will probably cause unnecessary downloads. Is doing something with invalidateLater going to work, or is there a better way?
CSV.Path <- "https://oracleselixir-downloadable-match-data.s3-us-west-2.amazonaws.com/2021_LoL_esports_match_data_from_OraclesElixir_20210404.csv"
download.file(CSV.Path, "lol2021")
lol2021 <- read.csv("lol2021")
There are two ways to approach this:
1) Check whether the file should be downloaded when the app starts: if it is more recent than 24 hours, do not re-download it. This can be resolved fairly easily with:
fileage <- difftime(Sys.time(), file.info("lol2021")$mtime, units = "days")
if (is.na(fileage) || fileage > 1) {
CSV.Path <- "https://oracleselixir-downloadable-match-data.s3-us-west-2.amazonaws.com/2021_LoL_esports_match_data_from_OraclesElixir_20210404.csv"
download.file(CSV.Path, "lol2021")
}
lol2021 <- read.csv("lol2021")
(The is.na() check is there in case the file does not exist yet.)
One complicating factor with this is that two simultaneous users might attempt to download it at the same time. If that is a possibility, there should likely be some mutex-style file-access control here; see the note at the end of this answer.
2) Make sure this script is run every 24 hours, regardless of whether any users are using the app. On what type of server are you running this app? Something like shiny-server does not do cron-like scheduling, I believe, and you might not be able to guarantee that the app is "awake" every 24 hours. RStudio Connect does allow scheduled jobs, which might be a consideration for you.
Lacking that, if you have good access to the server, you might just add it as a cron job using Rscript or similar to download and overwrite the file.
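A hypothetical crontab entry (the schedule, path, and script name are assumptions; the script would contain the download.file() call from above, ideally combined with the temp-then-rename trick from the note below):

0 3 * * * Rscript /srv/shiny-server/myapp/refresh_data.R

This refreshes the file at 03:00 every night, whether or not the Shiny app is awake.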
Note about mutex file access: many networked filesystems (common in cloud and server architectures) do not guarantee file locking. A common technique is to download into a temporary file and then move (or copy) this temp file into the "real" file name in one step. This guards against the possibility that one process is reading from the file while another process is writing to it ... partial-file reads will be a frustrating and difficult-to-reproduce bug.
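The pattern itself is language-agnostic. Here is a minimal sketch in C++ with std::filesystem (replaceAtomically and the file layout are illustrative assumptions; in R, the equivalent is downloading to a temporary name in the same directory and then calling file.rename()):

#include <filesystem>
#include <fstream>
#include <string>

// Hypothetical sketch of the temp-then-rename pattern.
void replaceAtomically(const std::filesystem::path &target, const std::string &contents)
{
    // Write into a temporary file in the SAME directory as the target,
    // so the final rename does not cross filesystems.
    const std::filesystem::path tmp = target.string() + ".tmp";
    {
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        out << contents;
    } // closed (and flushed) before renaming
    // On POSIX, rename() atomically replaces the target: readers see either
    // the old file or the new one, never a partial write.
    std::filesystem::rename(tmp, target);
}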

NSTextView freezing my app when adding a lot of data asynchronously

I'm building a simple talker/listener app that receives OSC data over UDP. I'm using the OSCKit pod, which itself uses the CocoaAsyncSocket library for the internal UDP communication.

When I'm listening on a particular port to receive data from another OSC-capable piece of software, I log the received commands to an NSTextView. The problem is that sometimes I receive thousands of messages in a very short period of time (EDIT: I just added a counter to see how many messages I'm receiving. I got over 14,000 in just a few seconds, and that is with only a single moving object in my software). There is no way to predict when this is going to happen, so I cannot lock the textStorage object of the NSTextView to keep it from sending all its notifications to update the UI. The data is processed through a delegate callback function.

So how would you go around that limitation?
/// Handle incoming OSC messages
func handle(_ message: OSCMessage!) {
    print("OSC Message: \(message)")
    let targetPath = message.address
    let args = message.arguments
    let msgAsString = "Path: \"\(targetPath)\"\nArguments: \n\(args)\n\n"
    print(msgAsString)
    // Both of these hit the UI for every single message:
    oscLogView.string?.append(msgAsString)
    oscLogView.scrollToEndOfDocument(self)
}
As you can see here (this is the callback function), I'm updating the text view directly from the callback (both appending data and scrolling to the end) every time a message is received. This is where Instruments tells me the slowdown happens, and the append is the slowest part. I didn't go further than that in the analysis, but it is almost certainly because appending triggers a visual update, which takes far longer than parsing 32 bits of data, and by the time it finishes the next message is already waiting in the server's buffer.
Could I send that call to a background thread? Filling up a background thread with visual updates doesn't feel like a great idea. Maybe I should grow my own string buffer and flush it to the text view every now and then with a timer?
I want to give this a console feel, but a console that freezes is not a console.
Here is a link to the project on GitHub. The pods are all there and configured with CocoaPods, so just open the workspace. You might not have anything that can generate that much OSC traffic, but if you really feel like digging in, you can get IanniX, an open-source sequencer/trajectory automator that can generate OSC, and a lot of it. I've just downloaded it, and I'll build a quick project that should send enough data to freeze the app and add it to the repo if anybody wants to give it a shot.
I append the incoming data to a buffer variable and use a timer that flushes that buffer to the text view every 0.2 seconds. The update cycle of the text view is far too slow to handle the amount of incoming data, so offloading the UI work from the network callback to a timer lets the server keep processing data instead of being stopped every 32 bits.
If anybody comes up with a more elegant method, I'm open-minded.
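For reference, that buffer-and-flush pattern looks roughly like this, sketched in Qt/C++ for consistency with the other snippets on this page (the AppKit version has the same shape: a plain String buffer plus a Timer on the main thread). ConsoleBuffer and the 0.2-second interval are illustrative assumptions:

#include <QPlainTextEdit>
#include <QString>
#include <QTimer>

// Illustrative sketch: the network callback appends to a cheap string
// buffer, and a timer flushes it to the widget at most 5 times per second.
// Assumes the callback arrives on the GUI thread; add a mutex if it doesn't.
class ConsoleBuffer : public QObject
{
public:
    explicit ConsoleBuffer(QPlainTextEdit *view) : m_view(view)
    {
        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, &ConsoleBuffer::flush);
        timer->start(200); // 0.2 s, as in the answer above
    }

    // Called from the network callback: no UI work at all.
    void append(const QString &line) { m_buffer += line; }

private:
    void flush()
    {
        if (m_buffer.isEmpty())
            return;
        m_view->appendPlainText(m_buffer); // one text-storage update per tick
        m_buffer.clear();
    }

    QPlainTextEdit *m_view;
    QString m_buffer;
};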

When I get the directoryChanged signal from QFileSystemWatcher, the newly added file is not yet complete

I want to be notified when a file is added to "/test", so I used QFileSystemWatcher's directoryChanged signal. But when I "cp aa.txt /test", I get the directoryChanged signal as soon as the entry appears, and when I read aa.txt at that point it is incomplete.
In this case, how can I know that the file is complete and ready to read?
FYI, I can't use the fileChanged signal, since I don't know the exact file name in advance.
Unfortunately, there's no way to know this in general, without some cooperation from the process that writes to the file. The writing process would need to lock the file for exclusive access, and the reading process would need to keep trying to open the file for reading until it succeeded - when the writing process has dropped the lock.
All that the directoryChanged signal tells you is what it says on the box: the directory has changed, or in this case, there's a new entry in the directory. This is completely separate from what's represented by that entry - what the contents of the file are.
The filesystem watcher is only half of what's needed here, and this is not an issue with Qt but with the processes involved. Remember that you're trying to cooperate with the writer.
As a workaround, if you have some way of validating the file contents, you can fold that validation into the reading: keep retrying the read, with some delay, until the validation succeeds. To avoid runaway resource use, it may be worthwhile for the delays to form an exponential back-off.
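A minimal sketch of that retry loop in Qt/C++ (readWhenComplete, validate(), process(), and the delay constants are all hypothetical names and assumptions):

#include <QByteArray>
#include <QFile>
#include <QString>
#include <QTimer>

bool validate(const QByteArray &data);  // your format-specific completeness check
void process(const QByteArray &data);   // whatever you do with the finished file

// Hypothetical sketch: retry reading `path` until validate() accepts the
// contents, doubling the delay between attempts (exponential back-off).
void readWhenComplete(const QString &path, int delayMs = 100)
{
    QFile file(path);
    if (file.open(QIODevice::ReadOnly)) {
        const QByteArray data = file.readAll();
        if (validate(data)) {
            process(data);
            return;
        }
    }
    if (delayMs > 60000)  // give up after roughly a minute of back-off
        return;
    QTimer::singleShot(delayMs, [path, delayMs] {
        readWhenComplete(path, delayMs * 2);
    });
}

Calling this from your directoryChanged handler for each newly appeared file (found by diffing the directory listing) keeps the watcher responsive while the copy finishes.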

Using SignalR to display row-by-row processing of an Excel FileUpload

I am trying to figure out how I can use FileUpload along with SignalR to start processing an uploaded Excel file row by row, without waiting for the file to be fully uploaded.
So I have a large Excel file being uploaded (it could be up to 2 GB, though 100 MB is average); I want to start displaying the progress as a percentage, as well as each row that was processed and any error that occurred while processing that row.
Any links to an article will be appreciated.
I have created a decoupled message bus proxy (EventAggregator proxy) for SignalR.
This fits your use case perfectly: I would fire events while processing the file. They are automatically forwarded to the clients, and you can also add a constraint so that only the user who uploaded the file sees the events generated by that upload.
Please check the blog post I wrote for an insight into the library:
http://andersmalmgren.com/2014/05/27/client-server-event-aggregation-with-signalr/
Demo
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4

Split SQLite file into chunks for appcfg.py

I have a 750 MB sql3 file that I want to load into appcfg.py, a program that can restore App Engine data. It's taking forever to load. Is there a way I could split the file into smaller, totally separate chunks to be loaded independently?
I don't need to run queries across the data or maintain any other kind of relationship. I just need to copy a list of the records to my App Engine app.
Elaboration:
I'm trying to restore a 750 MB sql3 file I got from
appcfg.py download_data --appl=myapp --url=https://myapp.appspot.com/remote_path --file=backup.sql3
Now, I'm trying to restore the file with
appcfg.py upload_data --appl=restoreapp --url=https://restoreapp.appspot.com/remote_api --file=backup.sql3
I also set some parameters tweaking the default limits.
This prints out some initial logging information, repeating the parameters and so on. Then nothing happens for about 45 minutes, except that Python sits at about 50% CPU for the duration. Then, finally, it starts to upload to App Engine.
From there, it seems to work. But if there's an error in the transmission, I have to wait through the 45 minutes again, even after specifying the progress database. That's why I'm looking for a way to split up the file, or something similar.
FWIW, both the original app and the restore app use the Java SDK.
