I am new to Qt, and I was learning from its Getting Started page. I want to know what the following statements mean and why they are required.
In Open function:
if (!file.open(QIODevice::ReadOnly)) {
    QMessageBox::critical(this, tr("Error"), tr("Could not open file"));
    return;
}
Also in Save function:
if (!file.open(QIODevice::WriteOnly)) {
    // error message
}
I was unable to run these functions without these lines. I tried reading about error handling in the documentation but couldn't find exactly what these statements mean.
You can open files for reading and for writing. Using the QIODevice::WriteOnly or QIODevice::ReadOnly flag, you specify the mode in which you will open a particular file.
But, why does it matter?
Suppose one file is open in several instances of different programs, and that there is no such thing as specifying a file mode. If every program is only reading the file, this is not a problem: each program has its own pointer to its current position in the file, so all of them will get the latest, correct information. But if even one program writes something to the file, the data becomes inconsistent, and the other programs may read wrong data.
An intuitive approach would be to send a message to every program attached to the file, so they could update themselves. But what do you do if the file is deleted? Or if there is no way to set the proper position within the new data? Every program would also need an interface in order to be notified, and the whole message-passing idea can be very slow (aside from the fact that it doesn't work).
So a simple consensus was made: multiple programs may open a file for reading, since they will all see the same, consistent data. But if even one program signals the operating system that it wants write permission, the file must not be open in any other program - neither for reading nor for writing. Depending on the implementation, the operating system may block the caller until all handles are closed, or it may simply refuse the call and return an error to the caller - which is often the better idea, since the program (or the user) can wait and try again later, ask the user to save to another destination, or show a creepy error message - but it will not be able to write to the file.
The last paragraph describes what is known as the multiple readers / single writer technique, so you may want to look it up on the internet or in concurrency textbooks.
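On the practical side, when open() fails the program can report the error and, for example, let the user pick another destination, as described above. A minimal sketch of that idea in Qt (the saveTo() helper and its signature are my own illustration, not part of the tutorial):

#include <QFile>
#include <QFileDialog>
#include <QMessageBox>
#include <QTextStream>

// Try to open the file for writing; on failure, show the error and let the
// user choose a different destination instead of silently giving up.
bool saveTo(QWidget *parent, QString fileName, const QString &text)
{
    QFile file(fileName);
    while (!file.open(QIODevice::WriteOnly | QIODevice::Text)) {
        QMessageBox::critical(parent, QObject::tr("Error"),
                              QObject::tr("Could not write to %1: %2")
                                  .arg(fileName, file.errorString()));
        fileName = QFileDialog::getSaveFileName(parent, QObject::tr("Save As"));
        if (fileName.isEmpty())
            return false;                  // user cancelled
        file.setFileName(fileName);
    }
    QTextStream out(&file);
    out << text;
    return true;
}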
I would like to access the history of what has been typed in the source panel in RStudio.
I'm interested in the way we learn and type code. Three things I would like to analyse are: i) the way a single person types code, ii) how different people type code, iii) the way a beginner improves at typing.
Grabbing the history of commands is a quite satisfying first attempt in this direction, but I would like to reach a finer granularity and thus access the successive changes, in a way even within a single line.
So, to be clear, I'm looking neither for the history of commands nor for a diff between different versions of an .R file.
What I would like to access is really the successive alterations to the source panel that become visible when you repeatedly press Ctrl+Z. I do not know if there is a more accurate word for what I describe, but again, what I'm interested in is how bits of code are added/moved/deleted/corrected/improved in the source panel but not necessarily passed to the Console, and thus absent from the command history.
This must be saved somewhere/somehow by RStudio, as it is accessible by the latter. It may be stored in a quite hidden/private/in-memory way, and I have only a very vague idea of how a GUI works. I do not know whether it would be easily accessible and then programmatically analysable, typically if we could save a file from it. Timestamps would be the cherry on top, but I would be happy without them.
Do you have any idea how to access this history?
RStudio's source panel is essentially a view onto an Ace Editor. As such, you'd need to access the editor's editSession and use getDocument or getWordRange along with the undo of the editSession's undoManager instance.
I don't think you'll be doing that from within RStudio without hacking on the RStudio code, unless the RStudio Addin API is made to pass through editor events in the future.
It might be easier to write a session recorder that captures changes as they are made rather than try to mess with the undo history. I imagine you could write an Addin that calls JavaScript to communicate over the existing RStudio port using the Ace Editor's events (i.e. onChange).
As @GegznaV said, RStudio saves code history to an .Rhistory file. It's in the Documents folder on my computer; it's probably the same folder if you're using Windows. If you do not know the location, you can find the file by searching.
RStudio also allows saving the history to a file manually - there is a "Save" button in the History panel - so you can also get a timestamp. You can ask different users to save their code history after they have finished writing code. It may be indirectly useful to your research.
So I have ffmpeg writing its progress to a text file, and I need to read the new values (lines) from that file. How should I approach this using Qt classes in order to minimize the amount of code I have to write?
I don't even have an idea where to start, other than doing ugly things like seeking to the end, storing this pos, then seeking to the end again a bit later and comparing the new pos to the previous one. It's unclear to me if QTextStream can be used here or not, for instance.
I used the Win32 API's own interface for file system notifications some time ago, and that worked 100% reliably. Modern OSes provide notifications for file changes, and Qt incorporates this functionality as well. Specifically for tracking file changes, I would connect the QFileSystemWatcher::fileChanged signal to a slot - say, myFileReadNextBuffer() - so it runs only when the file has changed. You would then still want to work out how many bytes were added, by subtracting the previous file length from the new one. There is also a related question here: How to know when and which files are changed in windows filesystem with winapi.
If the file is only growing:
Whether the file is text-based or not, I would open it in shared mode, read to the end, and read further to the new end each time a notification is received.
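A minimal sketch of that approach, assuming a hypothetical progress file path and that ffmpeg flushes its output so the notification fires (QFileSystemWatcher behaviour varies a little between platforms):

#include <QCoreApplication>
#include <QDebug>
#include <QFile>
#include <QFileSystemWatcher>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    const QString path = QStringLiteral("ffmpeg_progress.txt"); // hypothetical path
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
        return 1;

    qint64 lastPos = file.size();          // skip what is already in the file

    QFileSystemWatcher watcher;
    watcher.addPath(path);
    QObject::connect(&watcher, &QFileSystemWatcher::fileChanged, [&file, &lastPos]() {
        if (file.size() <= lastPos)        // nothing new (or file was truncated)
            return;
        file.seek(lastPos);                // continue where the last read stopped
        while (!file.atEnd())
            qDebug() << "new line:" << file.readLine().trimmed();
        lastPos = file.pos();              // remember the new end position
    });

    return app.exec();
}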
I need very simple text file logging. I'll only append lines to the file, never change existing ones nor delete them. If it were an XML file, it would be easier to bind it to grids for viewing, but the question remains for both text files and XML files, since both live in the file system.
On the web server there will be file locking while appending log entries, and maybe also while reading them, so this method has to be thread safe. Multiple instances may write data to the file at the same moment.
I know there are third-party tools like Serilog etc., but I want to know:
How can I append (not change) lines to a text file (or XML file) without worrying about file locks?
If I read the XML file into a dataset, add a new row to it and save it back as XML, I could lose or overwrite entries made by other instances in the meantime.
If I open a text file with StreamWriter and append a line to it, other instances will get a lock error.
If I get the list of logs for the admin panel, again the file will be locked and the instances won't be able to append logs.
Any ideas?
After long hours of research and experimentation, I found that using NLog is the best option for me. The most important thing is that the people who use it are very happy. I created a small example page that writes a log entry every time it is called, and tested it with a multithreaded application that calls this sample page again and again. It was fast enough that I could not see the thread counters ticking, and no problem has come up so far.
So, I'll stick with NLog.
Best.
I am attempting to read data files stored as .txt, some of which are very large (>1 GB). It seems that every time QFile's open() method is used on a file larger than 600 MB, the program freezes and crashes. Is there a better way to open large files than QFile? None of the code below the if (_file.open(QIODevice::ReadOnly)) line shown below executes, so I believe the crash occurs where the open method is called.
I understand from answers to similar questions that reading in large text files is not a great way to handle huge amounts of data, but unfortunately these are log files that I have no control over. I need to be able to read these files OR elegantly handle/ignore an oversized file, but I can't find information on how to detect the maximum readable size. I would rather not have to manually open and split these files in a text editor, as I have about a terabyte of them to process, and manual splitting could lead to loss of important information. I am not overly concerned with the responsiveness of this program; any method used to open files can sit and think for quite a while, as this program will be used for data processing, not any kind of user interaction.
Thanks for your help
Code:
void FileRead::openNewFile()
{
    if (_listOfFiles.size() > 0)
    {
        _file.setFileName(_listOfFiles.at(0));
        if (_file.open(QIODevice::ReadOnly)) // file opened successfully
        {
            _file.reset();
            emit fileOpened();
            emit fileOpened(_file.fileName());
            qDebug() << "File Opened";
            qDebug() << _file.fileName();
        }
        else
        {
            qDebug() << "Unable to open file";
            qDebug() << _listOfFiles;
            _listOfFiles.removeAt(0);
            emit fileSent();
        }
    }
    else
    {
        qDebug() << "All files processed";
    }
}
I think you're re-using a QFile that's already open, and this might be problematic.
The call to reset() is pointless - you've just opened the file, it is reset by definition.
You have not provided a backtrace of where exactly your code crashes. I cannot reproduce your problem - I have a 16 GB sparse file that I can open, read from, and close successfully, on both Qt 4.8 and Qt 5.2, on both Windows 7 and OS X.
If you write a minimal test case for this (a stand-alone application that does nothing but opens the file, reads a few bytes from it, and closes it), you'll likely find that it doesn't crash - the problem is elsewhere in your code.
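A minimal sketch of such a test case, with the path being a placeholder for one of the oversized log files:

#include <QCoreApplication>
#include <QDebug>
#include <QFile>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QFile file(QStringLiteral("huge_log.txt"));   // placeholder path
    if (!file.open(QIODevice::ReadOnly)) {
        qDebug() << "open failed:" << file.errorString();
        return 1;
    }
    qDebug() << "size:" << file.size();
    qDebug() << "first bytes:" << file.read(64);  // read a few bytes
    file.close();
    return 0;
}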
I'm writing a multipart downloader in Qt. Multiple QNetworkRequests with the HTTP "Range" header are used to download a file. Currently I write the data received in each part (QNetworkReply) to file.part1, file.part2, etc.
Is it possible to write the data to the same file simultaneously? Do I have to implement a lock for it, and what is the best way to save the data in my application?
Any suggestions?
Why not just merge the file parts when you are finished? You can write the parts to a QFile easily. Perhaps something about the approach or the data prevents you from doing this, but if you can, it's probably the approach I would take before treating a QFile as a shared resource.
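A minimal sketch of that merge step, assuming the parts are named file.part1, file.part2, ... as in the question (the mergeParts() helper is my own illustration):

#include <QFile>
#include <QString>

// Concatenate the downloaded parts, in order, into the final file.
bool mergeParts(const QString &target, int partCount)
{
    QFile out(target);
    if (!out.open(QIODevice::WriteOnly))
        return false;

    for (int i = 1; i <= partCount; ++i) {
        QFile part(QStringLiteral("%1.part%2").arg(target).arg(i));
        if (!part.open(QIODevice::ReadOnly))
            return false;
        while (!part.atEnd())
            out.write(part.read(1 << 20));  // copy in 1 MiB chunks
        part.remove();                      // optional: delete the part file
    }
    return true;
}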
If you want multiple concurrent replies to be able to write to and access the QFile, then yes, the QFile becomes a shared resource. As far as I know, you're going to need to lock it. At that point, you have several options. You can have the slot(s) handling the replies attempt to acquire a QSemaphore, or you can use QMutex and QMutexLocker if you'd prefer to lock on a mutex. You could also treat it as a multiple producer (the various QNetworkReplys) / single consumer (whatever is writing to the file) problem (here's a Stack Overflow post that provides some useful links) if you want to go that route. In short, there are numerous approaches here, all of which I think are more of a hassle than simply merging the file.part files at the end, if you're able to go that route.
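If you do go the shared-QFile route, a minimal sketch of the mutex idea (writePart(), the offsets and the target file name are my own illustration, not a complete downloader):

#include <QByteArray>
#include <QFile>
#include <QMutex>
#include <QMutexLocker>

// One file shared by all replies; a mutex serialises access to it.
QFile g_file(QStringLiteral("download.bin"));   // hypothetical target name
QMutex g_fileMutex;

// Called from the slot handling each QNetworkReply; 'offset' is the start of
// that reply's Range, so every part lands in its proper place in the file.
void writePart(qint64 offset, const QByteArray &data)
{
    QMutexLocker locker(&g_fileMutex);   // unlocked automatically on return
    if (!g_file.isOpen())
        g_file.open(QIODevice::ReadWrite);
    g_file.seek(offset);
    g_file.write(data);
}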
In terms of merging to a single QFile concurrently, there may be an easier Qt way of doing it, but I've never found it. Perhaps someone else can chime in if such a method exists.
I'm not sure what you mean by "which is the best way to save data in my application?" Are you referring to saving application specific settings in a persistent manner? If so, look into QSettings. If you're referring to saving the data you're downloading, I'd probably write it to a QFile, just like you appear to be doing. Although it's hard to know for sure without knowing more about what you're downloading.