I am using Qt Creator and looking for a way to save my data, so that if I restart my app it continues from the last value instead of restarting at 0.
This is only part of the code:
void Form::on_pushButton_clicked()
{
    counter++;
}
You can either save it to a file of your choice: Writing Integers to a .txt file in c++
Or you can use a Qt settings file (see the sketch after this list): https://doc.qt.io/qt-5/qsettings.html
Or, if you start to have more information and you also want to save relationships between your data, you can start looking at databases. A simple and local DBMS is SQLite: https://www.sqlite.org/index.html
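For the QSettings option, a minimal sketch; it assumes counter is an int member of Form, and that an organization/application name has been set so the default QSettings constructor resolves to your app's settings:

#include <QSettings>

void Form::on_pushButton_clicked()
{
    counter++;
    QSettings settings;
    settings.setValue("counter", counter);  // persisted across restarts
}

// On startup (e.g. in the Form constructor), restore the last value:
// counter = QSettings().value("counter", 0).toInt();  // defaults to 0 on first run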
Hello, I wanted to ask: to import the .sql updates (after a git pull), do I have to assemble and merge them with the bash file (app/db_assembler), or is it OK if I just launch worldserver.exe and it will do it?
Thanks
Short answer
No, the worldserver process will NOT update your database.
You need to use the DB-assembler bash script, as the instructions say.
More details
This is different from TrinityCore, where updating the database is a feature of the worldserver process.
In AzerothCore this task is the responsibility of an external script, written in bash: the DB-assembler.
The advantages of having an external script do this task instead of the worldserver are:
You don't need to compile and run the worldserver if you only need to create the database (useful when using or developing tools that only need the DBs)
The DB assembler is able to generate a single merged SQL update file for each DB (by merging all the individual SQL update files), which can be useful for debugging or development purposes
In general, it is better to delegate different tasks to different software components, instead of having a monolith doing everything
You can also write your own merge script and apply the updates manually, or merge with db_assembler.sh and then apply manually.
Otherwise, refer to Francesco's answer.
I am using the R shiny package to build a web interface for my executable program. The web interface provides user input and shows output.
On the server side, the R script formats the user inputs and saves them to a local input file. Then R calls a system command to run the executable program.
My concern is that if multiple users run the web app at the same time, it is possible that the input file generated by the first user will be overwritten by the second user's input before it is read by the executable program.
One way to solve the conflict is to ask R to create a temporary folder and generate/run the input file under that folder for each user. But I'd like to know whether there is a better or automatic way to resolve this potential conflict with Shiny. For example, if you use Shiny's fileInput, the uploaded files are automatically stored in a temporary folder.
Update
Thanks for the advice, @Symbolix and @Mike Wise.
I had read the persistent data storage article before, but I don't think it is exactly what I wanted; maybe my understanding is not correct. I ended up creating a temporary folder and running my executable from there.
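For reference, a minimal sketch of that approach; my_tool, input$run, and format_inputs are placeholder names, not part of the original app:

library(shiny)

server <- function(input, output, session) {
  work_dir <- tempfile("run_")   # unique working directory per session
  dir.create(work_dir)

  observeEvent(input$run, {
    in_file <- file.path(work_dir, "input.txt")
    writeLines(format_inputs(input), in_file)  # hypothetical input formatter
    system2("my_tool", args = in_file)         # each session reads its own copy
  })

  # remove the per-session folder when the user disconnects
  session$onSessionEnded(function() unlink(work_dir, recursive = TRUE))
}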
Well, the thing is that when the program executes the query to copy a table to a .CSV file, Qt shows me the following error:
"ERROR: syntax error at end of input
LINE 1: EXECUTE"
Here is the code of the export action:
QSqlQuery qry;
qry.prepare("copy inventory to './inventory.csv'");
if (qry.exec()) {
    qDebug() << "Success";
} else {
    qDebug() << qry.lastError().text();
}
The Qt version is 5.4, with PostgreSQL 9.3 and the QPSQL driver. It works fine otherwise; I can execute other queries such as SELECT without problems.
Thanks.
You mentioned that you're using Qt's SQL interface and its PostgreSQL driver.
While Qt's PostgreSQL driver is built on top of PostgreSQL's standard client library libpq, as far as I can tell it does not offer support for lots of the functionality of libpq. In particular, there appears to be no way to access support for the COPY protocol, nor for LISTENing for asynchronous notifications.
You will have to either:
use libpq directly to COPY ... FROM STDIN (a sketch follows below); or
use regular INSERT statements via Qt; or
transfer the CSV input to the server, then use COPY ... FROM '/path/on/server' to read the input from a file on the server that the PostgreSQL database is running on, readable by the user the PostgreSQL database runs as.
(You could also submit a patch to Qt to add support for the COPY protocol, which shouldn't be too hard to implement, but is perhaps not the best choice if you're asking this.)
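For the first option, a rough sketch of driving COPY ... FROM STDIN through libpq directly; the connection string, table, and row contents are placeholders:

#include <libpq-fe.h>
#include <cstdio>
#include <cstring>

int main()
{
    // Placeholder connection string; adjust to your setup.
    PGconn *conn = PQconnectdb("dbname=mydb user=myuser");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    // Enter COPY mode; the server now expects rows from the client.
    PGresult *res = PQexec(conn, "COPY inventory FROM STDIN WITH (FORMAT csv)");
    if (PQresultStatus(res) == PGRES_COPY_IN) {
        PQclear(res);

        const char *row = "1,widget,9.99\n";          // one CSV row (placeholder)
        PQputCopyData(conn, row, (int)strlen(row));   // repeat for each row
        PQputCopyEnd(conn, NULL);                     // NULL = finished successfully

        res = PQgetResult(conn);                      // final status of the COPY
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}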
Using COPY requires superuser rights; do not confuse it with PostgreSQL's \copy (a psql meta-command).
COPY TO requires an absolute path to the output file. If the first point is taken care of, try removing the ./ in your output file name.
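A sketch of the suggested fix on the Qt side; note it uses exec() directly instead of prepare(), since COPY cannot be used in a prepared statement (consistent with the EXECUTE error above), and /tmp/inventory.csv stands in for an absolute path on the server:

QSqlQuery qry;
// Absolute path on the *server's* filesystem, writable by the user PostgreSQL runs as.
if (!qry.exec("COPY inventory TO '/tmp/inventory.csv' WITH (FORMAT csv)")) {
    qDebug() << qry.lastError().text();
}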
You can refer to related posts:
post1
and
post2
(Forgive me, since I am not used to posting here. I will do my best.)
I am working on an iOS application that will be using a pre-populated database. In my workspace, I have a project for the iOS app and a separate command line tool project to populate the database. I have dragged the database file from the "/Library/Application Support" folder into my iOS project (without using "Copy Items If Needed"). So when there is a change in the data, or additional data is required, I can just run the command line tool to pre-populate the database. After that, I remove the app from the simulator and do a clean. When I run the app, I would think everything would be OK.
It was driving me crazy for the longest time, but sometimes I don't see the changes reflected after I remove the app, clean the project, and run. It seems the only way I can get this to work is this: after I run the command line tool to pre-populate the database, I have to open the database file using Base, SQLiteStudio, or Firefox's SQLite add-on. Once I do that, it seems to work.
When I look in Finder, I see the .sqlite, .sqlite-shm, and .sqlite-wal files. Before opening the database file, the -wal file is the biggest (for now it's 2 MB). Once I open the file using Base, for example, and then close it, the .sqlite file is 2 MB.
When the command line tool is about to finish, I have tried running PRAGMA statements on the file (vacuum, wal_checkpoint), but those did not work. What am I missing here? I also tried using NSManagedObjectContext.MR_defaultContext.saveToPersistentStoreAndWait.
I am using the following code to set up and save.
MagicalRecord.setupCoreDataStackWithAutoMigratingSqliteStoreNamed("callithome.sqlite")
MagicalRecord.saveUsingCurrentThreadContextWithBlockAndWait({(context)->Void in
println("Data saved, I hope")})
MagicalRecord.cleanUp()
Any help would be appreciated
You need to enable the DELETE journal mode for your SQLite store. This will not create the -wal and -shm files, which makes it easier to have a single file for your pre-populated data store. You will need MagicalRecord 3.0 with some of its extra abilities to add custom options to your stores if you'd like to go that route. However, you can still add a store to a coordinator and configure that store with the proper pragma option as well; that is, MagicalRecord is not necessary if you use plain Core Data.
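For illustration, a hedged sketch of the plain Core Data route in modern Swift syntax; storeURL and coordinator are assumed to be set up elsewhere, and the options dictionary is the point of interest:

import CoreData

// Force SQLite into DELETE journal mode so no -wal/-shm files are created.
let options: [AnyHashable: Any] = [
    NSSQLitePragmasOption: ["journal_mode": "DELETE"]
]
try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                   configurationName: nil,
                                   at: storeURL,
                                   options: options)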
I have a program that monitors certain files for changes. As soon as a file gets updated, it is processed. So far I've come up with this general approach to handling "real-time analysis" in R. I was hoping you guys have other approaches; maybe we can discuss their advantages/disadvantages.
monitor <- TRUE
start.state <- file.info(watched_file)$mtime # modification time of the file when initiating

while (monitor) {
  change.state <- file.info(watched_file)$mtime
  if (start.state < change.state) {
    start.state <- change.state # remember the new state
    # process
  } else {
    print("Nothing new.")
  }
  Sys.sleep(sleep.time)
}
Similar to the suggestion to use a system API, this can also be done using qtbase, which will be a cross-platform means from within R:
library(qtbase)

dir_to_watch <- "/tmp"
fsw <- Qt$QFileSystemWatcher()
fsw$addPath(dir_to_watch)

# run the handler whenever the watched directory changes
id <- qconnect(fsw, "directoryChanged", function(path) {
  message(sprintf("directory %s has changed", path))
})

# touch a file in the watched directory to trigger the handler
cat("abc", file = "/tmp/deleteme.txt")
If your system provides an API for monitoring filesystem changes, then you should use that. I believe Macs come with this. Not sure about other platforms though.
Edit:
A quick Google search gave me:
Linux - http://wiki.linuxquestions.org/wiki/FAM
Win32 - http://msdn.microsoft.com/en-us/library/aa364417(VS.85).aspx
Obviously, these APIs will eliminate any polling that you require. On the other hand, they may not always be available.
Java has this: http://jnotify.sourceforge.net/ and http://java.sun.com/developer/technicalArticles/javase/nio/#6
I have a hack in mind: you can set up a CRON job/scheduled task to run an R script every n seconds (or whatever). The R script checks the file hash, and if the hashes don't match, runs the analysis. You can use the digest::digest function; just check out the manual.
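A quick sketch of that idea; the file paths are placeholders:

library(digest)

watched <- "data/input.csv"                  # placeholder: file to monitor
hash_store <- "last_hash.rds"                # placeholder: where the previous hash lives

new_hash <- digest(watched, file = TRUE)     # hash the file contents
old_hash <- if (file.exists(hash_store)) readRDS(hash_store) else ""

if (!identical(new_hash, old_hash)) {
  saveRDS(new_hash, hash_store)
  # run the analysis here
}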
If you have lots of files that you want to monitor, then R may be too slow for this purpose. Go to your c: or / dir and see how long it takes to do file.info(dir(recursive = TRUE)). A DOS or bash script may be quicker.
Otherwise, the code looks fine.
You could use the tclTaskSchedule function in the tcltk2 package to set up a function that checks for updates and runs your code. This would then be run on a regular basis (you set the timing) but would still allow you to use your R session.
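For example, a sketch; the 5000 ms interval and the body of check_file are placeholders:

library(tcltk2)

check_file <- function() {
  # compare file.info(...)$mtime with a stored value and
  # run the processing code when it has changed
}

# re-run check_file() every 5 seconds without blocking the R session
tclTaskSchedule(5000, check_file(), id = "watcher", redo = TRUE)
# stop it later with: tclTaskDelete("watcher")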
I'll offer another solution for Windows that I have been using in a production environment, where it works perfectly, and that I find very easy to set up. Under the hood it accesses the system API for monitoring folder changes, as others have mentioned, but all the "hard work" is taken care of for you. I use a freely available piece of software called Folder Monitor by Nodesoft, well described here. Once you execute the program, it appears in your system tray, and from there you can specify a given directory to monitor. When files are written to the directory (or changed or modified; there are a few options from which you can choose), the program executes any program you like. I simply link it to a Windows batch file that calls my R script. For example, I have Folder Monitor set up to monitor a "\\myservername\DropOff" UNC path for any new data files written to it. When Folder Monitor detects new files, it executes a RunBatch.bat file that simply runs an R script (see here for information on setting that up) that validates the format of the expected file based on an expected naming convention, then unzips and processes the data, creating a data frame and ultimately loading it into a SQL Server database. It just doesn't get any easier.
One note if you decide to use this solution: take a look at the optional delay execution parameter, which might be important if files take a while to copy into the target directory from the source location.