I am building a GUI in Qt Creator and I want to read a CSV file.
So far I can read the CSV via an HTTP request and display it as text with the following function:
// read CSV file
function readTextFile(filename) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", filename); // set method and file
    xhr.onreadystatechange = function() {
        if (xhr.readyState === XMLHttpRequest.DONE) { // if request_status == DONE
            var response = xhr.responseText;
            screen.liste = response;
            console.log(response);
        }
    }
    xhr.send(); // begin the request
}
Now I am trying to access the individual entries of this "array".
Is there a way to split this list into individual strings?
The list has 50 rows and 18 columns, and the entries in a row are separated by ';'.
For example, here are the first two rows:
P22;P64;P99;P20;P88;P18;50;90;80;90;40;0;10;0;40;80;60;20
P51;P44;P57;P46;P96;P10;20;40;50;80;20;60;50;80;0;30;10;50
...
Welcome to SO!
The String QML type extends the JS String object (see https://doc.qt.io/qt-5/qml-qtqml-string.html#details), so you can just use the split() method to get the needed tokens in an array, which can then be indexed with [].
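For example, a minimal sketch of how the response could be broken up inside the handler above (assuming the file uses '\n' line endings; the names are placeholders):

var rows = response.trim().split("\n");   // one string per CSV row
for (var r = 0; r < rows.length; r++) {
    var fields = rows[r].split(";");      // 18 entries per row
    console.log(fields[0], fields[6]);    // e.g. "P22" and "50" for the first row
}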
I used a tutorial to help me sync my sheets to Firebase with the use of a SYNC button that activates the script. The SYNC button currently sits in the middle of the spreadsheet. I want to sync the data from Sheets to Firebase automatically whenever changes are made.
function getFirebaseUrl(jsonPath) {
  return (
    'https://no-excusas.firebaseio.com/' +
    jsonPath +
    '.json?auth=' +
    secret
  )
}

function syncMasterSheet(sheetHeaders, sheetData) {
  /*
    We make a PUT (update) request,
    and send a JSON payload.
    More info on the REST API here: https://firebase.google.com/docs/database/rest/start
  */
  const outputData = [];
  for (var i = 0; i < sheetData.length; i++) {
    var row = sheetData[i];
    var newRow = {};
    for (var j = 0; j < row.length; j++) {
      newRow[sheetHeaders[j]] = row[j];
    }
    outputData.push(newRow);
  }
  var options = {
    method: 'put',
    contentType: 'application/json',
    payload: JSON.stringify(outputData)
  }
  var fireBaseUrl = getFirebaseUrl("UsersSheets")
  UrlFetchApp.fetch(fireBaseUrl, options)
}
function startSync() {
  //Get the currently active sheet
  var sheet = SpreadsheetApp.getActiveSheet()
  //Get the number of rows and columns which contain some content
  var [rows, columns] = [sheet.getLastRow(), sheet.getLastColumn()]
  // Get the data contained in those rows and columns as a 2 dimensional array.
  // Get the headers in a separate array.
  var headers = sheet.getRange(1, 1, 1, columns).getValues()[0]; // [0] to unwrap the outer array
  var data = sheet.getRange(2, 1, rows - 1, columns).getValues(); // skipping the header row means we need to reduce rows by 1
  //Use the syncMasterSheet function defined before to push this data to the "masterSheet"
  //key in the firebase database
  syncMasterSheet(headers, data)
}
Normally, it would be ok to just define an onEdit function in your code, like this:
function onEdit(event) {
startSync();
}
However, because you are making external requests via UrlFetchApp.fetch(), this will fail with an error about not having the https://www.googleapis.com/auth/script.external_request permission (gobs more detail about trigger authorization here).
Instead, you need to manually create an installable trigger.
This is reasonably straightforward. In the edit menu for your code, go to your project's triggers:
Then, select "add a trigger" and create the on edit trigger, like so:
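If you would rather create the installable trigger from code instead of through the UI, something along these lines should also work (a sketch; run it once manually so the trigger is only created a single time):

function createOnEditTrigger() {
  // Installable on-edit trigger that calls startSync() with full authorization
  ScriptApp.newTrigger('startSync')
    .forSpreadsheet(SpreadsheetApp.getActive())
    .onEdit()
    .create();
}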
However, you should consider whether you really want this running on every edit: the requests could be quite large (it syncs the entire sheet) and they will run frequently (every time you edit).
When you make a change to a spreadsheet, its onEdit event fires. So that's where you'd trigger that save with something like this:
function onEdit(event) {
startSync();
}
But since onEdit fires for each edit, this may end up saving a lot more often than really necessary. So you may want to debounce, and only save after some inactivity.
Something like this:
var timer;

function onEdit(event) {
  // if we're counting down, stop the timer
  if (timer) clearTimeout(timer);
  // start syncing after 2 seconds
  timer = setTimeout(function() {
    startSync();
  }, 2000);
}
I'm working with FullCalendar and I'm trying to get resources as a function:
resources: function(callback) {
    var manageEvent = new ManageEvent();
    var request = manageEvent.getEmployees();
    request.always(function (param) {
        //location.reload();
        var list = [];
        var emp;
        for (var elem in param) {
            emp = param[elem];
            list.push({
                'id': emp['cp_collaboratore'],
                'title': emp['cognome_col']
            });
        }
        var t = JSON.stringify(list);
        callback(t);
    });
    request.catch(function (param) {
        alert('errore');
    });
},
I checked the variable 't' in the log and it shows the following result:
[{"id":"1","title":"name_1"},{"id":"2","title":"name_2"},{"id":"3","title":"name_3"},{"id":"5","title":"name_4"},{"id":"9","title":"name_5"}]
but it doesn't work and shows the following error message:
Uncaught TypeError: resourceInputs.map is not a function
at ResourceManager.setResources
You just need to write
callback(list);
t in your code is a string, because you converted your list array into a string using JSON.stringify(). But fullCalendar expects an actual array, not a string. It can't run functions or read individual properties from a string.
You can remove the line var t = JSON.stringify(list); completely, it's not needed.
Generally the only reason you'd use stringify() is if you wanted to log the value to your console for debugging, or convert the array into JSON if you wanted to send it somewhere else using AJAX. It makes no sense to pass arrays and objects around inside JavaScript as serialised strings, when you can just use the objects themselves.
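Putting that together, the end of the always handler from the question becomes something like this (a minimal sketch; everything else stays as-is):

request.always(function (param) {
    // ... build the list array exactly as before ...
    callback(list);   // pass the array itself, not a JSON string
});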
So here I have my code
notifull.get('/getNote', (request, response) => {
    // START Get variables
    var requestData = {
        // location properties
        subject: request.query.subject,
        category: request.query.category,
        subcategory: request.query.subcategory,
        // easy way to get full reference string
        referenceString: function() {
            return `${this.subject}/${this.category}/${this.subcategory}`;
        },
        // pagination properties
        pagePosition: Number(request.query.pagePosition),
        // easy way to get limit number
        paginationNumber: function() {
            return (this.pagePosition - 1) * 2;
        }
    };
    // DEBUG_PURPOSES response.send(requestData.referenceString());
    // END Get variables

    // START Construct index
    var first = admin.firestore().collection(requestData.referenceString())
        .orderBy("upvotes")
        .limit(requestData.paginationNumber());
    // DEBUG_PURPOSES response.send(first)
    // END Construct index

    // START Paginate
    return first.get().then(function (documentSnapshots) {
        // Get the last visible document
        var lastVisible = documentSnapshots.docs[documentSnapshots.docs.length - 1];
        console.log("last", lastVisible);

        // Construct a new query starting at this document,
        // get the next 25 cities.
        var next = admin.firestore().collection(requestData.referenceString())
            .orderBy("upvotes")
            .startAfter(lastVisible)
            .limit(2);

        response.send(next)
    });
});
As you can see, I am attempting to paginate using Cloud Firestore. If you'll notice, I've divided the code into sections, and previous testing shows me that the Construct index and Get variables sections work. However, when I pulled the pagination example from Firebase's own docs, adapted it to my code, and tried to run it, I was met with this error:
Cannot encode type ([object Object]) to a Firestore Value
UPDATE: After more testing, it seems that if I remove the startAfter line, it works fine.
According to the Firebase documentation, objects of custom classes are not supported in some languages, such as PHP, C++, Python and Node.js.
Please see here as a reference:
https://firebase.google.com/docs/firestore/manage-data/add-data#custom_objects
So I would advise encoding your objects to JSON and decoding them back into your custom classes when you download them from Firebase.
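Applied to the code in the question, one way to avoid handing a complex object to the query cursor is to pass startAfter a plain field value instead of the snapshot object (a sketch, assuming upvotes is the field the query orders by):

// Instead of .startAfter(lastVisible), pass the plain cursor value:
var next = admin.firestore().collection(requestData.referenceString())
    .orderBy("upvotes")
    .startAfter(lastVisible.data().upvotes) // a plain number/string, not a custom object
    .limit(2);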
I have an API that currently receives JSON calls that I push to files (800KB-1MB, one file per call), and I would like an hourly task that takes all of the JSON files from the last hour and combines them into a single file, to make daily/monthly analytics easier.
Each file consists of a collection of data, i.e. in the format [ object {property: value, ... ]. Because of this, I cannot simply concatenate the files, as the result would no longer be valid JSON (and if I add a comma, the file becomes a collection of collections). I would like to keep the memory footprint as low as possible, so I was looking at the following example and just pushing each file to the stream (deserializing each file using JsonConvert.DeserializeObject(fileContent)); however, doing this I still end up with a collection of collections. I have also tried using a JArray instead of JsonConvert, pushing to a list outside of the foreach, but that gives the same result. If I move the Serialize call outside the foreach it does work; however, I am worried about holding the 4-6 GB worth of items in memory.
In summary, I'm ending up with [ [ object {property: value, ... ], ... [ object {property: value, ... ] ] where my desired output would be [ object {property: value (file1), ... object {property: value (fileN) ].
using (FileStream fs = File.Open(@"C:\Users\Public\Documents\combined.json", FileMode.CreateNew))
{
    using (StreamWriter sw = new StreamWriter(fs))
    {
        using (JsonWriter jw = new JsonTextWriter(sw))
        {
            jw.Formatting = Formatting.None;

            JArray list = new JArray();
            JsonSerializer serializer = new JsonSerializer();
            foreach (IListBlobItem blob in blobContainer.ListBlobs(prefix: "SharePointBlobs/"))
            {
                if (blob.GetType() == typeof(CloudBlockBlob))
                {
                    var blockBlob = (CloudBlockBlob)blob;
                    var content = blockBlob.DownloadText();
                    var deserialized = JArray.Parse(content);
                    //deserialized = JsonConvert.DeserializeObject(content);
                    list.Merge(deserialized);
                    serializer.Serialize(jw, list);
                }
                else
                {
                    Console.WriteLine("Non-Block-Blob: " + blob.StorageUri);
                }
            }
        }
    }
}
In this situation, to keep your processing and memory footprints low, I think I would just concatenate the files one after the other even though it results in technically invalid JSON. To deserialize the combined file later, you can take advantage of the SupportMultipleContent setting on the JsonTextReader class and process the object collections through a stream as if they were one whole collection. See this answer for an example of how to do this.
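A rough sketch of what reading the concatenated file back could look like with SupportMultipleContent (the file path and element type are placeholders):

// requires Newtonsoft.Json and Newtonsoft.Json.Linq
using (var sr = new StreamReader(@"C:\Users\Public\Documents\combined.json"))
using (var reader = new JsonTextReader(sr) { SupportMultipleContent = true })
{
    var serializer = new JsonSerializer();
    while (reader.Read())
    {
        if (reader.TokenType == JsonToken.StartArray)
        {
            // Each top-level array in the concatenated file is deserialized on its own,
            // so all items can be processed as if they were one logical collection.
            foreach (var item in serializer.Deserialize<List<JObject>>(reader))
            {
                // process item
            }
        }
    }
}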
So I am using the sql.js library, i.e. the port of SQLite to JavaScript, which can be found here: https://github.com/kripken/sql.js.
This is my code to open and read the database, which comes from a flat file stored locally.
First, a local file is selected via this HTML:
<input type="file" id="input" onchange="handleFiles(this.files)">
The JS code behind the scenes is as follows:
function handleFiles(files) {
    var file = files[0];
    var reader = new FileReader();
    reader.readAsBinaryString(file);
    openDbOnFileLoad(reader);

    function openDbOnFileLoad(reader) {
        setTimeout(function () {
            if (reader.readyState == reader.DONE) {
                //console.log(reader.result);
                db = SQL.open(bin2Array(reader.result));
                execute("SELECT * FROM table");
            } else {
                //console.log("Waiting for loading...");
                openDbOnFileLoad(reader);
            }
        }, 500);
    }
}

function execute(commands) {
    commands = commands.replace(/\n/g, '; ');
    try {
        var data = db.exec(commands);
        console.log(data);
    } catch (e) {
        console.log(e);
    }
}

function bin2Array(bin) {
    'use strict';
    var i, size = bin.length, ary = [];
    for (i = 0; i < size; i++) {
        ary.push(bin.charCodeAt(i) & 0xFF);
    }
    return ary;
}
Now this works and I can access all the columns and values in the database; however, there is one column which is of type blob and it just shows up as empty. Any ideas how I can access the contents of this blob?
The correct answer!
What I was trying to ask in this question is simply how to read the contents of a column of type blob using sql.js. The correct answer is to specify the column names in the query and, for the column that contains data of type blob, get its contents using the hex function, i.e. select column1, hex(column2) from table. It was by no means a question about the most efficient way of doing this. I have also written a blog post about this.
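With the execute() helper from the question, that looks something like this (column1, column2 and table are placeholder names):

// hex() returns the blob contents as a hexadecimal string instead of an empty value
execute("SELECT column1, hex(column2) FROM table");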
Here is a slightly modified copy of the function responsible for initializing my sqlite database:
sqlite.prototype._initQueryDb = function(file, callback) {
    self = this;
    var reader = new FileReader();

    // Fires when the file blob is done loading to memory.
    reader.onload = function(event) {
        var arrayBuffer = event.target.result,
            eightBitArray = new Uint8Array(arrayBuffer),
            database = SQL.open(eightBitArray);

        self._queryDb = database;

        // Trigger the callback to the calling function
        callback();
    }

    // Start reading the file blob.
    reader.readAsArrayBuffer(file);
}
In this case, file is a local sqlite database handle that I get from an HTML input element. I specify a function to call when a change event happens to that input and get the blob from the resulting event.target.files[0] object.
For the sake of brevity on my part I left some things out but I can throw together a smaller and more simplified example if you are still struggling.
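The wiring around it looks roughly like this (a sketch; the element id and the myDbWrapper instance name are just placeholders):

<input type="file" id="db-file">

<script>
document.getElementById('db-file').addEventListener('change', function(event) {
    var file = event.target.files[0];          // local sqlite database handle
    myDbWrapper._initQueryDb(file, function() {
        // the database is ready to query here
    });
});
</script>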
The answer is: with kripken's sql.js, which you mentioned above, you can't. At least as of today (May 2014). The original author doesn't maintain sql.js anymore.
However, I'm the author of a fork of sql.js, which is available here: https://github.com/lovasoa/sql.js .
This fork brings several improvements, including support for prepared statements, in which, contrary to the original version, values are handled in their natural JavaScript type, and not only as strings.
With this version, you can handle BLOBs (both for reading and writing), they appear as Uint8Arrays (that you can for instance convert to object URL to display contents to your users).
Here is an example of how to read blob data from a database:
var db = new SQL.Database(eightBitArray); // eightBitArray can be an Uint8Array
var stmt = db.prepare("SELECT blob_column FROM your_table");
while (stmt.step()) { // Executed once for every row of result
    var my_blob = stmt.get()[0]; // Get the first column of result
    //my_blob is now an Uint8Array, do whatever you want with it
}
db.close(); // Free the memory used by the database
You can see the full documentation here: http://lovasoa.github.io/sql.js/documentation/
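As mentioned above, the Uint8Array can be turned into an object URL if you want to show the blob (for example an image) to your users. A small sketch of that conversion (it assumes the blob really contains image data; the MIME type and element id are placeholders):

// Wrap the bytes in a Blob and create a temporary URL for it
var blob = new Blob([my_blob], { type: "image/png" });
var url = URL.createObjectURL(blob);
document.getElementById("preview").src = url;   // e.g. an <img> element
// Call URL.revokeObjectURL(url) when you no longer need it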