I am trying to send values from Google Sheets to Firebase so that the data table updates automatically. To do this I used Google Drive CMS, which exported the data to Firebase perfectly. The problem is that I scrape data off of websites. For example, I get a list of data through the use of IMPORTXML:
=IMPORTXML("https://www.congress.gov/search?q=%7B%22source%22%3A%22legislation%22%7D", "//li[@class='expanded']/span[@class='result-heading']/a[1]")
CMS doesn't seem to pick up the values that this formula produces but the actual formula, which causes errors. The way I addressed this was to make a new tab that holds the formulas and keep the CMS tab values-only. I've been copying and pasting the values manually but want to make that process automatic. I cannot find any help on writing a script that takes the values of the formulas from one tab and places those values in a different sheet.
Here are some pictures for reference:
*I put the blue highlight on the cell I am referring to in the Google Sheet, and the data from Firebase is showing the first row of data that was exported.
How about this sample script? Please think of it as just one of several possible answers. The flow of the script is as follows. To use it, copy and paste it and run sample().
Flow:
Input the source range of the active sheet.
Retrieve the source range.
Input the destination range of the destination spreadsheet.
Retrieve the destination range of the destination sheet.
Copy the data from the source range to the destination range.
Values, formulas and formats are all copied.
Sample script:
function sample() {
  // Source
  var srcRangeA1 = "a1:b5"; // Source range
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var srcrange = ss.getActiveSheet().getRange(srcRangeA1);

  // Destination
  var dstRangeA1 = "c1:d5"; // Destination range
  var dstid = "### file id ###"; // Destination spreadsheet ID
  var dst = "### sheet name ###"; // Destination sheet name
  var dstrange = SpreadsheetApp.openById(dstid).getSheetByName(dst).getRange(dstRangeA1);
  var dstSS = dstrange.getSheet().getParent();

  // Copy the source sheet into the destination spreadsheet as a temporary sheet,
  // copy the range (values, formulas and formats) into the destination range,
  // then delete the temporary sheet.
  var copiedsheet = srcrange.getSheet().copyTo(dstSS);
  copiedsheet.getRange(srcrange.getA1Notation()).copyTo(dstrange);
  dstSS.deleteSheet(copiedsheet);
}
If I misunderstand your question, I'm sorry.
Edit:
This sample script copies only the values from the source spreadsheet to the destination spreadsheet.
function sample() {
  // Source
  var srcRangeA1 = "a1:b5"; // Source range
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var srcrange = ss.getActiveSheet().getRange(srcRangeA1);

  // Destination
  var dstRangeA1 = "c1:d5"; // Destination range
  var dstid = "### file id ###"; // Destination spreadsheet ID
  var dst = "### sheet name ###"; // Destination sheet name
  var dstrange = SpreadsheetApp.openById(dstid).getSheetByName(dst).getRange(dstRangeA1);

  // Read the evaluated values from the source range and write them to the destination range.
  var sourceValues = srcrange.getValues();
  dstrange.setValues(sourceValues);
}
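Since the question is about two tabs in the same spreadsheet that should stay in sync automatically, here is a minimal sketch of that variant; the sheet names ("Formulas", "CMS"), the range "A1:B50" and the 30-minute interval are assumptions rather than values from the question, so adjust them to your own sheet.
function copyValuesToCmsTab() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  // Read the evaluated IMPORTXML results from the formula tab...
  var values = ss.getSheetByName("Formulas").getRange("A1:B50").getValues();
  // ...and write them as static values into the tab the CMS reads from.
  ss.getSheetByName("CMS").getRange("A1:B50").setValues(values);
}

function installTrigger() {
  // Run copyValuesToCmsTab automatically every 30 minutes via a time-driven trigger.
  ScriptApp.newTrigger("copyValuesToCmsTab")
    .timeBased()
    .everyMinutes(30)
    .create();
}
With a trigger like this in place, the manual copy-and-paste step is no longer needed.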
I'm currently trying to connect a Lua script with a Google Apps Script web app. The connection is working, but due to my lack of knowledge of GScripting I'm not sure why it isn't saving my data correctly.
On the Lua side I'm just hard-coding a random name and a simple numerical userid.
local HttpService = game:GetService("HttpService")
local scriptID = scriptlink -- the deployed web app URL (defined elsewhere)
local WebApp

local function updateSpreadSheet ()
    -- hard-coded test values (the space in the name should really be URL-encoded)
    local playerData = (scriptID .. "?userid=123&name=Jhon Smith")
    WebApp = HttpService:GetAsync(playerData)
end

do
    updateSpreadSheet()
end
On the Google Apps Script side I'm only saving the data on the last row, adding the value of the userid and the name.
function doGet(e) {
  console.log(e)
  // console.log(f)
  callName(e.parameter.userid, e.parameter.name);
}

function callName(userid, name) {
  // Get the last row and add the name provided
  var sheet = SpreadsheetApp.getActiveSheet();
  sheet.getRange(sheet.getLastRow() + 1, 1).setValues([userid], [name]);
}
However, the only data the script is saving is the name, bypassing the userid for reasons I have yet to discover.
setValues() requires a 2D array, and the range dimensions should correspond to that array. The script is only getting a 1 x 1 range, and the setValues() argument is not a 2D array. Fix the syntax, or use appendRow():
sheet.getRange(sheet.getLastRow() + 1,1,1,2).setValues([[userid,name]]);
//or
sheet.appendRow([userid,name])
References:
appendRow
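For completeness, here is a minimal sketch of the corrected web app side under the same assumptions as the question (userid in column 1, name in column 2); the ContentService reply is an addition so that the Lua GetAsync call receives a response body, and is not part of the original answer.
function doGet(e) {
  // e.parameter holds the query-string values sent from the Lua side
  callName(e.parameter.userid, e.parameter.name);
  return ContentService.createTextOutput("OK"); // give GetAsync something to return
}

function callName(userid, name) {
  var sheet = SpreadsheetApp.getActiveSheet();
  // appendRow takes a 1D array: one row, two columns
  sheet.appendRow([userid, name]);
}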
I have an Excel file that contains all the filenames of the images. The paths of these images are stored in an ObservableCollection via the File class, built from the folder that contains all of the images. My goal is to create a hyperlink for each of these filenames by matching it against the pool of image files.
I would like to ask how I can iterate faster through a large collection of File objects in order to get their paths easily.
For example:
Image name from Excel:
ABC_0001
The full path from the collection must be:
C:\Users\admin\Desktop\Images\ABC_0001.jpg
In order to get their full paths, I perform the iteration through a Stream.
My procedure:
Extract the data using Apache POI.
Stream through the image collection, comparing each file's base filename against the extracted data.
Get the result and store the full path on the object via getAbsolutePath().
Code:
//storage during iteration
ObservableList<DetailedData> dataCollection = FXCollections.observableArrayList();
//Image collection containing over 13k images listed via commons-io
ObservableList<File> IMAGE_COLLECTION = FXCollections.observableArrayList(FileUtils.listFiles(browsedFOLDER, new String[]{"JPG", "JPEG", "TIF", "TIFF", "jpg", "jpeg", "tif", "tiff"}, true));
//Sheet data
Sheet sheet1 = wb.getSheetAt(0);
for (Row row : sheet1)
{
    DetailedData data = new DetailedData();
    //extracted data from excel
    String FILENAME = row.getCell(0, Row.MissingCellPolicy.CREATE_NULL_AS_BLANK).getStringCellValue();
    //to be filled in based on the stream result
    String IMAGE_SOURCE = null;
    //stream code with the help of commons-io
    File IMAGE = IMAGE_COLLECTION.stream().filter(e -> FilenameUtils.getBaseName(e.getName()).toLowerCase().equals(FILENAME.toLowerCase())).findFirst().orElse(null);
    if (IMAGE != null)
        IMAGE_SOURCE = IMAGE.getAbsolutePath();
    data.setFileName(FILENAME);
    data.setFullPath(IMAGE_SOURCE);
    dataCollection.add(data);
}
Result:
Excel rows = 9,400
Image Files = 13,000
Iteration Time = 120,000ms
Should these results be considered normal, or can this be made faster?
I tried using parallelStream() and the run got faster, but it consumes much higher CPU usage.
This code should speed your code up a lot, but there are a few questions about your code.
ObservableList<DetailedData> dataCollection = FXCollections.observableArrayList(); Why are you using an ObservableList, and why is this a list of DetailedData and not File? DetailedData has setFileName and setFullPath, but File already has these.
ObservableList<File> IMAGE_COLLECTION = FXCollections.observableArrayList(FileUtils.listFiles(browsedFOLDER, new String[]{"JPG", "JPEG", "TIF", "TIFF", "jpg", "jpeg", "tif", "tiff"}, true)); Why an ObservableList?
These two are small things, but I am curious.
So what I think you should do is use a Map. Your code should look something like the code below.
//storage during iteration
List<DetailedData> dataCollection = new ArrayList<>();
//Image collection containing over 13k images listed via commons-io
List<File> IMAGE_COLLECTION = new ArrayList<>(FileUtils.listFiles(new File("C:\\Users\\blj0011\\Pictures"), new String[]{"JPG", "JPEG", "TIF", "TIFF", "jpg", "jpeg", "tif", "tiff"}, true));
//Use this to map the lower-cased base file name to the file
Map<String, File> map = new HashMap<>();
//Use this to add data to the map
IMAGE_COLLECTION.forEach((file) -> {
    map.put(file.getName().substring(0, file.getName().lastIndexOf(".")).toLowerCase(), file);
});

for (Row row : sheet1)
{
    //extracted data from excel
    String FILENAME = row.getCell(0, Row.MissingCellPolicy.CREATE_NULL_AS_BLANK).getStringCellValue();
    //If the map contains the file name, create a DetailedData object, set its data, then add it to the dataCollection list.
    if (map.containsKey(FILENAME.toLowerCase()))
    {
        DetailedData data = new DetailedData();
        data.setFileName(FILENAME);
        data.setFullPath(map.get(FILENAME.toLowerCase()).getAbsolutePath());
        dataCollection.add(data);
    }
}
The comments are in the code.
I still believe this could be cleaned up a little more if you used List<File> dataCollection = new ArrayList<>().
If you really want to speed up your search, you should try not to do repeatedly things that could be done just once. For example, you could use two loops: the first to prepare the search and the second to actually do it. Inside your filter you call FilenameUtils.getBaseName and do two lower-case conversions for every row. It would be better to do these things only once, in the first loop, and store the resulting Strings in a list; in the second loop you then search that list.
I am also wondering why you use ObservableLists here. A simple List would do as well.
I've tested another approach to this slow iteration.
It seems that the cause is declaring the Stream repeatedly inside the for-each.
I tried Baeldung's Supplier solution and declared it outside the loop, together with parallelStream().
Sample code:
Supplier<Stream<File>> streamSupplier = () -> imageCollection.parallelStream();

for (Row row : sheet)
{
    // FILENAME and IMAGE_SOURCE are read and stored as in the original code
    File IMAGE = streamSupplier.get().filter(e -> FilenameUtils.getBaseName(e.getName()).toLowerCase().equals(FILENAME.toLowerCase())).findFirst().orElse(null);
    if (IMAGE != null)
        IMAGE_SOURCE = IMAGE.getAbsolutePath();
}
The result went down to 45,000 ms.
Please correct me if my approach was not right.
I have two datasets:
Dataset 1 - Records the details of a store visit: merchandiser name, location, date and a relation to an SKU (Dataset 2).
Dataset 2 - The SKU data, where the stock levels for each SKU are input as a new record, each associated with a visit from Dataset 1.
I have two issues:
I want to combine this data into a single table, showing each SKU record with additional columns for the visit information (such as the location and date). How do I do this?
How do I combine this data for use elsewhere, such as Google Data Studio? Essentially I want to be able to see an SKU's stock-level history, or the date it was last updated.
You need to create a Calculated Data Source. You can refer to this sample.
At a high level:
Add a data source in App Maker.
Select Calculated, provide a name and create the data source.
Once your calculated model is in place, add fields on an as-needed basis. For example, if you want to store the sum of two fields, create one Integer field in the calculated model.
Now go to the second tab, "Datasources", and click on the data model name there. You should see an option to write a server-side script.
Here you should write your logic for combining your data sources. I can provide one sample to achieve this.
//server script
var calculatedModelRecords = [];
var allRecords = app.models.Request.newQuery().run(); // your existing data source
for (var i = 0; i < allRecords.length; i++) {
  var record = allRecords[i];
  var draftRecord = app.models.TAT.newRecord(); // the new calculated data source
  draftRecord.CreatedOn = record.CreatedOn;
  draftRecord.DocumentName = record.DocumentName;
  draftRecord.DueDate = record.DueDate;
  draftRecord.DaysPerStage = record.DaysPerStage;
  draftRecord.Status = record.Status;
  calculatedModelRecords.push(draftRecord);
}
return calculatedModelRecords;
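Applied to the question, the same pattern would query the SKU records and copy the fields of each one's related visit onto a record of the calculated model. The sketch below is only an illustration: the model names SKU, Visit and StockHistory, the relation field sku.Visit and the individual field names are assumptions and must be replaced with the names in your own models.
//server script for a calculated model that flattens SKU records with their visit details (names are assumptions)
var results = [];
var skus = app.models.SKU.newQuery().run();
for (var i = 0; i < skus.length; i++) {
  var sku = skus[i];
  var visit = sku.Visit; // relation back to the Dataset 1 (store visit) record
  var row = app.models.StockHistory.newRecord();
  row.SkuName = sku.Name;
  row.StockLevel = sku.StockLevel;
  if (visit) {
    row.Location = visit.Location;
    row.VisitDate = visit.Date;
    row.Merchandiser = visit.MerchandiserName;
  }
  results.push(row);
}
return results;
The second part of the question (using the data in Google Data Studio) would then be a matter of exporting these combined rows to something Data Studio can connect to, such as a Google Sheet.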
I'm using a Google Spreadsheet to log some things on a day-to-day basis. To make it user-friendly for my colleagues I've made the spreadsheet as "interface-ish" as possible; basically it resembles a form.
This "form" has a submit button that saves the sheet and creates a new sheet (a copy of a template).
The problem is that the sheet should be saved with the date from a cell, BUT it saves with the date one day before the actual date... (!) I'm going nuts trying to figure out why.
Here's the code from the Google Apps Script I'm calling when the submit button is pressed:
function renameSheet() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var ShootName = ss.getActiveSheet().getRange("G8").getValue();
  var newdate = new Date(ss.getActiveSheet().getRange("A8").getValue());
  var Datename2 = Utilities.formatDate(newdate, "PST", "yyyy-MM-dd");
  var NewName = Datename2 + " - " + ShootName;
  ss.renameActiveSheet(NewName);
  // create a duplicate of the template sheet
  ss.setActiveSheet(ss.getSheetByName("Original0"));
  var newSheet = ss.duplicateActiveSheet();
  newSheet.activate();
  ss.moveActiveSheet(1);
  newSheet.setName("NewLog");
}
If cell A8 has the value "12.25.16", the sheet will be named "12.24.16".
If anyone has a proper fix, or even a dirty quick fix, I'd love to hear it.
You are hardcoding the timezone in this line:
var Datename2 = Utilities.formatDate(newdate, "PST", "yyyy-MM-dd");
but this does not take daylight saving time into account, and date-only values in spreadsheets are always at 00:00:00 hours, so a one-hour shift can change the date...
Replace it with an automatically determined value like this:
var Datename2 = Utilities.formatDate(newdate, Session.getScriptTimeZone(), "yyyy-MM-dd");
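If the script's time zone can differ from the spreadsheet's own setting, a variation worth considering is to format with the spreadsheet's time zone instead; this is only a suggestion, assuming the script is bound to the spreadsheet:
var tz = SpreadsheetApp.getActiveSpreadsheet().getSpreadsheetTimeZone();
var Datename2 = Utilities.formatDate(newdate, tz, "yyyy-MM-dd");
Either way, the key point is to format the date in the time zone the sheet actually uses rather than a hardcoded "PST".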
I am looking for help with two parts of my iMacros script...
Part 1 - Variable
I am clicking on the following line of a page in order to access the page I need to extract from.
1st link:
TAG POS=8 TYPE=A FORM=NAME:xxyy ATTR=HREF:https://aaa.aaaa.com/en/administration/xxxx.jsp?reqID=h*
2nd link:
TAG POS=9 TYPE=A FORM=NAME:xxyy ATTR=HREF:https://aaa.aaaa.com/en/administration/xxxx.jsp?reqID=h*
The TAG POS is the variable; how can I set it up so that when running in a loop, the macro selects the next link on the screen (i.e. chooses 8, 9, 10, and so on)? Some screens have 100-plus links to be clicked on.
Part 2 - Save CSV file
I have the SAVEAS line in my file, but how can I make it so that only one CSV file is created (even if the macro is run 50 times)? Also, is there a way to format the CSV file from iMacros so that each new run starts on another row? (Currently, all data extracts to row 1 across many columns.)
Thank you in advance,
Adam
This will do what you asked. It will loop the macro and each time set the new position number in the macro.
1)
var macro;
macro = "CODE:";
macro += "TAG POS={{number}} TYPE=A FORM=NAME:xxyy ATTR=HREF:https://aaa.aaaa.com/en/administration/xxxx.jsp?reqID=h*" + "\n";

//adjust the upper bound to match the number of links on the page
for (var i = 1; i < 100; i++)
{
    iimSet("number", i);
    iimPlay(macro);
}
For the solution to part two you will need JavaScript scripting. The first part declares the macro, the second part plays the macro, and the third part is the function which saves the extracted text to a file. Each time you run it, it appends the text on a new line.
2)
var macroExtractSomething;
macroExtractSomething = "CODE:";
macroExtractSomething += "TAG POS=1 TYPE=DIV ATTR=CLASS:some_class_of_some_div EXTRACT=TXT" + "\n";

iimPlay(macroExtractSomething);
var extracted_text = iimGetLastExtract();
WriteFile("C:\\some_folder\\some_file.csv", extracted_text);

//This function writes a string into a file. It will also create the file at that location if it does not exist.
function WriteFile(path, string)
{
    //import FileUtils.jsm
    Components.utils.import("resource://gre/modules/FileUtils.jsm");
    //declare the file
    var file = new FileUtils.File(path);
    //set the file path
    file.initWithPath(path);
    //if it exists move on, if not create it
    if (!file.exists())
    {
        file.create(file.NORMAL_FILE_TYPE, 0666);
    }
    var charset = 'EUC-JP'; //change this if you need another encoding, e.g. 'UTF-8'
    var fileStream = Components.classes['@mozilla.org/network/file-output-stream;1']
        .createInstance(Components.interfaces.nsIFileOutputStream);
    //18 = 0x02 (write only) | 0x10 (append), so each run adds to the end of the file
    fileStream.init(file, 18, 0x200, false);
    var converterStream = Components
        .classes['@mozilla.org/intl/converter-output-stream;1']
        .createInstance(Components.interfaces.nsIConverterOutputStream);
    converterStream.init(fileStream, charset, string.length,
        Components.interfaces.nsIConverterInputStream.DEFAULT_REPLACEMENT_CHARACTER);
    //write the extracted text on a new line at the end of the file
    converterStream.writeString("\r\n" + string);
    converterStream.close();
    fileStream.close();
}