Records and multithreading, converting to pass through the wall - google-app-maker

First, is there an official CS term for sending things between the front end and back end? I just made up "the wall" but I would like a cooler term.
So in App Maker it seems you cannot pass whole records through to the backend (although you can handle them on either end).
So basically what I was doing was: get a set of records and divide them into chunks:
var records = app.datasources.filesToProcess.items;
then call the backend process once per chunk with this:
google.script.run.withSuccessHandler(onSuccess).backendProcess(records, start, end);
This allows for a kind of multithreading. The problem is passing the records. Is there an easy way to get just the IDs from a set of records client-side, so I can pass those as an array in place of the records? Passing the record object itself gives an error.

Just do the following:
var records = app.datasources.filesToProcess.items.map(function(item) {return item.id;});
and then, on the backend, you can use:
function backendProcess(recordIds, start, end) {
  var items = app.models.YourModel.getRecords(recordIds);
}
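The chunking itself comes down to computing [start, end) index ranges, one per backend call. Here is a minimal sketch of that pattern in plain JavaScript; the chunk size and function names are illustrative, and the `google.script.run` part is shown as a comment because it only exists inside App Maker:

```javascript
// Split a list of record ids into [start, end) index ranges so each
// range can go to one backend call (a kind of poor man's multithreading).
function makeChunks(totalCount, chunkSize) {
  var chunks = [];
  for (var start = 0; start < totalCount; start += chunkSize) {
    chunks.push({ start: start, end: Math.min(start + chunkSize, totalCount) });
  }
  return chunks;
}

// Inside App Maker you would then do something like (illustrative):
//   var ids = app.datasources.filesToProcess.items.map(function (item) {
//     return item._key;  // record key (the answer above uses item.id)
//   });
//   makeChunks(ids.length, 50).forEach(function (chunk) {
//     google.script.run
//       .withSuccessHandler(onSuccess)
//       .backendProcess(ids, chunk.start, chunk.end);
//   });

console.log(makeChunks(7, 3));
// [{start:0,end:3},{start:3,end:6},{start:6,end:7}]
```

Because every chunk call passes only plain values (an array of ids plus two numbers), nothing hits the restriction on passing record objects across the client/server boundary.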

Related

TypeError: Cannot read property "data" from undefined

There is a many-to-many relation between two models, namely Countries and Clients. When I fetch some records from Clients (an array of clients) and try to assign them programmatically to a country (record) like this: record[clientsRelationName] = clients, I get the following bizarre error: TypeError: Cannot read property "data" from undefined. I know for sure that the variable clientsRelationName is a string that corresponds to the name of the relation, which is simply called Clients, and it has nothing to do with a variable called data. In fact, data doesn't exist. I also know for sure that record is a defined variable.
Any idea why this is happening? Is it a bug?
I have seen an issue where using Object.keys() on a server-side record yields [key, data, state] instead of the expected fields for that record. So if your programmatic assignment iterates over the properties of that record object, you may hit this data property.
Unfortunately that's all I know so far. Maybe the App Maker Team can provide further insight.
As you pointed out in your question, clientsRelationName is a string corresponding to the name of the relation. Your actual relation is just Clients, therefore either of the following should work:
record['Clients'] = clients;
or
record.Clients = clients;
I would actually suggest using record.YourRelation, because when you type the dot after your record, IntelliSense will automatically bring up all field names and relation end names available on that record.
After a lot of trial and error, I finally found a way to make it work using a rather simple solution, and it's the only way I could make it work. Basically, to avoid getting this strange error (TypeError: Cannot read property "data" from undefined) when modifying an association on a record, I did the following:
Loop through the record relation (an array) and pop every record inside it. Then loop through the other records that you want to assign to the record relation (modifying the association), pushing every element onto the record relation:
var length = record[relationNameAsVariable].length;
for (var i = 0; i < length; i++) {
    record[relationNameAsVariable].pop();
}
now record[relationNameAsVariable] is empty, so do the following:
for (var i = 0; i < clientsArray.length; i++) {
    record[relationNameAsVariable].push(clientsArray[i]);
}
It could be a bug or something else that I'm doing wrong when trying to replace the whole association. I'm not sure. But this works like a champ.
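The pop-and-push workaround can be wrapped in a small helper. The key point seems to be that the relation array is mutated in place rather than being replaced with a new array. This is a plain JavaScript sketch; replaceRelationItems is an illustrative name:

```javascript
// Replace the contents of a relation array in place, without ever
// assigning a new array to record[relationName].
function replaceRelationItems(relationArray, newItems) {
  while (relationArray.length > 0) {
    relationArray.pop();               // empty the existing association
  }
  for (var i = 0; i < newItems.length; i++) {
    relationArray.push(newItems[i]);   // refill it with the new records
  }
}

// Usage (clientsArray as in the answer above):
//   replaceRelationItems(record[relationNameAsVariable], clientsArray);
```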

CouchDB: Merging Objects in Reduce Function

I'm new to CouchDB, so bear with me. I searched SO for an answer, but couldn't narrow it down to this specifically.
I have a mapper function which creates values for a user. The users have seen different product pages, and we want to tally the type and products they've seen.
var emit_values = {};
emit_values.name = doc.name;
...
emit_values.productsViewed = {};
emit_values.productsViewed[doc.product] = 1;
emit([doc.id, doc.customer], emit_values);
In the reduce function, I want to gather different values into that productsViewed object for that given user. So after the reduce, I have this:
productsViewed: {
book1: 1,
book3: 2,
book8: 1
}
Unfortunately, doing this creates a reduce overflow error. According to the other posts, this is because the productsViewed object is growing in size in the reduce function, and Couch doesn't like that. Specifically:
A common mistake new CouchDB users make is attempting to construct complex aggregate values with a reduce function. Full reductions should result in a scalar value, like 5, and not, for instance, a JSON hash with a set of unique keys and the count of each.
So, I understand this is not the right way to do this in Couch. Does anyone have any insight into how to properly gather values into a document after reduce?
You simply build a view with the customer as key:
emit(doc.customer, doc.product);
Then you can call
/:db/_design/:name/_view/:name?key=":customer"
to get all products a user has viewed.
If a customer can have viewed a product several times, you can build a multi-key view:
emit([doc.customer, doc.product], null);
and reduce it with the built-in function _count
/:db/_design/:name/_view/:name?startkey=[":customer","\u0000"]&endkey=[":customer","\u9999"]&reduce=true&group_level=2
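To see what the multi-key view plus the _count reduce produces at group_level=2, here is a small simulation in plain JavaScript (not CouchDB itself; the sample documents are made up for illustration):

```javascript
// Simulate the map step: emit([doc.customer, doc.product], null)
var docs = [
  { customer: 'alice', product: 'book1' },
  { customer: 'alice', product: 'book3' },
  { customer: 'alice', product: 'book3' },
  { customer: 'bob',   product: 'book8' }
];

var rows = docs.map(function (doc) {
  return { key: [doc.customer, doc.product], value: null };
});

// Simulate the built-in _count reduce at group_level=2:
// rows sharing the same [customer, product] key collapse to a count.
var counts = {};
rows.forEach(function (row) {
  var groupKey = JSON.stringify(row.key);
  counts[groupKey] = (counts[groupKey] || 0) + 1;
});

console.log(counts);
// alice/book1: 1, alice/book3: 2, bob/book8: 1
```

The counts are exactly the per-product numbers from the desired productsViewed payload; they just arrive as one view row per [customer, product] pair instead of one nested object.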
You have to accept that you cannot "construct complex aggregate values" with CouchDB just by requesting the view. If you want a data structure like your desired payload
productsViewed: {
book1: 1,
book3: 2,
book8: 1
}
I recommend using an _update handler on the customer doc. Every request that logs a product visit then adds a value to the customer's productsViewed property instead of creating a new doc.
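Here is a minimal sketch of such an update handler. The field and parameter names are illustrative; CouchDB calls the handler as handler(doc, req) and persists the returned doc, and the calls at the bottom only simulate two requests locally:

```javascript
// Sketch of a CouchDB _update handler (illustrative field names).
var logProductView = function (doc, req) {
  if (!doc) {
    // No customer doc yet: create one keyed by the requested id.
    doc = { _id: req.id, productsViewed: {} };
  }
  if (!doc.productsViewed) {
    doc.productsViewed = {};
  }
  var product = req.query.product; // e.g. ?product=book3
  doc.productsViewed[product] = (doc.productsViewed[product] || 0) + 1;
  return [doc, 'ok'];
};

// Local simulation of two requests for the same customer:
var result = logProductView(null, { id: 'customer1', query: { product: 'book3' } });
result = logProductView(result[0], { id: 'customer1', query: { product: 'book3' } });
console.log(result[0].productsViewed); // { book3: 2 }
```

With this in place, the customer document itself accumulates the productsViewed object, and no reduce function ever has to build a growing hash.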

pagination in cassandra based web application

How can I do pagination in a Cassandra-based web application? I am using Spring MVC on the server side and jQuery on the client side. I tried this but was not satisfied.
My row key is UUIDType, and each time I send the start key as a string from the client browser, so I don't know how to convert it back to a UUID. A simple example would be appreciated.
Spring Data has this functionality pre-rolled:
http://static.springsource.org/spring-data/data-jpa/docs/current/reference/html/repositories.html#web-pagination
If you use PlayOrm for Cassandra, it returns a cursor when you query; as your first page reads in the first 20 results and displays them, the next page can just use the same cursor in your session, and it picks up right where it left off without rescanning the first 20 rows again.
Dean
I would suggest a generic solution which will work for any language. I used Python's pycassa to work this out.
First approach:
- Say column_count = 10 (how many results to display on the front end).
- Collect the first 11 columns, sort the first 10, and send them to the user (front end) as a JSON object, along with the parameter last=11th_column.
- The user then calls for page 2 with prev = 1st_column_id, column_start = 11th_column and column_count = 10. Based on this, I query Cassandra like cf.get('cf', key, column_start=11th_column, column_count=10).
- This way I can traverse to the next page and the previous page.
- The only issue with this approach is that I don't have all columns in the super column sorted, so this did not work for me.
Second approach (the one I used in production):
- Fetch all super columns and columns for a row key, e.g. in Python pycassa: cf.get('cf', key).
- Sort this in Python using sorted and a lambda function based on column values.
- Once sorted, prepare buckets, where each bucket is of the page size / column count. Also filter out any rogue data if needed before bucketing.
- Store page-by-page results in Redis with keys such as 'row_key|page_1|super_column' and keep refreshing Redis periodically.
My second approach worked pretty well with small/medium amounts of data.
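The "fetch one extra column" trick from the first approach is language-agnostic; here is a sketch in JavaScript for illustration, where an in-memory array stands in for a sorted column slice and getPage is a made-up name:

```javascript
// Generic sketch of page-token pagination: ask the store for
// pageSize + 1 items, show pageSize, and keep the extra one as
// the start key (token) for the next page.
function getPage(allColumns, startKey, pageSize) {
  // Simulates cf.get(..., column_start=startKey, column_count=pageSize + 1):
  var startIndex = startKey === null ? 0 : allColumns.indexOf(startKey);
  var slice = allColumns.slice(startIndex, startIndex + pageSize + 1);

  return {
    items: slice.slice(0, pageSize),                         // shown to the user
    next: slice.length > pageSize ? slice[pageSize] : null   // token for next page
  };
}

var columns = ['c01', 'c02', 'c03', 'c04', 'c05'];
var page1 = getPage(columns, null, 2);       // items: ['c01','c02'], next: 'c03'
var page2 = getPage(columns, page1.next, 2); // items: ['c03','c04'], next: 'c05'
```

A null next token tells the front end it has reached the last page, so no total row count is ever needed.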

page load taking much time asp.net

I have a user control that fetches data from a database. It takes a lot of time. Also the speed of my web application has become slow. What should I do to make page loading faster?
I will answer in general way.
If you only fetch data, without calculations, then:
DataBase Part
a. Optimize your SQL query, and be sure that you use the right indexes on the database.
b. Do not load more data than you want to show; do paging.
c. If you fetch data from too many tables at the same time, create a new 'flat' table and pre-render your results there on a regular schedule on a background thread.
Page Part
a. While you load data, show it immediately rather than buffering it, and flush the response from time to time.
If you fetch data with calculations, then do the calculations on a background thread on a schedule before you show them, place the results in another flat table, and show that table.
For example, here is how to show the data while you fetch it:
<%
    int rowCount = 0;
    object data;
    while (GetNextData(out data))
    {
        Response.Write(RenderMyTableData(data));
        // Flush every 20 rows so the browser can start rendering early.
        if (++rowCount % 20 == 0)
            Response.Flush();
    }
%>

How to bind DataGrid to display only 25 records of a table having more than 1000 records ..?

I have a DataGrid control which is bound to a table containing more than 1000 records, and I want to show only 25 records at a time. I have used paging in the DataGrid, but each time the next page index is set, the query is fired again, which takes a lot of time. So what is the easiest way to bind data in this case to improve performance?
I don't recommend using caching, since all the data would be returned to the server anyway the first time. You can improve performance by using custom paging queries against the database. Assuming you're working with at least SQL Server 2005, here's a great article for your purpose with different benchmarking results.
Have you considered caching your data set? Then you would only need to query the data if the cache is empty or expired.
When you handle your Page Changed event, you need to pull in the new page information. You will need to create a stored procedure that takes CurrentPageNumber and PageSize as arguments.
This is in addition to any other arguments you already supply when bringing down your data.
In the SP, you fill up a temporary table or table variable (or you can use a CTE) with the data that you are returning, but also the RowNumber.
Based upon your CurrentPageNumber argument (zero-based here), you are able to return all rows with RowNumber between CurrentPageNumber * PageSize + 1 and (CurrentPageNumber + 1) * PageSize.
Here's a good resource:
How to return a page of results from SQL?
