SuiteScript: What is the purpose of governance units? - suitescript

I noticed that through my browser console I was able to view the source code for the SuiteScript API. Each API function had a call inside it that looked like: nsapiLogUsage( 'nlapiLoadRecord', type ).
When I removed this, inserted the entire function into my own script, and then called it, governance units no longer mattered. My code executed on, and on, and on, with no restrictions. I am currently writing a script that will execute against 300k records, so governance units have been a huge issue to deal with.
Are governance units in place only to keep long-running code from executing, or are there more serious consequences to not having governance units?

Governance is a way to limit script execution to avoid the overconsumption of server-side resources, as mentioned in the documentation:
SuiteScript
API Governance
Perhaps the behavior you are experiencing is not the intended one. In any case, governance is not a feature that is designed to be enforced in the web browser.

Each script type has its own usage limit. See Script Usage Limits
For instance, a client script has 1000, a scheduled script has 10000, and so on.
Each API call costs governance units, consuming the allowed usage.
An error is thrown once the limit is reached.
If you are trying to update 300k records, I suggest you use a scheduled script, because there you can reset the governance units using nlapiYieldScript().
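In a SuiteScript 1.0 scheduled script, the usual pattern is to check the remaining usage on each iteration and yield before the limit is hit. A minimal sketch (the record type, the 200-unit threshold, and the function names `needsYield`/`processRecords` are illustrative; the `nlapi*` calls are real SuiteScript 1.0 APIs):

```javascript
// Decide whether a yield is needed: true once remaining governance
// units fall below a safety threshold.
function needsYield(remainingUsage, threshold) {
  return remainingUsage < threshold;
}

// Sketch of a scheduled-script processing loop.
function processRecords(recordIds) {
  for (var i = 0; i < recordIds.length; i++) {
    if (needsYield(nlapiGetContext().getRemainingUsage(), 200)) {
      nlapiSetRecoveryPoint(); // persist state before yielding
      nlapiYieldScript();      // script resumes later with a fresh usage pool
    }
    var rec = nlapiLoadRecord('customer', recordIds[i]); // costs governance units
    // ...modify and submit the record...
  }
}
```

The yield/resume cycle is what makes a 300k-record run feasible: each resumption gets a fresh governance allowance, so the script never actually hits the limit.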

Related

Lowering Cloud Firestore API Latency

I developed an Android application where I use Firebase as my main service for storing data, authenticating users, storage, and more.
I recently went deeper into the service and wanted to see the API usage in my Google Cloud Platform.
In order to do so, I navigated to https://console.cloud.google.com/ to see what it shows under APIs and Services:
And by checking what might be causing it, I got:
Can someone please explain what "Latency" means, and what could be the reason that this service in particular has a so much higher latency value compared to the other APIs?
Does this value have any impact on my application, such as slowing the response, or something else? If so, are there any guidelines for lowering this value?
Thank you
Latency is the "delay" until an operation starts. Cloud Functions, in particular, have to actually load and start a container (if they have paused), or at least load from memory (it depends on how often the function is called).
Can this affect your client? Holy heck, yes, but what you can do about it is a significant study in and of itself. For Cloud Functions, the biggest latency comes from starting the "container" (assuming a cold start, which your low request count suggests): it has to load and initialize modules before calling your code. The same advice applies here as for browser code: tight code, minimal module loads, etc.
Some latency is to be expected from Cloud Functions (I'm pretty sure a couple hundred ms is typical). Design your client UX accordingly. Cloud Functions' real power isn't instantaneous response; rather, it's the compute power available IN PARALLEL with browser operations, and the ability to spin up multiple instances to respond to multiple browser sessions. Use it accordingly.
Listen and Write are long-lived streams. In this case, an 8-minute latency should be interpreted as a connection that was open for 8 minutes. Individual queries or write operations on those streams will be faster (milliseconds).
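One way to see the cold-start effect from the client side is simply to time successive calls to the same endpoint: the first is often cold, the second warm. A minimal sketch (the endpoint URL in the usage comment is illustrative):

```javascript
// Time how long an async call takes, in milliseconds. Calling it twice
// in a row against the same endpoint shows cold-start vs. warm latency.
async function measureLatency(call) {
  const start = Date.now();
  await call();
  return Date.now() - start;
}

// Usage sketch:
// const cold = await measureLatency(() => fetch('https://us-central1-myapp.cloudfunctions.net/ping'));
// const warm = await measureLatency(() => fetch('https://us-central1-myapp.cloudfunctions.net/ping'));
// A large gap between `cold` and `warm` points at container start-up, not your code.
```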

API rate limits on calling getSyncChunk()

Given that Evernote doesn't publish its exact API rate limits (at least I can't find them), I'd like to ask for some guidance on their usage.
I'm creating an application that will sync the user's notes and store them locally. I'm using getFilteredSyncChunk to do this.
I'd like to know how often I can make this API call without hitting the limits. I understand that the limits are on a per-user basis, so would it be acceptable to call this every 5 minutes to get the latest notes?
TIA
The rate limit is on a per API key basis. You'll be okay calling getFilteredSyncChunk every five minutes, although it's a little more efficient to call getSyncState instead.
In case you haven't seen it yet, check out this guide for info on sync (accessible from this page).
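The getSyncState approach boils down to comparing the server's updateCount with the one you stored after your last sync, and only downloading chunks when it has advanced. A sketch of that decision logic (updateCount is a real field of Evernote's SyncState; the helper name is mine):

```javascript
// Returns true when the server has changes we haven't downloaded yet,
// i.e. its updateCount is ahead of the one saved after our last sync.
function needsSync(localUpdateCount, serverUpdateCount) {
  return serverUpdateCount > localUpdateCount;
}

// Polling sketch: call getSyncState (cheap) every few minutes and only
// fall back to getFilteredSyncChunk when something actually changed:
// if (needsSync(saved.updateCount, syncState.updateCount)) { /* fetch chunks */ }
```

This keeps the expensive chunk calls rare, which matters when you are worried about per-key rate limits.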

Slow Transactions - WebTransaction taking the hit. What does this mean?

Trying to work out why some of my application servers have crept up to over 1s response times, according to New Relic. We're using Web API 2.0 and MVC 5.
As you can see below the bulk of the time is spent under 'WebTransaction'. The throughput figures aren't particularly high - what could be causing this, and what are the steps I can take to reduce it down?
Thanks
EDIT I added transactional tracing to this function to get some further analysis - see below:
Over 1 second waiting in System.Web.HttpApplication.BeginRequest().
Any insight into this would be appreciated.
Ok - I have now solved the issue.
Cause
One of my logging handlers, which syncs its data to cloud storage, was initializing every time it was instantiated, which also involved a call to Azure table storage. As it was passed into the controller in question, every call to the API triggered this initialization.
It was a blocking call, so it added ~1s to every call. Once I configured this initialization to happen once per server life cycle, the extra second disappeared.
Observations
As the blocking call was made at the time the controller was built (due to Unity resolving the dependencies at that point), New Relic reports this as
System.Web.HttpApplication.BeginRequest()
Although I would love to see this a little more granular, as we can see from the transactional trace above, it was in fact the 7 calls to table storage (still not quite sure why it was 7) that led me down this path.
Nice tool - my New Relic subscription is starting to pay for itself.
It appears that the bulk of the time is being spent in Account.NewSession, but it is difficult to say without drilling down into your data. If you need more insight into a block of code, you may want to consider adding Custom Instrumentation.
If you would like us to investigate this in more depth, please reach out to us at support.newrelic.com, where we will have your account information on hand.

Designing an asynchronous task library for ASP.NET

The ASP.NET runtime is meant for short workloads that can be run in parallel. I need to be able to schedule periodic events and background tasks that may or may not run for much longer periods.
Given the above I have the following problems to deal with:
The AppDomain can shutdown due to changes (Web.config, bin, App_Code, etc.)
IIS recycles the AppPool on a regular basis (daily)
IIS itself might restart, or for that matter the server might crash
I'm not convinced that running this code inside ASP.NET is the wrong thing to do, because it would allow for a simpler programming model. But doing so would require that an external service periodically make requests to the app so that the application is kept running, and that all background tasks be programmed with the utmost care. They will have to be able to pause and resume their work in the event of an unexpected error.
My current line of thinking goes something like this:
If all jobs are registered in the database, it should be possible to use the database as a bookkeeping mechanism. In the case of an error, the database would contain all state necessary to resume the operation at the next opportunity given.
I'd really appreciate some feedback/advice on this matter. I've been considering running a Windows service and using some RPC solution as well, but it doesn't have the same appeal to me. Instead I'd have a lot of deployment issues and be synchronizing tasks and code across several applications. Due to my business needs this is less than optimal.
This is a shot in the dark since I don't know what database you use, but I'd recommend you consider dialog timers and activation. Assuming that most of the jobs have to do some data manipulation, and it is likely that all of them do only data manipulation, leveraging activation and timers gives an extremely reliable job-scheduling solution, entirely embedded in the database (no need for an external process/service, no dependencies outside the database bounds like msdb), and it is a solution that ensures scheduled jobs can survive restarts, failover events, and even disaster-recovery restores. Simply put, once a job is scheduled it will run even if the database is restored one week later on a different machine.
Have a look at Asynchronous procedure execution for a related example.
And if this is too radical, at least have a look at Using Tables as Queues since storing the scheduled items in the database often falls under the 'pending queue' case.
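The 'pending queue' idea boils down to a claim step: pick the best pending job and atomically flip it to running. Modeled in-memory here for illustration (the job shape and function name are mine); in a real database this claim would be a single UPDATE ... WHERE status = 'pending' so that two workers cannot grab the same row:

```javascript
// Claim the highest-priority pending job, mark it running, and return it.
// Returns null when nothing is pending.
function claimNextJob(jobs) {
  const pending = jobs.filter(j => j.status === 'pending');
  if (pending.length === 0) return null;
  // Highest priority first; ties broken by original order (sort is stable).
  pending.sort((a, b) => b.priority - a.priority);
  pending[0].status = 'running';
  return pending[0];
}
```

Because the job's full state lives in the table, a worker that dies mid-job leaves a 'running' row behind that a recovery pass can detect and requeue, which is exactly the restart-survival property the question asks for.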
I recommend that you have a look at Quartz.Net. It is open source and it will give you some ideas.
Using the database as a state-keeping mechanism is a completely valid idea. How complex it will be depends on how far you want to take it. In many cases you will end up pairing your database logic with a Windows service to achieve the desired result.
FWIW, it is typically not a good practice to manually use the thread pool inside an ASP.Net application, though (contrary to what you may read) it actually works quite nicely other than the huge caveat that you can't guarantee it will work.
So if you needed a background thread that examined the state of some object every 30 seconds and you didn't care if it fired every 30 seconds or 29 seconds or 2 minutes (such as in a long app pool recycle), an ASP.Net-spawned thread is a quick and very dirty solution.
Asynchronously fired callbacks (such as on the ASP.Net Cache object) can also perform a sort of "behind the scenes" role.
I have faced similar challenges and ultimately opted for a Windows service that uses a combination of building blocks for maximum flexibility. Namely, I use:
1) WCF with implementation-specific types OR
2) Types that are meant to transport and manage objects that wrap a job OR
3) Completely generic, serializable objects contained in a custom wrapper. Since they are just a binary payload, this allows any object to be passed to the service. Once in the service, the wrapper defines what should happen to the object (e.g. invoke a method, gather a result, and optionally make that result available for return).
Ultimately, the web site is responsible for querying the service about its state. This querying can be as simple as polling or can use asynchronous callbacks with WCF (though I believe this also uses some sort of polling behind the scenes).
I'll tell you what I have done.
I have created a class called Atzenta that has a timer (1-2 second trigger).
I have also created a table in my temporary database that keeps the jobs. The table knows the jobID, other parameters, the priority, the job status, and messages.
I can add or delete a job through this class. When there is no action to be done, the timer is stopped; when I add a job, the timer starts again. (The timer is a thread by itself that can do parallel work.) I use System.Timers and not other timers for this.
The jobs can have different priorities.
Now let's say that I place a job in this table using the Atzenta class. The next time the timer triggers, it runs the query on this table, finds the first available job, and just runs it. No other job runs until this one has ended.
All synchronization and flags are done through the table. In the table I have flags for every job that show whether it is |waiting to run|requested to run|running|paused|finished|killed|
All jobs are already-known functions or classes (e.g. the creation of statistics).
For stop and start, I use global.asax and Application_Start/Application_End to start and pause the object that keeps the tasks. For example, when I am doing a job and Application_End fires, either I wait for it to finish and then stop the app, or I stop the action, notify the table, and start it again on Application_Start.
So I say Atzenta.RunTheJob(Jobs.StatisticUpdate, ProductID); this adds the job to the table and starts the timer, and on the next trigger the job runs and the statistics for the given product ID are updated.
I use a table in a database to synchronize the many pools that run the same web app, and in fact it works that way. With a common table, synchronizing the jobs is easy, and you avoid having 2 pools run the same job at the same time.
In my back office I have a simple table view to see the status of all jobs.

Performance logging tips

I am developing a large data-collecting ASP.NET/Windows-service application pair that uses Microsoft SQL Server 2005 through LINQ2Sql.
Performance is always the issue.
Currently the application is divided into multiple larger processing parts, each logging the duration of their work. This is not detailed and does not help us with anything. It would be nice to have some database tables that contain statistics that the application itself collected from its own behavior.
What logging tips and data structures do you recommend to spot the parts that cause performance problems?
Edit:
Mostly I am looking for parts of the application that can cripple the whole system when excessively used. There are peaks during the day when some parts of the application are under heavy load. Some advanced logging would help me isolate the parts that need more attention and optimizing.
Don't use logging for this; use Performance Counters instead. The runtime impact of performance counters is minor and you can simply have them always on. To collect and monitor the performance, you can rely on the existing performance-counter infrastructure (perfmon.exe, logman.exe, relog.exe, etc.).
I personally use XML and XSLT to generate the counters. I can then decorate all my code with performance counters tracking functions being run, average call duration, number of executions, rate of executions per second, and so on and so forth. A good choice of counters will give an immediate, accurate performance picture much faster than logging can. While logging can give more insight into certain event paths (i.e. the order of events that led to a certain state), logging can seldom be 'always on', as the impact is significant, not only on raw performance but most importantly on concurrency, since most existing logging infrastructures add contention.
This is not a job for logging. It's a job for a profiler.
Try one of these:
JetBrains' dotTrace - http://www.jetbrains.com/profiler/index.html
Red-Gate ANTS - http://www.red-gate.com/products/ants_profiler/index.htm
Automated QA's AQTime - http://www.automatedqa.com/products/aqtime/index.asp
While I haven't (yet) tried it for myself, it may be worth looking at Gibraltar which can be used with PostSharp to put declarative performance logging into your code.
When dealing with problems like this I try not to add any extra headache by manually adding logging/tracing and timing into the application itself. If all you want is to tune the application, then I suggest getting a profiler, which will show you what areas of the code are an issue. I recommend Red Gate's ANTS Profiler.
Now if you want to collect statistics for monitoring or trending purposes, then a profiler is not the right tool. I have had success using PerformanceCounters, which let many third-party tools pull the performance information out of the application.
So what are you trying to do: solve performance problems, or monitor so that you catch a performance problem before it becomes severe?
EDIT
Based on your comment, I would look at using performance monitors around critical sections of code, timing how long it took to complete an operation. Then you can use the built in performance monitoring tools, or any number of third party tools to monitor and trend the stats.
SQL Server keeps track of some things for you, so try running some of these queries on your system:
Wayback Machine: Uncover Hidden Data to Optimize Application Performance
Here is an example from the link:
--Identifying Most Costly Queries by I/O
SELECT TOP 10
[Average IO] = (total_logical_reads + total_logical_writes) / qs.execution_count
,[Total IO] = (total_logical_reads + total_logical_writes)
,[Execution count] = qs.execution_count
,[Individual Query] = SUBSTRING (qt.text,qs.statement_start_offset/2,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)
,[Parent Query] = qt.text
,DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY [Average IO] DESC;
The link contains many queries, including ones for: Costly Missing Indexes, Logically Fragmented Indexes, Identifying Queries that Execute Most Often, etc.
I would start by diagnosing the real cause of the perf issue. Is it CPU, memory, disk, or I/O? This can be identified by a few perfmon counters.
For example, LINQ2SQL uses synchronous I/O, which can be a big bottleneck for scalability: because it uses sync I/O, Windows threads get blocked and requests end up waiting. This is a usual suspect and might not be true in your case. Here is an MSDN article on how sync I/O can affect scalability.
If CPU is the issue, then the next question is: is the application CPU bound? Then you could use one of the profilers mentioned above. Also look at the time-spent-in-GC perfmon counter; that is another usual suspect.
