Best practices for sharing scripts across proxy bundles - Apigee

I would like to reuse some JavaScript resources across my API proxy bundles, but of course I would like to think through the best practices for accomplishing this first.
For example, given the policy:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Javascript async="false" continueOnError="false" enabled="true" timeLimit="200" name="script-trafficmanagementvars">
<DisplayName>Script-TrafficManagementVars</DisplayName>
<FaultRules/>
<Properties/>
<IncludeURL>jsc://Script-FunctionLibrary.js</IncludeURL>
<IncludeURL>jsc://Script-ErrorHandling.js</IncludeURL>
<ResourceURL>jsc://Script-TrafficManagementVars.js</ResourceURL>
</Javascript>
The following scripts are used across multiple proxies:
Script-FunctionLibrary.js
Script-ErrorHandling.js
While this one is specific to a single proxy:
Script-TrafficManagementVars.js
I'd like your comments on the best practice for doing this.
I'm concerned about this approach because it builds dependencies across proxies. However, I'm noticing opportunities for shared artifacts across proxies.
Quick note:
I deploy my proxies using a maven build pack (4G-gateway-maven-build-pack).
Thanks

You can check out Resources, which allows you to share resources across environments or organizations. Doing this will give you the added benefit of shorter deploy times.

All JavaScript files available across an Org are stored under:
/organizations/{org_name}/resourcefiles/jsc
JavaScript files stored in the /organizations collection are available to all the API proxies running in any environment.
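For illustration, the policy from your question should not need to change once the shared scripts are stored at the organization level; as far as I know, jsc:// references are resolved against the proxy revision first, then the environment, then the organization:

<!-- Same includes as in the policy above; the references stay identical. -->
<!-- If Script-FunctionLibrary.js and Script-ErrorHandling.js are not bundled with the proxy, -->
<!-- the runtime falls back to environment- and then organization-level resource files. -->
<IncludeURL>jsc://Script-FunctionLibrary.js</IncludeURL>
<IncludeURL>jsc://Script-ErrorHandling.js</IncludeURL>
<ResourceURL>jsc://Script-TrafficManagementVars.js</ResourceURL>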

Related

Spring Boot jar on Azure Websites performance issues

I have an application built as a Spring Boot fat jar.
I host it in Azure Websites according to the "official" documentation, with a web.config similar to:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<handlers>
<add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
</handlers>
<httpPlatform processPath="%JAVA_HOME%\bin\java.exe" arguments="-Djava.net.preferIPv4Stack=true -Dserver.port=%HTTP_PLATFORM_PORT% -jar "%HOME%\site\wwwroot\my-web-project.jar"">
</httpPlatform>
</system.webServer>
</configuration>
The application is monolithic in structure and not too large, but it does some mapping and has some layers to initialize, so startup time is about 12 seconds locally.
It runs an H2 in memory database just for testing purposes.
Actual deployment and getting it running on Azure Websites were never really a problem, but there are some real performance issues, at least with the settings I have.
Some settings of interest:
Standard S1 instance (at the time of writing costing about ~$40USD/month).
Webapp configured with:
Tomcat 8 (shouldn't really matter, as the fat jar runs embedded Tomcat)
JDK 8
Always on enabled
To have something to compare the numbers against, I ran the application on an Azure VM I have with similar (but not identical) specs in the same price range.
Here are some of the results:
Startup time of the application:
Azure websites: ~2 minutes
VM: ~30 sec
Cold call:
Deployed/started the application, left it overnight, and made a call the day after.
Azure websites: 31119 ms
VM: 219 ms
Subsequent call:
A call directly after the cold call, but to another endpoint.
Azure websites: 2685 ms
VM: 223 ms
My question here is:
Does anyone know if it is viable to run Spring Boot fat jars hosted on Azure Websites?
As there is official documentation from Microsoft, one would think that it is, and of course technically it is, but is it viable in production?
I'm not really after a debate about AWS vs. Azure vs. Google App Engine,
or whether to package wars or jars, or how to host it.
I have reasons for wanting it this way. If it's not possible I have other options, but I would like to explore the idea first and see if anyone else has had better experiences.
Edit: Just to add to the information: the database was empty for all calls, so that shouldn't add any overhead to speak of. No data was actually fetched, only empty lists.
Although this is an old question and my answer is of limited value since I have no knowledge of Azure, I would like to share the results of my research. I have a couple of Spring Boot microservices and one relatively big service (a 170 MB fat jar that starts in about 70 seconds on my local machine). Azure starts the small microservices in tens (if not hundreds) of seconds, and I mean really small microservices (an example config server from Spring Cloud, etc.; there is nothing fancy going on there). As for the big one, Azure starts it, and after 2 minutes starts it again... and again... and again. As a result it never gets started.
And this was all on a B3 App Service plan, so quite a big plan.
Wrapping up: in my (strongly subjective) opinion, Azure is not a viable option for Java apps.
Regarding your question about running Spring Boot fat jars hosted on Azure Websites: it is viable. Please refer to my answer in the SO thread Deploying Springboot to Azure App Service, and focus on the official documentation for the Spring Boot part.
Hope it helps. If you have any concerns, please feel free to let me know.

Sharing web.config settings across applications

I'm trying to implement ASP.NET FormsAuthentication across many applications. I already have a working prototype that shares the same login across a couple of applications. As described here, the applications need to share some settings for this to work, for example the machineKey section:
<machineKey validationKey="C50B3C89CB21F4F1422FF158A5B42D0E8DB8CB5CDA1742572A487D9401E3400267682B202B746511891C1BAF47F8D25C07F6C39A104696DB51F17C529AD3CABE"
decryptionKey="8A9BE8FD67AF6979E7D20198CFEA50DD3D3799C77AF2B72F"
validation="SHA1" />
I would like to have these settings in one place and use them in all of the applications. It would not make sense to have the same settings in 10 applications. If I want to change a setting, I want to do it in one place and have all the applications to use that afterwards.
Is it possible to have these settings, for example, in a class library project which the other applications use? How would you implement this? I tried the configSource attribute, but I think I cannot use it with a config file inside the class library. Am I right?
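What I tried with configSource looked roughly like this (a sketch; the external file name is made up):

<!-- In each application's web.config -->
<machineKey configSource="MachineKey.config" />

<!-- MachineKey.config then contains the full element -->
<machineKey validationKey="..." decryptionKey="..." validation="SHA1" />

As far as I can tell, configSource must point to a file inside the application's own directory tree, which seems to rule out referencing one inside a class library.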
What other approaches have you used? All comments are welcome. Thanks!
The best solution would be to define it in your web.config and include it in your deploy packages. That way, when you deploy a new version with a modified web.config, the changes are deployed everywhere.
You can also use XML transforms (http://msdn.microsoft.com/en-us/library/dd465326.aspx) to ease the management of these settings (debug and production settings may differ). You can then use a different project configuration (and therefore different web.config settings) for each publish profile.
The main advantage is that the whole process is then automated and less error-prone.
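For example, a Web.Release.config transform that swaps in the production machineKey might look roughly like this (a sketch; key values omitted):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Replace the machineKey element when publishing with the Release profile -->
    <machineKey xdt:Transform="Replace"
                validationKey="[production validation key]"
                decryptionKey="[production decryption key]"
                validation="SHA1" />
  </system.web>
</configuration>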

Hosting static content on different domain from webservices, how to avoid cross-domain?

We've recently been working on a fairly modern web app and are ready to begin deploying it for alpha/beta and getting some real-world experience with it.
We have ASP.NET-based web services (Web API) and a JavaScript front-end which is 100% client-side MVC using Backbone.
We have purchased our domain name, and for the sake of this question our deployment looks like this:
webservices.mydomain.com (Webservices)
mydomain.com (JavaScript front-end)
If the JavaScript attempts to talk to the web services on the sub-domain, we run into cross-domain issues. I've played around with CORS but am not satisfied with the cross-browser support, so I'm counting it out as an option.
On our development PCs we have used an IIS reverse proxy to forward all requests to mydomain.com/webservices to webservices.mydomain.com - which solves all our problems, as the browser thinks everything is on the same domain.
So my question is: in a public deployment, how is this issue most commonly solved? Is a reverse proxy the right way to do it? If so, are there any hosted services that offer a reverse proxy for this situation? Are there better ways of deploying this?
I want to use the CloudFront CDN as all our servers/services are hosted with Amazon, but I'm really struggling to find info on whether a CDN can support this type of setup.
Thanks
What you are trying to do is make cross-subdomain calls, not entirely cross-domain ones.
There are tricks for that: http://www.tomhoppe.com/index.php/2008/03/cross-sub-domain-javascript-ajax-iframe-etc/
As for how this issue is most commonly solved: it is commonly AVOIDED. In the real world you would set up your domains so that you don't need such workarounds just to get your application running, or you would set up a proxy server to forward the calls for you. JSONP is also a hack-ish solution.
To allow this web service to be called from script using ASP.NET AJAX, add the following line to the first web service code-behind:
[System.Web.Script.Services.ScriptService]
You can simply use JSONP for AJAX requests; then cross-domain is not an issue.
If an AJAX request returns some HTML, it can be escaped into a JSON string.
The second option is a little awkward, though.
You have two or three layers here.
In the web service code-behind class, add this attribute: <System.Web.Script.Services.ScriptService()> _
You may also need to add this in the system.web node of your web.config:
<webServices>
  <protocols>
    <add name="AnyHttpSoap"/>
    <add name="HttpPost"/>
    <add name="HttpGet"/>
  </protocols>
</webServices>
In the client-side interface:
-Add a web reference to the service on the subdomain (e.g. webservices.mydomain.com/svc.asmx); Visual Studio generates the "proxy class".
-Add the functionality in the master page's, page's or control's code-behind.
-Simply call these functions from the client side.
You can use the AJAX functionality with ScriptManager or use another library such as jQuery.
If your main website is compiled in .NET 3.5 or older, you need to add a reference to the namespace System.Web.Extensions and declare it in your web.config file.
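That declaration usually looks something like the following for .NET 3.5 (a sketch; verify the version against your framework):

<system.web>
  <compilation>
    <assemblies>
      <!-- Makes the ASP.NET AJAX extensions available to the site -->
      <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
    </assemblies>
  </compilation>
</system.web>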
If you have the bandwidth (network I/O and CPU) to handle this, a reverse proxy is an excellent solution. A good reverse proxy will even cache static calls to help mitigate the network delay introduced by the proxy.
The other option is to setup the proper cross domain policy files and/or headers. Doing this in some cloud providers can be hard or even impossible. I recently ran into issues with font files and IE not being happy with cross domain calls. We could not get the cloud storage provider we were using to set the correct headers, so we hosted them locally rather than have to deal with a reverse proxy.
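For reference, a minimal sketch of the kind of IIS rewrite rule the question describes for its dev setup (this assumes the URL Rewrite module and Application Request Routing are installed and proxying is enabled; host names are taken from the question):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward /webservices/* on the front-end site to the web services host -->
      <rule name="ForwardToWebservices" stopProcessing="true">
        <match url="^webservices/(.*)" />
        <action type="Rewrite" url="http://webservices.mydomain.com/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>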
easyXDM is a cross domain Javascript plugin that may be worth exploring. It makes use of standards when the browser supports them, and abstracts away the various hacks required when the browser doesn't support the standards. From easyXDM.net:
easyXDM is a Javascript library that enables you as a developer to
easily work around the limitation set in place by the Same Origin
Policy, in turn making it easy to communicate and expose javascript
API’s across domain boundaries.
At the core easyXDM provides a transport stack capable of passing
string based messages between two windows, a consumer (the main
document) and a provider (a document included using an iframe). It
does this by using one of several available techniques, always
selecting the most efficient one for the current browser. For all
implementations the transport stack offers bi-directionality,
reliability, queueing and sender-verification.
One of the goals of easyXDM is to support all browsers that are in
common use, and to provide the same features for all. One of the
strategies for reaching this is to follow defined standards, plus
using feature detection to assure the use of the most efficient one.
To quote easy XDM's author:
...sites like LinkedIn, Twitter and Disqus as well as applications run
by Nokia and others have built their applications on top of the
messaging framework provided by easyXDM.
So easyXDM is clearly not some poxy hack, but I admit it's a big dependency to take on for your project.
The current state of the web is that if you want to push the envelope, you have to use feature detection and polyfills, or simply force your users to upgrade to an HTML5 browser. If that makes you squirm, you're not alone, but the polyfills are a kind of temporary evil needed to get from where the web is to where we'd like it to be.
See also this SO question.

Wrong protocol for crossdomain.xml in Flex app

I've changed the protocol for my Flex app from https to http, and Flash Player still wants to download the crossdomain.xml using https, though with the port for http.
The app is accessed at http://domain01:8080/flex and it wants to get https:..samedomain..:8080/crossdomain.xml (at https:..samedomain..no_port/flex it works fine).
Anyone any idea why?
Thanks a lot,
Daniel
No direct answer, as I haven't tried this scenario of specifying a non-default port, but here are a couple of pieces of info that might lead you to an answer:
http://learn.adobe.com/wiki/download/attachments/64389123/CrossDomain_PolicyFile_Specification.pdf?version=1
This might be of interest:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*.example.com" to-ports="507,516-523"/>
</cross-domain-policy>
or this:
As of 10,0,12,0, site-control's permitted-cross-domain-policies default for non-socket policy files is "master-only".
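For reference, a hedged sketch of a master policy file that relaxes that default and also lets content loaded over http use a policy served over https (the domain and flags are illustrative, not taken from your setup):

<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Allow policy files other than the master policy to be honoured -->
  <site-control permitted-cross-domain-policies="all"/>
  <!-- secure="false" permits non-https content to use this https-served policy -->
  <allow-access-from domain="*.samedomain.com" secure="false"/>
</cross-domain-policy>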
Maybe try an older version of Flash Player to see if something in the changes from 9 to 10 is causing the issue; finding the change in the change logs might then be easier. Or perhaps it's a bug in the new version.
Good luck
Shaun
Flex (at least 3.5, AFAIK) gets a bit of an identity crisis when you change the port and use https... The security model depends on the port. I do not know the exact reason for the problem, but my solution was to load the crossdomain file in your app explicitly:
import flash.system.Security;
Security.loadPolicyFile('https://mydomain:port/crossdomain.xml');
When you run into crossdomain issues, it's worth remembering that by using the Security class, you can always take explicit control over what crossdomain.xml file is loaded (in fact, the policy file can have any name you want). The default behavior of loading the policy file from the root of a server can often be too restrictive when dealing with more complex, real-world cases (with load-balancing or reverse proxies, for instance).
Try using:
Security.loadPolicyFile(<URI to the policy file goes here>);
The ASDocs are here and explain it quite well.
By taking control of how policies are loaded, you can gain more freedom and take a lot of the guesswork out of what can otherwise be a painful, frustrating experience. The Flash Player allows you to load multiple policy files which is handy if you need to integrate with more than one service layer (e.g. on one host through HTTPS and another through HTTP).
Good luck,
Taylor

Performance boosters for an ASP.NET website on a production server

I have an ASP.NET WebForms application on a production server and it is really slow. So I decided to get some performance tips from my fellow SO users.
I've applied these to increase my ASP.NET website's performance:
Set debug=false
Turn off Tracing
Image caching
<caching>
  <profiles>
    <add extension=".png" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
    <add extension=".jpg" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
    <add extension=".gif" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
  </profiles>
</caching>
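For reference, the first two items correspond to web.config entries roughly like these (shown as a sketch; my actual file may differ slightly):

<system.web>
  <!-- 1. Never run production with debug="true" -->
  <compilation debug="false" />
  <!-- 2. Turn off tracing -->
  <trace enabled="false" />
</system.web>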
Do you know of any other real performance boosters? Any suggestions...
A web page can be fast only by design.
A simple option cannot make your page load faster. Setting debug=false only eliminates the extra debugging functions, and if you are not using them it does not change much.
I agree with everything Paul says, and you can find it here in more detail, but I have to add a few extra points...
You need to follow some guidelines and do a lot of work to make pages really fast.
What I follow:
I use a (custom) cache for my database actions, which really boosts data-loading speed, but at the same time it adds much more code and I have spent a lot of time on it.
I have used a profiler to find the slow points on the page and correct them.
I use the Inspector in the Google Chrome browser to locate slow-loading and double-loading problems.
I have eliminated the duplicate use/creation of variables in custom controls.
I use caching in the client browser based on these suggestions.
I use a web farm and/or web garden (more than one pool).
How to see your page speed: http://code.google.com/speed/page-speed/docs/using.html
Optimize cache: http://code.google.com/speed/page-speed/docs/caching.html
Many general topic from google can be found here: http://code.google.com/speed/articles/
About caching: http://www.mnot.net/cache_docs/
Hope this helps.
Not directly ASP.NET, but
1) Make sure compression is enabled within IIS.
2) If your site is cookie "heavy", host static files (images, CSS and JS) on a separate domain. Each request back to the server needs to send all of the site's cookie information back to the server. So if your cookie usage is 10kb+, then 20 static file references within the page will result in an extra 200kb being sent back to the server in total. If you move the static files to a domain which has no cookie requirements, you remove this overhead. It is worth noting that due to a "fault" in how IE processes things, you don't get any benefit from using subdomains; IE appears to insist on sending all domain cookies to subdomains. An additional benefit of this is allowing more HTTP requests in parallel.
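A minimal sketch of the IIS 7+ setting for point 1 (assuming the static and dynamic compression modules are installed):

<system.webServer>
  <!-- Enable compression for both static and dynamic responses -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>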
Stop hacking your production server (that is likely to introduce functional bugs) and take a step back. Can you reproduce the performance problems in a non-production environment? If not, then do the work to try.
You should try to reproduce the problem as follows:
Get production-grade hardware in your test environment - web and database servers etc - running the same hardware as production
Run the same software set as production - this includes the same configuration of ASP.NET and all other services used.
Load production-size data (production data if possible) into your databases (remember to firewall your lab from the internet so that it cannot send mail or other things to the internet, or your users might start receiving email notifications from the test system which would be bad!)
Create simulated traffic to the site to production level - this is potentially quite tricky but there are a lot of tools available
Now that you've got a fighting chance to reproduce the problem in testing, you can try solutions.
Usually database-driven web sites are bottlenecked by the database, so I'd start there. The main tricks are
Do fewer queries
Optimise the queries you do do (fetch less data, use appropriate indexes etc)
Change the database structure so that the queries you have to do are easier for it (clustered indexes etc, denormalise possibly)
But any change you make, try it on your test system, measure the results, and if it doesn't help, ROLL IT BACK.
In general configuration changes are likely to make only minor differences, but you can try those too.
If all this sounds like too much effort, try throwing hardware at the problem - developer time is much more expensive than hardware. In the time it takes you to do the above (could be months depending on the complexity of the app) you could have bought some meaty production boxes. But be sure that it's going to help.
Does your database fit in RAM? Could it possibly fit in RAM? If the answers to those questions are no and yes respectively, buy more RAM for the database server. This is one of the cheapest ways of making your DB go faster without code changes.
