Meteor methods and performance

We have some Meteor methods in our application which are not really being used. There are also a few methods that are available on both the client and the server side, but are actually being used on only one of the two ends.
Question: Can these things have an impact/major impact on the overall performance of the system?
Also, will there be a major difference in performance if we use Meteor methods or Rest APIs?

Writing unused or barely-used methods will not really impact your performance, unless you've written thousands of them, in which case the cost will mostly show up in the initial load and in the restarts of your app when you modify it.
I have never tried it myself, but ESLint may help you spot unused code; check whether it works out for you: https://guide.meteor.com/code-style.html#eslint
Also, defining methods on either the client or the server side (and ending up calling them on both) is completely normal; it just depends on the variables/props you need.
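For illustration, here is a minimal sketch of a method defined in shared code (loaded on both client and server) and called from the client; the collection and method names are placeholders, not anything from your app:

```javascript
// imports/methods.js -- loaded on both client and server
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Mongo } from 'meteor/mongo';

export const Tasks = new Mongo.Collection('tasks');

Meteor.methods({
  'tasks.setChecked'(taskId, checked) {
    check(taskId, String);
    check(checked, Boolean);
    // Runs as a client-side simulation (latency compensation) and
    // authoritatively on the server.
    Tasks.update(taskId, { $set: { checked } });
  },
});

// Somewhere in client code:
// Meteor.call('tasks.setChecked', someTaskId, true, (err) => {
//   if (err) console.error(err);
// });
```

If you only ever call such a method from one side, the stub on the other side simply sits unused; it costs a little bundle size, not runtime performance.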

Related

Cache HTTP requests in a serverless environment

I have a serverless lambda which does the following:
Start with a set of ids in the query (example.com?ids=a,b,c)
Makes an HTTP request to another webservice (based on the given ids) which I do not control
Renders the website based on that webservice's response
All works, no issues so far.
Today I introduced a new UI for my website. The user can toggle between "a tableview" and "a listview".
Because those different views can also be controlled via (another) query parameter, I do a simple "redirect" to my own website. Assuming I'm currently looking at the tableview, for the "show listview" link I have a simple <a href="example.com?ids=a,b,c&view=list">[...]</a>.
This redirect leads, of course, to another call to the "other webservice", even though I can be pretty sure that the content hasn't changed since my last call (just a few seconds/minutes ago).
My question is:
Can I somehow cache the HTTP requests from my lambda so that we won't do the call again?
I'm somewhat aware of Cache-Control headers, but since this is a serverless environment, the next invocation could (and probably will?! I don't know, but I don't even care 😅) land on another machine without this cache. And therefore it will not be a cache hit and will do the request anyway.
Please don't answer with solutions like "Use JavaScript for changing the UI". I'm aware that this is possible, but my main question is just how (and even if I can) cache such requests in a serverless environment.
Thanks in advance!
From documentation and common best practices we get the impression that a serverless function, or more specifically an AWS Lambda function, has only a very short lifespan, to the point that we need to assume a function is provisioned into its (Firecracker) micro container for a single call only and gets de-provisioned afterwards.
However, to save resources and to improve performance, the life cycle of a Lambda is rather: provisioning, use for several distinct function calls, de-provisioning.
This means that, regardless of the language used, the container gets reused for a certain amount of time. Global resources you create in that time (global variables, static objects, files) will survive beyond a single function call.
Your case
In your exact case you can then implement whichever caching strategy you want. This should work most of the time for your use-case, with two pitfalls you need to be aware of (see the sketch after this list):
The micro container gets reused across requests from different clients, meaning that you need some form of access control on your cache, if that is relevant to your use-case.
You do not have direct control over how long your Lambda container stays alive, meaning you should anticipate that every now and then a user will experience the overhead of a non-cached request just due to bad timing.
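As a minimal sketch (assuming a Node.js runtime recent enough to ship a global fetch, an API Gateway-style event, and an illustrative upstream URL and TTL that are not from your setup), module-scope caching looks roughly like this:

```javascript
// Module scope: survives across invocations while the container stays warm.
const cache = new Map();
const TTL_MS = 60 * 1000;                        // assumed TTL, tune to your needs
const UPSTREAM_URL = 'https://other-webservice.example.com/items'; // placeholder

async function fetchWithCache(ids) {
  const hit = cache.get(ids);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.data;                             // warm container + fresh entry: no upstream call
  }
  const res = await fetch(`${UPSTREAM_URL}?ids=${encodeURIComponent(ids)}`);
  const data = await res.json();
  cache.set(ids, { data, storedAt: Date.now() });
  return data;
}

exports.handler = async (event) => {
  const params = event.queryStringParameters || {};
  const data = await fetchWithCache(params.ids || '');
  // Your real handler would render the table/list view here; this just echoes the data.
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ view: params.view || 'table', data }),
  };
};
```

A cold start (or a request landing on a different container) simply falls through to the upstream call, which is exactly the second pitfall above.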
Let us know about your final solution.

What should be the number of generic handlers for an application

I have a web application running on ASP.NET 4.0 and Oracle 11g.
I am using ADO.NET to connect to the database server.
My application is already using around 15-20 generic HTTP handlers.
I am calling those generic HTTP handlers from jQuery.
I want to use more of these, but I am not sure about the effect this will have on my application.
Kindly suggest: is it a good idea to go for more generic HTTP handlers?
Edit 1
I was going through the web to find out how many concurrent HTTP requests I can have in the same tab of a browser from the same domain.
I came across a niche question on this topic
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
It suggests that even though you have async=true in your AJAX call from jQuery, the call will still have to wait until other HTTP requests have finished.
It also suggests that you can create a sub-domain to overcome this issue.
Now, can someone suggest whether I should go for more handlers or not?
I'm not sure whether you are asking about having 20 handlers defined, or 20 handlers invoked from jQuery, so I will address both.
In terms of defining many handlers, a generic HTTP handler (ASHX) is similar to the ASP.NET page handler (ASPX), but more lightweight in that it does not have the full lifecycle of a page and is not intended for returning UI. Many large-scale applications have hundreds of ASPX pages defined, which is consistent with the design intention of ASP.NET Web Forms, where every UI page is a distinct ASPX. So having hundreds of ASHX handlers would be even lighter than hundreds of ASPX pages, and no problem at all.
In terms of invoking 20 handlers, here we get into the conversation about "chunky" versus "chatty" interfaces. When interfacing over a WAN (i.e., between browser and server), a "chunky" interface (one which makes a smaller number of heavier calls) is better: when you try to scale your application, a "chatty" interface (one which makes a higher number of lighter calls) will hold open many more connections on the server, will often cause more load on the database in terms of a higher number of transactions and a higher number of open simultaneous connections, and therefore will generally not scale as well on the server side.
On the browser side, the news is even worse. Per the HTTP specification, browsers limit you to two simultaneous requests, so if you mean to fire off your 20 requests all at once, it will not happen, which means you may see performance problems from having so many jQuery get/post calls queueing up at one time.
The tradeoff, of course, is that the programming is often cleaner with a "chatty" interface. So here you must make a judgement about your future scaling needs versus the importance of cleaner code.
I would say if you're building an application that, for its expected life and evolution, can comfortably run all of its traffic on a single web and single database server; AND, your browser code is set up in such a way that the 2 simultaneous requests will not cause you any performance issue, then it is reasonable to go for the "chatty" interface if it gives you much cleaner code.
But if you expect there to be a need for scaling beyond a single server; OR, there are common use cases where many of these jquery get/posts will be invoked simultaneously and hamper performance, then by all means I would refactor to a more "chunky" interface, which would mean not calling more than 20 handlers from a single page via jQuery.
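To make the distinction concrete, here is a rough sketch from the jQuery side; the handler names and response shape are invented for illustration:

```javascript
// "Chatty": one request per piece of data. Each call queues behind the
// browser's per-host connection limit and opens its own server/DB work.
$.getJSON('/handlers/GetCustomer.ashx', { id: 42 }, function (c) { console.log(c); });
$.getJSON('/handlers/GetOrders.ashx', { customerId: 42 }, function (o) { console.log(o); });
$.getJSON('/handlers/GetInvoices.ashx', { customerId: 42 }, function (i) { console.log(i); });

// "Chunky": one handler returns everything the page needs in a single round trip.
$.getJSON('/handlers/GetCustomerDashboard.ashx', { customerId: 42 }, function (data) {
  console.log(data.customer, data.orders, data.invoices);
});
```

The chunky version holds one connection and does one round of server work per page action instead of three, at the cost of a heavier, more purpose-built handler.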
If you've read this and still can't decide which is right, then I would recommend refactoring the interface to make it more "chunky".
Hope this helps, and best of luck to you!

Deprecating ASP.NET Web Methods

I have some internal-facing ASP.NET web services that have had numerous API additions over the years. Some of the original web methods, while still available for consumption, have recommended replacements available. I would like to steer consuming clients toward using these new methods so I can retire and eventually remove their elders.
If this were a client API rather than a web service API, I'd just mark the offending methods with the obsolete attribute. But .NET attributes do not get serialized and are not visible to consuming developers when they add or refresh web references.
What techniques are recommended for obsoleting ASP.NET web methods? Is there anything built into the tooling (VS2005-2010)? I don't want to break any of the existing clients, so I can't simply remove the web methods outright or change their internal behavior to report their usage as erroneous.
Tim, the short answer to this is unfortunately that you have to contact those clients and communicate the change with them and agree on timelines etc. There might be something that you can do to smooth the process over for them, particularly if they are not IT savvy clients and had to get their applications built by external contractors.
You can butter this up any way you like for them really, from "the system is going to be replaced" to "we are doing it bigger, better and faster".
Additionally, you can build in code to slow them down (NOT RECOMMENDED), but then when they inquire you can tell them "we don't support that system any longer, it has been replaced by system 'X'".
If the new methods you are talking about are still just web methods, you can simply point the old ones to the new ones and let the clients keep using the old ones.
Another option is to identify the clients stuck on the old methods, get their IP addresses, and lock the old methods down so only those clients can use them; this way you ensure new clients will not attempt to connect to the old methods.
Other than that, I cannot think of anything that will not be a pain or difficult for both you and the client.

Do ASP.NET developers really need to be concerned with thread safety?

I consider myself aware of the concepts of threading and why certain code is or isn't "thread-safe," but as someone who primarily works with ASP.NET, threading and thread safety are things I rarely think about. However, I seem to run across numerous comments and answers (not necessarily for ASP.NET) on Stack Overflow to the effect of "warning – that's not thread-safe!", and it tends to make me second-guess whether I've written similar code that actually could cause a problem in my applications. [shock, horror, etc.] So I'm compelled to ask:
Do ASP.NET developers really need to be concerned with thread safety?
My Take: While a web application is inherently multi-threaded, each particular request comes in on a single thread, and all non-static types you create, modify, or destroy are exclusive to that single thread/request. If the request creates an instance of a DAL object which creates an instance of a business object and I want to lazy-initialize a collection within this object, it doesn’t matter if it’s not thread-safe, because it will never be touched by another thread. ...Right? (Let’s assume I’m not starting a new thread to kick off a long-running asynchronous process during the request. I’m well aware that changes everything.)
Of course, static classes, methods and variables are just the opposite. They are shared by every request, and the developer must be very careful not to have “unsafe” code that when executed by one user, can have an unintended effect on all others.
But that’s about it, and thus thread safety in ASP.NET mostly boils down to this: Be careful how you design and use statics. Other than that, you don’t need to worry about it much at all.
Am I wrong about any of this? Do you disagree? Enlighten me!
There are certain objects in addition to static items that are shared across all requests to an application. Be careful about putting items in the application cache that are not thread-safe, for example. Also, nothing prevents you from spawning your own threads for background processing while handling a request.
There are different levels of ASP.NET developers. You could make a perfectly fine career as an ASP.NET developer without knowing anything about threads, mutexes, locks, semaphores, or even design patterns, because a high percentage of ASP.NET applications are basically CRUD applications with little to no additional business logic.
However, most great ASP.NET developers I have come across aren't just ASP.NET developers; their skills run the gamut, so they know all about threading and other good stuff because they don't limit themselves to ASP.NET.
So no, for the most part ASP.NET Developers do not need to know about thread safety. But what fun is there in only knowing the bare minimum?
Only if you create, within the processing stream for a single HTTP request, multiple threads of your own... For example, if the web page will display stock quotes for a set of stocks, and you make separate calls to a stock quote service, on independent threads, to retrieve the quotes before generating the page to send back to the client, then you would have to make sure that the code you are running in your threads is thread-safe.
I believe you covered it all very well, and I agree with you. When you focus on ASP.NET only, multi-threading issues rarely (if ever) come up.
The situation changes, however, when it comes to optimizations. Whenever you start a long-running query, you may want to let it run on a separate thread so that the page load does not stall until the server reports a connection timeout. You may also wish to have the page periodically check for completion status to notify the user. This is where multi-threading issues come in.

Sharing Logic Between the Browser and the Server

I'm working on an app which will, like most apps, have a whole boatload of business logic, almost all of which will need to be executed both on the server and on the Flash-based client… And I'm trying to figure out the best (read: least complex) way to implement the rules engine.
These are the parameters of the problem:
The rules engine must run both in a web browser (i.e., in Flash Player) and on the server. Duplicating the logic (e.g., by writing a "server" version and a "client" version) would be an unacceptable risk.
The input/output data is fairly complex, so serialization is a nontrivial problem. We are currently using AMF for all of our serialization needs, and using another protocol would add significant complexity… So it should probably be avoided.
It is infeasible to implement a "rules description language". Experimentation has shown that rules are sufficiently complex that any such language would need to be Turing complete… Which would also add a significant amount of complexity.
The rules engine will need to make some, but not very many, service calls.
Currently, the best contenders are:
Writing the code in ActionScript, then running it on the server. In theory it's possible to start up an AVM instance, get it long-polling a gateway, then pass data back and forth that way… But that seems less than ideal. Is there a "good" way of doing this?
Writing the code in Haxe. I don't know anything about Haxe's AMF support, so that could be a deal-breaker.
Something involving Tamarin. Seems like a viable option, but I haven't done enough research to tell either way.
So, what do you think? Are any of these options clearly better than the others? Is there something I haven't thought of that's worth considering?
Finally, thanks for reading this wall of text :)
How much data are you talking about? You can use AIR if you want to run it on the server and access a queue or something.
