I am working on a chat application using SignalR.
I was wondering when to use multiple hubs in SignalR. What are the advantages of using multiple hubs, and is a single hub a good approach?
Per https://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-server#multiplehubs
There is no performance difference for multiple Hubs compared to
defining all Hub functionality in a single class.
Therefore, it is simply a matter of deciding whether you want to use multiple hubs to organize your code/functionality. OOP principles certainly apply, but it's a logical decision you have to make.
You should create a separate hub if it would serve a different purpose from your existing ones. Follow the same approach you normally do in OOP: a class should represent a logical unit.
Every new hub connection you open keeps a live connection to the server, each consuming network and processing resources. This matters especially on mobile devices, where extra connections drain the battery.
I would keep a single hub whenever possible. A single hub on the server can be used from different logical parts or layers of your code.
I would like to use Kotlin for a Linux desktop application. Kotlin does not have a good UI library, and I decided Qt would work well, so I thought I would combine the two. I do not want to use a bindings library, since there seems to be no stable and maintained language binding. Instead, I would like to connect the two through ZeroMQ, with two-way communication (the UI needs to react to back-end events too).
Has anyone tried such an architecture, or a similar one? Will there be any problems, such as validation or not being able to bind to the data? I would like to minimize the use of C++ and use Kotlin for the application logic, database, and HTTP communication with the web server.
I am looking to build a medium-complexity embedded touch-based interface (buttons, text fields, data rows).
Has anyone tried that? Is there a design error?
Communication between ZeroMQ and the UI would resemble the EventBus pattern.
Q : Has anyone tried such architecture or similar?
Yes.
Q : Is there a design error?
No.
Given a right-sized problem, the best production-grade results come from extending the industry-proven Model-View-Controller pattern (promoted as early as the ParcPlace Systems Smalltalk days in the early 1980s, so it has had plenty of time to prove itself best in class, hasn't it?).
I have implemented the MVC architecture pattern as a distributed system, integrated on top of a smart ZeroMQ communication infrastructure. A remote keyboard was one of the remoted Controller inputs (with a dumb CLI Visual); another host (backed by a computing grid) consolidated and operated the global Model and all the MVC state transitions; and yet another remote platform provided the Visual part, for the GUI and other MMI interactions, which fed back into the central Model.
Indeed a lovely way to design arbitrarily complex systems!
It was a robust, smart, scalable and maintainable architecture, and I would recommend following this path forwards.
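To make the two-way EventBus idea concrete, here is a minimal sketch. It uses two in-process queues to stand in for the two directions of a ZeroMQ channel (for example a PAIR socket on the Kotlin side and one on the Qt side); the class and event names are illustrative, not a real API, and in a real deployment the queues would be replaced by actual ZeroMQ sockets crossing the process boundary.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two queues model the two directions of the UI <-> backend channel.
public class UiBackendBus {
    static final BlockingQueue<String> uiToBackend = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> backendToUi = new LinkedBlockingQueue<>();

    // Round-trip: the UI publishes an event, the backend reacts,
    // and the UI receives the resulting update.
    public static String roundTrip(String uiEvent) throws InterruptedException {
        Thread backend = new Thread(() -> {
            try {
                String event = uiToBackend.take();   // backend consumes the UI event
                backendToUi.put("ack:" + event);     // ...and publishes its reaction
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        backend.start();
        uiToBackend.put(uiEvent);                    // UI side publishes
        String reply = backendToUi.take();           // UI side reacts to the backend event
        backend.join();
        return reply;
    }
}
```

The important property is that neither side calls the other directly; both only publish and consume events, which is exactly what makes the Kotlin and Qt halves swappable.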
I want to generate a very short unique ID in my web app that can be used to handle sessions. Sessions, as in users connecting to each other's sessions with this ID.
But how can I keep track of these IDs? Basically, I want to generate a short ID, check if it is already in use, and create a new one if it is.
Could I simply have a static class, that has a collection of these IDs? Are there other smarter, better ways to do this?
I would like to avoid using a database for this, if possible.
Generally, static variables, wherever they are declared, stay alive for the application's lifetime. The application lifetime ends after the last request has been processed and a configurable idle period (set in web.config) has elapsed. As a result, you can define a static variable to store your short IDs wherever is convenient.
However, there are a number of tools and well-known third parties that are candidates to choose from. Memcached is one of the major facilities that deserves your notice, as it is widely used in giant applications such as Facebook and others.
Depending on how you want to arrange your software architecture, you may prefer your own facilities or a third party. The most considerable advantage of third-party tools is that they are industry-standard and well tested in many situations, incorporating best practices; writing your own functions, on the other hand, gives you the power to write minimal code with more tailored behavior, plus the ability to debug, which cannot be ignored in such situations.
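To illustrate the static-collection idea, here is a minimal sketch, assuming an in-memory registry is acceptable (the alphabet, length, and class name are arbitrary choices, not a standard):

```java
import java.security.SecureRandom;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// In-memory registry of short session IDs; static fields live as long
// as the application does, matching the "static class" idea above.
public class SessionIds {
    private static final String ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // no 0/O, 1/I
    private static final SecureRandom RNG = new SecureRandom();
    private static final Set<String> inUse = ConcurrentHashMap.newKeySet();

    // Generate a 6-character ID, retrying until it is not already taken.
    public static String newId() {
        while (true) {
            StringBuilder sb = new StringBuilder(6);
            for (int i = 0; i < 6; i++) {
                sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
            }
            String id = sb.toString();
            if (inUse.add(id)) {   // add() is atomic: false means a collision, so retry
                return id;
            }
        }
    }

    public static void release(String id) {
        inUse.remove(id);
    }
}
```

Note that a concurrent set is used rather than a plain HashSet, since multiple requests may generate IDs at the same time; and remember that this registry is lost on application restart, which is the trade-off of avoiding a database.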
Is it better to have one servlet running multiple tasks, or have multiple servlets?
eg.
At the moment I have it like this:
ViewCarsServlet
CarViewSalesServlet
AddCarSaleServlet
With each servlet handling its own requests.
But would it be better to have one, such as CarServlet,
and then pass a task variable into an if statement?
Which would be better coding practice?
It's better to have multiple servlets for multiple tasks. It will not affect performance, and it is more maintainable: for a particular task we can hit a separate servlet instead of making one servlet complex with lots of if/else conditions. If we use only one servlet, every request has to check the conditions before executing the respective task.
Define "better".
My personal taste would group related operations into a single servlet. I'd think about it as exposing a REST API of operations that went together. But that's just my personal opinion. I don't know of a "right" answer that everyone would agree to.
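If you do consolidate into a single CarServlet, a handler map reads better than an if/else chain. Here is a sketch with plain functions standing in for the three original servlets (the task names and return strings are placeholders; a real servlet would dispatch from doGet/doPost based on the path or a request parameter):

```java
import java.util.Map;
import java.util.function.Function;

// One "CarServlet" dispatching by task name via a lookup table
// instead of a chain of if/else blocks.
public class CarDispatcher {
    private static final Map<String, Function<String, String>> HANDLERS = Map.of(
        "view",    id -> "viewing car " + id,        // stands in for ViewCarsServlet
        "sales",   id -> "sales for car " + id,      // stands in for CarViewSalesServlet
        "addSale", id -> "added sale for car " + id  // stands in for AddCarSaleServlet
    );

    public static String handle(String task, String carId) {
        Function<String, String> handler = HANDLERS.get(task);
        if (handler == null) {
            return "404: unknown task " + task;      // real code would send an HTTP 404
        }
        return handler.apply(carId);
    }
}
```

Adding a new task then means adding one map entry rather than another branch, which keeps the single-servlet approach from degenerating into one big conditional.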
My question is possibly a subtle one:
Web services: are they extensions of the presentation/web layer, or are they extensions of the business/data layer?
That may seem like a dumb question. Web services are an extension of the web tier. I'm not so sure though. I'm building a pretty standard webform with some AJAX-y features, and it seems to me I could build the web services in one of two ways:
they could retrieve data for me (biz/data layer extension).
example: GetUserData(userEmail)
where the web form has javascript on it that knows how to consume the user data and make changes to markup
they could return completely rendered user controls (html; extension of web layer)
example: RenderUserProfileControl(userEmail)
where the web form has simple/dumb js that only copies and pastes the web service html in to the form
I could see it working in either scenario, but I'm interested in different points of view... Thoughts?
In my mind, a web service has 2 characteristics:
it exposes data to external sources, i.e. other sources than the application they reside within. In this sense I agree with #Pete in that you're not really designing a web service; you're designing a helper class that responds to requests in a web-service-like fashion. A semantic distinction, perhaps, but one that's proved useful to me.
it returns data (and only data) in a format that is reusable by multiple consumers. For me this is the answer to your "why not #2" question - if you return web-control-like structures then you limit the usefulness of the web service to other potential callers. They must present the data the way you're returning it, and can't choose to represent it in another way, which minimises the usefulness (and re-usefulness) of the service as a whole.
All of that said, if what you really are looking at is a helper class that responds like a web-service and you only ever intend to use it in this one use case then you can do whatever you like, and your case #2 will work. From my perspective, though, it breaks the separation of responsibilities; you're combining data-access and rendering functions in the same class. I suspect that even if you don't care about MVC patterns option #2 will make your classes harder to maintain, and you're certainly limiting their future usefulness to you; if you ever wanted to access the same data but render it differently you'd need to refactor.
I would say definitely not #2, but #1 is valid.
I also think (and this is opinion) that web services as a data access layer is not ideal. The service has to have a little bit more value (in general - I am sure there are notable exceptions to this).
Even in scenario #1, the service is presenting data that lives in the data layer; it is not part of the data layer itself. It is just presenting the data in a format other than a UI format (i.e. JSON, XML, etc.).
Regarding which scenario I would use, I would go for scenario #1, as that service is reusable in other web forms and other scenarios.
While #1 (def. not #2) is generally the correct approach (expose just the data needed to the view layer and have all markup handled there), be careful with the web portion of the service in your design.
Data should only be exposed as a web service (SOAP/WSDL, REST) if it is meant to be consumed remotely (some SOA architects may argue this, but I think that is out of scope for this question), otherwise you are likely doing too much, and over-designing your request and response format. Use what makes sense for your application - an Ajax framework that facilitates client/server communication and abstracts the underlying format of communication can be a big help. The important thing is to nicely encapsulate the code that retrieves the data you want (you can call it a service, but likely it will just be a nicely written helper class) so it can be re-used, and then expose this data in whatever way makes the most sense for the given application.
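The data-vs-markup separation argued for above can be sketched as follows. The types and strings are illustrative, not a real API: the "service" returns only data, and rendering lives in a separate view-layer helper that is merely one of many possible consumers.

```java
// Option #1 from the question: the service returns only data;
// rendering stays in the view layer.
public class ProfileService {
    public record UserProfile(String email, String displayName) {}

    // "Service" method: data only, reusable by any consumer (HTML page,
    // mobile client, another service...).
    public static UserProfile getUserData(String userEmail) {
        return new UserProfile(userEmail, "Alice");   // stand-in for a real DB lookup
    }

    // View-layer helper: one of many possible renderings of the same data.
    public static String renderProfileHtml(UserProfile p) {
        return "<div class=\"profile\">" + p.displayName()
             + " &lt;" + p.email() + "&gt;</div>";
    }
}
```

If a second consumer later wants the same profile as JSON or in a native mobile view, only a new renderer is needed; option #2 would have baked the HTML into the service and forced a refactor.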
What are the downsides to using static methods in a web site business layer versus instantiating a class and then calling a method on the class? What are the performance hits either way?
The performance differences will be negligible.
The downside of using a static method is that it becomes less testable. When dependencies are expressed in static method calls, you can't replace those dependencies with mocks/stubs. If all dependencies are expressed as interfaces, where the implementation is passed into the component, then you can use a mock/stub version of the component for unit tests, and then the real implementation (possibly hooked up with an IoC container) for the real deployment.
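The testability point can be shown in a few lines. This is a minimal sketch (the Clock interface and method names are invented for illustration): the static-call version is pinned to the real system clock, while the interface version accepts a stub in tests.

```java
// Static call vs injected dependency: only the injected version can be stubbed.
public class TestabilityDemo {
    interface Clock {
        long now();
    }

    // Hard to unit-test: the result depends on the real wall clock.
    public static boolean isExpiredStatic(long deadline) {
        return System.currentTimeMillis() > deadline;
    }

    // Testable: pass a fake Clock in unit tests, the real one in production.
    public static boolean isExpired(long deadline, Clock clock) {
        return clock.now() > deadline;
    }
}
```

In production you would pass `System::currentTimeMillis` as the Clock; in a test you pass a lambda returning a fixed value, making the behavior fully deterministic.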
Jon Skeet is right--the performance difference would be insignificant...
Having said that, if you are building an enterprise application, I would suggest using the traditional tiered approach espoused by Microsoft and a number of other software companies. Let me briefly explain:
I'm going to use ASP.NET because I'm most familiar with it, but this should easily translate into any other technology you may be using.
The presentation layer of your application would be comprised of ASP.NET aspx pages for display and ASP.NET code-behinds for "process control." This is a fancy way of talking about what happens when I click submit. Do I go to another page? Is there validation? Do I need to save information to the database? Where do I go after that?
The process control is the liaison between the presentation layer and the business layer. This layer is broken up into two pieces (and this is where your question comes in). The most flexible way of building this layer is to have a set of business logic classes (e.g., PaymentProcessing, CustomerManagement, etc.) that have methods like ProcessPayment, DeleteCustomer, CreateAccount, etc. These would be static methods.
When the above methods get called from the process control layer, they would handle all the instantiation of business objects (e.g., Customer, Invoice, Payment, etc.) and apply the appropriate business rules.
Your business objects are what would handle all the database interaction with your data layer. That is, they know how to save the data they contain...this is similar to the MVC pattern.
So--what's the benefit of this? Well, you still get testability at multiple levels. You can test your UI, you can test the business process (by calling the business logic classes with the appropriate data), and you can test the business objects (by manually instantiating them and testing their methods). You also know that if your data model or objects change, your UI won't be impacted, and only your business logic classes will have to change. Also, if the business logic changes, you can change those classes without impacting the objects.
Hope this helps a bit.
Performance-wise, using static methods avoids the overhead of object creation/destruction. This is usually insignificant.
They should be used only where the action the method takes is not related to state, for instance, for factory methods. It'd make no sense to create an object instance just to instantiate another object instance :-)
String.Format() and the TryParse() and Parse() methods are all good examples of when a static method makes sense. They always do the same thing, do not need state, and are fairly common, so instantiation makes less sense.
On the other hand, using them when it does not make sense (for example, having to pass all the state into the method, say, with 10 arguments) makes everything more complicated, less maintainable, less readable, and less testable, as Jon says. I don't think it matters whether this is the business layer or anywhere else in the code; use them sparingly and only when the situation justifies them.
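For a Java rendering of the same idea as the .NET examples above, here is a hypothetical tryParse helper. It is a natural static method because it needs no instance state and always maps the same input to the same output:

```java
import java.util.Optional;

// A stateless parse helper, analogous to .NET's int.TryParse:
// no state, same input always yields the same output.
public class ParseHelper {
    public static Optional<Integer> tryParse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();   // invalid input, no exception escapes
        }
    }
}
```

Contrast this with a method that would need ten pieces of caller state passed in; at that point the state wants to live in an object, and a static method is the wrong shape.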
If the method uses static data, this will actually be shared amongst all users of your web application.
Code-only, no real problems beyond the usual issues with static methods in all systems.
Testability: static dependencies are less testable
Threading: you can have concurrency problems
Design: static variables are like global variables
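The threading point above is easy to demonstrate. In a web app, every request thread sees the same static field; a plain `static int counter` incremented with `counter++` would lose updates under concurrency, so the sketch below (class and method names invented for illustration) uses AtomicInteger to make the shared increment safe:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A static field is shared by every request thread in the application.
public class SharedState {
    static final AtomicInteger counter = new AtomicInteger();

    // Hammer the shared counter from several threads; with AtomicInteger
    // no increments are lost, so the result is exactly threads * perThread.
    public static int incrementFromThreads(int threads, int perThread)
            throws InterruptedException {
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet();   // atomic; counter++ would race
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return counter.get();
    }
}
```

With a plain int and `counter++` the same test would intermittently return less than the expected total, which is exactly the kind of bug shared static state invites.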