I'm on a broadband connection with a limited data allowance, so I want to block all programs (applications) other than browsers from accessing the internet in Windows 8, to avoid data being used up by unwanted applications.
WinRT applications are sandboxed, which means they have a limited range of operations within the system, and those operations affect only the application itself.
A WinRT app cannot force another running app to close; only the user can do that.
At most, you could check the network usage status and, if it is high, show the user a specific message, something like: "At the moment, data usage is relatively high; for a better experience, try closing some of the apps causing it."
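A minimal sketch of that check for a C# WinRT app, using the Windows.Networking.Connectivity API; the threshold logic and the message text are illustrative only, not a definitive implementation:

```csharp
using Windows.Networking.Connectivity;
using Windows.UI.Popups;

// Sketch: warn the user when the current connection is metered or near its data limit.
async void WarnIfDataUsageIsHigh()
{
    // The profile describes the connection currently used for internet traffic.
    ConnectionProfile profile = NetworkInformation.GetInternetConnectionProfile();
    if (profile == null) return; // no internet connection

    ConnectionCost cost = profile.GetConnectionCost();

    // Metered/capped plans report a non-unrestricted cost type, and the OS
    // flags when the plan's data limit is near or exceeded. What counts as
    // "high" here is an assumption for the sketch.
    bool usageIsHigh = cost.ApproachingDataLimit
                    || cost.OverDataLimit
                    || cost.NetworkCostType == NetworkCostType.Variable;

    if (usageIsHigh)
    {
        var dialog = new MessageDialog(
            "At the moment, data usage is relatively high. " +
            "For a better experience, try closing some of the apps causing it.");
        await dialog.ShowAsync();
    }
}
```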
*If you think I should ask this question elsewhere, please let me know.
Background:
I need to build an application for converting weights into piece counts. The weights currently come from scales that are connected to PCs via serial ports. I am replacing PC-based applications that connect to the scales over a serial connection, and I am considering the feasibility of making the next generation of these applications a web-based solution. However, I do not want to do this if it is not a better solution than building an application that runs on the client. In addition, I do not want to use any browser-specific technology (ActiveX).
FYI, we currently run a Windows-based environment.
What I have so far:
I am currently thinking that I will need some sort of client-side "service" to allow the scale data to be retrieved by the web application. I have looked into creating a WCF service for this task and have determined that it would probably work. This would require the scale to be connected to some sort of Windows-based computer on the network. I would then call the WCF service (running as a Windows service on the PC) from an ASP.NET web application running on an IIS web server. This would minimize the footprint on the client and allow us to use a web application.
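To make the idea concrete, here is a minimal sketch of what that service contract might look like. The names (IScaleService, GetCurrentWeight), the COM port, and the frame parsing are hypothetical placeholders, not a definitive design:

```csharp
using System.IO.Ports;
using System.ServiceModel;

// Hypothetical contract the ASP.NET application would call over the network.
[ServiceContract]
public interface IScaleService
{
    [OperationContract]
    double GetCurrentWeight(); // weight in the scale's configured unit
}

// Implementation hosted inside the Windows service on the PC next to the scale.
public class ScaleService : IScaleService
{
    public double GetCurrentWeight()
    {
        // Port name and settings depend on the scale; these are placeholders.
        using (var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
        {
            port.ReadTimeout = 2000;
            port.Open();
            // Real scales send vendor-specific frames; assume one bare number per line here.
            string line = port.ReadLine();
            return double.Parse(line.Trim());
        }
    }
}
```

In practice you would likely keep the port open for the lifetime of the Windows service and cache the latest reading, rather than opening the port on every call.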
I am looking for any constructive thoughts and ideas. I am open to reviewing any feasible option that would make this solution as simple and reliable as possible.
Answering my own question, per @honeycomb's request.
I discovered two viable options for this purpose. Following are high-level overviews of the techniques we leveraged.
Develop a scale reader to be run on a PC connected to the weigh scale device via an RS-232 connection. This reader forwards any information received from the scale into a database (a minimal sketch appears after these two options). Combined with technologies like change notifications and server-side push, this option allows data from a weigh scale to be pushed into a web page with little effort and no additional cost. (This option has performed well during testing but is not yet in production.)
Invest in converting the weigh scale devices to Ethernet connections and connect them to the network. Use an OPC server with a driver that can connect to the weigh scales you are using to read the data from these devices; consider KEPWare's offering for this purpose. Use KEPWare's tools to forward this data to a database or wherever it is needed. Once again, you can leverage change notifications and server-side push technologies to push this data into web applications in near real time without polling. (This option is currently working in a critical production environment.)
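Here is the sketch promised above for the first option: a small reader that listens on the serial port and forwards each reading into a database. The connection string, table, and column names are hypothetical, and the frame parsing is again stubbed as one bare number per line:

```csharp
using System;
using System.Data.SqlClient;
using System.IO.Ports;

// Sketch: long-running reader that forwards each scale reading into a database.
class ScaleReader
{
    // Placeholder connection string; point this at your own database.
    const string ConnectionString = "Server=.;Database=Scales;Integrated Security=true";

    static void Main()
    {
        var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One);
        port.DataReceived += (sender, e) =>
        {
            string line = port.ReadLine(); // assume the scale sends one reading per line
            StoreReading(double.Parse(line.Trim()));
        };
        port.Open();
        Console.ReadLine(); // in production this would run as a Windows service
    }

    static void StoreReading(double weight)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO ScaleReadings (Weight, ReadAt) VALUES (@w, @t)", conn))
        {
            cmd.Parameters.AddWithValue("@w", weight);
            cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

From there, SQL Server change notifications (e.g. SqlDependency) or a push library such as SignalR can move new readings into the web page without polling.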
The second option is probably better in the long term, though this may vary with your specific situation. It has some upfront costs and is better suited to new implementations. For my system, I am using the first option because it eases the transition between the old and new systems.
Note: I am not in any way associated with KEPWare. I am only suggesting their product because it is the only one I am aware of that supports this functionality. I am sure there are other OPC servers that support this type of device.
After learning about some great features of WebRTC, I thought of using WebRTC one-to-one audio/video calls in my web application. The web application serves many organizations/entities in a given category, which can register and record several entries daily for their internal work and about their clients. The clients of these individual organizations/entities also have access to the web application to view their details.
The purpose of using WebRTC is communication between clients and organizations, as well as daily inquiries from new people about these organizations' products and services.
While going through articles on Google and elsewhere, I found that broadcasting, or one-to-many calling, requires very high bandwidth from users unless a media server is used.
So what is the situation for one-to-one calls?
Will it affect the performance of the web application, or cause any critical problems, if several users routinely make one-to-one audio/video calls to each other simultaneously?
The number of users will be very large, and users will record several entries daily as routine work. That load is manageable and the application runs smoothly, but I am not sure about WebRTC, which is new to me. Will it require a very expensive hosting plan? Is WebRTC suitable or advisable for this scenario?
WebRTC is peer-to-peer by nature, meaning the streaming data is handled client-side. All decoding, encoding, ICE candidate gathering/negotiation, and media encrypting/transmitting happen on the client, not on the server. So you will be providing the pages, the client-side JS, and some data exchange (session negotiation signalling), but all in all it is not a huge amount of work, and it should be handled easily without your host machine being overworked.
All that said, here are the only performance concerns that could possibly affect your hosting server.
Signalling: session startup, negotiation, and tear-down. This is very minimal (only some JSON data at the beginning of a session) and should not be much of a burden, but be aware that if 1,000 sessions start at the same time, you will have a queue of messages to route to the right parties. How you identify the parties, how you forward the messages, and what work you do server-side all affect performance, but if written smartly (how you store sessions, how you forward messages, etc.) it should not be a terrible burden. This could easily be done with SignalR, since you are on ASP.NET, or with a separate server running Node.js (or the same box; it does not matter) if you prefer. A minimal relay sketch follows this list.
RTP TURN relay, if needed. This will probably run on a different server (or on your hosting server if you want). For some connections a TURN server is required, and any production-ready WebRTC solution should take this into account; good open-source TURN servers are available. Bandwidth usage here can be very high, as RTP packets are sent to this server and then forwarded to the peer in the connection.
If you are recording the streams, you may have increased hosting traffic depending on how you implement it. Firefox supports client-side recording of the streams, but Chrome does not (they say it is in the works). You could use existing JS libraries to record the feeds client-side and then push them wherever you want, or push all the data through a media server that will mux, demux, and forward the data to be recorded wherever you like. The Janus-Gateway videoroom is a good lightweight example of a media server.
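As mentioned in the first point, SignalR makes the signalling relay almost trivial. A minimal sketch, assuming SignalR 2.x on ASP.NET; the hub and method names are hypothetical, and the payload is treated as an opaque string (SDP offers/answers and ICE candidates serialized to JSON by the clients):

```csharp
using Microsoft.AspNet.SignalR;

// Sketch: the server only relays opaque signalling payloads between two peers;
// it never touches the media streams themselves.
public class SignalingHub : Hub
{
    // A client asks the hub to deliver an offer, answer, or ICE candidate to one peer.
    public void SendSignal(string targetConnectionId, string payload)
    {
        // Forward the payload along with the sender's id so the peer can reply.
        Clients.Client(targetConnectionId).receiveSignal(Context.ConnectionId, payload);
    }
}
```

How peers discover each other's connection ids (a lobby, or a user-to-connection map in your database) is the part that needs the "written smartly" care described above.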
The client side is a different story.
There are higher-level performance concerns in the JavaScript. This is especially evident if you use one of the recording JS libraries, which perform canvas captures many times per second; these are a heavy hit and can degrade the user experience.
CPU utilization by the browser will increase as the quality of the video being streamed increases. This is rather obvious as HD video frames take more CPU power to encode/decode than SD frames.
Client-side bandwidth usage can also be an issue. Chrome and Firefox try to adjust the bitrate of each video/audio feed dynamically, but the video bitrate can go all the way up to 2 Mbps. You can cap this in Chrome (by adding an attribute to the SDP) but not yet in Firefox (last I checked).
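The Chrome cap works by munging the SDP before it is applied, inserting a "b=AS:&lt;kbps&gt;" bandwidth line into the video media section. Here is a sketch of that transform, written in C# as if applied by the signalling relay (it is more commonly done in the client-side JS, but the string edit is identical); the 500 kbps figure in the usage comment is just an example:

```csharp
using System;
using System.Collections.Generic;

static class SdpBandwidth
{
    // Insert "b=AS:<kbps>" into the video media section; Chrome honors this
    // attribute as a maximum video bitrate, in kilobits per second.
    // Example: sdp = SdpBandwidth.CapVideoBitrate(sdp, 500);
    public static string CapVideoBitrate(string sdp, int maxKbps)
    {
        var lines = new List<string>(sdp.Split(new[] { "\r\n" }, StringSplitOptions.None));
        for (int i = 0; i < lines.Count; i++)
        {
            if (lines[i].StartsWith("m=video"))
            {
                // Per the SDP grammar, b= lines follow any i=/c= lines in the section.
                int j = i + 1;
                while (j < lines.Count &&
                       (lines[j].StartsWith("i=") || lines[j].StartsWith("c=")))
                    j++;
                lines.Insert(j, "b=AS:" + maxKbps);
                break;
            }
        }
        return string.Join("\r\n", lines);
    }
}
```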
Since this question is from a user's (developer's) perspective I figured it might fit better here than on Server Fault.
I'd like ASP.NET hosting that meets the following criteria:
The application seemingly runs on a single server (so no need to worry about e.g. session state or even static variables)
There is an option to scale storage, memory, DB size and CPU-power up and down on demand, in an "unlimited" way
I have researched this, but there does not seem to be such a platform: one that completely abstracts the underlying architecture away and thus combines the ease of use of simple shared hosting with "unlimited" scalability.
"Single server" and "scalability" are mutually exclusive, I'm afraid. But a good load-balancer will apply affinity to requests so you don't need to needlessly double-cache data on multiple servers.
However, well-designed web applications are easy to port to a multiple-server scenario.
I think your best option is something like Windows Azure Websites (separate from Azure web workers), which runs on a VM you don't have access to. The VM provides as much power as is necessary to run your website, so you don't need to worry about allocating extra CPU power or RAM.
Things like SQL Server are handled separately, but they are very cheap to run, and you can drag a slider to give yourself more storage space.
This can still be accomplished by using a cloud host like www.gearhost.com. Apps live in the cloud and by default get one worker node, so session stickiness is maintained. You can then scale that application to larger workers to accomplish what you need, all while maintaining high availability and load balancing. You can even add multiple web workers: each visitor is tied to a particular node to maintain session state, even if you have, say, ten workers. It's an easy and cheap way to scale a site from 100 visitors to many millions in just a few clicks.
I saw this question:
How many users on one azure instance before I hit performance issues?
It discusses how many users an Azure instance could support for a web page. I'm wondering whether this would be any different for a web page versus a web server that client applications (such as mobile phones) call into to get data. For example, if you have a single Azure web role running that exposes a REST endpoint, how many devices could call into the service before it starts to buckle under the pressure?
How long is a string? :-)
If your app computes one million digits of pi on each web request, it will probably handle fewer concurrent web requests than an app that replies to each web request with "hello world."
(This is another, blunter, version of David's answer.)
A Web Role instance is merely a Windows Server 2008 R2 (or SP2) virtual machine of a given size (1-8 cores, 1.75-14 GB usable RAM, 100-800 Mbps network). You can run websites, different web servers (Tomcat, for example), WCF services (through IIS or standalone ServiceHosts), etc.
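For illustration, a minimal sketch of the "standalone ServiceHost" case, e.g. in a console app or a worker role's entry point; the contract, address, and port here are hypothetical:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // Self-host WCF without IIS; the port must match the role's endpoint configuration.
        using (var host = new ServiceHost(typeof(EchoService),
                                          new Uri("http://localhost:8080/echo")))
        {
            host.AddServiceEndpoint(typeof(IEchoService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```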
Scaling is going to depend heavily on the app itself: Is it CPU-constrained? Network-constrained? Do you have a queue-based workload whose backlog is growing?
Sometimes it's critical to scale up to larger VMs, just to handle one of the constraints mentioned. It's always wise to pick the smallest VM size to run in a baseline mode (e.g. 1 or 2 users), then scale out to more instances as needed.
It's important to identify the key performance indicators (KPIs) for your app. You can then automate your scaling with something like the Autoscaling Application Block (WASABi).
Here's a reference page with all VM sizes, with details about CPU, local disk, network bandwidth, and RAM.
I am not talking about application profilers or debuggers, but specifically about managing applications in a production environment: essentially monitoring, identifying bottlenecks, and deploying fixes.
For monitoring that the application is up and running, we use Nagios.
We also use good old Performance Monitor for monitoring database connections, memory consumption, and CPU usage.
We use IPMonitor to verify uptime; it has a lot of options for pinging the site for keyword validation, HTTP response validation, and response time. You can also use SNMP to gauge processor and RAM responsiveness and remaining hard disk space, among many other options. It supports multiple servers and server types, not just web or database servers.
Additionally, we test basic uptime and response speed with AlertSite.
A third party, Keynote, tests our sites to verify that they are navigable the way a human would browse them; they have scripts that mimic clicks and interactions.
We use Spotlight for SQL Server management, and also good old perfmon for granular problem-fixing.
We recently purchased WildMetrix to monitor and troubleshoot performance issues in our ASP.NET applications. It's nice because you can easily aggregate IIS, ASP.NET, and SQL Server information into a single graph or dashboard, which lets you pinpoint possible trouble spots. We currently use it as our primary performance reporting and tracking tool, along with ELMAH for exception tracking.