We want to launch a vehicle tracking service: remote monitoring of assets via GPRS/SMS. This means development, integration, and maintenance of GPS tracking software / a remote monitoring system (GSM/GPRS based) with the Google Maps API, MapInfo (.img), or the option to integrate any other map service, plus geo-fencing, geo-coding, reverse geo-coding, alerts on events, a user-friendly GUI, a dashboard, per-user billing, scrolling, a fuel meter display, etc. For reference, have a look at gpsgate.com (tracking server solution).
How would we develop this, and how much time would it take? Any ideas?
First of all, you will need some sort of gateway. It must handle TCP connections from the devices (use async sockets! =)), parse their data, and send it to storage.
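A minimal sketch of such a gateway in Python's asyncio, just to make the shape concrete (the wire format, port number, and storage hand-off are made-up placeholders; every tracker brand speaks its own protocol):

    import asyncio

    def parse_frame(raw):
        # Hypothetical wire format: "uid,lat,lon\n"; real trackers each
        # speak their own binary protocol, so this is where those parsers go.
        device_uid, lat, lon = raw.decode().strip().split(",")
        return {"uid": device_uid, "lat": float(lat), "lon": float(lon)}

    async def store(record):
        # Stub: a real gateway would enqueue the record for the storage tier.
        print("stored", record)

    async def handle_device(reader, writer):
        # One coroutine per connected tracker; asyncio multiplexes thousands
        # of sockets on a few threads, which is the "async sockets" advice.
        while True:
            line = await reader.readline()   # assuming newline-delimited frames
            if not line:
                break                        # device disconnected
            await store(parse_frame(line))
        writer.close()

    async def main():
        server = await asyncio.start_server(handle_device, "0.0.0.0", 5027)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())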
The next big thing is the storage itself. If you want to support different devices, I would suggest using something like Apache Cassandra, with keys based on the date (only the date, not the time) and the device UID.
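For example, a table keyed that way might look like this (CQL issued through the DataStax Python driver; keyspace and column names are illustrative):

    from cassandra.cluster import Cluster  # pip install cassandra-driver

    session = Cluster(["127.0.0.1"]).connect("tracking")  # assumed keyspace

    # Partition key = (day, device_uid): one partition holds a single device's
    # readings for one day, and the clustering column keeps them time-ordered.
    session.execute("""
        CREATE TABLE IF NOT EXISTS positions (
            day        date,
            device_uid text,
            ts         timestamp,
            lat        double,
            lon        double,
            PRIMARY KEY ((day, device_uid), ts)
        )
    """)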
The third part of the puzzle is how you are going to present the data to users. This is pretty simple: I'd suggest REST services.
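A sketch of one such read endpoint, here with Flask purely as an illustration (any REST stack does the job; fetch_positions is a stand-in for the Cassandra read):

    from flask import Flask, jsonify  # pip install flask

    app = Flask(__name__)

    def fetch_positions(day, device_uid):
        # Stub: would read the (day, device_uid) partition shown above.
        return [{"ts": "2013-01-01T12:00:00Z", "lat": 59.33, "lon": 18.06}]

    @app.route("/devices/<device_uid>/positions/<day>")
    def positions(device_uid, day):
        # One resource per device per day maps cleanly onto the storage key.
        return jsonify(fetch_positions(day, device_uid))

    if __name__ == "__main__":
        app.run()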
This is from my own experience: at my last job I was the architect/lead on a very similar project.
It is now live, successfully handling 30k+ devices online with one server for the apps (IIS), two for data, and two for the TCP gateways.
If you want more specific info, feel free to ask=)
Honestly, it all depends on your skills and expertise.
A team that is well versed in designing complex systems like that could finish the task in 4-6 months.
The fact that you are asking such a question, rather than already having a ballpark estimate, means that you would probably be learning as you go. That could easily stretch to over a year, especially without prior experience managing such an overarching project.
I need to build a reliable predictive dialer based on Asterisk. The system we currently use includes Wombat and Asterisk, and we do not find this solution usable, as Wombat provides a poor API and is impossible to use without regular manual operations.
The system we want:
Can be used solely via API or direct database queries (adding lists to campaigns, updating lists, starting campaigns, stopping campaigns etc.) so that it can be completely integrated into an existing product
Is free, or paid for annually, independent of the usage rate
Is considered stable
Should be able to handle tens of thousands of calls per day, if it matters
Use vicidial.org, or hire a freelancer to build a new core with the API you need.
You can also check OSdial for this; it is also developed using Asterisk.
We have been working with a preview of the next version of Wombat through the Early Access program. Wombat has a complete JSON API for configuration and reporting, and you can deploy it "headless" in order to scale up to thousands of parallel lines. If you ask Loway, they can likely get you access to the Early Access program.
BTW, Vicidial is great for agent-based outbound, but it imposes quite a large penalty on the number of agents per server - you cannot reasonably use it to do telecasting at the scale we are looking at, as it would require too many servers. Wombat is leaner and can drive over one thousand channels per server. YMMV.
This question would be better placed on a "hire-a-freelancer" site like oDesk... if you need custom programming done, those are the sorts of places to go to find manpower.
Your specifications are well within what is possible with Asterisk. I'd strongly recommend looking at Vici Dial and OS Dial as others have suggested; out of the box, they are pretty good.
The hard part of any auto-dialer is not the dialing, oddly enough. It's the prediction algorithms, the answering-machine detection algorithms, and the agent UI. Those are what make or break an auto-dialer application for a company.
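To make "prediction" concrete, the core pacing decision is roughly this (a toy sketch; real pacers also weigh regulatory abandon-rate caps, time of day, and per-list statistics):

    def calls_to_place(free_agents, answer_rate, abandon_target=0.03):
        # Naive over-dialing: if only 25% of calls are answered, dial
        # roughly 4 lines per free agent, backed off a little so the
        # abandon rate stays near the target.
        raw = free_agents / max(answer_rate, 0.01)
        return max(free_agents, int(raw * (1.0 - abandon_target)))

    print(calls_to_place(free_agents=5, answer_rate=0.25))  # -> 19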
We apply unit tests and integration tests, and we practice test-driven and behaviour-driven development.
We also monitor our applications and servers from the outside (with dedicated software in our network).
What is missing is some standard for live monitoring inside the application.
I'll give an example:
There should be a cron-like process inside the application that regularly checks the structural health of our data structures
We need to monitor that users have done their regular tasks in a way that does not endanger the health of the application (there are some actions and inputs that we cannot prevent)
My question is: what is the correct name for this, so I can research it further in the literature? I did a lot of searching, but I almost always find the xUnit and BDD / integration-test material that I already have.
So what is this called, and what is the standard in professional application development? I would like to know whether there is some standard structure like xUnit, or whether xUnit libraries could even be used for it. I could not even find appropriate tags for this question, so if you read this and know some better tags, please add them and remove the ones that don't fit.
I need this for applications written in Python, Erlang, or JavaScript; they are mostly server-side applications, web applications, or daemons.
What we already do is expose an HTTP gateway from inside the applications that reports some internals, and this is monitored by our Nagios infrastructure.
I have no problem rolling my own cron-like, self-health-check scheme inside the applications, but I am interested in learning a professional, standardized way of doing it.
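For what it's worth, the rolled-your-own version can be as small as a registry of invariants plus a self-rescheduling timer (a Python sketch; the orphan check is an invented example of a "structural health" rule):

    import logging
    import threading

    HEALTH_CHECKS = []

    def health_check(fn):
        # Decorator so application code can register invariants to verify.
        HEALTH_CHECKS.append(fn)
        return fn

    def count_orphans():
        return 0  # stub: would query the data store for childless rows

    @health_check
    def no_orphaned_records():
        assert count_orphans() == 0, "orphaned records found"

    def run_checks(interval=300):
        for check in HEALTH_CHECKS:
            try:
                check()
            except Exception:
                logging.exception("health check failed: %s", check.__name__)
        # Re-schedule ourselves: the cron-like loop inside the application.
        threading.Timer(interval, run_checks, args=(interval,)).start()

    run_checks()  # kicks off a check every 5 minutes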
I found this article, which already comes close: Link
It looks like you are asking about approaches to monitoring your application. In general, one can distinguish between active monitoring and passive monitoring.
In active monitoring, you create some artificial user load that mimics real user behavior, and you monitor your application based on the responses to this traffic from a non-existent user (active = you actively cause traffic to your application). Imagine that you have a web application that returns the weather forecast for a specific city. For active monitoring, you would deploy another application that calls your web application with some predefined request ("get weather for Seattle") every N hours. If your application does not respond within the specified time interval, you trigger an alert.
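A minimal active probe, assuming an HTTP app and the requests library (the URL, timeout, and alert hook are all invented for the example):

    import requests  # pip install requests

    def alert(message):
        print("ALERT:", message)  # stub: hook up mail/Nagios/pager here

    def probe():
        try:
            r = requests.get("http://example.com/weather?city=Seattle", timeout=5)
            if r.status_code != 200:
                alert(f"probe got HTTP {r.status_code}")
        except requests.RequestException as exc:
            alert(f"probe failed: {exc}")

    probe()  # run from cron every N minutes, ideally from a separate host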
In passive monitoring, you observe real user behavior over time. You can parse logs to count (un)successful requests/responses, or inject some code into your application that updates values in a database whenever a successful or unsuccessful response is returned (passive = you only watch other users' traffic). Then you can create graphs and check whether there is a significant deviation in user traffic. For example, if at the same time of day one week ago your application served 1000 requests and today you get only 200, that may indicate a problem with your software.
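And the passive check from that example reduces to a comparison against a baseline (a toy sketch; the counts would come from log parsing or the counters table mentioned above):

    def traffic_anomaly(today_count, last_week_count, tolerance=0.5):
        # Flag when today's traffic drops below half of last week's
        # count for the same time window.
        if last_week_count == 0:
            return False  # no baseline to compare against
        return today_count < last_week_count * tolerance

    print(traffic_anomaly(today_count=200, last_week_count=1000))  # True -> investigate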
Our company has people at every catastrophic event here in the U.S. and in parts of Canada. For example, they were on the ground in Katrina immediately after the event.
We are building an application to support their work in the field. It may be either ASP.NET or WPF, and the disconnected-operation requirement makes us believe it will be a WPF application. Our people need to be able to create their jobs, enter all of the insurance and measurement data, and save it as if to the database, whether or not the internet is available.
The issue we are trying to get our heads around is that at catastrophic events our people need to be able to use our new application even when the internet is not available. (They were offline for 3 days during Katrina.)
Has anyone else had to address requirements like this? How did you approach working on small-footprint devices while saving data as if still connected to the backend services and database? We also have to incorporate security, and do it well enough that the entered data loads into the connected database without issues.
Our long-term goal is to also provide this application for Android and iPad tablet devices as well as laptops. Our initial interest in ASP.NET was that it gave us an immediate application for the tablet environment. With the old application, they run a local server, make remote connections from the tablets, and run the application through terminal server. Not pretty.
I feel this is a serious question that is not subjective so hopefully this won't get deleted.
Our current server-side architecture is Entity Framework with a repository pattern, WCF services that satisfy CRUD requests by returning composite data transfer objects, and a proxy for use by the clients.
I'm interested in hearing other developers' input on this design puzzle.
Additional Information Added to the Discussion
Lots of good information provided! I'll have to look at Microsoft Sync for sure. For the disconnected database, I would place only list tables (enumerations) in the initial database. Jobs and, if needed, an item we call dry books will be added for each client we are helping (though I hope the internet returns by the time we are cleaning and drying out the homes). These are the tables that would then populate back to the host once we have a stable link. In the case of Katrina we also lost internet connectivity in our offices, which meant the office provided no communication relief for days either.
Last night I realized that our client proxy is the key to everything working: the client remains unaware of whether it is online or offline, and the synchronization process lives within that library. We are working out today how much data we are talking about. I also want to make it clear that ASP.NET was a like-to-have, but a thick client (actually WPF with XAML) may end up being our end state.
Now, on multiple updates. The disconnected work will be done at individual homes by a single franchise; in fact, our home office dispatches specific franchises to specific events. So we have a reduced likelihood (if any) of multiple people updating the same record, because they create a record for each job (a person's home/office/business) and only that one franchise deals with it. Of course, this also means that if they are disconnected for days, the device that creates the job (the record of who, where, condition, insurance company, etc.) is also the only device that knows of the job. But we can live with that. In fact, we may be able to provide a facility to sync the franchise's devices via a hub.
I'm looking forward to hearing additional stories of how you've implemented your disconnected environment.
Thanks!!!
Looking at new technology from Microsoft
I was directed to a video from TechEd 2012 and thought I might have an answer. The talk was about using ASP.NET and MVC4 along with two libraries for disconnected behavior. At first I thought it would be great, but as it continued it worried me quite a bit.
First, using a JavaScript back end to support disconnected I/O does not inspire confidence. As a compiler guy (and one who has written two interpreted languages), I really do not like having a critical business model reliant upon interpreted JavaScript. And script, at that! Maybe it's just me, but it makes me shudder.
Then they showed their "great" (???) programming model, with your ViewModel existing as just JavaScript. I do not care for an application (ASP.NET and JavaScript) that can be, and might as well be (for lack of IntelliSense), written in Notepad.
No offense meant to any ASP lovers, but a well-written C# program that has been syntactically and type checked gives me stronger confidence than something written with a hope and a prayer that a class namespace has been typed correctly, with no means of cross-checking. I've seen too many hours of debugging spent hunting a bug that turned out to be a transposed "ie" in a huge namespace's name. I ran my thoughts past the other senior developers in my group, and we are all in consensus on this technology.
But we continue to look. (I feel this is becoming more of a diary than a question) :)
This looks like a perfect example for the Microsoft Sync Framework:
http://msdn.microsoft.com/en-us/sync/bb736753.aspx
A comprehensive synchronization platform that enables collaboration
and offline access for applications, services, and devices with
support for any data type, any data store, any transfer protocol, and
any network topology.
I often find that building a lightweight framework to fit my specific needs is more beneficial to me than using an existing one. However, always look at what's available and weigh the pros and cons before making that decision.
I haven't used the Microsoft Sync Framework, but it sounds like a good one to research first. If you have SQL Server Standard (or some other edition besides Express), then replication might also be an option.
If you want to develop your own homegrown solution, be sure to put lastupdated and dateadded fields on any tables that need to stay in sync. It doesn't 'sound' like your scenario will be burdened by concurrency issues (i.e., if persons A and B both modify a field at the same time, who wins?). If that's the case, developing your own lightweight solution will be pretty straightforward.
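As a sketch of how small that homegrown change detection can be, the sync query is just a high-water-mark filter (shown with sqlite3 for brevity; the table and column names are illustrative, and the same SQL shape applies to SQL Server):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE jobs (id INTEGER, payload TEXT, dateadded TEXT, lastupdated TEXT)")

    def changes_since(last_sync):
        # Everything touched since the client's last successful sync:
        # dateadded catches inserts, lastupdated catches edits.
        return conn.execute(
            "SELECT id, payload FROM jobs "
            "WHERE lastupdated > ? OR dateadded > ?",
            (last_sync, last_sync),
        ).fetchall()

    print(changes_since("2012-06-01T00:00:00"))  # -> [] on the empty table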
As Jeremy pointed out, you will need a way to get the changes. In addition to using a web service, you can use WCF, which is similar to a web service in some ways. But my personal bias would be towards simply accessing SQL Server remotely over the internet. The downside of that solution is the added security concerns; the upside is decreased development overhead (i.e., faster/easier development now and less maintenance over time). The direct SQL solution also assumes that this is an internal application - that you're in charge of all development and not working with third parties who need access to your data but wouldn't be allowed to access it this way.
Not really a full answer but too much for a comment.
I have two apps: one that syncs one way, and one that syncs two ways.
I do a one-way sync to the client for disconnected operation: full SQL Server at the server, Compact Edition at the client. A TimeStamp column is perfect for finding any rows that need to be synced. I also don't copy the whole database, as some of the largest tables are nonessential. The common usage is that the user marks the records they want to sync.
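The sync loop that sits on top of such a TimeStamp/rowversion column is then just a watermark chase (a simplified Python sketch with dicts standing in for the two databases):

    def sync_down(server_rows, client_store, watermark):
        # Copy any server row whose version exceeds the client's last-seen
        # watermark, then advance the watermark for the next session.
        for row in server_rows:
            if row["version"] > watermark:
                client_store[row["id"]] = row
                watermark = max(watermark, row["version"])
        return watermark  # persist this on the client

    client = {}
    rows = [{"id": 1, "version": 5}, {"id": 2, "version": 9}]
    print(sync_down(rows, client, watermark=4))  # -> 9; both rows copied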
If Sync Framework does what you need, great; +1 for Jakub. For me, syncing the whole MSSQL database wasn't an option, for reasons of both size and security.
I have another, smaller application that syncs two ways, but in this case it has regions, and updates happen only within a region. So a region only syncs its own data, and in disconnected mode users can only add new records; updates to existing records must be performed in connected mode. That was manageable. In that case it was MSSQL for the master and XML for the client.
No news to you, but the hard part of a raw sync is that two parties may have added or revised the same record.
I've written an ASP.NET app that I hope to sell to businesses. I could host the trial, but the app is designed to connect to the customer's data, so customers will certainly want to install it to do a proper evaluation.
I've never produced anything commercial before, so I'm looking for advice on how best to limit the trial. A 30-day trial seems most common; do you simply rely on the clock of the PC/server they install it on? Any other suggestions are welcome. Please keep in mind this is an ASP.NET app, so it will be installed on their web server.
Thanks
Craig
I would just do it via the PC's clock. At the end of the day they could change the clock and continue to use your software, but that's probably not going to work in practice (i.e., most software actually uses the date/time for other things as well, and changing it is going to screw those up).
Generally, you can trust businesses more than you can trust the general public. The liability of a business is much higher than that of an individual, so if it came to it, you could potentially sue them for quite a bit. That alone means most businesses will purchase licenses for all of their software: a few hundred (or even thousand) dollars for a software license is much better than risking a lawsuit.
When they sign up for the demo, make sure you get all of their contact details and so on.
I would set up a web service on your server to authenticate the demo application. The web service should be called periodically, and if the call fails, the application shuts down. That way you have complete control over the trial (you can extend it or shut it down remotely).
You should give them some sort of key, which they place in their web.config, to identify them as a customer.
Make sure you take the usual precautions of encrypting / using hashes with both the key and the web service so the check can't be bypassed.
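A sketch of what that check could look like on the server side, with the verdict HMAC-signed so the client can detect tampering (Flask here is just for illustration; key storage and transport security are deliberately simplified):

    import hashlib
    import hmac
    from flask import Flask, abort, jsonify, request  # pip install flask

    app = Flask(__name__)
    SERVER_SECRET = b"replace-with-a-real-secret"    # known only to you
    TRIALS = {"customer-key-123": "2012-07-31"}      # key -> trial expiry (illustrative)

    @app.route("/license/check")
    def check():
        key = request.args.get("key", "")
        if key not in TRIALS:
            abort(403)
        # Sign the verdict; the app verifies the signature before trusting it.
        verdict = f"{key}:{TRIALS[key]}"
        sig = hmac.new(SERVER_SECRET, verdict.encode(), hashlib.sha256).hexdigest()
        return jsonify(expires=TRIALS[key], sig=sig)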
This sort of thing has been well covered on SO in the past.
You cannot make it unbreakable, but you can make it very difficult for the client to break your trial period.
One way to do it is to take the first-run time, encrypt it, and store it either in your web.config or in the database. This has a weakness, though: what do you do if the value is not present where you expect it to be?
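A sketch of that first-run variant (using the cryptography package; in a real deployment the key would be baked into and obfuscated within the app and the marker hidden, which is exactly the weak point noted above):

    import os
    from datetime import datetime, timedelta
    from cryptography.fernet import Fernet  # pip install cryptography

    KEY = Fernet.generate_key()  # illustrative: would really be embedded in the app
    MARKER = "firstrun.bin"      # illustrative location; web.config or DB in practice

    def trial_expired(days=30):
        f = Fernet(KEY)
        if not os.path.exists(MARKER):
            # First launch: record the (encrypted) install date.
            with open(MARKER, "wb") as fh:
                fh.write(f.encrypt(datetime.utcnow().isoformat().encode()))
            return False
        # Decryption fails loudly if the customer tampered with the value.
        first_run = datetime.fromisoformat(f.decrypt(open(MARKER, "rb").read()).decode())
        return datetime.utcnow() - first_run > timedelta(days=days)

    print(trial_expired())  # False on first run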
Another option is to ping a web service that you host. If the web service says the trial is over, you render the appropriate page telling them so. This has the advantage that the web service is beyond their control and cannot be tampered with. It has the disadvantage that not every client will be happy allowing their web app to phone home, and connectivity issues could interfere with the functioning of your app.
So you might want to come up with a variety of options and then implement a licencing module using the Provider pattern, so that you can swap in the licencing module most suitable for each client.
Put a counter in the web.config, and give the counter an unrelated name so the customer does not know what it is for. Every time they access the application, increment the counter, and give them x number of log-ins.
If you want, you can encrypt the counter so the customer can't figure out that it is incrementing.
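A toy version of the counter idea (plain Python and an .ini file standing in for the web.config; "cacheTimeout" is the deliberately misleading name):

    import configparser

    def consume_login(path="app.ini", limit=50):
        cfg = configparser.ConfigParser()
        cfg.read(path)
        if not cfg.has_section("appSettings"):
            cfg.add_section("appSettings")
        # Innocuous-looking name so the customer doesn't spot the trial counter.
        used = int(cfg.get("appSettings", "cacheTimeout", fallback="0")) + 1
        cfg.set("appSettings", "cacheTimeout", str(used))
        with open(path, "w") as fh:
            cfg.write(fh)
        return used <= limit  # False once the allotted log-ins are spent

    print(consume_login())  # True until the 51st call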
I'm working on a forum-based website. The site also supports on-site messaging (i.e., users can send private messages to other users). What I'm trying to do is notify a member when they have new messages, for example by displaying the inbox link in bold together with the number of messages, e.g., Inbox (3).
I'm a little confused about how this can be implemented for a website running on a server farm. Querying the database on every request seems like overkill to me, so that is out of the question; presumably a shared cache should be used instead. I suspect this is a common feature on many sites, including many large ones running on server farms, so I wonder how they implement it. Any ideas are appreciated.
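One common shape for this, sketched with Redis standing in for the farm-wide shared cache (names are illustrative; the point is that page views hit the cache, and only a cache miss or a message send touches the database):

    import redis  # pip install redis

    r = redis.Redis()

    def query_unread_from_db(user_id):
        return 0  # stub: the authoritative count from the messages table

    def on_message_sent(recipient_id):
        # Writer path: bump the cached count; no database read needed.
        r.incr(f"unread:{recipient_id}")

    def unread_count(user_id):
        # Render path: a cache hit on every page view across the whole farm;
        # fall back to the database only if the key was evicted.
        cached = r.get(f"unread:{user_id}")
        if cached is None:
            count = query_unread_from_db(user_id)
            r.set(f"unread:{user_id}", count, ex=3600)
            return count
        return int(cached)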
SO caches the questions, but every postback re-queries your reputation. You can see this by writing a couple of good answers quickly, then refreshing the front page.
The questions will only change every minute or so, but you can watch your rep go up each time.
Waleed, I recommend you read the articles on High Scalability. They have specific case studies of the architectures of various mega-scale web applications. (See the sidebar on the right side of the main page.)
The general consensus these days is that RDBMS usage in this type of application is a bottleneck. It is also probably safe to say that most highly scalable web applications sacrifice consistency to achieve availability.
This series should be informative about the various views on the topic; "A Word on Scalability" is highly cited.
In all this, keep in mind that these folks are dealing with Flickr-, Amazon-, and Twitter-scale issues and architectures. The solutions are somewhat radical departures from the (previously accepted) norms, and unless your forum application is the next Big Thing, you may wish to first test the conventional approach to determine whether it can handle the load.