What architecture to use for my ASP.NET Application?

Here is the scenario:
There's a data source sitting at site A that I can communicate with using a set of APIs to get the data I need.
I want to build an ASP.NET web application that periodically fetches data from site A and updates/stores it in my own database, then periodically processes that data and stores the processing results so that users can browse the results in my web application's front end.
I have no clue how to design the architecture. How do I achieve things like periodically communicating with another data source and periodically processing data in my database from within a web application?
I have very little experience designing web applications, so it would be really nice if you could elaborate.

In answer to:
"How to achieve things like periodically communicate with another data source and process data in my database periodically in a web application?"
I do this by creating a web service and a console application. Windows Task Scheduler runs the console application at an interval of my choice; when the task runs it calls the web service, which communicates with the various data sources and processes the data.
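For illustration, here is a minimal sketch of the console-application side of that setup; the endpoint URL, names, and response handling are placeholders, not a prescribed implementation.

```csharp
// SyncRunner.cs - small console app launched by Windows Task Scheduler.
// The URL below is a placeholder for your own web service endpoint.
using System;
using System.Net;

class SyncRunner
{
    static int Main()
    {
        try
        {
            using (var client = new WebClient())
            {
                // Calling the service kicks off the fetch-and-process work on the server.
                string result = client.DownloadString("https://example.com/api/sync/run");
                Console.WriteLine("Sync finished: " + result);
            }
            return 0; // exit code is visible in Task Scheduler history
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("Sync failed: " + ex);
            return 1;
        }
    }
}
```

The executable can then be registered with something like `schtasks /Create /SC HOURLY /TN DataSync /TR "C:\jobs\SyncRunner.exe"`, or through the Task Scheduler UI.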

This is kind of vague to answer, but here goes.
There are tools that help with communication with each API; some services provide their own wrappers. When they are not provided, look into something like Hammock as a generic wrapper.
Some high-level tips and thoughts follow. They are not absolutes:
Follow a multi-tiered model where you clearly separate your layers: Model, View, Controller.
Use an ORM such as ServiceStack's OrmLite for data access (a minimal sketch follows this list).
Create a small console app to do the processing, and use a scheduled task in Windows to run it.
Don't do this with something like Quartz (way too much overhead).
Don't do it with SQL Server Agent either: too much overhead, and not enough control if you are a .NET programmer.
Watch how big your objects and lists are getting; you will run out of memory when working with other people's data.
Use JSON; it is a great format for passing data around both within and outside your application.
Set up logging and make sure it works; other people's data breaks things.
Scrub incoming data; you can't assume other people's data is clean.
Profile your application to know where the hot paths are
Write unit tests
Run your unit tests regularly.
Test in multiple browsers.
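As mentioned in the ORM tip above, here is a minimal OrmLite-style sketch of the persistence piece; the Measurement type, connection string, and table layout are invented for illustration, and the exact API may differ slightly between ServiceStack versions.

```csharp
// Persisting fetched records with ServiceStack's OrmLite (illustrative names only).
using System;
using System.Collections.Generic;
using ServiceStack.OrmLite;

public class Measurement
{
    public int Id { get; set; }
    public string Source { get; set; }
    public double Value { get; set; }
    public DateTime FetchedAt { get; set; }
}

public static class MeasurementStore
{
    public static void Save(IEnumerable<Measurement> items)
    {
        var dbFactory = new OrmLiteConnectionFactory(
            "Server=.;Database=MyApp;Trusted_Connection=True;", // placeholder connection string
            SqlServerDialect.Provider);

        using (var db = dbFactory.Open())
        {
            db.CreateTableIfNotExists<Measurement>();
            db.InsertAll(items); // keep batches small to avoid memory pressure
        }
    }
}
```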
That's probably good for now. Clarify your question and we may be able to give more help.

Related

Modular Application Design

We currently have an application that is used by several clients; it downloads and stores data from our application, which they host in their own environment.
We need to hand this application over to a developer, but at the same time we need to protect our code. The way I see it working is that we would somehow treat our current app as a framework, allowing another app to be created on top of it; the new app may have its own screens but reuse some of the built-in ones.
Is it possible to protect our app in such a way without rewriting everything into protected DLLs? Or should we just suck it up and share our code with consulting firms that want to build these types of apps for our clients?
If your proprietary code is entirely focused on downloading and storing data, you could create an online REST API that returns the data over the internet. The other developer could then just request the data from your servers using an HTTP call.
However, if your code needs to run client-side, the only real thing you can do is compile it into a DLL, and even then it can be decompiled.
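For illustration, a rough sketch of what that REST layer might look like as an ASP.NET Web API controller; the controller, DTO, and helper names are placeholders rather than a prescribed design.

```csharp
// Exposes the download/store functionality over HTTP so third parties
// only ever see data, never the proprietary implementation.
using System.Collections.Generic;
using System.Web.Http;

public class ClientDataController : ApiController
{
    // GET api/clientdata?clientId=42
    public IEnumerable<DataRecord> Get(int clientId)
    {
        return LoadRecordsForClient(clientId);
    }

    // Stand-in for the real, server-side download/storage logic.
    private IEnumerable<DataRecord> LoadRecordsForClient(int clientId)
    {
        return new List<DataRecord> { new DataRecord { Id = 1, Payload = "example" } };
    }
}

// Hypothetical DTO returned to callers.
public class DataRecord
{
    public int Id { get; set; }
    public string Payload { get; set; }
}
```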

Putting a new web interface on an old fat-client database

My company has a fairly old fat client application written in Delphi. We are very interested in replacing it with a shiny new web application. This will make maintenance a breeze and many clients want a web application.
The application is extremely rich in domain knowledge, some of which is out of our control. Our clients use the program to manage their own clients and report them to the government. So an inaccurate program is a pretty big thing. The old program has no tests. We are not sure yet if we will implement automated testing with the new one.
We first planned to basically start from scratch, but we are short-handed and want to get everyone on the web as soon as possible. So instead of starting from scratch we've decided to try to make use of the legacy fat-client database.
The database is SQL Server and can be used in SQL Server 2008 easily. It is very rich in stored procedures, functions, a few triggers, and lots of tables with over 80 columns... But it is decently normalized. We want for both the web application and fat client to be capable of using the same database. This is so that if something breaks badly in the web application, our clients can still use the fat client and connect to our servers. After the web application is considered "stable", we'd deprecate the fat client.
Has anyone else done this? What tips can you give? After getting everyone onto the website, we want to slowly change the database structure to take care of some design deficiencies. What is the best way to keep this in a data access layer so that later changes are easy?
And what about actually making the screens? Is there any way easier than just rewriting an 80-field form in ASP.NET? Are there any tools that can make this easier?
The current plan is to use ASP.Net WebForms (.Net 3.5). I'd really like to use MVC, but no one on the team knows it including me.
We are not sure yet if we will implement automated testing with the new one.
Implement automated testing. What's the point in replacing one buggy program with another?
Good question, but "slowly change" the db structure after getting everyone on the website sounds like a joke...
I would rather take the opportunity to create a fresh db structure, write a bulletproof migration script for your db that you can try out and rewrite a zillion times without any side effects for your clients, and then write whatever you want (fat/web) on top of the new db, have it tested, and migrate everyone when it's ready.
I have a couple of suggestions:
1) Create a service layer to abstract away the dependence on the DAL. In a situation like the one you describe, having a layer of indirection for the UI and BLL to rely on makes DB changes much safer.
2) Create automated tests (both unit and integration), especially if you plan on making fairly significant changes to the Domain or Persistence layers (BLL/DAL). To make this really easy you should always try to program to an interface. This makes your code more flexible and lets you use mocking frameworks (Moq is one I like) to ensure your tests truly are unit tests and not integration tests (a small sketch follows below).
3) Take a look at DDD (http://domaindrivendesign.org/) as it seems to fit pretty well with the given scenario. At the very least there are some very useful patterns that can help make your application more flexible.
4) MVC isn't very hard to learn at all; it is, however, an easy way to get unit testing set up for the UI as a result of the MVC architecture (testing the controller and not the view). That said, there is no reason you couldn't unit test Web Forms, it's just a bit more work. MVC really is just a UI framework/design pattern (more Model2, but we can ignore that for now). It gets you closer to the metal, so to speak, as you will be writing a lot more HTML and using a Model (the 'M') for passing data around.
For DDD take a look at Eric Evans book: http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=sr_1_1?s=books&ie=UTF8&qid=1317333430&sr=1-1
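To make points 1 and 2 concrete, here is a small sketch of programming to an interface so the DAL can later be swapped or mocked; the Client type and member names are made up for illustration.

```csharp
using System.Collections.Generic;

// Hypothetical domain type used by the examples below.
public class Client
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The UI and BLL depend only on this abstraction, never on the concrete DAL.
public interface IClientRepository
{
    Client GetById(int id);
    IEnumerable<Client> GetAll();
}

// Thin service layer the web UI calls; later DB restructuring stays behind IClientRepository.
public class ClientService
{
    private readonly IClientRepository _repository;

    public ClientService(IClientRepository repository)
    {
        _repository = repository;
    }

    public Client LoadClient(int id)
    {
        return _repository.GetById(id);
    }
}

// In a unit test, Moq can stand in for the database, e.g.:
//   var repo = new Mock<IClientRepository>();
//   repo.Setup(r => r.GetById(1)).Returns(new Client { Id = 1, Name = "Acme" });
//   var service = new ClientService(repo.Object);
```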
Hope that helps
ASP.NET Web Forms is a non-starter; it is completely inappropriate for something like this. I recommend starting with something like "Creating an OData API for StackOverflow including XML and JSON in 30 minutes", then building your web app on top of that (i.e. push the data to the client and use jQuery/Silverlight).

Steps involved in reading an xml file from webservice .NET

I would like to know the basic steps involved in setting up your application to be able to read data from another application, then take that data, modify it, and send it back.
The data being read will have over 100 fields... what is the most efficient way to store them? Put them in a class object?
I know web services are involved... any other info would be great!
The application is in .NET, using VB.
Thanks
You may need to be more specific about your requirements to get a truly useful answer. That said, Windows Communication Foundation (WCF) is likely to make your life much easier. Google for tutorials -- I can't say I have a favorite. You can handle one- or two-way communication readily with WCF, and you can then focus more on making your application logic work.
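To give a feel for it, here is a minimal WCF service contract sketch (shown in C#, but it translates directly to VB.NET); the contract, operation, and field names are assumptions for a generic record exchange.

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Contract the other application calls to read data and send modified data back.
[ServiceContract]
public interface IRecordExchange
{
    [OperationContract]
    RecordDto GetRecord(int id);

    [OperationContract]
    void SubmitRecord(RecordDto record);
}

// With 100+ fields, a DataContract class keeps the payload strongly typed.
[DataContract]
public class RecordDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Field1 { get; set; }
    [DataMember] public string Field2 { get; set; }
    // ...remaining fields declared the same way
}
```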

Consuming StackOverflow API and Visual Studio 2010

I have downloaded TheWorldsWorstStackOverflowClone. One of the projects is called TheWorldWorsts.ApiWrapper, which basically is the core of accessing the API. There is a class called ApiProxy.cs, which has all the methods for the API calls. This is good.
What I want to do now is collect data through this API interface and store it in a database. I know the limit on API calls is 10k per day, i.e. I want to be able to call the methods in the ApiProxy class up to 10k times per day, automatically. How can I do this?
The non-automatic way would be to create a dummy site where every time I access the site it runs the whole process, but that is not efficient. It seems I would have to write some kind of scheduler and deploy a web service, but that is too complicated, as explained here. Are there any simpler methods?
A Windows Service or desktop app might be a better solution than a web application. You are not deploying a web service; you are consuming one using a proxy class, and that does not require you to have a web server or a web site.
You could use a web application to control and monitor progress as your service downloads data, but the actual work is long-running and needs to be offloaded to another process or thread so you can tell the user what's going on.
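As a rough sketch of that approach, the long-running work could live in a Windows Service with a timer; the service name is invented, the ApiProxy call is left as a comment because its exact methods depend on the wrapper, and the 9-second interval simply spreads ~10k calls over a day.

```csharp
using System;
using System.ServiceProcess;
using System.Timers;

// Windows Service that calls the API on a schedule instead of relying on page hits.
public class StackDataCollectorService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // 86,400 seconds per day / 10,000 calls = roughly one call every 8.6 seconds.
        _timer = new Timer(TimeSpan.FromSeconds(9).TotalMilliseconds);
        _timer.Elapsed += (sender, e) => FetchAndStoreBatch();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void FetchAndStoreBatch()
    {
        // Call the wrapper here (e.g. a method on ApiProxy from TheWorldWorsts.ApiWrapper)
        // and persist the result to your database.
    }
}

public static class Program
{
    public static void Main()
    {
        ServiceBase.Run(new StackDataCollectorService());
    }
}
```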
Check out this one:
http://stacky.codeplex.com/
This looks like what you need. I am facing some debugging issues with it, but hopefully you can figure it out.

Getting data out of PeopleSoft

We have a PeopleSoft installation and I am building a separate web application that needs to pull data from the PeopleSoft database. The web application will be on a different server than PeopleSoft, but the same internal network.
What are my options?
This one's an oldie but it may still be of interest.
PeopleSoft has its own schema within the host database (Oracle, SQL Server, DB2 etc), which comprises the PSxxx tables, eg: PSRECDEFN is the equivalent of Oracle's DBA_TABLES. These tables should not be touched by any external code. The application tables are stored in PS_xxx tables, eg: PS_JOB. These tables can be read and updated by any SQL code.
Many batch programs in PeopleSoft (eg: Application Engines, COBOL or SQRs) access the tables directly, and this is the fastest way to get data into or out of the database. However PeopleSoft has quite a rich application layer which is bypassed when doing direct SQL. This application layer must be replicated in direct SQL code, especially for inserts or updates. There may be updates to other tables, calculations or increments of database-stored counters.
To determine how to do this one must look through the PeopleCode (a VB6-like interpreted language), page design (via Application Designer) and use the PeopleCode and SQL trace tools. These days the application layer is huge, so this can be a lengthy task for non-trivial pages. PeopleSoft groups related pages into "Components", and all pages in the component are saved at the same time.
Component Interfaces were introduced with PeopleTools 8 as a means to avoid doing all of this. Using a generator within the PeopleSoft app designer, a Component Interface is generated based on the component. For many components these can be used to access the pages as a user would, and they can be called from PeopleCode programs, and therefore from App Engine programs and via the Integration Broker. They can also be wrapped in Java code and accessed directly by code able to execute against the app server, with a web service wrapper. This method is best for low-volume transactions: heavy extracts work better with native SQL.
The online development and tracing tools in PeopleSoft are pretty good, and the documentation is excellent (although quite extensive) and available on: http://download.oracle.com/docs/cd/E17566_01/epm91pbr0/eng/psbooks/psft_homepage.htm
If you are just looking at bringing out data from a given Component, the easiest way would be to turn on the SQL trace (under the utilities menu in PeopleSoft) and bring up some records for the Component. Wading through the trace file will give you a good idea of what to do, and much of the SQL can be cut and pasted. Another method would be to find an existing report that is similar to what you are trying to do and cut out the SQL.
Having a PeopleSoft business analyst on hand to help you develop the requirements wouldn't hurt either.
Yes - Integration Broker is PeopleSoft's proprietary implementation of a publish/subscribe mechanism, speaking XML. You could of course just write code that goes against your database using JDBC or OLE/ODBC; nothing keeps you from doing this. However, you must understand the PeopleSoft database schema so that you are pulling, inserting, updating or deleting all of the proper data. Going through PeopleSoft takes care of this for you.
Also, check out Component Interfaces; they are exposed as an API to Java and C/C++.
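If you do go down the direct-SQL route just mentioned, a minimal read-only sketch against a SQL Server-hosted PeopleSoft database might look like this; the connection string is a placeholder and the PS_JOB columns are only examples, so check your own schema.

```csharp
using System;
using System.Data.SqlClient;

// Read-only pull from a PeopleSoft application table (PS_JOB as an example).
// Inserts/updates are better done through Component Interfaces or Integration Broker.
class PeopleSoftReader
{
    static void Main()
    {
        var connString = "Server=ps-db;Database=PSFT;Trusted_Connection=True;"; // placeholder
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            "SELECT EMPLID, EFFDT, DEPTID FROM PS_JOB WHERE EMPL_STATUS = 'A'", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} {1} {2}",
                        reader["EMPLID"], reader["EFFDT"], reader["DEPTID"]);
                }
            }
        }
    }
}
```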
I guess it depends on your requirement, and which version of PeopleSoft you're on.
Do you want real-time lookup? If that's the case then you'll want to look at Web Services/Integration Broker.
If you want a batch/bulk export then a scheduled App Engine would do the trick.
The best way is to use Integration Broker (IB) services to expose the PeopleSoft database data to external applications. The external application will be able to access the PeopleSoft IB services as XML over HTTP, thus allowing you to use any widely used XML parsers for this purpose.
The problem with component interfaces as opposed to Integration Broker is that component interfaces tend to be much slower than direct DB access from within IB service PeopleCode. Also future additions to the component attached to the component interface sometimes tend to 'break' the interface.
For more details on PeopleSoft Integration broker, you can access the online documentation at http://docs.oracle.com/cd/E26239_01/pt851h3/eng/psbooks/tibr/book.htm
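On the consuming side, a rough sketch of calling an IB service as XML over HTTP from an external .NET application; the gateway URL and request/response message shapes are placeholders, since the actual operation is whatever you define and expose in PeopleSoft.

```csharp
using System;
using System.Net;
using System.Xml;

// Posts an XML request to a PeopleSoft Integration Broker listening connector
// and parses the XML reply with a standard .NET parser.
class IbClient
{
    static void Main()
    {
        // Placeholder; the real URL points at your PeopleSoft integration gateway.
        var url = "http://psgateway/PSIGW/HttpListeningConnector";

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/xml";
            string requestXml = "<GetEmployeeRequest><EmplId>12345</EmplId></GetEmployeeRequest>";
            string responseXml = client.UploadString(url, requestXml);

            var doc = new XmlDocument();
            doc.LoadXml(responseXml);
            Console.WriteLine(doc.OuterXml);
        }
    }
}
```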
Going directly to the database means you have to re-create the presentation logic... see my longer answer above. You can do this for simple pages but otherwise using a component interface is the way to go.
You can also write an SQR process for bulk data extraction. SQR will create an output file which the other application can pick up. SQR would be faster than Application Engine programs as it performs most of its operations in memory.
