One of our clients has a Java EE application. We would like to develop a new project using ASP.NET/C# and host it as a subdirectory under this Java EE project.
My questions are:
Will the .NET application run smoothly?
Do I need to keep anything in mind before I make a promise to the client?
The way you structure your projects does not affect the behavior of your applications at all.
In the end, as long as each resource, compiled or not, is configured properly for its own web server, you shouldn't have any problem at all.
IIS has its own directory and Tomcat (or whatever you are using) will have its own.
Just make the client understand that there is no sense in sharing a single root folder if the projects are not going to be related at all.
The only way to make them interact is by means of services and queues, which you can orchestrate from either technology.
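For example, here is a minimal sketch of the .NET side calling a REST service exposed by the Java EE application over HTTP; the URL and endpoint are hypothetical, and any REST or queue contract would do:

    // Minimal sketch: the .NET app talks to the Java EE app over HTTP.
    // The endpoint URL below is made up for illustration.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class JavaServiceClient
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Tomcat's default port is 8080; adjust to your deployment.
                string json = await client.GetStringAsync(
                    "http://localhost:8080/javaapp/api/status");
                Console.WriteLine(json);
            }
        }
    }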
UPDATE
Let's suppose you are using the defaults of both web servers: IIS needs your applications copied under the c:\inetpub\wwwroot folder, whereas Tomcat uses the $CATALINA_BASE environment variable to locate its own folder. That won't be a problem at all.
Now, let's suppose your client chose the exact same folder to be the root of your websites in both Tomcat and IIS (a very bad maintenance decision, by the way).
You could still separate the two environments by having two subfolders: JAVA and DOTNET.
Now let's suppose your client won't accept any logical suggestion and you have to merge the Java files and the .aspx files. Technically there still won't be an issue, because each web server handles requests for very different file types. However, if both sites use the same resources, say a picture referenced from both pages, you can run into locked-file issues: IIS can only answer for its own behavior, and Tomcat will only answer for its own.
So, in summary: technically speaking it could work, though performance will take a hit on your hard drive depending on the request load of each app, but overall it is a bad infrastructure design.
Hope it helps.
Related
I have a dedicated machine with at least 6 different ASP.NET 4.5 applications, for which the developer deployed compiled versions. These apps are all working fine now, but I don't have access to the source code.
Now I want to deploy these apps to Azure, but not to a VM; to an Azure Web App Service instead. Is it possible?
Thanks in advance!
Quite possibly. We can't say for sure without more information.
You'll need to FTP all files from your existing root directory/directories to your new Web App. If it's a vanilla ASP.NET web app and there are no external dependencies you can't satisfy (such as databases on other servers that you cannot move, or firewalls you cannot poke holes through), it should work.
There are many considerations. For instance, if the applications have dependencies on specific drive letters, you won't be able to mount those drives.
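For instance, hard-coded paths like the following (hypothetical) would break, because you cannot mount an arbitrary drive inside the sandbox:

    // Hypothetical hard-coded drive dependency: an Azure Web App cannot
    // mount an E:\ drive, so this read would throw at runtime.
    using System.IO;

    class LegacySettings
    {
        public static string Load()
        {
            return File.ReadAllText(@"E:\AppData\settings.xml");
        }
    }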
This is just one example; you can take a look at the full list of restrictions imposed on Web Apps: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox
The best way to know is to create a new site, deploy the files, and see what breaks.
ASP.NET Web Forms apps have a large startup cost on the first request, as they need to compile the view code (.ascx, .aspx, etc.). We have to deploy these projects to several servers, which requires priming each of them so that the first users to hit certain areas of the site don't see a bad response time. Today this is a manual process, and we're automating it by running aspnet_compiler.
Is it possible to run aspnet_compiler on the build server and deploy its output, so that we do not have to run it on each web server we're deploying to?
Bonus Question: When we specify a target directory with the targetdir option in aspnet_compiler, how does IIS know where to look for the compiled files? i.e. Where is that information stored?
The project in question is a Web Application project (not a Web Site project).
There is an option in Visual Studio's Publish dialogue which may be what you want.
I've not used it myself to confirm it has the effect you're after, but there are several options for it under Configure. You may be able to do what you want with that.
(P.S.: This is an ASP.NET Web Forms Project as well, in VB.NET. Though the language is irrelevant.)
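If you would rather script it than click through the dialogue, the same precompile options can be set in the publish profile. This is only a sketch, assuming the standard Visual Studio web publishing targets; verify the property names for your version:

    <!-- In the project's .pubxml publish profile (assumes the standard
         VS web publishing pipeline). -->
    <PropertyGroup>
      <PrecompileBeforePublish>true</PrecompileBeforePublish>
      <EnableUpdateable>true</EnableUpdateable>
    </PropertyGroup>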
Also, according to this MSDN article, you can pass the -v and -u switches to aspnet_compiler to precompile the application for a virtual path while keeping the markup updatable.
You may also want to look at this MSDN article as well, as it describes a scenario similar to yours. Especially the Compiling an Application for Deployment section, which states:
You compile an application for deployment (compilation to a target location) by specifying the targetDir parameter. The targetDir can be the final location for the Web application, or the compiled application can be further deployed.
Essentially, you might be able to run this command on your build server, and then distribute your binaries as needed.
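For instance, an invocation along these lines (the paths are placeholders) precompiles to a target folder that you can then copy to each web server:

    rem -v is the app's virtual path, -p the physical source folder,
    rem -u keeps the .aspx markup updatable, -f overwrites the target,
    rem and the final argument is targetDir.
    aspnet_compiler -v /MyApp -p C:\Build\MyApp -u -f C:\Deploy\MyApp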
Our solution file contains many different projects including an ASP.NET MVC web app, a windows service, and several desktop applications. One project handles logging and has an app.config file containing a list of recipients to inform when something fatal has been logged.
Concerning the deployment of our web app, I was wondering: would it be better to create a section in the Web.config containing this information, so that I wouldn't have to deploy the logger's app.config? Or is it better to deploy it separately, since other applications use the logger and it should depend on its own file to tell it whom to inform?
Typically it's best to push the responsibility of infrastructure configuration up to the consuming client application (whether that be configuration through code, Web.config, app.config, etc.). That is to say, if the library is expecting certain sections to exist in the default config file, make providing this the responsibility of the client's config.
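For example, the web app's own Web.config could carry the logger's settings, and the logger simply reads from whichever config file belongs to the host process. The key name here is made up for illustration:

    <!-- In the consuming app's Web.config (or a desktop app's app.config) -->
    <appSettings>
      <add key="FatalErrorRecipients" value="ops@example.com;dev@example.com" />
    </appSettings>

    // Inside the logging library (requires a reference to
    // System.Configuration):
    using System.Configuration;

    static class LogRecipients
    {
        // ConfigurationManager reads the host process's own config file,
        // so each consumer supplies its own recipient list.
        public static string[] Get()
        {
            string raw = ConfigurationManager.AppSettings["FatalErrorRecipients"];
            return raw == null ? new string[0] : raw.Split(';');
        }
    }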
Do what makes the most sense to you and your team.
I've seen this done a dozen ways, and it never changed how successful the application was; no customer, stakeholder, or end user ever cared.
I have a presentation web farm with four load-balanced servers. I have one web application with two website domains that represent that application. Rather than constantly push to two different folder locations, I figured I could push to one location and have a virtual directory on each website point to the single folder on the web server.
Here is the setup:
The load balancer is CoyotePoint. The web framework is ASP.NET 3.5, on IIS 6 (slowly moving to 7).
I'm concerned about performance in a production environment. Are there any ramifications to having two websites with virtual directories pointing to the same directory on disk? Should I also be worried about application pools?
I think I found what I was looking for. It's called Centralized Web Farm Management for IIS. Specifically, I think the Shared Configuration feature and the Web Deployment Tools for web farms are exactly what I need!
It depends on what this application is doing. If you're doing anything at all fancy with System.IO you're going to run into issues.
There are other ways to make the pushing of files easier. I highly recommend creating a quick bat file with a few robocopy commands in it.
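Something like this mirrors one build output to each target folder in a single step; the paths and server names are placeholders:

    rem push-sites.bat: mirror the build output to each site folder.
    rem /MIR mirrors the tree; /R and /W keep retry waits short.
    robocopy C:\Build\MySite \\web01\Sites\MySite /MIR /R:2 /W:5
    robocopy C:\Build\MySite \\web02\Sites\MySite /MIR /R:2 /W:5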
I have been struggling with the best way to make sure that certain XML configuration files stay synchronized between multiple servers in a web farm. I am not as concerned about the Web.config as I am about some of the other configuration files present in the application.
For example, we store caching policies in an external XML file, where it has its own schema, and will soon have its own tool to maintain the values. Once the changes are applied, they should be migrated across the farm.
Some scenarios that I have considered so far:
RoboCopy, replication, or equivalent. This requires that work only ever be done on a particular node of the farm. (Push to application.)
Configuration Server. All external configurations and their tools are stored on a physical instance of IIS. The application will retrieve these configuration files on application start and periodically poll for changes. (Application polls.)
Team Build. We could host the tools in-house and set up a post-build process to deploy the files. (Application polls.)
Database Storage. Applications could read and poll database for configuration. (Application polls.)
All of these scenarios have their pros and cons. I am not sure what the best solution is, although I think having the application poll for changes might be the cleanest approach. Still, the question is: how would you best accomplish this?
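For concreteness, the kind of polling I have in mind looks roughly like this; the class, file handling, and interval are made up for illustration:

    // Rough sketch of the "application polls" approach: keep the parsed
    // document cached and reload only when the file's timestamp changes.
    using System;
    using System.IO;
    using System.Threading;
    using System.Xml.Linq;

    public class PolledXmlConfig
    {
        private readonly string _path;
        private readonly Timer _timer;   // held in a field so it isn't collected
        private DateTime _lastWriteUtc;
        private XDocument _current;

        public PolledXmlConfig(string path)
        {
            _path = path;
            Reload();
            // Re-check every 60 seconds; a timestamp check is cheap
            // compared to re-parsing the XML on every read.
            _timer = new Timer(_ => CheckForChanges(), null,
                               TimeSpan.FromSeconds(60), TimeSpan.FromSeconds(60));
        }

        public XDocument Current { get { return _current; } }

        private void CheckForChanges()
        {
            if (File.GetLastWriteTimeUtc(_path) != _lastWriteUtc)
                Reload();
        }

        private void Reload()
        {
            _lastWriteUtc = File.GetLastWriteTimeUtc(_path);
            _current = XDocument.Load(_path);
        }
    }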
I have a web cluster running Windows Server 2008. To keep everything synchronized I'm using a DFS share that is on each of the members. Any changes made on any member are replicated to the others. I'm also using the IIS "Shared Configuration" feature to store my metabase within this DFS share, so all of the IIS settings are replicated as well.
On a project I worked on, we used a product called ServerSync to replicate files across the farm. It works pretty well, and it is fast.