Why would the installation method affect a .NET service?

I have built a WCF service that is hosted in a Windows Service following this article: http://msdn.microsoft.com/en-us/library/ms733069.aspx. Part of what the code in the service does is join a multicast group and listen for data that is broadcast to the group. Then it processes it. I have found that when I install the service manually using InstallUtil it works fine. To install it manually I do the following:
1. Build the MyService project in Visual Studio.
2. Right-click on the Visual Studio Command Prompt and choose Run As Administrator.
3. Navigate to the folder that has the MyService.exe file.
4. Run the InstallUtil command as follows: installutil.exe MyService.exe
The service installs in Windows without a problem and I start it. Then I run my ASP.NET application, which is the client for the service, and everything works: the service receives and processes the data as expected.
However, I am trying to use Advanced Installer to build an MSI or EXE that installs the service and the ASP.NET application all at once, so it doesn't have to be done manually. I am able to create the Advanced Installer project that does this; it installs both the ASP.NET application and the Windows Service just fine, and it starts my Windows Service too. The really strange thing is that when I run the application, my service code cannot receive any multicast data. It seems to block on that line of code and I never get any data. Does anyone know why this would happen? I have tried using an EXE and using "Run As Administrator" when I do the Advanced Installer installation. Here is the code from my service.
// Parse the multicast group address and bind a UDP client to the port
_groupAddress = IPAddress.Parse(_myIPAddress);
_listener = new UdpClient(_myPort);
_groupEP = new IPEndPoint(_groupAddress, _myPort);
_listener.JoinMulticastGroup(_groupAddress);
// Blocks until a datagram arrives on the group
byte[] _bytes = _listener.Receive(ref _groupEP);
It seems to block on that last line of code and it never receives any data. This only happens when I install using Advanced Installer. When I install manually it works fine.

A service is configured to run under the identity of a user account. Is this identity different when you install with the different methods?
Do you use the same port number in both cases? If not, it could be the firewall.
I'm 99% sure you have already checked this, but confirm that the service is actually running after it is installed with Advanced Installer.
Check the event log for problems with the service (a small diagnostic sketch follows below).
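To see whether the identity or the socket setup actually differs between the two installs, one thing you can do is log it from inside the service. This is just a rough sketch using the fields from your snippet; the "MyService" event source name is a placeholder you would replace with your own:

using System.Diagnostics;
using System.Net.Sockets;
using System.Security.Principal;

// In OnStart (or wherever the listener is set up), record the account the
// process is actually running under so the two installs can be compared.
EventLog.WriteEntry("MyService",
    "Service running as: " + WindowsIdentity.GetCurrent().Name,
    EventLogEntryType.Information);

try
{
    _listener = new UdpClient(_myPort);
    _listener.JoinMulticastGroup(_groupAddress);
}
catch (SocketException ex)
{
    // Surface any socket problem (e.g. the multicast join failing) in the Application log.
    EventLog.WriteEntry("MyService",
        "Socket error " + ex.SocketErrorCode + ": " + ex.Message,
        EventLogEntryType.Error);
    throw;
}

If both installs log the same account and no socket error, the difference is more likely the firewall or network configuration than credentials.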

It may be that your Windows Service is not running with sufficient credentials to perform this action. To test this, I'd recommend trying to change the user account being used for the service to see if that makes any difference.
To do this, open the Services applet (Start, Run, type services.msc). Find your service, right-click it, choose Properties, go to the "Log On" tab, choose "This account" and select an administrator account for the service to run under.
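If the service was set up with the standard ProjectInstaller from that MSDN walkthrough, the account is also controlled in code by the ServiceProcessInstaller, so you can switch it there and reinstall. A minimal sketch (the names here are illustrative, not your actual installer):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        // Account the service runs under. Change to ServiceAccount.User
        // (and set Username/Password) to test with an administrator account.
        var processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalSystem
        };

        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyService",          // placeholder service name
            StartType = ServiceStartMode.Automatic
        };

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}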

I initially thought, as the others said, that this was a problem with the user credentials. But since you said the service is installed under LocalSystem in both cases, the problem seems to lie elsewhere.
I recommend you first check the system Event Viewer for any messages about your service failing to start; maybe there you can find more information about the failure.
If you can't find more detail there, I suggest a little bit of reverse engineering to see what InstallUtil does that Advanced Installer doesn't, or the other way around. Advanced Installer comes with the Repackager tool. You can use this tool to capture the system changes performed when running InstallUtil by providing a dummy executable to the Repackager when it asks for the setup package, for example Notepad.
When the Repackager launches Notepad, leave it open and run your install command for the service; after the service finishes installing, close Notepad and let the Repackager do its job. Then analyze the new project that it generates to see what resources it has captured: files, registry, services, etc.
You can also capture the install package created by Advanced Installer to see whether its service installation creates more or less registration info for your service.

Related

Symbols not loading for remote debugging on ASP.Net app built and deployed from Azure DevOps Pipeline

I have an ASP.NET application whose code is sitting in an Azure Repo. The project has a build pipeline that builds on master branch merges. I then have a deployment pipeline that takes the latest build and deploys locally to my web server through a deployment pool I have running on that server. The web application builds with the VS Build task and deploys with the IIS Web App Deploy task. Both work fine.
I have one VM with Visual Studio that I am trying to use to remote-debug the web server. I have the VS Remote Tools on the web server and they run successfully. On my VM I can open VS and attach the debugger to a remote process on the web server. The problem is that the symbols are not loading, and I'm not sure what the correct sequence of steps is here.
First, it doesn't appear that there are any .pdb files in the build produced by the Azure pipeline. Second, I'm not sure what the proper way is to get the code onto the VM for debugging (clone the repo vs. download a zip, etc.). Third, I attempted to add a Publish Symbols task to my deploy pipeline; however, it's generating .pdb folders, not files, and I'm not sure where to place these either, on the web server or on the VM.
My background is in classic local TFS setups, so working, building and deploying from Azure DevOps has me confused about how to get remote debugging to work.
OK, this is not for the faint-hearted. It has taken me three hours to slowly work through this, but it's worth it. Many times something has worked locally, but then when you trigger a build agent with CI on a remote server you can't step through the code with breakpoints.
So this info applies if you are in the situation above: an Azure build agent and continuous integration. If you are using a Publish Profile, this doesn't apply.
First things first... The most important parts of this answer can be found in this blog:
https://willys-cave.ghost.io/i-have-a-dream-of-a-single-build-consistent-x-and-simple/
I've added that URL to the Wayback Machine at archive.org in case it disappears.
So yes, the problem is the .pdb files: they need to be included by adding a Publish Symbols task in your VSO pipeline.
Note: I had to change the BuildConfiguration parameter to debug (different from Willy's instructions). Otherwise, when you eventually start to hit breakpoints, the code is optimized and you won't see variable values in the hover-over etc.
In VS 2019, Willy's instruction for linking to the symbols during remote debugging sessions needs reading carefully. I didn't. There is a better image on:
https://devblogs.microsoft.com/devops/vsts-is-now-a-symbol-server/
The screen capture there shows the important part: you need to add your VSTS hostname to the list of symbol servers.
Now mine still wasn't hitting the breakpoints and I found this page (which is generally about using the slightly different method of Publish Profiles), but I noticed some more components were loaded into IIS... Yes! You may need these too.
https://learn.microsoft.com/en-us/visualstudio/debugger/remote-debugging-azure?view=vs-2019
The most important point from that page: you need to add IIS Management Scripts and Tools to your IIS installation.
That should do it. I also run my remote debugger as Administrator and attach it to w3wp.exe (show processes from all users). If it doesn't appear, reload the remote page and refresh the process list, because if the application pool goes to sleep you won't see it there.
Good luck!

How to get current user info in .NET Core?

How do I get the currently logged-in user on a Windows PC in the new .NET Core framework?
The old namespaces
System.DirectoryServices
System.DirectoryServices.AccountManagement
are not implemented yet :(
People advise the ported novell.ldap library, but I still do not understand how to get information about the user from it.
I can get the short name from System.Security.Principal.WindowsIdentity.GetCurrent().Name.
How do I get the full name, for example "Mary Ann"?
First, ASP.NET Core can run on Linux and Mac, where you have no "Windows" user at all.
But even if you run your app on Windows only, which user do you mean? If your website is running on a server behind IIS, then IIS is started as a service, without any interactive user, and your app is also started under some service account.
But if you run your app only on Windows and only interactively (a user starts dotnet run or similar), then the easiest way is to read the environment variables USERDOMAIN and USERNAME. In a command prompt, run set and you will see all environment variables currently defined in your session. You can read them via the Environment class or add them to the ConfigurationBuilder with AddEnvironmentVariables() in Startup.cs.
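For the interactive case, here is a small sketch of the approach above (assuming a Windows machine where USERDOMAIN and USERNAME are defined; the console app shape is just for illustration):

using System;
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main()
    {
        // Read the variables directly from the environment...
        string domain = Environment.GetEnvironmentVariable("USERDOMAIN");
        string user = Environment.GetEnvironmentVariable("USERNAME");
        Console.WriteLine($"Logged-in user: {domain}\\{user}");

        // ...or pull them in through configuration, the same way
        // AddEnvironmentVariables() is used in Startup.cs.
        IConfiguration config = new ConfigurationBuilder()
            .AddEnvironmentVariables()
            .Build();
        Console.WriteLine($"From configuration: {config["USERDOMAIN"]}\\{config["USERNAME"]}");
    }
}

Note that this only gives you the account name, not a display name like "Mary Ann"; for that you would still need a directory lookup once the DirectoryServices APIs (or an LDAP library) are available to you.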

How to install FitNesse on application server as war/ear

The FitNesse download page only has an option for the standalone .jar, and that is also what the instructions cover. Is it somehow possible to install FitNesse on a separate app server, such as Tomcat? There's no war/ear to download directly, but can I bundle one somehow?
I'm experimenting with acceptance testing frameworks and need to run the tests in a very specific test environment, and thus require the possibility of installing on an already running app container where the tests are executed. Chances of getting even a java executable available from the command line in this environment are slim, and even if it is possible, the process would probably take months to realize.
I do not believe it is possible, but even if you were to get the wiki running inside an app server, a test run would still try to start a new java process (by starting the java executable) so you still need access to that executable.
But does the test environment really need to be in the app server? I usually use FitNesse to test an application from the outside: the test framework makes remote (http) calls to an application running in an app server, but it does not run in that same app server itself.

Can an Azure web role instantiate an ActiveX component?

I have an ASP.NET website that I'm looking to migrate over to Azure. I have been doing some analysis of the website and code to understand issues with the migration. I am confident that 95% of the code will be fine, as most of it is pretty standard Web Forms and .NET programming.
However, I have just run across an ActiveX component that is installed into the \windows directory on the web server.
I am wondering if this will be an issue for the migration? There could easily be a number of follow-on questions as well, depending on the answer. How do Azure web roles handle instantiation of ActiveX server components? Can I include the DSINTX.OCX file in the solution, or do I wrap it in a .NET assembly?
// COM interop reference to the DSINTX ActiveX control
private DSINTXLib.Dsintx m_dsintx;
...
// Instantiate the control through the interop assembly
m_dsintx = new DSINTXLib.DsintxClass();
Installation of the ActiveX component should not be difficult. You can use a startup task running elevated to install it, assuming that there's an unattended installation mode for it. I blogged about this process for a Windows Service a while back.
http://blogs.msdn.com/b/golive/archive/2011/02/11/installing-a-windows-service-in-a-worker-role.aspx
If you don't have an installation file, then create a script that installs and registers the control and then use RDP to your role instance to debug. The blog post goes over some of these techniques as well. (Use notepad to create the command file, not VS.) You can add the OCX to your project, but be sure to set the Copy Local property to True so it becomes part of the package that is sent to Azure.
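If you want to verify at runtime that the startup task actually registered the control before your code tries to use it, a late-bound check is one option. This is just a sketch; the ProgID "DSINTX.Dsintx" is a guess based on the interop names in the question, so substitute the real one:

using System;

// Late-bound check that the ActiveX control is registered on the role instance.
Type dsintxType = Type.GetTypeFromProgID("DSINTX.Dsintx"); // ProgID is an assumption
if (dsintxType == null)
{
    // Registration failed or the startup task did not run on this instance.
    throw new InvalidOperationException("The DSINTX ActiveX control is not registered.");
}

object dsintx = Activator.CreateInstance(dsintxType);

If the check fails on a fresh instance, the startup task (or the regsvr32 call inside it) is the first place to look.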

What could be good ways to deploy ASP.Net Web Applications?

We currently deploy web applications by creating a database and running SQL scripts through query analyzer. Then we copy the output from "publish website" and set up that website in IIS.
We have seen Web Setup projects in Visual Studio, but that part seems to be thinly documented. For example, we are not clear on how to ask the user for the IP address and password of the SQL Server. We also tend to get websites deployed this way coming up under folders like http://example.com/project, instead of just http://example.com.
Then there are issues with AJAX.Net not being installed, or some other patch not being applied.
So far, we have physical access to the servers. Pretty soon though we are going to be shipping CDROMs. What is the practical tradeoff between manual intervention and automation?
Avoid Visual Studio deployment, and automate as much as possible. Web Deployment Projects and NAnt can be your friends!
Briefly, our deployment setup:
1. We use RedGate SQL to script the differences between the dev and live databases.
2. An NAnt build file calls MSBuild to build the web deployment project (.wdproj), zips up the resulting compiled web app (along with the SQL change script) and then uploads the zip file to the server.
3. On the server side, another NAnt build file takes the application offline, backs up the database, backs up the website, runs the SQL change script, unzips the new version and brings the app online.
Step 3 is usually run "manually" (one double-click), but sometimes scheduled for late at night. You could do exactly the same from a CDROM, or even write a pretty little Windows Forms app as a wrapper.
Quite happy to give details of the NAnt script if you're interested.
Have you tried using a Web Deployment Project? There is support for VS 2008 now as well.
I deploy mostly ASP.NET apps to Linux servers. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
