I've got problems with session replication on Glassfish 3.1.1 Open Source edition.
There are two physical servers in one cluster. On the first one there are the DAS and instance 1. On the second physical server there is instance 2. Both servers run Windows 7 x64. I am following this tutorial:
http://javadude.wordpress.com/2011/05/12/glassfish-3-1-%E2%80%93-clustering-tutorial-part2-sessions/
As far as I understand, when session replication works there should be the same session when I visit the web app on both physical instances. Thus, when I log in on instance 1, I should automatically be logged in on instance 2 as well. Is this right?
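For reference, as far as I know GlassFish only replicates sessions for web apps that are marked distributable in web.xml; a minimal fragment (Servlet 3.0 assumed, as in a typical GlassFish 3.1 deployment):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Required before GlassFish will replicate sessions across cluster instances -->
    <distributable/>
</web-app>
```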
Does anybody know how to solve this problem?
Thanks in advance.
I have an old ASP.NET 4.6.1 app running in a VM on Azure.
I’m trying to create messages in an Azure Storage Queue, and nothing happens when I run the app on the production VM. However, on my dev machine it works fine, and I can create messages in the same queue that I’m trying to access from the production VM.
The call to the queue is within a try catch block and it’s not throwing any errors.
Another important point is that I had to use the old/deprecated WindowsAzure.Storage NuGet package, as that’s the one that seems to work with this ASP.NET MVC 4.6.1 app.
Any idea what could be the issue here? Because I don’t see any errors, I’m not sure how to go about fixing this problem.
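A broad try/catch that doesn't log anything will hide exactly this kind of failure (for example a 403 from a bad storage key, or a blocked outbound port). An illustrative sketch of the pattern in Python, with `send_fn` standing in for the real SDK call:

```python
import logging

def try_send(send_fn, message):
    """Attempt a queue send; log the failure instead of silently swallowing it.
    send_fn is a placeholder for the real SDK call; returns True on success,
    False on any failure (which is then visible in the logs)."""
    try:
        send_fn(message)
        return True
    except Exception:
        # logging.exception records the full stack trace at ERROR level,
        # so a 403 or timeout from the storage endpoint shows up in the log
        logging.exception("queue send failed for message %r", message)
        return False
```

Temporarily rethrowing, or at least logging the caught exception like this, is usually the quickest way to see whether the production VM is hitting an auth or network error that the dev machine isn't.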
According to MS Docs, one troubleshooting option you can try is "Redeploy Windows virtual machine to new Azure node"
The doc says,
If you have been facing difficulties troubleshooting Remote Desktop (RDP) connection or application access to Windows-based Azure virtual machine (VM), redeploying the VM may help.
Source: https://learn.microsoft.com/en-us/azure/virtual-machines/troubleshooting/redeploy-to-new-node-windows
See also these additional troubleshooting steps:
- Restart the virtual machine
- Recreate the endpoint / firewall rules / network security group (NSG) rules
- Connect from a different location, such as a different Azure virtual network
- Recreate the virtual machine
There are various reasons why you cannot start or connect to an application running on an Azure virtual machine (VM). Reasons include the application not running or listening on the expected ports, the listening port being blocked, or networking rules not correctly passing traffic to the application.
Source: https://learn.microsoft.com/en-us/azure/virtual-machines/troubleshooting/troubleshoot-app-connection
This might be a network firewall issue. Open the Azure portal from the production VM. You can even try to manually browse the storage account and upload files from the web portal.
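A quick way to test reachability from the VM without the portal is a plain TCP connect to the storage endpoint (the queue endpoint follows the pattern `<account>.queue.core.windows.net`); a minimal sketch:

```python
import socket

def can_reach(host, port=443, timeout=5):
    """Quick TCP reachability check from the VM, e.g. to
    myaccount.queue.core.windows.net on port 443 (account name hypothetical).
    Returns False on DNS failure, refusal, or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False on the production VM but True on the dev machine, a firewall or NSG rule on the outbound path is the likely culprit.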
My ASP.Net site runs as a farm of Windows EC2 web servers. Due to a recent traffic surge, I switched to Spot instances to control costs. Spot instances are created from an AMI when hourly rates are below a set rate. The web servers do not store any data, so creating and terminating them on the fly is not an issue. So far the website has been running fine.
The problem is deploying updates. The application is updated most days.
Before the switch to a Spot fleet, updates were deployed as follows: (1) a CI server would build and deploy the site to a staging server; (2) I would do a staggered deployment to the web farm using a simple xcopy of files to mapped drives.
After switching to Spot instances, the process is: (1) {no change} (2) deploy the update to one of the spot instances (3) create a new AMI from that deployment (4) request a new Spot fleet using the new AMI (5) terminate the old Spot fleet. (The AMI used for a Spot request cannot be changed.)
Is there a way to simplify this process by enabling the nodes to either self-configure or use a shared drive (as Microsoft Azure does)? The site runs the Umbraco CMS, which supports multiple instances running from the same physical location, but I ran into security errors trying to run a .Net application from a network share.
Bonus question: how can I auto-add new Spot instances to the load balancer? Presumably if there was a script which fetched the latest version of the application, it could add the instance to the load balancer when it is done.
I have a somewhat similar setup (except I don't use spot instances and I have Linux machines); here is the general idea:
- CI creates latest.package.zip and uploads it to a designated S3 bucket
- CI sequentially triggers the update script on the current live instances, which downloads the latest package from S3 and installs it / restarts the service
- New instances are launched in an Auto Scaling group, attached to the load balancer, with an IAM role that allows access to the S3 bucket and a user data script that triggers the update script on initial boot
This should all be doable with Windows spot instances, I think.
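The per-node update flow above can be sketched as one small script; the three callables are placeholders for the real steps (an S3 download, an unzip over the web root, an IIS app-pool recycle), since the concrete commands depend on your stack:

```python
def run_update(download, install, restart):
    """One node's update cycle, with placeholder callables:
    download() fetches latest.package.zip (e.g. from S3 via the instance's
    IAM role) and returns the local path, install(path) unpacks it over the
    site directory, restart() recycles the service or app pool."""
    path = download()
    install(path)
    restart()
    return path
```

The same script can run both from CI (against live instances) and from a user data hook at first boot, which is what lets new instances self-configure before the load balancer starts sending them traffic.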
I have a dedicated Windows Server 2012 machine, let's call it 'A', where my ASP.NET 4.5 website is up and active on IIS 8. Due to the many downtime issues, I want to move my website to Azure.
I created an Azure VM with an identical environment and copied all files and database. However I am unable to get the website up and running (2 days now) on the VM.
Is there a way using which I can replicate the exact environment that I had in Server 'A'?
You can copy your existing VM (if it is in VHD format) to Azure. See https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-createupload-vhd/ for more information.
If your server is a physical machine, you can use something like https://www.microsoft.com/en-us/download/details.aspx?id=42497 to do a conversion, and then upload that to Azure.
Lastly, you can use tools such as Chef, Puppet, or DSC to ensure that server configurations are standard across machines.
I’m trying to get remote debugging working again on a Windows 2003 server that I use for ASP.Net 2.0 development. I had everything up and running for months; then one day I was forced by our AD policy to change my password, and remote debugging has not worked since.
I have a Windows 2003 server running virtually (MS Virtual PC 2007) on the same system that I run my Visual Studio 2005 IDE from (Windows XP Pro). Both systems are members of the same domain and my domain account is in the admin group of both systems. I’m logged into the XP machine running Visual Studio and the Windows server running the Debug Console using this domain account.
When I try to attach to the remote debugger from within VS, I get an error after about a minute: “Unable to connect to the Microsoft Visual Studio Remote Debugging Monitor named ‘myServer’. This operation returned because the timeout period expired.”
When attaching from VS, I have used just the server name as well as the full debug server instance name (Domain\user#myServer) that is listed in the console but I get the same results. I have also tried running the console as a service using my domain account (this is the original way I had it setup) and from a share on the XP PC running VS….again same results. I also checked the permissions on the Debug Console and both the admin group as well as my domain account are listed and Debug is set to “Allow” for both.
When I try to attach to the debugger from within VS I can see the connection popup on the Remote Debug Console of the server and it says it’s connected but I noticed that it’s trying to connect as a different user than what I’m logged into either machine as. The debug console shows the connection belongs to a local account (myServer\user1)… I would expect to see Domain\User. The local account that shows in the debug console does exist locally on both systems and is in the local admin group on each system but I have no idea why it would all of a sudden try to use that account rather than the domain account I’m logged in as on both systems.
As I mentioned, everything was working for months and only seem to stop functioning after the AD account password was changed.
Does anyone have any ideas on what could be causing this issue?
Any help would be greatly appreciated.
Thanks!
I thought I would put an update to this question in case someone else runs into a similar issue.
Well, after pulling my hair out over this issue for a couple of days, I decided to try installing VS 2005 on a new workstation. I was able to connect to the remote debug process of my dev server without any problems.
I’m still not 100% sure what the actual cause of the problem was, but I think it was related to two different issues. Once I was able to get remote debugging working from the new workstation, I decided to reset the VS environment settings on the machine I was having issues with. I’m not sure why I did not try this sooner, but this seems to have taken care of the problem, and remote debugging now works from the system I was having problems with.
The issue of the wrong user was due to a cached login account being used over that of the currently logged in domain user. I ran "%windir%\system32\rundll32.exe keymgr.dll, KRShowKeyMgr" on the workstation running VS, found the local account in question, deleted it from the Stored User Names and Passwords box and from that point on the remote debugger used my domain account as it should.
After deploying a new build (mostly changed DLLs) of an ASP.NET web app, the CPU on the server now jumps to 100% every few seconds, and the culprit is lsass.exe. Do you think the deployment of the ASP.NET web app to the server and this problem are related, or is it a coincidence that they happened at the same time?
More info:
This is the first time that I've done the build on a Server 2008 x64 machine. Previously, the builds were done on a Server 2003 x86 machine. The target is "Any CPU", so it should work on either. The server it is deployed to is Server 2003 x86.
I've searched the web for more info on this and have confirmed that the process is lsass.exe (first character a lower case L and not an upper case i) so ruled out the virus version. Found some docs relating to a Server 2000 bug but doesn't apply here.
I eventually isolated the problem to an ASP forum running "under" that ASP.NET web app. Using the admin page on the forum I took the forum down and then brought it back up again and the problem disappeared. I find this very frustrating because the problem has now gone but I don't know what caused it and as such it could easily return.
I also installed this Microsoft Hotfix and rebooted this server but that didn't work.
Have you checked the System and Application event logs for anything unusual?
Have you updated the app to use the Active Directory role provider? I've seen issues where enumerating groups for role checking pegs the CPU and really slows down the app. I actually implemented a customized provider that let me specify a particular OU and the set of groups I actually care about to get around this issue.
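Whatever the stack, the general shape of that mitigation is to restrict the lookup to the groups you care about and cache the result per user. An illustrative sketch in Python (the group names and the `query_directory` callable are placeholders, not real AD API calls):

```python
from functools import lru_cache

# Hypothetical names: the only groups the app actually checks roles against.
INTERESTING_GROUPS = frozenset({"AppAdmins", "AppUsers"})

def make_role_lookup(query_directory, interesting=INTERESTING_GROUPS):
    """Wrap a directory query (query_directory takes a username and returns
    its group names) so results are filtered to the interesting groups and
    cached per user, avoiding a full group enumeration on every role check."""
    @lru_cache(maxsize=1024)
    def roles_for(user):
        return frozenset(query_directory(user)) & interesting
    return roles_for
```

With the cache in place, the expensive directory enumeration happens at most once per user rather than on every page request's role check.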
The xperf tools distributed in the Windows Performance Toolkit will tell you exactly what is using CPU time or disk bandwidth. These tools are free and work on any retail build of WS2008 or Vista. Here is a series of posts on the xperf tools that I wrote.