IIS website authorization failure - asp.net

I have mapped the WWWRoot folder to the D: drive and also enabled shared configuration in IIS7. The physical path 'Test Settings' check says authentication is successful, but authorization says "Path might not exist and could not be verified (D:\wwwroot)".
Details: The path does not exist or environment variables in the path could not be expanded to verify whether it exists.
P.S.: The D: drive was created using the SUBST command, which maps a folder on the C: drive as a separate drive. I don't understand what is causing this issue.

subst is a command-line shortcut that applies only to your interactive session. Other users on the computer (including the system user that the IIS service will be running as) won't see any D: drive.
If you really must expose a directory as a separate drive, try the DosDevices registry hack outlined in this question.
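For illustration, a minimal sketch of both approaches, assuming the underlying folder is C:\wwwroot (substitute your actual folder). The registry mapping is machine-wide and takes effect after a reboot:

:: Per-session mapping; services such as IIS will not see this drive:
subst D: C:\wwwroot

:: Machine-wide mapping via the DOS Devices registry key
:: (run as administrator; reboot for it to take effect):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices" /v "D:" /t REG_SZ /d "\??\C:\wwwroot"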

Running access for Artifactory

New to Artifactory so please bear with me.
Trying (and failing) to create new access token.
The GUI in Artifactory has nothing for this but points to a user's guide (https://www.jfrog.com/confluence/display/RTF/Access+Tokens) that talks about managing access tokens through a WAR file.
Here is the blurb:
Access Service
From Artifactory version 5.4, access tokens are managed under a new service
called Access which is implemented in a separate WAR file, access.war. This
change has no impact on how access tokens are used, however, the Artifactory
installation file structure now also includes the added WAR file under the
$ARTIFACTORY_HOME/webapps folder. Artifactory communicates with the Access
service over HTTP and assumes it is running in the same Tomcat using the
context path of "access".
OK, great. So how do I access this thing?
I also don't know much about web apps/servers. Prior to today, I thought WAR was a fight between nations :-)
My Artifactory server proc is running, and I can confirm that the access war file (apparently a jar file of sorts) is in the webapps dir.
I am able to reach Artifactory via "http://myserver:8081/artifactory/webapp/#/home".
As it turns out, I believe the interface to manage access tokens is not provided through a GUI. Rather, you have to use the REST API, e.g. with curl.
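For example, a hedged sketch using the documented token endpoint (the host, admin credentials, and username are placeholders for your own values):

:: Creates a token for "myuser", scoped to the readers group, valid for one hour:
curl -u admin:password -X POST "http://myserver:8081/artifactory/api/security/token" -d "username=myuser" -d "scope=member-of-groups:readers" -d "expires_in=3600"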
The documentation mentions:
It is up to the Artifactory administrator to make sure that all participating instances are equipped with the same key pair.
That means you need to have access to the server (where Artifactory is installed).
On that server, the folder where Artifactory is installed is referenced as ARTIFACTORY_HOME.
That is what is used in the next doc extract:
Start up the first Artifactory instance (or cluster node for an HA installation) that will be in your circle of trust. A private key and root certificate are generated and stored under $ARTIFACTORY_HOME/access/etc/keys.
Copy the private key and root certificate files to a location on your file system that is accessible by all other instances/nodes that are in your circle of trust.
Before bootstrapping, for each of the other instances/nodes, create the $ARTIFACTORY_HOME/access/etc folder and create a properties file in it called access.bootstrap.config with the following contents:
key=/path/to/private.key
crt=/path/to/root.crt
When each instance/node starts up, if the $ARTIFACTORY_HOME/access/etc/access.bootstrap.config file exists, then the private key and root certificate are copied from the specified location into the server's home directory under $ARTIFACTORY_HOME/access/etc/keys.
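On a Windows node, that step might look like the following sketch (the C:\shared\keys paths are hypothetical placeholders for wherever you copied the key and certificate):

:: Create the folder and write access.bootstrap.config before first start-up:
mkdir "%ARTIFACTORY_HOME%\access\etc"
(
  echo key=C:\shared\keys\private.key
  echo crt=C:\shared\keys\root.crt
) > "%ARTIFACTORY_HOME%\access\etc\access.bootstrap.config"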

Unable to get temp directory for .NET web site hosted in Azure App Service

We're working on validating our Loupe service to run as an Azure App Service and have run into a showstopper we can't figure out. Anything that attempts to resolve a temp directory fails with the exception:
mscorlib : System.IO.IOException
The directory name is invalid.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.__Error.WinIOError()
at System.IO.Path.InternalGetTempFileName(Boolean checkHost)
The stack trace ends inside the .NET method for generating a temp file name, and it is common to pretty much all the areas where we get the failure. For a while it seemed that forcing the site to restart and/or rescaling the underlying App Service Plan made the problem go away until we next updated the site, but no longer.
The only search results we could find say this error happens when impersonation is enabled and the impersonated user doesn't have access to the IIS App Pool user's temp directory, so we've dug into that. First, we can confirm from our logging that the thread is not impersonating at the time the failed request is made. Second, just for fun, we added this to the web.config to be doubly sure:
<system.web>
<identity impersonate="false"/>
</system.web>
All to no avail. If this were a generic problem with Azure App Services, I would presume it would break many systems, so I have to conclude we've done something fascinating and wrong to cause it.
This might not be the exact answer you're looking for but it might help point you in the right direction.
I had similar issues a while back using the Azure App Services. I found that accessing the local file system was somewhat problematic. Sometimes it worked fine and other times it didn't.
Eventually, I discovered that when an Azure App Service is instantiated, it doesn't always use the same drive letters for the system behind it. In some cases, this can cause the environment variables to be blatantly incorrect. They "think" they are set properly, but that's not always the case.
Generating a temp filename will use that environment variable for the path, and if it's set to C: but the machine has a D: drive instead, it will fail. The C: drive doesn't exist, and therefore the path to the temp file can't exist either.
To identify if this is the problem, you need to enable RDP so you can log into it directly. https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-role-enable-remote-desktop
It's the only way I was able to eventually figure it out.
If you open up the Kudu instance for your App Service Web App you'll be able to see what the local Temp directory is on the Managed VM underneath. You can access Kudu by going to "Advanced Tools" on the App Service blade in the Azure Portal, or by navigating to the https://{web app name}.scm.azurewebsites.net domain for your Web App.
Once in Kudu, click on Environment in the top navigation. The Temp directory is usually D:\local\Temp and that path is stored in the "TEMP" environment variable made accessible to your Web App.
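As a quick sanity check from the Kudu debug console (Debug console > CMD), you can confirm where TEMP points and that the folder actually exists:

:: Run inside the Kudu console of the affected Web App:
echo %TEMP%
dir "%TEMP%"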

Deploying not happening in publishing process

I am trying to publish to the local file system; however, publishing is not completing properly and deployment fails in my 2011 GA VM environment.
I am getting "Polling for notification for destination: YTnMgU6u5Vh09cOGUG7ouA== has exceeded polling attempts for transaction: tcm:0-121257-66560" error in "Preparing Deployment" stage.
I have used the "Local File System" protocol in my publication target, and I have provided a path like d:\tridion\publish.
I have provided the same path in cd_storage_conf.xml under <storage type="filesystem">. All other storage types are commented out.
In cd_deployer_conf.xml, the Queue location path is c:\tridion\incoming.
When I publish any page to my publication target, the zipped package is placed in d:\tridion\publish, but it is not deployed.
Do I need to do any other thing to deploy the zipped package?
The path provided in cd_deployer_conf.xml (the one you specify in Queue/Location!) needs to be the same one you provide in your publication target; in your case the publication target points at a path on the D: drive while the deployer config points at one on the C: drive. Then you also need to make sure that your deployer is initialized. You can easily determine whether it is initialized by checking if meta.xml is regenerated in the deployer's incoming folder.
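As a sketch, the relevant fragment of cd_deployer_conf.xml would then point at the same folder as the publication target (surrounding elements and additional attributes depend on your Tridion version; the path is the one from the question):

<!-- The Queue location must match the publication target's destination path -->
<Queue>
    <Location Path="d:\tridion\publish"/>
</Queue>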
Not sure if this is relevant, but you might also be interested in how to install the deployer: as a .NET website, a Java website, or a Windows service.
Hope this helps.
You say your working sites use HTTP sender/deployer. In that scenario your deployer is triggered by the HTTP servlet which receives the transport package.
When you use the local file system, you MUST configure your deployer to work in a different way: it has to run as some form of background service. Typically on a Windows box this means installing the deployer as a Windows service. Keep in mind that this will then probably have additional config files for the deployer and broker/storage.

Msdeploy replace attribute

I am trying to use msdeploy to restore a site on the destination computer from the package I created from the source IIS 7 site. The destination server is also running IIS 7.
However, the destination server does not have D: as a physical drive; the D: letter is assigned to a CD-ROM drive.
I use a replace rule with msdeploy, but it does not work.
Below is my command
msdeploy -verb:sync -source:package=d:\site.zip -dest:apphostconfig="Default Web Site" -replace:objectName="metaProperty",scopeAttributeName="name",scopeAttributeValue="Path",targetAttributeName="value",match="d:",replace="c:" -verbose -whatif > msdeploysync.log
However, -whatif does not show the path changed to C:, and when I actually run the command I get the message "Device not ready", which means the D: replacement is not working.
I am stuck... any help?
The provided mechanism for changing the path (in a non-IIS version specific way, mind you) is to set a parameter of kind DestinationVirtualDirectory:
-setParam:kind=DestinationVirtualDirectory,scope="Default Web Site",value="c:\full\path\to\website"
If you would like to stick to simply replacing the drive, try changing your replace directive to this (matching the D: drive from the package and replacing it with C:):
-replace:objectName=virtualDirectory,scopeAttributeName=physicalPath,match=^D:,replace=C:
Here are the official docs on the various parameter types: Using declareParam and setParam
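Putting it together with the original command, a hedged sketch (the paths are the ones from the question; keep -whatif until the output looks right):

msdeploy -verb:sync -source:package=d:\site.zip -dest:apphostconfig="Default Web Site" -setParam:kind=DestinationVirtualDirectory,scope="Default Web Site",value="c:\full\path\to\website" -verbose -whatif > msdeploysync.log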

How to log in over network with CMD via UNC path?

I want to be able to connect to a remote machine through its UNC path in either Windows CMD or PowerShell. I have tried C:\> pushd \\MyServer\"User Folders"\localUser\TestFolder but when this executes, I get "Logon failure: unknown user name or bad password".
Is "pushd" even the right command to use here? I have files that I want to exchange between the two machines on the same network; could there be permission bits I'm overlooking?
No, pushd is not the right command. For connecting to a remote share you need the command net use:
net use X: \\SERVER\SHARE /user:DOMAIN\USER
If you're using the same account on both hosts (both a domain account as well as identical local accounts will work) you can omit the /user:DOMAIN\USER part.
Normally you'd connect only to the share, but you can also connect directly to some folder below the share:
net use X: \\SERVER\SHARE\some\subfolder
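For example, to exchange files with the share from the question (the account and the local file path are hypothetical):

:: Map the share with explicit credentials, copy a file, then clean up:
net use X: "\\MyServer\User Folders" /user:MyServer\localUser
copy C:\data\report.txt "X:\localUser\TestFolder"
net use X: /delete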
pushd should work for you, given that you have the required permissions to access the share as the current user.
Source:
If you specify a network path, the pushd command temporarily assigns
the first unused drive letter (starting with Z:) to the specified
network resource. The command then changes the current drive and
directory to the specified directory on the newly assigned drive. If
you use the popd command with command extensions enabled, the popd
command removes the drive-letter assignation created by pushd.
Note that the PowerShell pushd alias (really Push-Location) does not map a drive letter, but otherwise works the same, i.e. it lets you use the respective share as the current directory.
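For reference, quoting the whole UNC path (rather than only the segment containing spaces) avoids quoting surprises; a minimal sketch with the path from the question:

pushd "\\MyServer\User Folders\localUser\TestFolder"
:: ... exchange files on the temporarily mapped drive ...
popd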
So, yes, it looks like you have a permission problem. Try accessing the share using Explorer (or net use as @Ansgar Wiechers suggests in his answer, or even a simple dir \\share\...) to cross-check.
