About log4net config - asp.net

I have been debugging code left to us by a vendor for several months. Only today did I notice that the log4net configuration still contains the vendor's information.
My simple questions:
Can the vendor track and record my work during this period? Is my past work secure?
What is the smtpHost IP?
<appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
  <to value="somebody@some_vendor.com" />
  <from value="person@some_vendor.com" />
  <subject value="Company Portal" />
  <smtpHost value="xy.a.b.cd" />
  <authentication value="Basic" />
  <username value="somebody@some_vendor.com" />
</appender>
Thank you.

No, it isn't secure: if your application has internet access and the smtpHost is reachable and working, then everything the logger is configured to log will be mailed to the vendor.
Look through the code: what types of messages are logged?
smtpHost is the DNS name (or IP) of the machine that provides the SMTP service for sending emails.

The smtpHost is the IP or domain name of the SMTP server to use when the SmtpAppender fires.
So any level configured to be logged through the SmtpAppender generates an email using that SMTP server as the host.
The server your code is running on has to be able to connect to xy.a.b.cd in order to be able to send messages.
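If you want to stop mail going to the vendor, a minimal sketch of what the relevant part of the config could look like instead (placeholder addresses and host; the evaluator/buffer settings shown here are just standard SmtpAppender options, not something taken from your file):
<log4net>
  <appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
    <!-- Point the appender at your own addresses and SMTP host (placeholders) -->
    <to value="ops@your_company.com" />
    <from value="noreply@your_company.com" />
    <subject value="Company Portal" />
    <smtpHost value="smtp.your_company.com" />
    <bufferSize value="512" />
    <lossy value="true" />
    <evaluator type="log4net.Core.LevelEvaluator">
      <threshold value="ERROR" />
    </evaluator>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <!-- Remove this line if no mail should be sent at all -->
    <appender-ref ref="SmtpAppender" />
  </root>
</log4net>
Alternatively, simply remove the appender-ref (or the whole appender definition) so nothing is mailed at all.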

Related

How can I switch an existing Azure web-role from http over to https

I have a working Azure web role which I've been using over an http endpoint. I'm now trying to switch it over to https but struggling mightily with what I thought would be a simple operation. (I'll include a few tips here for future readers to address issues I've already come across).
I have created (for now) a self-signed certificate using the powershell commands documented by Microsoft here and uploaded it to the azure portal. I'm aware that 3rd parties won't be able to consume the API while it has a self-signed certificate but my plan is to use the following for local client testing before purchasing a 'proper' certificate.
// Accept any certificate, including self-signed ones (for local testing only)
ServicePointManager.ServerCertificateValidationCallback += (o, c, ch, er) => true;
Tip: you need to upload the .pfx file and then supply the password you used in the powershell script. Don't be confused by suggestions to create a .cer file, which is for a completely different purpose.
I then followed the flow documented for configuring azure cloud services here although many of these operations are now done directly through visual studio rather than by hand-editing files.
In the main 'cloud service' project under the role I wanted to modify:
I imported the newly created certificate. Tip: the design of the dialog used to add the thumbprint makes it very easy to incorrectly select the developer certificate that is already installed on your machine (by Visual Studio?). Click 'more options' to get to _your_ certificate and then check that the displayed thumbprint matches the one shown in the Azure portal in the certificates section.
Under 'endpoints' I added a new https endpoint. Tip: use the standard https port 443, NOT the 'default' port of 8080, otherwise you will get no response from your service at all.
In the web.config of the service itself, I changed the endpoint binding for the service so that the name element matched the new endpoint.
I then published the cloud project to Azure (using Visual Studio).
At this point, I'm not seeing the results I expected. The service is still available on http but is not available on https. When I try to browse for it on https (includeExceptionDetailInFaults is set to true) I get:
HTTP error 404 "The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable"
I interpret this as meaning that the https endpoint is available but the service itself is bound to http rather than https despite my changes to web.config.
I have verified that the publish step really is uploading the new configuration by modifying some of the returned content. (Remember this is still available on http.)
I have tried removing the 'obsolete' http endpoint but this just results in a different error:
"Could not find a base address that matches scheme http for the endpoint with binding WebHttpBinding. Registered base address schemes are [https]"
I'm sure I must be missing something simple here. Can anyone suggest what it is, or tips for further troubleshooting? There are a number of Stack Overflow answers that relate to websites and suggest that IIS settings need to be tweaked, but I don't see how this applies to a web role where I don't have direct control of the server.
Edit: Following Gaurav's suggestion I repeated the process using a (self-signed) certificate for our own domain rather than cloudapp.net, then tried to access the service via this domain. I still see the same results; i.e. the service is available via http but not https.
Edit 2: Information from the csdef file... is the double reference to "Endpoint1" suspicious?
<Sites>
  <Site name="Web">
    <Bindings>
      <Binding name="Endpoint1" endpointName="HttpsEndpoint" />
      <Binding name="Endpoint1" endpointName="HttpEndpoint" />
    </Bindings>
  </Site>
</Sites>
<Endpoints>
  <InputEndpoint name="HttpsEndpoint" protocol="https" port="443" certificate="backend" />
  <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
</Endpoints>
<Certificates>
  <Certificate name="backend" storeLocation="LocalMachine" storeName="My" />
</Certificates>
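For comparison, a sketch of the same <Sites> section with distinct binding names; the csdef schema expects each binding name to be unique, though whether the duplicate name is actually the cause of the 404 is not certain:
<Sites>
  <Site name="Web">
    <Bindings>
      <!-- One uniquely named binding per endpoint -->
      <Binding name="HttpsIn" endpointName="HttpsEndpoint" />
      <Binding name="HttpIn" endpointName="HttpEndpoint" />
    </Bindings>
  </Site>
</Sites>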

Wildfly: Encrypt password and username for database

I would like to hand over a web application to some people, but these people should not be allowed to access the database directly with external tools. Using the web application, which talks to the database in the background, is fine.
WildFly has a config with this code:
<xa-datasource jndi-name="java:jboss/datasources/ExampleXADS" pool-name="ExampleXADS">
  <driver>h2</driver>
  <xa-datasource-property name="URL">jdbc:h2:mem:test</xa-datasource-property>
  <xa-pool>
    <min-pool-size>10</min-pool-size>
    <max-pool-size>20</max-pool-size>
    <prefill>true</prefill>
  </xa-pool>
  <security>
    <user-name>sa</user-name>
    <password>sa</password>
  </security>
</xa-datasource>
As you can see, the username and password are visible there in plain text. How is it possible to exclude or encrypt these, so that only the administrator knows the password for the database?
The same applies to the application server as a whole; it also has users and passwords.
How can I do this?
EDIT:
The "customer" will get the whole application inclusive the webserver configuration. (Wilfly and .war - file)
It´s only for saving the software key in the database.
The first time if the "customer" start the web application, he will be prompted so enter the licence key.
After entering the license key a Webservice will be called. The return code is "false" or "true" (is key valid or is key not valid)
My first idea was to store the flag in the database. But if a user has access to the database, he can manipulate this flag on his own.
Is there any other possibility to set a flag for "the software key is valid" instead saving the flag in the database.
Any ideas?
You can use a security domain to get around this. There may be some WildFly-specific differences, but for JBoss 7.1.1 here is what you need to do.
Find the location of jboss-logging-3.1.0.GA.jar in your JBoss/WildFly server. In the case of JBoss 7.1.1 it should be something like modules\org\jboss\logging\main\jboss-logging-3.1.0.GA.jar
Find the location of picketbox-4.0.7.Final.jar
Check if the picketbox jar contains the org.picketbox.datasource.security.SecureIdentityLoginModule class.
Run the following command from the JBoss server root folder to encrypt your datasource connection password (the ; classpath separator is for Windows; on Linux use : instead):
java -cp modules\org\jboss\logging\main\jboss-logging-3.1.0.GA.jar;modules\org\picketbox\main\picketbox-4.0.7.Final.jar org.picketbox.datasource.security.SecureIdentityLoginModule PasswordXYZ
Take the output text and, in standalone.xml, add the following security domain under the <security-domains> element:
<security-domain name="encrypted-ds-WASM2" cache-type="default">
  <authentication>
    <login-module code="org.picketbox.datasource.security.SecureIdentityLoginModule" flag="required">
      <module-option name="username" value="WASM2"/>
      <module-option name="password" value="89471a19022f8af"/>
      <module-option name="managedConnectionFactoryName" value="jboss.jca:service=LocalTxCM,name=MySqlDS_Pool"/>
    </login-module>
  </authentication>
</security-domain>
Use this security domain in the datasource element as follows:
<datasource jta="false" jndi-name="java:jboss/jdbc/JNDIDS" pool-name="OFS1" enabled="true" use-ccm="false">
  <connection-url>jdbc:oracle:thin:@x.x.x.x:1521:xxxx</connection-url>
  <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
  <driver>oracle</driver>
  <security>
    <security-domain>encrypted-ds-WASM2</security-domain>
  </security>
  <validation>
    <validate-on-match>false</validate-on-match>
    <background-validation>false</background-validation>
    <background-validation-millis>1</background-validation-millis>
  </validation>
  <statement>
    <prepared-statement-cache-size>0</prepared-statement-cache-size>
    <share-prepared-statements>false</share-prepared-statements>
  </statement>
</datasource>
Reference Link: http://middlewaremagic.com/jboss/?p=1026
It is not possible. If the web application has to be able to decrypt the password to use the database, anyone on the server can do the same.
If you want to restrict access, keep the server under your control and let them access it only through a web front end.
(And even if it were possible to usefully encrypt the credentials, anyone with server access could trivially copy the database files onto their workstation, or add new user accounts to the database server.)

In asp.net, Web Service endpoint is incorrect when client connects to production server

I've been scouring the net for almost two days and must be missing something (possibly basic).
On the test (local) web server I have set up a simple service, and using a client, I discover the service and run it without problems.
Using the same client, I discover the same service on the production server using https://MyNewStuff.com/WebServices/MyService.asmx (the real internet address of the service) without problems, but when I try to run it, it fails with an EndpointNotFound exception. Upon investigating, I find that the client's app.config is incorrect, as follows:
<endpoint address="https://ProductionWeb.Ourdomain.com/WebServices/MyService.asmx"
binding="basicHttpBinding" bindingConfiguration="MyServiceSoap"
contract="MOX24.MyServiceSoap" name="MyServiceSoap" />
i.e., not set up correctly as it reflects https://ProductionWeb.Ourdomain.com ... and not https://MyNewStuff.com/WebServices, indicating that the service (discovery) is sending the wrong information to the clients (it is sending the server's name and domain and not the 'web' name).
Any help on this would be greatly appreciated!!
If your client is a web application, put https://MyNewStuff.com/WebServices/MyService.asmx in the Web.Release.config.
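A minimal sketch of what that could look like as a config transform (assuming the endpoint name MyServiceSoap from the config shown above; the xdt attributes tell the publish step to rewrite just the address):
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.serviceModel>
    <client>
      <!-- Replace only the address attribute of the endpoint matched by name -->
      <endpoint name="MyServiceSoap"
                address="https://MyNewStuff.com/WebServices/MyService.asmx"
                xdt:Transform="SetAttributes(address)"
                xdt:Locator="Match(name)" />
    </client>
  </system.serviceModel>
</configuration>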

ASP PasswordRecovery Email Failure, DateTime precision

I have an asp.net 4.0 website and I'm using the PasswordRecovery control for a forgot password form. When I run the site locally it works fine, emails are sent. However, when I run the site from my vps, I get an error message when trying to send the email. There's nothing in the server's event log.
My PasswordRecovery aspx code is as follows:
<asp:PasswordRecovery ID="PasswordRecovery1" runat="server" CssClass="mediumText">
  <MailDefinition From="noreply@x.com" BodyFileName="~/EmailTemplates/PasswordRecovery.txt" />
</asp:PasswordRecovery>
My web.config mail settings are as follows:
<system.net>
  <mailSettings>
    <smtp from="noreply@x.com">
      <network host="smtp.123-reg.co.uk" password="x" userName="x" />
    </smtp>
  </mailSettings>
</system.net>
I've now run SQL Profiler against the SQL Server Express instance, and it turns out that an exception is being thrown from the SQL Server on the call to dbo.aspnet_Membership_GetUserByName. There's a type conversion issue because PasswordRecovery is passing a DateTime parameter with 7 decimal places for the seconds. If I manually execute the stored procedure with 3 decimal places, then it works. Does anyone know why the precision of the DateTime parameter is different on my server than on my laptop?
As you will see from reading this issue and the subsequent responses, the DateTime precision issue you are seeing is just a result of SQL Profiler displaying the date with the wrong format. That issue is unrelated to what is preventing your email from being sent.
What is the error message you receive when you try to send the email? The issue is more than likely related to something preventing your email from making it to the SMTP server. Are you sending to the same SMTP using the same credentials from your machine?
I think it is a connectivity issue from your VPS to the SMTP server.
You can test this outside of your code by trying to connect to the SMTP server using telnet.
Follow the steps mentioned in the link below to test SMTP communication with telnet:
http://support.microsoft.com/kb/323350
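As a rough equivalent in code, here is a quick check you could run from the VPS (the host name is taken from the web.config above and port 25 is assumed; this only proves the TCP connection works, not that authentication does):
using System;
using System.Net.Sockets;

class SmtpConnectivityCheck
{
    static void Main()
    {
        try
        {
            using (var client = new TcpClient())
            {
                // Same host as configured in <system.net>/<mailSettings>
                client.Connect("smtp.123-reg.co.uk", 25);
                Console.WriteLine("TCP connection to the SMTP server succeeded.");
            }
        }
        catch (SocketException ex)
        {
            Console.WriteLine("Could not reach the SMTP server: " + ex.Message);
        }
    }
}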
The datetime conversion error is a red herring. It only occurs when you copy the statement from the profiler and execute it in Management Studio. This is a known issue.
You might want to read this:
http://forums.asp.net/t/1398826.aspx/1
Sounds like it might simply be that the membership provider settings differ with regard to passwords.
There are two possibilities that I see (a machineKey sketch follows this list):
1. You don't have a machineKey defined in your web.config AND your local and VPS instances are both pointing to the same database. If this is true, put a machineKey in your config and regenerate your passwords.
2. Your membership provider config on the VPS used to be configured for hashed passwords and somewhere along the way was changed to encrypted passwords (or vice versa). This would also cause issues.
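A minimal sketch of such a machineKey element (the key values below are placeholders; generate your own random keys and use the identical element on every server that shares the membership database):
<system.web>
  <!-- Placeholder keys: replace with your own generated values -->
  <machineKey
    validationKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
    decryptionKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
    validation="HMACSHA256"
    decryption="AES" />
</system.web>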

Website (asp.net) to send emails via remote mail server, and not end up in spam folders

So, the setup is this, 2 separate servers...
Web server, has IIS7, MS SMTP
Mail server has MailEnable
On the web server, I'm sending an email from an ASP.NET app via the mail server, and it is getting marked as spam.
If I send an email through the mail server, just from a normal mail client, it doesn't get marked as spam.
I'm sure this is a setup issue, but what am I likely to have done wrong?
web.config:
<smtp from="website@domain.co.uk">
  <network host="mail.mymailserver.co.uk" userName="website@domain.co.uk" password="password" />
</smtp>
asp.net, just a normal SmtpClient send:
// With the parameterless constructor, SmtpClient picks up the host and
// credentials from <system.net>/<mailSettings> in web.config
SmtpClient client = new SmtpClient();
client.Send(mailMessage);
A random gut feeling says it's probably sending through the local SMTP server and then on to MailEnable, and that's giving it weird headers... just a thought though.
The headers contain this line: Received-SPF: softfail (google.com: best guess record for domain of transitioning website@mydomain.co.uk does not designate unknown as permitted sender)
I've no idea what it means though (the "unknown" part looks suspicious).
The Received-SPF error is related to Sender Policy Framework. What you need to do is change the DNS records on your domain to include your web server (usually its IP address) as a valid sender.
The SPF website has details on how to setup this configuration.
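For illustration, a minimal SPF TXT record for the sending domain might look something like this (the IP address is a placeholder for the web server's public address; ~all produces a softfail for any other sender):
domain.co.uk.   IN   TXT   "v=spf1 mx a ip4:203.0.113.10 ~all"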
Edit: It's up to the receiving mail server how to interpret Received-SPF: softfail, so even when you allow any host to send email for the domain, you might still run into this error. From http://www.openspf.org/SPF_Received_Header:
When an SPF query returns any other result, the MTA should add an advisory header to the message of the form "Received-SPF: neutral" or "Received-SPF: pass". That way, a spam filter further down the road can take that header into account as part of a more balanced decision.
