I need to encrypt the URLs in my ASP.NET MVC application.
Do I need to write code against the RouteCollection in Global.asax to encrypt all the URLs?
It's a bad idea to encrypt a URL. Period.
You may wonder why I say that.
I worked on an application for a company that encrypted its URLs. It was a WebForms application, and from the URL alone it was nearly impossible to tell which part of the code was being hit when an issue occurred. Because of the dynamic way the WebForms controls were invoked, you just had to know the path the software was going to take. It was quite unnerving.
On top of that, there was no role-based authorization in the application; it was all based on the URLs being encrypted. If you could decrypt a URL (and anything that can be encrypted can be decrypted), then you could conceivably enter another encrypted URL and impersonate another user. I'm not saying it's simple, but it can happen.
Finally, how often do you use the internet and see encrypted URLs? When you do, do you die a little inside? I do. URLs are meant to convey public information. If you don't want it to do that, don't put it in your URL (or require Authorization for sensitive areas of your site).
The IDs you're using in the database should be IDs that are ok for the user to see. If you're using an SSN as a primary key, then you should change that schema for a web application.
Anything that can be encrypted can be decrypted, and therefore is vulnerable to attack.
If you want users to access certain URLs only when they're authorized, then you should use the [Authorize] attribute available in ASP.NET MVC.
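For example, a minimal sketch (the controller and role names here are just placeholders) of locking an area down with that attribute:
using System.Web.Mvc;

// Only authenticated users in the "Admin" role reach these actions;
// everyone else is redirected to the login page (or receives a 401).
[Authorize(Roles = "Admin")]
public class ReportsController : Controller
{
    public ActionResult MonthlySummary(int id)
    {
        return View();
    }
}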
Encrypting an entire URL is, I agree, a very bad idea. Encrypting URL parameters? Not so much; it's actually a valid and widely used technique.
If you really want to encrypt/decrypt url parameters (which isn't a bad idea at all), then check out Mads Kristensen's article "HttpModule for query string encryption".
You will need to modify context_BeginRequest in order to get it to work for MVC. Just remove the first part of the if statement that checks if the original url contains "aspx".
With that said, I have used this module in a couple of projects (have a converted VB version if needed) and for the most part, it works like a charm.
BUT, there are some instances where I have experienced some issues with jQuery/Ajax calls not working correctly. I am sure the module could be modified in order to compensate for those scenarios.
Based on the answers here (which did not work for me, BTW), I found another solution based on my particular MVC implementation. It also depends on whether you're using IIS 7 or IIS 6; slight changes are needed in each case.
IIS 6
Firstly, you need to add the following to your web.config (the root one, not the one in the Views folder).
<system.web>
  <httpModules>
    <add name="URIHandler" type="URIHandler" />
  </httpModules>
</system.web>
IIS 7
Add this instead to your web.config (the root one, not the one in the Views folder).
<system.webServer>
  <validation validateIntegratedModeConfiguration="false" />
  <modules runAllManagedModulesForAllRequests="true">
    <remove name="URIHandler" />
    <add name="URIHandler" type="URIHandler" />
  </modules>
</system.webServer>
Or you could add both. It doesn't matter really.
Next use this class. I called it, as you've probably noticed - URIHandler.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.IO;
using System.Text;
using System.Security.Cryptography;
using System.Diagnostics.CodeAnalysis;
public class URIHandler : IHttpModule
{
#region IHttpModule members
public void Dispose()
{
}
public void Init(HttpApplication context)
{
context.BeginRequest += new EventHandler(context_BeginRequest);
}
#endregion
private const string PARAMETER_NAME = "enc=";
private const string ENCRYPTION_KEY = "key";
private void context_BeginRequest(object sender, EventArgs e)
{
HttpContext context = HttpContext.Current;
//if (context.Request.Url.OriginalString.Contains("aspx") && context.Request.RawUrl.Contains("?"))
if (context.Request.RawUrl.Contains("?"))
{
string query = ExtractQuery(context.Request.RawUrl);
string path = GetVirtualPath();
if (query.StartsWith(PARAMETER_NAME, StringComparison.OrdinalIgnoreCase))
{
// Decrypts the query string and rewrites the path.
string rawQuery = query.Replace(PARAMETER_NAME, string.Empty);
string decryptedQuery = Decrypt(rawQuery);
context.RewritePath(path, string.Empty, decryptedQuery);
}
else if (context.Request.HttpMethod == "GET")
{
// Encrypt the query string and redirects to the encrypted URL.
// Remove if you don't want all query strings to be encrypted automatically.
string encryptedQuery = Encrypt(query);
context.Response.Redirect(path + encryptedQuery);
}
}
}
/// <summary>
/// Parses the current URL and extracts the virtual path without query string.
/// </summary>
/// <returns>The virtual path of the current URL.</returns>
private static string GetVirtualPath()
{
string path = HttpContext.Current.Request.RawUrl;
path = path.Substring(0, path.IndexOf("?"));
path = path.Substring(path.LastIndexOf("/") + 1);
return path;
}
/// <summary>
/// Parses a URL and returns the query string.
/// </summary>
/// <param name="url">The URL to parse.</param>
/// <returns>The query string without the question mark.</returns>
private static string ExtractQuery(string url)
{
int index = url.IndexOf("?") + 1;
return url.Substring(index);
}
#region Encryption/decryption
/// <summary>
/// The salt value used to strengthen the encryption.
/// </summary>
private readonly static byte[] SALT = Encoding.ASCII.GetBytes(ENCRYPTION_KEY.Length.ToString());
/// <summary>
/// Encrypts any string using the Rijndael algorithm.
/// </summary>
/// <param name="inputText">The string to encrypt.</param>
/// <returns>A Base64 encrypted string.</returns>
[SuppressMessage("Microsoft.Usage", "CA2202:Do not dispose objects multiple times")]
public static string Encrypt(string inputText)
{
RijndaelManaged rijndaelCipher = new RijndaelManaged();
byte[] plainText = Encoding.Unicode.GetBytes(inputText);
PasswordDeriveBytes SecretKey = new PasswordDeriveBytes(ENCRYPTION_KEY, SALT);
using (ICryptoTransform encryptor = rijndaelCipher.CreateEncryptor(SecretKey.GetBytes(32), SecretKey.GetBytes(16)))
{
using (MemoryStream memoryStream = new MemoryStream())
{
using (CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write))
{
cryptoStream.Write(plainText, 0, plainText.Length);
cryptoStream.FlushFinalBlock();
return "?" + PARAMETER_NAME + Convert.ToBase64String(memoryStream.ToArray());
}
}
}
}
/// <summary>
/// Decrypts a previously encrypted string.
/// </summary>
/// <param name="inputText">The encrypted string to decrypt.</param>
/// <returns>A decrypted string.</returns>
[SuppressMessage("Microsoft.Usage", "CA2202:Do not dispose objects multiple times")]
public static string Decrypt(string inputText)
{
RijndaelManaged rijndaelCipher = new RijndaelManaged();
byte[] encryptedData = Convert.FromBase64String(inputText);
PasswordDeriveBytes secretKey = new PasswordDeriveBytes(ENCRYPTION_KEY, SALT);
using (ICryptoTransform decryptor = rijndaelCipher.CreateDecryptor(secretKey.GetBytes(32), secretKey.GetBytes(16)))
{
using (MemoryStream memoryStream = new MemoryStream(encryptedData))
{
using (CryptoStream cryptoStream = new CryptoStream(memoryStream, decryptor, CryptoStreamMode.Read))
{
byte[] plainText = new byte[encryptedData.Length];
int decryptedCount = cryptoStream.Read(plainText, 0, plainText.Length);
return Encoding.Unicode.GetString(plainText, 0, decryptedCount);
}
}
}
}
#endregion
}
You don't need a namespace.
The above class does everything you need to encrypt and decrypt any URL parameters following the '?' character. As a bonus, it collapses your query string into a single 'enc' parameter.
Lastly, place the class in your App_Start folder, not the App_Code folder, as the latter leads to 'ambiguous reference' compile errors.
Done.
Credits:
https://www.codeproject.com/questions/1036066/how-to-hide-url-parameter-asp-net-mvc
https://msdn.microsoft.com/en-us/library/aa719858(v=vs.71).aspx
HttpModule Init method were not called
C# Please specify the assembly explicitly in the type name
https://stackoverflow.com/questions/1391060/httpmodule-with-asp-net-mvc-not-being-called
You can create a custom HTML helper to encrypt the query string, and a custom action filter attribute to decrypt it and get the original values back. You can implement it globally, so it won't take much of your time. For reference, see "Url Encryption In Asp.Net MVC", which walks through building the custom helper and the custom action filter attribute.
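As a rough illustration of that approach (this is my own minimal sketch, not the code from the linked article; the names EncryptedActionLink and DecryptParamAttribute, and the use of MachineKey.Protect, are assumptions), an HTML helper can emit a protected token and an action filter can unprotect it before the action runs:
using System.Text;
using System.Web;
using System.Web.Mvc;
using System.Web.Security;

public static class EncryptedLinkExtensions
{
    // Builds /controller/action?q=<protected token> instead of exposing the raw value.
    public static MvcHtmlString EncryptedActionLink(this HtmlHelper html,
        string linkText, string action, string controller, string paramValue)
    {
        byte[] protectedBytes = MachineKey.Protect(
            Encoding.UTF8.GetBytes(paramValue), "QueryString");
        string token = HttpServerUtility.UrlTokenEncode(protectedBytes);
        var url = new UrlHelper(html.ViewContext.RequestContext);
        return MvcHtmlString.Create(
            string.Format("<a href=\"{0}?q={1}\">{2}</a>",
                url.Action(action, controller), token, html.Encode(linkText)));
    }
}

public class DecryptParamAttribute : ActionFilterAttribute
{
    // Replaces the "q" token with its decrypted value before the action executes.
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string token = filterContext.HttpContext.Request.QueryString["q"];
        if (!string.IsNullOrEmpty(token))
        {
            byte[] raw = MachineKey.Unprotect(
                HttpServerUtility.UrlTokenDecode(token), "QueryString");
            filterContext.ActionParameters["q"] = Encoding.UTF8.GetString(raw);
        }
    }
}
An action decorated with [DecryptParam] would then declare a string parameter named q (or the filter could map the value back onto the original parameter name), so the real record key never appears in the visible URL.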
It's likely pointless to globally encrypt all the URL parameters (the query string). Most parameters are display items passed with HttpGet; if everything is encrypted, the URL won't convey anything useful. However, if there are sensitive parameters that exist only as hidden fields (keys) on the client and are eventually returned to the server to identify a record, those might be worth encrypting.
Consider this viewModel:
public class ViewModel
{
    public int key { get; set; }           // Might want to encrypt
    public string FirstName { get; set; }  // Don't want this encrypted
    public string LastName { get; set; }   // Don't want this encrypted
}
The viewModel gets converted into a query string, something close to....
appName.com/index?key=2&FirstName=John&LastName=Doe
If this viewModel is passed as a query string, what's the point in encrypting the first and last names?
It should be noted that query strings belong to HttpGet requests. With HttpPost, values travel in the request body rather than the query string, so they never appear in the URL (and they are encrypted in transit when the site uses HTTPS). There is some overhead to HttpPost, but if your page does actually contain sensitive data that needs to round-trip (perhaps the user's current password), then consider going to HttpPost instead.
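For instance, a small sketch (the action is hypothetical) of a controller action that receives the view model above via HttpPost instead of the query string:
[HttpPost]
public ActionResult Details(ViewModel model)
{
    // model.key arrives in the POST body rather than the query string,
    // so it never shows up in the URL, browser history, or proxy logs.
    return View(model);
}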
I have a requirement to encrypt a custom field and decrypt it automatically when viewing the case in the MS Dynamics CRM Online portal.
I created two plugins: one to encrypt at PreCaseCreate and the other to decrypt at PostCaseRetrieve. The encryption plugin is working fine, but the decryption plugin is not (which means the encrypted content is not decrypted when viewed in the online portal).
Below is the code for decryption
// <copyright file="PostCaseRetrieve.cs" company="">
// Copyright (c) 2016 All Rights Reserved
// </copyright>
// <author></author>
// <date>4/20/2016 1:58:24 AM</date>
// <summary>Implements the PostCaseRetrieve Plugin.</summary>
// <auto-generated>
// This code was generated by a tool.
// Runtime Version:4.0.30319.1
// </auto-generated>
namespace CRMCaseEntityDecryptPlugin.Plugins
{
using System;
using System.ServiceModel;
using Microsoft.Xrm.Sdk;
using System.Text;
using System.Security.Cryptography;
using Microsoft.Xrm.Sdk.Query;
/// <summary>
/// PostCaseRetrieve Plugin.
/// </summary>
public class PostCaseRetrieve : Plugin
{
/// <summary>
/// Initializes a new instance of the <see cref="PostCaseRetrieve"/> class.
/// </summary>
public PostCaseRetrieve()
: base(typeof(PostCaseRetrieve))
{
base.RegisteredEvents.Add(new Tuple<int, string, string, Action<LocalPluginContext>>(40, "Retrieve", "incident", new Action<LocalPluginContext>(ExecutePostCaseRetrieve)));
// Note : you can register for more events here if this plugin is not specific to an individual entity and message combination.
// You may also need to update your RegisterFile.crmregister plug-in registration file to reflect any change.
}
/// <summary>
/// Executes the plug-in.
/// </summary>
/// <param name="localContext">The <see cref="LocalPluginContext"/> which contains the
/// <see cref="IPluginExecutionContext"/>,
/// <see cref="IOrganizationService"/>
/// and <see cref="ITracingService"/>
/// </param>
/// <remarks>
/// For improved performance, Microsoft Dynamics CRM caches plug-in instances.
/// The plug-in's Execute method should be written to be stateless as the constructor
/// is not called for every invocation of the plug-in. Also, multiple system threads
/// could execute the plug-in at the same time. All per invocation state information
/// is stored in the context. This means that you should not use global variables in plug-ins.
/// </remarks>
protected void ExecutePostCaseRetrieve(LocalPluginContext localContext)
{
if (localContext == null)
{
throw new ArgumentNullException("localContext");
}
// TODO: Implement your custom Plug-in business logic.
IPluginExecutionContext context = localContext.PluginExecutionContext;
IOrganizationService service = localContext.OrganizationService;
// The InputParameters collection contains all the data passed in the message request.
if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
{
// Obtain the target entity from the input parmameters.
Entity entity = (Entity)context.InputParameters["Target"];
if (entity.LogicalName.ToLower().Equals("incident"))
{
try
{
ColumnSet cols = new ColumnSet(new String[] { "title", "description", "new_phicontent" });
var incident = service.Retrieve("incident", entity.Id, cols);
if (incident.Attributes.Contains("new_phicontent"))
{
string PHIContent = incident.Attributes["new_phicontent"].ToString();
byte[] bInput = Convert.FromBase64String(PHIContent);
UTF8Encoding UTF8 = new UTF8Encoding();
//Encrypt/Decrypt strings which in turn uses 3DES (Triple Data Encryption standard) algorithm
TripleDESCryptoServiceProvider tripledescryptoserviceprovider = new TripleDESCryptoServiceProvider();
//Alow to compute a hash value for Encryption/Decryption
MD5CryptoServiceProvider md5cryptoserviceprovider = new MD5CryptoServiceProvider();
tripledescryptoserviceprovider.Key = md5cryptoserviceprovider.ComputeHash(ASCIIEncoding.ASCII.GetBytes("secretkey"));
tripledescryptoserviceprovider.Mode = CipherMode.ECB;
ICryptoTransform icryptotransform = tripledescryptoserviceprovider.CreateDecryptor();
string DecryptedText = UTF8.GetString(icryptotransform.TransformFinalBlock(bInput, 0, bInput.Length));
incident["new_phicontent"] = DecryptedText;
service.Update(incident);
}
}
catch (FaultException ex)
{
throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
}
}
}
}
}
}
I tried the PreCaseRetrieve event as well, but I didn't get a result there either.
Kindly provide some solution to resolve this.
Thanks in advance.
Leave your plugin as a post plugin.
The Target object from InputParameters is the object that is sent back to the client, so if you modify the Target object, you modify what is sent to the client. Don't retrieve the incident and then call Update on it. Instead, if entity contains the new_phicontent attribute, you know the client requested that attribute and it needs to be decrypted, so decrypt the value and then set entity["new_phicontent"]. Here's the updated code:
// Obtain the target entity from the input parmameters.
Entity entity = (Entity)context.InputParameters["Target"];
if (entity.LogicalName.ToLower().Equals("incident"))
{
try
{
if (entity.Attributes.Contains("new_phicontent"))
{
string PHIContent = entity.Attributes["new_phicontent"].ToString();
byte[] bInput = Convert.FromBase64String(PHIContent);
// removed for brevity
string decryptedText = UTF8.GetString(icryptotransform.TransformFinalBlock(bInput, 0, bInput.Length));
entity["new_phicontent"] = decryptedText;
}
}
catch (FaultException ex)
{
throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
}
}
Part of an app I'm building requires that admin users can let an employee access one page of the app to perform a task. After the employee has completed that task, they have no reason to return to the app.
This app is hosted online and so the employee access needs to be secured with a logon.
My question is, what is the best approach regarding providing a login account to a user who would only use the system once?
As I see it, I have two options:
Provide the admin users with one permanent login account for employees, which can be re-used for each employee (I would need to provide each employee with an extra passcode so that the system can look it up and see who they really are)
Create a login account for each employee as and when they need access, and then delete the login account after it has been used. For this username I would concatenate a common word (company name for example) with a unique id (possibly the id of their task)
Option 2 seems to make the most sense in terms of security. Are there any pitfalls with this approach, or are there any alternative solutions?
Personally, I would consider a third option: create a parallel access control table for this page. In other words, you'd have something like:
public class PageAccess
{
public string Email { get; set; }
public string Token { get; set; }
public DateTime Expiration { get; set; }
}
When an admin wants to grant access to the page, they would give the email of the user who should have access (Email). A random token would then be generated (saved hashed as Token). Then the user would be sent an email at their email address with a URL to the page which would include a parameter composed of the email address and token, and then base 64 encoded.
Upon clicking the link the user would be taken to the page, where first, the parameter would be validated: base 64 decode, split email and token, lookup the access record by email, hash token and compare to stored token, and (optionally) compare the expiration date with now (so that you can keep people from trying to access a URL from an email sent months or years ago).
If everything is kosher, the user is shown the page. When they complete whatever action they need to make, you delete the access record.
This is essentially the same process employed by a password reset, only here, you're just using it to grant one-time access instead of allowing them to change their password.
UPDATE
The following is a utility class that I use. I'm not a security expert, but I did some extensive reading and borrowed heavily from StackExchange code I found at some point, somewhere, which either doesn't exist publicly anymore, or evades my search skills.
using System;
using System.Security.Cryptography;
using System.Text;
public static class CryptoUtil
{
// The following constants may be changed without breaking existing hashes.
public const int SaltBytes = 32;
public const int HashBytes = 32;
public const int Pbkdf2Iterations = 1000; // Larger is better, but also slower. Something in the range of 1000-2000 works well. Don't expose this value.
public const int IterationIndex = 0;
public const int SaltIndex = 1;
public const int Pbkdf2Index = 2;
/// <summary>
/// Creates a salted PBKDF2 hash of the password.
/// </summary>
/// <param name="password">The password to hash.</param>
/// <returns>The hash of the password.</returns>
public static string CreateHash(string password)
{
// TODO: Raise an exception if password is null
// Generate a random salt
RNGCryptoServiceProvider csprng = new RNGCryptoServiceProvider();
byte[] salt = new byte[SaltBytes];
csprng.GetBytes(salt);
// Hash the password and encode the parameters
byte[] hash = PBKDF2(password, salt, Pbkdf2Iterations, HashBytes);
return Pbkdf2Iterations.ToString("X") + ":" +
Convert.ToBase64String(salt) + ":" +
Convert.ToBase64String(hash);
}
/// <summary>
/// Validates a password given a hash of the correct one.
/// </summary>
/// <param name="password">The password to check.</param>
/// <param name="goodHash">A hash of the correct password.</param>
/// <returns>True if the password is correct. False otherwise.</returns>
public static bool ValidateHash(string password, string goodHash)
{
// Extract the parameters from the hash
char[] delimiter = { ':' };
string[] split = goodHash.Split(delimiter);
int iterations = Int32.Parse(split[IterationIndex], System.Globalization.NumberStyles.HexNumber);
byte[] salt = Convert.FromBase64String(split[SaltIndex]);
byte[] hash = Convert.FromBase64String(split[Pbkdf2Index]);
byte[] testHash = PBKDF2(password, salt, iterations, hash.Length);
return SlowEquals(hash, testHash);
}
/// <summary>
/// Compares two byte arrays in length-constant time. This comparison
/// method is used so that password hashes cannot be extracted from
/// on-line systems using a timing attack and then attacked off-line.
/// </summary>
/// <param name="a">The first byte array.</param>
/// <param name="b">The second byte array.</param>
/// <returns>True if both byte arrays are equal. False otherwise.</returns>
private static bool SlowEquals(byte[] a, byte[] b)
{
uint diff = (uint)a.Length ^ (uint)b.Length;
for (int i = 0; i < a.Length && i < b.Length; i++)
diff |= (uint)(a[i] ^ b[i]);
return diff == 0;
}
/// <summary>
/// Computes the PBKDF2-SHA1 hash of a password.
/// </summary>
/// <param name="password">The password to hash.</param>
/// <param name="salt">The salt.</param>
/// <param name="iterations">The PBKDF2 iteration count.</param>
/// <param name="outputBytes">The length of the hash to generate, in bytes.</param>
/// <returns>A hash of the password.</returns>
private static byte[] PBKDF2(string password, byte[] salt, int iterations, int outputBytes)
{
Rfc2898DeriveBytes pbkdf2 = new Rfc2898DeriveBytes(password, salt);
pbkdf2.IterationCount = iterations;
return pbkdf2.GetBytes(outputBytes);
}
public static string GetUniqueKey(int length)
{
char[] chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890".ToCharArray();
byte[] bytes = new byte[length];
using (var rng = new RNGCryptoServiceProvider())
{
rng.GetNonZeroBytes(bytes);
}
var result = new StringBuilder(length);
foreach (byte b in bytes)
{
result.Append(chars[b % (chars.Length - 1)]);
}
return result.ToString();
}
public static string Base64Encode(string str)
{
return Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(str));
}
public static string Base64Decode(string str)
{
return System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(str));
}
public static string Base64EncodeGuid(Guid guid)
{
return Convert.ToBase64String(guid.ToByteArray());
}
public static Guid Base64DecodeGuid(string str)
{
return new Guid(Convert.FromBase64String(str));
}
}
Then, I do something like the following for generating password resets:
var token = CryptoUtil.GetUniqueKey(16);
var hashedToken = CryptoUtil.CreateHash(token);
var emailToken = CryptoUtil.Base64Encode(string.Format("{0}:{1}", email, token));
The hashedToken variable gets stored in your database, while emailToken is what is put in the URL that is sent to your user. On the action that handles the URL:
var parts = CryptoUtil.Base64Decode(emailToken).Split(':');
var email = parts[0];
var token = parts[1];
Look up the record using email. Then compare using:
CryptoUtil.ValidateHash(token, hashedTokenFromDatabase)
I have an ASP.NET web application I want to update to prevent cross-site request forgery attacks.
I have used the Microsoft auto-generated code from VS 2012, and added it to the master page as described here. It is working well, but one page posts JSON via an AJAX request to a web method.
I would like to check this ajax request as well.
The foreseeable problems are:
var responseCookie = new HttpCookie(AntiXsrfTokenKey)
{
//Set the HttpOnly property to prevent the cookie from
//being accessed by client side script
HttpOnly = true,
this can obviously be changed, but this would then seem to increase site vulnerability. Is this a significant issue?
I can send the value of the viewstate hidden input with the ajax request, but this will then need to be decoded back into key value pairs to do the equivalent of:
(string)ViewState[AntiXsrfTokenKey] != _antiXsrfTokenValue
Is there an easy way to use existing asp.net methods to do this?
Thank you for any help.
Here is what I have discovered. I ended up using the LosFormatter, as described by geedubb, by adding the following code to the master page and assigning the value to a hidden input which is posted back with the AJAX request. I did not realise when I posted the question that a cookie marked HttpOnly is still sent back with an AJAX request, so there is no need to change that setting.
internal string GetToken()
{
// call the static method to guarantee LosFormatter remains threadsafe
return GetToken(_antiXsrfTokenValue);
}
private static string GetCurrentUserName()
{
var currentUser = HttpContext.Current.User.Identity;
return (currentUser == null) ? string.Empty : currentUser.Name;
}
private static string GetToken(string token)
{
var los = new System.Web.UI.LosFormatter(true, token);
var writer = new System.IO.StringWriter();
var data = new Dictionary<string,string>();
data.Add("TokenValue",token);
data.Add("UserNameKey", GetCurrentUserName());
los.Serialize(writer, data);
return writer.ToString();
}
internal static void Validate(string token)
{
var request = HttpContext.Current.Request;
var requestCookie = request.Cookies[AntiXsrfTokenKey];
var antiXsrfTokenValue = requestCookie.Value;
var los = new System.Web.UI.LosFormatter(true, antiXsrfTokenValue);
var xsrfData = (Dictionary<string,string>)los.Deserialize(token);
if (xsrfData["TokenValue"] != antiXsrfTokenValue || xsrfData["UserNameKey"] != GetCurrentUserName())
{
throw new System.Security.Authentication.AuthenticationException("Validation of Anti-XSRF token failed.");
}
}
Initially, I had tried sending the value of the _VIEWSTATE hidden input, using the same code
var los = new System.Web.UI.LosFormatter(true, antiXsrfTokenValue);
var ajaxViewState = los.Deserialize(token);
but this threw an error stating the supplied key could not deserialize the string. Obviously setting
Page.ViewStateUserKey = _antiXsrfTokenValue;
results in a more complex key than the supplied token alone. I would be interested if anyone knew how to deserialize a viewstate string that was protected with a userKey.
The only problem with the method I have provided is the size of the string posted back: 1976 characters for a GUID plus a 6-character username!
If approaching this problem again, I would reference the System.Web.WebPages.dll (used in an mvc project), and use the same methods which create the Html.AntiForgeryToken in MVC
namespace System.Web.Helpers
{
    /// <summary>
    /// Provides access to the anti-forgery system, which provides protection against
    /// Cross-site Request Forgery (XSRF, also called CSRF) attacks.
    /// </summary>
    public static class AntiForgery
    {
        public static void GetTokens(string oldCookieToken, out string newCookieToken, out string formToken);
        public static void Validate();
        // ...
    }
}
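For what it's worth, here is a minimal sketch (my own wiring, not code shipped with that assembly) of how those methods can be used outside MVC; note that AntiForgery.Validate also has an overload taking the cookie token and form token explicitly, which is the one an AJAX scenario would call:
using System.Web;
using System.Web.Helpers;

public static class AjaxAntiForgery
{
    // Call while rendering the page: returns the hidden-field token and
    // issues/refreshes the companion cookie.
    public static string IssueFormToken(HttpContext context)
    {
        HttpCookie oldCookie = context.Request.Cookies[AntiForgeryConfig.CookieName];
        string oldCookieToken = (oldCookie == null) ? null : oldCookie.Value;

        string newCookieToken, formToken;
        AntiForgery.GetTokens(oldCookieToken, out newCookieToken, out formToken);

        if (newCookieToken != null)
        {
            context.Response.Cookies.Add(
                new HttpCookie(AntiForgeryConfig.CookieName, newCookieToken) { HttpOnly = true });
        }
        return formToken; // emit this in a hidden input and send it with the AJAX POST
    }

    // Call inside the web method that handles the AJAX POST; throws on failure.
    public static void ValidateRequest(HttpContext context, string formToken)
    {
        HttpCookie cookie = context.Request.Cookies[AntiForgeryConfig.CookieName];
        string cookieToken = (cookie == null) ? null : cookie.Value;
        AntiForgery.Validate(cookieToken, formToken);
    }
}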
I'm working on a web service using ASP.NET MVC's new WebAPI that will serve up binary files, mostly .cab and .exe files.
The following controller method seems to work, meaning that it returns a file, but it's setting the content type to application/json:
public HttpResponseMessage<Stream> Post(string version, string environment, string filetype)
{
var path = @"C:\Temp\test.exe";
var stream = new FileStream(path, FileMode.Open);
return new HttpResponseMessage<Stream>(stream, new MediaTypeHeaderValue("application/octet-stream"));
}
Is there a better way to do this?
Try using a simple HttpResponseMessage with its Content property set to a StreamContent:
// using System.IO;
// using System.Net;
// using System.Net.Http;
// using System.Net.Http.Headers;
public HttpResponseMessage Post(string version, string environment,
string filetype)
{
var path = @"C:\Temp\test.exe";
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
var stream = new FileStream(path, FileMode.Open, FileAccess.Read);
result.Content = new StreamContent(stream);
result.Content.Headers.ContentType =
new MediaTypeHeaderValue("application/octet-stream");
return result;
}
A few things to note about the stream used:
You must not call stream.Dispose(), since Web API still needs to be able to access it when it processes the controller method's result to send data back to the client. Therefore, do not use a using (var stream = …) block. Web API will dispose the stream for you.
Make sure that the stream has its current position set to 0 (i.e. the beginning of the stream's data). In the above example, this is a given since you've only just opened the file. However, in other scenarios (such as when you first write some binary data to a MemoryStream), make sure to stream.Seek(0, SeekOrigin.Begin); or set stream.Position = 0;
With file streams, explicitly specifying FileAccess.Read permission can help prevent access rights issues on web servers; IIS application pool accounts are often given only read / list / execute access rights to the wwwroot.
For Web API 2, you can implement IHttpActionResult. Here's mine:
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Web.Http;
class FileResult : IHttpActionResult
{
private readonly string _filePath;
private readonly string _contentType;
public FileResult(string filePath, string contentType = null)
{
if (filePath == null) throw new ArgumentNullException("filePath");
_filePath = filePath;
_contentType = contentType;
}
public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
{
var response = new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new StreamContent(File.OpenRead(_filePath))
};
var contentType = _contentType ?? MimeMapping.GetMimeMapping(Path.GetExtension(_filePath));
response.Content.Headers.ContentType = new MediaTypeHeaderValue(contentType);
return Task.FromResult(response);
}
}
Then something like this in your controller:
[Route("Images/{*imagePath}")]
public IHttpActionResult GetImage(string imagePath)
{
var serverPath = Path.Combine(_rootPath, imagePath);
var fileInfo = new FileInfo(serverPath);
return !fileInfo.Exists
? (IHttpActionResult) NotFound()
: new FileResult(fileInfo.FullName);
}
And here's one way you can tell IIS to ignore requests with an extension so that the request will make it to the controller:
<!-- web.config -->
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true"/>
</system.webServer>
For those using .NET Core:
You can make use of the IActionResult interface in an API controller method, like so.
[HttpGet("GetReportData/{year}")]
public async Task<IActionResult> GetReportData(int year)
{
// Render Excel document in memory and return as Byte[]
Byte[] file = await this._reportDao.RenderReportAsExcel(year);
return File(file, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "fileName.xlsx");
}
This example is simplified, but should get the point across. In .NET Core this process is so much simpler than in previous versions of .NET - i.e. no setting response type, content, headers, etc.
Also, of course the MIME type for the file and the extension will depend on individual needs.
Reference: SO post answer by @NKosi
While the suggested solution works fine, there is another way to return a byte array from the controller, with the response stream properly formatted:
In the request, set header "Accept: application/octet-stream".
Server-side, add a media type formatter to support this mime type.
Unfortunately, WebApi does not include any formatter for "application/octet-stream". There is an implementation here on GitHub: BinaryMediaTypeFormatter (there are minor adaptations to make it work for webapi 2, method signatures changed).
You can add this formatter into your global config :
HttpConfiguration config;
// ...
config.Formatters.Add(new BinaryMediaTypeFormatter(false));
WebApi should now use BinaryMediaTypeFormatter if the request specifies the correct Accept header.
I prefer this solution because an action controller returning byte[] is more comfortable to test. Though, the other solution allows you more control if you want to return another content-type than "application/octet-stream" (for example "image/gif").
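For example, an action along these lines (the controller name and file path are mine) is what that formatter enables, and the plain byte[] return value is what makes it straightforward to unit test:
using System.IO;
using System.Web.Http;

public class FirmwareController : ApiController
{
    // The returned bytes are written by BinaryMediaTypeFormatter when the
    // client sends "Accept: application/octet-stream".
    public byte[] Get(string version)
    {
        return File.ReadAllBytes(@"C:\Temp\test.exe"); // placeholder path
    }
}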
For anyone having the problem of the API being called more than once while downloading a fairly large file using the method in the accepted answer, please set response buffering to true
System.Web.HttpContext.Current.Response.Buffer = true;
This makes sure that the entire binary content is buffered on the server side before it is sent to the client. Otherwise you will see multiple requests being sent to the controller, and if you do not handle them properly, the file will become corrupt.
The overload that you're using sets the enumeration of serialization formatters. You need to specify the content type explicitly like:
httpResponseMessage.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
You could try
httpResponseMessage.Content.Headers.Add("Content-Type", "application/octet-stream");
Consider the requirement to log incoming SOAP requests to an ASP.NET ASMX web service. The task is to capture the raw XML being sent to the web service.
The incoming message needs to be logged for debug inspection. The application already has its own logging library in use, so the ideal usage would be something like this:
//string or XML, it doesn't matter.
string incomingSoapRequest = GetSoapRequest();
Logger.LogMessage(incomingSoapRequest);
Are there any easy solutions to capture the raw XML of the incoming SOAP requests?
Which events would you handle to get access to this object and the relevant properties?
Is there any way IIS can capture the incoming request and push it to a log?
You can also implement this by placing the following code in Global.asax.cs:
protected void Application_BeginRequest(object sender, EventArgs e)
{
// Create byte array to hold request bytes
byte[] inputStream = new byte[HttpContext.Current.Request.ContentLength];
// Read entire request inputstream
HttpContext.Current.Request.InputStream.Read(inputStream, 0, inputStream.Length);
//Set stream back to beginning
HttpContext.Current.Request.InputStream.Position = 0;
//Get XML request
string requestString = ASCIIEncoding.ASCII.GetString(inputStream);
}
I have a Utility method in my web service that I use to capture the request when something happens that I am not expecting, like an unhandled exception.
/// <summary>
/// Captures raw XML request and writes to FailedSubmission folder.
/// </summary>
internal static void CaptureRequest()
{
const string procName = "CaptureRequest";
try
{
log.WarnFormat("{0} - Writing XML request to FailedSubmission folder", procName);
byte[] inputStream = new byte[HttpContext.Current.Request.ContentLength];
//Get current stream position so we can set it back to that after logging
Int64 currentStreamPosition = HttpContext.Current.Request.InputStream.Position;
HttpContext.Current.Request.InputStream.Position = 0;
HttpContext.Current.Request.InputStream.Read(inputStream, 0, HttpContext.Current.Request.ContentLength);
//Set back stream position to original position
HttpContext.Current.Request.InputStream.Position = currentStreamPosition;
string xml = ASCIIEncoding.ASCII.GetString(inputStream);
string fileName = Guid.NewGuid().ToString() + ".xml";
log.WarnFormat("{0} - Request being written to filename: {1}", procName, fileName);
File.WriteAllText(Configuration.FailedSubmissionsFolder + fileName, xml);
}
catch
{
}
}
Then in web.config I store several AppSetting values that define what level I want to use to capture the request.
<!-- true/false - If true will write to an XML file the raw request when any unhandled exception occurs -->
<add key="CaptureRequestOnUnhandledException" value="true"/>
<!-- true/false - If true will write to an XML file the raw request when any type of error is returned to the client-->
<add key="CaptureRequestOnAllFailures" value="false"/>
<!-- true/false - If true will write to an XML file the raw request for every request to the web service -->
<add key="CaptureAllRequests" value="false"/>
Then my Application_BeginRequest is modified like so. Note that Configuration is a static class I created to read settings from web.config and other areas (a sketch of what it might look like follows the code).
protected void Application_BeginRequest(object sender, EventArgs e)
{
if(Configuration.CaptureAllRequests)
{
Utility.CaptureRequest();
}
}
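For completeness, here is a minimal sketch of what such a Configuration wrapper might look like (the author's actual class isn't shown, so this is an assumption):
using System.Configuration;

internal static class Configuration
{
    public static bool CaptureRequestOnUnhandledException
    {
        get { return ReadBool("CaptureRequestOnUnhandledException"); }
    }

    public static bool CaptureRequestOnAllFailures
    {
        get { return ReadBool("CaptureRequestOnAllFailures"); }
    }

    public static bool CaptureAllRequests
    {
        get { return ReadBool("CaptureAllRequests"); }
    }

    // Key name is assumed; the original appSettings snippet doesn't show it.
    public static string FailedSubmissionsFolder
    {
        get { return ConfigurationManager.AppSettings["FailedSubmissionsFolder"]; }
    }

    private static bool ReadBool(string key)
    {
        bool value;
        return bool.TryParse(ConfigurationManager.AppSettings[key], out value) && value;
    }
}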
One way to capture the raw message is to use SoapExtensions.
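A minimal sketch of that approach, modelled on the standard TraceExtension sample (the Logger call is the hypothetical one from the question), looks roughly like this:
using System;
using System.IO;
using System.Web.Services.Protocols;

public class LoggingSoapExtension : SoapExtension
{
    private Stream oldStream;
    private Stream newStream;

    // Lets the extension sit between the wire and the framework.
    public override Stream ChainStream(Stream stream)
    {
        oldStream = stream;
        newStream = new MemoryStream();
        return newStream;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute) { return null; }
    public override object GetInitializer(Type serviceType) { return null; }
    public override void Initialize(object initializer) { }

    public override void ProcessMessage(SoapMessage message)
    {
        if (message.Stage == SoapMessageStage.BeforeDeserialize)
        {
            // Incoming request: oldStream holds the raw XML from the wire.
            Copy(oldStream, newStream);
            newStream.Position = 0;
            string incomingSoapRequest = new StreamReader(newStream).ReadToEnd();
            newStream.Position = 0;
            // Logger.LogMessage(incomingSoapRequest); // hypothetical logger from the question
        }
        else if (message.Stage == SoapMessageStage.AfterSerialize)
        {
            // Outgoing response: hand the serialized output back to the wire.
            newStream.Position = 0;
            Copy(newStream, oldStream);
        }
    }

    private static void Copy(Stream from, Stream to)
    {
        // Deliberately not disposing the reader/writer so the underlying streams stay open.
        TextReader reader = new StreamReader(from);
        TextWriter writer = new StreamWriter(to);
        writer.Write(reader.ReadToEnd());
        writer.Flush();
    }
}
The extension then has to be registered, either through a custom SoapExtensionAttribute on the web method or under <webServices><soapExtensionTypes> in web.config.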
An alternative to SoapExtensions is to implement IHttpModule and grab the input stream as it's coming in.
public class LogModule : IHttpModule
{
public void Init(HttpApplication context)
{
context.BeginRequest += this.OnBegin;
}
private void OnBegin(object sender, EventArgs e)
{
HttpApplication app = (HttpApplication)sender;
HttpContext context = app.Context;
byte[] buffer = new byte[context.Request.InputStream.Length];
context.Request.InputStream.Read(buffer, 0, buffer.Length);
context.Request.InputStream.Position = 0;
string soapMessage = Encoding.ASCII.GetString(buffer);
// Do something with soapMessage
}
public void Dispose()
{
// Nothing to clean up.
}
}
You know that you don't actually need to create an HttpModule, right?
You can also read the contents of the Request.InputStream from within your asmx WebMethod.
Here is an article I wrote on this approach.
Code is as follows:
using System;
using System.Collections.Generic;
using System.Web;
using System.Xml;
using System.IO;
using System.Text;
using System.Web.Services;
using System.Web.Services.Protocols;
namespace SoapRequestEcho
{
[WebService(
Namespace = "http://soap.request.echo.com/",
Name = "SoapRequestEcho")]
public class EchoWebService : WebService
{
[WebMethod(Description = "Echo Soap Request")]
public XmlDocument EchoSoapRequest(int input)
{
// Initialize soap request XML
XmlDocument xmlSoapRequest = new XmlDocument();
// Get raw request body
Stream receiveStream = HttpContext.Current.Request.InputStream;
// Move to beginning of input stream and read
receiveStream.Position = 0;
using (StreamReader readStream = new StreamReader(receiveStream, Encoding.UTF8))
{
// Load into XML document
xmlSoapRequest.Load(readStream);
}
// Return
return xmlSoapRequest;
}
}
}
There are no easy ways to do this. You will have to implement a SoapExtension. The example at the previous link shows an extension that can be used to log the data.
If you had been using WCF, then you could simply set the configuration to produce message logs.
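For reference, WCF message logging is configuration only; a minimal sketch (the log path is an example) in web.config looks like this:
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="false"
                    logMalformedMessages="true" />
  </diagnostics>
</system.serviceModel>
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\Logs\messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>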
According to Steven de Salas, you can use the Request.InputStream property within the webmethod. I have not tried this, but he says that it works.
I would want to test this with both http and https, and with and without other SoapExtensions running at the same time. These are things that might affect what kind of stream the InputStream is set to. Some streams cannot seek, for instance, which might leave you with a stream positioned after the end of the data, and which you cannot move to the beginning.