I am using WSO2 EI 6.4.0. I have tried the development described in this link, and it works for me, but I need to get the username and password from another back-end service. The example showed a hard-coded username and password; I have added that code below for your reference. Please help me read the username and password from a property file.
public boolean processSecurity(String credentials) {
    // Credentials arrive as a Base64-encoded "username:password" pair
    String decodedCredentials = new String(new Base64().decode(credentials.getBytes()));
    String username = decodedCredentials.split(":")[0];
    String password = decodedCredentials.split(":")[1];
    return "admin".equals(username) && "admin".equals(password);
}
I have added the WSO2 EI handler as follows. I need to pass these values from a back-end service, or call another sequence and load them.
<api context="/test">
    <resource methods="POST">
        <inSequence>
            ................
        </inSequence>
        <outSequence>
            ................
        </outSequence>
    </resource>
    <handlers>
        <handler class="rezg.ride.common.BasicAuthHandler">
            <property name="cm_password" value="admin"/>
            <property name="cm_userName" value="admin"/>
        </handler>
    </handlers>
</api>
When we run the above API, the handlers run first, followed by the in and out sequences. So I need to fetch the username and password, by calling a sequence or some other method, before this BasicAuthHandler runs.
If you need to read a property file from the class mediator, it's just straightforward Java property-file reading. Please refer to the following sample, which reads the carbon.properties file in the conf directory.
public boolean mediate(MessageContext context) {
    String passwordFileLocation = System.getProperty("conf.location") + "/carbon.properties";
    try (FileInputStream input = new FileInputStream(passwordFileLocation)) {
        // Load the properties file
        Properties prop = new Properties();
        prop.load(input);
        log.info("------org.wso2.CipherTransformation : " + prop.getProperty("org.wso2.CipherTransformation"));
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    return true;
}
To get the server location and the conf location, there are Java system properties set when the WSO2 server starts. The following are some of the useful system properties:
carbon.local.ip
carbon.home
conf.location
I have an ASP.NET MVC app running in an Azure app service with one staging slot, and a build and release pipeline in VSTS.
I want the production instance to have Allow / in robots.txt and Disallow / in the staging slot at all times.
Currently we change robots.txt manually every time we do a swap, but this is error prone.
How can I automate this process?
To solve this problem I considered creating the robots.txt file dynamically based on app settings in the Azure portal (set to stay with the slot); however, this won't work, since after the swap happens prod will have the staging Disallow rule.
Can anyone advise the best way to manage this?
Robots are mainly used by search engines to crawl and check pages on public websites. Staging and other deployment slots are not public (and should not be public, unless you have a good reason for that), so it doesn't make much sense to configure or manage robots for them. Secondly, in most cases I would recommend redirecting any public request to your production slot and keeping staging offline and active for internal use cases only. That would also help you keep the analytics and logs coming from the public only, not polluted with internal and deployment-slot traffic.
Anyway, if you are still inclined to do this, there is one way you can manage it: write your own routing to control the robots file, and render a content-type: text/plain page whose body is dynamic, based on whether it is a staging or production request. Something like this:
// Create the robots.txt file dynamically, by controlling the URL handler
[Route("robots.txt")]
public ContentResult DynamicRobotsFile()
{
    StringBuilder content = new StringBuilder();
    content.AppendLine("user-agent: *");
    // Check the condition by URL or environment variable
    if (allow)
    {
        content.AppendLine("Allow: /");
    }
    else
    {
        content.AppendLine("Disallow: /");
    }
    return this.Content(content.ToString(), "text/plain", Encoding.UTF8);
}
This way you can manage how robots.txt is created, and you can control the allow/disallow rules for the robots. You can create a separate controller, or just an action in the home controller of your app.
Now that you know how to do this, you can set up app settings or environment variables for the production/staging slots to drive the allow flag in the check above; a sketch follows.
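For instance, a minimal sketch of how the allow flag could be driven by a slot-sticky app setting. The "IsProduction" setting name is an assumption for illustration, not something defined in this answer; mark it as a deployment slot setting in the Azure portal so it stays with its slot during swaps.
private static bool IsProductionSlot()
{
    // "IsProduction" is a hypothetical app setting; on App Service, slot-sticky
    // settings are surfaced to the app through ConfigurationManager.AppSettings.
    string value = System.Configuration.ConfigurationManager.AppSettings["IsProduction"];
    return string.Equals(value, "true", StringComparison.OrdinalIgnoreCase);
}
bool allow = IsProductionSlot();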
I use the code below and it works for me.
[Route("robots.txt")]
public ContentResult DynamicRobotsFile()
{
    StringBuilder content = new StringBuilder();
    // Only emit a Disallow rule outside production; production gets an
    // empty robots.txt, which allows everything.
    if (System.Configuration.ConfigurationManager.AppSettings["production"] != "true")
    {
        content.AppendLine("user-agent: *");
        content.AppendLine("Disallow: /");
    }
    return this.Content(content.ToString(), "text/plain", Encoding.UTF8);
}
web.config (the TransferRequestHandler entry routes the static-looking robots.txt path through the managed pipeline, so the MVC route above can handle it):
<appSettings>
    <add key="production" value="false" />
</appSettings>

<system.webServer>
    <handlers>
        <add name="RobotsTxt" path="robots.txt" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
    </handlers>
</system.webServer>
EDITED
I use this version now.
[Route("/robots.txt")]
public ContentResult RobotsTxt()
{
    var sb = new StringBuilder().AppendLine("User-Agent: *");
    if (_env.IsProduction())
    {
        sb.AppendLine("Allow: /");
        sb.AppendLine("Disallow: /admin");
    }
    else
    {
        sb.AppendLine("Disallow: /");
    }
    sb.AppendLine(string.Empty);
    sb.AppendLine($"Sitemap: {this.Request.Scheme}://{this.Request.Host}/sitemap.xml");
    return this.Content(sb.ToString(), "text/plain", Encoding.UTF8);
}
and I use IWebHostEnvironment to detect whether or not it's production:
public class SeoController : Controller
{
    private readonly IWebHostEnvironment _env;

    public SeoController(IWebHostEnvironment env)
    {
        _env = env;
    }
}
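For completeness, a minimal hosting sketch, assuming a .NET 6-style Program.cs (an assumption on my part, not part of the answer above). The environment that IsProduction() checks comes from the ASPNETCORE_ENVIRONMENT variable, which you can set per slot in the Azure portal and mark as a deployment slot setting:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();
// Picks up attribute routes such as [Route("/robots.txt")] on SeoController.
app.MapControllers();
app.Run();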
I'm "playing" around with custom inbound URL routing and have come across a problem.
When I pass my custom route a URL to examine that ends in a file extension (*.+), my class is not fired when I submit the request.
An example URL would be "~/old/windows.html".
When I step through this in the debugger, my RouteBase implementation doesn't fire. If I edit the URL that I pass to the constructor of my route so it tries to match against "~/old/windows", my implementation fires as expected.
Again, if I change the URL to examine to "~/old/windows." the problem reoccurs.
My route implementation is below :-
public class LegacyRoute : RouteBase
{
    private string[] _urls;

    public LegacyRoute(string[] targetUrls)
    {
        _urls = targetUrls;
    }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        RouteData result = null;
        string requestedURL = httpContext.Request.AppRelativeCurrentExecutionFilePath;
        if (_urls.Contains(requestedURL, StringComparer.OrdinalIgnoreCase))
        {
            result = new RouteData(this, new MvcRouteHandler());
            result.Values.Add("controller", "Legacy");
            result.Values.Add("action", "GetLegacyURL");
            result.Values.Add("legacyURL", requestedURL);
        }
        return result;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        return null;
    }
}
In the RouteConfig file I have registered my route like so :-
routes.MapMvcAttributeRoutes();
routes.Add(new LegacyRoute(new[]{"~/articles/windows.html","~/old/.Net_1.0_Class_Library"}));
Can anyone point out why there is a problem?
By default, the .html extension is not handled by .NET; it is handled by IIS directly. You can override this by adding the following section to Web.config under <system.webServer>:
<handlers>
    <add name="HtmlFileHandler" path="*.html" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>
As pointed out here, the above will route EVERY .html request to .NET. You might want to be more specific by providing a more complete path (for example, path="old/windows.html") if you don't want your routing to handle every .html file.
I've found the problem, and I'm sure this will help out a lot of fellow developers.
The problem is with IIS Express running via Visual Studio.
There is a module configured in applicationhost.config called :-
UrlRoutingModule-4.0
This is how it looks in the file :-
<add name="UrlRoutingModule-4.0" type="System.Web.Routing.UrlRoutingModule" preCondition="managedHandler,runtimeVersionv4.0" />
You need to set the preCondition attribute to an empty string, i.e. preCondition="".
To do this :-
Run your app via Visual Studio.
Right click on IIS Express in your system tray, select "Show All Applications"
Click on the project you wish to edit, then click the config URL.
Open the file with Visual Studio, locate the module, and amend it.
Hope this helps anyone else who runs into a similar problem.
I have a little problem with the MuleSoft CMIS connector. I have an application that uploads and downloads files from Alfresco. I connect to Alfresco through AtomPub and use CMIS for all actions towards Alfresco.
The problem is this:
I used to get the object from the repository and it worked fine. In my flow I added a component that takes the object (of type DocumentImpl) from the flow, gets its InputStream, casts it to an Object, and returns it. The browser starts downloading the file, but it has no idea what the file is, because no extension is attached to it.
And finally the question: How do I attach the extension to the file being downloaded?
EDIT: some code added
@Override
public Object onCall(MuleEventContext eventContext) throws Exception {
    MuleMessage mes = eventContext.getMessage();
    System.out.println("Message is :" + mes);
    DocumentImpl doc = mes.getPayload(DocumentImpl.class);
    HttpResponse res = new HttpResponse();
    InputStream a = doc.getContentStream().getStream();
    String m = doc.getContentStreamMimeType();
    String n = doc.getContentStreamFileName();
    res.setBody(mes);
    return a;
}
OK, I solved the problem. Basically the best way to do this is to change the flow to this:
<set-payload value ="#[payload.getContentStream()]" />
<set-variable value="#[payload.getMimeType()]" variableName="mime" doc:name="Variable" />
<set-variable value="#[payload.getFileName()]" variableName="name" doc:name="Variable" />
<!-- Set Content-Type to stored mimetype -->
<set-property value="#[flowVars['mime']]" propertyName="Content-Type" />
<set-property propertyName="File-Name" value="#[flowVars['name']]"/>
<set-property value="attachment; filename=#[flowVars['name']]" propertyName="Content-Disposition" />
This should go in the Mule flow after the CMIS connector returns the object. It takes the MIME type and file name from the payload and returns them with the response!
I am working on an ASP.NET web application that is part of TFS and is used by the development team. Recently, as part of the project, we set up ADFS and are now attempting to enforce authentication of the project against an ADFS server.
On my development machine I have gone through the steps of adding an STS reference, which generates the federation metadata and updates the web.config file for the project. Authorization within the web.config uses thumbprint certification, which requires me to add the ADFS certificate to my local machine, as well as generate a signed certificate for the dev machine and add it to ADFS.
All is set up and working, but looking at the web.config and FederationMetadata.xml documents, these "appear" to be machine specific. I suspect that if I check the project/files into TFS, the next developer or tester who takes a build will end up with a broken build on their machine.
My question: within TFS, what is the process for a scenario like this, so I can check in and still allow my team to check out, build, and test the project with the latest code in their development or test environments?
My workaround at this time is to exclude FederationMetadata.xml and web.config from check-in, then manually set up ADFS authentication on each development machine, and likewise for product test. Once done, each person can prevent their local copies of FederationMetadata.xml and web.config from being checked in (i.e., keep their own local copies), and when checking in/out just ensure each developer preserves their own copies (or does not check them into TFS).
This seems extremely inefficient and all but bypasses the essence of source-code management, since developers are required to keep local copies of files on their machines. It also introduces the opportunity to accidentally check in local files or overwrite them.
Does anyone have any references, documentation, or information on how to check in code with machine-specific (ADFS) configurations and not hose up the entire development environment?
Thanks in advance,
I agree that the way that the WIF toolset does configuration is not great for working in teams with multiple developers and test environments. The approach that I've taken to get past this is to change WIF to be configured at runtime.
One approach you can take is to put a dummy /FederationMetadata/2007-06/FederationMetadata.xml in place and check that in to TFS. It must have valid urls and be otherwise a valid file.
Additionally, you will need a valid federatedAuthentication section in web.config with dummy (but validly formed) audienceUris, issuer, and realm entries.
<microsoft.identityModel>
    <service>
        <audienceUris>
            <add value="https://yourwebsite.com/" />
        </audienceUris>
        <federatedAuthentication>
            <wsFederation passiveRedirectEnabled="true" issuer="https://yourissuer/v2/wsfederation" realm="https://yourwebsite.com/" requireHttps="true" />
            <cookieHandler requireSsl="false" />
        </federatedAuthentication>
        etc...
Then, change your application's ADFS configuration to be completely runtime driven. You can do this by hooking into various events during ADFS module startup and the ASP.NET pipeline.
Take a look at this forum post for more information.
Essentially, you'll want something like this in global.asax.cs. This is code I've used on a Windows Azure Web Role to read from ServiceConfiguration.cscfg (which is changeable at deploy/runtime in the Azure model). It could easily be adapted to read from web.config or any other configuration system of your choosing (e.g., a database).
protected void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
}

protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
    // Due to the way the ASP.NET pipeline works, the only way to change
    // configurations inside federatedAuthentication (which are configurations on the http modules)
    // is to catch another event, which is raised every time a request comes in.
    ConfigureWSFederation();
}

/// <summary>
/// Dynamically load WIF configuration so that it can live in ServiceConfiguration.cscfg instead of Web.config
/// </summary>
/// <param name="sender"></param>
/// <param name="eventArgs"></param>
void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs eventArgs)
{
    try
    {
        ServiceConfiguration serviceConfiguration = eventArgs.ServiceConfiguration;
        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthAudienceUri")))
        {
            serviceConfiguration.AudienceRestriction.AllowedAudienceUris.Add(new Uri(RoleEnvironment.GetConfigurationSettingValue("FedAuthAudienceUri")));
            Trace.TraceInformation("ServiceConfiguration: AllowedAudienceUris = {0}", serviceConfiguration.AudienceRestriction.AllowedAudienceUris[0]);
        }
        serviceConfiguration.CertificateValidationMode = X509CertificateValidationMode.None;
        Trace.TraceInformation("ServiceConfiguration: CertificateValidationMode = {0}", serviceConfiguration.CertificateValidationMode);
        // Now load the trusted issuers
        if (serviceConfiguration.IssuerNameRegistry is ConfigurationBasedIssuerNameRegistry)
        {
            ConfigurationBasedIssuerNameRegistry issuerNameRegistry = serviceConfiguration.IssuerNameRegistry as ConfigurationBasedIssuerNameRegistry;
            // Can have more than one. We don't.
            issuerNameRegistry.AddTrustedIssuer(RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerThumbprint"), RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerName"));
            Trace.TraceInformation("ServiceConfiguration: TrustedIssuer = {0} : {1}", RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerThumbprint"), RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerName"));
        }
        else
        {
            Trace.TraceInformation("Custom IssuerNameRegistry type configured, ignoring internal settings");
        }
        // Configures WIF to use the RsaEncryptionCookieTransform if ServiceCertificateThumbprint is specified.
        // This is only necessary on Windows Azure because DPAPI is not available.
        ConfigureWifToUseRsaEncryption(serviceConfiguration);
    }
    catch (Exception exception)
    {
        Trace.TraceError("Unable to initialize the federated authentication configuration. {0}", exception.Message);
    }
}

/// <summary>
/// Configures WIF to use the RsaEncryptionCookieTransform; DPAPI is not available on Windows Azure.
/// </summary>
/// <param name="serviceConfiguration"></param>
private void ConfigureWifToUseRsaEncryption(ServiceConfiguration serviceConfiguration)
{
    String svcCertThumbprint = RoleEnvironment.GetConfigurationSettingValue("FedAuthServiceCertificateThumbprint");
    if (!String.IsNullOrEmpty(svcCertThumbprint))
    {
        X509Store certificateStore = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        try
        {
            certificateStore.Open(OpenFlags.ReadOnly);
            // We have to pass false as the last parameter to find self-signed certs.
            X509Certificate2Collection certs = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, svcCertThumbprint, false /*validOnly*/);
            if (certs.Count != 0)
            {
                serviceConfiguration.ServiceCertificate = certs[0];
                // Use the service certificate to protect the cookies that are sent to the client.
                List<CookieTransform> sessionTransforms =
                    new List<CookieTransform>(new CookieTransform[] { new DeflateCookieTransform(),
                        new RsaEncryptionCookieTransform(serviceConfiguration.ServiceCertificate)});
                SessionSecurityTokenHandler sessionHandler = new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly());
                serviceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);
                Trace.TraceInformation("ConfigureWifToUseRsaEncryption: Using RsaEncryptionCookieTransform for cookieTransform");
            }
            else
            {
                Trace.TraceError("Could not find service certificate in the My store on LocalMachine");
            }
        }
        finally
        {
            certificateStore.Close();
        }
    }
}

private static void ConfigureWSFederation()
{
    // Load the federatedAuthentication settings
    WSFederationAuthenticationModule federatedModule = FederatedAuthentication.WSFederationAuthenticationModule as WSFederationAuthenticationModule;
    if (federatedModule != null)
    {
        federatedModule.PassiveRedirectEnabled = true;
        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRequireHttps")))
        {
            federatedModule.RequireHttps = bool.Parse(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRequireHttps"));
        }
        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationIssuer")))
        {
            federatedModule.Issuer = RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationIssuer");
        }
        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRealm")))
        {
            federatedModule.Realm = RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRealm");
        }
        CookieHandler cookieHandler = FederatedAuthentication.SessionAuthenticationModule.CookieHandler;
        cookieHandler.RequireSsl = false;
    }
    else
    {
        Trace.TraceError("Unable to configure the federated module. The modules weren't loaded.");
    }
}
This will then allow you to configure the following settings at runtime:
<Setting name="FedAuthAudienceUri" value="-- update with audience url. e.g. https://yourwebsite/ --" />
<Setting name="FedAuthWSFederationIssuer" value="-- update with WSFederation endpoint. e.g. https://yourissuer/v2/wsfederation--" />
<Setting name="FedAuthWSFederationRealm" value="-- update with WSFederation realm. e.g. https://yourwebsite/" />
<Setting name="FedAuthTrustedIssuerThumbprint" value="-- update with certificate thumbprint from ACS configuration. e.g. cb27dd190485afe0f62e470e4e3578de51d52bf4--" />
<Setting name="FedAuthTrustedIssuerName" value="-- update with issuer name. e.g. https://yourissuer/--" />
<Setting name="FedAuthServiceCertificateThumbprint" value="-- update with service certificate thumbprint. e.g. same as HTTPS thumbprint: FE95C43CD4C4F1FC6BC1CA4349C3FF60433648DB --" />
<Setting name="FedAuthWSFederationRequireHttps" value="true" />
I have a console application that runs on the same computer that hosts a bunch of web.config files. I need the console application to open each web.config file, decrypt the connection string, and then test whether the connection string works.
The problem I am running into is that OpenExeConfiguration expects a WinForms application configuration file (app.dll.config), and OpenWebConfiguration needs to be run through IIS. Since this is my local machine, I'm not running IIS (I use Visual Studio's built-in server).
Is there a way I can open the web.config files while still getting the robustness of .NET's capabilities to decrypt the connection strings?
Thanks
Update
OpenWebConfiguration works if you are querying IIS directly, or if you are the website whose web.config you want to look up. What I am looking to accomplish is the same sort of functionality, but from a console application opening the web.config file of a website on the same machine, without an IIS query, because IIS isn't running on my machine.
OK, I got it... compiled and accessed this, so I know it works...
VirtualDirectoryMapping vdm = new VirtualDirectoryMapping(@"C:\test", true);
WebConfigurationFileMap wcfm = new WebConfigurationFileMap();
wcfm.VirtualDirectories.Add("/", vdm);

// Get the Web application configuration object.
Configuration config = WebConfigurationManager.OpenMappedWebConfiguration(wcfm, "/");

ProtectSection(config, @"connectionStrings", "DataProtectionConfigurationProvider");
This is assuming you have a file called web.config in a directory called C:\Test.
I adjusted @Dillie-O's methods to take a Configuration as a parameter, as sketched below.
You must also reference System.Web and System.Configuration, plus any DLLs containing configuration handlers that are set up in your web.config.
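A minimal sketch of those adjusted methods, assuming the only change from the originals shown in the next answer is that the Configuration object is passed in rather than opened via Request.ApplicationPath:
private static void ProtectSection(Configuration config, string sectionName, string provider)
{
    ConfigurationSection section = config.GetSection(sectionName);
    // Only encrypt when the section exists and is not already protected.
    if (section != null && !section.SectionInformation.IsProtected)
    {
        section.SectionInformation.ProtectSection(provider);
        config.Save();
    }
}

private static void UnProtectSection(Configuration config, string sectionName)
{
    ConfigurationSection section = config.GetSection(sectionName);
    // Only decrypt when the section exists and is currently protected.
    if (section != null && section.SectionInformation.IsProtected)
    {
        section.SectionInformation.UnprotectSection();
        config.Save();
    }
}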
When the ConfigurationManager class grabs a section from the config file, that section has an "IsProtected" property. If it is protected, you can then unprotect it using some code.
The basic methods for encrypting/decrypting go like this (taken from the article linked below):
private void ProtectSection(string sectionName, string provider)
{
    Configuration config = WebConfigurationManager.OpenWebConfiguration(Request.ApplicationPath);
    ConfigurationSection section = config.GetSection(sectionName);
    if (section != null && !section.SectionInformation.IsProtected)
    {
        section.SectionInformation.ProtectSection(provider);
        config.Save();
    }
}

private void UnProtectSection(string sectionName)
{
    Configuration config = WebConfigurationManager.OpenWebConfiguration(Request.ApplicationPath);
    ConfigurationSection section = config.GetSection(sectionName);
    if (section != null && section.SectionInformation.IsProtected)
    {
        section.SectionInformation.UnprotectSection();
        config.Save();
    }
}
Check out this article for the full details on working with this.
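Tying this back to the console-app scenario in the question, here is a usage sketch. It assumes config is the mapped Configuration object from the accepted answer; note that reading a protected section already returns decrypted values, as long as the machine has the key that protected it:
// Read each connection string (decrypted transparently) and test it.
var css = (ConnectionStringsSection)config.GetSection("connectionStrings");
foreach (ConnectionStringSettings cs in css.ConnectionStrings)
{
    using (var conn = new System.Data.SqlClient.SqlConnection(cs.ConnectionString))
    {
        conn.Open(); // throws if the connection string doesn't work
    }
}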
public static string WebKey(string key)
{
    // webconfigPath is assumed to hold the full path to the target web.config file.
    var configFile = new System.IO.FileInfo(webconfigPath);
    var vdm = new VirtualDirectoryMapping(configFile.DirectoryName, true, configFile.Name);
    var wcfm = new WebConfigurationFileMap();
    wcfm.VirtualDirectories.Add("/", vdm);
    System.Configuration.Configuration config = WebConfigurationManager.OpenMappedWebConfiguration(wcfm, "/");
    System.Configuration.AppSettingsSection appSettingSection = (System.Configuration.AppSettingsSection)config.GetSection("appSettings");
    System.Configuration.KeyValueConfigurationElement kv = appSettingSection.Settings.AllKeys
        .Where(x => x.Equals(key))
        .Select(x => appSettingSection.Settings[key])
        .FirstOrDefault();
    return kv != null ? kv.Value : string.Empty;
}
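A quick usage sketch (webconfigPath is assumed to already point at the target web.config; the "SmtpServer" key name is purely illustrative):
// Reads the "SmtpServer" appSettings entry from the mapped web.config.
string smtpServer = WebKey("SmtpServer");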
I think you want to use the WebConfigurationManager class with its OpenWebConfiguration method.
It takes a path to the web.config and should open it just like it would in an HttpContext-based application.
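A minimal sketch of that suggestion; the "/MyWebSite" virtual path is illustrative, and note that outside a web context this overload resolves the path against IIS, which is why the mapped-configuration approach above may be needed when IIS isn't running:
// Open the site's configuration by virtual path and read a section;
// protected sections come back decrypted when the key is accessible.
Configuration config = WebConfigurationManager.OpenWebConfiguration("/MyWebSite");
ConnectionStringsSection css = (ConnectionStringsSection)config.GetSection("connectionStrings");
Console.WriteLine(css.SectionInformation.IsProtected);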