I'm having trouble getting the Hangfire (1.5.8) dashboard to work inside of an IIS Virtual Directory. Everything works beautifully in my dev environment, where my application is simply mapped to the root of localhost. Our beta server, on the other hand, uses Virtual Directories to separate apps and app pools.
It's an ASP.NET MVC site using Hangfire with an OWIN Startup class. It gets deployed to http://beta-server/app-name/. When I attempt to access either http://beta-server/app-name/hangfire or http://beta-server/hangfire I get a 404 from IIS.
For the purposes of troubleshooting this, my IAuthenticationFilter simply returns true.
Here is my Startup.cs, pretty basic:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=316888
        GlobalConfiguration.Configuration
            .UseSqlServerStorage(new DetectsEnvironment().GetEnvironment());

        app.UseHangfireDashboard("/hangfire", new DashboardOptions
        {
            AuthorizationFilters = new[] { new AuthenticationFilter() }
        });
        app.UseHangfireServer();
    }
}
Does anyone have a working implementation that gets deployed to a Virtual Directory? Are there any OWIN middleware admin/management tools I can use to dig into what URL is getting registered within IIS?
I ended up fixing this simply by adding an HTTP handler to the <handlers> section in web.config:
<system.webServer>
  <handlers>
    <add name="hangfireDashboard" path="hangfire" type="System.Web.DefaultHttpHandler" verb="*" />
  </handlers>
</system.webServer>
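As for the second question (digging into what URL OWIN actually sees behind the virtual directory), a small logging middleware registered at the top of Configuration can help. This is only a sketch, not part of the original fix; it just prints the PathBase/Path that IIS hands to the OWIN pipeline:

app.Use(async (context, next) =>
{
    // Log the path base/path IIS hands to OWIN, to confirm whether requests
    // to /app-name/hangfire reach the application at all.
    System.Diagnostics.Trace.WriteLine(string.Format(
        "OWIN request: PathBase='{0}' Path='{1}'",
        context.Request.PathBase, context.Request.Path));
    await next();
});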
I had a similar issue in ASP.NET Core 2.0 and it required proper authorization setup (I use a middleware to protect the route, so I did not rely on authorization in my example):
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    Authorization = new[] { new HangfireDashboardAuthorizationFilter() }
});
/// <summary>
/// authorization required when deployed
/// </summary>
public class HangfireDashboardAuthorizationFilter : IDashboardAuthorizationFilter
{
    ///<inheritdoc/>
    public bool Authorize(DashboardContext context)
    {
        // var httpContext = context.GetHttpContext();
        // Allow all authenticated users to see the Dashboard (potentially dangerous).
        // handled through middleware
        return true; // httpContext.User.Identity.IsAuthenticated;
    }
}
There is no need to change anything in web.config.
For more information, check the Hangfire documentation on this topic.
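For illustration, the middleware-based protection mentioned above could look roughly like this. It is a hedged sketch, not the original author's code; it simply short-circuits unauthenticated requests to the dashboard path before Hangfire's permissive filter runs:

app.Use(async (context, next) =>
{
    // Hypothetical guard: reject unauthenticated requests to the dashboard
    // before the (always-true) Hangfire authorization filter is reached.
    if (context.Request.Path.StartsWithSegments("/hangfire") &&
        !(context.User.Identity?.IsAuthenticated ?? false))
    {
        context.Response.StatusCode = 401; // Unauthorized
        return;
    }
    await next();
});

app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    Authorization = new[] { new HangfireDashboardAuthorizationFilter() }
});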
I had the exact same problem. In my case, this was because of bad configuration - the Startup class was not called. So try to add the following to your config file:
<add key="owin:appStartup" value="YourProject.YourNamespace.Startup, YourProject" />
<add key="owin:AutomaticAppStartup" value="true" />
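Alternatively, the startup class can be referenced with the OwinStartup assembly attribute instead of the appSetting. A sketch reusing the names from the config keys above; adjust to your actual namespace:

using Microsoft.Owin;

[assembly: OwinStartup(typeof(YourProject.YourNamespace.Startup))]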
Hope this helps.
Martin
I have an ASP.NET MVC app running in an Azure app service with one staging slot, and a build and release pipeline in VSTS.
I want the production instance to have Allow / in robots.txt and Disallow / in the staging slot at all times.
Currently we are changing robots.txt manually every time we do a swap, but this is error prone.
How can I automate this process?
To solve this problem, I did consider creating the robots.txt file dynamically based on app settings set in the Azure portal (set to stay with the slot); however, this won't work, since after the swap happens prod will have the staging Disallow rule.
Can anyone advise the best way to manage this?
Robots are mainly used by search engines to crawl and check pages on public websites. Staging and other deployment slots are not public (and should not be public, unless you have a good reason for that), and thus it doesn't make much sense to configure and manage robots.txt for them. Secondly, in most cases I would recommend redirecting any public request to your production slot and keeping staging offline and active for internal use cases only. That would also help you keep the analytics and logs coming from the public only, without them being polluted by internal and deployment-slot traffic.
Anyway, if you are still inclined to do this, there is one way you can manage it. Write your own routing to control the robots.txt file, and then render a content-type: text/plain page whose content is dynamic based on whether it is a staging or a production request. Something like this:
// Create the robots.txt file dynamically, by controlling the URL handler
[Route("robots.txt")]
public ContentResult DynamicRobotsFile()
{
    StringBuilder content = new StringBuilder();
    content.AppendLine("user-agent: *");

    // Check the condition by URL or environment variable
    if (allow)
    {
        content.AppendLine("Allow: /");
    }
    else
    {
        content.AppendLine("Disallow: /");
    }

    return this.Content(content.ToString(), "text/plain", Encoding.UTF8);
}
This way you can manage how robots.txt is created, and you are able to control the Allow/Disallow rules for the robots. You can create a separate controller, or just an action in the home controller of your app.
Now that you know how to do this, you can set up the environment variables for the production/staging slots to check the other requirements.
I use the code below and it works for me:
[Route("robots.txt")]
public ContentResult DynamicRobotsFile()
{
    StringBuilder content = new StringBuilder();

    if (System.Configuration.ConfigurationManager.AppSettings["production"] != "true")
    {
        content.AppendLine("user-agent: *");
        content.AppendLine("Disallow: /");
    }

    return this.Content(content.ToString(), "text/plain", Encoding.UTF8);
}
web.config
<appSettings>
  <add key="production" value="false" />
</appSettings>

<system.webServer>
  <handlers>
    <add name="RobotsTxt" path="robots.txt" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
  </handlers>
</system.webServer>
EDITED
I use this version now.
[Route("/robots.txt")]
public ContentResult RobotsTxt()
{
    var sb = new StringBuilder().AppendLine("User-Agent: *");

    if (_env.IsProduction())
    {
        sb.AppendLine("Allow: /");
        sb.AppendLine("Disallow: /admin");
    }
    else
    {
        sb.AppendLine("Disallow: /");
    }

    sb.AppendLine(string.Empty);
    sb.AppendLine($"Sitemap: {this.Request.Scheme}://{this.Request.Host}/sitemap.xml");

    return this.Content(sb.ToString(), "text/plain", Encoding.UTF8);
}
and I use IWebHostEnvironment to detect whether the environment is production or not:
public class SeoController : Controller
{
    private readonly IWebHostEnvironment _env;

    public SeoController(IWebHostEnvironment env)
    {
        _env = env;
    }
}
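For completeness, here is a minimal sketch of how such an attribute-routed controller gets wired up in an ASP.NET Core 3.x Startup. This is an assumed setup, not part of the original answer:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // services.AddControllers() is assumed in ConfigureServices.
    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        // Picks up SeoController's [Route("/robots.txt")] attribute route.
        endpoints.MapControllers();
    });
}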
We're having an odd issue with a WebAPI application hosted by another ASP.NET webapp. The WebAPI controllers are all mapped with Ninject but the ASP.NET host site does not use Ninject.
The issue is that any requests to any of the WebAPI controllers fail with a Ninject error and HTTP 500:
"An error occurred when trying to create a controller of type 'MyObjectsController'. Make sure that the controller has a parameterless public constructor."
However, once even a single request to the main webapp is made (such as opening the login page), the WebAPI calls all work as expected. The WebAPI is registered and initialized as part of the Application_Start global event. The start event is triggered regardless of whether the first request comes in under the WebAPI or the webapp, so it's not bypassing the global startup when coming through the WebAPI before the main app. The WebAPI registration is pretty standard stuff:
GlobalConfiguration.Configure(AddressOf WebApiConfig.Register)
And the Register function itself is nothing unusual:
// Web API configuration and services
var cors = new EnableCorsAttribute("*", "*", "*", "X-Pagination");
//To allow cross-origin credentials in Web API
cors.SupportsCredentials = true;
config.EnableCors(cors);
// To disable host-level authentication inside the Web API pipeline and "un-authenticates" the request.
config.SuppressHostPrincipal();
config.Filters.Add(new HostAuthenticationFilter(Startup.OAuthBearerOptions.AuthenticationType));
// Web API routes
var constraintResolver = new DefaultInlineConstraintResolver();
constraintResolver.ConstraintMap.Add("nonzero", typeof(NonZeroConstraint));
//constraintResolver.ConstraintMap.Add("NonEmptyFolderIds", typeof(NonEmptyFolderIdsConstraint));
config.MapHttpAttributeRoutes(constraintResolver);
var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First();
jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
The NinjectConfig is also pretty standard:
public static class NinjectConfig
{
    /// <summary>
    /// The Ninject kernel.
    /// </summary>
    public static Lazy<IKernel> CreateKernel = new Lazy<IKernel>(() =>
    {
        var kernel = new StandardKernel();
        kernel.Load(Assembly.GetExecutingAssembly());
        RegisterServices(kernel);
        return kernel;
    });

    private static void RegisterServices(KernelBase kernel)
    {
        kernel.Bind<IMyObjectRepository>().To<MyObjectRepository>().InRequestScope();
        ...
    }
}
An example of the DI usage (again, very basic and standard) is:
public class MyObjectRepository : IMyObjectRepository
{
    private readonly IMyOtherObjectRepository _objectRepository;
    ...

    public MyObjectRepository(IMyOtherObjectRepository objectRepository)
    {
        _objectRepository = objectRepository;
        ...
    }
    ...
}
We want to avoid requiring an initial request to the webapp before the WebAPI is available for requests, but nothing seems to be getting us towards a solution.
We initially tried IIS preloading/app initialization by setting Start Mode to AlwaysRunning and Start Automatically to True in the app pool config. We also set preloadEnabled to true on the site and then added the applicationInitialization section to the web.config, such as the following:
<system.webServer>
  ...
  <applicationInitialization>
    <add initializationPage="login.aspx" />
  </applicationInitialization>
  ...
</system.webServer>
However, none of these changes, or variations of them, made any difference to the behavior of the WebAPI. We've scoured the web for more help but are at somewhat of a loss: pretty much everything we've come across points to setting Start Mode, Start Automatically, preloadEnabled, and applicationInitialization and claims it will then magically work, but that's definitely not our experience.
Does anyone have suggestions or ideas?
Install the Ninject integration for Web API NuGet package. It creates a file that initializes Ninject on startup. Here is the doc.
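The key effect is that Web API's dependency resolver gets pointed at Ninject during application start, rather than lazily after the host MVC app has handled a request. A minimal hand-rolled sketch of the same idea, assuming the NinjectDependencyResolver from the Ninject.Web.WebApi package and reusing the question's NinjectConfig (adjust names to your setup):

using System.Web.Http;
using Ninject;
using Ninject.Web.WebApi;

public static class NinjectWebApiBootstrap
{
    // Call this from Application_Start, alongside GlobalConfiguration.Configure(...).
    public static void Initialize()
    {
        IKernel kernel = NinjectConfig.CreateKernel.Value;

        // Make Web API resolve its controllers through Ninject immediately,
        // instead of waiting for anything triggered by the host web app.
        GlobalConfiguration.Configuration.DependencyResolver =
            new NinjectDependencyResolver(kernel);
    }
}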
I'm "playing" around with custom inbound URL routing and have come across a problem.
When the URL I pass to my custom route to examine ends in a dot plus extension (e.g. ".html"), my class is not fired when I submit the request.
An example URL would be "~/old/windows.html".
When I step through this in the debugger, my RouteBase implementation doesn't fire. If I edit the URL that I pass to the constructor of my route to try to match against "~/old/windows", my implementation is fired as expected.
Again, if I change the URL to examine to "~/old/windows." the problem reoccurs.
My Route Implementation is below :-
public class LegacyRoute : RouteBase
{
    private string[] _urls;

    public LegacyRoute(string[] targetUrls)
    {
        _urls = targetUrls;
    }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        RouteData result = null;
        string requestedURL = httpContext.Request.AppRelativeCurrentExecutionFilePath;

        if (_urls.Contains(requestedURL, StringComparer.OrdinalIgnoreCase))
        {
            result = new RouteData(this, new MvcRouteHandler());
            result.Values.Add("controller", "Legacy");
            result.Values.Add("action", "GetLegacyURL");
            result.Values.Add("legacyURL", requestedURL);
        }

        return result;
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
        return null;
    }
}
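For context, the route values above assume a controller shaped roughly like this (a hypothetical sketch based on the "controller"/"action"/"legacyURL" values set in GetRouteData, not code from the original post):

public class LegacyController : Controller
{
    public ActionResult GetLegacyURL(string legacyURL)
    {
        // e.g. look up where the old page now lives and redirect or serve it
        return Content("Requested legacy URL: " + legacyURL);
    }
}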
In the RoutesConfig file I have registered my route like so :-
routes.MapMvcAttributeRoutes();
routes.Add(new LegacyRoute(new[]{"~/articles/windows.html","~/old/.Net_1.0_Class_Library"}));
Can anyone point out why there is a problem?
By default, the .html extension is not handled by .NET; it is handled by IIS directly. You can override this by adding the following section in Web.config under <system.webServer>:
<handlers>
  <add name="HtmlFileHandler" path="*.html" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>
As pointed out here, the above will route EVERY .html request to .NET; you might want to be more specific by providing a more complete path if you don't want your routing to handle every .html file.
I've found the problem, and I'm sure this will help out a lot of fellow developers.
The problem is with IIS Express that is running via Visual Studio.
There is a module configured in the applicationhost.config called :-
UrlRoutingModule-4.0
This is how it looks in the file :-
<add name="UrlRoutingModule-4.0" type="System.Web.Routing.UrlRoutingModule" preCondition="managedHandler,runtimeVersionv4.0" />
You need to set the preCondition Parameter to "".
To do this :-
Run your app via Visual Studio
Right click on IIS Express in your system tray, select "Show All Applications"
Click on the project you wish to edit, then click the config URL.
Open the file with Visual Studio, locate the module and amend it.
Hope this helps anyone else who runs into a similar problem.
I am working on an ASP.NET web application that is part of a TFS project and is used by the development team. Recently, as part of the project, we set up ADFS and are now attempting to enforce authentication of the project against an ADFS server.
On my development machine I have gone through the steps of adding an STS reference, which generates the federation metadata as well as updates the web.config file for the project. Authorization within the web.config uses thumbprint certification, which requires me to add the ADFS certificate to my local machine as well as generate a signed certificate for the dev machine and add this to ADFS.
All is set up and working, but looking at the web.config and FederationMetadata.xml document, these "appear" to be machine specific. I suspect that if I check the project/files into TFS, the next developer or tester that takes a build will end up with a broken build on their machine.
My question is: within TFS, what is the process for a scenario like this, so I can check in and still allow my team to check out, build, and test the project with the latest code in their development or test environments?
My workaround at this time is to exclude the FederationMetadata.xml and web.config from check-in, then on each development machine manually set up ADFS authentication, and likewise for product test. Once done, each person can prevent their local copy of the FederationMetadata.xml and web.config from being checked in (aka have their own local copy); then, when checking in/out, just ensure that each developer preserves their own copy (or does not check them into TFS).
This seems extremely inefficient, and all but bypasses the essence of source code management as developers are being required to keep local copies of files on their machine. This also seems to introduce the opportunity for accidental check-in of local files or overwriting local files.
Does anyone have any references, documentation or information on how to check-in code for (ADFS) machine specific configurations and not hose up the entire development environment?
Thanks in advance,
I agree that the way that the WIF toolset does configuration is not great for working in teams with multiple developers and test environments. The approach that I've taken to get past this is to change WIF to be configured at runtime.
One approach you can take is to put a dummy /FederationMetadata/2007-06/FederationMetadata.xml in place and check that in to TFS. It must have valid urls and be otherwise a valid file.
Additionally, you will need a valid federationAuthentication section in web.config with dummy (but of valid form) audienceUris, issuer and realm entries.
<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://yourwebsite.com/" />
    </audienceUris>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true" issuer="https://yourissuer/v2/wsfederation" realm="https://yourwebsite.com/" requireHttps="true" />
      <cookieHandler requireSsl="false" />
    </federatedAuthentication>
    etc...
Then, change your application's ADFS configuration to be completely runtime driven. You can do this by hooking into various events during the ADFS module startup and ASP.NET pipeline.
Take a look at this forums post for more information.
Essentially, you'll want to have something like this in global.asax.cs. This is some code that I've used on a Windows Azure Web Role to read from ServiceConfiguration.cscfg (which is changeable at deploy/runtime in the Azure model). It could easily be adapted to read from web.config or any other configuration system of your choosing (e.g. database).
protected void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
}

protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
    // Due to the way the ASP.NET pipeline works, the only way to change
    // configurations inside federatedAuthentication (which are configurations on the http modules)
    // is to catch another event, which is raised every time a request comes in.
    ConfigureWSFederation();
}
/// <summary>
/// Dynamically load WIF configuration so that it can live in ServiceConfiguration.cscfg instead of Web.config
/// </summary>
/// <param name="sender"></param>
/// <param name="eventArgs"></param>
void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs eventArgs)
{
    try
    {
        ServiceConfiguration serviceConfiguration = eventArgs.ServiceConfiguration;

        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthAudienceUri")))
        {
            serviceConfiguration.AudienceRestriction.AllowedAudienceUris.Add(new Uri(RoleEnvironment.GetConfigurationSettingValue("FedAuthAudienceUri")));
            Trace.TraceInformation("ServiceConfiguration: AllowedAudienceUris = {0}", serviceConfiguration.AudienceRestriction.AllowedAudienceUris[0]);
        }

        serviceConfiguration.CertificateValidationMode = X509CertificateValidationMode.None;
        Trace.TraceInformation("ServiceConfiguration: CertificateValidationMode = {0}", serviceConfiguration.CertificateValidationMode);

        // Now load the trusted issuers
        if (serviceConfiguration.IssuerNameRegistry is ConfigurationBasedIssuerNameRegistry)
        {
            ConfigurationBasedIssuerNameRegistry issuerNameRegistry = serviceConfiguration.IssuerNameRegistry as ConfigurationBasedIssuerNameRegistry;

            // Can have more than one. We don't.
            issuerNameRegistry.AddTrustedIssuer(RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerThumbprint"), RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerName"));
            Trace.TraceInformation("ServiceConfiguration: TrustedIssuer = {0} : {1}", RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerThumbprint"), RoleEnvironment.GetConfigurationSettingValue("FedAuthTrustedIssuerName"));
        }
        else
        {
            Trace.TraceInformation("Custom IssuerNameRegistry type configured, ignoring internal settings");
        }

        // Configures WIF to use the RsaEncryptionCookieTransform if ServiceCertificateThumbprint is specified.
        // This is only necessary on Windows Azure because DPAPI is not available.
        ConfigureWifToUseRsaEncryption(serviceConfiguration);
    }
    catch (Exception exception)
    {
        Trace.TraceError("Unable to initialize the federated authentication configuration. {0}", exception.Message);
    }
}
/// <summary>
/// Configures WIF to use the RsaEncryptionCookieTransform; DPAPI is not available on Windows Azure.
/// </summary>
/// <param name="serviceConfiguration"></param>
private void ConfigureWifToUseRsaEncryption(ServiceConfiguration serviceConfiguration)
{
    String svcCertThumbprint = RoleEnvironment.GetConfigurationSettingValue("FedAuthServiceCertificateThumbprint");

    if (!String.IsNullOrEmpty(svcCertThumbprint))
    {
        X509Store certificateStore = new X509Store(StoreName.My, StoreLocation.LocalMachine);

        try
        {
            certificateStore.Open(OpenFlags.ReadOnly);

            // We have to pass false as the last parameter to find self-signed certs.
            X509Certificate2Collection certs = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, svcCertThumbprint, false /*validOnly*/);

            if (certs.Count != 0)
            {
                serviceConfiguration.ServiceCertificate = certs[0];

                // Use the service certificate to protect the cookies that are sent to the client.
                List<CookieTransform> sessionTransforms =
                    new List<CookieTransform>(new CookieTransform[] { new DeflateCookieTransform(),
                                                                      new RsaEncryptionCookieTransform(serviceConfiguration.ServiceCertificate) });
                SessionSecurityTokenHandler sessionHandler = new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly());
                serviceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);

                Trace.TraceInformation("ConfigureWifToUseRsaEncryption: Using RsaEncryptionCookieTransform for cookieTransform");
            }
            else
            {
                Trace.TraceError("Could not find service certificate in the My store on LocalMachine");
            }
        }
        finally
        {
            certificateStore.Close();
        }
    }
}
private static void ConfigureWSFederation()
{
    // Load the federatedAuthentication settings
    WSFederationAuthenticationModule federatedModule = FederatedAuthentication.WSFederationAuthenticationModule as WSFederationAuthenticationModule;

    if (federatedModule != null)
    {
        federatedModule.PassiveRedirectEnabled = true;

        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRequireHttps")))
        {
            federatedModule.RequireHttps = bool.Parse(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRequireHttps"));
        }

        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationIssuer")))
        {
            federatedModule.Issuer = RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationIssuer");
        }

        if (!String.IsNullOrEmpty(RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRealm")))
        {
            federatedModule.Realm = RoleEnvironment.GetConfigurationSettingValue("FedAuthWSFederationRealm");
        }

        CookieHandler cookieHandler = FederatedAuthentication.SessionAuthenticationModule.CookieHandler;
        cookieHandler.RequireSsl = false;
    }
    else
    {
        Trace.TraceError("Unable to configure the federated module. The modules weren't loaded.");
    }
}
}
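If you are not on Azure, the RoleEnvironment.GetConfigurationSettingValue calls above could be swapped for a small helper that reads plain appSettings instead. This is only a sketch, assuming you keep the same key names in web.config:

// Hypothetical replacement for RoleEnvironment.GetConfigurationSettingValue:
// read the same keys from <appSettings> in web.config.
private static string GetConfigurationSettingValue(string key)
{
    return System.Configuration.ConfigurationManager.AppSettings[key];
}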
This will then allow you to configure the following settings at runtime:
<Setting name="FedAuthAudienceUri" value="-- update with audience url. e.g. https://yourwebsite/ --" />
<Setting name="FedAuthWSFederationIssuer" value="-- update with WSFederation endpoint. e.g. https://yourissuer/v2/wsfederation--" />
<Setting name="FedAuthWSFederationRealm" value="-- update with WSFederation realm. e.g. https://yourwebsite/" />
<Setting name="FedAuthTrustedIssuerThumbprint" value="-- update with certificate thumbprint from ACS configuration. e.g. cb27dd190485afe0f62e470e4e3578de51d52bf4--" />
<Setting name="FedAuthTrustedIssuerName" value="-- update with issuer name. e.g. https://yourissuer/--" />
<Setting name="FedAuthServiceCertificateThumbprint" value="-- update with service certificate thumbprint. e.g. same as HTTPS thumbprint: FE95C43CD4C4F1FC6BC1CA4349C3FF60433648DB --" />
<Setting name="FedAuthWSFederationRequireHttps" value="true" />
Bertrand created a blog post explaining how to use IoC in WCF modules for Orchard.
In 1.1, you can create an .svc file using the new Orchard host factory:
<%@ ServiceHost Language="C#" Debug="true"
    Service="MyModule.IMyService, MyAssembly"
    Factory="Orchard.Wcf.OrchardServiceHostFactory, Orchard.Framework" %>
Then register your service normally as an IDependency but with service and operation contract attributes:
using System.ServiceModel;
namespace MyModule {
    [ServiceContract]
    public interface IMyService : IDependency {
        [OperationContract]
        string GetUserEmail(string username);
    }
}
My question is that all of Orchard's modules are really areas. So how can you build a route that hits the .svc file created in the area/module?
Should you use the full physical path to get to the .svc file? (I tried that and it caused a web.config issue, since it was bridging a site and an area.)
http://localhost/modules/WebServices/MyService.svc
Or do you create a ServiceRoute with WebServiceHostFactory/OrchardServiceHostFactory?
new ServiceRoute("WebServices/MyService", new OrchardServiceHostFactory(), typeof(MyService))
Whatever I try, I get a 404 when trying to hit the resource. I was able to get this working using a WCF application project and setting WCF up as a standalone application; my issues started when trying to bring it into Orchard/MVC.
UPDATE
Thanks for the help, Piotr.
These are the steps I took to implement the service.
Routes.cs
new RouteDescriptor {
    Priority = 20,
    Route = new ServiceRoute(
        "Services",
        new WebServiceHostFactory(),
        typeof(MyService))
}
If I use OrchardServiceHostFactory() instead of WebServiceHostFactory() I get the following error.
Operation is not valid due to the current state of the object.
Orchard Root Web.Config
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
  <standardEndpoints>
    <webHttpEndpoint>
      <!--
        Configure the WCF REST service base address via the global.asax.cs file and the default endpoint
        via the attributes on the <standardEndpoint> element below
      -->
      <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
    </webHttpEndpoint>
  </standardEndpoints>
</system.serviceModel>
MyService
[ServiceContract]
public interface IMyService : IDependency
{
    [OperationContract]
    string GetTest();
}

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
class MyService : IMyService
{
    public string GetTest()
    {
        return "test";
    }
}
I couldn't get the service working by just modifying the module's web.config; I get the following error:
ASP.NET routing integration feature requires ASP.NET compatibility.
UPDATE 2
Orchard Root Web.Config
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
  <!-- ... -->
</system.serviceModel>
Routes.cs
public IEnumerable<RouteDescriptor> GetRoutes() {
    return new[] {
        new RouteDescriptor {
            Priority = 20,
            Route = new ServiceRoute(
                "Services",
                new OrchardServiceHostFactory(),
                typeof(IMyService))
        }
    };
}
This works. The key here is that you must call typeof on the interface that references IDependency; WorkContextModule.IsClosingTypeOf can't handle the object that consumes the dependency, it must take the interface that it is directly called by.
As you stated, Orchard modules are areas in ASP.NET MVC terms, so the URL you provided is incorrect and should be:
http://localhost/Your.Orchard.Module/WebServices/MyService.svc
Where localhost is the virtual directory under which your app runs and /WebServices is a folder in the root of your module.
You can also create a service route programmatically without problems. This article tells how to add new routes in Orchard. You can just assign a ServiceRoute to the Route property of a RouteDescriptor instead of a default MVC route (as shown in the docs).
The question about adding ServiceRoute in area-enabled ASP.NET MVC app was asked before, check it out as it may help you out.
Btw - You may also check this SO question about prefixed service routes.
HTH