We have quite a large MVC4 application and we would like to have Selenium go through every page and make sure it loads - some sort of smoke test.
I can use reflection to go through the assembly, find all the controllers and their actions, skip actions that only accept POST, and come up with parameters for the actions that require them.
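For illustration, a minimal sketch of that reflection pass might look like this (assuming standard System.Web.Mvc conventions; the class and method names here are hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web.Mvc;

static class SmokeTestUrls
{
    // Enumerate "/Controller/Action" paths for every public, non-POST action.
    public static IEnumerable<string> List(Assembly assembly)
    {
        var controllers = assembly.GetTypes()
            .Where(t => typeof(Controller).IsAssignableFrom(t)
                        && !t.IsAbstract
                        && t.Name.EndsWith("Controller"));

        foreach (var controller in controllers)
        {
            var actions = controller
                .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                .Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType)
                            && !m.IsDefined(typeof(HttpPostAttribute), true));

            string name = controller.Name.Substring(
                0, controller.Name.Length - "Controller".Length);

            foreach (var action in actions)
                yield return "/" + name + "/" + action.Name;
        }
    }
}
```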
Then I'll feed this list to Selenium and check that everything I need on the pages is done appropriately.
But before I start playing with reflection, I'd like to check whether this has already been done, so I don't reinvent the wheel. I have googled for such a thing, but could not find anything.
P.S. Writing the reflection code is not an issue, and Selenium is covered as well. I'm just checking whether this has already been done.
The AttributeRouting project has a route debugger in place, which does work even if you don't use attribute routing inside your project.
You can see the class that handles displaying the routes over on GitHub, but I'm not sure it will display the routing information when the project isn't run locally. You may need to adapt that code so you can access it safely from your Selenium instance (and make it machine-readable using JSON or something).
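If you do end up adapting it, a minimal machine-readable route dump might look something like this (a hypothetical controller of my own, not part of AttributeRouting):

```csharp
using System.Linq;
using System.Web.Mvc;
using System.Web.Routing;

public class RouteDebugController : Controller
{
    // Dump the registered routes as JSON so a test harness can consume them.
    public JsonResult Index()
    {
        var routes = RouteTable.Routes
            .OfType<Route>()
            .Select(r => new { r.Url, r.Defaults });

        return Json(routes, JsonRequestBehavior.AllowGet);
    }
}
```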
I need to be able to perform all of the functions that the Package Manager Console performs for code-first DB migrations. Does anyone know how I could accomplish these commands strictly through user-defined code? I am trying to automate this whole migration process, and my team has hit the dreaded issue of migrations getting out of sync due to the number of developers on this project. I want to write a project that a developer can interact with, which will create, and if need be rescaffold, their migrations automatically.
The Package Manager Console works by invoking PowerShell and PS cmdlets (much like the cmdlets for Active Directory etc.).
http://docs.nuget.org/docs/reference/package-manager-console-powershell-reference
The Package Manager Console is a PowerShell console within Visual Studio
...there is essentially very little info about this. I've tried it before on a couple of occasions, and it comes down to doing some 'dirty work' if you really need it (not really sure, it might not be that difficult, provided you have some PowerShell experience).
Here are similar questions/answers. Working out the PS cmdlets is pretty involved, and in this case there are some additional steps. PowerShell also tends to be very version-dependent, so you need to check this against the specific EF/CF version you're using.
Run entityframework cmdlets from my code
Possible to add migration using EF DbMigrator
And you may want to look at the source code for EF that does Add-Migration
(Correction: this is the link to the official repository - thanks to @Brice for that.)
http://entityframework.codeplex.com/SourceControl/changeset/view/f986cb32d0a3#src/EntityFramework.PowerShell/Migrations/AddMigrationCommand.cs
http://entityframework.codeplex.com/SourceControl/BrowseLatest
(PM errors also suggest that the code doing the Add-Migration originates in the 'System.Data.Entity.Migrations.Design.ToolingFacade'.)
If you need 'just' an Update, you could try using DbMigrator.Update (this guy gave it a try: http://joshmouch.wordpress.com/2012/04/22/entity-framework-code-first-migrations-executing-migrations-using-code-not-powershell-commands/), but I'm not sure how relevant that is to you; I doubt it.
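For reference, the Update path is only a couple of lines. A minimal sketch, assuming your project already has the migrations Configuration class that Enable-Migrations generates:

```csharp
using System.Data.Entity.Migrations;

class MigrationRunner
{
    static void ApplyPendingMigrations()
    {
        // Configuration is the DbMigrationsConfiguration subclass
        // generated by Enable-Migrations in your own project.
        var migrator = new DbMigrator(new Configuration());
        migrator.Update(); // runs all pending migrations
    }
}
```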
The real problem is the scaffolding (Add-Migration), which to my knowledge isn't directly accessible from C# via the EF/CF framework.
Note: based on the code in AddMigrationCommand.cs (http://entityframework.codeplex.com/SourceControl/changeset/view/f986cb32d0a3#src/EntityFramework.PowerShell/Migrations/AddMigrationCommand.cs), and as the EF guru mentioned himself, that part of the code calls into the System.Data.Entity.Migrations.Design library, which does most of the work. If it's possible to reference that library and actually repeat what AddMigrationCommand is doing, then there might be no need for PowerShell at all. But I suspect it's not that straightforward, with possible 'internal' calls invisible to outside callers etc.
At least as of this post, you can directly access the System.Data.Entity.Migrations.Design.MigrationScaffolder class and call its Scaffold() methods directly, which will return an object containing the contents of the "regular" .cs file, the Designer.cs file, and the .resx file.
What you do with those files is up to you!
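A minimal sketch of that call, assuming the EF6-era System.Data.Entity.Migrations.Design API (where you write the files is up to you, as noted):

```csharp
using System.Data.Entity.Migrations;
using System.Data.Entity.Migrations.Design;
using System.IO;

class CodeScaffolder
{
    static void Scaffold(DbMigrationsConfiguration configuration, string migrationName)
    {
        var scaffolder = new MigrationScaffolder(configuration);
        ScaffoldedMigration result = scaffolder.Scaffold(migrationName);

        // result.UserCode     -> the "regular" .cs file contents
        // result.DesignerCode -> the Designer.cs file contents
        // result.Resources    -> entries destined for the .resx file
        File.WriteAllText(result.MigrationId + ".cs", result.UserCode);
        File.WriteAllText(result.MigrationId + ".Designer.cs", result.DesignerCode);
    }
}
```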
Personally, I'm attempting to turn this into a tool that will be able to create EF6 migrations on a new ASPNET5/DNX project, which is not supported by the PowerShell commands.
In one of my C# Template Building Blocks I have the following line of code
```csharp
publication.GetListPublishItems(uriTarget, false, false,
    TDSDefinesInterop.ListColumnFilter.XMLListDefault, listRowFilter);
```
Before implementing a custom resolver, this code executed very quickly. Now that my resolver is implemented for the Publication ItemType, the code executes really slowly. From this I conclude that the new resolver is being called behind the scenes by the GetListPublishItems() method (which makes sense). I assume I need to modify the resolver somehow; however, I can't seem to hit a breakpoint in my resolver when the method is called.
I normally attach to the 'TcmTemplateDebugHost' when debugging a template or directly to the publisher process when debugging the resolver. My Resolver only seems to get hit when I first press Publish and not when the GetListPublishItems() method is called.
So this question is twofold:
1. Do resolvers get called when the GetListPublishItems() method is used?
2. Assuming they are called, which process should I attach to when I need to debug it in this scenario?
I don't know for certain, but I can't imagine a sane scenario where a custom resolver wouldn't be involved in GetListPublishItems(). Your evidence seems to back this up, but of course, if we can answer the second part of your question, we'll know it for certain.
I imagine that any normal assumptions you've made about the hosting process are probably correct, so for example, if you are invoking your template during a publish, then the TcmPublisher will be the process. Alternatively, if you were to open up the publish dialog for the publication in the GUI and hit "Show Items To Publish", then it would probably be the COM Surrogate process (dllhost.exe)... and so on. One way to find out for sure, though, is to use Sysinternals Process Explorer, which has a very handy feature that will allow you to search for which processes have a given dll loaded. (Look in the Find menu)
One likely cause for a breakpoint failing to bite is that Visual Studio isn't able to load the symbols correctly. When you're debugging a template building block, Tridion explicitly loads the symbols from a known location, which you can configure (tridion.templating/debugging/#pdbdirectory in the CM config), and which is where the template uploader places the PDBs. When the publisher process loads the custom resolver, I doubt there's any such special mechanism to locate the symbols, so you'll have to fall back to standard .NET methods. The first thing I'd try is to ensure the symbols for your custom resolver class are located in the same place as the assembly (i.e. your bin directory). Failing that, you could perhaps configure a symbols path in Visual Studio.
The first thing to do is to watch the debug output in Visual Studio. If you start the process and then attach to it, you will see the various assemblies being loaded. If Visual Studio can find the symbols, you will see that the output says "Symbols Loaded".
The challenge is to determine whether ASP.NET is enabled within IIS7 in a reliable and correct way.
Enabling/disabling is done in this case by going into: Server Manager -> Roles -> Web Server (IIS) -> Remove Role Services -> Remove ASP.NET.
The natural place to determine this should be within the applicationHost.config file. However, whether ASP.NET is enabled or disabled, we still have the "ManagedEngine" module available, and we still have the ISAPI filter record in the config.
The best I can find at the moment is to check whether the <isapiCgiRestriction> tag includes aspnet_isapi.dll, or whether the ASPNET trace provider is available.
However, these don't detect the presence of the ASP.NET config directly, just a side effect that could conceivably be reconfigured by the user.
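For what it's worth, that side-effect check can be done with Microsoft.Web.Administration. A rough sketch (section and attribute names as I recall them, so verify against your IIS version):

```csharp
using System;
using System.Linq;
using Microsoft.Web.Administration;

static class AspNetProbe
{
    // Side-effect check only: is aspnet_isapi.dll listed under
    // <isapiCgiRestriction> in applicationHost.config?
    public static bool IsapiEntryPresent()
    {
        using (var serverManager = new ServerManager())
        {
            var config = serverManager.GetApplicationHostConfiguration();
            var section = config.GetSection(
                "system.webServer/security/isapiCgiRestriction");

            return section.GetCollection().Any(e =>
                ((string)e["path"]).EndsWith("aspnet_isapi.dll",
                    StringComparison.OrdinalIgnoreCase));
        }
    }
}
```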
I'd rather do this by examining the IIS configuration/setup rather than the OS itself, if possible, although enumerating the Roles & Services on the server might be acceptable if we can guarantee that this technique will always work whenever IIS7 is used.
Update
Thanks for the responses. To clarify exactly what I want to do: I'm pulling settings from a variety of places in the server's configuration into a single (read-only) view, to show what the user needs to have configured to allow the software to work.
One of the settings I need to bring in is the ASP.NET role service checkbox (highlighted in red in a screenshot not reproduced here).
I don't need to manipulate the setting, just reproduce it. I want to see whether the user checked the ASP.NET box when they added the IIS role to the server, as in this example they clearly didn't.
I'd like to do this by looking at something reliable in IIS rather than enumerating the role services, because I don't want to add any platform-specific dependencies on the check that I don't need. I don't know if it will ever be possible to install IIS7 on a server that doesn't have the Roles/Services infrastructure, but I'd prefer not to have to worry about it. I also have a load of libraries for scrubbing around IIS already.
However, I'm also having trouble finding out how to enumerate the Roles/Services at all, so if there's a solution that involves doing that, it would certainly be useful, and much better than checking the side effect of having the ASPNET trace provider lying around.
Unfortunately, if you don't check the ASP.NET box, you can still get the ManagedEngine module in the IIS applicationHost.config file, so that's not a reliable check. You can also have ASP.NET mapped as an ISAPI filter, so checking those isn't enough. These things are especially problematic in the case where ASP.NET was installed but has since been removed.
It looks like the best solution would be to examine the Role Services. However, API information on this is looking pretty rare, hence the cry for help.
The definitive way to know whether they checked it or not is to look at the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\InetStp\Components
In there you should see values set to 1 for ASPNET, NetFxEnvironment, and NetFxExtensibility. This registry key is the IIS setup key that contains all the components that have been enabled in IIS.
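A quick sketch of that check from C#, using the key and value names given above:

```csharp
using Microsoft.Win32;

static class IisSetupRegistry
{
    // Returns true if IIS setup recorded the ASP.NET component as enabled.
    public static bool AspNetComponentEnabled()
    {
        using (var key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\InetStp\Components"))
        {
            if (key == null)
                return false; // IIS is not installed at all

            return (int)key.GetValue("ASPNET", 0) == 1;
        }
    }
}
```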
Determining whether ASP.NET is even an installed feature (a prerequisite for enabling it) can be done through PowerShell, which implies there is a .NET API out there for it if you dig hard enough. The PowerShell method:
```powershell
Import-Module servermanager
Get-WindowsFeature web-asp-net
```
This will return an object of type Microsoft.Windows.ServerManager.Commands.Feature. Its Installed property is a boolean that indicates whether or not the feature is installed.
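If you want that same check from .NET code, one option is to host PowerShell directly. A sketch, assuming a reference to System.Management.Automation and that the servermanager module is available on the box:

```csharp
using System.Management.Automation;

static class FeatureProbe
{
    // Runs Get-WindowsFeature and reads back the Installed flag.
    public static bool IsWebAspNetInstalled()
    {
        using (var ps = PowerShell.Create())
        {
            ps.AddScript(
                "Import-Module servermanager; " +
                "(Get-WindowsFeature web-asp-net).Installed");

            var results = ps.Invoke();
            return results.Count > 0 && (bool)results[0].BaseObject;
        }
    }
}
```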
So do you want the easy way? Make a nice pretty .aspx page that displays as HTML, with an error block in a div in a placeholder saying "You need to install ASP.NET", and have it change when ASP.NET is installed to instead say "ASP.NET is installed". Then just have the tool launch this webpage in the default browser after copying it to the directory identified in IIS as the *:80 site (or create the directory mapping in IIS programmatically by altering the XML and then removing it later).
It may not be the most elegant, but it does ensure that testing shows which features are truly installed versus what's in an XML file.
Because that will scream "do it the lazy ignorant way", I'll remind you that the only way for me to know in JavaScript what features I can use is to test them before I try to use them, or assume they're there and watch it blow up. My point is, it doesn't matter what gets reported in a file; it matters what you can actually use. Just because C:\Windows\Microsoft.Net\Framework\v3.xxxxxxxx exists and has files doesn't mean the DLLs are registered in the GAC, does it?
We have a series of web services that live in different environments (dev/qa/staging/production) that are accessed from a web application, a web site, and other services. There are a few different service areas as well. So for production, we have services on four different boxes.
We conquered the DB connection string issue by checking the hostname in global.asax and setting some application-wide settings based on that hostname. There is a config.xml in source control that lists the various hostnames and the settings each should get.
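For illustration, a sketch of that hostname switch; the config.xml schema and setting names here are made-up stand-ins for the real ones:

```csharp
using System;
using System.Linq;
using System.Web;
using System.Xml.Linq;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Pick the config block whose name matches this box's hostname.
        string host = Environment.MachineName;
        XDocument config = XDocument.Load(Server.MapPath("~/config.xml"));
        XElement env = config.Root.Elements("host")
            .First(h => (string)h.Attribute("name") == host);

        Application["ConnectionString"] = (string)env.Element("connectionString");
    }
}
```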
However, we haven't found an elegant solution for web services. What we have done so far is add references to all the environments to the projects, and add several using statements to the files that use the services. When we check out the project, we uncomment the appropriate using statements for the environment we're in.
It looks something like this:
```csharp
// Development
// using com.tracking-services.dev
// using com.upload-services.dev

// QA
// using com.tracking-services.qa
// using com.upload-services.qa

// Production
// using com.tracking-services.www
// using com.upload-services.www
```
Obviously as we use web services more and more this technique will get more and more burdensome.
I have considered putting the namespaces into web.config.dev, web.config.qa, etc. and swapping them out on application start in global.asax. I don't think that will work, though, because by the time global.asax runs, compilation is already done, and the web.config changes won't have much effect.
Since the "best practices" include using web services for data access, I'm hoping this is not a unique problem and someone has already come up with a solution.
Or are we going about this whole thing wrong?
Edit:
These are asmx web services. There is no URL referenced in the web.config that I can find.
Make one reference and use configuration to switch the target URLs as appropriate. There's no reason to have separate proxies at all.
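As a sketch: generated asmx proxies inherit from SoapHttpClientProtocol, which exposes a settable Url property, so the endpoint can come from per-environment config. (TrackingService and the appSettings key here are illustrative names, not from the actual project.)

```csharp
using System.Configuration;

static class ServiceFactory
{
    // One compiled proxy; the endpoint comes from per-environment config.
    public static TrackingService CreateTrackingService()
    {
        var service = new TrackingService(); // the single generated proxy
        service.Url = ConfigurationManager.AppSettings["TrackingServiceUrl"];
        return service;
    }
}
```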
I'm sure there's a simple explanation for this, but I haven't had much luck at finding the answer yet, so I figured I'd put the word out to my colleagues, as I'm sure some of you've run into this one before.
In my (simple) dev environment, I'm working with a handful of WCF Web Services, imported into my FB3 project and targeting a local instance of the ASP.NET development Web server. All good, no problems -- but what I'd like to know now is: what's the right way to deploy this project to test, staging, and production environments? If my imported proxies all point, say, to http://localhost:1234/service.svc (from which their WSDLs were imported), and all I'm deploying is a compiled SWF, does Flex Builder expect me to "Manage Web Services > Delete", "> Add", recompile, and release every time I want to move my compiled Flex project from development to test, to staging, and ultimately into production? Is there a simpler workflow for this?
Thanks in advance -- hope my question was clear.
Cheers,
Chris
If you have path names which will change depending on the environment, then you will likely need to recompile for each environment, since these will be compiled into the SWF.
I typically use ANT scripts to handle my compile/deployment process when moving between development and production environments. This gives me the ability to change any path names dynamically during the compile. These build files can be integrated into Flex Builder, making this process very easy once you have everything set up; it can then be done with one click or on a schedule.
Thanks Brett. I've been meaning to dig into automating my build processes anyway, so now's probably as good a time as any. :)
You do not need to build a SWF for each environment. Here's a technique I use commonly:
1. Externalize your configuration properties into an XML file; in this case, it could be a URL for each service or a base URL used by all your services.
2. When the application starts up, make an HTTPService call to load the XML file, parse it, and store your properties onto some bindable "configuration object".
3. Bind the values from that object against your objects that depend on the URLs.
4. Dispatch an event that indicates your configuration is complete. If you have some kind of singleton event dispatcher used by some components in your app, use that, so that the notification is global.
5. Now proceed with the rest of the initialization of your application.
It takes a little work to orchestrate your app so that certain parts won't initialize until steps 1-5 take place. However, I think it's good practice to handle a lot of this initialization explicitly rather than in constructors or in the various initialize or creationComplete events for components. You may need to reinitialize things when a user logs out and a different user logs in; if your app is already set up so that initialization is something you control, then reinitialization will not be a problem.