Basic HTTP Authentication with Noir

I've been figuring out how to use Noir, and I would now like to use HTTP basic authentication.
I've stumbled upon https://github.com/adeel/ring-http-basic-auth . However, the given examples seem to apply to Compojure rather than to Noir.
I wonder if Noir's abstraction level is too high to allow different auth for different pages?
I also know that this could be the way to go: http://webnoir.org/tutorials/others . However, I am not yet comfortable enough with the whole Clojure ecosystem to see how to fit these Ring handlers in as HTTP authentication for specific routes.
Can anyone give me a hint about how I should think about this problem?
Thank you,

Check out Friend: https://github.com/cemerick/friend
It is an authentication/authorization library that works as Ring middleware. Very easy to get up and running with.

Related

Extending Rebus with a new transport

Has anyone tried implementing a new transport for Rebus? How much work is involved? E.g. how many interfaces need to be implemented? Assuming a sensible transport mechanism is used, such as Greg Young's Event Store.
Thank you.
As the aptly named @user1121956 says, it's a matter of implementing IDuplexTransport, which just brings ISendMessages and IReceiveMessages together.
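From memory, the interfaces in question look roughly like this (a sketch only - the exact member names and signatures may differ slightly, so check the Rebus source for the authoritative definitions):

    // Approximate shape of the Rebus transport interfaces; verify against
    // the actual Rebus source before relying on these signatures.
    public interface ISendMessages
    {
        // Delivers a transport message to the queue with the given address.
        void Send(string destination, TransportMessageToSend message, ITransactionContext context);
    }

    public interface IReceiveMessages
    {
        // Returns the next available message, or null if none arrived.
        ReceivedTransportMessage ReceiveMessage(ITransactionContext context);

        string InputQueue { get; }
        string InputQueueAddress { get; }
    }

    // IDuplexTransport just brings the two together:
    public interface IDuplexTransport : ISendMessages, IReceiveMessages
    {
    }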
As you can see, the two interfaces boil down to two methods, so when I say that it's a lot of work to implement a new transport, it's because it's not trivial to implement those methods.
It doesn't mean that it's not possible, it's just that it's a place where you would need to be very careful to get things right - otherwise, messages might be dropped or other bad things might happen, and that would not be cool :)
With that said - if you feel like you're up to it ;) - I suggest you check out the Rebus source code and look into the contract tests for the transports - this is where a bunch of common scenarios get run against all the officially supported transports. A good starting point would be to extend the tests with a GregsEventStoreTransportFactory.
I will be happy to help you out with guidance along the way if you run into trouble!

Reducing the layer complexity of a Cairngorm-based application

Could tools like SWFAddress be used in some clever way to simplify an existing client-server architecture? I see possibilities to even introduce REST-like pattern mapping or something like that.
What I am currently doing is following all the Cairngorm guidelines, which has already led to a bunch of commands that all make sense; but including the business delegates and all that stuff, I am having a hard time extending and refactoring the application (and layers were supposed to help, right ... maybe I am not doing it quite right, I admit).
Anyway, what I thought of was somehow reducing the number of application events flying around, and the number of commands responding to them. Actually, I am quite OK with even coupling the view to some logic, if I can get rid of some layer complexity.
What I mean by that: perhaps I could bind a button click to a URL pattern (or use SWFAddress to change the URL globally). On the other end, I will be waiting for changes to the URL, reformat it, and pass it on to a service delegate, which has the necessary mappings in mind, so it knows which method to call; it could even pass the URL directly to an HTTPService. The delegate will then deal with the server response and update the model, which through the bindings will update the view.
I am not going to completely ditch commands. I think they are good for scheduling the internal interactions (within the client itself), but I'd like to abstain from using them for communication with the server.
Am I on the right path?
Are you opposed to switching to a framework other than Cairngorm? You just described perfectly what most people's complaints about it are. I think it mostly survives as a throwback to the early days of Flex development...
Most of the developers I know use a more "modern" framework, usually one focusing on Dependency Injection (DI).
Here is a good starting point for analyzing the various frameworks in use today:
http://www.adobe.com/devnet/flex/articles/flex_framework.html
and for further reading...
I personally prefer Swiz, and use it in all my projects. It still focuses on the command pattern, but alleviates a lot of the layer complexity you described.
If your question was how to make Cairngorm less like... well, Cairngorm... then I'm afraid I can't help you there. :)
Cheers and good luck!

What does a developer need on the front end to ensure a successful project?

I have an idea for a business that requires a well designed web application. I'm not a rocket surgeon, but I'm smart enough to know that you get what you pay for and am willing to pay for talent. However, I want the development process to go as smoothly as possible and would like to know how to make that happen.
So, what information do developers need (or want) initially from the owner to avoid having to make assumptions about business (or other) requirements? Do I need to create state transition diagrams or write use cases?
Essentially, how do I take the concept in my head and package it in a way that allows the developer to do what they do best? (assuming that is creating good software. haha)
Any advice is appreciated.
Shawn
You may need to reword your question, as it is too general to get a good answer; adding even some rough details would be helpful.
But the better a vision you have of what you want, the smoother it will be.
I find UML diagrams too confining when you aren't the one who will be doing the work, as you may not come up with the best design.
So if you start by designing what each page should look like, as you envision it, you can then write up use cases, which are short scenarios.
So, you may write up:
A user needs to be able to log in using OpenID.
This will tell the developer one function that you want, and who you expect to do that action.
But, don't put in technologies, as you may think that a SOAP service is your best bet, but upon talking about it you may find that there is a better solution.
Use cases are good points to show what you are envisioning, and give text to your page designs.
Talk to the developers. Explain what you want and why you want it. Together you make the flow charts and whatnot. Writing requirements is part of the design process, and it's a good idea to have the developers onboard as soon as possible. Start simple and small, then grow and expand while iterating.
In talking over web services before, I have found the best starting point is drawing on a sheet of paper what you think the site will look like, and add in a few arrows from things you want clickable to the pages that should result. Keep it simple, nothing too fancy, and hopefully you and the developer can come to an understanding of what you want pretty quickly.
Use cases might be best for checking off all the points later in the project about how complete your site is; I haven't really found them to be a helpful starting point, but I'm sure others disagree. (They just seem too tedious to read when actually writing code.)
Same with state transition diagrams; they are too tedious and I think most developers will assume you made mistakes in them anyway. :) Everyone else does... Unless your project hinges very tightly on the correctness of a state machine, I wouldn't really bother.
This book contains some good advice on what constitutes a good statement of requirements from a programmer's point of view. It also has the useful guideline of not trying to set the form of your requirements too early, and a substantial piece on describing the problem you are trying to solve.
I like UI mockups based on actual program/site flows, e.g. registering a customer or placing an order. Diagrams/pictures of GUIs with structured, consistent data examples are unambiguous.
I agree that UML and use cases are only really useful if everyone speaks UML and the projects are of sufficient complexity (few are).
You may want to read up on Agile/Scrum techniques. These are becoming a sort of standard and when properly managed can save weeks of development time.
I find that words don't do a good job of communicating how a system is supposed to work. Wireframes, white-board drawings/transition diagrams, and low-fidelity prototypes are great ways to communicate a concrete idea. One example of a low-fidelity prototype is a "clickable" paper prototype that allows a user to touch "buttons" on paper to go from one drawing to another. It costs very little time (cheaper), but goes a long way to communicate an idea between two parties.
Stay away from formal documentation, UML diagrams, or class (technical documentation) diagrams that don't speak to you. This is what large, risk-averse companies move toward to be more "mature". These are also byproducts of an idea that is hashed out, and it sounds like you're in the hashing out stage.

Is obfuscation the best answer [duplicate]

Possible Duplicates:
How effective is obfuscation?
Protect ASP.NET Source code
(Why) should I use obfuscation?
Is obfuscation the best answer for protecting our code?
Especially in web projects, when you want to deliver your web project as libraries of code to your customer (the person who ordered it).
Edited
My first priority is server-side code, and my second is client-side code.
But the main goal is this: when you deliver a complete web project, and you have made every piece of your code into components and DLLs, how effectively can you protect them and keep others from recovering your source code from them?
Edited
The problem is that I want to protect the code I have written for the company that ordered it; all my code is inside some DLLs.
They can reverse engineer those and get my code, and I want to prevent them from doing so.
Is there any way to do that?
I think this is a distinct question: I didn't ask what obfuscation is, nor for tools for doing it. Beyond that, I think this is separate from client-server security.
Sorry if my question wasn't clear at first; if it really is a case for deletion, no problem for me.
Also, I wanted a comparative look at this problem and its solutions, because I don't think obfuscation is the only possible solution here; maybe there are some logical workarounds for this problem.
Maybe not the best. If you are really ambitious, you can write your own web server (plugin).
But is it worth the effort?
Software is similar to a bike in the Netherlands: there is no known way of protection that is 100% safe. Either you use better protection than the other bikes (thieves are lazy), or you obfuscate the bike so they won't take it.
Another way to increase the level of protection is to use custom-made ActiveX code to store mission-critical algorithms. Of course, that can be reverse engineered too, but JavaScript is easier to reverse.
What exactly are you trying to protect your code from?
Does your client-side code contain valuable business logic?
If not: you shouldn't bother obfuscating something that doesn't have much value. Personally I think client-side code theft is something people are far too concerned about. 99% of web apps don't really have anything special in terms of implementation on the client side. What you need to worry about more is someone ripping off the idea or the visual look, which you obviously can't obfuscate.
If it does: you need to consider refactoring that logic out of the client side, as even with heavy obfuscation, a determined party will always be able to untangle it relatively easily. The code that adds real value to your app should ideally be running on your servers where it's considerably more difficult to get access to.
Even if people stealing your HTML markup or JavaScript were something to worry about (and it probably isn't), obfuscation doesn't really solve the problem. In my opinion it is a waste of effort and money.
Hosting a critical function as a web service is probably the most sure way to protect it. It keeps the code out of the user's hands entirely. But then you're stuck hosting a service, and your users have to be on line to use your functionality.
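For example, in .NET that can be as simple as an ASMX service wrapping the sensitive logic (a hypothetical sketch - the class, method, and formula are invented for illustration):

    // Hypothetical sketch: the valuable algorithm stays server-side;
    // clients only ever see inputs and results, never the implementation.
    using System.Web.Services;

    public class PricingService : WebService
    {
        [WebMethod]
        public decimal CalculatePrice(decimal basePrice, int quantity)
        {
            // Proprietary pricing logic (placeholder formula).
            return basePrice * quantity * 0.95m;
        }
    }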
Obfuscators help by hiding useful names and replacing control flow with weird but logically equivalent alternatives. They might thwart an amateur, but they'll only slow down a skilled reverse engineer for a few minutes, and they won't stop someone who is determined to penetrate your secrets.
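To illustrate (a toy example, not the output of any particular obfuscator), a renaming pass leaves behavior unchanged but strips away intent:

    // Before obfuscation: names reveal intent.
    public static class Pricing
    {
        public static decimal ApplyLoyaltyDiscount(decimal price, int yearsAsCustomer)
        {
            // Long-standing customers get 10% off.
            return yearsAsCustomer > 5 ? price * 0.9m : price;
        }
    }

    // After a typical renaming pass: same behavior, no intent revealed.
    public static class A
    {
        public static decimal a(decimal b, int c)
        {
            return c > 5 ? b * 0.9m : b;
        }
    }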
If you really want to protect your code, you should write native code using a native-code compiler (C++, Delphi). This still does not guarantee that your code is 100% safe, because any experienced developer can read assembler and essentially disassemble the native program.
A determined hacker will always find a way to get to what they want.
The best we can do is to make it hard or painful for the would-be hacker to get at our code and the following options can help us:
Customize the CLR engine
Run an obfuscation tool over your code and use name and control flow obfuscation and string encryption
Make the application a Web-based application where all your proprietary code sits on a server somewhere
Watermark your code using your own custom techniques to "throw off" the would-be hacker
Implement techniques to prevent debugging (this is a very advanced topic!)
I really like a comment made by one of the lead developers of the .NET Framework, who said that it's not really the fact that others can get at our code that should concern us; rather, we should concern ourselves with the level of support we provide with our products.
So if we provide a good support base, it does not matter what the hackers do with our code, because the clients will trust us and our ability to support them using our product, and not some cheap hacked copy of it.
NO, obfuscation is not the best way to protect your code.
The tool you need to use is "copyright".
There is no (technological) way you can protect you code from someone determined enough (provided they have access to the binaries / scripts).
What you can do is prevent them from legally modifying/distributing your code.
The normal server-side code in Web projects should under no circumstances be visible to the outside world. So there is no point in obfuscating the code.
Besides that, two minor points:
JavaScript code is visible to the user and can be obfuscated. Minifying JavaScript to save bandwidth is recommended anyway, and minified JS is obfuscated as a side effect.
Also important: on production systems, the configuration setting customErrors should be set to RemoteOnly or On, to avoid showing a stack trace with too many code details.
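For reference, that setting lives in web.config and looks roughly like this (a minimal fragment; a real site will have more under system.web):

    <!-- Show detailed errors only to local requests. -->
    <configuration>
      <system.web>
        <customErrors mode="RemoteOnly" />
      </system.web>
    </configuration>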
If your client-side code has any broad value to others, it will get reverse engineered regardless of any obfuscation.
The reality is that it's probably not going to be broadly useful to many people, and there is a lot of other code out there to look at, so it's likely not worth doing more than minifying the code. That is plenty of obfuscation, and if your code is large, it will also improve download speed.
Have you considered the alternative? That it's a good thing to give something back to the community? I'm sure you've looked at the code of more than one site, no?

ASP.NET and System.Diagnostics tracing - have I missed something, or is this a bad idea?

For various common reasons I wanted to use tracing in my ASP.NET application, especially since I found out about the Service Trace Viewer tool, which allows you to examine your traces in a powerful way.
Since I had never used this trace thing before, I started studying it. After a while of Google, SO and MSDN I finally have a good idea of how things work. But I also found one very disturbing thing.
When using tracing in ASP.NET applications it makes a lot of sense to group the trace messages together by web request, especially since one of the reasons I want to use it is for studying performance problems. The above-mentioned tool also supports this, by using <Correlation> tags in the generated XML files, which in turn come from System.Diagnostics.Trace.CorrelationManager. It also allows other nice features like activity starting/stopping, which provides an even better grouping of trace messages. Cool, right?
I thought so too, until I started inspecting where the CorrelationManager actually lives. After all, it is a static property. After some playing around with Reflector I found out something horrifying: it's stored in CallContext! Which is the kind of thing we shouldn't be using in ASP.NET, right?
So... am I missing something here? Is tracing really fundamentally flawed in ASP.NET?
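For concreteness, this is roughly how the machinery in question is used (a minimal sketch; the source name "MyApp" and the file name are just examples):

    // Minimal sketch of the System.Diagnostics correlation APIs discussed above.
    using System;
    using System.Diagnostics;

    class TracingDemo
    {
        static void Main()
        {
            var source = new TraceSource("MyApp", SourceLevels.All);
            source.Listeners.Add(new XmlWriterTraceListener("traces.svclog"));

            // The activity id is what ends up in the <Correlation> tags and
            // lets the Service Trace Viewer group messages per request.
            Trace.CorrelationManager.ActivityId = Guid.NewGuid();

            source.TraceEvent(TraceEventType.Start, 0, "Handling request");
            source.TraceEvent(TraceEventType.Information, 0, "Doing some work");
            source.TraceEvent(TraceEventType.Stop, 0, "Done handling request");

            source.Flush();
        }
    }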
Added: Emm, I'm kinda on the verge of rewriting this stuff myself. I still want to use the neat tool for exploring the traces. Any reason I shouldn't do this? Perhaps there is something better yet? It would be really nice if I got some answers soon. :)
Added 2: A colleague of mine confirmed that this is not just a theoretical issue. He has observed this in the system he's working on. So it's settled. I'm going to build a new little system that does things just the way I want it to. :)
Added 3: Wow, cool... the guys at Microsoft couldn't find anything wrong with using Correlation Manager in ASP.NET. So apparently we're not getting a fix for this bug after all...
You raise a very interesting question. After looking in Reflector, I also see that CorrelationManager uses the CallContext to store the activity id. I have not worked with tracing much, so I can't really speak to what types of activities it tracks, but if it tracks a single activity across the entire life cycle of a page request then, per the article you referenced above, there is a possibility that the activity id could become disassociated from the actual activity. The activity would appear to die halfway through.
HttpContext would seem ideal for tracking an entire page request from beginning to end, since it is carried over even if execution moves to a different thread. However, the HttpContext will not be transferred to your business objects, whereas the CallContext would. On a side note, I saw that CallContext can also be transferred when using remoting between client and server apps, which is pretty nifty, but for tracking the website this would not really be all that useful.
If you haven't already, check out this guy's site. The issue described in this article isn't specifically the same issue the Cup(Of T) article mentioned, but it's still pretty interesting. He also provides several very informative links on the page that describe components of the CorrelationManager.
Unfortunately, I don't really have an answer to your question, but I definitely find the topic interesting and will continue looking into it. So please update this post as you learn more. I'm curious to see what you or others (hopefully someone out there can shed some light on the topic) find while looking into this.
Anyway, good luck. I'll talk to some of the peeps at my work about this and post more later if I find anything.
Chris
OK, so this is how it ended.
My colleague called Microsoft and reported this bug to them. Being certified partners means we get access to some more prioritized fixing queue or something... don't know that stuff. Anyway, they're working on it. Hopefully we'll see a patch soon. :)
In the meantime I've created my own little tracing class. It doesn't support all the bells and whistles of the default trace framework, but it's just what I need. :) More specifically:
It writes to the same XML format as the default XmlWriterTraceListener, so I can use the tool to analyze the logs.
It has built-in log rotation - something my colleague had to implement himself on top of XmlWriterTraceListener.
The actual logging is deferred to another thread, so performance can be measured more accurately.
Correlations are now stored in HttpContext.Items, so ASP.NET threading peculiarities don't affect them.
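A rough sketch of that last point (hypothetical names, not my actual class) - the correlation id lives in HttpContext.Items, so it follows the request across ASP.NET thread switches:

    // Hypothetical sketch: per-request correlation id kept in HttpContext.Items.
    using System;
    using System.Web;

    public static class RequestCorrelation
    {
        private const string Key = "__CorrelationId"; // arbitrary item key

        public static Guid Current
        {
            get
            {
                var ctx = HttpContext.Current;
                if (ctx == null)
                    return Guid.Empty; // not running inside a web request

                object stored = ctx.Items[Key];
                if (stored == null)
                {
                    stored = Guid.NewGuid();
                    ctx.Items[Key] = stored;
                }
                return (Guid)stored;
            }
        }
    }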
Happy end, I hope. :)
