Multi-user Snippet Manager - collaboration

Currently, we're using a wiki at work to share insights, tips and information. But somehow, people aren't sharing snippets that way. It's probably too inconvenient to write and too difficult to find snippets there.
So, is there a multi-user/collaborative snippets manager around? Something like Snippely. (Has anyone tried Snippely in multi-user mode?)
Since we're all on the same site, it would probably be best if it used mapped network drives or ODBC instead of its own server process.
Oh, and it has to support Unicode and let us choose any TrueType font. We're using the hideous APL language, which uses special characters.
It would be nice if it didn't cost money, so I wouldn't have to convince management to pay for it on top of convincing the other developers to use it.

Pastebin is a common solution to this. Just install it somewhere on your network, then paste snippets into it: http://pastebin.com/
It also works well when you're trying to debug a piece of code or share a stack trace.

There's Snip-It Pro (http://www.snipitpro.com). I looked at it a while back, and the interface seemed pretty horrible. It's 40 bucks per seat, which is not too bad. Last time I was looking for a tool like that I found nothing at all, and I've found it very hard to get my co-workers to start using snippet libraries - everybody is happy to google it or search their old codebases. These days I use Evernote for all of my own snippeting needs.

Related

Hukkster technology stack

I started using Hukkster.com a few days ago. It is really fast and accurate.
The Hukkster bookmarklet always fetches the correct price from the product page.
This happens for all the featured merchants it supports.
I was really curious to know what technology stack they might be using to get such fast and accurate responses.
I have tried to search for everything I could on Google, but found nothing other than Hukkster success stories, Hukkster in the news, etc.
There was nothing about the technology used by Hukkster.
It is Mozenda.
Found it. Here it is:
http://blogs.wsj.com/venturecapital/2012/08/29/the-founders-creators-of-new-shopping-app-hukkster-definitely-not-brogrammers/
The co-founders believed in their idea. There was just one problem–neither one knew how to code. They didn’t let that stop them. They developed a “paper prototype” that they could run without coding. They built a crawler using a data extraction service called Mozenda, and did the rest of Hukkster’s legwork with spreadsheets, emails and phones.
http://www.mozenda.com/

How to manage the article content in an asp.net web site

I'm planning to create a site for learning technologies, such as CodeProject or CodePlex. Can you please suggest different ways to manage a large amount of article content?
Look at a content management system, such as SiteFinity: http://www.sitefinity.com/. There are others, some free. You can find some on codeplex.com.
HTH.
Check out DotNetNuke CMS too >> http://www.dotnetnuke.com/
And here's a very hot list available of ASP.NET CMS systems:
http://en.wikipedia.org/wiki/List_of_content_management_systems#Microsoft_ASP.NET_2
Different ways to manage articles while building the entire system yourself? Hmm, OK, let me give it a try... here's the short version.
There are several ways you can "store" your articles (content, data, whatever), and the best way to do so is to use a database (SQL Server, MySQL, SQLCE, SQLite, Oracle, the list goes on).
If you're not sold on the idea of a database, you can use any other type of persistent storage that you like, e.g. XML or even flat "TXT" files.
Since you're using ASP.NET, you now need to write your code-behind, or some other compiled code, to access your stored data. You pull it out of storage and display it on the page/view.
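To make that concrete, here's a minimal sketch of the store-and-retrieve pattern. It's written in Python with SQLite purely for illustration (the table and column names are made up); in ASP.NET the same idea lives in your code-behind, talking to the database through ADO.NET or an ORM.

    # Minimal store-and-retrieve sketch (illustrative only; table/columns are made up).
    import sqlite3

    conn = sqlite3.connect("articles.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS articles (
               id    INTEGER PRIMARY KEY AUTOINCREMENT,
               title TEXT NOT NULL,
               body  TEXT NOT NULL
           )"""
    )

    def save_article(title, body):
        # Store one article; parameterised queries keep you safe from SQL injection.
        cur = conn.execute("INSERT INTO articles (title, body) VALUES (?, ?)", (title, body))
        conn.commit()
        return cur.lastrowid

    def load_article(article_id):
        # Pull an article back out so the page/view layer can render it.
        return conn.execute(
            "SELECT title, body FROM articles WHERE id = ?", (article_id,)
        ).fetchone()

    article_id = save_article("Hello", "First article body")
    print(load_article(article_id))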
Last but not least, I'd like to give you a suggestion (even though it's not part of your original question). As the other answerers have stated, you should look at a pre-built CMS, if nothing else to see how it's done (not necessarily to use it as is). My philosophy is quite simple: if you want to be productive in your development, don't bother reinventing the wheel just for the sake of it. If someone else has already built and given away exactly what you need, you should at the very least give it a look and use what you can. It will save you piles of time and heartache.
Your question is not vague enough to be closed, but is vague enough that answering all of the nuances could take several thousand lines.

How to scrape websites such as Hype Machine?

I'm curious about website scraping (i.e. how it's done, etc.); specifically, I'd like to write a script to scrape the site Hype Machine.
I'm actually a software engineering undergraduate (4th year), but we don't really cover any web programming, so my understanding of JavaScript/RESTful APIs/all things web is pretty limited, as we're mainly focused on theory and client-side applications.
Any help or directions greatly appreciated.
The first thing to look for is whether the site already offers some sort of structured data, or if you need to parse through the HTML yourself. Looks like there is an RSS feed of latest songs. If that's what you're looking for, it would be good to start there.
You can use a scripting language to download the feed and parse it. I use Python, but you could pick a different scripting language if you like. There are docs on how you might download a URL in Python and parse XML in Python.
Another thing to be conscious of when you write a program that downloads a site or RSS feed is how often your scraping script runs. If you have it run constantly so that you'll get the new data the second it becomes available, you'll put a lot of load on the site, and there's a good chance they'll block you. Try not to run your script more often than you need to.
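As a rough sketch of that approach, using only the Python standard library (the feed URL below is just a placeholder, not Hype Machine's actual feed address, and the 30-minute interval is only an example of a polite polling rate):

    # Download an RSS feed, parse it, and poll at a polite interval.
    import time
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://example.com/latest.rss"   # placeholder; use the site's real feed URL
    POLL_INTERVAL = 30 * 60                      # seconds between fetches; don't hammer the site

    def fetch_latest():
        with urllib.request.urlopen(FEED_URL) as response:
            xml_data = response.read()
        root = ET.fromstring(xml_data)
        # Standard RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
        for item in root.iter("item"):
            title = item.findtext("title", default="")
            link = item.findtext("link", default="")
            print(title, link)

    while True:
        fetch_latest()
        time.sleep(POLL_INTERVAL)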
You may want to check the following books:
"Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL"
http://www.amazon.com/Webbots-Spiders-Screen-Scrapers-Developing/dp/1593271204
"HTTP Programming Recipes for C# Bots"
http://www.amazon.com/HTTP-Programming-Recipes-C-Bots/dp/0977320677
"HTTP Programming Recipes for Java Bots"
http://www.amazon.com/HTTP-Programming-Recipes-Java-Bots/dp/0977320669
I believe the most important thing to analyze is what kind of information you want to extract. If you want to crawl entire websites the way Google does, your best option is probably to look at tools like Nutch from Apache.org or Flaptor's solution, http://ww.hounder.org . If you need to extract particular areas from unstructured documents (websites, docs, PDFs), you can probably extend Nutch plugins to fit your particular needs: nutch.apache.org
On the other hand, if you need to extract particular text or clip areas of a website by setting rules against the page's DOM, what you need is probably closer to tools like mozenda.com. With those tools you can set up extraction rules in order to scrape particular information from a website. Take into consideration that any change to the web page can break your robot.
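To give a feel for what DOM-based extraction rules look like in code, here is a rough sketch in Python with BeautifulSoup (a different tool from Mozenda, used only to make the idea concrete; the URL and CSS selectors are invented and will differ for any real page):

    # DOM-rule extraction sketch; the selectors are hypothetical and will break
    # whenever the page layout changes, which is the fragility mentioned above.
    import urllib.request
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    url = "http://example.com/some-page"         # hypothetical target page
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")

    for post in soup.select("div.post"):         # hypothetical "rule": every post block
        title = post.select_one("h2.title")
        date = post.select_one("span.date")
        print(title.get_text(strip=True) if title else "",
              date.get_text(strip=True) if date else "")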
Finally, if you are planning to develop a website using outside information sources, you could purchase information from companies such as spinn3r.com, which sell particular niches of information ready to be consumed. You would be able to save a lot of money on infrastructure.
Hope it helps!
Sebastian.
Python has the feedparser module (feedparser.org), which handles RSS and Atom in all their various flavours. No reason to reinvent the wheel.
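For instance, a minimal sketch (the feed URL is a placeholder):

    import feedparser  # pip install feedparser

    feed = feedparser.parse("http://example.com/latest.rss")  # placeholder URL
    for entry in feed.entries:
        # Entries behave like dicts; title and link are the usual fields.
        print(entry.get("title", ""), entry.get("link", ""))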

"Selling" trac/buildbot/etc to upper management

My team works mostly with Flex-based applications. That said, there are nearly no conventions at all (even getting them to refactor is a miracle in itself).
Coming from a .NET + CruiseControl.NET background, I've been aching to get everyone to use some decent tracking software (we're using a to-do list coded in PHP now) and CI; I figured Trac + Buildbot would be a nice option.
How would you convince upper management that this is the way to go, along with some of the rules mentioned in this post? One of my main issues is that everyone codes without thinking (you'd be amazed at the kind of "logic" this spawns...).
Thanks
Is there anything you could do now that wouldn't require permission from anyone else? Could you start by just using trac/buildbot/etc for just your own work, then add in others as they are interested?
In my experience you can get quite far by doing without asking.
Tell the management that they'll be better able to keep their eye on progress with such a tool.
Are there specific benefits to the route that you're suggesting that you could show them without them having to buy in?
I had an experience getting my team to accept a Maven + CruiseControl CI setup. Basically I tried to get them to go along with it for a few days, and they kept balking because it was unfamiliar. Then I just did it on my own and had all broken builds emailed to the mailing list. That night the project lead made a check-in that broke the build (he just forgot a file), and of course everybody was emailed about his screw-up.
The next day he came over to me and said, "I get it now."
It required no effort from him to get involved, and he got to see the benefits for free.

Is Wiki Content Portable?

I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
The correct answer is ... "it depends".
It depends on which wiki you're using or planning to use. I've used various ones over the years: MoinMoin was OK and uses files rather than a database (Ubuntu seem to like it); MediaWiki everyone knows about; and JAMWiki is a Java clone(ish) of MediaWiki with the aim of being markup-compatible with it. The latter two use databases, and you can generally connect whichever database you want; JAMWiki is pre-configured to use an internal HSQLDB instance.
I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages, and this was probably 90% handled by a tiny Perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance, being recipes for the missus) ;-)
I also recently set up a MediaWiki instance for work, and that took all of about 8 minutes to do. So that'd be my choice.
To answer your question, I don't believe there's any such standard as WikiML, as Till called it.
As strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping would have been the easier, quicker and more efficient way to move that particular file-based wiki to another engine or to a CMS.
Given the context you wrote the question in, I would bite the bullet now, pay the little extra for a Windows hosted account and put ScrewTurn Wiki on it. You've got the option of using a file-based or SQL Server-based back end for it, but because one of your requirements is low cost, I'm guessing you would use the file-based back end on a cheaper hosted account for now; you can always upscale to SQL Server later.
I haven't heard of WikiML.
I think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use Markdown (which is what Stack Overflow uses), others use another markup syntax (e.g. BBCode), etc. The bottom line is, assuming the contents are in a database, it's not impossible to export and parse them to make them "fit" into another system. It might just be a pain in the ass.
And if the contents are not databased, it's gonna be a royal pain in the ass. :D
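To make the "export and parse" point concrete, here's a toy Python sketch with just two conversion rules (MediaWiki-style bold and headings to Markdown); a real converter needs many more rules - tables, templates, links - which is exactly where the pain comes from:

    # Toy markup converter: two illustrative rules only.
    import re

    def mediawiki_to_markdown(text):
        # '''bold''' -> **bold**
        text = re.sub(r"'''(.+?)'''", r"**\1**", text)
        # == Heading == -> ## Heading
        text = re.sub(r"^==\s*(.+?)\s*==\s*$", r"## \1", text, flags=re.MULTILINE)
        return text

    print(mediawiki_to_markdown("== Intro ==\nThis is '''important'''."))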
Another solution would be to stay with the same system. I'm not sure what the reason is for changing the technology later on. It's not like a growing project suddenly requires IIS/ASP.NET. (It might just be the other way around.) But, for example, if you could stick with PHP for a while, you could also run that on IIS.
