I need to draw a fairly elaborate mind map to present my test strategy to my client, but I have no experience creating mind maps with any tool.
Can someone suggest a good mind-mapping tool?
For "pure" mind mapping I would suggest Freeplane (free and open source). I know people using Freeplane for professional test case generation. Very helpful in this respect are
extensive scripting support that can be used to support testcase entry and for customized exports
multiple fields per node that can be used for different purposes: attributes (tabular data), notes, detail
If your primary focus is the generation of presentations then you should probably use a different tool.
For more elaborate mind maps I would suggest XMind.
With XMind you can even create test cases inside your mind map using its matrix feature. There are lots more features, such as:
Timeline
Gantt view
Filters
Drilldown
Try Mindolph (https://github.com/mindolph/Mindolph), a desktop application that lets you create and manage mind maps easily.
You may try the online service MindMup or the desktop ConceptDraw MINDMAP. Though the first is not as polished and intuitive as the ConceptDraw tool, it is free. The second product has a 21-day trial period, a brainstorm mode, multiple hyperlinks, export to MS PowerPoint or web pages, and so on.
What would be the HTML code to "filter out" a handful of specific user stories?
Your question is highly unspecific. The only way to get stories is to programmatically access the API via a language like JavaScript, Java, C#, C++, etc.
You can embed JavaScript into your HTML page and have it fetch stories with a filter passed in on the access. To see how to structure a query, you could turn on the developer tools in your browser and look at the network requests the browser makes when fetching stories into a custom list app on a page. Using the custom list, you could first refine your query to what you want. A rough sketch of such a query follows.
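To make the shape of such a call concrete, here is a minimal Python sketch against Rally's WSAPI. The v2.0 endpoint, the query syntax, and the `zsessionid` API-key header reflect Rally's web service as I understand it, but treat them as assumptions to verify against the current docs; the query value and key are placeholders.

```python
import requests  # pip install requests

# Assumption: Rally WSAPI v2.0, authenticated with an API key passed in
# the "zsessionid" header. All values below are placeholders.
BASE = "https://rally1.rallydev.com/slm/webservice/v2.0"
API_KEY = "_your_api_key_here"

params = {
    "query": '(Name contains "checkout")',     # filter to specific stories
    "fetch": "FormattedID,Name,ScheduleState",  # fields to return
    "pagesize": 20,
}

resp = requests.get(
    f"{BASE}/hierarchicalrequirement",  # user stories endpoint
    headers={"zsessionid": API_KEY},
    params=params,
    timeout=30,
)
resp.raise_for_status()

for story in resp.json()["QueryResult"]["Results"]:
    print(story["FormattedID"], "-", story["Name"])
```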
You could always build a custom app for a specific use case, but if you're looking for data and having trouble finding it, there are ways to do so with a combination of custom lists, Rally's own query language, and creative use of advanced filters. It's also possible to massage your data in a way that makes Rally's native reporting a bit easier to use.
This is just an example, but say I'm looking for information on the quarterly progress of a team that doesn't use start/end dates or releases/milestones; there's not a lot available from an app/report standpoint that's already built. However, if I coach my team on keeping a few simple data elements neat and tidy, and utilize the custom report views to make that data useful, it can be pretty quick and easy to implement.
I have my teams keep a few basic fields up to date: Title, Owner, Project, Tags, and Refined Estimate (all at the feature level), and most importantly, a parent/child relationship between most work items.
Now I can build a report that filters by a certain tag, can also be filtered by team, and can surface additional valuable data that gets unearthed because your house is tidy. In this case, you can display a column that totals all child objects under a certain feature next to the 'Planned' estimate, which also lets you export and show planned vs. actual to help your teams estimate more accurately.
It's a roundabout way of saying there are a lot of possibilities with the tool if you can use your resources. Building custom apps means you also have to maintain them, or pay someone with the knowledge to do so.
I have used Scrapy and Beautiful Soup many times, but I find the Kimono Labs solution much easier and faster. The only problem is that some jobs do need a bit of tweaking, which is not possible there (e.g., crawling using a unique pattern).
Is there any other solution that combines that ease of use with optional complexity? Mainly, I want to define a page-scraping template using a WYSIWYG interface, and then programmatically write the crawler.
Use an Import.io extractor.
Download the Import.io browser
Create an extractor (what you call a "scraping template")
From your code, call the extractor's REST API (a sketch follows)
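Import.io's API routes have changed over time, so the URL shape below is a hypothetical placeholder to be checked against their current docs, as are the extractor ID and API key. The point is simply that once the extractor is defined visually, the crawler side reduces to an HTTP call:

```python
import requests  # pip install requests

# Hypothetical endpoint shape -- verify against Import.io's current docs.
EXTRACTOR_ID = "your-extractor-id"   # placeholder
API_KEY = "your-api-key"             # placeholder

resp = requests.get(
    f"https://api.import.io/store/data/{EXTRACTOR_ID}/_query",
    params={
        "input/webpage/url": "http://example.com/page-to-scrape",
        "_apikey": API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()

# The extractor returns structured rows matching your visual template.
for row in resp.json().get("results", []):
    print(row)
```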
Full disclosure: I'm one of the founders of ParseHub.
ParseHub tries to solve exactly this problem. It gives you a GUI and powerful tools for defining templates visually, and falls back to a subset of JavaScript if you need more fine-grained control. All of the programming primitives you're familiar with (if, for, break, recursion, etc.) are available.
You can find it at www.parsehub.com
Try Agenty
Agenty has exactly this feature set for scraping websites, along with a Chrome extension to set up the scraping agents. You can just install the extension and create agents to scrape any site.
FYI: we also plan to launch a hosted solution and REST API by April 2016 (update: the API is available now).
You can see more details on the website (www.datascraping.co, now Agenty.com).
Disclosure: I'm one of the founding members.
I'm looking for an online tool where my team and I could collaborate on creating graphs.
The purpose is to bind related words and generate the adjacency list. For example,
Foo----Bar----Brool
        |_____Lol
will generate the following list:
Foo,[Bar]
Bar,[Foo,Brool,Lol]
Brool,[Bar]
Lol,[Bar]
The idea is to allow people to collaborate simply using graph visualization, without diving through the adjacency list directly.
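For reference, going from edges to that adjacency list is mechanical. A minimal Python sketch, using only the node names from the example above:

```python
from collections import defaultdict

# Undirected edges taken from the example graph above.
edges = [("Foo", "Bar"), ("Bar", "Brool"), ("Bar", "Lol")]

adjacency = defaultdict(list)
for a, b in edges:
    adjacency[a].append(b)  # record each endpoint as the other's neighbor
    adjacency[b].append(a)

for node in ["Foo", "Bar", "Brool", "Lol"]:
    print(f"{node},[{','.join(adjacency[node])}]")
# Foo,[Bar]
# Bar,[Foo,Brool,Lol]
# Brool,[Bar]
# Lol,[Bar]
```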
There is one service which I believe is designed to allow people to collaborate on creating a graph: Graph Commons. The site slogan says:
Collaborative 'network mapping' platform and knowledge base of relationships
Unfortunately, at the moment you can only sign up for a beta invitation on the website, and it is not clear from the website what the creation/editing mechanism will be.
You could use the yFiles library to build a graph editor online, but I've never used it and I don't know whether you can manage multiple sessions (hence allowing direct collaboration). But, for instance, if you used Graphity, which is an implementation of the yFiles Flex library, and saved the file on Dropbox, then each collaborator would have access to that file, and you could set up a rudimentary collaborative graph tool. Maybe.
It would be great to have tools like LucidChart or Draw.io for this, but they don't let you export a graph file (e.g., GraphML, from which you could then derive an edge list with other programs like Gephi). Those tools only export images and vectors. Draw.io exports XML, but not GraphML.
I believe Linkurious lets you edit your graph. Again, I've never used it, and I don't know whether it can manage multiple sessions (and hence collaboration), but I would check it out. Edit: the Linkurious enterprise edition (see pricing) is designed to handle multiple user sessions.
What about building something with vis.js? The library has the ability to «listen for changes in the data» using a DataSet component. Have a look at this example.
I'm sorry I don't have a definitive answer, but since your question is very timely and the right tools will come out sooner or later (if they don't exist already), I wanted to share these thoughts. I hope they help. Please post when you find a solution!
I'm curious about website scraping (i.e., how it's done, etc.), specifically because I'd like to write a script to perform the task for the site Hype Machine.
I'm actually a software engineering undergraduate (4th year), but we don't really cover any web programming, so my understanding of JavaScript/RESTful APIs/all things web is pretty limited, as we're mainly focused on theory and client-side applications.
Any help or directions greatly appreciated.
The first thing to look for is whether the site already offers some sort of structured data, or whether you need to parse through the HTML yourself. It looks like there is an RSS feed of the latest songs; if that's what you're looking for, it would be a good place to start.
You can use a scripting language to download the feed and parse it. I use Python, but you could pick a different scripting language if you like. Here are some docs on how you might download a URL in Python and parse XML in Python.
Another thing to be conscious of when you write a program that downloads a site or RSS feed is how often your script runs. If it runs constantly so that you get new data the second it becomes available, you'll put a lot of load on the site, and there's a good chance they'll block you. Try not to run your script more often than you need to; a minimal sketch of polite polling follows.
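Here is a minimal Python sketch using only the standard library. The feed URL is a placeholder, since I haven't checked the site's actual feed address, and the 15-minute interval is just an example of a polite polling rate:

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL -- substitute the site's real RSS feed address.
FEED_URL = "http://example.com/feed.rss"

def fetch_titles(url):
    """Download an RSS feed and return the item titles."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.parse(resp)
    # RSS 2.0 nests <item> elements under <channel>; each has a <title>.
    return [item.findtext("title") for item in tree.iter("item")]

while True:
    for title in fetch_titles(FEED_URL):
        print(title)
    time.sleep(15 * 60)  # poll politely: every 15 minutes, not constantly
```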
You may want to check the following books:
"Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL"
http://www.amazon.com/Webbots-Spiders-Screen-Scrapers-Developing/dp/1593271204
"HTTP Programming Recipes for C# Bots"
http://www.amazon.com/HTTP-Programming-Recipes-C-Bots/dp/0977320677
"HTTP Programming Recipes for Java Bots"
http://www.amazon.com/HTTP-Programming-Recipes-Java-Bots/dp/0977320669
I believe the most important thing to analyze is what kind of information you want to extract. If you want to crawl entire websites, as Google does, your best option is probably to look at tools like Nutch from Apache.org (nutch.apache.org) or the Flaptor solution, http://ww.hounder.org. If you need to extract particular areas from unstructured documents (websites, docs, PDFs), you can probably extend Nutch plugins to fit those particular needs.
On the other hand, if you need to extract particular text or clipped areas of a website, setting rules against the DOM of the page, you should look at tools like mozenda.com. With those tools you can set up extraction rules to scrape particular information from a website, but take into consideration that any change to the web page will break your robot.
Finally, if you are planning to develop a website that uses outside information sources, you could purchase information from companies such as spinn3r.com, which sell particular niches of information ready to be consumed; you can save a lot of money on infrastructure that way.
Hope it helps!
Sebastian
Python has the feedparser module, located at feedparser.org, which handles RSS in its various flavours as well as Atom in its various flavours. No reason to reinvent the wheel.
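A minimal usage sketch (the feed URL is again a placeholder):

```python
import feedparser  # pip install feedparser

# feedparser normalizes RSS 0.9x/1.0/2.0 and Atom into one structure.
feed = feedparser.parse("http://example.com/feed.rss")  # placeholder URL

print(feed.feed.title)
for entry in feed.entries:
    print(entry.title, "->", entry.link)
```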
I am new to code generation tools and I would like to know how a tool like LLBLGen Pro compares with Entity Framework.
On top of that, my boss is really looking into a tool called CodeOnTime (http://codeontime.com/default.aspx) because he likes its good UI support.
I am asking here because I really want an unbiased opinion.
I am not sure whether LLBLGen can also generate the UI. So far we do all in-house development the classic way, coding each layer manually; however, we are in need of a fast prototyping tool.
Any advice to help me choose wisely will be much appreciated.
Thanks in advance.
Have you taken a look at CodeSmith Generator? It's a template-based generation tool with Visual Studio integration, so by definition all templates are open source, and it has advanced features such as generate-on-build that keep your project up to date with your data source at all times.
Also, the CodeSmith team is about to start working on an official set of EF templates, but for now they offer several different ORM options, including LINQ to SQL, NHibernate, .netTiers, CSLA, etc.
The thing is that there are code generators, object-relational mappers (ORMs), and code generators that do object-relational mapping.
Something like NHibernate is a pure ORM and doesn't generate any code; it just provides you with an object persistence layer.
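To illustrate what "a persistence layer without code generation" means, here is a minimal sketch in Python, with SQLAlchemy standing in for NHibernate as the pure-ORM example (an analogy, not the .NET tooling itself). The mapping is declared in ordinary code and the ORM does the persistence work at runtime; no source files are emitted:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    """Mapped class: the ORM persists it at runtime; nothing is generated."""
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)  # create the table from the mapping

with Session(engine) as session:
    session.add(User(name="Ada"))
    session.commit()
    print(session.query(User).count())  # -> 1
```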
LLBLGen is a code generator that generates code performing the functions of an ORM, but you can actually see the code and override it with custom behaviour. LLBLGen won't generate your UI for you, and it isn't designed to; it's heavily focused on data access.
Then you have tools like CodeSmith, or the built-in T4 generator that comes with Visual Studio, which you can use to create templates that will then generate anything you want, provided you write your own templates (a toy sketch of the idea follows). I've worked for companies that have invested thousands into writing their own templates.
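The template idea itself is language-agnostic. Here is a toy Python sketch of what template-driven generation means; the schema dict is obviously a placeholder for the real database metadata a tool like T4 or CodeSmith would read:

```python
from string import Template

# Toy stand-in for database metadata that a real generator would read
# from your actual schema.
schema = {"class_name": "Customer", "fields": ["Id", "Name", "Email"]}

entity_template = Template(
    "public class $class_name\n"
    "{\n"
    "$properties"
    "}\n"
)
prop_lines = "".join(
    f"    public string {f} {{ get; set; }}\n" for f in schema["fields"]
)

# Emit the generated source; a real tool would write this to a .cs file.
print(entity_template.substitute(
    class_name=schema["class_name"], properties=prop_lines))
```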
Finally, there are complete tools like CodeOnTime or IronSpeed which generate entire applications for you. This sounds good in theory, and it is great for small CRUD-type applications, but you lose a lot of flexibility with tools like these: they often have conventions you are required to work within, and once you start getting into heavy customization they tend to get in your way.
You should ask yourself:
Do I just need something for accessing my data? If so, you could use an ORM.
Do I need to generate a highly customized UI? If so, you'd probably be best avoiding tools like CodeOnTime and IronSpeed.
I've used both LLBLGen and Entity Framework. In my experience, they are roughly equal in capability, especially now that Entity Framework 4 has been released. NHibernate is also in this realm and should be considered if you're looking to compare the top ORM tools for .NET.
I would recommend downloading the LLBLGen Pro demo to evaluate it. According to Frans Bouma's blog, LLBLGen Pro offers enhanced features not present in the out-of-the-box Entity Framework tooling built into VS.NET 2010.
ORM tools like EF and LLBLGen do not generate UI. For that you will need something like IronSpeed (not recommended; I don't like the code it generates) or the IdeaBlade DevForce products, which I have not used.