How to create Neo4j graphs with Spring Roo?

Is the graph package still available in Roo? I would like to create graphs with Neo4j but I can't find any recent documentation or examples of the feature.

No, the Neo4j graph package is not available in the current (1.2.5) release of Roo, and history informs us that any promise to restore support should be viewed skeptically.
This antiquated exchange, along with 3-4 others dated as recently as one year ago, suggests that there is little ground for hope. Moreover, the GitHub activity charts (sorry, my SO "reputation" is not adequate to post another link; you'll have to figure this one out without help) show zero development activity related to Spring Roo during the last month. I didn't dig much deeper, but the front page shows essentially no activity in the last six months. That's not a good sign. I did read that a new development partner has just signed on, so maybe things will improve.
I'm very new to Roo, but I'm finding bug after bug and problem after problem, many of them outstanding for years, like the absence of Neo4j. I really like Roo's ideas and design, but I'm not sure that I'm willing to bet on its robustness, let alone its longevity. I don't seem to be alone: I searched the web and asked here, but can't find direct evidence (that is, code that I can examine) that it's being used on real-world projects. Folks pick it up and toy around with it, but if they write more than 40 lines or so of code they don't open their source. (Please understand that I'm no deliberate detractor; I'd be delighted to be proven wrong on this point!)
One of the saddest consequences of not having Neo4j is that one of the three existing Roo books, Getting Started with Roo, takes it on the chin beginning near the end of the first chapter. This is otherwise an outstanding book. But the absence of Neo4j kicks its ongoing-project model right upside the head. I know the labor entailed by writing a book. Josh Long must be severely disappointed.
More generally, it seems that tech authors are being implicitly encouraged to write small chunks of code that stand on their own, so that their book won't be damaged overall by one or two technological changes. Of course, readers then never get to see code of significant size. I don't have a solution to offer but I'm definitely feeling the problem.
Edited: Because at least one passerby has demonstrably failed to grasp the implications of promises, made and repeatedly broken several years ago, to restore Neo4j support "in the next few days," I have explicitly spelled out the fact that support is not currently available and that no credible promise of imminent restoration is to be found. Please note that a broken promise to restore support implies that support was not restored, and that subsequent promises to restore support should be treated with some degree of skepticism. Those who suppose that this answer is "not useful" would likely benefit from the web site readingcomprehensionconnection.com, or, if the problem is one of attention span, from some reading material without speech balloons. I find it difficult to incorporate speech balloons in my merely textual SO comments; perhaps a future release of SO will support a communication style more familiar and therefore more to one's liking.
Demonstration that Neo4j Is Currently Absent from the Upcoming Release, 1.2.6
The method of search follows the instructions given in Long & Mayzak (2011), apparently written by Michael Hunger of Neo Technology, Inc., the company that developed Neo4j.
/_/ |_|\____/\____/ 1.2.6.BUILD-SNAPSHOT [rev 32b413d]
Welcome to Spring Roo. For assistance press TAB or type "hint" then hit ENTER.
roo> project --topLevelPackage org.hagiasmon.gswr --projectName gswr
Created ROOT/pom.xml
Created SRC_MAIN_RESOURCES
Created SRC_MAIN_RESOURCES/log4j.properties
Created SPRING_CONFIG_ROOT
Created SPRING_CONFIG_ROOT/applicationContext.xml
roo> pgp trust -keyId 0x29c2d8fd
You must specify option 'keyId' for this command
roo> pgp trust --keyId 0x29c2d8fd
Added trust for key:
>>>> KEY ID: 0x29C2D8FD <<<<
More Info: http://keyserver.ubuntu.com/pks/lookup?fingerprint=on&op=index&search=0x29C2D8FD
Created: 2011-Jan-06 10:48:11 +0000
Fingerprint: 558eb0489fe5500c68fa8a306107f33d29c2d8fd
Algorithm: RSA_GENERAL
User ID: Michael Hunger <Michael.Hunger@jexp.de>
Signed By: Key 0x29C2D8FD (Michael Hunger <Michael.Hunger@jexp.de>)
Subkey ID: 0xDEFB5FB1 [RSA_GENERAL]
roo> addon search graph
0 found, sorted by rank; T = trusted developer; R = Roo 1.2 compatible
ID T R DESCRIPTION -------------------------------------------------------------
--------------------------------------------------------------------------------
[HINT] use 'addon info id --searchResultId ..' to see details about a search result
[HINT] use 'addon install id --searchResultId ..' to install a specific search result, or
[HINT] use 'addon install bundle --bundleSymbolicName TAB' to install a specific add-on version
roo>
P.S. I initiated Facebook correspondence with Josh Long, who, along with Michael Hunger, is one of the two folks driving the Roo-Neo4j interface, as far as I can tell. If he responds, I'll update this comment.

Related

CSP Browser Policy + Zopim widget (+ underscore)

For the past few days, I've been trying to properly add Meteor's CSP package, browser-policy. So far, I have followed these resources:
https://dweldon.silvrback.com/browser-policy
https://themeteorchef.com/snippets/using-the-browser-policy-package/
Things were a bit rough at the beginning, but we are close to something; the last piece of the puzzle is that Zopim's live-chat widget is not a fan of our new policy. I tried to whitelist it and put Zopim's widget code into a Meteor.startup call, but it still fails on load due to some unsafe-eval, as you can see below.
As I don't want to loosen my policies any further, is there any workaround for this, or should I just forget about Zopim and give some other tool a shot (which I'd be glad to hear about if you have any suggestions)?
Bonus: I first had my policy set with BrowserPolicy.content.disallowEval(); but MDG's underscore package started to fall apart and I had to allow it. Allowing eval is clearly not ideal and I'd be glad to hear of any alternative.
You're hitting the first bullet point from the "issues" section of my post. You have to decide if disallowing eval is more important to you than that particular 3rd party script. In our case, we allowed eval for a few days while the external script was modified (fortunately the creator agreed to the change). It never hurts to send an email and just explain that you think their scripts are posing a risk to your site because you can't enable a strict content security policy.
We currently have BrowserPolicy.content.disallowEval() set and haven't run into any issues. I find it hard to believe that a core package would violate that directive. Maybe some other package is causing it, but it's hard to say without a detailed analysis of your dependencies.
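For what it's worth, here is a minimal sketch of the whitelist-instead-of-allow-eval approach in a server-side file. The zopim.com origins are an assumption on my part; check them against the blocked-origin messages in your browser console:

    // server/browser-policy.ts -- BrowserPolicy calls run on the server only.
    // Keep eval disabled and whitelist the widget's origins instead.
    BrowserPolicy.content.disallowEval();

    // ASSUMPTION: the widget loads from *.zopim.com; verify against the CSP
    // violation reports in your console before copying any of this.
    BrowserPolicy.content.allowScriptOrigin('https://v2.zopim.com');
    BrowserPolicy.content.allowConnectOrigin('https://*.zopim.com');
    BrowserPolicy.content.allowConnectOrigin('wss://*.zopim.com');
    BrowserPolicy.content.allowFrameOrigin('https://*.zopim.com');
    BrowserPolicy.content.allowImageOrigin('https://*.zopim.com');

If the widget still trips over unsafe-eval after whitelisting, the eval call is inside the vendor's own script, and nothing short of allowEval() (or the email-the-vendor route above) will get it running.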

Lotus Smartsuite to "something newer"

I shall try and keep my scenario as brief as possible and to the point.
The office I’m currently working for uses Lotus Smartsuite on Windows 98 / XP, using lots of Lotus Script to tie together Lotus 123 and Lotus Word Pro documents. They also make heavy use of the Lotus Object Linking functions. I shall describe its behaviour below:
You can fill rows and columns in a 123 Spreadsheet with data galore, style it and format it any way you like and define it as a range (nothing unique here). However, you can then copy that range and paste it as a link in a Lotus Word Pro document. This link is then categorised by its range name, so expanding the range back in the 123 file causes the table in the Word Pro Document to expand. This link also carries with it all the formatting and styling of the cells in the 123 Spreadsheet. As I imagine you are now aware, this link is completely live, you can double click anywhere in the object and it opens up the 123 file for editing, and all changes go backward and forward between the two documents. Most of the data retrieved from testing equipment is stored in these 123 spreadsheets and then parts of that are linked into a final Lotus Word Pro report document sent to the customer.
Note: Just to be clear, this is NOT the same as a DDE link in Open Office, which seems to allow for copying of a non-defined range of cells to be imported into a document where all formatting is lost and editing back and forth is not straight forward. It also behaves differently to an OLE object, which seems to only import the entire Spreadsheet rather than a small subsection of it.
However, in recent years, supporting this older software (Lotus) has become more difficult, especially with regard to sending customers documents (Lotus Word Pro files are generally unsupported by more modern office tools), and technical support for Lotus Smartsuite seems to be practically non-existent these days. Also, with the scripting language no longer being practised by mainstream IT technicians, ongoing development and support seem futile. Once the guys who wrote it move on to other things, we will be left with spaghetti script in a language nobody can help us with.
So, we have this goal of "modernising" our IT system by the end of the year. Linux is becoming a very viable option too (no doubt Debian or a derivative), but Open Office doesn't seem to have the linking capability mentioned above. The reason this linking is so important is that the veterans of the office are so used to working this way - storing data in the spreadsheet, linking back to it later in their Word Pro documents, etc. I think they are more than keen to keep this practice going, and we have found no equivalent of it in modern office tools (as was requested of me). I can see, as a software engineer (fluent in many languages), how this practice is not the safest or best way of using and storing data (databases spring to mind), but I was wondering if someone could give me a few other good reasons as to why this is bad practice in the workplace (I was always of the belief that you should keep your data away from your reporting and formatting, the two should never be entwined - this looks like spreadsheet hell to me) ... or why this is a good thing to keep doing!?
So, for those of you still with me, I guess what I am asking is:
Is this practice of storing data, formatting it in spreadsheets and importing that directly back and forth between word documents good or bad, and what can be done about it? I guess I'll need to prove my point in case either way for this.
Are there ANY modern alternatives to this linking method (regardless of whether it is good or bad practice) out there for Linux or Windows? This link MUST carry formatting as well as dynamic range sizes (DDE links don't seem to be the answer).
What would your solution be if you had to start from scratch? Store everything in databases and use SQL to simply ask for the data you need in your word documents? How would you do this? What software would you use?
Any help with this scenario would be more than helpful, or if you know anywhere I should go to ask for advice, that would be appreciated too.
Thank-you for reading!
My suggestion is to first take a step back. What is the benefit to the way things are done now? Is it just a habit that is tough to break? Is there any reason the documents and spreadsheets need to be maintained and linked the way they are, or is it just a requirement because 'that's how it was done before'?
If you can remove that requirement, you have a lot more options and you're building a system that's easier to understand and maintain.
Regarding question 1, I believe there's nothing wrong with storing data in spreadsheets, especially if the end-users need to create and maintain them and development staff is limited. Some questions are whether that data needs to be secured, is related between spreadsheets, is duplicated across the company, or should be shared in a better way across the company. If any of those are true then a centralized database would make more sense. Personally I'd want any valuable data safely stored in a database where it can be managed, access to it can be controlled, it can be easily backed-up, etc.
Regarding question 2, you can do the same thing in Microsoft Office. You can either link the documents, so that the data stays in the source Excel doc but appears in the Word doc, or you can embed the Excel spreadsheet within the Word doc.
You might want to look at Microsoft Access for storing the data and generating reports. Or you could build an application using a relational database back-end and reporting front-end. The possibilities are wide-open. It really depends on where the expertise lies within the company.
If it were me I'd probably use a SQL Express back-end (it's free) and a custom ASP.NET MVC application for generating the reports, but that's just where my expertise lies.
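To make the "data in a database, formatting in the report layer" separation concrete, here is a minimal TypeScript sketch; the better-sqlite3 package stands in for the SQL Express back end mentioned above, and the measurements table and its columns are invented for illustration:

    // report.ts -- the data lives in the database; the formatting lives here.
    import Database from 'better-sqlite3';

    interface Measurement { serial: string; reading: number; takenAt: string; }

    const db = new Database('test-results.db');

    // HYPOTHETICAL schema: measurements(serial, reading, taken_at)
    const rows = db
      .prepare('SELECT serial, reading, taken_at AS takenAt FROM measurements WHERE serial = ?')
      .all('UNIT-042') as Measurement[];

    // Rendering is a separate concern: swap this block for Word/PDF output
    // later without touching the data layer at all.
    const report = [
      'Test Report for UNIT-042',
      ...rows.map(r => `${r.takenAt}  reading = ${r.reading}`),
    ].join('\n');

    console.log(report);

The point is the boundary, not the particular libraries: once the data is queried rather than embedded, any reporting front end can be bolted on.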

Handling asynchronous edits of documents on the web

I am writing a Web application that has a user interface for editing data. The idea is something similar to a wiki where there are edits to chunks of text. What is the best way to handle asynchronous edits from multiple users? The situation I am considering is this:
There is a document that is version 0. User A is editing it when it is version 0. A few minutes later but before user A saves his changes, user B opens up the same document and starts editing. How should the server treat the two different edits to version 0 of the document? Also what is this problem called and where can I get more information about similar problems?
Wikipedia addresses this problem in the following way:
Assume that person A and person B are both editing the same document. Also assume person A submits their edits slightly before person B.
First the MediaWiki software runs a traditional diffing algorithm over both edits.
Next, the results of the diffing algorithm are used to merge the text.
If the diffing algorithm finds that there are merge conflicts (i.e. person A and B edited the same piece of text), then person B is asked to resolve the conflicts since they submitted their edits last.
Wikipedia handles merge conflicts much like conflicts in a code repository.
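As a toy illustration of that merge step (this is not MediaWiki's actual algorithm, which uses a proper diff; the sketch below assumes, unrealistically, that both edits leave the line count unchanged):

    // Naive line-wise three-way merge. base is version 0; a and b are the two
    // concurrent edits. ASSUMES all three arrays have the same length.
    type MergeResult = { merged: string[]; conflicts: number[] };

    function merge3(base: string[], a: string[], b: string[]): MergeResult {
      const merged: string[] = [];
      const conflicts: number[] = [];
      for (let i = 0; i < base.length; i++) {
        if (a[i] === b[i]) merged.push(a[i]);          // identical (or untouched)
        else if (a[i] === base[i]) merged.push(b[i]);  // only B changed this line
        else if (b[i] === base[i]) merged.push(a[i]);  // only A changed this line
        else { merged.push(a[i]); conflicts.push(i); } // both changed: conflict;
      }                                                // B resolves, as on Wikipedia
      return { merged, conflicts };
    }

    // merge3(['x','y'], ['x2','y'], ['x','y2']) -> { merged: ['x2','y2'], conflicts: [] }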
If you want to allow multiple people to edit a document simultaneously and in real time (like with Google Wave or Etherpad), then I'd recommend looking into operational transforms (aka OT). Though the OT algorithm is neither harder nor simpler than a traditional diffing algorithm, there is less information on it and there are fewer ready-made implementations.
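For a taste of OT, here is the core transform for the simplest case, two concurrent insertions. A real implementation also handles deletions and composes long operation histories; the siteId tiebreak below is one common convention, not the only one:

    // Each op inserts `text` at `pos` in the shared document; siteId is a
    // unique per-client number used only to break ties deterministically.
    interface InsertOp { pos: number; text: string; siteId: number; }

    // Rewrite `op` so it still means the same thing after `other` (a
    // concurrent op that has already been applied) shifted the text.
    function transform(op: InsertOp, other: InsertOp): InsertOp {
      if (op.pos < other.pos) return op;
      if (op.pos === other.pos && op.siteId < other.siteId) return op;
      return { ...op, pos: op.pos + other.text.length };
    }

    // Doc "abc": A (siteId 1) inserts 'X' at 1; B (siteId 2) inserts 'Y' at 2.
    // Each side applies its own op, then the transformed remote op, and both
    // converge on "aXbYc" -- the property that makes real-time editors work.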
One typical pattern is to send each user a chunk of text along with a version number indicating which version they received. The rule is that the host will only accept the first revision to the currently active version.
That way only one person can revise each version; everyone else would be told that their version was obsolete and you can do what you want for them at that point - usually send them the current version to retry.
This only works if it's unlikely to have multiple people working on the same version. If that's likely, then you probably need to research how subversion for instance handles multiple revisions to source code.
There are also schemes for multiple people working simultaneously on the same text and being fed each other's updates; see Google Wave for one example.
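To make that version-number rule concrete (and to answer the "what is this called" part of the question: it is optimistic concurrency control, a.k.a. optimistic locking), here is a minimal in-memory sketch; substitute whatever store you actually use:

    // Every save must name the version it was based on; the first writer
    // against the active version wins, everyone else reloads and retries.
    class VersionedDocument {
      private version = 0;
      private text = '';

      read(): { version: number; text: string } {
        return { version: this.version, text: this.text };
      }

      save(baseVersion: number, newText: string): boolean {
        if (baseVersion !== this.version) return false; // stale -- reload first
        this.text = newText;
        this.version++;
        return true;
      }
    }

    const doc = new VersionedDocument();
    const a = doc.read();            // user A sees version 0
    const b = doc.read();            // user B sees version 0
    doc.save(a.version, "A's edit"); // true: document is now version 1
    doc.save(b.version, "B's edit"); // false: B edited version 0, must retry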

"Selling" trac/buildbot/etc to upper management

My team works mostly with Flex-based applications. That being said, there are nearly no conventions at all (even getting them to refactor is a miracle in itself).
Coming from a .NET + CruiseControl.NET background, I've been aching to get everyone to use some decent tracking software (we're using a todo list coded in PHP now) and CI; I figured trac + BuildBot would be a nice option.
How would you convince upper management that this is the way to go, as well as some of the rules mentioned in this post? One of my main issues is that everyone codes without thinking (You'd be amazed at the type of "logic" this spawns...)
Thanks
Is there anything you could do now that wouldn't require permission from anyone else? Could you start by just using trac/buildbot/etc for just your own work, then add in others as they are interested?
In my experience you can get quite far by doing w/out asking.
Tell the management that they'll be better able to keep their eye on progress with such a tool.
Are there specific benefits to the route that you're suggesting that you could show them without them having to buy in?
I had an experience with getting my team to accept a maven + cruisecontrol CI setup. Basically I tried to get them to go along with it for a few days and they kept balking because it was unfamiliar. Then I just did it on my own and had all broken builds emailed to the mailing list. That night the project lead made a check-in that broke the build (he just forgot a file) and, of course, everybody was emailed about his screw-up.
The next day he came over to me and said, "I get it now."
It required no effort from him to get involved, and he got to see the benefits for free.

Is Wiki Content Portable?

I'm thinking of starting a wiki, probably on a low cost LAMP hosting account. I'd like the option of exporting my content later in case I want to run it on IIS/ASP.NET down the line. I know in the weblog world, there's an open standard called BlogML which will let you export your blog content to an XML based format on one site and import it into another. Is there something similar with wikis?
The correct answer is ... "it depends".
It depends on which wiki you're using or planning to use. I've used various wikis over the years. MoinMoin was OK; it uses files rather than a database, and Ubuntu seem to like it. MediaWiki everyone knows about, and JAMWiki is a Java clone(ish) of MediaWiki with the aim of being markup-compatible with it. Both use databases, and you can generally connect whichever database you want; JAMWiki is pre-configured to use an internal HSQLDB instance.
I recently converted about 80 pages from a MoinMoin wiki into JAMWiki pages, and this was probably 90% handled by a tiny Perl script I found somewhere (I'll provide a link if I can find it again). The other 10% was unfortunately a by-hand experience (they were of the utmost importance, being recipes for the missus). ;-)
I also recently setup a Mediawiki instance for work and that took all of about 8 minutes to do. So that'd be my choice.
To answer your question, I don't believe that there's such a standard as "WikiML", as Till called it.
As strange as it sounds, I've investigated screen scraping a wiki for a co-worker to help him port it to another wiki engine. It turned out that screen scraping was the easier, quicker and more efficient approach for moving this particular file-based wiki to another one or to a CMS.
Given the context that you wrote the question in, I would bite the bullet now, pay the little extra for a Windows hosted account, and put ScrewTurn Wiki on it. You've got the option of a file-based or SQL Server-based back end for it, but because one of your requirements is low cost, I'm guessing you would use the file-based back end with a cheaper hosted account for now; you can always upscale the back end to SQL Server later.
I haven't heard of WikiML.
I think your biggest obstacle is gonna be converting one wiki markup to another. For example, some wikis use Markdown (which is what Stack Overflow uses), while others use another markup syntax (e.g. BBCode), etc. The bottom line is: assuming the contents are databased, it's not impossible to export and parse them to make them "fit" into another system. It might just be a pain in the ass (the sketch at the end of this answer gives a taste).
And if the contents are not databased, it's gonna be a royal pain in the ass. :D
Another solution would be to stay with the same system. I am not sure what the reason is for changing the technology later on. It's not like a growing project suddenly requires IIS/ASP.NET. (It might just be the other way around.) But, for example, if you stick with PHP for a while, you could also run that on IIS.
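To give a feel for how mechanical (and how incomplete) that markup conversion tends to be, here is a toy TypeScript sketch translating a few MediaWiki constructs to Markdown; real content also has links, tables and templates, which is where the pain lives:

    // Toy MediaWiki -> Markdown translation: headings, bold and italic only.
    function wikiToMarkdown(src: string): string {
      return src
        .split('\n')
        .map(line => line
          // longest heading fences first, so '==' doesn't swallow '===' etc.
          .replace(/^====\s*(.+?)\s*====\s*$/, '#### $1')
          .replace(/^===\s*(.+?)\s*===\s*$/, '### $1')
          .replace(/^==\s*(.+?)\s*==\s*$/, '## $1')
          .replace(/'''(.+?)'''/g, '**$1**') // bold before italic: ''' contains ''
          .replace(/''(.+?)''/g, '*$1*')
          .trimEnd())
        .join('\n');
    }

    // wikiToMarkdown("== Recipes ==\n'''Really''' good ''stew''")
    //   -> "## Recipes\n**Really** good *stew*"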
