Graph results of a PostgreSQL database query in Zenoss

So I just added my servers to Zenoss and installed some PostgreSQL ZenPacks. Unfortunately, I don't care for most of the PostgreSQL monitoring they provide. Instead of what they give me, I am wondering if it is possible to send a custom query that I wrote and then graph the result using Zenoss. How do I go about doing this? Are there any good resources that you know of?
Thanks.

You will need to write your own ZenPack (or at least a monitoring Template). Check the ZenPack Development Guide: http://wiki.zenoss.org/ZenPack_Development_Guide or http://zenosslabs.readthedocs.org/en/latest/zenpack_development/
IMHO you will need a zencommand datasource, which will execute your custom SQL query; the query output (a single number) then becomes the metric value that Zenoss processes.
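For example, a zencommand datasource just runs an executable and reads the number it prints. Here is a minimal sketch of such a command, written in Java with the PostgreSQL JDBC driver purely for illustration (any language works; the connection details and the query are placeholders you would replace with your own):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Prints a single number on stdout so a zencommand datasource can use it as the metric value.
    public class PgMetric {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details and query; substitute your own.
            String url = "jdbc:postgresql://localhost:5432/mydb";
            try (Connection conn = DriverManager.getConnection(url, "zenoss", "secret");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT count(*) FROM orders WHERE status = 'open'")) {
                if (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }

You would then point the zencommand datasource at whatever script runs this, add a datapoint for the value it prints, and graph that datapoint in your template.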
Alternatively, you can expose the metric(s) via SNMP, in which case it becomes just a standard SNMP metric in Zenoss.
It's up to you how you implement it. I recommend the community forum http://www.zenoss.org/forum for Zenoss-related questions.

Related

WSO2 ESB Clustered databases and Data Services

I was able to set up a WSO2 ESB cluster, following the dedicated documentation, with one manager and two workers.
I am not sure about two points:
Does each worker node need its own REGISTRY_LOCAL database?
With both workers using the same database it works, but I'm not sure that is the intended approach, and the documentation isn't clear about it.
Adding Data Services as a feature?
There is almost no documentation about that, but not being able to fetch more than one row is a big limitation for me. So is it possible to add this feature in a clustered environment, or is it better to separate the Data Services Servers from the ESB ones?
If someone has experience with this kind of setup, I would really appreciate some feedback.
Thanks
You can use the same registry database for all the members in the cluster so that they can communicate with each other through it. You may refer to my blog post [1] for further information. Personally, I have never used a local registry DB for each member in a cluster setup.
You cannot fetch multiple records when you install the DSS feature into the ESB. Installing the feature also incurs some performance overhead in the ESB. Therefore I strongly recommend using a separate DSS instance to get your work done. It also separates the concerns cleanly, which is a good thing.
[1] http://ravindraranwala.blogspot.com/2015/09/wso2-esb-worker-manager-cluster-without.html

Using Breeze with an ODBC Connection

I am in the beginning stages of writing an AngularJS client that talks to a RESTful ASP.NET Web API server, and I am trying to integrate Breeze. I have full control over both client and server code, but the one non-negotiable is that I have to connect to a DBISAM database that I'm sharing with a legacy Windows desktop app, so I cannot take advantage of the Entity Framework that most Breeze server examples use. I've successfully retrieved data by setting up a controller similar to the one in the NoDB example and am now trying to figure out the best way to get real data from my database. I am able to get data from the DB over the ODBC connection, but I'm just not sure where that fits in with the Breeze way of doing things.
Given all of that, here are my specific questions:
Are there any Breeze server examples showing how to retrieve/save data using a database connected via ODBC that I have somehow overlooked?
Will I need to create an adapter to make this work? And if so, is the MongoDB adapter the closest thing to use as an example of what that code should look like?
Without Entity Framework, is it still better to return the metadata from the server, or should I create it on the client instead?
I understood the documentation to say that it is easier to make the server Breeze-"aware", but is that still true when I need to use ODBC? Perhaps I should just use Breeze on the client side instead, similar to the Edmunds example?
Thanks for any help with figuring out the best way to proceed!

SDO vs DB Adapter oracle 11g

I am publishing this post to get at the underlying idea behind the real-world use of this technology.
I know this isn't a common question, but it doesn't mean that it isn't important.
If you were working with lots of tables in a database, and you were using lots of BPEL services, would you choose SDO (Service Data Objects) over DBAdapters (Database Adapters)?
I have been working with SDOs for a few weeks and find them really useful, but I'm not sure whether using SDOs is better than using DB Adapters or not...
What do you think: SDOs or DBAdapters?
Thanks in advance.
Basically, SDO is Oracle SOA's attempt at an ORM, so you can simply look for information comparing ORMs with plain JDBC. The DBAdapter is slightly different from plain JDBC in that it has extra features around polling and stored procedure integration.
DBAdapter -> simple SQL, stored procedures, basic read/write/update/delete, and polling.
SDO -> highly reusable code and everything that doesn't suit the DBAdapter.
Here is a thread to look at http://forum.spring.io/forum/spring-projects/data/14117-jdbc-or-orm-framework-what-are-the-pros-and-cons

Recommendations using R with SimpleDB or BigQuery or using PHP with SimpleDB

I am currently working on a system that generates product recommendations like those on Amazon: "People who bought this also bought..."
Current Scenario:
Extract the client's Google Analytics data and insert it into a database.
On the client's website, when a product page loads, an API call is made to get recommendations for the product being viewed.
When the API receives a product ID as the request, it looks in the database, retrieves the recommended product IDs (using association rules), and sends them back as the response.
This list of product IDs is then processed on the client end to get the product details (image, price, ...) and displayed on the website.
Currently I am using PHP and MySQL with the gapi package and a REST API, with storage on Amazon EC2.
My Question is:
Now, if I have to choose among the following, which would be the best choice for implementing the concept described above?
PHP with SimpleDB or BigQuery.
R with BigQuery.
RHIPE (R and Hadoop) with SimpleDB.
Apache Mahout.
Please help!
This isn't so easy to answer, because the constraints are fairly specialized.
The following considerations can be made, though:
BigQuery is not yet public. Thus, with a small user base, even if you are in the preview population, it will be harder to get advice on improvements.
Each of your options pairs a modeling system with a storage system. Apache Mahout is not a storage mechanism, so it won't necessarily work on its own. I used to believe that its machine learning implementations were a pastiche of a few Google Summer of Code projects, but I've updated that view at the suggestion of a commenter. It still looks like it has rather uneven and spotty coverage of different algorithms, and it's not particularly clear how the components are supported or maintained. I encourage an evangelist for Mahout to address this.
As a result, this eliminates the 1st, 2nd, and 4th options.
What I don't quite get is the need for a real-time server to utilize Hadoop and RHIPE. That should be done in your batch processing for developing the recommendation models, not in real-time. I suppose you could use RHIPE as a simple one-stop front end for firing off queries.
I'd recommend using RApache instead of RHIPE, because you can get your packages and models pre-loaded. I see no advantage to using Hadoop in the front end, but it would be a very natural back end system for the model fitting.
(Update 1) Other interface options include RServe (http://www.rforge.net/Rserve/) and possibly RStudio in server mode. There are R/PHP interfaces (see comments below), but I suspect it would be better to access R through HTTP or TCP/IP.
(Update 2) Addressing the whole process, the basic idea I see is that you could query the data from PHP and pass to R or, if you wish to query from within R, look at the link in the comments (to the OmegaHat tools) or post a new question about R & SimpleDB - I'm sure someone else on SO would be able to give better insight on this particular connection. RApache would let you instantiate many R processes already prepared with packages loaded and data in RAM; thus you would only need to pass whatever data needs to be used for prediction. If your new data is a small vector then RApache should be fine, and it seems this is correct for the data being processed in real-time.
If you want a real-time API for recommendations based on data in a database, Apache Mahout does this directly. You want to use ReloadFromJDBCDataModel, put a GenericItemBasedRecommender on top of it, and use the servlet-based wrapper in the examples module. It's probably a day or two of work to get familiar with the code and customize it to your needs, but it's pretty simple.
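For illustration, here is a minimal sketch of that setup using Mahout's Taste API, assuming a MySQL table of user/item preferences; the table and column names, item ID, and connection details are all placeholders:

    import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;
    import org.apache.mahout.cf.taste.impl.model.jdbc.MySQLJDBCDataModel;
    import org.apache.mahout.cf.taste.impl.model.jdbc.ReloadFromJDBCDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.TanimotoCoefficientSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;

    public class ItemRecommenderSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the preference data.
            MysqlDataSource ds = new MysqlDataSource();
            ds.setURL("jdbc:mysql://localhost:3306/recs");
            ds.setUser("user");
            ds.setPassword("secret");

            // Cache the JDBC-backed model in memory; refresh() reloads it as data changes.
            DataModel model = new ReloadFromJDBCDataModel(
                    new MySQLJDBCDataModel(ds, "taste_preferences",
                            "user_id", "item_id", "preference", "timestamp"));

            // Item-based recommender: "people who bought this also bought..."
            GenericItemBasedRecommender recommender =
                    new GenericItemBasedRecommender(model, new TanimotoCoefficientSimilarity(model));

            // Items most similar to item 42, e.g. for a product-page API response.
            for (RecommendedItem item : recommender.mostSimilarItems(42L, 10)) {
                System.out.println(item.getItemID() + "\t" + item.getValue());
            }
        }
    }

The servlet wrapper mentioned above would expose a call like mostSimilarItems() over HTTP, so your PHP front end could simply pass it a product ID and render the returned IDs.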
Once you get past about 100M data points you would need to look at distributing the computation with Hadoop. That's a fair bit more complex. Mahout has a distributed recommender too, which you can customize.

Best way to keyword search Amazon SimpleDB using EC2 and Asp.Net?

I am wondering if anyone has any thoughts on the best way to perform keyword searches on Amazon SimpleDB from an EC2 Asp.Net application.
A couple options I am considering are:
1) Add keywords to a multi-value attribute and search with a query like:
select id from keywordTable where keyword ='firstword' intersection keyword='secondword' intersection keyword = 'thirdword'
Amazon Query Example
2) Create a webservice frontend to Katta:
Katta on EC2
3) A queued Lucene.Net update service that periodically pushes the Lucene index to the cloud. (to get around the 'locking' issue)
Load balance Lucene (Stack Overflow post)
Lucene on S3 (blog post)
If you are looking for a strictly SimpleDB solution (as the question is stated), Katta and Lucene won't help you. If you are looking for merely an 'Amazon infrastructure'-based solution, then any of the choices will work.
All three options differ in terms of how much setup and management you'll have to do and deciding which is best depends on your actual requirements.
SimpleDB with a multi-valued attribute named Keyword is your best choice if you need simplicity and minimum administration, and if you don't need to sort by relevance. There is nothing to set up or administer, and you'll only be charged for your actual CPU and bandwidth.
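As a rough sketch of option 1 (shown here with the AWS SDK for Java; the .NET SDK has equivalent calls, and the domain, item, and keyword names are placeholders), you store each keyword as another value of the same attribute and then run the intersection query:

    import java.util.Arrays;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.simpledb.AmazonSimpleDB;
    import com.amazonaws.services.simpledb.AmazonSimpleDBClient;
    import com.amazonaws.services.simpledb.model.Item;
    import com.amazonaws.services.simpledb.model.PutAttributesRequest;
    import com.amazonaws.services.simpledb.model.ReplaceableAttribute;
    import com.amazonaws.services.simpledb.model.SelectRequest;

    public class KeywordSearchSketch {
        public static void main(String[] args) {
            // Placeholder credentials; assumes the keywordTable domain already exists.
            AmazonSimpleDB sdb = new AmazonSimpleDBClient(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Multi-valued attribute: several values stored under the same "keyword" name.
            sdb.putAttributes(new PutAttributesRequest("keywordTable", "item-123", Arrays.asList(
                    new ReplaceableAttribute("keyword", "firstword", false),
                    new ReplaceableAttribute("keyword", "secondword", false),
                    new ReplaceableAttribute("keyword", "thirdword", false))));

            // Intersection query: only items carrying all three keyword values match.
            String query = "select itemName() from keywordTable"
                    + " where keyword = 'firstword'"
                    + " intersection keyword = 'secondword'"
                    + " intersection keyword = 'thirdword'";
            for (Item item : sdb.select(new SelectRequest(query)).getItems()) {
                System.out.println(item.getName());
            }
        }
    }

As the answer notes, this gives you exact keyword matching with essentially no administration, but no relevance ranking.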
Lucene is a great choice if you need more than keyword searching, but you'll have to manage updates to the index yourself. You'll also have to manage the load balancing, backups, and failover that you would have gotten with SimpleDB. If you don't care about failover and can tolerate downtime while you do a restore in the event of an EC2 crash, then that's one less thing to worry about and one less reason to prefer SimpleDB.
With Katta on EC2 you'd be managing everything yourself. You'd have the most flexibility and the most work to do.
Just to tidy up this question... we wound up using Lightspeed's SimpleDB provider, Solr, and SolrNet, by writing a custom search provider for Lightspeed.
Info on implementing ISearchEngine interface for Lightspeed:
http://www.mindscape.co.nz/blog/index.php/2009/02/25/lightspeed-writing-a-custom-search-engine/
And this is the Solr Library we are using:
http://code.google.com/p/solrnet/
Since Solr can be easily scaled using EC2 machines, this made the most sense to us.
Simple Savant is an open-source .NET persistence library for SimpleDB which includes integrated support for full-text search using Lucene.NET (I'm the Simple Savant creator).
The full-text indexing approach is described here.
