Does Firebase offer a way to view and update our data in a more human-readable format, rather than large trees of text?
Parse has a great browser for viewing and updating tables of data, which I found invaluable. It was easy to find specific data, and columns had helper menus for creating pointers and references to objects in other tables. It was also easy to translate spreadsheet data into databases of users, places, etc. I guess I'm looking for something similar in Firebase, if it exists.
Any help would be appreciated.
Firebase is a schemaless database. Since it stores the data as JSON, it offers a visualization of the data as an editable tree in your Firebase Console. It does not offer a table view, although you could easily create that yourself.
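For example, rolling your own table view is a small script. Here is a minimal sketch, not an official Firebase feature, using the Python Admin SDK (firebase-admin) against the Realtime Database; the service-account file, the database URL, and the "users" node are placeholders.

# Sketch: pull one node of the Realtime Database and print it as a table.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-project.firebaseio.com"})

users = db.reference("users").get() or {}  # {key: {field: value, ...}, ...}

# The data is schemaless, so take the union of all field names as the columns.
columns = sorted({field for record in users.values() for field in record})

print("\t".join(["key"] + columns))
for key, record in users.items():
    print("\t".join([key] + [str(record.get(col, "")) for col in columns]))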
I'm currently using simple-schema to manage my MongoDB designs. I'm at a point where I have a number of collections, and it can be a little difficult to visualize all of them while adding features to the app. Is there a graphical way to view the schema collections? Noob question.
I'm not sure what you really want, but if you're looking for a MongoDB GUI client you can try MongoChef. It isn't integrated with the simple-schema package, though, unless you're also using collection2.
Hope this helps.
It is not possible to directly view the schema of a collection until you have inserted a document.
I would advise seeding the collection with some sample data and then using the meteortoys:allthings package to view the collections. (More info here)
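If it helps, here is a rough sketch of seeding sample documents from outside Meteor with pymongo so the collection shape becomes visible in a GUI; the port 3001 and the "meteor" database are the usual meteor run development defaults, and the "places" collection and its fields are made-up examples.

# Sketch: seed Meteor's development MongoDB with a few sample documents.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:3001/")
db = client["meteor"]

db.places.insert_many([
    {"name": "Coffee Corner", "category": "cafe", "rating": 4.5},
    {"name": "City Library", "category": "library", "rating": 4.8},
])

print(db.places.find_one())  # confirm the seeded shape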
I am looking for graph software that will create a graph from a database automatically. Exploring the TinkerPop documentation, the provided tutorials discuss querying ready-made graphs, but there is not much about creating graphs from a database. Is it possible to use any of the tools in the TinkerPop suite to automatically convert data from a database into a graph ready for querying?
Let's say we have an event stream like this:
event_type=create_file name="filename.txt" handle=1
event_type=read handle=1 data="file content"
event_type=write handle=1 data="new file content"
event_type=close handle=1
Is there a way to convert the event stream into a graph automatically by specifying which properties to follow for creating edges? For example, by selecting the "handle" property I should get:
create_file-->read-->write-->close
All the examples I could find show how to do something like
add_node create_file
add_node read
add_node write
add_node close
followed by adding all desired edges manually.
Thank you for your help.
I just came across http://neo4j-contrib.github.io/neo4j-etl-components/ which is a very interesting tool. It takes an RDBMS schema and generates a graph representation, turning foreign keys and join tables into relationships and (other) tables into nodes in the graph. It then generates CSV files to load into a graph database, for either wholesale loads or incremental updates.
I don't know if there's an equivalent tool for TinkerPop. But since much of the work (reading the SQL schema, mapping tables and foreign keys onto vertices and edges) is already done in this open-source project, perhaps it would be a good starting point?
The tool's output appears to depend on having a clean source data model, and its mapping may be naive. It does look configurable, though, so when it guesses wrongly about which tables are vertices and which are edges, you should be able to override it.
What you are suggesting is simply not possible to do automatically. A graph database is very different from a traditional relational database. The biggest difference is that graphs are naturally unstructured, which allows for more flexibility and manipulation; traditional relational (tabular) databases are more rigidly structured, which provides less flexibility but easier control and querying.
As stated in the answer provided here, you also should not be using your original database as a frame of reference. You should instead be thinking about how to reshape your data into a graph so as to take advantage of what graphs offer.
For example, a traversal in a graph, as opposed to a query in a tabular DB, is a lot more flexible (and arguably more powerful) but harder to construct and formalise.
There is a lot of good material providing guidelines for how to approach this problem [1] [2] [3] [4]. Unfortunately, though, there is no good automated migration path at the moment.
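That said, the manual scripting is not much code. Below is a minimal sketch, not an automated TinkerPop converter, using gremlinpython and assuming a Gremlin Server is running at the default ws://localhost:8182 endpoint; it chains events that share the same "handle" value with "next" edges, which for the sample stream yields create_file-->read-->write-->close.

# Sketch: build a graph from the event stream by following the "handle" property.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

events = [
    {"event_type": "create_file", "name": "filename.txt", "handle": 1},
    {"event_type": "read", "handle": 1, "data": "file content"},
    {"event_type": "write", "handle": 1, "data": "new file content"},
    {"event_type": "close", "handle": 1},
]

g = traversal().withRemote(
    DriverRemoteConnection("ws://localhost:8182/gremlin", "g"))

last_vertex_for_handle = {}  # remembers the previous event per handle
for event in events:
    t = g.addV(event["event_type"])
    for key, value in event.items():
        t = t.property(key, value)
    v = t.next()  # create the vertex with all event properties

    prev = last_vertex_for_handle.get(event["handle"])
    if prev is not None:
        g.V(prev).addE("next").to(__.V(v)).iterate()  # link consecutive events
    last_vertex_for_handle[event["handle"]] = v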
I am currently working to rework the data system of our application. Basically, it is designed so that people can add all the custom fields they want, with only a few constant/always-there fields.
Our current design is giving us plenty of maintenance problems. What we do is dynamically (at runtime) add a column to the database for each field. We have to have a meta table and other cruft to maintain all of these dynamic columns.
Now we are looking at EAV, but it doesn't seem much better. Basically, we have many different types of fields, so there would be a StringValues table, an IntegerValues table, and so on... which makes things that much worse.
I am wondering if using JSON or XML blobs in the database may be a better solution, specifically because in most use cases, when we retrieve anything out of these tables, we need the entire row. The problem is that we need to be able to create reports for this data as well. No solution really makes custom queries look easy, and searching across such a blob database will surely be a performance nightmare when reports are run.
Each "row" needs to have anywhere from about 15 to 100 (possibly more) attributes/columns associated with it.
We are using SQL Server 2008, and our application interfacing with the database is a C# web application (so, ASP.NET).
What do you think? Use EAV, blobs, or something else entirely? (Also, yes, I know a schema-free database like MongoDB would be awesome here, but I can't convince my boss to use it.)
What about the xml datatype? Advanced querying is possible against this type.
We've used the xml type with good success. We do most of our heavy lifting at the code level, using LINQ to parse out values. Our schema is somewhat fixed, so that may not be an option for you.
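If you do want to push some of that querying into the database instead, here is a hedged sketch of what it can look like, shown from Python via pyodbc for brevity; the Items table, the Attributes xml column, and the <attrs><attr name="..."> layout are invented for illustration, while the .value() and .exist() XML methods are standard SQL Server features.

# Sketch: query inside an xml column server-side with XQuery.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=AppDb;Trusted_Connection=yes;")
cursor = conn.cursor()

cursor.execute("""
    SELECT Id,
           Attributes.value('(/attrs/attr[@name="color"]/text())[1]', 'nvarchar(100)') AS Color
    FROM   Items
    WHERE  Attributes.exist('/attrs/attr[@name="color"][. = "red"]') = 1
""")
for row in cursor.fetchall():
    print(row.Id, row.Color)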
One interesting feature of SQL Server is the sql_variant type. It's fully supported in .NET and quite easy to use. The advantage is that you don't need to create StringValue, IntValue, etc. columns; a single Value column can contain all the simple types.
This very specific type favors the EAV option, IMHO.
It has some drawbacks though (sorting, distinct selects, etc.). So if you want to use it, make sure you read all the documentation and understand its limits.
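For illustration, a compact sketch of the single-Value EAV table this describes, driven from Python via pyodbc for brevity; the table and column names are made up, and the CAST on the way out is there because not every client driver materialises sql_variant values directly.

# Sketch: one sql_variant Value column instead of per-type EAV tables.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=AppDb;Trusted_Connection=yes;")
cur = conn.cursor()

cur.execute("""
    IF OBJECT_ID('dbo.EntityAttribute') IS NULL
        CREATE TABLE dbo.EntityAttribute (
            EntityId      int           NOT NULL,
            AttributeName nvarchar(100) NOT NULL,
            Value         sql_variant   NULL,
            PRIMARY KEY (EntityId, AttributeName)
        );
""")

# The same Value column holds strings, ints, dates, ...
cur.execute("INSERT INTO dbo.EntityAttribute (EntityId, AttributeName, Value) VALUES (?, ?, ?)",
            1, "Name", "Widget")
cur.execute("INSERT INTO dbo.EntityAttribute (EntityId, AttributeName, Value) VALUES (?, ?, ?)",
            1, "Weight", 42)

cur.execute("""
    SELECT AttributeName,
           CAST(Value AS nvarchar(200)) AS ValueText,
           CAST(SQL_VARIANT_PROPERTY(Value, 'BaseType') AS sysname) AS StoredType
    FROM dbo.EntityAttribute
    WHERE EntityId = ?
""", 1)
for row in cur.fetchall():
    print(row.AttributeName, row.ValueText, row.StoredType)
conn.commit()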
Create a table with your known columns plus "X" sparse columns using sequential names such as DataColumn0001, DataColumn0002, etc. When there is a definition for a new column, just rename one of the spare columns and start inserting data. The great advantage of sparse columns is that they are indexable.
More info at this link.
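As a rough sketch of that approach (again from Python via pyodbc for brevity; the table, the column types, and the renamed field are illustrative only):

# Sketch: pre-allocated sparse columns that get renamed when a user defines a field.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=AppDb;Trusted_Connection=yes;")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE dbo.CustomRecords (
        Id             int IDENTITY PRIMARY KEY,
        Name           nvarchar(200) NOT NULL,      -- a "known" column
        DataColumn0001 nvarchar(400) SPARSE NULL,
        DataColumn0002 nvarchar(400) SPARSE NULL,
        DataColumn0003 nvarchar(400) SPARSE NULL
    );
""")

# When the user defines a new field, claim the next unused slot by renaming it.
cur.execute("EXEC sp_rename 'dbo.CustomRecords.DataColumn0001', 'FavoriteColor', 'COLUMN';")
conn.commit()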
What you're doing is fighting a database that doesn't support your data model. You should work with a store that meets your needs; options include NoSQL databases such as RavenDB, MongoDB, DocumentDB, and Couchbase, or Postgres on the RDBMS side, to name several.
You are using the tool in a capacity it was never designed for, one in which it actively works against your success. NoSQL databases frequently use JSON as their underlying storage because JSON is inherently schemaless. Want to add a property? Go ahead. Want to add a whole sub-collection? Go ahead. NoSQL databases were created, in part, specifically to remove the rigid schema requirements of an RDBMS.
2015 Edit: Postgres now natively supports JSON, which makes it a viable option on the RDBMS side. My answer still stands: you need to use the correct tool for the problem. It is a polyglot persistence world.
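To make the 2015 edit concrete, here is a small, hedged illustration of the Postgres JSON route using psycopg2 and a jsonb column; the table and field names are hypothetical.

# Sketch: schemaless attributes in a jsonb column, queried server-side.
import json
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS records (
        id    serial PRIMARY KEY,
        attrs jsonb NOT NULL
    );
""")
cur.execute("INSERT INTO records (attrs) VALUES (%s)",
            (json.dumps({"name": "Widget", "color": "red", "weight": 42}),))

# ->> extracts a field as text, so custom reports can filter on any attribute.
cur.execute("SELECT id, attrs->>'name' FROM records WHERE attrs->>'color' = %s", ("red",))
print(cur.fetchall())
conn.commit()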
I'm wondering what would be the typical scenario for using an end-user report designer.
What I'm thinking of is a base report with all the columns I can offer, plus a basic default view of the report (formatting, column order, etc.), and then letting the user change that formatting and order, and remove or add data from the available columns, etc.
Is that a common way to approach what is called an end-user report designer, or am I off track?
I know it depends on the user (whether it's someone who can handle SQL, for example), but is it common to have a scenario where the user can build everything from the SQL query to the formatting?
Thanks!
Sebastian
The first thing I would think about is to put them in a very tightly controlled sandbox, both for security and also to prevent monstrous, server-eating queries. Beyond that, I think giving them a "menu" of limited options is a good path. I would not give them direct access to SQL.
The first question is whether you want users creating SQL that could become a runaway query (think Cartesian join gone wild).
Depending on your tooling, you might want to publish your report to Excel. A pivot table or a simple spreadsheet may provide the flexibility you are looking for, but in a safe environment. Most users can handle removing columns, formatting, etc., in Excel, and there are lots of self-help resources that you might not find for a report-writer tool.
I'm thinking through some database design concepts and believe that creating sample data simulating the real-world volume of my application will help solidify some design decisions.
Does anyone know of a tool to create sample data? I'm looking for something that's database and platform neutral if possible (from MySQL to DB/2 and Windows to UNIX) so I can test the design across different systems/architectures. I'm envisioning a tool where you can:
point to a database table or tables (some configuration of the DSN, etc.)
introspect the fields and, based on each field, point-and-click or add some configuration
have a means of expressing how to create sample data (MySQL Sample Data Creator is the kind of thing I envision, but I think there'd need to be some more options, like commit frequency, so as to create very large data sets... millions or billions of rows... I don't think this tool would scale to the volume of data I want to create)
push a button and go (depending on your parameters, this may take a long time)
Any thoughts? Sure, I could write an app to do this but it seems so generic that I shouldn’t have to reinvent the wheel.
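To make that workflow concrete, here is a rough sketch of the sort of thing I could hand-roll, not an existing product: SQLAlchemy reflection introspects the target table, Faker picks type-appropriate values, and a batch size stands in for the commit-frequency knob. The connection URL and table name are placeholders.

# Sketch: introspect a table and bulk-load it with generated sample data.
import random

from faker import Faker
from sqlalchemy import MetaData, Table, create_engine, insert
from sqlalchemy.types import Date, Integer, Numeric, String

engine = create_engine("mysql+pymysql://user:pass@localhost/testdb")  # any SQLAlchemy URL
meta = MetaData()
table = Table("customers", meta, autoload_with=engine)  # introspect the fields
fake = Faker()

def sample_value(column):
    """Pick a generator based on the introspected column type."""
    if isinstance(column.type, Integer):
        return random.randint(0, 10_000)
    if isinstance(column.type, Numeric):
        return round(random.uniform(0, 1_000), 2)
    if isinstance(column.type, Date):
        return fake.date_object()
    if isinstance(column.type, String):
        return fake.word()[: (column.type.length or 20)]
    return None

TOTAL_ROWS, BATCH_SIZE = 1_000_000, 10_000  # commit-frequency knob for huge loads
rows_done = 0
while rows_done < TOTAL_ROWS:
    batch = [
        {c.name: sample_value(c) for c in table.columns if not c.primary_key}
        for _ in range(min(BATCH_SIZE, TOTAL_ROWS - rows_done))
    ]
    with engine.begin() as conn:  # one transaction (commit) per batch
        conn.execute(insert(table), batch)
    rows_done += len(batch)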
DBMonster is fine, but I prefer Databene Benerator, as I explained in this answer to a similar question.
Something like DBMonster?
This page also has a listing of many DB data generators.
I cannot help you with MySQL or DB/2 but, in case anyone gets to this answer with a need for MS SQL Server, I can recommend the Data Generator from Red Gate.
Our test data generator, Datanamic DB Data Generator, can do this for you. It works with MySQL. It uses default "generator settings" when loading your tables the first time. You can then "fine-tune" the fields and/or choose other "generators".