Is Firestore a good choice for time series data? [closed] - firebase

Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I have a small project involving simple financial time-series data with some real-time components on the front end. I was hoping to use Firebase, since it offers a lot out of the box without much setup, but on investigation it doesn't seem to be a good choice for storing time-series data.
Admittedly, I have more experience with relational databases, so it's possible I am asking an extremely basic question. If I were to use Firestore to store time-series data, could someone provide an example of how one might structure it for efficient querying?
Am I better served using something like Postgres?

Probably your best bet would be to use a dedicated time-series database. Warp 10 (https://www.warp10.io) has already been mentioned.
The benefit of something like Warp 10 is the ability to query on the time component of your data. I believe Firestore only offers simple greater-than/less-than range queries on a timestamp field.
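Since the question asks for an example: below is a minimal sketch in Python with the google-cloud-firestore client showing one common layout, a subcollection of tick documents per symbol with a timestamp field you can range-filter and sort on. The tickers/ticks collection names and the price field are illustrative assumptions, not a required schema.
```python
# A minimal sketch of time-series data in Firestore (Python client).
# Collection and field names here are made up for illustration.
from google.cloud import firestore

db = firestore.Client()

# One document per data point, under a per-symbol subcollection.
def record_tick(symbol, ts, price):
    (db.collection("tickers").document(symbol)
       .collection("ticks").document(ts.isoformat())
       .set({"ts": ts, "price": price}))

# Range filtering on the timestamp field is the only time-based
# querying Firestore offers, but it is efficient with an index.
def ticks_between(symbol, start, end):
    query = (db.collection("tickers").document(symbol)
               .collection("ticks")
               .where("ts", ">=", start)
               .where("ts", "<", end)
               .order_by("ts"))
    return [doc.to_dict() for doc in query.stream()]
```
Keying each document by its ISO timestamp keeps writes idempotent; at high write rates you would likely bucket many points into coarser documents, but that is beyond this sketch.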

Related

Communication between Different programs in Different Computers (C# or Python) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
I'm building a program that I want to be able to exchange information with other programs running on another computer. I started with C# and a library called SimpleTCP. The main issue is that it is too simple: it can only send and receive messages.
I'm looking for something where I can predefine functions that each program can call on the other.
I searched Google and Stack Overflow but was unable to find an appropriate subject to study. What should I be looking for to learn this? Thank you.
The most complete protocol for what you want is gRPC (https://github.com/grpc/grpc). There is a learning curve, but it is worth it in my opinion.
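As a hedged illustration of those "predetermined functions", here is what a gRPC service can look like in Python. It assumes a calc.proto file like the one in the leading comment has been compiled with grpc_tools.protoc into calc_pb2.py and calc_pb2_grpc.py; the Calc service and Add method are invented for the example.
```python
# Sketch of a gRPC service in Python. Assumes a calc.proto like:
#
#   syntax = "proto3";
#   service Calc {
#     rpc Add (AddRequest) returns (AddReply);
#   }
#   message AddRequest { int32 a = 1; int32 b = 2; }
#   message AddReply   { int32 sum = 1; }
#
# compiled with:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. calc.proto
from concurrent import futures
import grpc
import calc_pb2
import calc_pb2_grpc

class CalcServicer(calc_pb2_grpc.CalcServicer):
    def Add(self, request, context):
        # The remote program calls this as if it were a local function.
        return calc_pb2.AddReply(sum=request.a + request.b)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    calc_pb2_grpc.add_CalcServicer_to_server(CalcServicer(), server)
    server.add_insecure_port("[::]:50051")  # reachable from other machines
    server.start()
    server.wait_for_termination()

# Client side, run on the other computer (hostname is a placeholder):
def add_remotely(a, b):
    with grpc.insecure_channel("server-host:50051") as channel:
        stub = calc_pb2_grpc.CalcStub(channel)
        return stub.Add(calc_pb2.AddRequest(a=a, b=b)).sum
```
Because the .proto file is language-neutral, the C# side can generate its own stubs from the same file and call the same functions.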
There is another way, though it's a little different.
When programs like this are written in two different languages, you can put a shared database between the two programs.
In that setup it's very easy to communicate and to send and receive data.
You can use MySQL, Oracle, MariaDB, or any database you prefer.
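A minimal sketch of that pattern in Python, with SQLite from the standard library standing in for MySQL/MariaDB; the messages table and its columns are invented for the example. Each program inserts rows and periodically polls for rows written by the other.
```python
# Shared-database messaging: each program inserts rows the other polls for.
# SQLite stands in for MySQL/MariaDB; table and column names are made up.
import sqlite3

def setup(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS messages (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        sender TEXT, body TEXT, consumed INTEGER DEFAULT 0)""")
    conn.commit()

def send(conn, sender, body):
    conn.execute("INSERT INTO messages (sender, body) VALUES (?, ?)",
                 (sender, body))
    conn.commit()

def poll(conn, me):
    # Fetch unread messages from the other program and mark them consumed.
    rows = conn.execute(
        "SELECT id, body FROM messages WHERE sender != ? AND consumed = 0",
        (me,)).fetchall()
    for msg_id, _ in rows:
        conn.execute("UPDATE messages SET consumed = 1 WHERE id = ?", (msg_id,))
    conn.commit()
    return [body for _, body in rows]
```
Note this gives you message passing, not remote function calls; polling also adds latency, which is why gRPC is the more complete answer above.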

Definition of Data Engineering in Big Data [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I fully understand that this question may be closed, as it can be more a matter of opinion than a technical question with an objective answer. However, I want to ask it in case someone can help and provide a good response. I think it is important to define what you do in a succinct way, so here it goes.
Q: If you are asked, "what's Data Engineering?", what would your definition be? (Not "what does a Data Engineer do?")
This one came to mind, but does someone have a better one? I am talking in the context of Hadoop/Big Data.
A:
Data engineering is the process of taking Big Data that is stored in either a structured or unstructured format, processing it in batch or real-time, and generating data in a new format that can be used for further consumption, visualization, Machine Learning, or Data Science.
I would like to share what I think is a definition of Data Engineering related to Big Data:
Data Engineering supports and provides the expertise to elaborate, construct, and maintain a Big Data system. Data Engineering uses the tools, techniques, frameworks, and skills that are essential to a good "Data Infrastructure" or "Data Architecture" behind Big Data.
A good way to define Data Engineering is to understand what a Data Engineer does. Here is a great infographic about it: https://www.datacamp.com/community/blog/data-engineering-vs-data-science-infographic
Some of the responsibilities listed include:
Develop, construct, test, and maintain architectures;
Ensure the architecture will support the requirements of the business;
Develop data set processes for data modeling, mining, and production;
Recommend ways to improve data reliability, efficiency, and quality.

Big Data use cases [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
We are trying to create a dashboard using Big Data. The data are currently transacted in SQL Server and the front end is in MVC. As the data flow is too high to analyse in SQL Server itself, we decided to use Big Data tooling. I chose Cloudera Manager (CDH), Sqoop to import data from SQL Server into Hive, and Impala to run the analytics. We decided to surface the results with MicroStrategy to provide the charts to clients on the mobile platform.
Any ideas or suggestions to improve the process are welcome.
Looks like you're off to a great start. Remember your analytics can be done with a multitude of tools, not just Impala.
Once you're in Hadoop, Hive and Pig give you a lot of power (more still with UDFs) with an easy learning curve.
If you eventually want to tackle iterative use cases (and exploit machine learning), you might want to check out Spark, since those two things are in its wheelhouse and it is not constrained to MapReduce.
Tons of great tools available. Enjoy the journey.
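For instance, the Impala analytics you already have can also be driven programmatically rather than only through MicroStrategy. Here is a hedged sketch using the impyla client (a third-party DB-API driver for Impala); the host, port, and transactions table are illustrative assumptions.
```python
# Sketch of querying the Impala layer from Python via impyla.
# Host, port, and the transactions table/columns are assumptions.
from impala.dbapi import connect

conn = connect(host="impala-daemon.example.com", port=21050)
cur = conn.cursor()

# Aggregate the Sqoop-imported transactions for the dashboard.
cur.execute("""
    SELECT region, COUNT(*) AS n_orders, SUM(amount) AS revenue
    FROM transactions
    GROUP BY region
""")
for region, n_orders, revenue in cur.fetchall():
    print(region, n_orders, revenue)
```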
I would consider using two stages: Data Analysis and Data Visualisation.
Using two stages makes the solution more flexible and decouples the responsibilities.
Data Analysis
Ingest the data (including cleaning); Sqoop can do the ingest step, though extra steps might be required to clean the data.
Explore/analyse the data; Apache Spark is a very flexible and powerful tool for this (see the sketch after this list).
Store the analysis result in a specified format.
Data Visualisation
Load the data from the data analysis phase.
Visualise it, using Highcharts/Kibana/Dashing, or use D3 to create a customised dashboard.
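As a rough sketch of the analysis stage, here is what the Spark step might look like in PySpark once Sqoop has landed a Hive table; the transactions table, its columns, and the output path are invented for illustration.
```python
# Rough PySpark sketch of the analysis stage above; the Hive table
# "transactions" and its columns (region, amount, ts) are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("dashboard-analysis")
         .enableHiveSupport()
         .getOrCreate())

# Load what Sqoop ingested into Hive.
txns = spark.table("transactions")

# Explore/analyse: a daily revenue aggregate per region.
daily = (txns
         .groupBy("region", F.to_date("ts").alias("day"))
         .agg(F.sum("amount").alias("revenue"),
              F.count("*").alias("n_orders")))

# Store the result in a specified format for the visualisation stage.
daily.write.mode("overwrite").json("/data/dashboard/daily_revenue")
```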

What is the scope and benefits of big data? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
What can you do after learning big data related concepts, Hadoop, ML, NLP etc? Where can you implement these?
Not really a software-related question, but it's very relevant to current technology and to why some software exists. So here is an opinion.
We now live in a world where it is possible to monitor and digitally record information on an epic scale that continues to expand with concepts like The Internet of Things.
With this information it becomes possible to look at the evidence behind decisions that previously would have been made by gut instinct or opinions. What impact does road design have on traffic flow? Which medical drugs get best results in the real world and not just in drugs trials? Is there a correlation between office temperature and productivity? and so on for millions of questions in different domains.
Around the world, organisations are using data they never previously had to get better at whatever they do (good or bad).
The big data concepts are the tools for managing all this information. Big Data is not just large in volume, it is often unstructured and in different forms.
So to answer your question. You can implement these concepts by working with organisations that are using Big Data. Hopefully you can see the potential of Big Data along with the mind bending headache it can create when trying to make sense of it.

Best method to work with R [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I have decided to start my R programming career now. I understand the usage of R, but I have some doubts about the procedure to follow. As a start, I am planning to retrieve data from Google Analytics and show it on the front end using R Shiny. What would be a good flow for fetching the data? Is it better to fetch the Google Analytics data using R and process it there, or to fetch the data using another language like PHP or Java? Likewise, is R good for fetching data from a database, or is it better practice to fetch the data with another commonly used language and only process it with R?
Sorry the question is a little descriptive, but I am hoping for some help from you guys.
Thanks,
Organize it in a package (http://r-pkgs.had.co.nz/).
See https://developers.google.com/analytics/solutions/r-google-analytics?hl=en.
The database question depends on exactly what you are using, but R has a lot of resources.
