Closed 11 years ago.
I have a stock market application that will be used frequently by millions of members at a time.
Which is the better option for retrieving data from my database, in terms of retrieval time and database load: a DataReader or a DataSet?
If you will have a large number of people reading the data, then you should use the DataReader approach. That way you can get in and out quickly, without the overhead of schema inference. I would also recommend that once you pull the data from the server you cache it. Even caching it for only one second reduces the number of database connections retrieving the same data. Otherwise, if you are not careful, you could quickly saturate your connection pool and run into locking as well.
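A minimal sketch of that pattern, assuming SQL Server via System.Data.SqlClient and the System.Runtime.Caching MemoryCache; the connection string, table, and column names are placeholders, not anything from the question:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Runtime.Caching;

    public static class QuoteStore
    {
        // Placeholder connection string; substitute your own.
        private const string ConnectionString =
            "Server=.;Database=Stocks;Integrated Security=true";

        public static IList<decimal> GetLatestPrices(string symbol)
        {
            string cacheKey = "prices:" + symbol;
            var cached = MemoryCache.Default.Get(cacheKey) as IList<decimal>;
            if (cached != null)
                return cached;                 // serve repeated requests from memory

            var prices = new List<decimal>();
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT Price FROM Quotes WHERE Symbol = @symbol", connection))
            {
                command.Parameters.AddWithValue("@symbol", symbol);
                connection.Open();
                // Forward-only, read-only: get in and out of the connection quickly.
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        prices.Add(reader.GetDecimal(0));
                }
            }

            // Even a one-second cache absorbs bursts of identical requests.
            MemoryCache.Default.Set(cacheKey, prices,
                DateTimeOffset.UtcNow.AddSeconds(1));
            return prices;
        }
    }

Even that one-second window means concurrent requests for the same symbol mostly hit the cache rather than opening fresh connections.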
Closed 9 years ago.
How can I get the number of users currently on my website? This is in ASP.NET.
You're not going to do much better than incrementing a counter in Session_Start and decrementing it in Session_End, unless you use some other means such as a database.
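A minimal sketch of that counter, assuming it lives in Global.asax.cs; the field and property names are just illustrative:

    // Global.asax.cs
    using System;
    using System.Threading;
    using System.Web;

    public class Global : HttpApplication
    {
        private static int _activeSessions;   // illustrative field name

        protected void Session_Start(object sender, EventArgs e)
        {
            Interlocked.Increment(ref _activeSessions);
        }

        protected void Session_End(object sender, EventArgs e)
        {
            // Note: Session_End is only raised when session state is InProc.
            Interlocked.Decrement(ref _activeSessions);
        }

        public static int ActiveSessions
        {
            get { return Thread.VolatileRead(ref _activeSessions); }
        }
    }

Keep in mind that Session_End only fires with in-process session state, which is the usual caveat with this approach.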
When you are authenticating your requests, you could update a timestamp in the database to show that the user has been active; after a given time (say 10 to 15 minutes), the query that collects the number of current users ignores that row in the database (thus decrementing the count).
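A sketch of that timestamp approach, assuming a hypothetical Users table with UserId and LastActivityUtc columns and a 15-minute activity window; the connection string is a placeholder:

    using System;
    using System.Data.SqlClient;

    public static class ActiveUsers
    {
        // Placeholder connection string; substitute your own.
        private const string ConnectionString =
            "Server=.;Database=MySite;Integrated Security=true";

        // Call this from your authentication code on each request.
        public static void Touch(int userId)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "UPDATE Users SET LastActivityUtc = @now WHERE UserId = @id", connection))
            {
                command.Parameters.AddWithValue("@now", DateTime.UtcNow);
                command.Parameters.AddWithValue("@id", userId);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        // Rows older than 15 minutes are ignored, so the count drops off naturally.
        public static int CurrentCount()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM Users WHERE LastActivityUtc > @cutoff", connection))
            {
                command.Parameters.AddWithValue("@cutoff", DateTime.UtcNow.AddMinutes(-15));
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }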
Closed 9 years ago.
I've heard from many people that R is built for processing petabytes of data; on the other hand, I also hear very often that if you want to process, for example, 8 GB of data, you had better have at least 8 GB of memory, otherwise you'll run into problems.
My question is: if I need to process something like 20 GB of data (which I think is fairly common in many projects), how much memory and how much processing power do I need? If you have any previous experience, I'd also be happy to know what it would take for 2 petabytes of data.
I don't think you can process 2 petabytes of data at once with any language (well, maybe you could with some specific software and/or hardware). Parallel solutions, or processing in smaller pieces, are always needed. In R, objects are stored in virtual memory, so there is a clear limit on how much data you can have in R at the same time; see the Memory Limits in R help page.
Closed 10 years ago.
What is the quickest way to convert a SQLite database to MS Access format, given that I don't have the table structure of the database?
My best guess is:
Connect to the SQLite database, get the table schemas, then duplicate the table data row by row.
I need the quickest way possible.
Regards,
raj
A simple way:
Install an ODBC driver for SQLite
In Access, create linked tables. You will be given the choice of linking to the tables or importing a copy of the tables and their data into Access.
Closed 10 years ago.
Forget the other details of the DataSet vs. DataReader argument and tell me, factually: does using a DataSet save you from issuing as many database queries as a DataReader does?
In my experience, it is clear that using a DataReader requires more database queries than using a DataSet. With a DataSet you can create one query (or stored procedure), disconnect, and then simply query the DataSet, leaving less back-and-forth with the database.
So, factually, is it true that a DataReader creates more database traffic than a DataSet, given the same scenario, of course?
A DataSet uses a DataReader to populate itself.
So the answer is no, a DataReader does not create more database traffic. It is the developer's understanding and usage (or misuse) of it that would create more database traffic.
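As a sketch (assuming System.Data.SqlClient and a hypothetical Products table), both of the following issue exactly one query; SqlDataAdapter.Fill simply runs a DataReader internally to populate the DataSet:

    using System.Data;
    using System.Data.SqlClient;

    static class Demo
    {
        // Placeholder connection string and table; substitute your own.
        const string ConnectionString = "Server=.;Database=Shop;Integrated Security=true";
        const string Query = "SELECT Id, Name FROM Products";

        // One query, read through a connected, forward-only DataReader.
        static void ReadWithReader()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(Query, connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        int id = reader.GetInt32(0);
                        string name = reader.GetString(1);
                        // ... use the row ...
                    }
                }
            }
        }

        // One query, loaded into a disconnected DataSet;
        // SqlDataAdapter.Fill uses a DataReader under the hood.
        static DataSet ReadWithDataSet()
        {
            var dataSet = new DataSet();
            using (var adapter = new SqlDataAdapter(Query, ConnectionString))
            {
                adapter.Fill(dataSet);   // opens, reads, and closes the connection for you
            }
            return dataSet;              // can be re-read in memory without further queries
        }
    }

Both paths send the same single SELECT; the difference is only whether the rows stay connected (the reader) or are copied into memory and re-queried there (the DataSet).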
Closed 11 years ago.
I would like to know the best practices for building predictive modeling solutions organically.
Some of the questions I have are:
If I have multiple R model files, what are efficient ways of storing them?
Save them as .RData files on the file system
Serialize them to a database as binary objects
Since the data is processed into an interim, model-specific format, is it helpful to use standards such as PMML?
Also, should one consider practices such as MVC? (I'm not a trained software developer, so any insights into such development practices would be very helpful.)
I apologize for the open-ended nature of this question. I wish to understand even simple things such as the recommended folder structure for data staging, the model store, the scripts collection, and other elements of a data mining solution.
I would be very grateful to members of the community for sharing their experiences and recommendations.
Thank you for your time.