Is it possible to link to a Business Objects Universe in R?
I can connect R to a SQL database, but that doesn't give me the Universe where all the joins are already created. I have end users that currently get their data through Business Objects, but R would better suit their needs. And since they are already familiar with the field names and how the tables are linked in Business Objects, I would like them to just link R to the existing BOBJ universe.
Thanks!
Since I'm not allowed to recommend specific software (new to me, but fair is fair :)) I'll just mention two options:
1) Search the internet for software that gives ODBC access to Business Objects universes. These are good keywords on my favorite search engine: 'odbc leverages SAP BusinessObjects Security'
It has the distinct advantage of making sure that all user access is governed by Business Objects, but data will have to travel through that server, and you will need to pay for licenses from SAP as well as from the vendor of the new ODBC driver...
2) Give the users in question ODBC access to the same tables that the BO reports/universe run against, and make sure they are allowed to see the SQL in their data providers. Then tell them to 'cook up a report, and then copy & paste the SQL to R' (see the sketch below).
Option two may save you some licenses and give you better performance, but I would personally go for option 1) :)
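To make option 2 concrete, here is a minimal sketch of running SQL lifted from a BO data provider over plain ODBC. It's shown in Python with pyodbc purely for illustration; R's RODBC package (odbcConnect() plus sqlQuery()) follows the same pattern. The DSN, credentials, and query below are hypothetical placeholders, not anything from your environment.

```python
# A minimal sketch, not production code: run SQL copied from a BO data
# provider directly against the reporting tables over ODBC.
import pyodbc

conn = pyodbc.connect("DSN=warehouse;UID=report_user;PWD=secret")  # hypothetical DSN
cursor = conn.cursor()

# SQL pasted from the Business Objects data provider goes here
sql = """
SELECT c.customer_name, SUM(o.total) AS revenue
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_name
"""
for row in cursor.execute(sql):
    print(row.customer_name, row.revenue)

conn.close()
```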
I am creating a SQLite3 database and would like to know if it is possible to create a report view similar to Microsoft Access. If it is possible, could someone answer with some resources I could review to learn how?
SQLite does not have reports. It's just a database engine. Though many people use the term "database" colloquially to cover a user interface plus reports for accessing data, that is not what technical folks generally mean. Typically a database is just a means of storing and retrieving data in a rawer format, with user interfaces and report engines living in a separate layer of software.
There are many reporting engines out there, and Stack Overflow isn't a forum that specializes in software recommendations, but if you search specifically for reporting engines that can connect to SQLite, you may find what you're looking for.
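One small sketch of where those layers meet, assuming the goal is something view-like for a reporting tool to consume: SQLite does support SQL views, which can do the data-shaping half of an Access report view, even though the rendering still belongs to a separate tool. The table and column names here are hypothetical.

```python
# Define a SQL view in SQLite that a separate reporting tool (or a
# simple script) can render; SQLite itself only stores and queries.
import sqlite3

conn = sqlite3.connect("example.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS orders (
        id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    CREATE VIEW IF NOT EXISTS order_summary AS
        SELECT customer, COUNT(*) AS order_count, SUM(total) AS revenue
        FROM orders GROUP BY customer;
""")
for customer, order_count, revenue in conn.execute("SELECT * FROM order_summary"):
    print(customer, order_count, revenue)
conn.close()
```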
Team A has an enterprise app that uses ADO.NET for data access and executes stored procedures. The data access is encapsulated in its own project (let's call it DAL.dll).
Team B is creating another, unrelated app that reuses the stored procedures in the enterprise app. This app currently uses the MS application block for data access. The issue we run into is that whenever Team A makes any change to the input/output params of the stored procedures, there is a runtime error in Team B's app, and that app needs to be updated to accommodate the added (or removed) params. Most of these changes go unnoticed until a user complains. At the very least, we would like the app to throw a compilation error so that the build process warns us of the changes.
One way to do this is to have Team B's project add a reference to the DAL.dll
I'd like to know if there are any other cleaner ways of solving the issue. We are ready to replace Team B's MS Data application block to use a different technology (Entity Framework?) if necessary.
Among the other answers, I'd strongly suggest getting those stored procedures into source control, in a Database Project. You then may be able to use the features of your source control system to do several things:
Lock some of the code so that it cannot be changed
Give you notifications if the code is changed
Warn you if the stored procedures change in a way that would prevent them from being called
Branch the stored procedures so that each team can have their own version of changed code, while keeping the unchanged stored procedures common. You of course will need to separate the different versions in the database.
I agree with the other posters on this thread that you should not share stored procedures across different .NET DLLs; that is just a recipe for disaster. I would also shy away from ORMs like Entity Framework if you are doing anything at all complicated with your database schema, because ORMs excel at translating a simple object model from your .NET application classes into SQL tables and SPs, but traditionally do poorly at optimizing them for performance on the database side. There will be people who claim otherwise, and they may have a valid point if they are experts at wrangling an ORM to do what they want, but chances are you are not, and it will cause you headaches in the long run.
A shared data access layer might work, but conceptually you are then just changing the implementation of the dependency from some code a DBA wrote to some code a .NET programmer wrote. Yes, you can use integration tests to achieve better verifiability, but the same case could be made for SQL with tools like Red Gate's SQL Test. I would shy away from this approach if the two applications are already experiencing pain from sharing SPs; that is an indication that the dependency should just be done away with.
If it were up to me, I'd just create a new schema for Team B's app. You can read more about schemas in SQL Server here: MSDN Schema description for 2008 R2. You can think of them as namespaces for SQL Server, but with some additional bells and whistles like permissions and access control. Separating your different applications into separate schemas on the same shared database will probably make for the most flexible implementation in the long run; a rough sketch follows.
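Here is a minimal sketch of that separation, with the T-SQL executed via pyodbc just to keep it self-contained (in practice you'd more likely run this from SSMS or a deployment script). The schema, procedure, table, and login names are all hypothetical.

```python
# Give Team B its own schema so its procedures can evolve independently
# of Team A's, while both share the same underlying tables.
import pyodbc

conn = pyodbc.connect("DSN=shared_db", autocommit=True)  # hypothetical DSN
cur = conn.cursor()

cur.execute("CREATE SCHEMA teamb AUTHORIZATION dbo")
cur.execute(
    "CREATE PROCEDURE teamb.GetCustomer @CustomerId int AS "
    "SELECT CustomerId, Name FROM dbo.Customers WHERE CustomerId = @CustomerId"
)
# Team B's login can execute only procedures in its own schema
cur.execute("GRANT EXECUTE ON SCHEMA::teamb TO teamb_app_user")
```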
unrelated app that's reusing the stored procedures in the enterprise app
If these two applications are really unrelated, why are they sharing stored procedures, or even the same database? I know this is a long read, but I recommend you read this: A Better Path to Enterprise Architectures
The partitioning concept in there relates to the bounded context in domain-driven design:
Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confusing. It is often unclear in what context a model should not be applied.
Therefore: Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don’t be distracted or confused by issues outside.
Problems are to be expected when you don't deal with this explicitly. You're lucky you're seeing early failures; it can turn into problems that are much harder to find in the long run.
Analyze the problem again with the above in mind. Consider whether you're missing some explicit context where this common functionality should live.
My question is: which team owns the stored procedures and the shared database? As a matter of good architecture/design, you usually should not have two different apps sharing the same database or procedures.
A better way to share data/functionality between two different applications is through a service or API, so that the team that owns the functionality is responsible for maintaining it.
Also, good communication between the two teams is highly recommended.
Depending on the owner of the DAL project, you could host web services and share the API. That way, you separate the Data Access Layer from the business logic, which allows anyone to use the same DAL without having to publish it to each different location.
From my point of view, it looks like both Team A and Team B should share the same core model and look at Multitier architecture as a possible solution.
It sounds like it would make sense to create a shared DAL that both applications can share.
I would add unit tests (or really integration tests) to make sure the DAL is compatible with the apps after changes. That way your tests would fail if incompatible changes have been made; a sketch of one such test follows.
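As an illustration only (not an existing test in either codebase), one cheap integration test compares each shared stored procedure's parameter list, as reported by INFORMATION_SCHEMA, against a checked-in manifest, so the build fails the moment Team A changes a signature. The DSN and procedure names below are hypothetical.

```python
# pytest-style integration test: fail the build when a shared stored
# procedure's parameters drift from the agreed contract.
import pyodbc

EXPECTED = {  # procedure -> ordered (parameter, type) contract
    "dbo.GetCustomer": [("@CustomerId", "int")],
    "dbo.SaveOrder": [("@OrderId", "int"), ("@Total", "decimal")],
}

SQL = """
SELECT PARAMETER_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.PARAMETERS
WHERE SPECIFIC_SCHEMA = ? AND SPECIFIC_NAME = ?
ORDER BY ORDINAL_POSITION
"""

def test_stored_procedure_signatures():
    cur = pyodbc.connect("DSN=enterprise_db").cursor()  # hypothetical DSN
    for proc, expected in EXPECTED.items():
        schema, name = proc.split(".")
        actual = [(r.PARAMETER_NAME, r.DATA_TYPE) for r in cur.execute(SQL, schema, name)]
        assert actual == expected, f"{proc} signature changed: {actual}"
```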
"I'd like to know if there are any other cleaner ways of solving the issue."
The cleanest way is for Team B to sit down with Team A and encapsulate the relevant business logic into a shared API. It doesn't matter so much how you implement that API; what does matter is that the API's interface is documented and versioned so everyone knows what to expect.
One reasonable mechanism for this in a .NET environment is to use Microsoft's WebAPI.
In short, the question of "how do we share a stored procedure?" is most likely looking at the wrong level of abstraction.
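For flavor, here is a sketch of that kind of versioned API boundary, written in Python with Flask rather than ASP.NET WebAPI purely for brevity; the route and payload are hypothetical. The point is the shape: internally the owning team can still call its stored procedures, while callers depend only on the documented v1 contract.

```python
# A tiny versioned HTTP API: consumers bind to /api/v1/..., so the
# owning team can change stored procedures freely behind it.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/customers/<int:customer_id>")
def get_customer_v1(customer_id):
    # In real life this would call the owning team's DAL / stored procedure.
    return jsonify({"id": customer_id, "name": "placeholder"})

if __name__ == "__main__":
    app.run(port=5000)
```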
I need to write documentation for several small SQLite databases. I want to describe how the data is used, including table and row descriptions and sample data.
Is it possible to use MySQL Workbench? If not are there any alternatives, or any templates I could work from?
TIA!
Using MySQL Workbench is not possible since, as far as I know, it only supports exporting to SQLite. For a number of suggestions about free database managers you might want to take a look at What are good open source GUI SQLite database managers? - a number of the GUIs mentioned there support report generation.
Definitely far more feature rich, but with a significant price tag, is Navicat for SQLite, which is an excellent database manager with report generation features.
We are building a jobsite application in which we will store the resumes of all candidates, which we plan to keep on the file system.
We need to search inside those files and return results to the user; what is the best solution for implementing text searching?
I have done some initial research and found references to IFilter (an API/interface) and Lucene.Net (open source), but I'm not sure whether either is the right solution.
In the initial phase we expect around 50,000 resumes, and the solution should scale if that number increases.
I'd appreciate a case study, some analysis, or your suggestions on the best method for handling this requirement (technology: ASP.NET).
Thanks
You can use Microsoft Search Server. There is a free version, so you can try it before you buy it (or never buy it, if the free version meets your requirements).
If, later, you want to integrate those documents into a SharePoint portal, Enterprise Search can integrate with that as well.
One possibility would be to use the FILESTREAM feature in SQL Server 2008, combined with database-level full text index / search.
That would allow you to keep the files in the filesystem, while also providing transactional integrity and search.
SQL Express supports FILESTREAM, and the 4GB size limit doesn't apply to the files (although it does apply to the size of a full text index).
This might be naive, since I'm unfamiliar with off-the-shelf search products, but if nothing pre-built fits the bill I would build a simple service that crawls and indexes the files (or several instances crawling different directories, to increase speed) and updates a database; a rough sketch follows. If the files were accessed regularly you could add a layer of isolation to prevent collisions.
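A rough sketch of that roll-your-own approach, using SQLite's FTS5 full-text index as the backing store just to keep the example self-contained (in an ASP.NET stack, Lucene.Net would play the same role). It assumes the resume text has already been extracted to .txt files; the paths and query are hypothetical.

```python
# Crawl a directory of extracted-text resumes into an FTS5 index,
# then run ranked full-text queries against it.
import pathlib
import sqlite3

conn = sqlite3.connect("resume_index.db")
conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS resumes USING fts5(path, body)")

for f in pathlib.Path("resumes").glob("*.txt"):  # the "crawler"
    conn.execute("INSERT INTO resumes (path, body) VALUES (?, ?)",
                 (str(f), f.read_text(errors="ignore")))
conn.commit()

query = "python AND sql"  # FTS5 boolean query syntax
for (path,) in conn.execute(
        "SELECT path FROM resumes WHERE resumes MATCH ? ORDER BY rank", (query,)):
    print(path)
```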
Rodney
I don't know how authoritative this is but I found this:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations
and it doesn't seem good to have a lot of connections to SQLite. That seems bad for the web and for most applications that have more than a few users. I'm having a hard time thinking of what SQLite would be used for when you don't need that many connections. Every program I can think of needs users, sometimes lots of them, so what would I use a database for that doesn't allow many connections? I thought about prototypes, but why would I use that when I can just connect to a larger database? Embedded apps, maybe?
Thank you.
EDIT: Thanks, everyone. I looked at the page recommended below, but I'm confused about something:
Under appropriate uses for SQLite it has:
Situations Where SQLite Works Well
•Websites
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
Situations Where Another RDBMS May Work Better
•Client/Server Applications
If you have many client programs accessing a common database over a network, you should consider using a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, the file locking logic of many network filesystem implementations contains bugs (on both Unix and Windows). If file locking does not work like it should, it might be possible for two or more client programs to modify the same part of the same database at the same time, resulting in database corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it.
A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem.
The Question:
I'm going to show my ignorance here but what is the difference between these two?
This is answered well by SQLite itself: Appropriate Uses For SQLite. As for the edit: the difference is that a "website" here means many users all going through one web server process (or a few) on the same machine as the database file, while "client/server" means many separate programs on different machines opening the database over a network filesystem; it's the latter, not the raw user count, that SQLite handles poorly.
Another way to look at SQLite is this:
SQLite is not designed to replace Oracle. It is designed to replace fopen().
It's good for situations where you don't have access to a "real" database and still want the power of a relational DB. For example, Firefox stores a bunch of information about your settings/history/etc. in a SQLite database. You can't expect everyone who runs Firefox to have MySQL or PostgreSQL installed on their machine.
It's also perfectly capable of running relatively low-traffic, read-heavy websites. Its overall performance is very good; it's more than the large majority of websites need for their traffic levels.
It's often used for embedded applications.
It can be very handy to have database-like storage when you have no access to a database service, and SQLite fits because it's just a file you store somewhere.
I also find SQLite good for getting a prototype application together pretty quickly, without the overhead of a separate DB server or of bogging down a development environment with an instance of MySQL/Oracle/whatever (see the sketch below).
It's also easy to pick up and move the database to a different machine if you need to.
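To make the "no overhead" point concrete, here is about all the setup a SQLite-backed prototype needs; everything below is the Python standard library, and the table is of course hypothetical.

```python
# A throwaway prototype store: no server, no configuration, one file
# (or ":memory:" for a database that vanishes with the process).
import sqlite3

conn = sqlite3.connect(":memory:")  # swap in "app.db" to persist to disk
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
print(conn.execute("SELECT id, name FROM users").fetchall())
conn.close()
```

Moving the prototype to another machine is then just a matter of copying the one .db file.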
The iPhone uses it for call history, SMS messages, contacts, and other types of data. Like Ólafur Waage said, it's good for embedded applications on mobile devices because it's lightweight. I have also used it in standalone applications. It's easy to use and available on most platforms.
Think about simple client or desktop apps that could make use of a DB, like (as a simple example) an address book. Rather than bundling a huge DB engine like MySQL or PostgreSQL with your deliverable, SQLite is very lightweight and easy to include with your finished app.
This FLOSS Weekly podcast episode talks with the creator of SQLite and, among other things, goes over the kinds of things you would use it for: everything from file systems for mobile phones to smallish web sites.
In the simplest terms, SQLite is a public-domain software package that provides a relational database management system, or RDBMS. Relational database systems are used to store user-defined records in large tables. In addition to data storage and management, a database engine can process complex query commands that combine data from multiple tables to generate reports and data summaries. Other popular RDBMS products include Oracle Database, IBM's DB2, and Microsoft's SQL Server on the commercial side, with MySQL and PostgreSQL being popular open source products.
The "Lite" in SQLite does not refer to its capabilities. Rather, SQLite is lightweight when it comes to setup complexity, administrative overhead, and resource usage.
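To make the quote's "combine data from multiple tables" point concrete, here is a minimal two-table join in SQLite; the schema and rows are made up purely for illustration.

```python
# A relational query: combine rows from two tables into one result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Jane Doe');
    INSERT INTO books VALUES (1, 1, 'An Example Title');
""")
for name, title in conn.execute(
        "SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id"):
    print(name, title)
```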
For more detailed info about SQLite, see the link below:
http://blog.developeronhire.com/what-is-sqlite-sqlite/
Thank you.
What the two answers above say. Expanding slightly on Chad Birch's answer: it's the calls to the SQLite DB, plus a rather poor implementation of sync(), that cause FF3 to be so slow on Linux.