Pre-fill SQLite database with Core Data using Django - sqlite

I want to build a pre-filled SQLite database using Django as a front-end data-entry tool; however, I've read on this site and on StackOverflow that this isn't easy to do, and that one needs to use CSV files.
That's fine if your database is pretty small, but what if you're pre-filling it with data that has relationships (e.g. Customers, Orders, Salespeople), or the database you're building requires a lot of data input (1,000+ records)?
My app/database will be pretty big and have a lot of pre-filled data with multiple references/relationships. I really don't want to re-enter all of this into CSV files or into the Simulator, and I thought Django would be a quick and dirty way of getting large amounts of related data into an iPhone app.
This also raises the question of whether Core Data is really worth it: the learning curve is high and the syntax can be cumbersome, so I'm considering just using FMDatabase, except that I can't get LIMIT/OFFSET to display rows of data in batches correctly (any hints or tips on how to do this would be great!).
So, if you want to pre-fill a large database for use with Core Data, what is the best route forward?

You might want to have a look at the Active Record port for Cocoa/Cocoa Touch. I've done an app like this before, and what we (the client and I) chose to do was to import the data from an XML file on the first app launch. The idea was that, since the parser was built into the app, we could do OTA updates later if we chose to. The data did have fairly complex relationships, but I decided Core Data was still the way forward.
I've only used raw SQLite once, and that was before we had Core Data on the iPhone. You should also consider what your solution will be for schema migrations, which Core Data handles for you.
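On the LIMIT/OFFSET problem mentioned in the question: batched display with FMDatabase is normally just a parameterised query. A minimal sketch in Swift (the original question predates Swift, but the query shape is the same in Objective-C), assuming FMDB's standard executeQuery API; the customers table and name column are made-up names:

    import FMDB

    // Fetch one "page" of rows at a time using LIMIT/OFFSET.
    func fetchPage(db: FMDatabase, pageSize: Int, page: Int) -> [String] {
        var names: [String] = []
        let offset = page * pageSize
        if let rs = db.executeQuery(
            "SELECT name FROM customers ORDER BY name LIMIT ? OFFSET ?",
            withArgumentsIn: [pageSize, offset]) {
            while rs.next() {
                if let name = rs.string(forColumn: "name") {
                    names.append(name)
                }
            }
            rs.close()
        }
        return names
    }

Each "next batch" request simply asks for page + 1; the OFFSET grows while the LIMIT stays fixed.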
A hybrid solution is to load the data into Core Data in the Simulator, ship the resulting SQLite database with the app, and copy it into the app's Documents directory on the initial launch. Best of both worlds.
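A minimal sketch of that first-launch copy, assuming the pre-filled store was bundled with the app as Seed.sqlite (the file name is illustrative, and on newer iOS versions the -wal/-shm sidecar files need the same treatment):

    import Foundation

    // On first launch, copy the bundled pre-filled store into Documents,
    // so Core Data opens a writable copy rather than the read-only bundle.
    func installSeedStoreIfNeeded() throws {
        let fm = FileManager.default
        let docs = try fm.url(for: .documentDirectory, in: .userDomainMask,
                              appropriateFor: nil, create: true)
        let target = docs.appendingPathComponent("Seed.sqlite")
        guard !fm.fileExists(atPath: target.path),
              let seed = Bundle.main.url(forResource: "Seed", withExtension: "sqlite")
        else { return }
        try fm.copyItem(at: seed, to: target)
        // Then point the persistent store coordinator at `target`.
    }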
Good luck

Related

Ionic2: Building a custom audio streaming app with Firebase

I was thinking of making an app that plays a specific set of small audio files stored in a Firebase database, or any other database in the cloud.
The number of files may exceed hundreds, so wrapping all the files in the app might not be the best of ideas.
I'd really like to know how to go about this problem.
Downloading only the required file from Firebase when it's clicked, playing it as it downloads, and caching it for future playback seems plausible, but I'm not quite sure how to implement it.
I'd appreciate a few pointers towards this, thanks
Yes. You are thinking in the right direction.
You might use these things.
App UI - Ionic2 provides a nice and easy platform for building the UI. Refer to the Ionic2 documentation for the basics and the details of using it.
Local Storage - You can use this to store the downloaded files. In any app where the data is relatively large, using local storage is the smart choice: it keeps the initial install small and lets you download content as and when needed.
A well-defined database - Whether to use a non-structured (NoSQL) or structured (SQL) database is the first choice you have to make.
If it's just content (audio files to download and play, with no complex cross-querying of the database), then you can choose a non-structured (NoSQL) database like the Firebase database.
But if you have real requirements for structuring the data and querying it with constraints like "give me all the audio files a particular user has played in the last 10 days" or "give me all the users who have played/downloaded a particular audio file more than 10 times", then you are better off with a structured (SQL) database like PostgreSQL.
RxJS - This might not be strictly necessary, but if you use it from the start it's a good choice. For example, you don't have to wait for the whole file to finish downloading before playing it. Use Observables and Promises for such a mechanism.
I could help with the specifics when needed. Hope this helps. :)

Evaluation of a strategy to sync Core Data with Dropbox

This question is about using Dropbox to sync an SQLite Core Data store between multiple iOS devices. Consider this arrangement:
An app utilizes a Core Data store, call it local.sql, saved in the app's own NSDocumentDirectory
The app uses the Dropbox Sync API to observe a certain file in the user's Dropbox, say, user/myapp/synced.sql
The app observes NSManagedObjectContextDidSaveNotification, and on every save it copies local.sql to user/myapp/synced.sql, thereby replacing the latter (a rough sketch of this step appears after the list).
When the Dropbox API notifies us that synced.sql changed, we do more or less the opposite of step 3: tear down the Core Data stack, replace local.sql with synced.sql, and recreate the Core Data stack. The user sees "Syncing" or "Loading" in the UI in the meantime.
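For concreteness, step 3 boils down to something like the following sketch. uploadToDropbox(_:) is a stand-in for whatever the Dropbox Sync API call would be, not a real API:

    import CoreData
    import Foundation

    // After every Core Data save, push local.sql up to Dropbox.
    // Keep the returned observer token alive for as long as you want to sync.
    func observeSaves(of context: NSManagedObjectContext) -> NSObjectProtocol {
        return NotificationCenter.default.addObserver(
            forName: .NSManagedObjectContextDidSave,
            object: context,
            queue: .main
        ) { _ in
            let docs = FileManager.default.urls(for: .documentDirectory,
                                                in: .userDomainMask)[0]
            uploadToDropbox(docs.appendingPathComponent("local.sql"))
        }
    }

    func uploadToDropbox(_ fileURL: URL) {
        // Placeholder: hand the file to the Dropbox Sync API here.
    }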
Questions:
A. Is this arrangement hugely inefficient, to the extent where it should be completely avoided? What if we can guarantee the database is not large in size?
B. Is this arrangement conducive to file corruption? More than syncing via deltas/changelogs? If so, will you please explain in detail why?
A. Is this arrangement hugely inefficient, to the extent where it should be completely avoided? What if we can guarantee the database is not large in size?
Irrelevant, because:
B. Is this arrangement conducive to file corruption? More than syncing via deltas/changelogs? If so, will you please explain in detail why?
Yes, very much so. Virtually guaranteed. I suggest reviewing How to Corrupt An SQLite Database File. Offhand you're likely to commit at least two of the problems described in section 1, including copying the file while a transaction is active and deleting (or failing to copy, or making a useless copy of) the journal file(s). In any serious testing, your scheme is likely to fall apart almost immediately.
If that's not bad enough, consider the scenario where two devices save changes simultaneously. What then? If you're lucky you'll just get one of Dropbox's notorious "conflicted copy" duplicates of the file, which "only" means losing some data. If not, you're into full database corruption again.
And of course, tearing down the Core Data stack to sync is an enormous inconvenience to the user.
If you'd like to consider syncing Core Data via Dropbox I suggest one of the following:
Ensembles, which can sync over Dropbox (while avoiding the problems described above) or iCloud (while avoiding the problems of iOS's built-in Core Data/iCloud sync).
TICoreDataSync, which uses Dropbox file sync but which avoids putting a SQLite file in the file store.
ParcelKit, which uses Dropbox's new data store API. (Note that this is quite new and that the data store API itself is still beta).

EF and customer data separation

Is it possible to build an ASP.NET website using EF where each customer logging in has separately stored data? We have customers demanding that their data not be stored in the same tables as other customers' data.
I've read that EF can't work with several databases, but is it possible to switch databases at runtime depending on input parameters? I have a feeling it won't be possible, since the migration features are tightly connected to the database being used, but I'm not sure.
One solution could be to have a separate website deployment and database for each customer. They'd get separate domains to access, but that's not a problem. This solution feels a bit clumsy, though, if you have many customers, especially with deployment and future upgrades.
Am I missing some smart ways of solving this or is this a very tricky issue?
Is the structure of the DB the same?
If so, you could switch connections - not without issues, but it should work. For details on how that should be done, check the long discussion we've had here (and the linked previous questions, etc.):
Code first custom connection string and migrations without using IDbContextFactory

Batch manipulations for an online web app

A customer has a web-based inventory management system. The system is proprietary and complicated: it has around 100 tables in the DB with complex relationships between them, and roughly 1,500,000 items.
The customer is reorganising his processes and now needs to make massive updates and manipulations to the data (only data changes, no structural changes). The online screens do not permit such work, since they were designed at the beginning without this requirement in mind.
The database is MS SQL Server 2005, and the application is ASP.NET running on IIS.
One solution is to build for him new screens where he could visualize the data in grids and do the required job on a large amount of records. This will permit us to use the already existing functions that deal with single items (we just need to implement a loop). At this moment the customer is aware of two kinds of such massive manipulations he wants to do, but says there will be others. This will require design, coding, and testing every time we have a request.
However, the customer's needs are urgent because of some regulatory requirements, so I am wondering if it would be more efficient to use some kind of mapping between MS SQL and Excel or Access to expose the needed information, make the changes in Excel or Access, then save them back to the DB - maybe using SSIS to do this.
I am not familiar with SSIS or other technologies that do such things, which is why I can't judge whether the second solution is really efficient and better than the first. Of course the second solution will also require some work and testing, but will it be quicker and less expensive?
The other question is: are there any other ways to do this?
Any ideas will be greatly appreciated.
Either way, you are going to need testing.
Say you export 40,000 products to Excel, he reorganizes them, and then you bring them back into staging table(s) and apply the changes to your SQL table(s). Since Excel is basically a freeform system, what happens if he introduces invalid situations? Your update will need to detect that and fail and roll back, or handle it in some specified way.
Anyway, both your suggestions can be made workable.
Personally, for large changes like this, I prefer to have an experienced database developer develop the changes in straight SQL (either hardcoding or table-driven), test it on production data in a test environment (doing a table compare between before and after) and deploy the same script to production. This also allows the use of the existing stored procedures (you are using SPs, to enforce a consistent interface to this complex database, right?) so that basic database logic already in place is simply re-used.
I doubt Excel will be able to deal with 1.5 million elements/rows.
When you say visualize data in grids - how will your customer make the changes? Manually, or is there some automation behind it? I would strongly encourage automation (since you know about only two types of changes at the moment). Maybe even a simple standalone "converter" application - don't make it part of the main program; otherwise it will be too tempting for them in the future to manually edit data straight in the DB tables.
Here is a strategy that I think will get you from A to B in the shortest amount of time.
One solution is to build for him new screens where he could visualize the data in grids and do the required job on a large amount of records.
It's rarely a good idea to build an interface into the main system that you will only use once or twice. It takes extra time and you'll probably spend more time maintaining it than using it.
This will permit us to use the already existing functions that deal with single items (we just need to implement a loop)
Hack together your own crappy little interface in a .NET Application, whose sole purpose is to fulfill this one task. Keep it around in your "stuff I might use later" folder.
Since you're dealing with such massive amounts of data, make sure you're not running your app from a remote location.
Obtain a copy of SQL Server 2005 and install it on a virtualization layer. Copy your production database over to this virtualized SQL server. Take a snapshot of your virtualized copy before you begin testing. Write and test your app against this virtualized copy. Roll back to your original snapshot each time you test. Keep changing your code, testing, and rolling back until your app can flawlessly perform the desired changes.
Then, when the time comes to change the production database, you can sit back and relax while your app does all of the changes. The process will likely take a while, so add some logging so you can check the status as it runs.
Oh yeah, make sure you have a fresh backup before you run your big update.

Need advice on selecting a data access method

I am in the early stages of planning a conversion of a large classic ASP database application to ASP.NET, and I'm having trouble picking a data access method. I have played around with LINQ to SQL, Dynamic Data, strongly typed DataSets, Enterprise Library (Data Access Application Blocks), and a tiny bit with Entity Framework, but none of them has jumped out at me as "the one". There are just too many choices - my head is swimming, help me choose!
Perhaps it would help to give some background on the application that I am converting along with the priorities...
The back end is Microsoft SQL Server (2005 or later) and we are committed to that, so I don't need to worry about ever supporting a different database platform.
The database is very mature and contains a great deal of the business logic. It is highly normalized and makes extensive use of stored procedures, triggers, and views. I would rather not reinvent two wheels at the same time, so I'd like to make as few changes to the database as possible. So, I need to choose a data access method that is flexible enough to let me work around any quirks in the database.
The application has many data entry forms and extensive searching and reporting capabilities (reports are another beast which I will tackle later).
The application needs to be flexible enough to deal with minor changes to the database structure. The application (and database) may be installed at different sites where minor custom modifications are made to the database. Ideally the application could identify the database extensions and react appropriately. In other words, if I need to store an O/R mapping in the application, I need to be able to swap that out (or refresh it easily) when installing the application and database at a new site.
Rapid application development is critical. Since the database is already done and the user interface is going to closely match the existing application, I'm hoping to find something where we can crank this out fairly quickly. I am willing to sacrifice not using the absolute latest and greatest technology if it will save time in development. In other words, if there is a steep learning curve to using something like Entity Framework, I'm fine with going something like strongly typed Datasets and a custom DAL if it will speed up the process.
I am a total newbie to ASP.Net but am intimately familiar with Classic ASP, T-SQL and the old ADO (e.g. disconnected recordsets). If any of the data access methods is better suited for someone coming from my background, I might lean in that direction.
Thanks for any advice that you can offer!
Look at all three articles in this series:
High Performance Data Access Layer Architecture Part 1
Great advice.
You may want to look at decoupling the database layer from the ASP layer, so that you not only get more flexibility in making the decision, but when you have to make changes to a customer's database you can just swap in a new DLL without changing anything else.
By using dependency injection you can use XML to tell the framework which concrete class to use for an interface.
The advantage of doing this is that you can go with one database approach now, and if you later decide to change to another, you can just change the DLL and carry on without making any changes to the other layers.
Since you are more familiar with it, why not just go directly to the database for the moment by making your own connections? Then you can move the rest of your code over, and along the way you can decide which of the myriad technologies to use.
For a new application I am working on, I am starting with LINQ to SQL, mainly because development will be quicker; later, if I decide it won't meet my needs, I will just swap it out.
NHibernate might be a good fit. You can store the mappings in external configuration files, which would meet your needs. Another option might be ActiveRecord, which is based on NHibernate.
NHibernate has a neat feature you might find helpful: dynamic properties, which are basically a name/value pair collection populated by pulling the column names from the mapping file. So when you add a column at a client site, you update the mapping file and you'll be able to access the data through a collection on the object.
