I need to store 6 sets of passwords and usernames with Adobe Air 2.5
Should I use a SQL database or the EncryptedLocalStore for it?
Thanks. Uli
It depends on how the information will be acquired and then used. If it's info that an application user provides, and it's then being cached for future use, EncryptedLocalStore (ELS) could be a good way to do it. There are some benefits and limitations:
ELS is designed to hold small amounts of data - your example of 6 user/pw combos qualifies.
ELS data is available to all AIR applications run by a particular user. So after being set in one application, it can be accessed from other AIR applications from the same publisher, as long as it's the same user as authenticated by the operating system. If you don't like this access model, you should use a database instead.
ELS data shouldn't be considered permanent - it can be deleted in a number of ways. You should provide a way for users to re-enter the data if it's no longer available.
Check the docs at http://goo.gl/Nd3gQ for more info on when and how to use ELS.
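The "cache, not permanent storage" pattern described above can be sketched in a few lines. This is an illustrative Python sketch only: the dict stands in for EncryptedLocalStore (in AIR you would call EncryptedLocalStore.setItem()/getItem() with ByteArray values), and the `reprompt` callback stands in for whatever UI you use to ask the user to re-enter credentials.

```python
# Sketch: treat the secure store as a cache that can vanish at any time.
# A plain dict stands in for EncryptedLocalStore (assumption for illustration).

class CredentialCache:
    def __init__(self, store=None):
        self.store = store if store is not None else {}

    def save(self, site, username, password):
        self.store[site] = (username, password)

    def load(self, site, reprompt):
        # ELS data can be wiped (reinstall, user action), so always fall
        # back to asking the user again and re-caching the result.
        if site not in self.store:
            self.store[site] = reprompt(site)
        return self.store[site]

cache = CredentialCache()
cache.save("example.com", "uli", "s3cret")
print(cache.load("example.com", reprompt=lambda s: ("", "")))

# A wiped store triggers the re-entry callback instead of failing:
cache.store.clear()
print(cache.load("example.com", reprompt=lambda s: ("uli", "typed-again")))
```

The key design point is that every read path has a recovery path; with 6 username/password pairs the data is small enough that re-entry is a minor inconvenience, not a data loss event.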
Related
Background info before question
I use Session state a lot to store my complex objects, and I'm using at most about 8 tables. Against those 8 tables I have about 25 stored procedures (SPs) that join users based on user id and some key values that the user selects. All of this is done on a SQL Server database.
zip codes spatial values
male or female
has pictures
approved profile
registered account(paying for services)
I store images on the file system of my application server and store the path in a db table.
Use Case
Dating website, unique payloads most of the time such as searching based on a certain criteria, updating and fetching personal profile with photos. I am using asp.net MVC, and this is a website only. (separate web pages for responsive designs for other devices)
Question
Can I just use Redis as my primary data store, instead of using SQL server Database based on my use case?
Key Points
I don't plan on having more than 10-12 total tables in the future. The data inputs are mostly strings. When I want to persist a complex object like profile information and image paths, I use Session state. I love what I read about the speed of Redis, and I see it as counterproductive to duplicate updates to both Redis and a DB if I stack them.
I don't think you can easily replace your database with Redis because you are missing things like FK, indexes, constraints (Redis is a NoSQL so you don't have any relational attributes). So you will end up building those yourself, especially the indexes for your 25 stored procs which can become pretty complex stuff. Also the fact that you have 25 stored procs for your 8 tables kind of tells me you have quite some logic here which will be even harder to move to Redis or your application layer.
Sure, adding Redis to your stack is not easy and it will make your application more complex, so you must weigh the benefits and the drawbacks. And because Redis keeps everything in memory, it is best suited as a cache layer.
I am very confused about whether to use a database or the Persistent Store. If I use a database, I have to store it on the MMC, because I have read somewhere that not all BB devices allow storing a database in phone memory; but if I put the database on the MMC, the user can delete it. The second approach is the Persistent Store, but that is not easy to manipulate when there is a large amount of data: how can I manage tons of keys to store and retrieve data, and how can I perform delete and edit operations on persisted data?
I don't know what to do and am very confused. Which approach would be best, and what would the mechanism be?
Kindly suggest.
The main difference: the Persistent Store is supported on all device OS versions up to 7.1, while SQLite is supported only on OS 5.0 and above, so check which OS versions you are targeting.
With the Persistent Store you can save and retrieve your data as a Vector; I am unaware of how that works with a SQLite database.
The BB documentation says:
If you only specify the database name as the parameter value to DatabaseFactory.create(), the database file is created on the SD card of the device. The default location for the database file is /SDCard/databases/<application_name>/. The name of the application that creates the database is included in the path to avoid name collisions.
You can create database files in eMMC memory, on devices that support it, by specifying the corresponding file system path.
So, to remain compatible with all devices, you have to put the database on the card.
Besides unplugging the memory card, the user can always delete and reinstall your app, so you must be prepared for your data to vanish. There is no way to force your data to be kept against the user's wishes.
The best you can do is to complain that your data is missing, and/or to reinitialize your database.
The Persistent Store indeed is not suitable for managing large amounts of data; for anything more than a simple key/data lookup, you'd have to load the data into memory and do the queries there.
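What "load the data into memory and do the queries there" means in practice can be sketched as follows. This is an illustrative Python sketch, not BlackBerry Java: the dict stands in for the Persistent Store's key-to-object map (on the device, PersistentStore.getPersistentObject(key) plus setContents()/commit()), and the record fields are made up for the example.

```python
# Sketch: a persistent store is only a key -> object map, so any query
# beyond a direct key lookup is a linear scan you write yourself.

store = {
    1001: {"name": "alice", "age": 31},
    1002: {"name": "bob", "age": 27},
    1003: {"name": "carol", "age": 35},
}

# Direct key lookup: cheap, and the only operation the store natively supports.
print(store[1002]["name"])  # bob

# A "query": load the records and filter in application code.
over_30 = [rec["name"] for rec in store.values() if rec["age"] > 30]
print(sorted(over_30))  # ['alice', 'carol']

# Edit and delete are just map operations followed by a re-commit on the device.
store[1002]["age"] = 28
del store[1003]
```

This is workable for small data sets, but for "tons of keys" the linear scans and full in-memory loads are exactly why the Persistent Store is a poor fit for large amounts of data.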
I have an ASP.net application that I'm moving to Azure. In the application, there's a query that joins 9 tables to produce a user record. Each record is then serialized to JSON and sent back and forth with the client. To improve performance, the first time the 9-table join runs and the record is serialized to JSON, the resulting string is saved to a table called JsonUserCache. The table only has 2 columns: JsonUserRecordID (that's unique) and JsonRecord. Each time a user record is requested from the client, the JsonUserCache table is queried first to avoid having to do the query with the 9 joins. When the user logs off, the records he created in the JsonUserCache are deleted.
The table JsonUserCache is SQL Server. I could simply leave everything as is but I'm wondering if there's a better way. I'm thinking about creating a simple dictionary that'll store the key/values and put that dictionary in AppFabric. I'm also considering using a NoSQL provider and if there's an option for Azure or if I should just stick to a dictionary in AppFabric. Or, is there another alternative?
Thanks for your suggestions.
"There are only two hard problems in Computer Science: cache invalidation and naming things."
Phil Karlton
You are clearly talking about a cache and as a general principle, you should not persist any cached data (in SQL or anywhere else) as you have the problem of expiring the cache and having to do the deletes (as you currently are). If you insist on storing your result somewhere and don't mind the clearing up afterwards, then look at putting it in an Azure blob - this is easily accessible from the browser and doesn't require that the request be handled by your own application.
To implement it as a traditional cache, look at these options.
Use out of the box ASP.NET caching, where you cache in memory on the web role. This means that your join will be re-run on every instance that the user goes to, but depending on the number of instances and the duration of the average session may be the simplest to implement.
Use AppFabric Cache. This is an extra API to learn and has additional costs which may get quite high if you have lots of unique visitors.
Use a specialised distributed cache such as Memcached. This has the added cost/hassle of having to run it all yourself, but gives you lots of flexibility in the long run.
Edit: All are RAM based. Using ASP.NET caching is simpler to implement and is faster to retrieve the data from cache because it is on the same machine - BUT requires the cache to be populated for each instance of the web role (i.e. it is not distributed). AppFabric caching is distributed but is also a bit slower (network latency) and, depending what you mean by scalable, AppFabric caching currently behaves a bit erratically at scale - so make sure you run tests. If you want scalable, feature rich distributed caching, and it is a big part of your application, go and put in Memcached.
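Whichever of the three options you pick, the access pattern is the same cache-aside loop, and adding an expiry time removes the delete-on-logoff bookkeeping. A hedged Python sketch of that pattern (the dict stands in for the chosen cache - ASP.NET in-memory cache, AppFabric, or memcached - and expensive_query stands in for the 9-table join; all names here are illustrative):

```python
import time

# Sketch: cache-aside with a TTL. On a hit, return the cached JSON; on a
# miss, run the expensive join once and cache the result with an expiry.

class TtlCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.data = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        entry = self.data.get(key)
        if entry and entry[0] > time.time():
            return entry[1]           # cache hit
        value = compute(key)          # cache miss: run the joins once
        self.data[key] = (time.time() + self.ttl, value)
        return value

calls = []
def expensive_query(user_id):
    calls.append(user_id)             # counts how often the join actually runs
    return '{"id": %d}' % user_id     # the serialized JSON record

cache = TtlCache(ttl_seconds=300)
cache.get_or_compute(7, expensive_query)
cache.get_or_compute(7, expensive_query)  # served from cache
print(len(calls))  # 1 -- the join ran only once
```

With a TTL roughly matching the average session length, stale entries simply age out, so nothing needs to be deleted when the user logs off.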
I'm just trying to determine if the files on the filesystem used by Raven DB are encrypted or not? Can someone just open the files on the filesystem and convert them from binary to ASCII directly, or are they encrypted?
I am trying to convince our management to give RavenDB a shot, but they have concerns about security. They gave the example that you can't just open up an MS SQL db file, convert it from binary to ASCII, and read it. So I am trying to verify if RavenDB prevented that kind of thing as well?
Well, personally I think that your management sucks if they come up with such straw-man arguments.
To answer your question: No, you can't just open any file inside ravens data folder with Notepad and expect to see something meaningful. So, for the ones that don't know how to program, yes they are encrypted.
To convince your management, you can tell them that Raven uses the same storage engine as Microsoft's Exchange Server. If they want to dig deeper - it's called Esent.
RavenDb storage is not encrypted. You can open it with notepad and see some pieces of data. At the same time I do not think that MS SQL encrypts files by default either.
RavenDB added encryption in mid-2012. Get RavenDB's "bundle:encryption" and then make sure your key is properly encrypted in the .NET config file or whatever.
http://ravendb.net/docs/article-page/3.0/csharp/server/bundles/encryption
http://ayende.com/blog/157473/awesome-ravendb-feature-of-the-day-encryption
SQL Server 2008 does have encryption, but you need to prepare the DB instance beforehand to enable it, then create the DB with encryption enabled and then store data.
If you haven't enabled it, someone could just copy the DB off the machine and open it in a tool that does have access to it.
With RavenDB, you can tick the box and off you go! (although I do not know the intricacies of moving backups to another machine and restoring them).
In relation to the point your management made, this is a relatively pointless argument.
If you had access directly to the file of a DB, it's game over. Encryption is your very last line of defence.
[I don't think hackers are going to be opening a 40GB file in Notepad .. that's just silly :-)]
So instead of ending up at the worst case, you have to look at the controls you can implement to even get to that level of concern.
You need to work out how would someone even get to that file (and the costs associated with all of the mitigation techniques):
What if they steal the server, or the disk inside it?
What if they can get to the DB via a file share?
What if they can log onto the DB server?
What if a legitimate employee syphons off the data?
Physical Access
Restricting direct access to a server mitigates stealing it. You have to think about all of the preventative controls (door locks, ID cards, iris scanners), detective controls (alarm systems, CCTV) and how much you want to spend on that.
Hence why cloud computing is so attractive!
Access Controls
You then have to get onto the machine via RDP or connect remotely to its file system via Active Directory, so that only a select few could access it - probably IT support and database administrators. Being administrators, they should be vetted and trusted within the organisation (through an Information Security Governance Framework).
If you also wanted to reduce the risk even further, maybe implement 2 Factor Authentication like banks do, so that even knowing the username and password doesn't get you to the server!
Then there's the risk of employees of your company accessing it - legitimately and illegitimately. I mean why go to all of the trouble of buying security guards, dogs and a giant fence when users can query it anyway! You would only allow certain operations on certain parts of the data.
In summary ... 'defence in depth' is how you respond to it. There is always a risk that can be identified, but you need to consider the number of controls in place, add more if the risk is too high. But adding more controls to your organisation in general makes the system less user friendly.
I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance.
So we've agreed that data will only be queried as and when it is needed, therefore if a web user wants to see a list of contacts (for example) that list is fetched into a local DataTable. Then if a new contact is created on the website the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits.
If the user then looks at their contacts again the data will just come from the local DataTable.
At the moment local data is being kept in Session but my concern is that too much memory will start being used up. However traffic is expected to be pretty small, perhaps no more than 20 concurrent users so am I worrying about nothing or is there a better way you can suggest to handle this?
You worry about nothing. Basically it is a scalability dump - stupid design. BUT: if you can throw 1 GB of memory at the problem, then for 20 users storing 16 MB each is not a problem (20 × 16 MB is only about 320 MB).
The main problem starts when the user count grows and the application needs to be rewritten.
20 concurrent users is not too many.
Clients "look at their contacts": depending on the size of the contacts table, you could consider storing all contacts in an in-memory dataset and then filtering by primary key.
Alternatives to Session: Cache, Application.
Cache with SqlCacheDependency and CacheItemRemovedCallback should be a good alternative to Session.
XML files for each customer's contacts.
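The Cache-with-callback suggestion can be sketched in plain Python. This is an illustrative stand-in only: in ASP.NET you would pass a CacheItemRemovedCallback (and a SqlCacheDependency) to Cache.Insert, and the runtime would invoke the callback on eviction; here a simple dict-backed cache does the same so the pattern is visible.

```python
# Sketch: a cache that notifies a callback when an entry is removed, so the
# application can log the eviction or schedule a reload of the entry.

class CallbackCache:
    def __init__(self):
        self.data = {}

    def insert(self, key, value, on_removed):
        self.data[key] = (value, on_removed)

    def get(self, key):
        entry = self.data.get(key)
        return entry[0] if entry else None

    def remove(self, key, reason):
        value, on_removed = self.data.pop(key)
        on_removed(key, value, reason)   # e.g. re-fetch from the database

removed = []
cache = CallbackCache()
cache.insert("contacts:42", ["alice", "bob"],
             lambda k, v, r: removed.append((k, r)))
cache.remove("contacts:42", "dependency_changed")
print(removed)  # [('contacts:42', 'dependency_changed')]
```

The appeal over raw Session state is that the cache, not your code, decides when data is stale (e.g. when the underlying SQL table changes), and the callback gives you a hook to repopulate it.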