Sorry, I'm a little unclear on the web2py manual's explanation.
As an example, given app1 and app2, I want app2 to share the database I have built in app1.
So do I change the app2/models/db.py file to read: db = DAL('sqlite://storage.sqlite', migrate=False)?
And do I include all the other myModel.py files in the app2/models directory as well?
If the database is in app1/databases/, how does app2 know how to find the correct database file?
This thread begins to answer the question, but I'm still unclear on how to define where the shared database lives.
Note, DAL(..., migrate=False) just sets the default value of migrate for each table -- it will not have any effect on the migration status of tables whose define_table() calls include their own explicit migrate argument. If you want to completely disable migrations for an entire db connection (regardless of the individual define_table() calls), instead use:
DAL(..., migrate_enabled=False)
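For example, a minimal sketch of the difference (the table and field names here are hypothetical; in a web2py model file, DAL and Field are already in scope):

# migrate=False only sets the per-table default; an explicit
# migrate argument on define_table() still wins:
db = DAL('sqlite://storage.sqlite', migrate=False)
db.define_table('thing', Field('name'), migrate=True)  # this table still migrates

# migrate_enabled=False turns migrations off for the whole connection,
# regardless of what the individual define_table() calls say:
db = DAL('sqlite://storage.sqlite', migrate_enabled=False)
db.define_table('thing', Field('name'), migrate=True)  # no migration happens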
Also, to share model definitions between applications, rather than simply copying the model files, you could put the definitions in functions or classes within modules and then import the modules. Another option is to use auto_import:
DAL(..., auto_import=True)
Note, auto_import will import the field names and types, but it will not include DAL-specific attributes, such as validators and defaults, so its usage is somewhat limited.
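As an illustration, a sketch of an app2/models/db.py relying on auto_import (the folder path is illustrative; point it at wherever app1's databases directory actually lives):

# Assumes app1's .table migration files are readable from this folder.
db = DAL('sqlite://storage.sqlite',
         folder='applications/app1/databases',
         auto_import=True)
# Field names and types are read from the .table files; validators and
# defaults defined in app1's models are NOT imported.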
I can't test this right now, but the answer should be: you can override the folder in the DAL:
db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases')
So both apps should point to the same file (see the docs and this thread).
Yes, you'll need the model files in both apps too; otherwise the apps won't know how to access the db.
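Putting it together, a sketch of what app2/models/db.py might look like (the path and the define_table() call are illustrative; mirror app1's real model definitions):

# Point at app1's databases folder and let app1 own the migrations.
db = DAL('sqlite://storage.sqlite',
         folder='applications/app1/databases',
         migrate_enabled=False)
# Repeat app1's table definitions (or import them from a shared module)
# so app2 also gets the validators and defaults:
db.define_table('mytable', Field('name', requires=IS_NOT_EMPTY()))  # hypothetical table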
My goal is to let resource values change and give the admin the ability to maintain the languages through the portal. To do that, I need to be able to change the resx files at runtime, because all the values are stored in resx files. I have 3 resx files, one for each language. In my case I want the translations to be maintainable by an admin at runtime; for example, the admin can add, edit, or delete language entries at runtime.
As @Xerillio mentioned in his comment, this is a lot of effort.
Recently I've created a nuget that may save you time and effort. Have a look at XLocalizer: it creates resources, uses online translation services to auto-translate the missing resources, and saves them in XML or DB; it then provides an easy interface to export them to RESX. Finally, you may see XLocalizer.Samples, which contains sample setups for different scenarios.
If you need another file/DB type to store the resources, you may create your custom resource provider and register it in startup.
The DB sample provides a UI to edit resources. With the XML sample I didn't create a UI for editing resources; instead you may download the XML, make corrections if any, then upload it and use the built-in exporter to export to RESX.
With this nuget, all I have to do to add a new culture, even at runtime, is add the culture name to the supported cultures list and make some corrections to the auto-translations; all the rest is handled by XLocalizer.
Notice: it was not possible to put all this in a comment, that's why I posted it as an answer :)
I have more than 50 namespaces used in my MarkLogic APIs, and the count can keep increasing. I am looking for a way to store them in the database or add them to the app server, and then to invoke them in all the XQuery files, where until now they have been updated manually whenever a new one is added.
Yes! If you go to the Admin Interface (port 8001) and look under either your Group or your App Servers, you'll see a Namespaces section on the left where you can enter your commonly used namespaces. After that they'll just exist in all the code automatically.
I think I have a good understanding of Symfony and how bundles work.
However, I've never found how to solve a simple problem: make a reusable bundle that provides data, such as tables/Doctrine entities pre-filled with (e.g.) all the country names in the world, all the provinces of Italy, the tax-rate history in England, and so on.
Of course the purpose is to provide forms, services and controllers relying on this data source, without the need to copy and paste tables and entities across projects.
How would you do that?
Data fixtures IMHO are not an option, for an obvious reason: you would be purging your database while it's running.
A custom command reading from a static data source (JSON, YAML) and performing inserts/updates?
The first step is declaring a Doctrine entity in your bundle. I think you should create DataFixtures to populate your data into the db.
You should maybe consider using Seeds instead of Fixtures.
Fixtures are fake data, used to test your application.
Seeds are the minimal data required for your application to work.
Technically, these are exactly the same thing: you declare them under the DataFixtures/ folder and import them with the doctrine:fixtures:load command.
You can create a Fixtures/ folder and a Seeds/ folder under the DataFixtures/ folder, then load your seeds with the command
php app/console doctrine:fixtures:load --fixtures=/path/to/seeds/folder --append
It was suggested in the comments that it may be safer, especially in a production environment, to create a custom Symfony2 command that forces the --append mode. Without this mode, your database will be purged, and you could lose your production data.
This answer assumes you're using Composer to install your bundles (and that you really do exclude fixtures as an option).
What you can do is make an SQL export of the data you want, make sure it uses INSERT IGNORE INTO, and get the correct unique constraints.
Then you save that file somewhere in your bundle, in a "data" or "fixtures" folder.
So the path to that file will be something like:
"vendor/company/epicbundle/data/countries.sql"
What you then can do is add post-install and post-update commands in your composer.json, under the scripts key, like this:
"scripts": {
    "post-install-cmd": [
        "php app/console doctrine:query:sql \"$(cat vendor/company/epicbundle/data/countries.sql)\""
    ]
}
If you only want it to run on install, you add it only there; if you sometimes update the SQL file, you also add it to the post-update-cmd.
Please note that this solution only works if people don't tamper with the table names; otherwise the queries will fail.
If you want a safer/more stable solution, you can write your own post-install script in Symfony that uses the entity manager; there you can use, for example, a CSV file and insert/update it row by row.
Basically, anything you could implement would surely rely on persistence mechanisms used in your ORM/ODM/whatever. So, you'll end up implementing a typical fixture loading mechanism, at least partially: you'd execute code that saves some provided data; if it's serialized you'd do XML/JSON/YAML parsing (but this is just a technicality) and persist the results into the database.
Thus, it's not bad to stick with Doctrine Fixtures. They are programmable and extensible (you can even fetch your data from the web upon loading).
As stated in @paul-andrieux's answer, if you are worried about data loss (e.g. your bundle's seeds are loaded when the end user's DB is already up), you should use doctrine:fixtures:load --append and let the constraints do their job (e.g. in a country-names table you'd have a unique constraint on the country name, or even a slug), so that inserting a duplicate row silently fails for that single entity, in case your bundle has updated the country list and the end user had a previous version.
If you really worry about your end users' data, you could write a wrapper for the doctrine:fixtures:load command that always has the --append flag on, and register it as a separate command. (You could run needed migrations there, too.)
@lxg's hard-coded IDs problem is solvable, too. Try using natural keys where applicable (e.g. the countries table would have a slug primary key that would be great-britain for Great Britain). This way your lookups would be pretty easy: $em->find('\MyBundle\Country', 'great-britain');. If you cannot come up with a natural key, then maybe the entity is not really needed by the end user.
UPD. Here's an article that could be useful: http://www.craftitonline.com/2014/09/doctrine-migrations-with-schema-api-without-symfony-symfony-cmf-seobundle-sylius-example/
Generally speaking, the bundle embeds the entities, which will be loaded via the ORM/ODM using its built-in commands (like doctrine:schema:update, doctrine:migration:diff, ...), and provides a custom command that loads the required fixtures using the ODM/ORM.
This command can read the fixtures in multiple ways (parsing YAML, XML, raw SQL, DQL, ...); it is just a matter of taste. Tons of bundles, parsers, ... exist for those tasks.
In your documentation, you just have to state clearly that the developer must run this command after your bundle's installation and schema update.
If I want to call a web service or WCF method from an orchestration, I can do it by either adding a service reference to the project or adding a generated item. What is the advantage of either approach - is there a best practice?
Steef-Jan Wiggers answers a similar question here.
TL;DR - Always use the Generated Items wizard.
My 10c - Although the .xsd files imported by Add Service Reference are added as schemas and set to BtsCompile, there are some limitations, such as:
Add Service Reference will add the client proxy, which isn't needed in a BizTalk project (and which might 'tempt' your devs to do silly things like using this proxy from a custom assembly)
Add Service Reference makes a mess of importing complicated WSDL (e.g. with generics or dependencies on other schemas); see Considerations when consuming Web Services
Using the Add Generated Items wizard does extra work for you:
Adds a Port Type for accessing the service, already preconfigured for the correct message types. Note however that it adds the Port Type to a dummy .odx - i.e. don't delete the .odx until you've moved the Port Type elsewhere.
Allows you to create the Send Port bindings at the same time.
One thing I would recommend with the wizard is to create a folder for the WCF reference and always import all the artifacts into that folder (i.e. don't do the usual separation of Schemas from Ports, and leave the dummy .odx there as well). This way, if you need to regenerate the items, just delete everything in the folder and start again (sadly, the wizard doesn't have an Update Service Reference equivalent).
Also note that if you do move the generated Schemas and Port Types into a separate assembly, you will need to change the type modifier access to Public (it is Internal by default).
Folks,
I have an ASP.NET project which is pretty n-tier by namespace, but I need to separate it into three projects: Data Layer, Middle Tier, and Front End.
I am doing this because...
A) It seems the right thing to do, and
B) I am having all sorts of problems running unit tests for ASP.NET hosted assemblies.
Anyway, my question is, where do you keep your config info?
Right now, for example, my middle tier classes (which uses Linq to SQL) automatically pull their connection string information from the web.config when instantiating a new data context.
If my data layer is in another project can/should it be using the web.config for configuration info?
If so, how will a unit test (typically in a separate assembly) provide such configuration info?
Thank you for your time!
We keep them in a global "Settings" file which happens to be XML. This file contains all the GLOBAL settings, one of which is a connection string pointing to the appropriate server as well as username and password. Then, when my applications consume it, they put the specific catalog (database) they need in the connection string.
We have a version of the file for each operating environment (prod, dev, staging, etc). Then, with two settings -- file path (with a token representing the environment) and the environment -- I can pick up the correct settings files.
This also has the nice benefit of a 30-second failover. Simply change the server name in the settings file and restart the (web) applications, and you have failed over (of course you have to restore your data if necessary).
Then when the application starts, we write the correct connection string to the web.config file (if it is different). With this, we can change a website from DEV to PROD by changing one appSettings value.
As long as there isn't too much, it's convenient to have it in the web.config. Of course, your DAL should have absolutely no clue that it comes from there.
A good option is for your data layer to be given its config information when it is called upon to do something, and it will be called upon to do something when a web call comes in. Go ahead and put the information in your web.config. In my current project, I have a static dictionary of connection strings in my data layer, which I fill out like so in a routine called from my global.asax:
CAPPData.ConnectionStrings(DatabaseName.Foo) = _
    ConfigurationManager.ConnectionStrings("FooConnStr").ConnectionString
CAPPData.ConnectionStrings(DatabaseName.Bar) = _
    ConfigurationManager.ConnectionStrings("BarConnStr").ConnectionString
etc.
"Injecting" it like this can be good for automated testing purposes, depending on how/if you test your DAL. For me, it's just because I didn't want to make a separate configuration file.
For testing purposes, don't instantiate the DataContext with the default ctor; pass the connection string info to the constructor.
I prefer to use IoC frameworks to inject the connection into the data context, then inject the context into other classes.