I'm migrating a very large Rails 2.3 application from Ruby 1.8 to 1.9. Along the way, I've had some database encoding issues that, it seemed, could only be resolved by moving from the old ruby-mysql gem to mysql2.
This has worked fine for all the ActiveRecord::Base ORM-style queries (@users = User.find(:all, :conditions => {...}), etc.), but the application also relies heavily on querying the DB directly for performance reasons. It's quite common to see things like this:
ActiveRecord::Base.connection.execute(optimized_sql).each_hash do |row|
  # do some stuff with row
end
OR
# for specific connections (different servers, etc.)
client = Mysql.real_connect(host, username, password, schema)
client.query(tweaked_sql).each_hash do |row|
  # do some stuff with row
end
OR
# for batch inserts
client.autocommit(false)
insert_list.each { |insert| client.query(insert) }
client.commit
I should note that this querying is done mostly in designated class files written for that purpose - not in controllers, models and such, but mostly in code under app_root/lib/. I also cannot seem to find equivalents in mysql2 for many features I used in the old gem. A good example is the #autocommit method (used to batch queries, like multiple INSERTs).
I would like to smooth the transition by using both gems - mysql2 for all ActiveRecord stuff, and ruby-mysql for direct client connection to the database. However, when I include both in my app's Gemfile, Rails seems to default to one or the other. Is there a way to configure the Gemfile to only include ruby-mysql but not automatically require it when the app loads?
How can I make sure both are present, and only use require 'mysql' in the files where I strictly want to use the old gem? Is there any other approach I should be taking? Converting the entire app in one stroke is a pretty big risk, and I would like to give my team some time to adapt and transition old code from Mysql to Mysql2.
thanks.
Although Rails isn't really made for that, you definitely can:
http://robbyonrails.com/articles/2007/10/05/multiple-database-connections-in-ruby-on-rails
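As for the Gemfile part of the question: Bundler lets you keep a gem in the bundle without auto-requiring it at boot, and you then require it manually only in the files that need it. A minimal sketch, assuming a Bundler-enabled Rails 2.3 app (the lib file name is just a placeholder):

# Gemfile
gem 'mysql2'                    # ActiveRecord keeps using the mysql2 adapter from database.yml
gem 'mysql', :require => false  # stays in the bundle, but is not loaded at app boot

# app_root/lib/legacy_report_builder.rb (hypothetical file)
require 'mysql'                 # load the old gem only where the old API is still used

client = Mysql.real_connect(host, username, password, schema)
client.query(tweaked_sql).each_hash do |row|
  # do some stuff with row
end

And if you do eventually port the direct-connection code to mysql2, there is no #autocommit helper as far as I know, but you can get the same batching effect by toggling autocommit (or opening a transaction) with plain SQL - again just a sketch:

client = Mysql2::Client.new(:host => host, :username => username,
                            :password => password, :database => schema)
client.query("SET autocommit = 0")   # or client.query("BEGIN")
insert_list.each { |insert| client.query(insert) }
client.query("COMMIT")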
I'm currently trying to make some speed improvements to one of my sites, and I'm looking at Modernizr usage.
Previously all of my JavaScript (including Modernizr) was lumped into one big JS file. I've now pulled Modernizr out, and it sits inline in the head section of the page. For clarity, it is a custom build.
However, not all feature detects are equal - some features benefit from being detected quickly while others can wait.
For instance, detecting WebP support is pretty important, because I assume downloading a JPEG and then a WebP version of the same image sort of defeats the object of the feature.
Then, there are things like pointer/touch support, which don't affect layout as such and are more to do with interaction - so they can wait.
With that in mind, the obvious thing is to put two instances of Modernizr in the page - one for the important stuff at the top, and one for the rest at the bottom.
However, I've been unable to find anything on this topic. I guess that leads me to ask two questions: is it possible? And if it is - is it a sensible idea?
It definitely is possible to have two instances of Modernizr on one page, but in order to do that you have to manually rename the global object to something else, since Modernizr is attached to window directly:
  // near the end of the Modernizr source - this is the assignment you would rename,
  // e.g. e.ModernizrCritical = Modernizr;
  e.Modernizr = Modernizr; // e is an internal reference to the window object
}(window, document);
This, however, may be considered a dirty patch, since you have to alter the production code (and maintain that alteration manually through update cycles), and you download and execute exactly the same basic functionality twice, which is less than optimal.
Another approach would be to build only what is needed immediately for the first batch of (essential) tests, and then to use Modernizr.addTest (it has to be included in the build) later on for the non-essential functionality.
Source and doc-like comments.
Of course, you'd have to write your own tests. You may rely on the official Modernizr tests, but addTest called outside of Modernizr's factory method lacks some useful helpers (for example, Modernizr's internal createElement()).
You have to make choices since there is no way to subsequently add other tests out of the box.
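For illustration, a deferred detect registered from a script near the end of the page might look roughly like this; the test name is whatever you choose, and the body here is just the common touch-events check, not something taken from your build. It assumes addTest made it into the inline head build, as mentioned above:

Modernizr.addTest('touchevents', function () {
  // non-essential detect, registered after the critical inline build has already run
  return ('ontouchstart' in window) ||
         (window.DocumentTouch && document instanceof DocumentTouch);
});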
Why is some data on the server side still stored in DBC files rather than in the SQL DB? In particular - spells (spells.dbc). What for?
We have a lot of bugs in spells, and it's very hard to understand what's wrong with a spell - but it's even harder to find it in the spell data...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work: databases are getting stronger and more versatile, and having hard-coded data is proving hard to work with. Hell, DBCs aren't really that heavy anyway, and the reason why we haven't made this change yet is that... we have no reason other than it being a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list, if you check the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves - we would lose all the changes when they are re-extracted, or if the files are lost - we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections(). In it, by altering bits, you can remove or add certain properties to the desired spells instead of touching the hard-coded DBC files.
The main problem with making this change is that we would have to modify the core to support it, and the function above already contains a lot of corrections, so it would need intensive testing to make sure nothing gets broken in the process.
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the comment about damage can be misleading, but that's because I made a mistake and I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs are now useless server-side (but probably still needed client-side - they are called DataBaseClient for a reason), like item.dbc.
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch the DBCs because we don't do custom modifications, so there was no interest in having them server-side.
But we wanted to change this and started making them available directly in the DB; if you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields, and we've also created some great conversion tools for DBC<->SQL, it's just a matter of "effort".
The SQL conversion is useful to avoid using client data on the server (you can completely overwrite it if you don't want to go against the EULA), or simply to extend/customize it.
Here is the issue about the DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584
This is a real question...
Is there a way to use less.js so that it generates static CSS files - files that are cached and reprocessed on demand rather than on the fly?
That is, generate cached CSS files to avoid on-the-fly generation and get the performance benefits of static files, while keeping the flexibility of writing in LESS.
This is even more interesting when using a CSS framework like Bootstrap, where you need to reuse LESS variables.
This is meant to be used in a production environment (which we neither own nor control) by web designers who are not able to install anything on the server side.
Thanks a lot for any answer!
You need some sort of compiler like this: http://wearekiss.com/simpless
I'd like to get some information regarding using MySQL alongside ASP.NET (particularly MVC 3). From what I've found and experienced, it doesn't seem quite as customizable in terms of the Membership and User classes which come with ASP.NET, especially when it comes to validation or registration.
For example, after configuring my web.config file to use MySQL, I realized that, although a fair number of tables were auto-generated for me to use, I wasn't able to change their names. Because of this, it seemed as though if I were to change a column name, or add a column to a table, it wouldn't quite work with the system, since everything has been pre-built.
Yet with ADO.NET/Entity Framework, it appears that I might actually have more freedom in how I go about creating my websites using MS SQL. Is this true? Is MySQL just not meant for ASP.NET, despite the fact that you can install and use it at your leisure? Or does it just require more work to get everything working, so that you kind of have to reinvent the wheel by creating your own database classes and validation tools?
I'm not trying to bash either MySQL or MS SQL; I'm simply looking for a good analysis of the topic, as Google hasn't helped me much in this area.
This is more an issue with the default providers, and one of the many reasons why the first thing I did when I learned about them was to try to make my own. (To be clear, creating your own from scratch does require a fair amount of work; there are a few good tutorials out there that can give you a quick start.)
[It'd make all our lives easier if the .NET framework used interfaces for the providers rather than the base class...]
To be clear, the big thing with the auto-generated providers is that the stored procedures they use require the specified names; if you want to change the table names, you'll have to update all the sprocs as well. (This is true for any custom provider you may choose to build/use.)
I'm working with Drupal on a project, trying to find a way to speed up tests (we're using Cucumber and Selenium). I'm trying to see which tables have been changed in a given series of steps, so I can just dump and reset those tables between each test case.
Right now Simpletest, the Drupal testing framework, works by installing and setting up the tables for every module needed for a test, which makes for slow tests, and I'm emulating a similar approach by loading a DB dump for each test.
Given that a site, if you're doing integration testing, has a 'known good' state to start from, I think it would be faster to just revert back to that point each time, instead of waiting twenty seconds or so to drop the database and then pipe the dump file back in between test runs.
However, when I try diffing two dump files (i.e. before.I.create.a.node.sql and after.I.create.a.node.sql), the output is an unreadable load of serialised PHP that I can't make sense of.
Are there any tools I can use to help work out which tables I need to drop and rebuild between test cases, so I don't incur the 20-second hit on each test, short of reading the schema and code of every module I'm working with?
I'm following the ideas outlined here for getting Cucumber to work with PHP, and yes, I have seen this question here on a similar subject.
Thanks!
Drupal does store a lot of serialized PHP in the database, but the main part of it is kept in the cache tables - cache, cache_field, cache_menu, etc. - and you can safely truncate these before dumping the database.
If you have any simpletest tables, you could drop those too. They are all temporary and are used only for running the Simpletest test suite.
That should reduce the dump size a lot. If it's not enough, I can recommend reading up on the tables in the book Pro Drupal Development, or you could skim through the .install files to read the modules' schema definitions. Though most of the rest will probably be real data you'd want to revert between tests.
Because of the relational nature of the database, be sure to either know exactly what you're doing or dump/revert all the remaining tables together.
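If the Cucumber side of your setup is Ruby (the linked approach can go either way), one way to apply the above is a small helper that trims the disposable tables before you take the reference dump you restore between scenarios. This is only a sketch - the connection details and the table list are placeholders you'd adjust to your install:

require 'mysql2'

# Hypothetical helper: truncate cache tables and drop leftover Simpletest tables
# so the 'known good' dump stays small and fast to restore between tests.
DISPOSABLE_TABLES = %w[cache cache_field cache_menu cache_page] # adjust to your site

def shrink_reference_db(host, user, pass, db)
  client = Mysql2::Client.new(:host => host, :username => user,
                              :password => pass, :database => db)
  DISPOSABLE_TABLES.each { |table| client.query("TRUNCATE TABLE `#{table}`") }
  client.query("SHOW TABLES LIKE 'simpletest%'").each do |row|
    client.query("DROP TABLE `#{row.values.first}`")
  end
end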