Is there a method to create an incremental .df file from 2 different .df files? Or do I have to load both files into 2 blank databases and then use the create incremental .df file feature from Data Administration tool?
I'm using Openedge 10.2B08
A Data Definition (.df) file is a listing of things to add, drop or update in a database. It is in plain text so you can view it in a text editor. You can cut-and-paste the contents of one .df file into another. However, you may run into problems if the changes from the two files conflict. For example, file 1 may say to drop field xyz, while file 2 says to update field xyz. This will cause an error and the entire .df will be backed out.
If you're sure there are no conflicts, just paste the contents of file 2 into file 1, just above the footer. The footer is the last five lines in the file:
.
PSC
cpstream=ISO8859-1
.
0000000610
The very last line is a character count. You may have trouble loading the new .df if you don't update that to match the new file length. And be sure to test the .df before trying it in production.
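If you want to script the cut-and-paste rather than do it by hand, the steps above are easy to automate. Here is a minimal Python sketch, assuming the standard five-line footer shown above and assuming the trailing count is the total character length of the merged file including the count line itself; verify that convention against a known-good .df from your own dumps before trusting it:

```python
def merge_df(primary_text, extra_text):
    """Merge two Progress .df files: insert the body of the second
    just above the footer of the first, then rewrite the trailing
    character count.  Does NOT detect conflicting definitions --
    that check is still on you."""
    lines = primary_text.rstrip("\n").split("\n")
    footer = lines[-5:]          # ".", "PSC", "cpstream=...", ".", count
    body = lines[:-5]
    extra = extra_text.rstrip("\n").split("\n")
    # Drop the footer from the second file too, if present.
    if len(extra) >= 5 and extra[-5] == "." and extra[-4] == "PSC":
        extra = extra[:-5]
    # Rebuild everything except the count line.
    merged = "\n".join(body + extra + footer[:-1]) + "\n"
    # Count line is 10 zero-padded digits plus a newline (11 chars),
    # so the final length is known before we write it.
    total = len(merged) + 11
    return merged + f"{total:010d}\n"
```

As the answer above says: test the result by loading it into a scratch database before it goes anywhere near production.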
Loading them both into a blank database and then dumping one solid DF remains the best solution, in my opinion.
Of course you could shave a couple of minutes by appending one file to the other; I think you can even remove the footer and not bother, and it should still work.
As with everything, it depends on how critical the situation is. Are you looking at significant downtime on a production database? Usually there shouldn't be much compromise on whatever will be applied to production. A solid DF is better than a "hacked 99.9% safe" one. That's the difference between a good and a bad DBA. The good one may seem to work a bit slower over a decade, but at some point in that decade the bad one will eventually cause critical downtime for the business, totally offsetting the silly productivity advantage he appeared to have.
I fixed countless mistakes all around for the past 15 years; I made one. It's not a fun feeling: being woken up early on a Monday by a panicked helpdesk guy describing the issue, quickly realising it's related to the previous night's maintenance I did, replying to get all users out, the business grinding to a halt country-wide while I'm trying to figure out what's wrong and how to fix it. 2,500 employees were being paid but weren't able to work... with customers in front of them with money to spend and no time to lose.
Took me 3 hours. It wasn't a lazy mistake... just a mistake with no easy way to notice during the usual post-update quick tests to make sure it runs fine. We had GUI code running against a training database while the usual business logic was being executed on Unix against production.
You don't need to be a math genius to compute that a silly DBA mistake was costing more than his annual salary every few minutes.
Mistakes happen, but folks, if skipping a few minutes of added work is the root cause of one, it's time to leave the seat to someone else, I'm afraid. No shame there; it's simply a job that requires that mindset, and some people need decades to eventually get caught off-guard and realize it. Nothing is very fun about spending precious time triple-checking a 99.9%-safe update... nobody will notice the added effort next Monday, as it will work as usual regardless. But everybody will yell if it's not working, your fault or not.
My multi-million mistake was never mentioned again once it was solved. Everyone knew very well that I can count the lost cash and that I've never cut corners for any reason during my whole career. It's still only money... I could now go on with the mistake that almost killed two young workers a few years later.
Stress and way too many hours can lead to a reflex F1 keypress and kill mechanics working in some automation device.
Stay safe with that keyboard guys, it's serious business ;)
How are you? For a while I've been working for a gynecologist, building her a database. For the project I am using Firebase and JavaScript. The database is for her to keep track of her patients, and she keeps reports on each one of them. I am almost done with the job: the UI is almost finished, and the core functionalities of the database (save, delete, retrieve, and update data) are up and running, but I am stuck on one little thing. She asked me for a way to turn those reports she keeps in the database into a format like PDF so she can print them and hand them to her patients if needed. The thing is, I've tried html2pdf, a Git repository that works kind of clunkily, and tried looking for others, but I still can't find one that works correctly. So I wanted to ask you guys if you know of some alternatives. I started thinking about using Excel or a Word document, but either way it seems quite complicated. Thank you for your time.
Best to all.
Hi, we are at a point with our WordPress website where it would seem appropriate to update the WP version. We have an existing busy site and have paid a good amount to hack plugins and core stuff to get it working the way we want it to. I'm debating the wisdom of updating the entire foundation of the site over a few minor vulnerabilities and enhancements that aren't interesting to us. My thinking is this.
The reason for upgrading appears to be because a given version may have some security issues. So you go through the painful process of updating, which usually kills all your plugins. You then spend many frustrating hours talking to plugin support people telling you 'it must be clashing with other plugins', or you take the time to / pay someone to fix everything (again).
After upgrading you have to take time to relearn the system and all the changes. You may have to adjust your workflow due to these changes, and maybe retrain your entire team. And after all that, in 5 minutes hackers find the security issues with the current version - which MAY be worse than your previous version - and you have to go through the whole operation again.
The aim of running a website is to not spend each and every day dealing with upgrade issues. The aim is to have a system that does what you need it to do, the way you want to do it. Once you have that, it's not useful to keep changing it 'just because', it's not a fashion show. It's not an iPhone, pushing users to upgrade their entire phone because they added a letter on the end of the phone name and changed the color from grey to a slightly different grey.
I am of the opinion that it is much more economical - once you have a system set up the way you need it - to just get a dev to fix any vulnerabilities in your existing WP code. And, somewhere down the line, if there was a VERY good reason you should update (e.g. major new PHP version) - then build the site from scratch and get a dev to migrate the data. This could be 7 years later or more. The time effort and money you would save doing it this way, seems a lot more logical.
Say you were building a Ford Model T in your garage. You are ordering parts from a supplier and you are halfway through the build when your supplier starts sending you parts from a Ford Capri - "Oh we are doing parts for Capris now". So you can the Model T and start building a Capri. Halfway through the build, your supplier starts sending you parts for a Mustang. And so on. You will spend your entire life half building that car and at no point end up driving the C*.
Given the performance issues of WP out the box - and the logical progression of a successful site to start migrating to a bespoke solution - it would make sense to me to take your existing WP, strip out the crap you don't need and optimize everything. At that point, it's not really WP any more, vulnerabilities fixed, and it is essentially already a bespoke solution without needing to start from the absolute ground up.
Does anyone have any thoughts on this? Serious question, we need to decide if we are going to go through all this a 4th time and I'm not really feeling it. Any input would be appreciated, thanks. To make this 'it must be a specific question' I will just ask: are you a very experienced WP dev, and do YOU keep jumping through the update hoop every 5 mins?
Why is some data on the server side still stored in DBC files rather than in the SQL database? In particular, spells (spells.dbc). What for?
We have a lot of bugs in spells, and it's very hard to understand what's wrong with a spell, but it's even harder to find it in the DBC...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work. Databases are getting stronger and more versatile, and having hard-coded data is proving hard to work with. Hell, DBCs aren't really that heavy anyway, and the only reason we haven't made this change yet is that it's a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list if you check the issue tracker at the main repository.
While it's true that we can't really edit DBC files, because we would lose all the changes when the files are re-extracted or lost, we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with making this change is that we have to modify the core to support it, and the function above contains a lot of corrections. It would need intensive testing to make sure nothing is screwed up in the process.
In there, by altering bits, you can remove or add certain properties on the desired spells instead of touching the hard-coded DBC files.
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the commentary about damage can be misleading, but that's because I made a mistake, and I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason), like item.dbc.
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch DBC because we don't do custom modifications, so there was no interest in having them server side.
But we wanted to change this and started to make them available directly in the DB, if you wish to help with that, it would be nice!
Why?
Because when emulation started, DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields, and we've also created some great DBC<->SQL conversion tools, it's just a matter of "effort".
The SQL conversion is useful to avoid using client data on the server (you can totally overwrite it if you don't want to go against the EULA), or just to extend/customize it.
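To picture why a generic parser was cheap to extend even when most fields were unknown: every classic WotLK-era DBC starts with a small fixed "WDBC" header, so a reader only needs the header to walk the records and can treat every column as a raw value until its meaning is discovered. A rough Python sketch (illustrative only, assuming the common layout of magic "WDBC" followed by four little-endian uint32s for record count, field count, record size, and string-block size, with record_size == fields * 4):

```python
import struct

# "WDBC" magic + record count, field count, record size, string block size
WDBC_HEADER = struct.Struct("<4s4I")

def parse_wdbc(data: bytes):
    """Parse a WDBC blob into rows of raw uint32 fields.

    Real tools then map each column to its discovered type
    (int, float, or an offset into the string block) per DBC."""
    magic, records, fields, record_size, string_size = WDBC_HEADER.unpack_from(data, 0)
    if magic != b"WDBC":
        raise ValueError("not a WDBC file")
    rows = []
    offset = WDBC_HEADER.size
    for _ in range(records):
        rows.append(struct.unpack_from(f"<{fields}I", data, offset))
        offset += record_size
    return rows
```

With rows in that shape, emitting SQL INSERT statements (the DBC->SQL conversion discussed above) is mechanical, which is why it's described as a matter of effort rather than difficulty.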
Here is the issue about the DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584
Recently, I've noticed strange behavior by Subversion. Occasionally, and seemingly randomly, the "svn up" command will wreak havoc on my CSS files. 99% of the time it works fine, but when it goes bad, it's pretty damn terrible.
Instead of noting a conflict as it should, Subversion appears to be trashing all incoming conflict lines and reporting a successful merge. This results in massively inconvenient manual merges because the incoming changes effectively disappear unless they're manually placed back into the file.
I would have believed this was a case of user error, but I just watched it happen. We have two designers that frequently work on the same CSS files, but both are familiar and proficient with conflict resolution.
As near as I can figure, this happens when both designers have a large number of changes to check in and one beats the other to the punch. Is it possible that this is somehow confusing SVN's merging algorithm?
Any experience or helpful anecdotes dealing with this type of behavior from SVN are welcome.
If you can find a diff/merge program that's better at detecting the minimal changes in files of this structure, use the --diff3-cmd option of svn update to invoke it during the merge (note that --diff-cmd only affects svn diff output).
It may be tedious but you can check the changes in the CSS file by using
svn diff -r 100:101 filename/url
for example, and stepping back from your HEAD revision. This should show what changes were made at each revision (and svn log will show by whom). It sounds like a merging issue I've had before, but unfortunately I found myself resolving it by looking at previous revisions and merging them manually too.
I have reviewed DashCommerce, nopCommerce and DotShoppingCart for possible use and all of them seem to not allow any way to do bulk product/category/manufacturer/etc imports from existing data (DotShoppingCart seems to allow it only in the paid version).
The company I work for has some 30,000 products that we would need to load, and at least a thousand categories or so. Obviously this is ridiculous to have to manually type in, and as I've stated before in previous questions the company is insanely cheap and won't pay for software, so I need a free solution.
I don't have the time to roll my own solution by following the ASP.NET MVC Storefront series, or else I would just do that. My boss seems to think creating an online store is trivially simple. (I had slapped together a Classic ASP site a few months back, but we recently changed our primary vendor, so I can't use most of it; it was pretty much hacked together anyway, and I can't really reuse it without reworking a lot of it for the new supplier.) I don't want to hear it from him if/when I tell him I need a couple of months; he's already waited 90 days, has some SEO expert on retainer ready to start blogging/marketing it, and doesn't understand that writing software takes time. It's not something that can be thrown together in a week or even a month.
Is there anything out there that can meet these requirements? In a pinch I guess I could install DashCommerce or something and interrogate the database schema it creates to force an import myself to give him a quick solution that he seems to want.
It would take about 20 minutes to write a simple app that connects to the database and inserts the rows. All you need is a loop that reads a row and writes a row...
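That read-a-row, write-a-row loop really is all there is to it. A sketch in Python using sqlite3 and csv as stand-ins: the real target would be whatever schema DashCommerce (or similar) creates, so the Product table and column names here are hypothetical and you would substitute the ones you find by interrogating the actual database:

```python
import csv
import sqlite3

def bulk_import(csv_path, conn):
    """Read product rows from a CSV export and insert them into a
    (hypothetical) Product table: read a row, write a row."""
    cur = conn.cursor()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cur.execute(
                "INSERT INTO Product (Sku, Name, Price, CategoryId) "
                "VALUES (?, ?, ?, ?)",
                (row["sku"], row["name"], float(row["price"]),
                 int(row["category_id"])),
            )
    conn.commit()
```

With 30,000 products you'd want to wrap the whole loop in a single transaction (as above) rather than committing per row, and insert categories first so the foreign keys resolve.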