My team uses Gerrit for code review, and sometimes we have to push .patch files for certain files. These .patch files can reach over ~1000 lines, which is obviously bad for reviewing. It is very inconvenient to review them as a diff between the patches themselves. In some cases it would be better to review the diff between the original file and the original file with the patch applied (not the diff between the .patch files). Even if the original file isn't under version control, the committer could attach it with the patch set, right?
Unfortunately, after lengthy googling I didn't find anything...
Is there any way to show the diff between two patch files (not patch sets) in this way, or something similar?
I think the best you can do is to find a three-way merge tool to compare
the original source;
the source with the old patch; and
the source with the new patch.
To use such a tool, I think you'd need to use the command lines the Gerrit interface provides to fetch the changes locally, then apply the patches, then use the merge tool for the review.
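For example, a rough sketch of that workflow from the command line (the repository URL, change ref, and file names here are made up; copy the real fetch command from the change's Download menu in Gerrit):

    # fetch the change locally (Gerrit shows the exact command in its UI)
    git fetch https://gerrit.example.com/myrepo refs/changes/34/1234/1
    git checkout FETCH_HEAD
    # build the three versions of the file to compare
    patch -o with-old.c original.c old-version.patch
    patch -o with-new.c original.c new-version.patch
    # review them in a three-way merge tool, e.g. kdiff3
    kdiff3 original.c with-old.c with-new.c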
I don't think you'll find a way to get a "three-way merge" view of the code in the Gerrit UI itself.
I am currently writing a text with R bookdown and have asked two friends to read it and give comments, corrections, and general feedback. My source files are stored on GitHub, and I would like my collaborators to make changes to the files (one per chapter) with the help of git. However, none of us are really git experts, which makes it hard to figure out a suitable workflow.
For now, we decided that each of them creates his own branch so that he does not push directly into the master branch. After reading their changes, I would like to decide what gets merged into the master branch and what does not. So far, it looks like each change needs to be in a separate commit, because I am not able to merge single lines from a specific commit (not sure whether that is possible at all). However, this seems like a lot of annoying and unnecessary commits to create. So I guess I am looking for a way to avoid that, and/or general pointers towards a good workflow for this kind of project.
A useful command here is git cherry-pick; it allows you to apply specific commits from another branch.
A good general practice is that commits should be self-contained (they make sense if applied alone) and should target a specific feature (in the use case mentioned, that could be a paragraph, a section, or a chapter).
In the end, if you would like to apply only some changes from a commit, that has to happen manually: someone has to decide which parts to apply and which not. A commit can be edited using git rebase -i <branch name> before being merged. This question might also be useful.
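For example (the branch name and commit hash are placeholders):

    git checkout master
    git cherry-pick abc1234    # apply one specific commit from a collaborator's branch
    git rebase -i master       # run on the collaborator's branch to split/edit its commits first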
I finally found what worked for me here. Basically, on my master branch I had to run
git merge --no-commit --no-ff branch-to-merge
This merges all changes into my master branch but does not immediately commit them, so they can still be staged/unstaged. I can then decide which line changes to include by staging the ones I want to keep and discarding all the others. Finally, I commit all staged line changes, et voilà, that's exactly what I wanted.
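For reference, the full sequence on the command line looks roughly like this (the branch name is a placeholder):

    git checkout master
    git merge --no-commit --no-ff branch-to-merge
    git add -p          # interactively stage only the hunks/lines to keep
    git checkout -- .   # discard all changes that were left unstaged
    git commit          # commit what was staged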
Sidenote: I am using GitKraken, and as a git beginner I enjoy using the GUI, but the merge with the "no-commit" and "no-fast-forward" options had to be done via the git console (at least I could not find a way to do that in the GUI). Choosing which lines to stage and which to discard is then an easy task in the GUI.
Why is some server-side data still stored in DBC files rather than in the SQL database? In particular, spells (spells.dbc). What is the reason?
We have a lot of bugs in spells, and it's already very hard to understand what's wrong with a spell, but it's even harder to find the spell in the DBC files...
Spells, talents, achievements, etc. are mostly found in DBC files because that is the way Blizzard did it back in the day. It's true that in 2019 this is a pretty outdated way to work. Databases are getting stronger and more versatile, and hard-coded data is proving hard to work with. The DBCs aren't really that heavy anyway, and the only reason we haven't made this change yet is that it's a task that takes a bit of time and is monotonous to do.
We are aware that TrinityCore has already made this change, but they have far more contributors than we do, if that serves as an excuse!
Nonetheless, this is already on our to-do list; check the issue tracker at the main repository.
While it's true that we can't really edit the DBC files themselves, because we would lose all that progress when the files are re-extracted or lost, we can modify spells in a C++ file called SpellMgr.
There we have a function called SpellMgr::LoadDbcDataCorrections().
The main problem with doing this is that we would have to modify the core to support the change, and the function above already contains a lot of corrections. It would need intense testing to make sure nothing gets broken in the process.
There, by altering bits, you can remove or add certain properties of the desired spells instead of touching the hard-coded DBC files.
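As a purely hypothetical sketch of what such a correction looks like (the helper and constant names here are illustrative, not the core's actual API):

    // Inside a DBC data correction routine such as LoadDbcDataCorrections():
    // set or clear attribute bits to add/remove properties of one spell.
    if (SpellEntry* spell = GetMutableSpellEntry(SOME_SPELL_ID)) // hypothetical lookup
    {
        spell->Attributes |= SPELL_ATTR_SOME_PROPERTY;   // add a property (set a bit)
        spell->Attributes &= ~SPELL_ATTR_OTHER_PROPERTY; // remove one (clear a bit)
    }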
If you want an example, in this link, I have changed an Archimonde spell to have no cast time.
NOTE:
In this line, the comment about damage can be misleading, but that's because I made a mistake, and I haven't finished this pull request yet as of 18/04/2019.
The work has been started, notably by Kaev. I think at least 3 DBCs are now useless server-side (but probably still needed client-side; they are called DataBaseClient for a reason), like item.dbc.
Also, the original philosophy (for ALL cores, not just AC) was that we would not touch the DBCs, because we don't do custom modifications, so there was no interest in having them server-side.
But we wanted to change this and have started making them available directly in the DB; if you wish to help with that, it would be nice!
Why?
Because when emulation started, the DBC fields were 90% unknown. So developers created a parser for them that required only a few code changes to support new fields as soon as their functionality was discovered.
Now that we've discovered 90% of the required DBC fields and have also created some great DBC<->SQL conversion tools, it's just a matter of "effort".
The SQL conversion is useful to avoid using client data on the server (you can totally overwrite it if you don't want to go against the EULA) or simply to extend/customize it.
Here is the issue about the DBC->SQL conversion: https://github.com/azerothcore/azerothcore-wotlk/issues/584
I need to create a dependency graph for a software suite that I am working on. In the past the company I work for has always done this manually, but I am guessing that there is a tool somewhere that will do what we need.
The software I am working with is Ada95, and has about 200 code modules/files, with about 40 packages. I need to create a map that will trace every output, individually, back to each input or constant that will have an impact on the output. Does anybody know of a tool that would accomplish this? Or even just partially accomplish it?
AdaCore's GPS (available from http://libre.adacore.com) comes with a command-line tool named gnatinspect. You can use this tool to load all the cross-reference information generated by the compiler (assuming you are compiling with GNAT). This creates an SQLite database (gnatinspect.db) that contains all the information you need. gnatinspect itself provides a number of pre-made queries that might get you at least partially to where you want to go.
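A hypothetical session might look like this (the exact switch names can differ between GNAT versions, so check gnatinspect --help; the project name is made up):

    # compile first so the compiler emits cross-reference information,
    # then let gnatinspect load it into an SQLite database
    gnatinspect -P my_suite.gpr --db=gnatinspect.db
    # the database can then be inspected/queried directly
    sqlite3 gnatinspect.db '.tables'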
You could also look at ASIS as a way to run these kinds of queries directly on the code. I am told it is not so easy to use the first time around, though.
There is also an older tool provided with GNAT (gnatxref) which does something similar, although it is being superseded by gnatinspect.
Finally, you could look at gnat2xml as an alternative to ASIS if you are more comfortable parsing XML files.
I know you can use touch to create a new empty file.
I just learned that touch can also be used to update the access and modification times of a file. I don't quite understand in what situations and why you would need to update the access and modification times of a file, i.e. what is the usefulness of this particular feature?
Thanks!
Some utilities depend on the timestamps of files.
For example, make uses timestamps to check whether it needs to do something (usually build), based on the timestamps of the source code and of the outputs (executables, object files, ...).
By touching the source file and then running make, you can force a rebuild.
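For example (the file name is made up):

    touch src/main.c   # mark the source as newer than its outputs
    make               # make now sees the output as out of date and rebuilds it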
In addition, touch has a -d option that can fake the modification time.
If one "knows what he's doing" she can avoid long build time, due to unnecessary re-compilations.
For example, when adding a declaration to a common header file,
that does not change any old API, one can fake the header real modification time,
and bypass Makefile's dependencies.
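A sketch of that trick (the file names are made up; use with care, since make will no longer notice the header change):

    # after adding the new declaration to common.h, set its mtime back so
    # make does not rebuild every object file that depends on it
    touch -r common.h.orig common.h       # copy the timestamp of a reference file
    touch -d '2019-01-01 00:00' common.h  # or set an explicit (past) date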
I'm using KDE, and I'm toying with the idea of hacking the code of the Dolphin file manager (and potentially Konqueror if necessary) to get context-sensitive drag-and-drop behaviour (i.e. files are moved within the same partition, or copied if they're moved across partitions or the source is read-only).
To do this, I think I'd need to find out the containing partition of the source and destination (easy enough on Windows using the drive letter, but on Linux, as mount points can be almost anywhere, it can't be reliably derived from the file path), and compare them.
Does anyone know how I can find out the partition that contains a given file?
It must be possible - I know Nautilus provides this sort of behaviour, but I'm not familiar enough with GTK to track down the appropriate section of the source code to see how it's done...
Qt doesn't provide an API for this. For POSIX, have a look at stat.
For KDE, you can use KIO::stat() to get mostly the same info as POSIX's stat function, but asynchronously.
The device ID should be in the UDS_DEVICE_ID field of the result.
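As a minimal synchronous sketch using plain POSIX stat (unlike KIO::stat(), which is asynchronous): two paths are on the same partition exactly when stat() reports the same device ID for both.

    #include <sys/stat.h>

    // Returns true when both paths live on the same device/partition,
    // i.e. a drag-and-drop between them can be done as a move.
    bool samePartition(const char* src, const char* dst)
    {
        struct stat s1, s2;
        if (stat(src, &s1) != 0 || stat(dst, &s2) != 0)
            return false; // stat failed; safest to fall back to copying
        return s1.st_dev == s2.st_dev;
    }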