Display both Date and Time in the history pane of GitAhead - gitahead

I have been using GitAhead as my primary Git client for a while. The tool is quite good, as I work on both Linux and Windows. One thing that annoys me is that the branch history window shows only the time, which works for the current day's commits but not for those from previous days.
My question: can the pane be configured to show both the date AND the time, similar to other Git GUI clients?
Thanks,
David

There isn't an option for that. Please add a feature request to the issue tracker.

Related

UI.ImageView in TideSDK?

A few years back I used Titanium Desktop to make an app for Mac. Having been satisfied with it, I recently came back to it for another project, but apparently Titanium Desktop is now TideSDK.
Looking at the reference, it seems that a lot of stuff has disappeared; I was mostly expecting more UI elements, like ScrollView, ImageView and such.
Did they simply vanish from this new release, or is it just not fully documented?
First of all, I need to make clear that TideSDK is not Titanium Desktop. While it began from the legacy code, more than 1 million lines of code changes have been committed, and the SDK has been in existence for almost a year now. You will find a different namespace but API compatibility.
The code base is quite different and has been undergoing major restructuring and improvements. That said, for the end user, it is just as friendly to use. We don't like to go back and discuss the past, since we have contributed a body of code that allows developers to run TideSDK on today's modern operating systems. This was only possible through substantial effort and the continued development of TideSDK by its contributors. If you experience any issues, please file them on our issue tracker on GitHub.

Forbidding developers to commit code because of the weekly build

Our development team (about 40 developers) has a formal build every two weeks. Our process says that on the "build day", all developers are forbidden to commit code into SVN. I don't think this is a good idea, because:
The build can take days (even weeks when things go badly) to make and run through BVT.
People can't commit code when they want to, so they will not work.
People will commit all their code in one huge batch, so the commit comment is hard to write.
I want to know whether your team has the same policy, and if not, how you handle this situation.
Thanks
Pick a revision.
Check out the code from that revision.
Build.
???
Profit.
Normally, a build is made from labeled code.
If the label is defined (and does not move), every developer can commit as much as he/she wants: the build will proceed from a fixed and well-defined set of code.
If fixes need to be made to the set of code being built, a branch can be created from that label; minor fixes can be made there to achieve a correct build, before being merged back into the current development branch.
A "development effort" (like a build with its tweaks) should never block another development effort (the daily commits).
Step 1: svn copy /trunk/your/project/goes/here /temp/build
Step 2: Massage your sources in /temp/build
Step 3: Perform the build in /temp/build. If you encounter errors, fix them in /temp/build and build again
Step 4: If successful, svn move /temp/build /builds/product/buildnumber
This way, developers can check in whenever they want and are not disturbed by the daily/weekly/monthly/yearly build.
Sounds frustrating. Is there a reason you guys are not doing Continuous Integration?
If that sounds too extreme for you, then definitely invest some time in learning how branching works in SVN. I think you could convince the team to either develop on branches and merge into trunk, or else commit the "formal build" to a particular tag/branch.
We create a branch for every ticket or new feature, even if the ticket is small (e.g. it takes only 2 hours to fix).
At the end of the coding part of each iteration, we decide which tickets to include in the next release. We then merge those tickets into trunk and release the software.
There are other steps within that process where testing is performed by another developer on each ticket branch before the ticket is merged to trunk.
Developers can always code by creating their own branch from trunk at any time. Note we are a small team with only 12 developers.
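As an illustration of that branch-per-ticket workflow, here is a minimal sketch with hypothetical URLs and ticket numbers (not taken from the answer above):

    # one branch per ticket, created from trunk
    svn copy http://svn.example.com/repo/trunk \
        http://svn.example.com/repo/branches/ticket-421 \
        -m "Create branch for ticket 421"
    svn checkout http://svn.example.com/repo/branches/ticket-421 ticket-421
    # ...develop, commit and test on the branch, then at release time,
    # from an up-to-date trunk working copy:
    svn merge --reintegrate http://svn.example.com/repo/branches/ticket-421
    svn commit -m "Merge ticket 421 into trunk for the release"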
Both Kevin and VonC have rightly pointed out that the build should be made from a specific revision of the code and should never block developers from committing new code. If this is somehow a problem, then you should consider using another version control system, one that uses centralized AND local repositories. For example, in Mercurial there is a central repository just like in SVN, but developers also have a local repository. This means that when a developer makes a commit, he only commits to his local repository, and the changes will not be seen by other developers. Once he is ready to share the code with other developers, he just pushes the changes from his local repository to the centralized repository.
The advantage of this kind of approach is that developers can commit smaller pieces of code, even if they would break the build, because the changes are only applied to the local repository. Once the changes are stable enough, they can be pushed to the centralized repository. This way a developer gets the advantages of source control even when the centralized repository is down.
Oh, and you'll be looking at branches in a whole new way.
If you become interested in Mercurial, check out this site: http://hginit.com
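A minimal sketch of the local-then-central flow described above, assuming a hypothetical central repository URL:

    hg clone http://hg.example.com/central project
    cd project
    # commit as often as you like; only the local repository changes
    hg commit -m "Work in progress, may not build yet"
    hg commit -m "Feature complete and stable"
    # publish to the central repository once the work is good enough to share
    hg push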
I have worked on projects with a similar policy. The reason we needed such a policy is that we were not using branches. If developers are allowed to create a branch, then they can make whatever commits they need to on that branch and not interrupt anyone else -- the policy becomes "don't merge to main" during the weekly-build period.
Another approach is to have the weekly-build split off onto a branch, so that regardless of what gets checked in (and possibly merged), the weekly build will not be affected.
Using labels, as VonC suggested, is also a good approach. However, you need to consider what happens when a labeled file needs a patch for the nightly build -- what if a developer has checked in a change to that file since it was labeled, and that developer's changes should not be included in the weekly build? In that case, you will need a branch anyway. But branching off a label can be a good approach too.
I have also worked on projects that make branches like crazy and it becomes a mess trying to figure out what's happening with any particular file. Changes may be committed to multiple branches in the same timeframe. Eventually the merge conflicts need to be resolved. This can be quite a headache. Regardless, my preference is to be able to use branches.
Wow, that's an awful way to develop.
The last time I worked in a really large team, we had about 100 devs in 3 time zones (USA, UK, India), so we could effectively have 24-hour development.
Each dev would check out the build tree and work on whatever they had to work on.
At the same time, there would be continuous builds happening. The build would take its copy of the submitted code and build it. Any failures would go back to the most recent submitter(s) of code for that build.
Result:
Lots of builds, most of which compiled OK. These builds then kicked off automatic smoke-testing scenarios to find any unexpected bugs not found during testing prior to committing.
Build failures found early, fixed early.
Bugs found early, fixed early.
Developers only wait the minimum time to submit (they have to wait until any other dev that is submitting has finished submitting; this requirement exists so that the build servers have a point at which they can grab the source tree for a new build).
Most devs had two machines so they could work on a second bug while running their tests on the other machine (the tests were very graphical and would cause all sorts of focus issues, so you really needed a different machine to do other work).
Highly productive, continuous development with no dead time like in your scenario.
To be fair, I don't think I could work in a place like the one you describe. It would be soul-destroying to work in such an unproductive way.
I strongly believe that your organization would benefit from Continuous Integration, where you build very often, perhaps for every checkin to your code base.
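To make the idea concrete, here is a very rough sketch of what a CI server does, assuming a Unix build box, a hypothetical repository URL and a build.sh script in the checkout; a real CI tool such as CruiseControl, Hudson or TeamCity does all of this (and the notifications) for you:

    LAST=0
    while true; do
        REV=$(svn info http://svn.example.com/repo/trunk | awk '/^Revision:/ {print $2}')
        if [ "$REV" != "$LAST" ]; then
            svn checkout -q -r "$REV" http://svn.example.com/repo/trunk "build-$REV"
            # run the build; on failure, notify whoever committed since the last good build
            (cd "build-$REV" && ./build.sh) || echo "Build of r$REV failed"
            LAST=$REV
        fi
        sleep 300   # poll every five minutes
    done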
I don't know if I'll get shot for saying this, but you should really move to a decentralized solution like Git. SVN is horrible about this, and the fact that you can't commit basically stops people from working properly. With 40 people this is worth it, because everyone can continue working on their own stuff and only push what they want. The build server can do what it wants and build without affecting everyone.
Yet another example of why Linus was right when he said that in almost all cases, a decentralized solution like Git works best for real-life teams.

How can we improve our deployment and build systems?

We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When they finish a feature, the developers individually upload their changes to Staging. If the site is stable (determined by really loose testing), we upload the changes to Dev, then User Acceptance, and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If it would be good for you, you could become the Continuous Integration champion of your company. You could do some research on a good process for CI with TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
I.e., if you are just a pleb developer and your managers are the senior developers, then find another job. If you are a senior developer and your managers are the CIO types, i.e. actually running the business... then it is your job to change it.
Tell them that if you were using a key feature of the very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out when. That would mean that in the event of a subtle bug being introduced that gets past user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
One of the most important parts of using TAGS is that you can roll back to a specific point in time. Think of it as an image backup. If something bad gets deployed, you can safely assume you can "roll" back to a previous working version.
Also, developers can quickly grab a TAG (dev, prod or whatever) and deploy it to their development PC... a feature I use all the time to debug production problems.
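For illustration, a hedged sketch using the tf.exe command-line client, with hypothetical server paths and label names; the exact syntax may differ between TFS versions:

    tf label Build-42 $/MyProject /recursive
    tf get $/MyProject /version:LBuild-42 /recursive

The first command labels everything under the project for a given build; the second pulls back exactly what was labeled, which is what makes rolling back or reproducing a production build on a developer PC straightforward.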
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
You also need to tell management that you believe the level of testing done is not sufficient. This is not a unique problem for an organisation and they'll probably say they already know. No harm in mentioning it though rather than waiting for a major problem to arrive.
As for whether individuals do builds or you use an automated build process, that depends on whether you really need it, based on how many developers there are and how often you do builds.
What is the problem? As you said, you can't tell if management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The problem has to be of the nature of "our current process has failed 3 out of 10 times and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of: reduced costs, increased profits, reduced time, reduced use of resources. "Because it's widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Do you encounter times where someone made a change that went to production but never got into source control, and was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It seems like you guys are moving changes between environments rather than "builds". The key distinction is that if two changes work great in UAT because of how they interact, if only one change is promoted to production it could break there. Promoting consistent code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
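As a sketch of promoting one consistent artifact, assuming a Unix-like build agent and hypothetical paths (a labeled TFS build would serve the same purpose):

    # build once, package once, and promote the very same zip through every environment
    BUILD="webapp-$(date +%Y%m%d-%H%M).zip"
    zip -r "$BUILD" ./publish        # output of one known, tested build
    cp "$BUILD" /deploy/uat/         # the file that passes UAT...
    cp "$BUILD" /deploy/live/        # ...is byte-for-byte the file that goes live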
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would get from using tags, without actually having to go around tagging things. The system just records the time stamp, and every push of the code through the testing environments is tied to a known snapshot of code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
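For reference, TFS can also retrieve sources by date rather than by label; a hedged example with a hypothetical path and timestamp (accepted date formats depend on locale and TFS version):

    tf get $/MyProject /version:D2010-06-01T14:30 /recursive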
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to Dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and have it set to auto deploy the good dev build to stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in Visual Studio. EVERYTHING goes through the CI build and deployments. When moving to prod, our production group pulls the appropriate copy off of the build server. I even trained our QA group on how to do web testing, and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.

Slow solution loading in Visual Studio 2008

I am working on an ASP.NET 3.5 project which has 55 projects in a solution. When opening the solution in Visual Studio 2008, it takes over a minute to open - about 1 second for each project. However, if I disconnect the network cable before opening the solution, it only takes about 15 seconds! Any ideas about what could be causing the slowdown?
I had this happen to me back in the days when we were using Visual Source Safe.
Could be your source control plugin asking for updates if you have the solution under source control.
You should do some investigation, fire up Wireshark, start a capture on the interface in question and see what traffic is flowing over the wire.
Can I answer a question with a question? What is the secret to getting VS to not just die with that many projects, let alone load in a phenomenally quick 60 seconds?
At about 10-12 projects the compile time in Visual Studio becomes unbearable; at about 5-8 projects ReSharper will crash. The IDE is such a memory pig that even opening more projects by using multiple instances of VS usually isn't an option.
Anyhow, it's all about memory usage, and the odd project out is probably causing it, e.g. the one with the most files.
I had the same problem this week (5 years later!!). It was caused by a huge .suo file (almost 400 MB); deleting it fixed the problem.
A few years ago I remember a colleague having a similar problem (with a much smaller solution, and in VS2003). I can't remember the details, but I think it was related to the local ASPNET user account (or rather, to the fact that it did not exist). Not sure though...
As a side note: I usually find it more efficient to have perhaps around a handful of projects in each solution (usually one solution produces one or two assemblies used in production code), and then have a few Visual Studio instances running at the same time. 50+ projects in the same solution feels like asking for problems.
Might be that you have other dependencies though, just wanted to share my thoughts.
which has 55 projects in a solution
WOW. I can't imagine what type of solution needs that many projects. The answer is probably that your source control provider needs to refresh the status of each of the items, all of which takes time.
For edit-merge-commit style version control systems, such as Subversion, this operation doesn't take place. Try temporarily removing source control from the entire solution to see if this is the culprit.
If your solution is attached to source control, then it is trying to load up the symbols and verify which items you have checked out. So, if you have a slow connection, it is oftentimes faster to take the solution offline.
http://www.tmgirvin.com/2009/03/working-offline-with-visual-studio-2008-and-tfs.html
EDIT
Another solution which I've seen used:
create a
<name>_webTier.sln
<name>_database.sln
<name>_build.sln
(<name> is your project name)
and each of those solutions is a self-sufficient part of the entire project; that way, if you are working on the web tier and you don't need the database project or the mobile project parts to load up, you can just open the web tier solution.
The build solution contains the entire package that needs to be built, and takes a very long time to load.
I had this problem on a development machine with no internet connection and it turned out that the problem was related to a setting in IE's internet options:
Control Panel -> Internet Options -> Advanced -> Security -> Check for publisher's certificate revocation
After making sure this was unchecked my solutions started loading quickly again.

DVCS for a small company of remote employees

Here's the situation: at my small office, because we like to stay mobile and occasionally work from home, instead of having a central file server we have all the office documents in an SVN repository, and each person keeps a checkout on their own laptop. A checkout weighs in at about 3GB, and the repo with revisions in it at about 6GB. This is all working great.
The problem is that soon we won't have a small office any more - all our 5 workers will be working remotely. I had considered purchasing a dedicated server and running our SVN repository from that, except two of our workers will be really remote and will be using wireless "broadband" with a 3GB/month limit, and I'm afraid that a few large updates will really rip through their monthly allowance, not to mention taking all day to complete.
Reading a few questions on Stack Overflow, it seems there's quite a community of distributed VCS aficionados who think git or mercurial is definitely the best for many situations. Given that all the employees would still be able to meet face-to-face at least once a fortnight (and hence be on a fast LAN), I'm wondering if a DVCS would work for us?
I don't know exactly what's in your repo, but unless you're changing all the files regularly, a DVCS should give you a very desirable workflow.
You could do an svn -> git conversion, stick the repo on a DVD and mail it out to all the satellite offices, and then let them fetch from the office as things change, at a fairly low incremental cost (in general it should be no larger than the delta).
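A minimal sketch of that setup, assuming a standard trunk/branches/tags layout and hypothetical server addresses:

    # one-time conversion at the office
    git svn clone --stdlayout http://svn.example.com/repo office-docs
    # ship the initial clone out on a DVD; after that, each remote worker only needs:
    git remote add office git://office.example.com/office-docs.git
    git fetch office    # transfers only the compressed changes since the last fetch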
Check out the Fossil DVCS; it may fit the bill. Fossil can be used like SVN or as a DVCS. If you are concerned about whether it can handle your current repository, try it out. It also has a built-in project wiki and bug-tracking system that are distributed with the repository as well. You could try it out and see if it would work for your small team.
The pain for you would be losing your revision history; at this time I don't believe you can import an SVN repository into Fossil.
Join the mailing list and you will get answers to any of your questions. The creator of SQLite is also the creator of this project. Hope this helps.
I can't see why not. With something like Git, the repository is local to the machine, so your remote employees can actually have a tracked changelog that can then be merged or rebased with the main repository (whatever you decide that to be) when they get the chance.
Also, git has really good compression compared to SVN, so the 3GB/mo quota may be more than enough for your remote employees.
Randal Schwartz actually gave a really good presentation on git at Google's Tech Talks: http://www.youtube.com/watch?v=8dhZ9BXQgc4
(It seems no one is answering this.) A DVCS of course seems like it would work, but I have no experience with it. A centralized system like SVN might also work if you are not expecting large changes to go up to and back from the server daily. The initial checkout in that case would be the only really expensive operation.
Can you monitor your usage now and see how much traffic goes back and forth?
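One rough way to do that, assuming a Linux machine (or gateway) with vnstat installed and eth0 as the interface facing the SVN server (both are assumptions):

    vnstat -i eth0 -m    # monthly traffic totals for that interface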
The real problem here is the 3GB/mo bandwidth limitation. It's probably just better to come up with a better solution for connectivity...
