I'm just starting to use Xcode 4, and I'm trying to find the file in a project where it stores all of a project's Schemes. I figured they would be stored in a file in the xcodeproj directory somewhere, but for the life of me I can't find which one.
All of my projects are stored on an SVN server, and I'd like to keep the scheme info with the project. Right now when you check out a project fresh, the schemes don't come along with it.
EDIT: After playing with this a bit more, it appears that schemes are each stored as separate files in xcuserdata/username.xcuserdatad/xcschemes/MyScheme.xcscheme, with an xcschememanagement.plist file to keep them all sorted.
So my new question: is there a way to store these at a per-project scope instead of a per-user scope, so that when another developer opens the same project, they'll see the same schemes I set up?
Finally found the answer on somebody's Twitter. Schemes are stored per user by default, but if you go to Manage Schemes and click the "Shared" checkbox on the far right for each one, they'll show up in the xcshareddata directory instead of your xcuserdata directory, where they'll be seen and used by everyone. Hopefully this will help someone else trying to figure out the same thing!
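For reference, once a scheme is marked as Shared it should end up inside the project bundle at a path along these lines (the scheme name below is just an example), which can then be checked into SVN along with the rest of the .xcodeproj:
MyProject.xcodeproj/xcshareddata/xcschemes/MyScheme.xcscheme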
Hi everybody. Sorry if I'm asking something trivial. I looked into the previously asked questions, to no avail. If this has already been asked, I beg your pardon, and please point me in the right direction.
I have a number of single form WinForms apps that I'm in the process of converting to appx for the store.
So far so good. No issues with that.
However, one of my apps uses the application folder to save some data to a temporary file:
Dim sw As New StreamWriter(Application.StartupPath + "\somefile.csv", True)
The .exe program of course works with no issue.
The converted .appx program complains that I have no write permission to the WindowsApps folder and its subfolders. I quickly solved it by taking ownership of the folder and giving myself full control over it.
Is there any way to prevent the error message from appearing on a generic machine, other than trivially changing the source code to point the temporary folder to some other path?
Clearly I don't want the admin to give the user full control over the WindowsApps folder.
Writing to the package folder is not allowed. You need to change your code to write to a location that is writeable for the app/user, for example to the AppData folder.
This is documented in the Desktop Bridge preparation guide:
https://learn.microsoft.com/en-us/windows/uwp/porting/desktop-to-uwp-prepare
(it's the eighth bullet point)
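If it helps, here is a minimal VB.NET sketch of that change (the "MyCompany\MyApp" folder is just a placeholder; "somefile.csv" is the file from the question); it writes the temporary CSV to the per-user local AppData folder instead of the package install folder:

Imports System.IO

Module TempFileDemo
    Sub WriteTempCsv()
        ' %LOCALAPPDATA%\MyCompany\MyApp is writable for the current user,
        ' unlike the WindowsApps package folder.
        Dim folder As String = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "MyCompany", "MyApp")
        Directory.CreateDirectory(folder)

        Using sw As New StreamWriter(Path.Combine(folder, "somefile.csv"), True)
            sw.WriteLine("some,data")
        End Using
    End Sub
End Module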
Opening a recent workspace shows the directories and files that I opened before, but how do I get their directory locations? I tried right-clicking, but I see no information.
I expect something like this from Code Runner.
[I'm a member of the open source Light Table team.]
Light Table doesn't provide that info as you expect.
I agree that that would be a really useful feature, especially when you have multiple directories with the same name in different paths.
Maybe a tooltip when you hover over a directory that shows the full path would be the minimal change that would be helpful.
If you're willing to contribute, or know someone that is, feel free to create new issues in the GitHub repo for each of the Code Runner features you'd like to be implemented. Otherwise, add them to the feature wishlist wiki page.
These features could be implemented as plugins but I think some of them might arguably be fine implemented in the core app code. Pull requests welcome!
I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high level overview of the chain is thus:
The web developer creates the code and checks it into SVN;
The build server sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their webserver(s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and then superimpose your customer's customisations on top of your standard setup. If it's a brand new install then there will be nothing to superimpose.
top-level
    standard files            <- never touched or changed by the customer
        images
        settings.txt
    customer files            <- the customer hacks this to their heart's content
        images
        settings.txt_override
Yes, this does mean that some kind of merging process needs to happen, and there needs to be a script that does it, but this approach has several advantages:
For settings that suddenly become redundant, just issue a warning to that effect.
If a customer has their own logo, provide the ability to specify it in the override file.
The message to customers is clear: stay off the standard files.
If customers request more customisable settings, then during upgrades write the default value into the override file if it does not already exist.
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt", and if it exists the upgrade .bat script will run. The version number would be contained within the text file. A bit basic, but it should work.
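For example, a rough sketch of what UpgradeScript.bat could look like (the C:\Test2 path is just the destination from the commands above, purely illustrative):

@echo off
setlocal enabledelayedexpansion
rem Treat the presence of version.txt in the target site as the signal
rem that this is an upgrade rather than a fresh install.
set SITE=C:\Test2
if exist "%SITE%\version.txt" (
    set /p OLDVERSION=<"%SITE%\version.txt"
    echo Upgrading from version !OLDVERSION! - running upgrade-only steps...
    rem merge override settings, clean caches, etc. here
) else (
    echo Fresh install - no upgrade steps needed.
)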
This also has the added advantage of letting you more elegantly merge customers' custom settings between versions, as you know which properties could be overridden for that particular version.
Here are some general suggestions (not specific to msdeploy), but I hope they help:
I think you'll need to provide several installers anyway: for the initial setup and for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this will give you some pointers.
As for merging the replaced logos and other client customizations, I think the best approach would be to support branding your application. I mean, move all branding details to a place where your new/upgrade installers won't touch them.
As for the rest of the adjustments made by your clients, they do that at their own risk, so the only help you could provide is a detailed list of changes (maybe even the list of changed files since the previous version) and a How-To article about merging the sources with tools like Araxis Merge or similar.
Or you could create a utility and include it in the installer, which would try to do all the tricky merging stuff on the client's machine. I would not recommend this, as it requires a lot of effort/resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade. That way, even if a client has trouble upgrading, it will always be possible to roll back. The only other thing is to provide a good feedback channel your clients can use to report their troubles. This feedback will let you figure out what troubles your clients have and how to make their upgrade process more comfortable.
I would build on what the above have said, but I would do it with transformations, and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory. This is the transform that takes the production generic template and creates a web config for your development server. (It sounds like you are shooting for a CI-type process here.)
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
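To make that concrete, a customer transform could be a plain XDT file layered over the generic web.config during the deploy, roughly along these lines (the keys, names, and values here are purely illustrative):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- overwrite the generic connection string with the customer's own server -->
    <add name="Main"
         connectionString="Server=CUSTOMER-DB;Database=AcmeApp;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <appSettings>
    <!-- add a customer-specific setting that the generic config does not have -->
    <add key="LogoPath" value="~/branding/customer-logo.png" xdt:Transform="Insert" />
  </appSettings>
</configuration>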
Is that what you were looking for? Thoughts?
As part of my overall development practices review I'm looking at how best to streamline and automate our ASP.net web development practices.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copy until project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — many many times, there are environment specific paths and variables (such as DB names, image upload directories, web server paths etc. etc.) which incorrectly get copied from local to staging to live etc. etc. with very embarrassing results.
I'm pretty confident I've got the first one under control. What about configuration management? Does anyone have any advice on how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt and NUnit with CruiseControl.NET solves a lot of the issues you describe. I think it works well for small groups, and it's all free. You just need to learn how to use them.
CruiseControl.net helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; i.e. when coding a PHP-based site I have a file named config.php-dist. If you want the site to work at all you'll have to copy + edit in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
Upload directories should be relative if possible; actually all directories should be relative. I'm not experienced in ASP.net, but if it's anything like PHP the current directory is always the directory of the file being requested. If you channel all requests through a single file (i.e. index.asp), then this can even be found programmatically. Or you could find it programmatically by using the equivalent of dirname(__FILE__) in your configuration file.
I also recommend installing IIS (or whatever webserver you are using) on all development workstations (including the designers'). Makes life easier as no one can step on each other's toes. What one has to do is simply add test hosts to the hosts file (\windows\system32\drivers\etc\hosts iirc) in addition to adding a site to the local IIS. This plays well with version control (checkout, add site to IIS and hosts file, edit edit edit, commit).
One thing that really helps is making sure you keep your paths relative where you can and centralise them where you can't, so when I've been working with ASP.Net I have tended to use web.config to store any configuration and path related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object - it's worth looking in some detail over what the environment makes available to you.
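As a quick illustration (a Web Forms code-behind is assumed here, and the names are placeholders), both the virtual application root and physical folders can be resolved at runtime rather than configured per environment:

Partial Class SomePage
    Inherits System.Web.UI.Page

    Protected Sub Page_Load(sender As Object, e As EventArgs) Handles Me.Load
        Dim appRoot As String = Request.ApplicationPath        ' e.g. "/acmesite"
        Dim uploadDir As String = Server.MapPath("~/uploads")  ' physical path under the site root
    End Sub
End Class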
One way to make sure you don't end up with something that is dependent on the path name is to have a continuous integration server execute your test suite against your application. Each time this happens, you create a random file path. As soon as someone introduces a dependency on the file path, it will fail.
I have a Flex app that does a fair amount of network traffic; it uses ExternalInterface to make some javascript calls (for SCORM), it loads XML files, images, video, audio, and it has a series of modules that it could be loading at some point...
So the problem is - we now have a requirement where the user needs to run this content locally on a machine that is not connected to the internet (which means they can't connect to Adobe's site to change their security settings). As you can imagine, when the user double-clicks the html page to launch this thing, they are greeted with a security warning that the swf is trying to communicate with a domain other than the one it's in. We can't wrap it in an exe or an AIR app, so unless there is some way to tweak some obscure security settings, we may be hosed. Any ideas?
What you are trying to do is exactly the problem solved by AIR. You should really give it a try, it's not that hard to pick up. If you really really can't use AIR (you didn't specify why, so I assume it's just because you don't want to have to learn a new system), then modifying the security config file will solve the problem.
Basically what you need to do is create a 'trust' file in the "Global FlashPlayerTrust" directory. This can be done by your installer (which installs all the javascript, SWF, html, etc files onto the local machine). You should create the directory if it does not exist. The directory for each OS is:
Windows - %WINDIR%\System32\Macromed\Flash\FlashPlayerTrust
Mac - /Library/Application Support/Macromedia/FlashPlayerTrust
Linux - /etc/adobe/FlashPlayerTrust
Next, you need to create the trust file. You can name it anything, so pick a unique name that would be unlikely to conflict with others. Something like CompanyName.cfg. It's a text file, with one path per line. You can trust either one SWF at a time, or an entire directory. Example:
C:\Program Files\MyCompany\CoolApp
C:\Program Files\MyCompany\OtherApp\Main.swf
To test that it's working, inside your flash movie you can check System.security.sandboxType (ActionScript 1 or 2), or Security.sandboxType (ActionScript 3). It should have the value of "localTrusted"
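For example, a tiny ActionScript 3 check along these lines (a sketch, for testing only) can be dropped into the main movie:

import flash.system.Security;

if (Security.sandboxType == Security.LOCAL_TRUSTED) {
    trace("localTrusted - the trust file was picked up");
} else {
    trace("sandbox is: " + Security.sandboxType);
}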
I hesitate to say "you can't do it", but in my experience, there's no way to do what you're describing. If anyone knows otherwise, I'd love to know the trick.
Sorry that I haven't actually tried this to see if it works or not ... but ...
Page 20 (and/or 26) of this document may be of help. The document is referenced here. In a nutshell it describes directories which contain cfg files which in turn contain lists of locations on disk which should be regarded as trusted. An installer for the application would then be responsible for creating appropriate .cfg files in the desired location (global or for the installing user).
The short answer is that if your swf is compiled with use-network set to true, it isn't going to work.
Is it possible to compile a version with use-network set to false? Or is it running on an intranet that is closed off from the Internet but still communicating with the LMS?
It is possible. Please check whether the SWFs you are calling from the main SWF have the "Access local files only" property enabled.
Did you try to specify the authorized domain with:
System.security.allowDomain("www.yourdomain.com");