Can One Solution Contain Many Projects in Xcode? - xcode4.5

In VB.NET this can be done: one solution can contain many projects, and one project can refer to another.
It's useful when you want to create several similar projects.
Can the same thing be done in Objective-C/Xcode?

You can create a workspace in Xcode and then add or create multiple projects inside that workspace; a workspace is the closest equivalent to a solution.

Related

What is the best practice for transferring objects across R projects?

I would like to use R objects (e.g., cleaned data) generated in one git-versioned R project in another git-versioned R project.
Specifically, I have multiple git-versioned R projects (that hold drake plans) that do various things for my thesis experiments (e.g., generate materials, import and clean data, generate reports/articles).
The experiment-specific projects should ideally be:
Connectable - so that I can get objects (mainly data and materials) that I generated in these projects into another git-versioned R project that generates my thesis report.
Self-contained - so that I can use them in other non-thesis projects (such as presentations, reports, and journal manuscripts). When sharing such projects, I'd ideally like not to need to share a monolithic thesis project.
Versioned - so that their use in different projects can be independent (e.g., if I make changes to the data cleaning for a manuscript after submitting the thesis, I still want the thesis to be reproducible as it was originally compiled).
At the moment I can see three ways of doing this:
Re-create the data cleaning process
But: this involves copy/paste, which I'd like to avoid, especially if things change upstream.
Access the relevant scripts/functions by changing the working directory
But: even if I used the here package, it seems this would still hurt reproducibility.
Make the source projects into packages and make the objects I want to "export" into exported data (as per the data section of Hadley's R packages guide)
But: I'd like to avoid the unnecessary metadata, artefacts, and noise (e.g., see Miles McBain's "Project as an R package: An okay idea") if I can.
Is there any other way of doing this?
Edit: I tried @landau's suggestion of using a single drake plan, which worked well for a while until (similar to @vrognas' case) I ended up with too many sub-projects (e.g., conference presentations and manuscripts) relying on the same objects. I have therefore added some clarifications above about my intentions with the question.
My first recommendation is to use a single drake plan to unite the stages of the overall project that need to share data. drake is designed to handle a lot of moving parts this way, and it will be more seamless when it comes to drake's decisions about what to rerun downstream. But if you really do need different plans in different sub-projects that share data, you can track each shared dataset as a file_out() file in one plan and track it with file_in() in another plan.
library(drake)
library(readr)  # write_csv() / read_csv()

# Upstream project: track the exported dataset as an output file.
upstream_plan <- drake_plan(
  export_file = write_csv(dataset, file_out("exported_data/dataset.csv"))
)
# Downstream project: track the same file as an input.
downstream_plan <- drake_plan(
  dataset = read_csv(file_in("../upstream_project/exported_data/dataset.csv"))
)
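Note that the relative path assumes the two project folders sit side by side, and that the upstream plan's make() has already been run, so exported_data/dataset.csv exists before the downstream plan is built.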
You fundamentally misunderstood Miles McBain's critique. He isn't saying that you shouldn't write reusable code, or that you shouldn't use packages. He's saying that you shouldn't use packages for everything. But reusable code (i.e. code that you want to reuse) absolutely belongs in packages (or, better, modules), which can then be used in multiple projects.
That being said, first off, pay attention to Will Landau’s advice.
Secondly, you can make your RStudio projects configurable so that they load data from paths given in a configuration file. Once that's in place, there's nothing wrong with hard-coding paths to data from different projects inside that config file.
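For example, a minimal sketch of that idea, assuming the config package and a config.yml at the project root (the file name, field name, and path below are hypothetical):
# config.yml (hypothetical):
# default:
#   upstream_dataset: "../upstream_project/exported_data/dataset.csv"
library(readr)
cfg <- config::get()                       # reads config.yml from the project root by default
dataset <- read_csv(cfg$upstream_dataset)  # the path lives in the config file, not in the code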
I am in a similar situation. I have many projects that are spawned from one raw dataset. Previously, when the project was young and small, I had it all in one version-controlled project. This got out of hand as more sub-projects were spawned and my git history got cluttered from working on projects in parallel. This could be due to my lack of skills with git. My folder structure looked something like this:
project/.git
project/main/
project/sub-project_1/
project/sub-project_2/
project/sub-project_n/
I contemplated having each project in its own git branch, but then I could not access them simultaneously. If I had to change something in the main dataset (e.g., I might not have cleaned some parts), then project 1 could become outdated and nonfunctional. Once I had finished project 1, I would have liked it to be isolated and contained for reproducibility. This is easier to achieve if the projects are separated. I don't think a drake/targets plan would solve this?
I also looked briefly into having the projects as git submodules but it seemed to add too much complexity. Again, my git ignorance might shine through here.
My current solution is to have the main data as an R package, and each sub-project as a separate git-versioned folder (they are actually packages as well, but this is not necessary). This way I can load in a specific version of the data (using renv for package versions).
My folder structure now looks something like this:
main/.git
sub-project_1/.git
sub-project_2/.git
sub-project_n/.git
And inside each sub-project, I call library(main) to load the cleaned data. Within each sub-project, a drake/targets plan could be used.
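For illustration, a rough sketch of that setup, assuming the data package is called main and exports a cleaned_data object (all names here are hypothetical):
# Inside the main package: store the cleaned data as an exported dataset,
# as described in the data chapter of Hadley's R Packages.
cleaned_data <- clean_raw_data(raw_data)           # hypothetical cleaning step
usethis::use_data(cleaned_data, overwrite = TRUE)  # saves data/cleaned_data.rda

# Inside a sub-project: load the renv-pinned version of the data package.
library(main)
data("cleaned_data")   # the dataset exported by main
renv::snapshot()       # record the exact version of main used by this sub-project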

Are there any complex application examples in Grakn & Graql?

Are there any existing complex use-case examples leveraging Graql and Grakn beyond the example on GitHub?
https://github.com/graknlabs/examples/tree/master/python/queries
There are three main places to look for examples on Grakn:
The docs at dev.grakn.ai
Grakn's blog
The examples repo
You might also want to check out the BioGrakn project, which brings together multiple biomedical data sources and looks for insights across the resulting knowledge graph. There are blog posts on that too.

I want to generate boilerplate code in my repository pattern project

As the title suggests, I am creating an open-source project in .NET Core 2.0; here is its architecture.
Now, it's working fine with everything including Code First, seeders, Swagger UI, TDD, etc.
But there are many places where I have to add or modify classes whenever I want to add a new table to the database (see SimpleCRUD.Model > Entities).
So I think I can reduce that boilerplate code, but I am not sure of the best way to do it.
What I did so far?
I tried to create a Windows app that checks for newly added entities and generates code for them.
What I am trying to achieve?
Is there any way I can add something to my current project that runs this check after each build? Is it feasible? Any other suggestions to make it work well?
Reference
I have seen this working in a few other frameworks like Serenity, ASP.NET Boilerplate, etc.
T4 templates can help cut down the boilerplate...
https://dotnetthoughts.net/generate-your-database-entities-using-t4-templates/
You've asked for my help here.
I agree with other posters that you might want to look into T4. It sounds like you also want to create an MSBuild task.
I outlined the steps to do this for a different question in a post here.
You can find my code generators under this folder, CodeGen.SessionProxies
The t4 example can be found here: AppSessionPartials.tt
The MSBuild task can be found here: GenerateSessionProxies.cs
I had it generating a NuGet package through the CodeGen.SessionProxies.nuspec. You won't find it on nuget.com; I had a local NuGet repository. It would be helpful to look at the corresponding install.ps1 to understand how to set the generator up as an MSBuild task.
Disclaimer: All of the GitHub links are subject to break if I ever decide to clean up that repo.
Cheers

Share getter/setter classes across asp.net applications

Maybe there's something obvious that I'm missing or maybe not. Suppose I have a class that is just a representation with getters/setters and no logic. I'm going to use these structures for serialization/deserialization mostly. Suppose I use that object in many, many applications. Suppose I have dozens of these objects. What's my best approach to sharing these objects?
I understand that I can compile an object into a DLL and reference that DLL. But if I have dozens of these objects, do I compile them all separately so I can use just what I need, or do I make and maintain a monster DLL with all of these objects in it? Both of those approaches seem bad. I don't want to create a class library for every single class (that's stupid), and throwing them into a giant package just seems like a bad idea.
Am I missing something simple? Doesn't Java have a convention where one can create JAR files of one to many classes? Does .NET do something like that?
You need a happy middle ground.
You should be grouping related objects into individual namespaces.
You can then compile each namespace into a separate DLL. That way, whoever is using the libraries only needs to reference a single DLL per group of functionality.
You can have a master assembly containing all the objects, and then also create separate assemblies for the different applications, to which you only add the classes you use as links.
You would then use Project -> Add Existing Item, click the down-arrow on the Add button, and select "Add As Link" when adding the classes you want.

Flex Best Practices - Multiple Flex Projects or 1 Project, multiple Application MXML files

Having seen several different ways of setting up larger projects in Flex, I'm wondering what your opinions are on how to organize projects that will require two or more different applications, for example a public and a private site within the same project.
The two main ways that I know of would be first, creating one flex project, and then adding different mxml application files. Both applications would be able to share code.
The other way (which I currently like, but have no way of justifying), would be to create a different flex project for each application, and any code that needs to be shared could be part of a shared flex library. I guess something about the separation of the applications I like more, especially since I'm either working on one or the other at a time.
What are your opinions, and do you have any reasons for doing it one way or the other?
I recommend the library approach. That said, you can still use multiple applications in one workspace (and I do), but it's handy to keep the "one project, one application" rule. My workspace might have 5 projects, each of which has an MXML application, and 4 library projects, which have none.
I have used the common library approach; it gives more decoupled code. The common library can also be used by other projects later. Two applications in one project feel mixed together and poorly organised to me.
One project per application. I agree with everyone else. I would add that common libs are a good way to go as well. If you are working for a client that has you build 2 or 10 applications, you will certainly want to reuse features, both to save time and so that the applications share common themes and functionality.
I find that a good rule to follow is: if you tend to use a feature more than two or three times, it is a good candidate to be placed in a common lib.
I usually structure my projects by feature. An example would be something like... take an MP3 player application.
I would have the following packages
com.yourdomain.applicationname.mp3controls
com.yourdomain.applicationname.albumlistings
Each feature would contain commands, model, and view packages to start.
Then maybe you find that you really like the mp3controls feature and want to use it in some other apps, say a video player application. The mp3controls could then be put into a common lib and perhaps renamed to something like "mediacontrols".
