Regarding ACGAN, the Auxiliary Classifier Generative Adversarial Network: what are your suggestions for improvement? - generative-adversarial-network

ACGAN makes a good attempt by adding class-label information; what else can be improved?


Is having multiple features for the same data bad practice (e.g. use both ordinal and binarized time series data)?

I'm trying to train a learning model on real estate sale data that includes dates. I've looked into 1-to-K binary encoding, per the advice in this thread, however my initial assessment is that it may have the weakness of not being able to train well on data that is not predictably cyclic. While real estate value crashes are recurring, I'm concerned (maybe wrongfully so, you tell me) that doing 1-to-K encoding will inadvertently overtrain on potentially irrelevant features if the recurrence is not explainable by a combination of year-month-day.
That said, I think there is potentially value in that method. I think that there is also merit to the argument of converting time series data to ordinal, as also recommended in the same thread. Which brings me to the real question: is it bad practice to duplicate the same initial feature (the date data) in two different forms in the same training data? I'm concerned if I use methods that rely on the assumption of feature independence I may be violating this by doing so.
If so, what are suggestions for how to best get the maximal information from this date data?
Edit: Please leave a comment on how I can improve this question instead of down-voting.
Is it bad practice?
No; transformations often make a feature more accessible to your algorithm, so converting features is completely fine.
Does it skew your algorithm?
Regarding runtime, it may be better not to have to transform your data every time. Depending on your algorithm and on the type of transformation, you may also lose interpretability (if that matters to you).
Also, if you want to restrict the number or set of features your algorithm uses, adding transformed features introduces information redundancy.
So what should you do?
Transform your data and features as much and as often as you want.
That doesn't hurt anything; rather, it helps by enriching the feature space. But after you do so, run a PCA or something similar to find redundancies among your features and reduce the feature space again.
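As a concrete illustration (a minimal Python sketch; `encode_date` is a hypothetical helper, not something from the question), the same raw date can be turned into both an ordinal feature and a cyclic month encoding:

```python
from datetime import date
import math

def encode_date(d):
    """Encode one date two ways from the same raw feature:
    an ordinal (days since epoch) and a cyclic month encoding.
    Both are derived from d, so they carry overlapping information."""
    ordinal = d.toordinal()                   # monotone "time passing" signal
    angle = 2 * math.pi * (d.month - 1) / 12  # month mapped onto a circle
    return {
        "ordinal": ordinal,
        "month_sin": math.sin(angle),         # cyclic components avoid the
        "month_cos": math.cos(angle),         # Dec -> Jan discontinuity
    }

features = encode_date(date(2015, 1, 15))
```

Feeding both encodings to the model and then pruning redundancy with PCA, as suggested above, lets the algorithm pick up whichever representation actually carries the signal.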
Note:
I tried to keep this general; obviously it depends heavily on the kind of algorithm you're using.

Vector norms and Finding Maximum (value and index)

I'm running some performance-sensitive code and looking to improve speed. I use vnormdiff and findmax a lot and wondered whether these are the most efficient functions available. Any thoughts greatly appreciated.
Whenever you encounter a performance problem, it's good to look at it from two angles. First, is my overall algorithm the best it can be? If you're using an O(N^2) algorithm where an O(N) one is available, that can make an enormous difference. It sounds like you're examining neighbors, so some of the more refined nearest-neighbor algorithms (whose suitability depends on dimensionality) might be of assistance.
Second, no discussion about optimization can really get started without profiling information. There's documentation on Julia's profiler here, and a graphical tool for inspecting it here.
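To make the profiling advice concrete, here is a sketch in Python rather than Julia (the workflow with Julia's `@profile` is analogous); `pairwise_min_dist` is a made-up stand-in for a neighbor-scanning hotspot:

```python
import cProfile
import io
import pstats

def pairwise_min_dist(points):
    """Deliberately naive O(N^2) neighbor scan -- the kind of
    hotspot a profiler will surface immediately."""
    best = float("inf")
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            best = min(best, (x1 - x2) ** 2 + (y1 - y2) ** 2)
    return best

points = [(i % 97, i % 89) for i in range(300)]

profiler = cProfile.Profile()
profiler.enable()
result = pairwise_min_dist(points)
profiler.disable()

# Collect the most expensive calls, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("min squared distance:", result)
```

Once the report confirms that the quadratic scan dominates, that is the place to apply a better algorithm (e.g. a spatial index) rather than micro-optimizing elsewhere.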

Usage of Difference Equations (recurrence relations) in IT fields

I'm doing an IT diploma in which Mathematics is also taught. At the moment we have to learn Difference Equations (recurrence relations), and I'm confused about the uses of these concepts in different areas of IT such as computing, algorithms and data structures, circuit analysis, etc.
Can someone please explain why we learn these concepts and how they are used? It would be helpful for my learning.
Unutbu has an excellent explanation and link to what is a recurrence relation. Here is a small window into the why.
Computer scientists need to know how algorithms compare to one another (in terms of speed or memory). Recurrence relations help provide this comparison, to an order of magnitude.
Here is a general question: how many possible moves in chess are there? If there are 318 billion possible ways to play the first four moves, could you use an exhaustive algorithm to search for the best first move? If that is impractical, what algorithms might trim that to a reasonable size? Complexity measures give us insight into the difficulty of the problem and into the practicality of a given algorithm.
Recurrence relations come up in complexity theory. The number of steps performed by a recursive algorithm can be expressed as a recurrence relation, such as T(n) = a*T(n/b) + f(n).
The goal then is to express T as (or bound T by) a (hopefully simple) function of n.
The master theorem is one example of how this is done.
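For instance, the recurrence for a divide-and-conquer algorithm such as merge sort can be evaluated directly and compared with the Theta(n log n) bound the master theorem predicts (a small Python sketch):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Steps of merge sort on n elements (n a power of two):
    T(n) = 2*T(n/2) + n, with T(1) = 1."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# The master theorem predicts T(n) = Theta(n log n); the ratio
# T(n) / (n * log2(n)) should settle toward a constant.
for k in (4, 8, 16):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))
```

Here the exact closed form is T(n) = n*log2(n) + n, so the printed ratio approaches 1 as n grows, which is exactly the "bound T by a simple function of n" step described above.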

What is the network analog of a recursive function?

This is an ambitious question from a Wolfram Science Conference: Is there such a thing as a network analog of a recursive function? Maybe a kind of iterative "map-reduce" pattern? If we add interaction to iteration, things become complicated: continuous iteration of a large number of interacting entities can produce very complex results. It would be nice to have a way of seeing the consequences of the myriad interactions that define a complex system. Can we find a counterpart of a recursive function in an iterative network of connected nodes which contain nested propagation loops?
One of the basic patterns of distributed computation is Map-Reduce: it can be found in Cellular Automata (CA) and Neural Networks (NN). Neurons in an NN collect information through their synapses (reduce) and send the result to other neurons (map). Cells in a CA act similarly: they gather information from their neighbors (reduce), apply a transition rule, and offer the result to their neighbors again (map). Thus *if* there is a network analog of a recursive function, then Map-Reduce is certainly an important part of it. What kinds of iterative "map-reduce" patterns exist? Do certain kinds of map-reduce patterns result in certain kinds of streams, or even vortices or whirls? Can we formulate a calculus for map-reduce patterns?
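The map/reduce reading of a CA update can be made concrete with a toy sketch (majority rule on a ring, with Python's `functools.reduce` standing in for the "reduce" step; this is an illustration of the pattern, not a claim about any particular CA):

```python
from functools import reduce

def step(cells):
    """One synchronous update of a 1-D majority-rule cellular automaton,
    phrased in map/reduce style: each cell reduces its neighborhood to a
    count, then a map applies the transition rule to every position."""
    n = len(cells)

    def gather(i):
        # reduce: fold the 3-cell neighborhood (with wraparound) into a count
        return reduce(lambda acc, j: acc + cells[j % n], (i - 1, i, i + 1), 0)

    # map: apply the transition rule (majority wins) at every position
    return [1 if gather(i) >= 2 else 0 for i in range(n)]

print(step([0, 1, 1, 1, 0, 0]))
```

Iterating `step` is the "continuous iteration of interacting entities" the question describes; each pass is one gather/apply cycle over the whole network.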
I'll take a stab at the question about recursion in neural networks, but I really don't see how map-reduce plays into this at all. I get that neural networks can perform distributed computation and then reduce it to a more local representation, but the term map-reduce names a very specific brand of this distributed/local piping, mainly associated with Google and Hadoop.
Anyway, the simple answer to your question is that there isn't a general method for recursion in neural networks; in fact, the closely related and simpler problem of implementing general-purpose role-value bindings in neural networks is still an open question.
The general reason why things like role binding and recursion are so hard in artificial neural networks (ANNs) is that ANNs are highly interdependent by nature; indeed, that is where most of their computational power comes from. Function calls and variable bindings, by contrast, are sharply delineated operations: what they include is an all-or-nothing affair, and that discreteness is a valuable property in many cases. So implementing one inside the other without sacrificing computational power is very tricky indeed.
Here is a small sampling of papers that try their hand at partial solutions. Luckily for you, a great many people find this problem very interesting!
Visual Segmentation and the Dynamic Binding Problem: Improving the Robustness of an Artificial Neural Network Plankton Classifier (1993)
A Solution to the Binding Problem for Compositional Connectionism
A (Somewhat) New Solution to the Binding Problem

ERD (Entity-relationship model) - Need an example of a problem that it cannot model

I got a weird homework assignment in a requirements engineering seminar.
I've dug through the entire net yet found no example for this one...
I need to find a problem from any engineering field (mechanics, medicine, chemistry, programming, etc.) for which the ERD model fails to give a complete answer, or any answer at all.
Can anyone show me some examples of a loss of information/process/entity when modeling only in ERD?
Maybe a point where I would need to compromise in order to continue modeling?
Or at least, what are ERD's limitations?
I need an example of a limitation/disadvantage, not an example of a common mistake or bad modeling.
Well, ERD only deals with the static aspects of a problem, i.e. the structure of the data that needs to be manipulated. All information about the dynamic aspects of the problem (processes, ordering of events, state changes over time) is lost if you model only with an ERD. For example, an ERD of a traffic-light system can capture the lights and junctions as entities, but not the timing or sequencing of the signal changes.
