Modelling raw material shelf life constraints in a production planning model

Can someone help me formulate a mathematical constraint for the following problem?
I have a classic purchasing and production lot-sizing problem, and I would like to add raw material shelf life (perishability) constraints to my model. The purchased raw materials can be stored in two different storage tanks, and a raw material's shelf life depends on which tank it is stored in. For example, raw material r stored in tank P1 has a shelf life of one (1) time period, whereas the same material stored in tank P2 has a shelf life of two (2) time periods.
// Parameters
SL[r][p] = Shelf life of raw material r if stored in tank p
E[r][i] = Proportion of raw material r required per unit of product i
// Decision Variables
Q[r][t] = Quantity of raw material r purchased in time period t
X[i][t] = Quantity of product i produced in time period t
IR[r][t] = Inventory of raw material r in time period t
W[r][t] = Quantity of raw material r discarded in time period t because its shelf life has expired
Now I would like to build an inventory constraint saying that the produced quantities X can only come from raw materials that have not yet exceeded their shelf life. In other words, I can produce a certain amount of product i only if the required raw materials are still within their shelf life (a sketch of one possible formulation follows below).
I hope I was able to explain my problem.
Thanks in advance!
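Not a definitive answer, but a common way to handle shelf life in lot-sizing models is to age-index the inventory so that stock older than its shelf life simply cannot exist. Below is a minimal sketch in the notation above; the tank-allocation variable B, the disaggregated inventory A, the consumption variable U, and the age index a are assumptions added for illustration:

// Additional (assumed) decision variables
B[r][p][t] = Quantity of raw material r purchased in t and placed in tank p
A[r][p][t][a] = Inventory of r in tank p at the end of t that is a periods old (0 <= a <= SL[r][p])
U[r][p][t][a] = Quantity of that a-period-old stock consumed by production in t

// Purchases are split across tanks and enter the stock at age 0:
sum_p B[r][p][t] = Q[r][t]
A[r][p][t][0] = B[r][p][t] - U[r][p][t][0]

// Aging: whatever is left at age a-1 in period t-1 becomes age-a stock in period t:
A[r][p][t][a] = A[r][p][t-1][a-1] - U[r][p][t][a]   for 1 <= a <= SL[r][p]

// Stock still on hand at its maximum age is discarded:
W[r][t] = sum_p A[r][p][t][SL[r][p]]

// Production can only draw on unexpired stock:
sum_i E[r][i] * X[i][t] = sum_p sum_a U[r][p][t][a]

// The aggregate inventory ties back to the original variable:
IR[r][t] = sum_p sum_a A[r][p][t][a]

Since no variable A[r][p][t][a] exists for a > SL[r][p], production in period t can never use material that has outlived its shelf life, which is exactly the restriction described.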

Related

Retail CDT (Category Decision Tree) in R

I am trying to analyze a large dataset of retail store transactions within a particular category.
My goal is to build a market structure tree based on shopper choices: which attribute is the most/least important (attaching a photo as an example).
What is the right way to do this in R?
I thought about choice modelling using mlogit, but I can't understand how to decide the ranking of the attributes.
[Image: pet treat category tree]
If you have customer or user data, I would recommend using a distance matrix based on odds ratios for product proximity within a category. This way you can see and interpret customer choices; a rough sketch of this follows below.
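A minimal sketch of that idea (the basket matrix, product names, and clustering settings are illustrative assumptions, not from the question): compute a pairwise co-purchase odds ratio for every pair of products, turn it into a distance, and cluster to get the tree.

# Illustrative sketch: co-purchase odds ratios -> distance matrix -> category tree
set.seed(1)
baskets <- matrix(runif(1000) > 0.7, nrow = 100,
                  dimnames = list(NULL, paste0("prod", 1:10)))  # fake transactions x products

odds_ratio <- function(a, b) {
  # 2x2 co-occurrence table with a 0.5 continuity correction to avoid division by zero
  tab <- table(factor(a, c(FALSE, TRUE)), factor(b, c(FALSE, TRUE))) + 0.5
  (tab[2, 2] * tab[1, 1]) / (tab[2, 1] * tab[1, 2])
}

p <- ncol(baskets)
d <- matrix(0, p, p, dimnames = list(colnames(baskets), colnames(baskets)))
for (i in seq_len(p)) for (j in seq_len(p)) if (i != j)
  d[i, j] <- 1 / (1 + odds_ratio(baskets[, i], baskets[, j]))  # high odds ratio -> small distance

plot(hclust(as.dist(d), method = "average"))  # the dendrogram is the category tree

With real data, the interesting part is labelling the splits with attributes (brand, flavour, pack size), e.g. by checking which attribute best separates the clusters at each level.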

Plot movement over time in (preferably) Google Maps

I have a spreadsheet with columns for person, date, event, place name, latitude, and longitude. This is the result of many years of genealogical research that shows the birth, marriage, and death locations for several hundred of my direct ancestors as they migrated across the world and finally converged in South Africa for the last few generations.
I'd very much like to create an animation or video showing their movements over time, preferably with a marker flashing at the location, then fading away, with or without lines linking the markers for the duration of the person's life.
At 9 generations ago this would show 512 births happening at roughly the same time, moving on to them converging into 256 places as couples got married; then, between those 256 marriages and the original 512 deaths, the 256 births of the next generation would flash on, and so on, finally converging on just my birth. I believe such an animation would be an excellent way to make the vast family tree accessible in a visual way, and other genealogical researchers would probably also enjoy doing this.
The ability to automatically zoom in on the bounding box of the locations at any given time would be needed to show movements within a smaller geographic area, but first and foremost I simply want to plot points over time.
Does anyone know of a free or commercial tool that would allow doing this? I have explored this in most genealogical software solutions but they provide very limited tools showing one person or one couple at a time, so I suspect I'm going to have to plug this into a generic 'plot movement over time' tool in a good map service.
I have used GraphXR for plotting family tree members linked to one of their several maps, with the edges being either a birth, marriage or death date. The data is queried from Neo4j which has a seamless interface with GraphXR.
I'm now working on a Neo4j PlugIn for genealogy and collaborating with GraphXR developers to make such visualizations easier for end users.
It's not exactly what you are looking for, but it may be helpful?
http://gfg.md/blogpost/7

How to transform global attribution Markov Chains results into transaction-level for real-time revenue attribution?

I'm currently building a data-driven attribution model for my company. It relies on Markov chains, and I have used the ChannelAttribution package in R, which is great!
It gave me results at the global level: the weight of every channel on total conversions and revenue. But now I have to move towards the deployment phase in our systems.
To do that, I need every new transaction's revenue to be distributed over the channels in its path. My problem is that the Markov model for attribution is not a "predictive" model, so there is no output at the transaction level (it basically simulates paths to calculate removal effects, so no granular information).
Does anyone have an idea on how to translate the global output of the model into a set of rules that would allow distributing the revenue of new transactions in real time (or approximately real time, e.g. daily)? I guess additional hypotheses or another layer of modelling could do the trick, but I can't put my finger on it (one candidate heuristic is sketched below).
Any help appreciated!
Thanks,
Baptiste
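One simple heuristic, offered as an assumption rather than anything ChannelAttribution outputs itself: keep the global channel weights from the Markov model and, for each new transaction, re-normalize them over the channels that actually appear in its path. A minimal R sketch with illustrative weights:

# Global channel weights from the Markov model (illustrative numbers)
global_w <- c(search = 0.40, display = 0.25, email = 0.20, social = 0.15)

# Split one transaction's revenue over the channels in its path,
# proportionally to the global weights (repeated touches get repeated credit)
attribute_revenue <- function(path, revenue, w = global_w) {
  w_path <- w[path]
  revenue * w_path / sum(w_path)
}

attribute_revenue(c("search", "email", "search"), revenue = 100)

Re-fitting the Markov model periodically (e.g. daily) refreshes the weights, so the rule stays close to the global attribution as long as the path mix is stable.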

Parsing RTF files into R?

Couldn't find much support for this in R. I'm trying to read a number of RTF files into R to construct a data frame, but I'm struggling to find a good way to parse the RTF and ignore the structure/formatting of the file. There are really only two lines of text I want to pull from each file, but they're nested within the structure of the file.
I've pasted a sample RTF file below. The two strings I'd like to capture are:
"Buy a 26 Inch LCD-TV Today or a 32 Inch Next Month? Modeling Purchases of High-tech Durable Products"
"The technology level [...] and managerial implications." (the full paragraph)
Any thoughts on how to efficiently parse this? I think regular expressions might help me, but I'm struggling to form the right expression to get the job done.
{\rtf1\ansi\ansicpg1252\cocoartf1265
{\fonttbl\f0\fswiss\fcharset0 ArialMT;\f1\froman\fcharset0 Times-Roman;}
{\colortbl;\red255\green255\blue255;\red0\green0\blue0;\red109\green109\blue109;}
\margl1440\margr1440\vieww10800\viewh8400\viewkind0
\deftab720
\itap1\trowd \taflags0 \trgaph108\trleft-108 \trbrdrt\brdrnil \trbrdrl\brdrnil \trbrdrt\brdrnil \trbrdrr\brdrnil
\clvertalt \clshdrawnil \clwWidth15680\clftsWidth3 \clbrdrt\brdrnil \clbrdrl\brdrnil \clbrdrb\brdrnil \clbrdrr\brdrnil \clpadl0 \clpadr0 \gaph\cellx8640
\itap2\trowd \taflags0 \trgaph108\trleft-108 \trbrdrt\brdrnil \trbrdrl\brdrnil \trbrdrt\brdrnil \trbrdrr\brdrnil
\clmgf \clvertalt \clshdrawnil \clwWidth14840\clftsWidth3 \clbrdrt\brdrnil \clbrdrl\brdrnil \clbrdrb\brdrnil \clbrdrr\brdrnil \clpadl0 \clpadr0 \gaph\cellx4320
\clmrg \clvertalt \clshdrawnil \clwWidth14840\clftsWidth3 \clbrdrt\brdrnil \clbrdrl\brdrnil \clbrdrb\brdrnil \clbrdrr\brdrnil \clpadl0 \clpadr0 \gaph\cellx8640
\pard\intbl\itap2\pardeftab720
\f0\b\fs26 \cf0 Buy a 26 Inch LCD-TV Today or a 32 Inch Next Month? Modeling Purchases of High-tech Durable Products\nestcell
\pard\intbl\itap2\nestcell \lastrow\nestrow
\pard\intbl\itap1\pardeftab720
\f1\b0\fs24 \cf0 \
\pard\intbl\itap1\pardeftab720
\f0\fs26 \cf0 The technology level of new high-tech durable products, such as digital cameras and LCD-TVs, continues to go up, while prices continue to go down. Consumers may anticipate these trends. In particular, a consumer faces several options. The first is to buy the current level of technology at the current price. The second is not to buy and stick with the currently owned (old) level of technology. Hence, the consumer postpones the purchase and later on buys the same level of technology at a lower price, or better technology at the same price. We develop a new model to describe consumers\'92 decisions with respect to buying these products. Our model is built on the theory of consumer expectations of price and the well-known utility maximizing framework. Since not every consumer responds the same, we allow for observed and unobserved consumer heterogeneity. We calibrate our model on a panel of several thousand consumers. We have information on the currently owned technology and on purchases in several categories of high-tech durables. Our model provides new insights in these product markets and managerial implications.\cell \lastrow\row
\pard\pardeftab720
\f1\fs24 \cf0 \
}
1) A simple way, if you are on Windows, is to open the file in WordPad or Word and then save it as a plain text document.
2) Alternatively, to parse it directly in R: read in the RTF file, find the lines matching the pattern pat, producing g. Then replace any \\' strings with single quotes, producing noq. Finally, remove pat and any trailing junk. This works on the sample, but you might need to revise the patterns if there are additional embedded \\ strings other than the \\' which we already handle:
Lines <- readLines("myfile.rtf")      # read the raw RTF as plain text lines
pat <- "^\\\\f0.*\\\\cf0 "            # content lines start with an \f0 ... \cf0 prefix
g <- grep(pat, Lines, value = TRUE)   # keep only the matching lines
noq <- gsub("\\\\'", "'", g)          # replace the \' escape with a plain quote
sub("\\\\.*", "", sub(pat, "", noq))  # strip the prefix and any trailing control words
For the indicated file this is the output:
[1] "Buy a 26 Inch LCD-TV Today or a 32 Inch Next Month? Modeling Purchases of High-tech Durable Products"
[2] "The technology level of new high-tech durable products, such as digital cameras and LCD-TVs, continues to go up, while prices continue to go down. Consumers may anticipate these trends. In particular, a consumer faces several options. The first is to buy the current level of technology at the current price. The second is not to buy and stick with the currently owned (old) level of technology. Hence, the consumer postpones the purchase and later on buys the same level of technology at a lower price, or better technology at the same price. We develop a new model to describe consumers'92 decisions with respect to buying these products. Our model is built on the theory of consumer expectations of price and the well-known utility maximizing framework. Since not every consumer responds the same, we allow for observed and unobserved consumer heterogeneity. We calibrate our model on a panel of several thousand consumers. We have information on the currently owned technology and on purchases in several categories of high-tech durables. Our model provides new insights in these product markets and managerial implications."

Rating system that considers time and activity

I'm looking for a rating system that does not only weight the rating by the number of votes, but also by time and "activity".
To clarify a bit:
Consider a site where users produce something, like a picture.
There is another type of user that can vote on other people's pictures (on a scale of 1-5), but one picture will only receive one vote.
The rating a productive user gets is derived from the ratings his/her pictures have received, but should be affected by:
How long ago the picture was made
How productive the user has been
A user who is getting 3's and 4's and still making 10 pictures per week should get a higher rating than a person who has gotten 5's but only made 1 picture per week and stopped a few months ago.
I've been looking at the Bayesian estimate, but that only considers the total number of votes, independent of time or productivity.
My math fu is pretty strong, so all I need is a nudge in the right direction and I can probably modify something to fit my needs.
There are many things you could do here.
The obvious approach is to have your measure of the scores decay with time in your internal calculations, for example using an exponential decay with a time constant T. For example, use value = initial_score*exp(-t/T), where t is the time that has passed since the picture was submitted. So if T is one month, after one month the score will contribute 1/e, or about 0.37, of what it originally did. (You can also do this differentially, with value -= (dt/T)*value, if that's more convenient.)
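For concreteness, here is a minimal R sketch of that decay plus a simple activity term; the sample data, the one-month constant, and the log1p scaling are all illustrative assumptions:

tau <- 30                          # the time constant T from above, in days
ratings  <- c(4, 3, 5, 4)          # votes a user's pictures received (illustrative)
age_days <- c(2, 10, 45, 90)       # days since each picture was posted

weights <- exp(-age_days / tau)    # older pictures count for less
decayed_rating <- sum(ratings * weights) / sum(weights)

# Productivity: a user who keeps posting keeps sum(weights) high, so scale by it
decayed_rating * log1p(sum(weights))

An inactive user's sum(weights) shrinks toward zero over time, so the activity factor falls even if the decayed average rating stays high.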
There's probably a way to work this with a Bayesian approach, but it seems forced to me. Bayesian approaches are generally about predicting something new based on a (usually large) set of prior data, which doesn't directly match your model.
