What variables can be passed between Qualtrics surveys?

I am doing a study that examines the effect of having choices. Participants are randomly assigned to either a choice condition or a control condition. Those in the choice condition can choose which game to play. Those in the control condition have no choice - they are assigned a game to play.
I want to give participants in the control condition the same games chosen by those in the choice condition. That is to say, after a participant in the choice condition chooses a game, the next person in the control condition is assigned to play the same game.
To achieve this yoked design, I set variables to record the number of times each game is chosen in the choice condition. So if a person in the choice condition chooses Pac-Man, 1 is added to the PACMAN variable. If a person chooses Tetris, 1 is added to the TETRIS variable. When the next person comes in and is assigned to the control condition, if the TETRIS variable is greater than zero, that person is assigned to play Tetris and 1 is subtracted from the TETRIS variable.
My question is how to pass these variables from one survey taker to the next. As far as I know, embedded data can be carried through a single survey response, but it is reset each time a new response begins.
Greatly appreciate your help!
UPDATE:
Following T. Gibbon's suggestions, I ended up using quotas to record the counts of each game being chosen in both the choice condition and the control condition. Using survey logic and JavaScript, I compare the counts from the two conditions; if the count for a game in the choice condition is larger than its count in the control condition, the participant is placed in the control condition and assigned to play that game.
Very excited to see that the yoked design can be implemented in Qualtrics!

The only data store shared across respondents is quotas. Quota counts only go up (no subtracting), so you'd have to adjust your logic a bit. You could use piped quota counts and compare them to one another in the survey flow, but it seems like that would get complex. If I were doing it, I would pass the quota counts to a web service and return flags telling me whether the games had been chosen or not.
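For what it's worth, here is a minimal sketch of that comparison step in question JavaScript, assuming the two quota counts have already been exposed as hypothetical embedded data fields (TETRIS_CHOICE and TETRIS_CONTROL) in the survey flow:

Qualtrics.SurveyEngine.addOnload(function () {
    // Hypothetical embedded data fields populated in the survey flow
    // from the piped quota counts for Tetris in each condition.
    var tetrisChoice = parseInt("${e://Field/TETRIS_CHOICE}", 10) || 0;
    var tetrisControl = parseInt("${e://Field/TETRIS_CONTROL}", 10) || 0;
    if (tetrisChoice > tetrisControl) {
        // More choice-condition picks than control assignments so far,
        // so yoke this control respondent to Tetris.
        Qualtrics.SurveyEngine.setEmbeddedData("ASSIGNED_GAME", "Tetris");
    }
});

The same check would be repeated per game, with branch logic in the survey flow routing on ASSIGNED_GAME.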

Related

Firebase: How to organize data that is synced to many groups

I have a problem regarding the organization of my data. What I want to achieve:
TL;DR: one data point updated in real time in many different groups - how should I organize it?
Each user sets a daily goal (goal) he wants to achieve
While working, each user increases their time (daily_time_spent) to get closer to their daily goal (say, from 1 minute spent to 2 minutes spent).
Each user can also be in a group with other users.
If there is a group of users, you can see each other's progress (goal/daily_time_spent) in real time (real time being every 2-5 minutes, for cost reasons).
It will later also be possible to set a daily goal for a specific group. Your own daily goal would contribute to each of the groups.
Say you are part of three groups with the goals 10m/20m/30m and you have already done 10m: you would have completed the first group's goal, 50% of the second, and about 33% of the third. Your own progress (daily_time_spent) contributes to all groups, regardless of the individual goals (group_daily_goal).
My ideas
How would I organize that? One idea: whenever a user increments their time, the new value gets written into each group the user is part of. But this seems pretty inefficient, because I would potentially write the same data in many different places (coming from a SQL-developer background, it also feels like it might be expensive?).
Another option: Each user tracks his time, say under userTimes/{user} and then there are the groups: groups/{groupname} with links to userTimes. But then I don't know how to get realtime updates.
Thanks a lot for your help!
Both approaches can work fine, and there is no single best approach here - as Alex said, it all depends on the use cases of your app and your comfort level with the code that each of them requires.
Duplicating the data under each relevant user will complicate the code that performs the write operation, and store more data. But in return for that, reading the data will be really simple and scale very well to many users.
Reading the data from under all followed users will complicate the code that performs the read operation, and slow it down a bit (though not nearly as much as you may expect, as Firebase can pipeline the requests). But it does keep your data minimal and your write operations simple.
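As a rough sketch of that read-side approach with the modular Web SDK (memberIds and renderMemberProgress are stand-ins for whatever your app already has), each client would simply attach one listener per group member:

import { getDatabase, ref, onValue } from "firebase/database";

const db = getDatabase();
// memberIds would come from groups/{groupname} in your second layout.
memberIds.forEach((uid) => {
  onValue(ref(db, `userTimes/${uid}`), (snapshot) => {
    // Re-render this member's goal / daily_time_spent whenever it changes.
    renderMemberProgress(uid, snapshot.val());
  });
});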
If you choose to duplicate the data, that is an operation that you can usually do well in a (RTDB-triggered) Cloud Function, but it's also possible to do it through a multi-path write operation from the client.
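If you go the duplication route from the client, a single multi-path update keeps the canonical copy and every group copy consistent in one atomic write. A sketch, again with the modular SDK and a hypothetical groupIds list for the current user:

import { getDatabase, ref, update } from "firebase/database";

const db = getDatabase();
const updates = {};
// Canonical copy under the user...
updates[`userTimes/${uid}/daily_time_spent`] = newTotal;
// ...plus one duplicate per group the user is part of.
groupIds.forEach((groupId) => {
  updates[`groups/${groupId}/members/${uid}/daily_time_spent`] = newTotal;
});
update(ref(db), updates); // one atomic multi-location write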

How would I order my collection on timestamp and score

I have a collection with documents that have a createdAt timestamp and a score number. I sort all the documents on score for our leaderboard. But now I want to also have the daily best.
matchResults.orderBy("score").where("createdAt", ">", yesterday).startAt(someValue).limit(10);
But I found that there are limitations when you filter and order on different fields.
https://firebase.google.com/docs/firestore/query-data/order-limit-data#limitations.
So how could I get today's results in chunks of 10, sorted on score?
You can use multiple orderBy(...) clauses to order on multiple fields, but this won't exactly meet your needs, since the range filter on createdAt forces you to order by the timestamp first and only secondarily by score.
A brute force option would be to fetch all the scores for the given day and truncate the list locally. But that of course won't work well if there are thousands of scores to load.
One simple answer would be to use a datestamp instead of timestamp:
matchResults.where("dayCreated", "=", "YYYY-MM-DD").orderBy("score").startAt(...).limit(10)
A second simple answer would be to run a Cloud Function on write events and maintain a daily top scores table separate from your scores data. If the results are frequently viewed, this would ultimately prove more economical and scalable as you would only need to record a small subset (say the top 100) by day, and can simply query that table ordering by score.
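A sketch of that second idea with a first-generation Firestore trigger; the dailyTopScores collection and the field names are assumptions, and this version simply copies every result into a per-day bucket rather than capping it at a top N:

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.recordDailyScore = functions.firestore
  .document("matchResults/{resultId}")
  .onCreate((snapshot, context) => {
    const { score, createdAt } = snapshot.data();
    const day = createdAt.toDate().toISOString().slice(0, 10); // "YYYY-MM-DD"
    // Copy the score into a per-day bucket that can be queried on its own.
    return admin.firestore()
      .doc(`dailyTopScores/${day}/scores/${context.params.resultId}`)
      .set({ score });
  });

The daily leaderboard then becomes a single-field query, e.g. db.collection("dailyTopScores").doc(day).collection("scores").orderBy("score", "desc").limit(10).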
Scoreboards are extremely difficult at scale, so don't underestimate the complexity of handling every edge case. Start small and practical, focus on aggregating your results during writes, and keep reads small and simple. Limit scope by listing only a top percentage in your "top scores" records and skip complex pagination schemes where possible.

(Abandoned) Sort 2 models based on 1 column in Qt

I have 2 QStandardItemModels, where the first model holds data and the second one holds a summary of that data (earnings per day in the 1st model and earnings per week in the 2nd; each row is a productive unit and each column is a day/week).
Both models appear in separate QTableViews, and I'd like to be able to sort one model and have it affect the other, so that the data of both models always corresponds to the same productive unit.
I want the user to be able to see daily data (and scroll through it) while seeing the weekly data at the same time, which is why I don't make a single model.
Currently, I'm using a QSortFilterProxyModel to handle the sorting, but that doesn't sort both models at the same time.
How can I sort them at the same time?
I found no solution for this problem. Instead I worked around it by settling for a less-than-ideal single QTableView. I'm thinking of letting the user set a maximum number of summary columns to keep them from overflowing the table (I haven't gotten to that yet).
Anyway, I just wanted to say that I consider this question abandoned.

Creating fixed-size groups based on matching attributes while minimizing the number of ungrouped entities

My problem is this:
I have a list of people, and each person has a certain number of Facebook likes. I want to partition those people into N groups such that, within each group, every member shares at least one like (e.g. everyone in this group likes Daft Punk). The groups can't be any size other than 3 or 4 people, and I want to minimize the number of people who aren't in a group. (However, I'm willing to break the fixed-size rule if it means I can reduce the number of unmatched people even more.)
I've been told to look at bin packing and cliques but they're not quite the right fit for my problem.
When searching previous questions I came across this: Categorizing input data into sets based on attribute
Something like this seems like it would work, but every member in my group has more than one value (an array of likes). Also I'm not sure if it minimizes the number of excluded people.
Thank you in advance!
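To make the constraints concrete, here is a naive greedy baseline (not an optimal solution, and the { id, likes } input shape is just an assumption): repeatedly take the like shared by the most still-ungrouped people and carve off a group of up to 4 of them.

// Naive greedy baseline: repeatedly pick the like shared by the most
// still-ungrouped people and carve off a group of up to 4 of them.
function greedyGroups(people) {            // people: [{ id, likes: [...] }]
  const ungrouped = new Set(people.map((p) => p.id));
  const byId = new Map(people.map((p) => [p.id, p]));
  const groups = [];
  while (true) {
    // Count how many ungrouped people share each like.
    const counts = new Map();
    for (const id of ungrouped) {
      for (const like of byId.get(id).likes) {
        counts.set(like, (counts.get(like) || 0) + 1);
      }
    }
    // Pick the most common like among the remaining people.
    let bestLike = null;
    let bestCount = 0;
    for (const [like, n] of counts) {
      if (n > bestCount) { bestLike = like; bestCount = n; }
    }
    if (bestCount < 3) break;              // no further group of 3+ is possible
    const members = [...ungrouped]
      .filter((id) => byId.get(id).likes.includes(bestLike))
      .slice(0, 4);                        // prefer groups of 4 over 3
    groups.push({ like: bestLike, members });
    members.forEach((id) => ungrouped.delete(id));
  }
  return { groups, unmatched: [...ungrouped] };
}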

How do I add KPI targets to my cube that are at a higher grain than my fact table?

I have a simple star schema with 2 dimensions: course and student. My fact table records an enrolment on a course. I have KPI Values set up which use data in the fact table (e.g. the percentage of students that completed a course). All is working great.
I now need to add KPI Goals, though, and they are at a different grain than the fact table. The goals are at the course level, but should also work at department level, and for whatever combination of dimension attributes is selected. I have the numerators and denominators for the KPI Goals, so I want to aggregate these when multiple courses are involved - before dividing to get the actual percentage goal.
How can I implement this? From my understanding I should only have one fact table in my star schema. So would I perhaps store the higher-grain values in the fact table? Or would I create a dimension with these values in it? Or is there some alternative solution?
In most cases I would expect KPI measures to be calculated from the existing measures in your cube, so can you move away from the idea of fact table changes and just set the KPIs up as calculated members in the cube or in MDX?
Your issue is complicated by the KPI granularity being different, yes... but I would simply hide the KPI measures when a finer level of granularity is being displayed. You can implement this within the calculated measure definition too.
For example, I have used ISLEAF() to detect if a measure is about to be shown at the bottom level, and returned blank/NULL. Or you can check the level number of any relevant dimension.
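As a rough MDX sketch (the dimension, hierarchy, and measure names here are hypothetical, and the goal numerator/denominator stand for however you end up storing the goal inputs), a calculated member can blank itself out below the grain the goal is defined at:

CREATE MEMBER CURRENTCUBE.[Measures].[Completion Goal %] AS
  IIF(
    ISLEAF([Student].[Student].CurrentMember),
    NULL,  -- the goal is not defined per individual student, so show nothing
    [Measures].[Goal Numerator] / [Measures].[Goal Denominator]
  ),
  FORMAT_STRING = "Percent";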
