Terraform tries to recreate in wrong region using module - terraform-provider-aws

I am not sure HOW best to describe what it is doing, but I have a plan that built two load balancers, with target groups and listeners, in East 1 and East 2. It worked beautifully, and when I later needed to add a tag to both, that worked flawlessly. I now want to build 2 more LBs in the same two regions for different targets.
When I ran plan again, prior to adding the new stuff, it said 26 to add. It appears that it finds the previously created LBs and associated resources; however, it now wants to recreate them in the opposite regions: create East 1's in East 2 and vice versa.
I am at a total loss here, since NOTHING has changed on my end. Why does it now want to create new resources where it shouldn't?
Example of what it states:
module.east1.aws_lb.nlb-east1: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1
module.east2.aws_lb.nlb-east2: Refreshing state... [id=arn:aws:elasticloadbalancing:us-east-1
module.east1.aws_lb.nlb-east2 will be created
So you can see it finds the previously created stuff, and now wants to create them in the opposing regions. Again, nothing has changed on my end!
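For reference, a setup like the one described (two modules, each pinned to its own region) is typically wired with aliased provider blocks; a minimal sketch, with the module paths and alias names being assumptions:

```hcl
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "use2"
  region = "us-east-2"
}

# Each module receives exactly one aliased provider, so its
# resources should only ever plan against that region.
module "east1" {
  source    = "./modules/nlb" # path is an assumption
  providers = { aws = aws.use1 }
}

module "east2" {
  source    = "./modules/nlb"
  providers = { aws = aws.use2 }
}
```

If the aliases get swapped between runs, or a module silently falls back to the default provider, a plan can show exactly this symptom: state refreshed from one region while the replacement is planned in the other.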

How to assign "fixed work" task to multiple resources taking vacations into account

Let's say you have a small project. The team has estimated all the tasks at 300 days of effort.
I have 5 developers on the team, and I want MS Project to tell me when the project will complete, considering the vacations and working schedules of my team members.
In order to do that:
I'm creating a Task "Development" with fixed work "300d", and task type "Fixed Work".
Then I create 5 resources, and specify a 2 week vacation for one of the developers somewhere in the middle of the schedule.
Then I assign my 5 development resources to this task.
The problem is, the 300d is distributed evenly across all 5 development resources. And if one of them has a two-week vacation in between, the work finishes 2 weeks later because of that particular resource, while the other 4 resources sit and do nothing for those 2 weeks. Total duration is 70 days.
what I get
What I want to get is: the work is distributed unevenly across all 5 resources in a way that the whole task finishes as early as possible, making the most of the usable time from all developers.
That's how I would expect it to work. In that particular case I was distributing hours manually.
what I would expect
Is there a possibility in MS Project to do something like this? Or am I doing something wrong?
There are a couple of issues with how you are approaching the problem.
1. Rather than just planning out the manpower hours estimated for the entire project on a single line item, you should plan out the tasks that will need to be done to accomplish "Small Project"
If you discretely plan out the tasks that need to be accomplished to satisfy the scope of "Small project", you can establish dependency (predecessor/successor) relationships between your tasks and figure out what tasks need to be done before you can move on to others. When you do this it will give you a good idea of how long the total duration of the project will take and likely be more accurate than just relying on an estimate based on the manpower hours estimate your developers give you. Find out what tasks they actually need to do, not just how many hours they think the whole project will take them. This will also allow you to plan out the utilization of your resources better because you'll be able to assign specific resources to specific tasks, and not all of your resources need to be on every task.
2. In general I would avoid using the Task Usage form.
I noticed you are altering resources in the Task Usage form, but unless you are really experienced with Microsoft Project I would avoid ever touching that, as it's really easy to set the period of performance of resources assigned to a task to be different than the actual period of performance of the task itself. This will cause MS Project to behave unusually, and it can be hard for an inexperienced user to understand why. This usually leads to pain and frustration. That leads me to my next bit of advice:
3. If you really want to specify a resource's vacation time, it's better to adjust the calendar associated with the resource to exclude those dates as working dates.
In your situation with only 5 resources on your project, this can be fairly easy to do. You can accomplish this 2 different ways (I'll start with the easiest option):
1. You can add resource specific exclusion dates to the default calendar in your project
You can accomplish this by opening the Resource Sheet table and then clicking the Project tab, then Change Working Times. If you have the Resource Sheet open instead of the Gantt chart, you can specify the resource that is going to be affected by the exceptions:
In this example you can see that I would be excluding (removing) 8/23/21 thru 9/3/21 as working days for the SW Engineer resource, without needing to change the calendar used by the resource completely.
2. You can completely change the calendar used by particular resources to be different than the default calendar set for the project.
You can accomplish this by going into the Resource Sheet and opening the Base Calendar column:
From here you can assign any calendar that exists in the project to the resource. Of course this means you would need to create the calendars and assign exclusion dates to them.
To create a calendar, click the Project tab then click Change Working Times. Click Create New Calendar on the form that opens up and give it a name:
From there you can add exclusion dates and all that.
Note: In a larger project with many resources, I would recommend not messing with the calendar for the resources at all. It just gets hard to deal with when there are a lot of resources.

Metrics for Cosmos read regions

We've set up a Cosmos account in North Europe with geo-replication to West Europe. Consistency is set to "Session" (the default). The intent is to use North Europe as the single write region and both regions as read regions. This is because the requirements are to have no performance degradation during batch ingestion of data into the database. We are using ADF to do the batch ingestion.
The question I have is how do I monitor the metrics for the read only region? When I look at the Metrics on Cosmos, I can only still see North Europe in the drop down.
Thanks. So this turned out not to be a problem.
I found that when you create a write region and 1 or more read regions, the other regions' metrics will not be visible until there are some metrics to report. The replication of data does not contribute to the metrics/throughput usage.
To test this, I wrote some Python code to fetch some data, setting the secondary read region as the preferred location. Just 2 minutes after executing the code, the read region appeared in the Metrics region drop-down.
The Python code I used to define the client is below:
client = CosmosClient(ENDPOINT, {'masterKey': MASTER_KEY}, preferred_locations = ['Central US'])
I'm closing this question.
Your problem also appeared on my side.
I have a test DB with East Asia set as the write region and other regions as read. When I opened the Metrics page, only East Asia appeared in the region filter drop-down. I suspect it reflects the location of the operations (all my operations came from that region, so it offers only that one choice). After I deleted the East Asia region in Replicate data globally and ran some queries, I could see the other region in Metrics.
I also tested another of my databases, which doesn't have global distribution enabled and hadn't been used in a long time. When I opened the Metrics page, it offered no region choices at all, but after executing a query and waiting a while, the region showed up in the drop-down.

Tracking a Search that leads to a sale in GA

This seems really basic, but I am struggling with it.
We have a client who runs a travel website.
They have a few different search bars, e.g. Flights, Hotels, Car hire.
I am trying to track the performance of each: "What % of people who ran a Flight search completed a sale?" Same for Hotel, and for Car hire.
Any ideas for the best way to get this info in GA?
Many thanks
There are a few ways to get this information, each with their pros and cons. The options that I see immediately available are segments and goals.
Segments are great because they are retrospective and generally more flexible, with the ability to be changed if you find your criteria aren't quite right. You create one and specify sessions that go through search results pages etc.:
Then you can create another segment for the booking confirmation page, and any other intermediary steps that you'd like to report on. The main con of segments is that you can only pull in 4 at a time, but if you have more, you can pull them 4 at a time and copy and paste the data into an Excel sheet or Google Sheet. Segments can also be pulled via the Core Reporting API and Data Studio, which makes them great for automating into dashboards.
Goals are cool because they pull into the default reports and basically track sessions through a particular page, event, or sequence. The main con I see, and the reason I don't use them, is that they only start tracking from the time you create them, and if you change the configuration it does not impact historical data, so your data can get messed up quickly if you don't have sandbox GA views or sandbox goals for your testing before putting it into a dedicated goal slot. You can also only have 10 or 20 goals depending on your plan, and once data is tracked against a goal you can't remove or clear it.

Sabre developer API - how limited is the data in development? Am I using it wrong, or is the data THAT limited?

I'm trying out different flight APIs from Sabre. I understand from reading that the data I'm getting back is limited in development, but I'm not sure if it really can be THAT limited or if I'm doing something wrong.
1: InstaFlights Search
First I use the city pairs lookup to show city pairs, then use them for the InstaFlights search.
The problem is, unless I use NY or London (there were 2 other cities working fine), I'm getting no response for almost ALL other cities.
I know the data is limited, and the city pairs API already returns VERY limited data, but is it really THAT limited? I feel like I must be doing something wrong, because I cannot imagine that API working (in dev) for only 3 cities on 3 different dates :-/
2: Destination API
Here I first use the supported cities API, then use the results with the multi-airports API, then use that for the destination API.
Again, same here: only 2 or 3 cities actually work. In the destination API, UNLIKE the InstaFlights API, the chances of 'matches' are higher, as any destination could be shown for the picked origin. Yet HERE AGAIN, almost no results, except for about 3 cities.
If anyone who has some experience with Sabre could help out, it would be great. I'm just trying to figure out whether I'm using it wrong or not. Thanks!
Can you please provide the city pairs that seem to be failing for you? I just did a test of both APIs (InstaFlights and DestinationFinder) and was able to obtain results with the city pairs provided there. I changed the point of sale to FR and obtained PAR-ATH, and that worked. Also worked with ABE-MCO which is the first city pair I obtain when using POS US.
The testing environment for this API is limited, but you should not be limited to just three cities.

SCORM 1.2 suspend_data for different SCOs

I'm new to SCORM itself, and I have a problem with tracking progress via Moodle's LMS API.
The SCORM version is 1.2.
I have structure like this:
Lesson1
Module1.1
Module1.2
...
Lesson2
Module 2.1
etc
Each lesson has a set of modules of 2 types:
HTML Modules - modules that are just viewed by users
Game Modules - games that award a medal (none, bronze, silver, gold) as a result of module completeness
The progress tracking problem is the following:
I need to track progress on different Lessons based on the progress of their child Modules (sequencing?).
In short: I need to add a STAR to a lesson after all Game modules of the lesson are finished. The star indicates some sort of progress at the lesson level.
What I'm trying to do is store each Module's progress data (medals) in the cmi.suspend_data variable as a string:
"module1.1,gold|module1.2,silver ..."
After that, I want to process that string each time the page is loaded and figure out whether a STAR has been earned for one of the lessons. For example: when I've finished the last game in Lesson 1 with a medal, so that all its games now have medals, and after that I move to Lesson 2 - I should add a star to Lesson 1...
The problem is that moving from module to module, from lesson to module, etc., RESETS the suspend_data variable.
Question 1: Is suspend_data linked to a SCO object? (Which would mean each module/lesson has its own suspend_data variable.)
Question 2: What is the CORRECT approach in this situation to track sequencing progress? (As I've seen, SCORM 2004 has some sequencing mechanisms that can be described in the manifest. What is the correct approach in version 1.2?)
Question 1: cmi.suspend_data is unique to each SCO and can only be read/set from within that SCO. In your case, SCO2 cannot read SCO1's suspend_data and vice versa.
Question 2: you'd better stick with a single-SCO approach here. All your modules and lessons will be part of a single SCO, which means you will be able to track the medals and user progress without any problem.
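To illustrate the single-SCO approach, the medal string from the question can be parsed and rebuilt with a few small helpers (the function names are my own, and this is just a sketch of one possible format; also keep in mind cmi.suspend_data in SCORM 1.2 is limited to 4096 characters, so the encoding should stay compact):

```python
def parse_suspend_data(data):
    """Parse a medal string like 'module1.1,gold|module1.2,silver' into a dict."""
    medals = {}
    for entry in data.split("|"):
        if entry:
            module, medal = entry.split(",", 1)
            medals[module] = medal
    return medals

def serialize_suspend_data(medals):
    """Serialize the medal dict back into the pipe-delimited string."""
    return "|".join(f"{module},{medal}" for module, medal in medals.items())

def lesson_earned_star(medals, game_modules):
    """A lesson earns its star once every game module has a medal other than 'none'."""
    return all(medals.get(m, "none") != "none" for m in game_modules)
```

On each page load you would read suspend_data once, update the dict as games finish, and write the serialized string back, so the star check is just `lesson_earned_star` over that lesson's game modules.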
