We're trying to get up and running on Teams, but we're running into some roadblocks around how to properly share things with each other.
Specifically, we have some important variables that vary from person to person on the team, such as their development hostname or their credentials (login, password, OAuth tokens, etc.).
It looks like if we create those in 'Environments' as dynamic values, then those values will get committed back to the repo? In which case, everyone in the company will suddenly start using someone else's development hostname? (Not to mention we don't want to share credentials with each other, etc).
Are we doing it wrong? How can we share Environments, Groups, Requests, etc without sharing per-person unique data?
I have a multi-vendor project in which some variables need to be set by an admin. For instance, when a user wants to pay for their cart, a fee has to be applied, and that fee is defined by the system admin. (It can also change over time.)
So what's the best approach for storing these variables?
Note:
I'm running the server with Node.js, and I use MongoDB as the database.
I have the following ideas, each of which has pros and cons in my opinion:
Save these variables in a document (in the database). I guess this isn't great, since for each payment (or any other action that needs these variables) I would have to send a request to the database. These variables seem to be essentially fixed and only change once in a while; it's not like user profile information, which can change frequently and where a database request has to be sent anyway whenever the user wants to see their profile. (Furthermore, it just doesn't seem right to create a new collection to store a single document.)
Save them in a .env file (as environment variables). But I think this file is for configuration variables (application-layer settings, not business values), and updating this file is not as convenient as updating the database.
Please let me know if I'm making a mistake or if there is a common approach I don't know about. (I searched for this but couldn't find the right keywords.)
My approach has been the following:
If the values can be updated by a business administrator in the normal course of operation, then they should have an Admin UI and be stored in the database. Fees are a good example.
If the values hardly ever change, or are changed by IT staff, put them in the configuration file. The endpoint of a vendor API or the mail server configuration would go there.
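For the Node.js/MongoDB setup described in the question, a minimal sketch of the database option might look like the following. The `settings` collection, the `paymentFeePercent` field and the in-memory cache are all just assumptions to illustrate the idea, not a prescribed schema:

```typescript
// Sketch only: a single "settings" document holds admin-editable business values,
// read through a short-lived in-memory cache so each payment doesn't hit the database.
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
const settings = client.db("shop").collection("settings");   // placeholder names

let cached: { paymentFeePercent: number } | null = null;
let cachedAt = 0;
const TTL_MS = 60_000; // re-read from MongoDB at most once a minute

// Called from the Admin UI when the fee changes.
export async function setPaymentFee(percent: number): Promise<void> {
  await settings.updateOne(
    { _id: "business" },
    { $set: { paymentFeePercent: percent } },
    { upsert: true }
  );
  cached = null; // invalidate so the new fee is picked up immediately
}

// Called from the payment flow; usually answered from memory, not the database.
export async function getPaymentFee(): Promise<number> {
  if (!cached || Date.now() - cachedAt > TTL_MS) {
    const doc = await settings.findOne({ _id: "business" });
    cached = { paymentFeePercent: doc?.paymentFeePercent ?? 0 };
    cachedAt = Date.now();
  }
  return cached.paymentFeePercent;
}
```

The second category of values (things only IT would ever change, such as a vendor API endpoint or mail server settings) would stay in the .env/config file, as the question already suggests.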
What are the pros and cons of Azure Active Directory for managing the accounts of a simple website (at most 10 accounts, just for logging in to a control panel), instead of the SQL membership provider or the new bogus identity system in my own database? I have never tried this service before, but I like the fact that I don't need to take care of the database and credentials of my users; it's free, and very secure I guess. Has anyone tested it in production? Any security advice? Thanks.
Among the many upsides: it is free, highly available, enterprise-grade and basically management-free; all the flows for entering and changing passwords and the like are already implemented for you. Also, it can be used for both web sites and web APIs, so if tomorrow you want to add a mobile app for the same user population, we've got you covered.
About having tested it in production... yes, there are many many (MANY) apps using AAD in production today.
I guess one possible downside for such a small use case is that all the defaults are set to a higher security level than you'd probably use in your scenario, translating into a bit more involvement from your users: password rules requiring special characters, mandatory password expiration, and so on. Of course, to me that's a good thing, but I am biased :-)
Another current limitation is that the user names must be of the form alias@yourADdomain; as of today you cannot use an arbitrary string. That's usually not a big deal, but I'm calling it out in case you have preexisting user names you need to stick to.
HTH
V.
I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS Cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, but neither organization should have access to the connection string.
I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or if this is the preferred way to do this. Is performance acceptable here?
http://technet.microsoft.com/en-us/library/cc917711.aspx
I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, pulls that data into a database on the organization's own server, and then rebuilds the cube. That cube could then be queried without having to connect to the other organization's server when viewing it.
Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or any other possible options? Thanks, and please let me know if you need more info, still unclear on how some of this works.
HTTP is probably the best option for what it sounds like you are trying to do. If the two machines are on the same network but not the same domain, using ipaddress\username on each (with the same user/pass) will work, like old-school Windows networking in workgroups.
You could also just back up the cube, FTP it across, and restore it on the other machine; that might work for what you are doing.
As suggested by ScaleOvenStove, HTTP is the best solution for your case, but users need to exist on both servers to get access via HTTP. Users across both organizations' networks can be synced with an AD sync tool. A user has to be created in the other organization's network with bare-minimum rights, and you can define role-based security for what they can access in the cube.
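For reference, a rough sketch of how the connection string changes once the msmdpump.dll HTTP endpoint from the linked article is in place. All server, catalog and account names here are made up:

```typescript
// Illustrative placeholders only, assuming the msmdpump.dll HTTP endpoint from the
// linked article has been set up on Org B's IIS.

// Normal intra-domain connection (only usable by accounts in Org B's own domain):
const directConnection =
  "Data Source=OrgB-SSAS;Initial Catalog=SalesCube;";

// HTTP access across domains: the client talks to IIS, which forwards to SSAS.
// The credentials belong to the minimal-rights account created in Org B's network.
const httpConnection =
  "Data Source=https://ssas.orgb.example.com/olap/msmdpump.dll;" +
  "Initial Catalog=SalesCube;User ID=orga_dashboard;Password=<secret>;";
```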
I have a requirement for a set of asp.net MVC websites as follows:
Multiple sites, using the same codebase, but each site will have a separate database (this is a requirement), and users will login and enter data.
A single site for super users where they log in and work on data aggregated from each of the individual sites.
The number of sites in point one is liable to expand as we roll it out to more clients.
My question is about the architecture of the above - how to manage the data aggregation, given that it needs to be real time. Do we handle this at the database level (e.g. a view that is essentially a union across the individual site databases), or at the application level?
A few infrastructure points:
We have complete control over the database server and naming of databases.
All these websites are deployed onto a server that we manage.
I'd appreciate any input/ideas from folks that may have done this before.
Does the data aggregation have to be completely real-time, or can you get away with almost real-time? If "almost real-time" is acceptable, then you can write a service application that harvests the data from the sites' databases into your single central database. As long as the process runs continuously and you don't have too many sites to gather data from, the delay should be more or less invisible to the user.
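A minimal sketch of such a harvester, assuming SQL Server and the npm `mssql` package; the database names, the `dbo.Entries` table and the `ModifiedAt` column are placeholders for whatever schema and change-tracking you actually have:

```typescript
// Sketch of a continuously running harvester copying recently changed rows from each
// per-site database into the central aggregate database. All names are placeholders.
import sql from "mssql";

const SITE_DBS = ["ClientA_Site", "ClientB_Site"]; // grows as new client sites are rolled out
const POLL_MS = 30_000;                            // "almost real-time": every 30 seconds

const connStr = (db: string) =>
  `Server=localhost;Database=${db};User Id=harvester;Password=<secret>;`;

async function harvestOnce(since: Date): Promise<void> {
  const central = await new sql.ConnectionPool(connStr("SuperUser_Aggregate")).connect();
  for (const db of SITE_DBS) {
    const site = await new sql.ConnectionPool(connStr(db)).connect();
    const changed = await site.request()
      .input("since", sql.DateTime2, since)
      .query("SELECT Id, Payload FROM dbo.Entries WHERE ModifiedAt > @since");
    for (const row of changed.recordset) {
      // Upsert into the central database, keyed by (source site, source id).
      await central.request()
        .input("siteDb", sql.NVarChar, db)
        .input("id", sql.Int, row.Id)
        .input("payload", sql.NVarChar, row.Payload)
        .query(`MERGE dbo.Entries AS t
                USING (SELECT @siteDb AS SiteDb, @id AS Id) AS s
                  ON t.SiteDb = s.SiteDb AND t.Id = s.Id
                WHEN MATCHED THEN UPDATE SET Payload = @payload
                WHEN NOT MATCHED THEN INSERT (SiteDb, Id, Payload) VALUES (@siteDb, @id, @payload);`);
    }
    await site.close();
  }
  await central.close();
}

// Run forever, remembering when the previous pass started.
(async () => {
  let lastRun = new Date(0);
  for (;;) {
    const started = new Date();
    await harvestOnce(lastRun);
    lastRun = started;
    await new Promise((r) => setTimeout(r, POLL_MS));
  }
})();
```

A new client site then only means adding its database name to the list, rather than rewriting a cross-database view.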
Having a view that accumulates the data from all the databases doesn't sound like a good solution. Not only will it probably be very slow, but you will also have to update the view whenever you add a new site.
What is the intention of the super user site, btw? Is it only for reporting or should super users edit the data across all sites as well? That may affect which solution you choose.
I am looking for a best practice for End to End Authentication for internal Web Applications to the Database layer.
The most common scenario I have seen is to use a single SQL account with the permissions set to what is required by the application. This account is used by all application calls. Then, when people require access to the database via query tools or the like, a separate group is created with query access and people are added to that group.
The other scenario I have seen is to use complete Windows Authentication End to End. So the users themselves are added to groups which have all the permissions set so the user is able to update and change outside the parameters of the application. This normally involves securing people down to the appropriate stored procedures so they aren't updating the tables directly.
The first scenario seems relatively easy to maintain, but it raises the concern that if there is a security hole in the application, then the whole database is compromised.
The second scenario seems more secure, but has the opposite concern of having too much business logic in stored procedures on the database. This also seems to limit the use of some really cool technologies like NHibernate and LINQ. However, in this day and age, where people can use data in so many different ways we don't foresee (e.g. mash-ups), is this the best approach?
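To make the two scenarios concrete, the difference shows up mostly in how the application connects; the server, database and account names below are placeholders:

```typescript
// Illustrative placeholders only.

// Scenario 1: one shared SQL account whose rights are limited to what the application needs.
const sqlAccountConnection =
  "Server=DBSERVER;Database=AppDb;User Id=app_service;Password=<secret>;";

// Scenario 2: end-to-end Windows authentication; the caller's own domain identity reaches
// the database, so permissions are granted to domain groups (often on stored procedures only).
const windowsAuthConnection =
  "Server=DBSERVER;Database=AppDb;Integrated Security=SSPI;";
```

Note that for the second scenario to work from a web application, impersonation/Kerberos delegation usually has to be configured so the end user's token actually reaches the database server.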
Dale - That's it exactly. If you want to provide access to the underlying data store to those users then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.
If they want to automate part of their job, and they can demonstrate they have the requisite knowledge, then go ahead and grant their domain account access to the backend. That way anything they do via their little VBA automation is tied to their account, and you know exactly who to go look at when the data gets hosed.
My basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible.
As a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big paycheck for me when I get called to fix it.
Personally, I don't want normal end users in the database. For an intranet application (especially one which resides on a Domain) I would provide a single account for application access to the database which only has those rights which are needed for the application to function.
Access to the application would then be controlled via the user's domain account (turn off anonymous access in IIS, etc.).
IF a user needs, and can justify, direct access to the database, then their domain account would be given access to the database, and they can log into the DBMS using the appropriate tools.
I've been responsible for developing several internal web applications over the past year.
Our solution was using Windows Authentication (Active Directory or LDAP).
Our purpose was merely to allow a simple login using an existing company ID/password. We also wanted to make sure that the existing department would still be responsible for verifying and managing access permissions.
While I can't answer the argument concerning Nhibernate or LINQ, unless you have a specific killer feature these things can implement, Active Directory or LDAP are simple enough to implement and maintain that it's worth trying.
I agree with Stephen Wrighton. Domain security is the way to go. If you would like to use mashups and what-not, you can expose parts of the database via a machine-readable RESTful interface. SubSonic has one built in.
Stephen - Keeping normal end users out of the database is nice, but I am wondering whether, in this day and age, with so many experienced computer users coming out of university/college, this is the right path. If someone wants to automate part of their job, including a VBA update to a database that I currently allow them to make via the normal application, are we giving up gains by restricting their access in this way?
I guess the other path implied here is that you could open up the application via services, then secure those services via groups, and still keep the users separated from the database.
Then via delegation you can allow departments to control access to their own accounts via the groups as per Jonathan's post.
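A minimal sketch of that services idea, assuming some authentication middleware has already resolved the caller's identity and domain groups; the route, group names and `user` property below are all hypothetical:

```typescript
import express from "express";

// Whatever auth middleware you use (Windows auth at the web server, a token check, etc.)
// is assumed to have attached the caller's identity and domain groups to the request.
type Caller = { name: string; groups: string[] };

function requireGroup(group: string): express.RequestHandler {
  return (req, res, next) => {
    const caller = (req as express.Request & { user?: Caller }).user;
    if (caller?.groups.includes(group)) return next();
    res.status(403).send("Not a member of " + group);
  };
}

const app = express();

// Read-only endpoint for the automation-minded users discussed above.
app.get("/api/orders", requireGroup("DOMAIN\\App_Orders_Read"), (_req, res) => {
  // ...query the database here using the application's own service account...
  res.json([]);
});

// Writes stay behind a tighter group so every change goes through the application's rules.
app.post("/api/orders", express.json(), requireGroup("DOMAIN\\App_Orders_Write"), (_req, res) => {
  // ...validate and apply the change via the normal business logic...
  res.status(204).end();
});

app.listen(8080);
```

Departments can then control who is in each group themselves, which keeps the delegation point above intact while the database stays behind the application's own account.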