I am trying to migrate an application A to a WebLogic 10.3.6 application server running on Oracle Enterprise Linux. An application B already resides on that WebLogic server. Application A has a GUI with a user login feature; application B does not. Currently, applications A and B interface through MQ.
My queries are:
1. Is it a feasible design to have the EARs of both applications A and B on the same server?
2. If yes, how can they communicate internally?
3. Is there a risk of a security breach for application B due to the user login feature of application A?
1) Yes, it is perfectly reasonable to run more than one EAR file on a server. There can be performance issues, and you may need to watch the JVM heap carefully (one app causing an OutOfMemoryError will take down both).
2) It depends how you want them to communicate. The easiest way is probably a remote lookup: WebLogic exposes application bindings (contexts, EJBs, etc.) in a JNDI tree. As long as the methods are exposed for remote calls (e.g. @Stateless and @Remote annotations in Java EE), you can call them remotely. You can also use MDBs (message-driven beans) if asynchronous communication is acceptable, or even web services (just bind to the same local server with a different context root).
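For illustration only, here is a minimal sketch of that EJB approach. The interface, package, and JNDI name below are invented, and the exact binding convention depends on your WebLogic version and deployment descriptors, so treat it as a starting point rather than a recipe:

    // Shown as one listing for brevity; in reality the interface/bean live in
    // application B's EAR and the client class in application A's EAR.
    import javax.ejb.Remote;
    import javax.ejb.Stateless;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    @Remote
    interface OrderService {                       // hypothetical business interface
        String lookupOrder(String orderId);
    }

    @Stateless(mappedName = "OrderService")        // WebLogic binds this into its JNDI tree
    class OrderServiceBean implements OrderService {
        @Override
        public String lookupOrder(String orderId) {
            return "order-" + orderId;             // placeholder business logic
        }
    }

    class OrderClient {                            // lives in application A
        String fetchOrder(String orderId) throws NamingException {
            InitialContext ctx = new InitialContext();   // same server, so the default environment is enough
            // Common WebLogic 10.3 convention for mappedName bindings:
            // "<mappedName>#<fully qualified remote interface>" - verify against your server docs
            OrderService orders =
                    (OrderService) ctx.lookup("OrderService#com.example.OrderService");
            return orders.lookupOrder(orderId);
        }
    }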
3) It depends how you log in. If you use WebLogic's own security state to check whether someone is logged in, then logging in to one application will log you into the other. You would need to look at WebLogic roles and policies: put certain users into one role for one application, and possibly set up two different authentication mechanisms. Then use @RolesAllowed (or any other mechanism you fancy) to restrict access to classes/methods to the given roles.
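As a short sketch (the role and class names here are made up, and the roles still have to be mapped to principals in the WebLogic realm/descriptors):

    import javax.annotation.security.RolesAllowed;
    import javax.ejb.Stateless;

    @Stateless
    public class ReportService {                // hypothetical bean in application A
        @RolesAllowed("appA-users")             // only principals in this role may call it
        public String generateReport() {
            return "report";
        }

        @RolesAllowed("appA-admins")            // tighter restriction for admin-only operations
        public void purgeReports() {
            // placeholder
        }
    }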
If you use your own login method, I can't see an issue from the user side. Just be careful of what code gets used, and make sure you don't allow remote lookups you aren't aware of (i.e. watch out for methods that could be subject to injection attacks, where malicious code could do a remote lookup against the other application). If you wanted, you could still use WebLogic roles and policies to block this off anyway.
I've had some recent difficulty with SQL Server not liking the default AppIdentityUser for logins, so I went ahead and created a custom DB user with write access.
But it made me wonder - is this the best approach?
I was wondering what the best SQL Server login approach would be for ASP.NET Core. I know there's a similar question for regular .NET, but you can't encrypt a Core web.config/appsettings.json (well, not in a quick and straightforward manner).
Here are the options as I see them:
Connect via SQL Server ID that is stored in appsettings.json.
Pro: Already configured.
Cons: Password in web.config/appsettings.json; you have to specifically configure a SQL Server ID. Not centrally revocable.
Connect via the NT identity of the ASP.NET "AppIdentityUser".
Pro: No passwords in appsettings.json.
Cons: Not centrally revocable. The user seems to be tied to the server name.
Connect via Active Directory user.
Pro: Easily revocable.
Cons: Active directory user password in appsettings.json. Could be bad if somebody accidentally reuses that user in another application in the company, and that user gets breached.
Are there other options that I'm missing? Which of these options are used in which situations? Which are more standard? Are there pros and cons that I'm not thinking about?
You should absolutely use a custom SQL Login to connect to the database. Under the hood, the SQL Login could be tied to a local account, service account, network account, etc. It doesn't actually matter.
The real issue you seem to be having here is in not wanting (rightly) to expose login credentials in plain text. I'm not sure why you keep referring to web.config here, as ASP.NET Core doesn't use that. Instead, there are various configuration providers that can optionally be used. By default, ASP.NET Core (at least since 2.0) adds a JSON config provider that looks for appsettings.json and appsettings.{environment}.json in your project, a command-line configuration provider, a user secrets config provider, and finally an environment variable configuration provider.
The last two are the most interesting for your circumstances. In development, you should use user secrets. In production, you should use environment variables. However, neither stores secrets in an encrypted way. The benefit to either approach is that the secrets are not in your project, and therefore also not in your source control. Even though neither is encrypted, it's not as big of a concern as you might think. Getting at the secrets in either would require direct access to the server/development machine. Additionally, user secrets is by default tied to a particular user account, accessible only to that user, and environment variables can be set up the same way. Therefore, someone would need to both gain access to the machine and gain access to the particular account. That's actually a pretty high bar, and if it were to occur, exposing a database password is really the least of your concerns at that point.
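For example (the key name and connection string values below are made up), the same configuration key can come from the user secrets store during development and from an environment variable in production, without either appearing in appsettings.json:

    # Development (run in the project directory; newer SDKs, older ones need a UserSecretsId set up first)
    dotnet user-secrets init
    dotnet user-secrets set "ConnectionStrings:Default" "Server=dbhost;Database=AppDb;User Id=app_user;Password=..."

    # Production: expose the same key as an environment variable
    # (the ':' separator in configuration keys becomes '__' in variable names)
    ConnectionStrings__Default=Server=dbhost;Database=AppDb;User Id=app_user;Password=...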
Nevertheless, if you want true encryption, you have the option of using Azure Key Vault. Key Vault can be used whether or not your application is actually hosted in Azure, and while it's not free, it's exceedingly cheap.
Finally, you can always create your own config providers or source third-party ones. For example, while the default JSON provider doesn't support encryption, you could potentially write one that does.
Problem:
We have Apache Shiro authentication in our two applications; let's name them application1 (context "/") and application2 (context "/app2"). Basic authentication works fine. We deploy both applications on two Tomcat 8 instances (both applications on each Tomcat), handle context sharing via Tomcat 8's context-sharing support, and preserve sessions in memcached so that even if one Tomcat goes down, the application and its sessions remain available.
How sessions are currently handled:
If a user logs in to either of the two applications, their session is shared between both applications by getting the root context ("/") and using an interceptor that intercepts each request and checks whether the user is already logged in.
New Requirement:
Recently, we had to implement functionality to expire all of a certain user's sessions based on a certain trigger, and I was reading about Apache Shiro's session management. I integrated it in application1 and everything worked fine, but now session sharing between application1 and application2 is gone.
I explained the above scenario so that I can get the right direction on what exactly I should look for or try.
What I want to do is:
I want to somehow share the Shiro session manager between these two applications so that, once it's shared, we can easily handle the session-sharing part; the rest of the functionality (invalidating a given user's sessions) is already in place.
Question:
So, please tell me whether what I want to do is what I should do, and what I need to understand and read. Or, if I am thinking in the wrong direction, please suggest the right path.
I am the author of the book "Pairing Apache Shiro and Java EE 7" which you can grab for free here.
Session clustering can be done using a CacheManager. Shiro natively supports EhCache and Hazelcast. I recommend using Hazelcast.
Here is a sample project on GitHub: shiro-hazelcast-web-sample
To go deeper into Hazelcast's features, you can override Shiro's CacheManager and implement whichever Hazelcast features you want:
Web Session Clustering
Generic Web Session Replication
Tomcat Web Session Replication
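For orientation, a rough sketch of the wiring in Java is below. It assumes the shiro-hazelcast support module (HazelcastCacheManager) is on the classpath and that both web applications join the same Hazelcast cluster; verify the class names against your Shiro version:

    // Sketch, not a drop-in config: sessions are stored in a Hazelcast-backed cache
    // so that both webapps (members of the same cluster) see the same sessions.
    import org.apache.shiro.hazelcast.cache.HazelcastCacheManager;
    import org.apache.shiro.session.mgt.eis.EnterpriseCacheSessionDAO;
    import org.apache.shiro.web.mgt.DefaultWebSecurityManager;
    import org.apache.shiro.web.session.mgt.DefaultWebSessionManager;

    public class SharedSessionConfig {

        public static DefaultWebSecurityManager buildSecurityManager() {
            // Cache-backed session store: sessions live in the Hazelcast cluster.
            EnterpriseCacheSessionDAO sessionDAO = new EnterpriseCacheSessionDAO();

            DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
            sessionManager.setSessionDAO(sessionDAO);

            DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
            securityManager.setSessionManager(sessionManager);
            // Shiro propagates the cache manager to cache-aware components,
            // including the session DAO above.
            securityManager.setCacheManager(new HazelcastCacheManager());
            return securityManager;
        }
    }

Both applications would build their SecurityManager this way (or express the same wiring in shiro.ini); because sessions then live in the shared Hazelcast cache, invalidating a user's sessions from application1 is visible to application2 as well.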
I'm new to this old stuff... I've set up my COM+ application (Classic ASP) on Windows Server 2012, but could only get it to run by unchecking "Enforce access checks for this application" in the application properties. It now runs okay, but any time the application tries to hit the database in any way, I get nothing. I've checked access to the necessary folders (as far as I know) and the user (local user, in the identity tab) has read/write access. Any ideas? And is more information needed?
As you probably already know, Windows Server releases are an ever-changing minefield of permission issues (aka user identity issues). What worked under 2008 may no longer work under 2012.
The components in a classic ASP solution pretty much all have the potential to be running as different identities in the context of Windows.
Typical examples of unexpected identities are System, Network Service, and IUSR.
Examples of where these identity options bite you:
In IIS your web site has an assigned app pool in which it runs. The app pool has a user identity assignment;
In IIS, your virtual folders map to physical folders under Windows and there is access security there;
With COM you get a further identity option to set - this is the 'run-as' identity, which is the effective user that executes the COM components for you.
With a database such as MS SQL Server, you get the concept of connection security, which can be set to use Windows authentication (trust the Windows user) or to require a user ID/password. So if you use, for example, ADODB in your code, you must supply a connection string that matches the connection settings the DB expects and allows.
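For example (server, database, and user names below are placeholders), the two styles of ADODB/OLE DB connection string look like this; with the trusted style, it is the COM+ run-as identity that SQL Server authenticates:

    ' Trusted (Windows) connection - SQL Server authenticates the COM+ run-as identity
    Provider=SQLOLEDB;Data Source=DBSRV01;Initial Catalog=AppDb;Integrated Security=SSPI
    ' SQL login - the process identity no longer matters to SQL Server, but credentials sit in the string
    Provider=SQLOLEDB;Data Source=DBSRV01;Initial Catalog=AppDb;User ID=AppComUser;Password=...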
From your description I assume that you have the IIS site up and running, and your issue is confined to DB access from the COM components. You need to establish how the COM components connect to the DB and check that the DB will accept the credentials in use. If you are using Windows Authentication for the DB then you need to confirm for sure the run-as identity that is in use. In my setup we create a dedicated Windows user that we set aside specifically to use for COM so that we can be absolutely sure of the identity, and in our most verbose logging from the COM components we capture the run-as identity just to confirm it is all wired up correctly.
We do the same with a dedicated Windows user for the IIS app pool identity too. In general, you are better off being sure which identity is in use by assigning it yourself rather than taking the default. Additionally, the defaults such as Network Service seem to have a diminishing amount of privileges in Windows overall.
Word of caution - on the other hand, do not give your dedicated users more access than they need, for example by making them members of the Administrators group when you are frustrated or feeling your way through permission issues. Sure, assign these on a very temporary basis to confirm that access privileges are the issue, but be sure to remove such assignments as soon as you possibly can.
EDIT: I had this half written when your comment came in. You say that there was a missing component - I had not considered that potential as you seemed to be saying that the config worked but COM did not. Well done for solving your issue. I will leave this answer in place as some of what I have written could be useful for future folks walking the same or similar path.
(A question of a similar vein has been asked before, but neither the question nor the accepted answer provides the detail I am looking for)
With the intention of running an ASMX web service under a dedicated domain account, what are the usage scenarios and/or pros and cons of using an application pool with the identity of the domain account versus impersonation?
We have 3 small internal web services that run under relatively low load and we would like to switch them to running under their own domain accounts (for the purpose of integrated security with SQL Server etc). I appear to have the choice of creating dedicated app pools for each application, or having a single app pool for all the applications and using impersonation in each.
I understand app pools provide worker process isolation and there are considerations for performance when using impersonation, however those aside what else would dictate the correct option?
Typically, you will choose a different identity for the worker process (or do ASP.NET impersonation) because there is a need to access local/network resources that require specific permissions. The obvious disadvantage is that your application code may run with more permissions than it needs, thereby increasing the vulnerability to malicious attacks.
ASP.NET impersonation would have more overhead because the user context needs to be switched for each request. I suggest going with the separate app pool approach - the only disadvantage is that you have a process for each app pool, so there will be overhead (from the OS's perspective) for each process. If your applications are small and don't have strong memory demands, this should not be an issue.
If you want your web services to connect to SQL via Windows authentication, you will almost certainly want to set up each application with the dedicated app pool option. This requires the least amount of setup and administration.
If you go the impersonation route, you'll need to account for the "two-hop" issue. When a user calls a web service that is using impersonation, the web service can access local resources, as that user. However, if the web service tries to connect to a non-local resource (e.g., a database running on a separate server), the result will be an authentication error. The reason is that NTLM prevents your credentials from making more than one "hop". To workaround this, you would need to use Kerberos delegation. Delegation isn't difficult to set up, but it does require Domain Admin privileges, which can make things difficult in some corporate environments.
In addition, using impersonation means that you need to manage database permissions for each user that may visit your web service. The combination of database roles and AD groups will go a long way in simplifying this, but it's an extra administrative step that you may not wish to conduct. It's also a possible security risk, as certain users may end up with privileges that are greater than your web services are anticipating.
Impersonation is useful when you need a common end user experience with other Windows services that are based on Windows security.
For example, Microsoft SharePoint servers use impersonation because you can access SharePoint document libraries with web browsers and with the standard Windows shares UI (connect / disconnect to a network share, based on the SMB protocol). To ensure security is consistent between the two, in this case, you need impersonation.
Other than this kind of scenario, impersonation is not useful most of the time (but can cost a lot in terms of scalability).
Application pool pros:
You don't have to be a .NET programmer to understand what's going on.
The security aspect leaves the domain of the programmer and falls under the remit of infrastructure.
Easy to change through IIS, with proper safety checks that the username is correct when setting up the app pool, i.e. it won't let you enter an incorrect username.
Impersonation pros:
Privileges can be documented and traced back through configuration changes via source control history, if configuration files are stored there.
Impersonation cons:
To change the user, you need to be familiar with .NET configuration rather than just setting up a website.
Not sure I can think of much else.
My gut says to go with different application pools for each of the websites but it's your party.
I would advise you to check the following page for security details...
https://www.attosol.com/sample-aspx-page-to-show-security-details-in-asp-net/
Once you are done with this, you will see "precisely" how impersonation changes the identity.
By default, ASP.NET uses the Network Service account. Is this the account I should be using with ASP.NET in production? What are the best practices related to the account used by ASP.NET?
Edit: If this makes any difference, I'll be using ASP.NET on a Windows 2008 server
For production, you should create a service account that has only the bare minimum permissions in order to run the web application.
The Microsoft Patterns and Practices team provides the following guidance on this:
How To: Create a Service Account for an ASP.NET 2.0 Application
You're gonna get lots of "it depends" answers but here's my 2 cents anyway.
Consider password change management, potential damage through compromise, and application needs (e.g. trusted connectivity).
In most scenarios Network Service comes out best in these dimensions.
it doesn't have a password, and never expires - no change management required
it cannot be used as an interactive login on other machines
it can be used for trusted connections and ACL'd access to other hosts via the credential <domain>\<machinename>$ (the computer account)
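As a sketch (domain, machine, and database names are placeholders), granting that computer account access on the SQL Server side looks roughly like this:

    -- Let the web server's Network Service in via its computer account
    CREATE LOGIN [MYDOMAIN\WEBSRV01$] FROM WINDOWS;
    USE AppDb;
    CREATE USER [MYDOMAIN\WEBSRV01$] FOR LOGIN [MYDOMAIN\WEBSRV01$];
    -- Grant only what the app needs (ALTER ROLE needs SQL Server 2012+; older versions use sp_addrolemember)
    ALTER ROLE db_datareader ADD MEMBER [MYDOMAIN\WEBSRV01$];
    ALTER ROLE db_datawriter ADD MEMBER [MYDOMAIN\WEBSRV01$];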
Of course, your app may have different needs, but typically we use Network Service wherever possible - we run tens of thousands of machines.
Unless you have some other need -- like a requirement to use integrated authentication to SQL Server for a database connection -- I would stick with the default account. It has fewer privileges than many other accounts, yet is enabled with the necessary privileges to run web applications. Caveat here: we typically don't make any privilege changes for the network service account and usually fire up a VM per production application (or set of related applications) rather than configuring multiple applications per server. If you are running multiple applications per server or make changes to the network service account's privileges for other reasons, you may want to consider using a separate service account for each application. If you do, make sure that this service account has the fewest privileges necessary to run ASP.NET applications and perform any additional tasks required.
You should use the least-privileged account possible:
1) Create a specific user account for each application
2) Create an Application Pool that runs under this account
3) Configure the website to run under this application pool.
4) In SQL Server, use Windows Authentication and give DB permissions to this user.
5) Use this user in the connection string (i.e. no passwords in the connection string).
6) Use this user to assign permissions to other resources as required.
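For step 5, the connection string then carries no credentials at all (server and database names below are placeholders); SQL Server authenticates the app pool account created in step 1:

    Server=DBSRV01;Database=AppDb;Integrated Security=SSPI;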