Why are these folders' share names appended with dollar signs? - networking

I've recently come into a web development position with a company who just lost their server/network admin. Though I was hired for web development, I'm being asked to do some routine maintenance tasks on the server that I'm nonetheless having trouble with, having no background in this sort of stuff. It doesn't help that we've lost all communication with the old network admin.
Here's the situation. A few dozen faculty members in this section of the university have shared directories on the server (Windows Server 2003, SP 2) such as \\servername\Jones$, \\servername\Smith$, and \\servername\Watson$. My question is this: Why are the share names appended with dollar signs? It doesn't appear to be a technical requirement, nor does it distinguish those folders from other similarly-named folders. Is this standard style, a requirement of some sort that I've failed to understand, or something that I should write off as the product of the last admin's eccentricity?
Apologies for the kinda lame question, but I haven't been able to figure this out, and I've been continuing to add new directories with dollar-sign-appended share names because I'm unsure about whether or not it's actually necessary.

A trailing $ on an SMB/CIFS share name means that the share is hidden and won't be displayed when browsing shared folders. This usually, but not necessarily, implies that it requires administrator privileges to access (the built-in C$ and ADMIN$ shares, for example, do).
The same convention applies to shared printers as well.

Adding a dollar sign ($) to the end of the Share Name will hide the resource from the Network Neighborhood directory. This is referred to as a hidden share.

Appending the dollar sign makes the share hidden, so it won't show up if you browse to the computer in Windows Explorer or use net view on the command line.
If you want the share to show up in those cases, don't put the $ on it.
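To see the difference for yourself, you can enumerate a machine's shares with WMI, which returns hidden shares too; only browse-style tools filter out the names ending in $. A minimal C# sketch, assuming a reference to System.Management.dll (it lists the local machine's shares; point it at a remote ManagementScope if you want to audit the server instead):

    using System;
    using System.Management; // add a reference to System.Management.dll

    class ListShares
    {
        static void Main()
        {
            // Win32_Share returns every share on the machine, hidden or not.
            var searcher = new ManagementObjectSearcher("SELECT Name, Path FROM Win32_Share");
            foreach (ManagementObject share in searcher.Get())
            {
                string name = (string)share["Name"];
                // The trailing $ is only a naming convention: Explorer and
                // "net view" skip these names, but any client that asks for the
                // share explicitly (e.g. \\servername\Jones$) still reaches it.
                bool hidden = name.EndsWith("$");
                Console.WriteLine("{0,-15} {1} -> {2}", name, hidden ? "[hidden]" : "", share["Path"]);
            }
        }
    }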

They are "admin shares" and are usually hidden if you are browsing \computer using windows Explorer or most other browsing tools.

The $ at the end of the share name hides the share from people browsing.
For instance, if you have a share "\\computer1\share1$", anyone who browses to "\\computer1" will not see share1 listed.
However, if you create the share "\\computer1\share2", they will be able to see share2.
That's the only difference I know of.

It's a hidden share. Not only does it not show up, but it's also not searchable.

I know for a fact that in order to access non-shared directories on the target machine, you need to do something like \\machinename\c$, where c$ represents the C:\ drive. I'm not sure whether that's what's going on here, though.

Related

Remove a virus on WordPress

For a few days now, my WordPress website has had a virus.
The website is unresponsive; I get error 500 when trying to access it.
The admin panel has a popup window written in Russian:
http://imghost.in/images/2018/08/27/22f42129593820fa959655c622c426d0.png
how can I remove it and get my website back?
Any help will be appreciated!
So, just to sum things up:
Firstly, if it has been infected for a few days, try asking your provider for a rollback/backup. Most providers keep backups for 7 days by default (longer depending on your subscription type and added security). I would recommend keeping backups yourself in the future, including the database.
If the provider cannot provide a backup from before the infection, see if they can do a scan and tell you exactly which files are infected. This will speed up the debugging process. What you will be looking for are entire folders that are "out of place" and have randomly generated names like "qwewyeg". Delete all of those folders. Infected files will usually contain a class, or an addition to an existing class, that makes an external call. These are usually planted at the beginning of an infected file and also have randomly generated class names like "qdhjsahd" etc. Delete those sections.
Once all infections have been removed, change all of your passwords and switch from FTP to SFTP. Update WordPress as well as its themes and plug-ins.
When you've done all that, you should be fine.
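If you do end up hunting for injected code by hand, a rough heuristic sweep over the PHP files can help narrow down what to inspect. This is only a sketch: the WordPress root path and the pattern list below are assumptions, and a clean result does not prove the site is clean.

    using System;
    using System.IO;

    class SuspectScan
    {
        // Strings that commonly appear in injected WordPress malware.
        // Purely illustrative; extend or adjust for your own case.
        static readonly string[] Patterns =
        {
            "eval(base64_decode",
            "gzinflate(base64_decode",
            "str_rot13("
        };

        static void Main(string[] args)
        {
            // Placeholder path; pass your real WordPress root as the first argument.
            string root = args.Length > 0 ? args[0] : @"C:\inetpub\wwwroot\wordpress";

            foreach (string file in Directory.EnumerateFiles(root, "*.php", SearchOption.AllDirectories))
            {
                string text = File.ReadAllText(file);
                foreach (string pattern in Patterns)
                {
                    if (text.Contains(pattern))
                    {
                        Console.WriteLine("Suspicious: {0} (contains \"{1}\")", file, pattern);
                        break;
                    }
                }
            }
        }
    }

Treat any hits as candidates to inspect, not as an automatic delete list.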

Is it possible to resolve a DCOM Interactive User permission error, when using DCOM with IIS and .aspx pages displaying data from a database?

There are a number of similar-sounding questions to this, but none of them quite address this specific question.
https://social.technet.microsoft.com/Forums/systemcenter/en-US/dfc465bc-7bbd-483e-b98b-2ba56fa98313/the-applicationspecific-permission-settings-do-not-grant-local-launch-permission-for-the-com-server?forum=configmgrgeneral
https://social.technet.microsoft.com/Forums/windowsserver/en-US/353d381d-0911-41c3-98fb-2475b65c32f6/dcom-was-unable-to-communicate-with-the-computer-xxxx-using-any-of-the-configured-protocols?forum=winservergen
https://social.technet.microsoft.com/Forums/windows/en-US/4aa643b6-f90d-4672-aba4-6c0a290e22d4/distributedcom-permissions?forum=win10itprosecurity
https://social.technet.microsoft.com/wiki/contents/articles/17914.fim-troubleshooting-event-id-10016-the-application-specific-permission-settings-do-not-grant-local-activation-permission-for-the-com-server-application.aspx
DCOM/IIS Issues
https://www.automation.com/pdf_articles/Troubleshooting_OPC_and_DCOM.pdf
I'm sure there are many other pages that are similar to this, but I'll stop here.
The scenario is this:
Windows 10...
...hosting an application (or cluster of applications) that takes near-real-time data from real sensors, parses the numbers, writes them to a database, and presents them on a browser .aspx page via IIS and DCOM
(this is an archaic, mysterious application/set of applications for which there is limited or no source code or documentation).
Occasionally, after working through all the installation and configuration steps, you are presented with a localhost/something.aspx page and whilst you can see the variables by hovering over the fields, the values are not populated.
Looking in the Windows Event Viewer, you may see errors like this:
Event ID 10016 - The application-specific permission settings do not grant Local Activation permission for the COM Server application
The application-specific permission settings do not grant Local Launch permission for the COM Server application with CLSID {...long hex number found in the registry...}
i.e.:
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\AppID\{8D8B8E30-C451-421B-8553-D2976AFA648C}
There are two more keys, but I don't have them on the PC I'm typing this on; one is an installed component for handling DCOM, and the other is to do with the Interactive User as well.
The installed one is usually the one that reports having no permissions, typically for the "NT AUTHORITY\NETWORK SERVICE" SID (S-1-5-20) user (formed into a group of admin and user accounts)
[https://www.experts-exchange.com/questions/24205909/NT-AUTHORITY-NETWORK-SERVICE-SID-S-1-5-20-on-Windows-Server-2003.html ... can't access this page at the moment]. Then, when you go into the security settings and add specific permissions for it, you're left with it reporting the Interactive User keys above, the 8d8... one and a 726... one, which you can't edit permissions for.
What bugs me about this is that it's apparently a complete magical mystery, which is unacceptable in a computer system! :D
The current solution is to not bother fiddling: just format the machine, reinstall Windows, work through all the application installation, IIS, DCOM, and .aspx configuration steps again from scratch, and hope for the best - i.e. that the magical special order you do things in just makes it work, "because it does". I think this is ridiculous and time-consuming; more to the point, it bugs me intensely that there's no clear notion of what the actual problem is and what the solution might be.
There must be something specific happening, or not happening, that is causing DCOM to not talk to the .aspx webpage; surely something that can be tweaked after the event without zapping the whole system and spending ages redoing it all. It's as if something is "unplugged", or has a different identity/name/number from what is being looked at or filtered by what the .aspx page ingests. Maybe something in the code? Something in the IIS selections? Something that needs resetting? Not sure...
I'm not asking for a silver bullet, but if anyone is willing to help work through this, that would be appreciated. It's annoying and frustrating, I'd like to get to the bottom of it, and hopefully this can become a definitive thread that others might benefit from.
Before going into these lengthy procedures and editing the registry, I would look at TCP/IPv6 in the Local Area Connection settings. If it is enabled, disable it, then flush the DNS or restart your server. Hope it helps future seekers.
Stay blessed everyone.
This appears to be the exact same issue from 4 years ago:
https://answers.microsoft.com/en-us/windows/forum/windows8_1-winapps/weather-application/e4630db3-50c2-4cc5-9813-f089494a1145
Hi
Not sure if this will fix your issue but I was able to fix mine.
1. Open Regedit.
2. Go to HKEY_CLASSES_ROOT\CLSID\<CLSID>.
Note: <CLSID> stands for the ID that appears in your Event Viewer error. In your case, it's {C2F03A33-21F5-47FA-B4BB-156362A2F239}.
3. Right-click on it, then select Permissions.
4. Click Advanced and change the owner to Administrator. Also tick the box that appears below the owner line.
5. Apply Full Control.
6. Close the tab, then go to HKEY_LOCAL_MACHINE\SOFTWARE\Classes\AppID\<APPID>.
Note: <APPID> is the ID that appears in your Event Viewer error. In your case, it's {316CDED5-E4AE-4B15-9113-7055D84DCC97}.
7. Right-click on it, then select Permissions.
8. Click Advanced and change the owner to Administrators.
9. Tick the box that appears below the owner line.
10. Click Apply and grant Full Control to Administrators.
11. Close all tabs and go to Administrative Tools.
12. Open Component Services.
13. Click Computers, click My Computer, then click DCOM Config.
14. Look for the service that corresponds to the one in the Event Viewer error.
Note: For this step, look for the name that appeared in the right panel of RegEdit. For example, the AppID registry key {316CDED5-E4AE-4B15-9113-7055D84DCC97} contains the "Immersive Shell" data under its (Default) value. So look for "Immersive Shell".
15. Right-click on it, then click Properties.
16. Click the Security tab, then click Add User. Add SYSTEM, then apply.
17. Tick the Local Activation box.
18. Restart.
Hope that helps.
EDIT: I edited the step number 14 for it to be clearer. I am glad that I was able to help out.
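If it helps to confirm which DCOM entry an event is complaining about before touching any permissions, you can read the friendly name straight from the AppID key mentioned in the error. A small sketch (the GUID below is just the example from this answer; substitute the one from your own 10016 event):

    using System;
    using Microsoft.Win32;

    class AppIdLookup
    {
        static void Main()
        {
            // Example AppID from the steps above; replace with the one in your own event.
            string appId = "{316CDED5-E4AE-4B15-9113-7055D84DCC97}";

            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Classes\AppID\" + appId))
            {
                if (key == null)
                {
                    Console.WriteLine("AppID not found: " + appId);
                    return;
                }
                // The (Default) value normally holds the display name shown in
                // Component Services -> DCOM Config, e.g. "Immersive Shell".
                Console.WriteLine("AppID " + appId + " = " + (key.GetValue(null) ?? "(no default value)"));
            }
        }
    }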

Multiple ASP.NET MVC code-first sites using the same DB/Auth/Model/backend - how?

So I may have a bit of a strange situation here, and I need some guidance.
The company I work for has a number of small sites, with each site selling a different custom program. The sites are badly in need of an entire rewrite from top to bottom, and my boss has agreed that a login and online purchase/registration is required. This means user accounts, the ability to download a trial from within the user account, the ability to register that trial also from within the user account and the ability to view a list of previous purchases/registrations and product keys.
The thing is, I want a sign-on from one site to be usable on another. Plus, all of this will need to be administered (on our end) from one admin interface. So my thought is that this will all have to be driven by a single database.
Just to be clear: the front-end for each site needs to be different - sometimes only subtly, but in some cases by quite a bit (marketing differences). The backend (both the Admin and the Client interface) is identical in structure regardless of the site URL, but the Client interface needs to show different content (programs to download, lists of computers the program has been registered on, etc.) depending on which URL is being used.
My problem is that I am not entirely sure how to do this from a code-first perspective. The sites are small, quite easy to build on their own, but I want them all within the same Solution so that a change to the model will be reflected across all of them (I will be able to see where things go sideways if the model gets altered - Intellisense is my friend!).
I have looked into Areas, but Areas seem to be a way of partitioning off gross differences within a site (things like resource files - JavaScript, CSS, etc. - remain in the root, whereas in my case each site will need different resource files). I need each section to be its own unique site, with its own unique URL. When this gets pushed to the server, I need each site to be “independent” in that they can sit in entirely different accounts on the same Windows Plesk server (Plesk was not my choice, but the company has clients that need a control panel interface to their own accounts). The only strong commonality between any of them is the database that they will make use of -- in fact, all of them will be making use of the same tables, with very few differences between the sites.
My other problem is that I do not know how to properly implement having multiple projects within a single solution, and all projects making use of the same model that is implemented/constructed only once. I have not yet made the jump to Repository Patterns, so I am completely in the dark with respect to that functionality.
If anyone has a suggestion, I would love to hear about it.
Essentially, you just need to create a class library where you will put your entities and context. If you're using Identity, you'll also put all the Identity-related entity classes here. You'll enable migrations on this class library. The other projects in your solution, then, will have a reference to this class library. You'll need to add the connection string to the individual projects' Web.config files, but other than that, everything will just work.
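A minimal sketch of what that shared class library might contain, assuming Entity Framework 6 code-first and ASP.NET Identity; the names here (MyCompany.Data, LicensingContext, Purchase, "DefaultConnection") are placeholders rather than anything your solution has to use:

    // Shared class library referenced by every site project in the solution.
    using System.Data.Entity;                        // EntityFramework package
    using Microsoft.AspNet.Identity.EntityFramework; // Identity EF provider

    namespace MyCompany.Data
    {
        // Example domain entity; the rest of your model lives here too.
        public class Purchase
        {
            public int Id { get; set; }
            public string UserId { get; set; }
            public string ProductKey { get; set; }
        }

        // Keeping the user type in the shared library means every site
        // works against the same user store.
        public class ApplicationUser : IdentityUser { }

        public class LicensingContext : IdentityDbContext<ApplicationUser>
        {
            // "DefaultConnection" must be defined in each site's Web.config.
            public LicensingContext() : base("DefaultConnection") { }

            public DbSet<Purchase> Purchases { get; set; }
        }
    }

Each web project then just references this library and supplies the "DefaultConnection" string in its own Web.config; migrations stay enabled on the class library only.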
This is enough to ensure that all the projects are working from a common database and user store. However, in order to actually share authentication, such that signing in to one signs you in to all of them, you'll have to take one of two paths, depending on how they will be deployed.
If all of the sites will be on the same domain (different subdomains are fine), then all you need to do is generate a machine key and ensure that each site uses the same machine key in its Web.config. The auth cookie will be set on the parent domain, and any subdomain of that domain will be able to see it. Sharing the machine key ensures that each site can decrypt what any one of them sets as the auth cookie.
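If you need to generate that shared machine key, one option is to produce the key material yourself and paste the same values into every site's Web.config <machineKey> element. A rough sketch (the byte lengths assume the common HMACSHA256 validation / AES decryption pairing; adjust them if you use different algorithms):

    using System;
    using System.Security.Cryptography;

    class MachineKeyGen
    {
        // Returns a random hex string of the requested byte length.
        static string RandomHex(int byteCount)
        {
            var buffer = new byte[byteCount];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(buffer);
            return BitConverter.ToString(buffer).Replace("-", "");
        }

        static void Main()
        {
            // Paste these same two values into every site's Web.config so the
            // sites can read each other's forms-auth cookies.
            Console.WriteLine("validationKey=\"" + RandomHex(64) + "\"");
            Console.WriteLine("decryptionKey=\"" + RandomHex(32) + "\"");
        }
    }

IIS Manager's Machine Key feature can generate equivalent values for you; the important part is that every site ends up with identical validationKey and decryptionKey settings.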
If not all the sites will be on the same domain, then you're in for some pain. You'll have to set up an SSO (single sign-on) system, which is non-trivial and far beyond the scope of Stack Overflow to help you with. There are entire companies devoted just to building SSO systems for organizations.

How to restrict external access to a specific sub-URL IIS7

I've currently got a reasonably large site up that I've been asked to make changes to.
Currently, to log in to this site you need to go to:
www.example.com/folder/loginpage.html
This site is only accessible internally at this time and it is unlikely to ever be accessible externally.
We would like to, however, be able to direct external users to a sub-directory on the site (a 'survey' form) which is located in
www.example.com/folder/subfolder/survey.html
This survey writes its results back to the main application, and I believe they are tightly integrated.
We initially tried the idea of using an additional IIS7 box as a reverse proxy, but it is quite confusing to me; I'm not very familiar with IIS/ARR and the other features required (I'm mostly familiar with networking). I did try to follow a number of tutorials but didn't get very far. I'd like to avoid it if possible.
How can I, using IIS7 (this site is in ASP.NET) restrict external users from accessing anything other than the survey pages (there are a few included files necessary as well)?
Is it possible to make www.example.com/folder/subfolder/survey.html a 'website' in itself, so that I can publish a URL like survey.example.com externally?
I've come across other examples where access is restricted from specific pages but the root of the site is still accessible
ie
www.eg.com/ is allowed but www.eg.com/admin.aspx is denied. I'd like the reverse in effect and, if possible, to hide the 'true' URL.
Hope someone can help! If using a reverse proxy is possible, I'm happy to do it, but I'd need detailed instructions.
Thanks for reading,
Much appreciated!
Edit: Sorry all, I'm new to stackoverflow, indeed I've just realised that there are several other sub-communities. Is it more appropriate to ask this in a different community? If so, which one?
Thanks!
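One way to sketch the restriction in code, if the IIS-level options prove fiddly, is an ASP.NET HttpModule that lets internal addresses through and limits everyone else to the survey folder. This is only an illustration under assumptions: the internal-address check and the allowed path prefix below are placeholders you'd replace with your real subnets and folders, and it doesn't hide the 'true' URL by itself.

    using System;
    using System.Net;
    using System.Web;

    // Register this module in the site's Web.config (system.webServer/modules).
    public class SurveyOnlyModule : IHttpModule
    {
        // Placeholder: the path prefix external visitors are allowed to reach.
        const string AllowedExternalPrefix = "/folder/subfolder/";

        public void Init(HttpApplication app)
        {
            app.BeginRequest += OnBeginRequest;
        }

        static void OnBeginRequest(object sender, EventArgs e)
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            string ip = ctx.Request.UserHostAddress ?? string.Empty;

            // Placeholder "internal" check; substitute your own address ranges.
            bool isInternal = ip == "127.0.0.1" || ip == "::1" || ip.StartsWith("10.");
            if (isInternal)
                return;

            // External clients may only reach the survey folder and its includes.
            string path = ctx.Request.Path.ToLowerInvariant();
            if (path.StartsWith(AllowedExternalPrefix))
                return;

            ctx.Response.StatusCode = (int)HttpStatusCode.Forbidden;
            ctx.Response.End();
        }

        public void Dispose() { }
    }

For the friendlier survey.example.com address, you'd still need a separate binding or a rewrite (e.g. via ARR) pointing at /folder/subfolder/ behind the scenes.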

How do you configure your PeopleSoft Test Environments?

What are some of the items you change in your test environments to make them stand out from production, and how do you go about making these changes? I know you can change the PS logo, but what else do you change in your test environments? Colors, fonts, etc.?
Changing the colour schemes isn't too difficult. If you want to go down that route and need a helping hand I'll be happy to assist.
Other clients just put the environment name in the PIA Greeting. That's easy too:
http://peoplesofttipster.com/2007/06/13/using-the-pia-welcome-message/
I've not heard of anyone changing fonts though.
kind regards
Duncan
When you refresh a DB, make sure you change the Tools tables that contain the DB name, web URLs, and so on. The test DB name will appear in the portal menu.
Changing the fonts would require changing all the style sheet definitions, a major headache. I guess you could go in and edit some basic style sheet attributes in App Designer, PS_TEXT, et al., and change those in a test environment. Maybe just changing a few would be enough.
Most test envs also open up security to some degree, simply because testers and developers likely need access to everything. So people's menus are going to be much larger than in production.
This is the relatively small change we made: replacing the "Home" link in the toolbar with the name of the environment everywhere but prod. It was also incorporated into the refresh scripts to pull in the DBNAME dynamically.
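-- Swaps the toolbar "Home" label (message set 95, message 401) for the environment name;
-- the literal DBNAME below is what the refresh scripts substitute dynamically.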
update psmsgcatdefn set message_text = 'DBNAME', descrlong = 'Environment name home replacement' where message_set_nbr = '95' and message_nbr = '401';
Items to watch for:
File paths - if you open files for reading, and especially writing, to communicate with external systems, you need to make sure you are not stepping on Production's toes.
Web services - I would assume these need the same re-routing so you don't hit live external production web services.
Emails - workflow or other subsystems are liable to fire off emails to users or external parties. You need to make sure those go to tester addresses instead of the real addressees.
