How to get HP thin clients (t420) and Windows Server 2016's Multipoint VDI to work together? - windows-server-2016

So we have fourteen HP t420s and one Windows Server 2016 machine with Multipoint set up. If we create users in Multipoint Manager, the clients can connect via RDP as those users all right. But we can't get it to work with VDI. We've created a Windows 10 template from an ISO, but can't figure out how to create stations from it. Clicking the create stations option reports "virtual desktop station created for -t", but nothing new appears in the list of virtual desktops. We tried that with a couple of clients connected as users - no difference.
Also, when restarting the server it shows a screen telling us to press "B", but nothing happens if we press it. No such screens appear on the clients (they just disconnect if the server is restarted); no idea if they are supposed to.
We can't find any manual better than the official help file, and it just says to use the create stations option - nothing on whether the clients should be online or offline or anything else. We've tried both; nothing happens.
Ah, and we did the customize template thing where you are supposed to run a cmd file from its desktop before creating stations.
And the clients are connected via LAN. Each has a monitor, mouse and keyboard.
What else... we can't tell when the create stations option becomes available. Previously it seemed that to make it appear we had to restart the server and then log in as administrator on one of the clients. But now we suddenly see it available on the server (it still doesn't work though), and we're not sure what exactly we did to cause this effect.
The manual suggests it might only be available in station mode, but it doesn't explain what that is. We assumed it means logging in from a client, but now it seems it's something else?

Okay, so by trial and error we've ended up doing this:
via Server Manager add roles for Remote Desktop Services
open Remote Desktop Services in Server Manager and get an error saying you should log in as a domain user
luckily we already had a domain set up, so we joined this server to the domain and created a user for it (the user should have domain admin rights!)
a group should also be created in the domain to be later selected in collection creation to give this group access to VMs, populate this group with users
open Remote Desktop Services again, now as a domain user with domain admin rights, and create a collection from the template created by multipoint
assign each station to a user (there's an option to do it automatically, but we wanted VM1 to be assigned to User1, VM2 to User2 and I imagine the automation could mess it up)
on the thin clients configure web connection to server.domain/RDweb
it connects, but the log in process is overly complicated
scrap the web connection and configure an RDP connection directly to the VMs (so in "server address" you put not server.domain but VM1.domain, VM2.domain, etc.)
now it connects with just a double-click on the connection; each thin client also ends up linked to a particular VM, as the user doesn't get to enter a username and password themselves; this probably won't work if the VMs aren't already running, but that shouldn't be a big problem
open Multipoint Dashboard and see that it doesn't see any of the connections
open Multipoint Manager and click "add personal computers to control" or something along those lines; it doesn't see any, so use the "add manually" input box to enter the names one by one: VM1.domain, VM2.domain, etc.
for each added PC it will ask for a username and password; by trial and error we figured out it should be the local admin account, the one set up when creating the template in Multipoint
somehow it doesn't always work for every VM - sometimes some random ones among them are unable to connect; in that case, close Multipoint Manager and reopen it, and it will try to connect again
once Multipoint Manager successfully connects to all the VMs, open Multipoint Dashboard and now it sees them all (actually, it only sees them if there is an RDP connection; it won't just show the desktop of a VM to which no one is connected via RDP; the clients might automatically disconnect after some minutes of inactivity, but since reconnecting takes just a double-click, that shouldn't be a problem)
and of course set up the licensing; I wasn't personally setting up this part, so I can't give any details on it
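To illustrate the direct connections from the steps above, a minimal .rdp file for one of the thin clients could look roughly like this (host, domain, and user names are placeholders, the exact set of keys may vary, and the t420's connection manager exposes equivalent fields if it doesn't use .rdp files directly):

```
screen mode id:i:2
full address:s:VM1.domain
username:s:DOMAIN\User1
prompt for credentials:i:0
```

With the password saved on the thin client, double-clicking this connection drops the user straight onto their assigned VM.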
All seems to work fine now... or not. In Multipoint Dashboard some functions work and some don't:
seeing users desktops works just fine
PM works fine
remotely starting and closing apps works fine (though the list it offers to choose from is the list of apps on the server; what it really does is try to start the app at the same path on the station, and it gives an error if there isn't one)
taking control over a station doesn't work: just shows a black screen on the server
sharing the server's screen works, but stopping it doesn't, leaving the stations with white screens which seem to be fixed only by reconnecting them
forcing stations to disconnect works, but if you choose to disconnect all of them at once, one desktop will remain shown in the dashboard even though on the thin client it will be disconnected
I wonder if there is a way to fix these?..

Related

Meteor: what happens when a server drops

When you have a Meteor cluster (let's say 2 boxes) and a server stops responding (goes down), does all the traffic get re-routed to the other "live" server? I'm building an application for someone that will very likely be a fire-and-forget application (where it runs and just provides updates when they come in).
My concern is that if one server goes down, there won't be any traffic to any of the clients that were attached to that box.
Info about app:
The app will be fire and forget (load the page and walk away). Likely no one will refresh the page or anything.
This app is mission critical and someone not getting a notification is really, really bad, and a difference of a few seconds does matter.
Websockets must be used. The 10-second delay of pull-based polling is unacceptable.
Most Importantly....
The app must auto recover. If a server goes down, the client must switch to a good box without a page refresh or someone walking over to the box and causing the refresh.
Meteor will always try to reconnect to the server when a connection is lost, so if the server comes back online it will reconnect. If you need custom logic to retry a connection to a different cluster when the user disconnects, that should also be easy to code: the docs have a reactive API for the connection status (Meteor.status). Here is a new package I found that can be a great place to see how it should work: https://github.com/nspangler/autoreconnect
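As a rough sketch of such custom retry logic (the host list, threshold, and wiring are my assumptions - only Meteor.status and DDP.connect come from the docs):

```javascript
// Hypothetical failover helper: cycle through a list of cluster
// hosts, moving to the next one after maxRetries failed attempts.
function makeFailover(hosts, maxRetries) {
  let index = 0;
  let failures = 0;
  return {
    currentHost() { return hosts[index]; },
    // Call each time the connection status reports another failed retry.
    recordFailure() {
      failures += 1;
      if (failures >= maxRetries) {
        failures = 0;
        index = (index + 1) % hosts.length; // rotate to the next box
      }
      return hosts[index];
    },
  };
}

// In a Meteor app you would watch Meteor.status().retryCount inside
// Tracker.autorun and call DDP.connect(failover.recordFailure())
// once the count passes your threshold.
```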
Also, if you're using meteorhacks:cluster, it's possible it will retry connecting to a different server. The docs don't really say anything about it, but if it doesn't, I think arunoda might add that just by asking on git.
good luck :)

Virtual network tab on Windows Azure isn't available

I made a virtual network on Windows Azure to set up a site-to-site VPN. After creating this virtual network and adding a gateway with dynamic routing, I found the network tab has a red exclamation mark and is disabled, so I can't open this tab and can't create a new network. It displays a message telling me to reload or refresh my web browser.
The red exclamation point appears from time to time if there is an error loading data for that feature. This can happen when the portal code running in your browser requests data on the back end and that fails for some reason. Many times this is just a transient issue. When this happens I usually refresh the browser to see if it was a transient problem, and if that doesn't fix the issue I go check the status board for the particular service (http://www.windowsazure.com/en-us/support/service-dashboard/).
If it continues to be an issue and there is nothing on the status board indicating a problem you can contact support, if you have a support plan.

Internal app needs to query a database on a server in the DMZ

I'm developing an app using ASP.NET and VB.NET, hitting a SQL Server 2008 R2 database. There's an internal app which sends an email to a customer. The email contains a link which the customer clicks on, and the page load of that page updates a database sitting in our DMZ. I'm trying to write a service which will query this database at various times and then, based on the result, fire off an email to an internal group. Originally this was set up to fire the email from the box in the DMZ, but our network admin doesn't like having port 25 open like that, so now I have to rebuild the app internally to query that database, so that the inbound email can be generated on an internal box.
SO... my problem is making the connection in Visual Studio (2012). When you configure the SQL data source to a box inside the network, all you need is the name of the server, and the drop-down gets populated with the databases. At first VS wouldn't see the server at all. We turned on "named pipes" on the server, and then I entered the server name as ip,80 (this is the only port the network admin will allow open) and now it sees it. However, before the drop-down gets populated, I get an error saying "A connection was successfully established with the server, but then an error... an existing connection was forcibly closed by the remote host." I know SQL Server normally runs on port 1433 or something like that, but if I use that, it goes back to not being seen.
Is there a way to configure the SQL data source to see this server? I've researched for a couple of days, but generally the topics have been about working in the other direction, or related to sporadic issues, which this isn't. Our network admin isn't much of a programmer, so he doesn't know much about my end, only that he seems sure that named pipes are the way I need to get in... however, beyond enabling them on the server, I don't know much about them, or whether VS can even use them...
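For reference, the connection strings I've been experimenting with look roughly like this (server address, database, and credentials are placeholders; SQL Server's default TCP port is 1433, and a custom port goes after a comma):

```
TCP on a custom port:  Data Source=tcp:192.0.2.10,80;Initial Catalog=DmzDb;User ID=appuser;Password=...;
Named pipes:           Data Source=np:\\192.0.2.10\pipe\sql\query;Initial Catalog=DmzDb;User ID=appuser;Password=...;
```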
thanks in advance.
(I've been coming to this site for a long time now for answers; this is the first time I've ever had to post a question.)
Wow, a MS SQL Server in your DMZ???
Short answer is to tell your NA he doesn't need to open port 25 for you to SEND an email, unless there is some part of the story I am missing.
The better answer, get that server out of the DMZ and create a web service. They are easy and can be made very secure.

Automate Blackberry 10 simulator actions

I'm using the VMWare Player and the Blackberry 10 simulator image; I need to do some unit/integration tests automatically. I know I can use the VIX api to spin up a new Simulator and load the Blackberry image.
What I would love to be able to do is send 'key presses', launch specific apps, and perhaps send gestures. On Android there's monkeyrunner and other similar apps; however, I haven't found much with respect to BB10. I know it's new, but I can't be the only one with this request.
Also, how powerful is the telnet option? I can telnet into an emulator and change directory into the apps dir, but I can't list its contents, sudo, or run anything.
*****UPDATE*******
I've made some progress with respect to this, but not much. It seems you can use the Windows API to send mouse_evt messages to the VMWare emulator; it's not 100% reliable, but it works well enough to open apps. The big hole I have right now is being able to detect state after the action/swipe/touch is executed, i.e. "did the swipe I just executed work? Are we in the right app?". It would be hugely beneficial to query the device's process list, but the 'devuser' account given in the telnet example can't really do anything.
This gist has the basics for how to touch and swipe the screen based on my experiences.
https://gist.github.com/edgiardina/6188074
As you are on Windows, have you tried AutoHotkey (freeware) on the host machine that runs the VMWare Player? This software can send any key/mouse move/click combination and has several ways of analysing the VMWare Player window output and reacting to it.
If in your example you want to check if a certain app has started and is visible, you can start it manually once and make a screenshot of a small part of the apps interface. Then you write a script that sends whatever mouse moves and key types are needed to start the app, make the script pause a while and then perform the ImageSearch command to search for this image on screen.
I don't know much about any of this but telnet.
When you telnet in, you're assigned a shell which, if it is a restricted shell, will prevent you from doing exactly the things you mentioned. Are you able to change the default shell options for the devuser?
You can change directories, but you can't create files, list files anywhere but the home directory, see the rest of the filesystem, redirect output, set environment variables, etc.
Which shell does it give you?
Can you telnet as a different user? Create a new user with better privileges?
Dru
Sorry, this should be a comment.

Monitoring ASP.NET and SQL Server for Security

What is the best (or any good) way to monitor an ASP.NET application to ensure that it is secure and to quickly detect intrusion? How do we know for sure that, as of right now, our application is entirely uncompromised?
We are about to launch an ASP.NET 4 web application, with the data stored on SQL Server. The web server runs in IIS on a Windows Server 2008 instance, and the database server runs on SQL Server 2008 on a separate Win 2008 instance.
We have reviewed Microsoft's security recommendations, and I think our application is very secure. We have implemented "defense in depth" and considered a range of attack vectors.
So we "feel" confident, but have no real visibility yet into the security of our system. How can we know immediately if someone has penetrated? How can we know if a package of some kind has been deposited on one of our servers? How can we know if a data leak is in progress?
What are some concepts, tools, best practices, etc.?
Thanks in advance,
Brian
Additional Thoughts 4/22/11
Chris, thanks for the very helpful personal observations and tips below.
What is a good, comprehensive approach to monitoring current application activity for security? Beyond constant vigilance in applying best practices, patches, etc., I want to know exactly what is going on inside my system right now. I want to be able to observe and analyze its activity in a way that clearly shows me which traffic is suspect and which is not. Finally, I want this information to be totally accurate and easy to digest.
How do we efficiently get close to that? Wouldn't a good solution include monitoring logins, database activity, ASP.NET activity, etc. in addition to packets on the wire? What are some examples of how to assume a strong security posture?
Brian
The term you are looking for is Intrusion Detection System (IDS). There is a related term called Intrusion Prevention System (IPS).
IDSs monitor traffic coming into your servers at the IP level and will send alerts based on sophisticated analysis of the traffic.
IPSs are the next generation of IDS which actually attempt to block certain activities.
There are many commercial and open source systems available including Snort, SourceFire, Endace, and others.
In short, you should look at adding one of these systems to your mix for real time monitoring and potentially blocking of hazardous activities.
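As a small illustration of the kind of signature these systems work with, a Snort rule can look roughly like this (the port, message text, and sid are made up for the example):

```
# Alert on TCP SYNs from outside aimed at SQL Server's default port
alert tcp $EXTERNAL_NET any -> $HOME_NET 1433 (msg:"Possible SQL Server probe"; flags:S; sid:1000001; rev:1;)
```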
I wanted to add a bit more information here as the comments area is just a bit small.
The main thing you need to understand are the types of attacks you will see. These are going to range from relatively unsophisticated automated scripts on up to highly sophisticated targeted attacks. They will also hit everything they can see, from the web site itself to IIS, .NET, the mail server, SQL (if accessible), right down to your firewall and other exposed machines/services. A holistic approach is the only way to really monitor what's going on.
Generally speaking, a new site/company is going to be hit with the automated scripts within a few minutes (I'd say 30 at most) of going live. Which is the number one reason new installations of MS Windows keep the network severely locked down during installation. Heck, I've seen machines nailed within 30 seconds of being turned on for the first time.
The approach hackers/worms take is to constantly scan wide ranges of IP addresses; this is followed up with machine fingerprinting for those that respond. Based on the profile they will send certain types of attacks your way. In some cases the profiling step is skipped and they attack certain ports regardless of response. Port 1433 (SQL) is a common one.
Although the most common form of attack, the automated ones are by far the easiest to deal with. Shutting down unused ports, turning off ICMP (ping response), and having a decent firewall in place will keep most of the scanners away.
For the scripted attacks, make sure you aren't exposing commonly installed packages like phpMyAdmin, IIS's web admin tools, or even Remote Desktop outside of your firewall. Also, get rid of any accounts named "admin", "administrator", "guest", "sa", "dbo", etc. Finally, make sure your passwords AREN'T allowed to be someone's name and are definitely NOT the default one that shipped with a product.
Along these lines make sure your database server is NOT directly accessible outside the firewall. If for some reason you have to have direct access then at the very least change the port # it responds to and enforce encryption.
Once all of this is properly done and secured the only services that are exposed should be the web ones (port 80 / 443). The items that can still be exploited are bugs in IIS, .Net, or your web application.
For IIS and .NET you MUST install the Windows updates from MS pretty much as soon as they are released. MS has been extremely good about pushing quality updates for Windows, IIS, and .NET. Further, a large majority of the updates are for vulnerabilities already being exploited in the wild. Our servers have been set to auto install updates as soon as they are available and we have never been burned on this (going back to at least when Server 2003 was released).
Also you need to stay on top of the updates to your firewall. It wasn't that long ago that one of Cisco's firewalls had a bug where it could be overwhelmed. Unfortunately it let all traffic pass through when this happened. Although fixed pretty quickly, people were still being hammered over a year later because admins failed to keep up with the IOS patches. Same issue with windows updates. A lot of people have been hacked simply because they failed to apply updates that would have prevented it.
The more targeted attacks are a little harder to deal with. A fair number of hackers are going after custom web applications, things like posting to contact-us and login forms. The posts might include JavaScript that, once viewed by an administrator, could cause credentials to be transferred out or might lead to installing key loggers or Trojans on the recipients' computers.
The problem here is that you could be compromised without even knowing it. Defenses include making sure HTML and JavaScript can't be submitted through your site; having rock solid (and constantly updated) spam and virus checks at the mail server, etc. Basically, you need to look at every possible way an external entity could send something to you and do something about it. A lot of Fortune 500 companies keep getting hit with things like this... Google included.
Hope the above helps someone. If so and it leads to a more secure environment then I'll be a happy guy. Unfortunately most companies don't monitor traffic so they have no idea just how much time is spent by their machines fending off this garbage.
I can say some things, but I will be glad to hear more ideas.
How can we know immediately if someone has penetrated?
This is not so easy. In my opinion, one idea is to set some traps inside your back office, together with monitoring for double logins from different IPs.
A trap can be anything you can think of - for example, a fake page in the back office that says "create new administrator" or "change administrator password"; anyone who gets in and tries to create a new administrator there is for sure an intruder. Of course, this trap must be known only to you, or else it is meaningless.
For more security, any change to administrators should require a second password, and if someone tries to make a real change to an administrator account, or tries to add a new administrator, and fails on this second password, they should be considered an intruder.
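As a rough sketch, the trap check itself can be very simple (the paths below are examples only - real trap URLs must be known only to you):

```javascript
// Hypothetical trap paths; in production these would be secret.
const TRAP_PATHS = new Set([
  "/backoffice/create-new-administrator",
  "/backoffice/change-administrator-password",
]);

// Returns true when a request hits a trap page; the caller should
// then log the client IP and raise an alert immediately.
function isTrapHit(requestPath) {
  return TRAP_PATHS.has(requestPath.toLowerCase());
}
```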
way to monitor an ASP.NET application
I think any tool that monitors the pages for a text change can help with that. For example, this Network Monitor can watch for specific text on your page and alert you, or take some action if the text is not found, which means someone changed the page.
So you can add some special hidden text, and if it is not found, you can know for sure that someone changed the core of your page and has probably changed files.
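A minimal sketch of that hidden-text check (the marker string and the polling wiring are assumptions, not a specific tool's API):

```javascript
// A marker comment known only to you, embedded in the page template.
const MARKER = "<!-- integrity:7f3a -->";

// Returns true while the marker is still present in the served HTML.
function pageLooksIntact(html) {
  return html.includes(MARKER);
}

// A monitor would fetch the page periodically and raise an alert
// as soon as pageLooksIntact(html) turns false.
```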
How can we know if a package of some kind has been deposited on one of our servers
This can be any aspx page uploaded to your server that acts like a file browser. To prevent this, I suggest adding web.config files to the directories used for uploaded data, and in this web.config do not allow anything to run:
<configuration>
  <system.web>
    <authorization>
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>
I have not tried it yet, but Lenny Zeltser directed me to OSSEC, which is a host-based intrusion detection system that continuously monitors an entire server to detect any suspicious activity. This looks like exactly what I want!
I will add more information once I have a chance to fully test it.
OSSEC can be found at http://www.ossec.net/
