
Sean Massey-1

Tech Insider
  • Posts: 71
  • Joined
  • Last visited
  • Days Won: 1


  1. I've seen this before, but I don't remember where, and it is very frustrating. I think this is set in the Connection Server properties in Horizon Admin. Can you go into Horizon Admin, navigate to Servers -> Connection Servers, open each server, and post a screenshot of the Con1 and Con2 settings? And can you also post a screenshot of your locked.properties file on both servers?
  2. There is definitely something going on with that connection server that is causing issues without taking the service offline. Normally, when Horizon stops responding to requests, the entire web server component is offline, so requests for favicon.ico fail as well. Have you opened a support ticket to investigate the issue that is causing the server to fail? I would start there, because this is not normal behavior. For now, I would recommend disabling the bad connection server inside your AVI pool so that no connection attempts are sent to it, and opening a ticket to investigate.
  3. So...this is a problem. As a consultant supporting this environment, you need to understand the use cases and usage patterns. The details are very important, especially when the use case or usage pattern raises an architectural issue like the one you're experiencing. It is hard to provide recommendations or a solution if you don't know why some users need a session that runs for 2 days to complete a process, or what impact that has on operating the environment. You have to go back to your customer to gather these details.
     This is going to be hard to do within one pool, but I will provide some options below. This is where understanding the customer's use case and the task is really important. Perhaps this specific task or process could be moved to an RDSH server, automated in some way so it doesn't rely on the desktop, or, if none of that is possible, moved into a desktop pool dedicated to this task (ie - users would only log into that desktop to run this specific workflow, or these users would be moved into a pool with different settings so they can run this task without issue). But without details, you can't provide alternatives.
     You can't extend a session unless the user signs in or remains active. Once they disconnect, the timer starts, and the only way to stop it is to reconnect. You might be able to end a disconnected session early using DEM. I have not tested this, so it's only a concept, and you would need to test it in your lab before presenting it to your customer. The idea would be as follows (a rough sketch of the scheduled-task piece is included at the end of this reply):
     1. Set the pool's Log Off After Disconnect timer to 2880 minutes (which you've already done).
     2. Set up a policy in DEM to run a Task on Disconnect for users who are not in the AD group of users who need the longer sessions. This task would enable a scheduled task that logs the user out 2 hours after the task runs. This would have to be done with a PowerShell script that modifies and enables a scheduled task.
     3. Set up a policy in DEM to run a Task on Reconnect for the same users (those not in the AD group) to disable the scheduled task if they reconnect to their session. This would prevent the scheduled task from running and logging them out if they rejoin their session.
     Another option would be to use the Horizon REST API to get all the disconnected sessions that are over 2 hours old, check each session's user against the group of users who need longer sessions, and log out those that do not. This is probably the option I would recommend, because it doesn't rely on multiple steps inside of a desktop, but it would require you or the customer to write the tool to do this. I'm not aware of any application or tool that does this today.
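     As a rough, untested sketch of the scheduled-task concept in steps 2 and 3 above (the task name, the 2-hour window, and the use of logoff.exe are my assumptions for illustration, and DEM's group-membership conditions would decide which users each task applies to), the two scripts a Task on Disconnect / Task on Reconnect could call might look like this:

     ```powershell
     # Disconnect-DelayedLogoff.ps1 - run by a DEM "Task on Disconnect" for users
     # NOT in the long-session AD group (enforced via a DEM group condition).
     # Registers a one-shot scheduled task in the user's session that runs
     # logoff.exe two hours from now. Task name is an illustrative assumption.
     $taskName = 'Horizon-DelayedLogoff'
     $action   = New-ScheduledTaskAction -Execute 'logoff.exe'
     $trigger  = New-ScheduledTaskTrigger -Once -At (Get-Date).AddHours(2)

     # Re-register the task so the trigger always reflects the latest disconnect.
     Unregister-ScheduledTask -TaskName $taskName -Confirm:$false -ErrorAction SilentlyContinue
     Register-ScheduledTask -TaskName $taskName -Action $action -Trigger $trigger | Out-Null
     ```

     ```powershell
     # Reconnect-CancelLogoff.ps1 - run by a DEM "Task on Reconnect" for the same
     # users, so a pending logoff doesn't fire while they are back in the session.
     Disable-ScheduledTask -TaskName 'Horizon-DelayedLogoff' -ErrorAction SilentlyContinue | Out-Null
     ```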
  4. First, I would strongly recommend opening a support ticket for this issue. If you have two environments that were upgraded from 7.x to 8.x/2206, there might be an internal KB that describes this issue and how to resolve it. Or you can get advice to proceed with upgrading to a release that fixes this issue. But you would need an official answer from support on this. Second...I would STRONGLY recommend installing a 2nd CS in each of your environments to provide you with redundancy.
  5. How many connection servers do you have in your environment @Alex Karibov?
  6. @Alex Karibov - before doing the above, I'd recommend reading the linked article and doing the steps to verify the REST API in step 1 of the solution to see if you're getting this issue. You don't want to just start deleting things on an active server if you're not sure if it's an issue.
  7. First, have you rebooted the impacted connection server? You shouldn't have to do this, but sometimes it can clear up issues. Second, have you opened a ticket with support?
  8. Hi David, IIRC, the Horizon Client installer is an EXE wrapper around an MSI file. That EXE wrapper ensures that prerequisites are installed correctly and that certain advanced features can be installed in the order they're needed. The Horizon Client can be installed from the command line, and this is documented here: https://docs.omnissa.com/bundle/HorizonClient-WindowsGuideV2406/page/InstallHorizonClientFromtheCommandLine.html Many customers have used this to package the Horizon Client in Config Manager, Workspace ONE, Intune, and other software distribution services. I have created Chocolatey packages for the Horizon Client in my home lab.
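     As a rough illustration of a silent install (the installer file name and server value are placeholders, and the exact switches and install properties should be verified against the documentation linked above for your client version), something like this could be wrapped in a packaging tool:

     ```powershell
     # Hypothetical silent install of the Horizon Client with a default server.
     # File name and VDM_SERVER value are placeholders - verify the supported
     # switches and install properties in the Omnissa docs linked above.
     Start-Process -FilePath '.\VMware-Horizon-Client.exe' `
         -ArgumentList '/silent', '/install', 'VDM_SERVER=horizon.example.com' `
         -Wait
     ```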
  9. So, a couple of questions here to understand the use case and the ask. First - how is the pool configured? Does it have an automatic Log Off After Disconnect policy configured in the pool settings? Second - why isn't a new pool an option if they're already using the same Instant Clone desktop image for both normal and these long-running sessions? A new pool using the same image wouldn't create any additional resource utilization if you shrink the regular pool by the number of desktops needed for the long-running jobs.
  10. I'm pretty sure I saw this question on Reddit this morning. The docs for configuring the locked.properties file are here: https://docs.omnissa.com/bundle/Horizon8InstallUpgrade/page/AllowHTMLAccessThroughaGateway.html
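      For reference, the settings that doc has you put into locked.properties look roughly like the snippet below. The host names are placeholders, and the exact keys and file location (traditionally under the Connection Server's sslgateway\conf folder) should be confirmed against the linked doc for your Horizon version.

      ```
      # locked.properties on each Connection Server - illustrative values only
      # FQDN clients use to reach the load balancer in front of the gateways
      balancedHost=horizon.example.com
      # FQDNs of the individual gateways/UAGs that forward traffic to this server
      portalHost.1=uag1.example.com
      portalHost.2=uag2.example.com
      # Alternatively, origin checking can be disabled entirely (less secure):
      # checkOrigin=false
      ```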
  11. esxtop, which you would need to run from an SSH session on each host, or a 3rd-party monitoring tool like Liquidware Stratusphere or ControlUp, are probably the best tools to grab performance information on your existing hardware. Even if you don't see the option in your vCenter performance charts, there is a method for converting between CPU Ready Summation and CPU Ready % (shown below): https://knowledge.broadcom.com/external/article/306576/converting-between-cpu-summation-and-cpu.html CPU RDY is one of two metrics you want to be looking at. Co-Stop % is another metric to consider when sizing, as it shows how often the scheduler is stopping a core on a VM so it doesn't drift too far ahead of the VM's other cores. That said, the recommendation I've given in the past is that core speed trumps number of cores. As @Gerard Strouth said, a lot of Windows applications, and even Windows processes, are single-threaded, and they benefit from having higher core speeds. Having a lot of lower-speed cores doesn't do you a lot of good if they're all taking longer to complete tasks. Ideally, you'd want to have at least a 3 GHz per-core base clock speed for Windows 10 or Windows 11 virtual desktops. Note that this is the BASE clock speed.
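      For reference, the conversion in that KB is: CPU Ready % = (CPU Ready summation in ms / (chart interval in seconds x 1000)) x 100, where the vCenter real-time chart interval is 20 seconds. A quick sketch of the math (the sample value is made up):

      ```powershell
      # Convert a CPU Ready summation value (milliseconds) from the vCenter
      # real-time performance chart into CPU Ready %.
      $readySummationMs = 1500   # example value read from the chart (made up)
      $chartIntervalSec = 20     # real-time chart refresh interval in seconds

      $cpuReadyPercent = ($readySummationMs / ($chartIntervalSec * 1000)) * 100
      "CPU Ready: {0:N2}%" -f $cpuReadyPercent   # 7.50% for the example above
      ```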
  12. Partially correct. You would need 3 public IPs in total, but each UAG would need to have its own unique Blast/Tunnel URL. It would look something like this:
      Floating/Shared VIP for UAG HA: 100.64.1.0 / horizon.any_guy.com
      UAG1 NAT Public IP: 100.64.1.1 / horizon_uag1.any_guy.com
      UAG2 NAT Public IP: 100.64.1.2 / horizon_uag2.any_guy.com
      Basically, yes. Once you authenticate, you're communicating directly with the UAG that you authenticated against. All session protocol/secondary traffic happens with that UAG, which is why each UAG needs to have its own DNS name/NAT public IP.
      Partially. In my experience, the UAG VIP will actually float between the two UAGs if both are up. The authentication process will always remain available, and if one UAG goes down, the VIP will remain available on the UAGs that are online.
      I'm not sure what you mean by this. If a UAG goes down/fails, then user sessions that are connected to that UAG will be disconnected, and the users will need to reconnect. The UAG is basically a reverse proxy for Horizon, and session protocol/secondary protocol traffic is pinned to the UAG that the user authenticated against.
      This is only relevant if you're using a 3rd-party external load balancer like NetScaler, F5, AVI, or similar services. UAG HA is outside of the scope of that document, as stated in one of its opening paragraphs.
      Yeah. This happened because I did not configure my UAG HA prerequisites properly, and I didn't realize it at the time. I used a single URL (horizon.lab.example) and VIP for both UAG1 and UAG2. So what would happen is that a user would connect, and the VIP would be on UAG1. They would authenticate with UAG1, and their Blast session would go through UAG1. But at some point during that user's session, the VIP would float to UAG2, and as soon as that happened, the session would freeze and eventually disconnect. Now, this was in a lab environment, so it didn't cause any production impacts or outages, but it was still frustrating for me and some other lab users. That's why I say that you need to make sure you meet the prerequisites of N+1 public IP addresses and unique DNS names for each UAG, because it won't work as intended without them.
  13. OK. So yes...but...typically, you'd be working with an HA pair of load balancers that sync state between them and fail over from one to the other. So the diagram would usually only show 1 load balancer, because the pair is effectively acting as one. I wouldn't recommend doing two separate load balancers with two separate VIPs/DNS names, because that's not really HA. I think you can do this with HAProxy, but I believe it requires Keepalived and other Linux clustering services. But I also haven't tried this in my lab... It should, as you're configuring Entra MFA on the UAGs.
  14. Hi Any_Guy. First, thanks for that context. I want to clarify one thing here. When it comes to Horizon, HA and load balancing are the same thing. You scale Horizon and make it highly available by putting the connection servers and UAGs behind some form of load balancer. UAG HA is a great option for the UAGs, and it can work well in a lot of environments. But it has some caveats. UAG HA requires N+1 public IPs for your UAGs - 1 public IP for each UAG you have in your DMZ and one floating IP/VIP that is used for your load-balanced URL - and each UAG should have its own public DNS name for Blast and HTTPS tunnel traffic. If you haven't read the documentation, I would strongly suggest reading it. (I'm not saying to not use it - it works great...just be aware of the caveats, because if you don't set it up right, it might seem like it's working and then your users start getting disconnected randomly over time). HAProxy should be fine for your connection servers (a rough example is below). You don't need UDP for connection servers. UDP would only be required if you're sending all of your Blast or PCoIP traffic through the load balancer.
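      To make the HAProxy option concrete, here is a minimal, untested sketch of load balancing two Connection Servers over HTTPS. The names and IPs are placeholders, and the favicon.ico health check and source-IP persistence are common choices rather than requirements - adjust them to your environment.

      ```
      # haproxy.cfg fragment - illustrative only, not a complete configuration
      frontend horizon_cs_front
          bind *:443
          mode tcp
          option tcplog
          default_backend horizon_cs_back

      backend horizon_cs_back
          mode tcp
          balance source                    # keep a client pinned to the same CS
          option httpchk GET /favicon.ico   # simple per-server health check
          http-check expect status 200
          server con1 192.0.2.11:443 check check-ssl verify none
          server con2 192.0.2.12:443 check check-ssl verify none
      ```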
  15. Thanks for referencing my blog, @Matthew Heldstab. It's a great place to get started with some Horizon load balancing topics, especially for UAGs. So first, I'd like to understand what you're hoping to load balance. Are you just looking to load balance your UAGs? Both your UAGs and Connection Servers (ie - one LB in front of the UAGs and one between the UAGs and CSs)? Or just the CSs? My second question is about the 2nd option you mention - using DNS with manual entry updates/changes in place of a load balancer. Are all your Horizon resources in the same site, or are you stretching between sites? I definitely recommend using a load balancer, even for “small” environments of 100 or fewer users. A load balancer is not overkill, and you don't necessarily need an NSX, F5, or NetScaler ADC license. Some open-source load balancer options will work, but you may not get all of the features of the paid ones (HAProxy Open Source doesn't do UDP traffic, NGINX Open Source does not have active health checks, etc...).