nda
Members · Posts: 7

nda's Achievements

  1. I believe the entries in the bsg.log only appear after successful authentication, once the client connects to the pool and BLAST starts up and connects to the desktop. I don't believe failed authentications or attempts would be present in the bsg.log.

     The support technician assigned to my case is telling me that I'm trying to make a feature request for this functionality, despite the page here clearly and precisely detailing fields like 'ClientIP': "IP Address of the component (such as Horizon Client, load balancer, and so on) that sends a request to Unified Access Gateway appliance." and example log snippet output such as:

     On my UAG the "Client_Machine_IP_Address" for all log entries is 127.0.0.1, which is obviously erroneous -- is it a feature request to have this field report the actual client IP? I'd argue not. (There's a small log-scanning sketch for this field after this list.)
  2. I agree with the recommendation in principle and in general. But it's not appropriate for all environments -- hence 'recommendation' and not 'requirement'. Sadly, in a sufficiently complex and diverse environment, full automation of such a deployment is not even possible. We utilize a variety of applications that don't support scripted installation, and/or don't support scripted or file- or registry-based configuration. As such, even if we could automate >80% of the build, we'd still be spending many hours installing and configuring additional software, and then testing and troubleshooting the whole stack. Such as it is, we are burdened with supporting older and/or highly niche, specialized, and tailored software, and there is not much to be done about it. Anyone who has worked in healthcare, municipal environments that support public services (e.g. police or fire), manufacturing, defense contracting, etc., is likely to understand this predicament.

     The feeling of apprehension here fully resonates with me. I've seen plenty of upgrades go sideways in large and small ways, leading to catastrophic failure, system instability, or just general weirdness. But I think the risks of such failures are largely mitigated in virtual environments (e.g. cloning, snapshots). And like I said, sometimes you just have to trust the process -- in-place upgrades have become the norm for workstation OSs (and increasingly so for Server OSs lately), and if Microsoft supports it then we should at least try to have some faith. Maybe faith in Microsoft is misplaced for some, but Windows is still ~70% of global market share.

     I'd also like to point out that, in my environment and this instance specifically, this issue would not have been resolved with a fresh build. Since my issue was driven by DEM, as soon as I installed the agent and recomposed I would very likely have been faced with the same error message again. In this case I'm glad I didn't burn untold hours on building from scratch, and instead remediated the issue with appropriate troubleshooting methodology.
  3. Our environment has a pair of Unified Access Gateways running in the native High Availability mode. Recently we are seeing high numbers of failed logon attempts, and we would like to correlate these attempts back to the source IPs so we can start blocking the attempts on our perimeter firewall. However, the UAGs do not appear to be recording the true Client IP of the connecting system.

     From the Connection Server we see:

     <162>1 2024-10-17T14:51:46.127-07:00 VIEWCON2.<domain.local> View - 158 [View@6876 Severity="AUDIT_FAIL" Module="Broker" EventType="BROKER_USER_AUTHFAILED_RADIUS_WRONG_STATE" UserDisplayName="backup" ClientIpAddress="<removed for opsec but it's the UAG's public IP>" ForwardedClientIpAddress="127.0.0.1"] RADIUS access denied for user backup because of incorrect state

     From the UAGs we see:

     <13>Oct 17 15:12:52 ipv6-localhost uag-esmanager_:[nioEventLoopGroup-11-2]INFO utils.SyslogManager[putUserNameInMDC: 405][127.0.0.1][backup][Horizon][182e-***-7451-***-e3b6-***-c8bc] - UAG sessionId:182e-***-7451-***-e3b6-***-c8bc username:backup

     In both cases the reported Client IP is 127.0.0.1, not the publicly routable/Internet IP of the connecting client. I have dug through the log bundles generated by the UAGs but I'm not finding the true client IP stored or reported anywhere. This seems like a major oversight. I suspect that the failure lies in having the HA mode configured, and that the HA NLB process is forwarding the client request internally but not maintaining the "ForwardedClientIpAddress" attribute.

     Our environment does utilize RADIUS on the Connection Server (not on the UAGs), and we do not NAT the incoming traffic across from the firewall (the UAGs have publicly accessible IPs configured directly on them). I do have an Omnissa Support ticket open for the issue, but so far the technician is spinning his wheels and asking for all sorts of irrelevant information.

     Any insight into this behavior would be appreciated. I really need the true Client IP reflected in the syslog traffic so I can correlate and aggregate the data on my SIEM (a rough parsing sketch for these events follows this list), and I'd like to get this done without dropping the native HA mechanism and putting these units behind my actual full-fledged NLB.
  4. I am familiar with the article here which details OS upgrade support, including support for leaving the Horizon Agent installed for Full Clones during the upgrade. However, this article does not say that OS upgrades are prohibited for IC pools or other Horizon use cases -- only that the Agent should be removed beforehand and reinstalled afterwards. I am not familiar with any Horizon documentation that expresses a prohibition on in-place upgrades. The recommendation is always 'start fresh', of course, and again I agree that this is the best route overall. But sometimes the juice isn't worth the squeeze.
  5. Every iteration of application layering that we have implemented and tested (ThinApp, Unidesk before it was Citrix-owned, AppVolumes) has been a comical dumpster fire of added complexity. It certainly does not reduce administrative overhead -- it is just another solution to maintain and another point of contention to troubleshoot.

     The issues with these solutions revolve around interdependency and common/shared features between layered applications. With a large and complex application set, eventually you will find a set of two (or more) layered applications that have conflicting binaries, shared dependencies (Visual C++ or Crystal Reports, for example), or conflicting registry keys, and then you'll spend inordinate time troubleshooting these conflicts so you can attach all your possible layer combinations without breaking things. 'Layer Priority' -- I don't miss it. Not to mention the significant loss of performance that can occur when attaching another file system filter driver that has to 'think' about where to direct your I/O.

     Again, solutions like AppVolumes are certainly not one-size-fits-all. You're right that most users don't need most apps -- we restrict their access via AppLocker, and we don't mind that they can see the application shortcuts in the Start Menu. It's hard to overstate the ease of management that comes with a single image and no layering -- OS updates, application updates, recomposing... everything is dead simple with this approach, and it's easily the lowest-complexity approach for a small team to manage.
  6. Citation Needed. Gentlemen, I'm all for constructive criticism, but at some point you just have to trust the process. I generally agree that starting from scratch and deploying programmatically leads to the most reliable platform; however, this deployment methodology is not appropriate for everyone -- and deferring to it as a knee-jerk, panacea-style solution for every issue isn't conducive to successful troubleshooting. Try to stay on topic please.

     In our case, we have 195 applications installed on the Gold, and years of deep customization and troubleshooting to make everything work nicely. Maybe this is technical debt, hard to say really. But building a new image is no small feat for us. We are a small team supporting a large and complex user environment, so resolving issues is preferable to starting from scratch in almost every scenario. Cattle vs. Pets maybe -- again, hard to say.

     For anyone else experiencing this issue ( @Super6VCA ), I was able to resolve this today with some additional troubleshooting. In our environment, we found that the UEM/DEM profile for 'Start Menu' was causing the issue. We had made some OS-specific profiles for various things, but we had left the default 'Start Menu' archive applied to both Windows 10 and Windows 11. By limiting the default 'Start Menu' profile to just Windows 10 via condition, and creating a new profile for Windows 11 (also limited via condition), we no longer see the issue occur (a small illustration of the OS split follows this list). I suppose there is some old junk in the Windows 10 Start Menu archive that no longer plays nice. The loss of Start Menu customization between Win10/Win11 environment transitions is acceptable for us, so this solution is a good one long-term.
  7. My environment is also experiencing this issue, and we're unsure of the cause as well. I'm going to write out the text of the message here so others might find it more easily if doing a web search:

     Some details:

     • Cloned and in-place upgraded our Windows 10 v22H2 Golden Image to Windows 11 v23H2 back on 7/26/24 -- no issues post-upgrade, and this message was not appearing.
     • Following Windows Updates installation and pool recompose on 9/10/2024, test users started reporting seeing the message shown above.
     • Only non-admin users are seeing the message; admin users do not see the message at logon.
     • The message can be dismissed and the VM will function normally afterwards.
     • The message is not appearing on the Gold image. SFC/DISM/etc. all report healthy on the Gold image.

     We also had upgraded our Gold image with a temporary vTPM attached, then removed it before creating and deploying new IC pools. Despite what @StephenWagner7 says, and what general feelings on best practice indicate, the article here still states that this approach is workable:

     Our Gold image has no accounts that log in anywhere, and nothing I can think of which should be stored in the TPM. Our Instant Clone pools are hybrid joined and set up with EntraID SSO via PRT, and the Clones appear to all be working fine except for this new issue (a quick PRT check sketch follows this list). We do utilize UEFI & SecureBoot, but we do not enable VBS due to the presence of NVIDIA GRID.

     I have not tried rolling back the August/September updates to see if this would resolve the issue, but I suspect that it would. Currently we're in a holding pattern since Windows 11 v24H2 is nearing support in Horizon, so we may wait to deploy Windows 11 and upgrade to 24H2 instead, which also may resolve the issue.

     Somehow I suspect the issue is related to changes in the Start Menu or Explorer integrations with the 'Microsoft Account' experience that was deployed around the same time, referenced here. We started seeing the message before enabling PRT issuance in our environment; however, enabling it did not resolve the issue, and we continue to see the message even on VMs where the user successfully obtains a PRT.
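
Regarding item 1: a minimal sketch of the kind of log scan described there, assuming the UAG log bundle records the field as Client_Machine_IP_Address=<value> (the exact layout may differ between UAG versions, so the regex is an assumption to adjust). It tallies the values seen so the loopback entries stand out.

    # Scan a UAG log file (e.g. bsg.log from the log bundle) and tally the
    # values recorded for the Client_Machine_IP_Address field. The field name
    # comes from the documentation quoted in item 1; the "field=value" layout
    # is an assumption -- adjust the regex to the actual format of your logs.
    import re
    import sys
    from collections import Counter

    FIELD_RE = re.compile(r'Client_Machine_IP_Address[=:]\s*"?([0-9A-Fa-f:.]+)')

    def summarize(path):
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                match = FIELD_RE.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for ip, n in summarize(sys.argv[1]).most_common():
            note = "  <-- loopback, not a real client IP" if ip == "127.0.0.1" else ""
            print(f"{ip}: {n}{note}")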
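
Regarding item 3: a rough sketch for aggregating the failed-logon events from the Connection Server syslog feed while the support case is open. The field names (EventType, UserDisplayName, ClientIpAddress, ForwardedClientIpAddress) are taken from the sample event quoted in that post; everything else is illustrative and assumes one event per line in a captured syslog file.

    # Count AUTHFAILED events per (source IP, user) from a file of Horizon
    # Connection Server syslog lines like the one quoted in item 3.
    import re
    import sys
    from collections import Counter

    SD_PAIR = re.compile(r'(\w+)="([^"]*)"')  # key="value" pairs in the structured data

    def main(path):
        failures = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                fields = dict(SD_PAIR.findall(line))
                if "AUTHFAILED" not in fields.get("EventType", ""):
                    continue
                # Prefer the forwarded address; fall back to the direct peer IP.
                src = fields.get("ForwardedClientIpAddress") or fields.get("ClientIpAddress", "?")
                failures[(src, fields.get("UserDisplayName", "?"))] += 1
        for (src, user), count in failures.most_common():
            print(f"{count:5d}  {src:<40}  {user}")

    if __name__ == "__main__":
        main(sys.argv[1])

While ForwardedClientIpAddress is stuck at 127.0.0.1 this will of course only count loopback; the point is to have the aggregation ready once the UAG/HA configuration reports the real client address.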
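
Regarding item 6: the DEM conditions themselves are configured in the Management Console, so there is no DEM code to show; purely as an illustration of what the OS split keys on, here is the build-number check that distinguishes Windows 10 from Windows 11 (Windows 11 still reports version 10.0, with builds 22000 and up). This is not DEM code, just the equivalent logic.

    # Illustration only: Windows 11 is distinguished from Windows 10 by build
    # number (22000+), since both report themselves as version 10.0.
    import platform
    import sys

    def is_windows_11():
        if platform.system() != "Windows":
            return False
        build = sys.getwindowsversion().build  # e.g. 19045 = Win10 22H2, 22631 = Win11 23H2
        return build >= 22000

    if __name__ == "__main__":
        flavor = "Windows 11" if is_windows_11() else "Windows 10 (or earlier)"
        print(f"Detected {flavor}; the matching 'Start Menu' profile would apply.")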
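
Regarding item 7: a quick check that can be run on a clone to confirm whether the logged-on user actually holds a Primary Refresh Token, assuming the standard "AzureAdPrt : YES" line appears in `dsregcmd /status` output on your build (run it in the affected user's session, not elevated as a different account).

    # Report whether the current user session has an Entra ID Primary Refresh
    # Token, by parsing the AzureAdPrt line from `dsregcmd /status`.
    import re
    import subprocess

    def has_prt():
        out = subprocess.run(
            ["dsregcmd", "/status"], capture_output=True, text=True, check=True
        ).stdout
        match = re.search(r"AzureAdPrt\s*:\s*(\S+)", out)
        return bool(match) and match.group(1).upper() == "YES"

    if __name__ == "__main__":
        print("PRT present" if has_prt() else "No PRT for this user session")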