
MattDsa

Members · Posts: 53 · Days Won: 1
  1. Hi Victor. It now looks like we will have to force-eject the pod with lmvutil. What we learned is this: when you have Cloud Pod Architecture (CPA) and you plan powered-off snapshots, you have to power off all Connection Servers in all pods and take the snapshots together. What happened to us was the result of reverting snapshots that were taken of only the pod we were working on. When you do that, it badly breaks the pod-to-pod replication. Inside the pod, the LOCAL data is fine, but the GLOBAL data does break. I have since reproduced this in the lab: I took powered-off snaps of only one site, reverted, and the global replication broke there too. Seizing the schema master also does not work. I tried that too, and when the pods have lost their head, so to speak, the seizure does not happen.
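For anyone who finds this thread later, the force eject we are planning looks roughly like the sketch below. This is a hedged example, not a tested procedure: the admin account, domain, and pod name are placeholders, and you should check the lmvutil reference for your Horizon version before running it.

```shell
# Run from the Connection Server tools\bin directory on a server in a
# surviving pod. Account, domain, and pod name below are placeholders.
# --authPassword "*" prompts interactively instead of putting the
# password on the command line.
lmvutil --authAs adminuser --authDomain EXAMPLE --authPassword "*" --ejectPod --pod "Broken Pod"

# Afterwards, confirm the ejected pod is gone from the federation:
lmvutil --authAs adminuser --authDomain EXAMPLE --authPassword "*" --listPods
```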
  2. Hi. Sorry about replying so late but yes, we confirmed no firewall issues.
  3. Hi. I have 2 pods of 4 servers each. Within each pod the replication is fine, but inter-site the global replication is showing lots of errors, and in all cases a schema mismatch is reported between the servers in one pod and the servers in the second pod. In the Admin Console, the dashboard in one site shows healthy and sees all the servers in the other site. From the second site, however, the Admin Console health shows that only 3 of the 4 servers can be seen and one is unreachable. We are not actually using the Cloud Pod Architecture and would like to remove it, but Omnissa Support have themselves said they prefer the replication fixed first. Has anyone had a schema mismatch and fixed it? Thanks.
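In case it helps anyone triaging the same symptoms: the CPA global data layer is a separate AD LDS instance on each Connection Server, so you can inspect its replication status directly with the standard Windows repadmin tool. A small sketch, assuming the default ports (global data layer on 22389, pod-local data layer on 389):

```shell
REM Run on a Connection Server. Global data layer (CPA) replication
REM status -- the VMwareVDMDSG AD LDS instance listens on port 22389:
repadmin /showrepl localhost:22389

REM For comparison, the pod-local data layer (VMwareVDMDS) on port 389,
REM which in our case was the healthy one:
repadmin /showrepl localhost:389
```

Comparing the two outputs makes it easy to show support that LOCAL replicates cleanly while GLOBAL is the broken layer.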
  4. Hi everyone. I have a 4-node cluster of Horizon 2206 to be upgraded to 2312.1. When you upgrade multi-node clusters, do you schedule total downtime, or do you use the disable function and do one server at a time, along with getting your network engineers to remove the node as a load-balancer target? Thanks very much.
  5. Hi @StephenWagner7. Thanks very much. We don't actually use FSLogix, just DEM, and I had Teams working flawlessly with 100% login success. Then I ran the OSOT and reinstalled Teams, and now I get the random 404 errors.
  6. I have a new golden image and it was working fine, including being able to log in to the new MS Teams (new appx version). As a last step I ran the OS Optimization Tool and it removed the new Teams along with other store apps. I then re-installed MS Teams, but since then MS Teams logs in only 60-70 percent of the time and the rest of the time it's a 404 error. Repairing the app is no use; I have to log off and back on to get a new VM (this is a non-persistent pool with floating assignments). Has anyone had this, and does anyone know what can be unselected in the OS Optimization Tool to stop this from happening? Thanks.
  7. @Chad Herman I have an adCA.pem, but I am not sure if it's got to be the Root on top and the Intermediate next or, as some articles say, the Intermediate at the top and the Root at the bottom. Thanks for your help.
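For what it's worth, the usual convention for a PEM bundle is chain order: the certificate closest to the leaf first and the root last, i.e. Intermediate on top and Root at the bottom. The sketch below builds a bundle that way and lists the certificates to confirm the order; it generates a throwaway root and intermediate purely to make the demo self-contained, so all the names are examples, not anything from a real environment.

```shell
# Toy demo: create a self-signed "root" and an "intermediate" it signs,
# purely to illustrate bundle ordering. All names/files are examples.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key \
    -out root-ca.pem -subj "/CN=Demo Root CA" -days 1
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
    -subj "/CN=Demo Intermediate CA"
openssl x509 -req -in int.csr -CA root-ca.pem -CAkey root.key \
    -CAcreateserial -out intermediate-ca.pem -days 1

# Chain order: intermediate first, root last.
cat intermediate-ca.pem root-ca.pem > adCA.pem

# List the certs in the bundle to confirm the order:
openssl crl2pkcs7 -nocrl -certfile adCA.pem | openssl pkcs7 -print_certs -noout
```

The final command should print the Demo Intermediate CA subject before the Demo Root CA subject if the bundle is in chain order.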
  8. Hi. So we have 2 App Volumes Managers on 2206 and we need to upgrade to 2312.2. Last evening I started the upgrade and the installer went through on the first node. However, after a reboot it would load the Manager's web interface but fail the login, saying the credentials are wrong, and after 2-3 attempts it said too many incorrect attempts and locked the account for 10 minutes. On the second node, which was not yet upgraded, I could still log in, and I could see in the configuration that the new version was indeed showing in the Managers list. I verified that the adCA.pem is the cert for the root that issues the certs (via an intermediate) to the Domain Controller that is configured in the domain settings of App Volumes. I eventually had to roll back to a snapshot and am now also opening a case with Omnissa. If anyone has encountered the same, I would be very grateful for some help please. Thanks.
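One thing worth checking while the case is open is whether the chain in adCA.pem actually validates the LDAPS certificate the configured Domain Controller presents, since a validation failure there can surface as a generic "credentials are wrong" login error. A minimal sketch, assuming your DC answers LDAPS on 636 (dc1.example.com is a placeholder, not a real host):

```shell
# Does adCA.pem validate the DC's LDAPS certificate?
# Look for "Verify return code: 0 (ok)" in the output.
# dc1.example.com is a placeholder for the DC configured
# in App Volumes' domain settings.
openssl s_client -connect dc1.example.com:636 -CAfile adCA.pem </dev/null
```

A non-zero verify return code would point at the certificate chain rather than at the upgraded Manager itself.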