
Rob Beekmans

Employee

  1. Migration, a winding road, or a fast track?

During conversations with customers, the topic of migrating from vendor X to Omnissa is often on the table. Sometimes someone in the room will suggest that they could also move away from vSphere to another platform to reduce cost, in which case vendor X would replace Omnissa as the provider of virtual desktops and apps. Cost is usually the reason to look for a different platform: the cost of running the vSphere platform for the workloads. The difference between the two approaches is important to understand in order to make a good decision. They are vastly different and have a significant impact on budget and on the IT department. In this blog I will walk through both scenarios and show why moving platforms is no small feat. I will discuss the impact on the IT department later in the blog. The impact on both operational and capital acquisition budgets is material for a next blog, but it would be substantial.

In this blog you will read the word "foundation." The current virtualization platform your backend and frontend virtual machines run on is the foundation. Often that will be VMware vSphere; together with the networking and storage, it is the foundation of your whole compute environment.

The winding road

There are two roads one can take:

- Migrate from vendor X to Omnissa Horizon.
- Migrate from Omnissa to vendor X on a new platform.

Let us take a deep dive into both options, show the difference between them, and see which road is the least bumpy, has the best TCO (total cost of ownership), and is the fastest way to get to Rome. For people who have never done a migration or designed a complete network, both may look like similar paths. The reality is that nothing could be further from the truth. One is a complete redesign project while the other is a solution design; think weeks versus months and years. I will highlight research that validates this, where the migration of a platform took years.

Figure 1: Winding road. Photo by author.

I am not a doom thinker, but 25 years on the road bring a trove of experience that makes me a realist. On paper, changes look like child's play, but the bigger the change, the more departments, workshops, and evaluations are required. If we stick to the road analogy: the bigger the change, the more corners there are in your road.

Migration from vendor X to Omnissa Horizon

Let us start with the easier of the two: migrating from any vendor to Omnissa Horizon. This is a scenario we see with customers, because there is a vendor out there with an annual license change and price increase, and their customers are not happy with that. I cannot blame these customers; we have a perfect solution and a welcome committee waiting for them.

Figure 2: Omnissa Horizon solution.

Make no mistake, migrating to Omnissa Horizon also involves designing an environment. The difference from the other path is that the foundation is kept intact. You are just designing a virtual desktop and application environment, not a whole network, storage, and platform. Below are some additional references to read:

- Horizon 8 configuration
- Horizon for Citrix practitioners
- Horizon 8 architecture
- Environment infrastructure design
- High availability with Omnissa Horizon

The foundation here is the hypervisor platform with all its aspects. It will need configuration, but as this is a migration, the foundation is already configured and only needs to be adapted; a quick pre-flight check like the sketch below can confirm the basics.
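Because the foundation stays intact, adapting it mostly means verifying that the existing core services are reachable from where the new Horizon components will live. A minimal pre-flight sketch in Python, assuming hypothetical hostnames and checking nothing beyond TCP reachability:

```python
import socket

# Hypothetical hostnames; replace with your own environment's values.
CHECKS = [
    ("dc01.example.internal", 389),     # Active Directory (LDAP)
    ("dns01.example.internal", 53),     # DNS
    ("ca01.example.internal", 443),     # Certificate Authority web enrollment
    ("vcenter.example.internal", 443),  # vSphere vCenter API
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    status = "ok" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```

A real workshop goes much further (LDAP binds, certificate chains, DHCP scopes), but a check like this catches the obvious gaps early.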
The configuration will look similar for Omnissa Horizon, with subtle changes driven by the latest ideas or requirements from IT staff, consultants, and security experts. This will be determined in workshops.

Figure 3: IT admin in server room.

When a migration is discussed, two things need to be clear:

- There are components that need design because they are being installed (new components that are introduced).
- There are components that need reconfiguration (they are already there).

Think of a directory service, DNS, DHCP, or a CA: they are installed and working, but a new solution will dictate that you add an entry, create a certificate, or add an OU. No one will fret about that, as it is just part of a setup. It does require a workshop, though, to plan for the changes and additions. When you migrate to Omnissa Horizon, the following components need reconfiguration:

- vSphere platform
- Directory services (Active Directory)
- Domain Name System (DNS)
- Dynamic Host Configuration Protocol (DHCP)
- Certificate Authority (CA)
- Client deployment and management

There are grey areas in this design consideration, such as image management: if you have a properly managed image deployment process, it just needs reconfiguration. If not, you have work to do, but you know you do. There is a blog about application life cycle management (ALM) that also applies to image management; you can find it here.

Figure 4: Design workshop.

The areas that need a design are the components that are new to the environment:

- Horizon environment
- Remote access (you will most likely replace your current solution with our UAG)

Well, that is about it: you need to design your Horizon environment. Brokering servers, pools of desktops, PODs, and sites. It is straightforward, with no funky provisioning methods, as we integrate with vCenter in the best way possible. It is ready-set-go when you deploy your first connection server. A good blog on high availability (HA) is found here.

Remote access will always require a networking and security workshop, but our Unified Access Gateway is no "black box," and upgrades and updates are very controlled. It will be a breeze compared to your current solution; we do not have that Wikipedia page of security incidents. To summarize:

Figure 5: Design and install diagram.

The links shared earlier have details about how to design a Horizon environment.

Consolidation will benefit your TCO massively

There is one more benefit and reason to migrate to Omnissa Horizon. Consolidating point solutions is a cost saver for licensing and management. Here is an example of a real customer where we consolidated various point solutions onto the Omnissa platform and some supporting solutions. The benefit is visible instantly, isn't it? This is material for another blog; I just wanted to make you aware of the extra benefits.

Figure 6: Consolidation of solutions.

Now that we have seen what a migration to Omnissa looks like, and how you can consolidate, let us look at moving platforms and solutions.

Moving platform and solution

You are thinking about moving your Omnissa Horizon environment on vSphere to vendor X on a different platform, right? This is a major project. Let us look at what is involved if you go forward with this. You are now changing the foundation and the solution at once; there is nothing to start with, it is a greenfield design. It is even worse because you have an environment running and are introducing a new one next to it: a tainted greenfield. You need to design for a phased migration while designing a greenfield environment.
I will leave the phased migration out of the story; it makes things too complicated. Let us start with the overview before we go a bit deeper.

Figure 7: Design and installation diagram.

There are components that, at a functional level, only need configuration. The servers these components run on will need to be migrated to a new platform, though, and when platforms change, the agents on the servers change as well. It also involves thinking about the high availability capabilities of the new platform, as most are not as advanced as vSphere. But all of that is child's play compared to designing a migration to a new virtualization platform. That is a major task, because it is the foundation your whole organization runs on. All servers, all virtual desktops, and every database depend on this platform, this foundation.

Research proves this is a huge project

Overall, it is a major project that will touch on every topic in IT. A well-known research firm recently assessed the migration of VMware vSphere to another platform for organizations that run two thousand virtual machines (VMs) or fewer, with at least one hundred servers to host them on. The assessment model found that it would require 18 to 48 months to complete the project. Each VM will cost between $300 and $3,000 to migrate, due in part to the involvement of external service providers.

Figure 8: IT admin in server room.

Not to scare anyone here, but it would also require up to ten full-time staff members for a month or more to do the initial design (back to our original IT Ops and budget impacts). Add to that up to six people providing evaluation analysis of potential replacement platforms for the rest of the year. Of course, all these factors depend on the size and complexity of your apps and infrastructure, but it is a major task. The conclusion of this research is interesting: customers see VMware vSphere as a virtualization product, but organizations where multiple components of VMware vSphere are deployed should consider it a networking product, a storage supplier, and a management tool before thinking about virtualization. It is far more complex to untangle everything than people realize.

Knowledge is key for design, installation, configuration, and troubleshooting

Regarding design, untangling and evaluating new products is one part of the problem; knowledge is a big second part. The characteristics of a platform dictate how well servers and databases perform. Everything you know about your current platform needs to be relearned for the new one. Training courses for hypervisor platforms span a week for basic knowledge and multiple weeks if you need to go into networking, isolation, and security. Some platforms do not have official training courses, and IT admins will need to rely on web resources such as YouTube for their training.

Figure 9: Education is key.

Most enterprise products take at least 5 days of training to get through the basics. VMware/Broadcom vSphere, NSX, and vSAN will take about 30 days per person to get the training necessary to be proficient, let alone get certified on the product. That is what it takes to learn a product; migrating to a new platform does not work without the proper knowledge. If you think you will become an expert on a product just by doing training and certification, think again. Becoming an expert takes years of installation, design, and, mostly, troubleshooting failing environments.
You will need external help to set up your new environment, to prevent mistakes and to help with the issues that will occur. This is not a new note-taking program; it is the foundation for the whole virtual business.

What about backup and monitoring?

We covered the foundation, design, evaluation, installation, and training. One thing we have not covered yet is the solutions that support the current platform. Think about backup solutions for the current platform, or monitoring, to name two. Even if these backup solutions are supported by the new platform, they need to be reconfigured. I have done my share of monitoring design, installation, and configuration, and I know this is not a walk in the park. The data lake the product works from will be erased, and it will be day zero again. Each metric, each trigger, each dashboard needs reconfiguration and testing. It is not as easy as it seems, and depending on the timeframe of the project, you may have two environments to manage side by side.

The same goes for the backup and archiving solutions in use. vSphere has been the market leader, and solutions have adapted to support it. When switching to a solution that is not popular in the market, support may be lacking. I checked the market leaders for their support of, for example, XenServer, and none of them support it. When that happens, you have to look for different solutions with similar functionality. Evaluating products, testing new environments, and configuring settings, all while designing a new environment and keeping the current one running, can be daunting even for the most seasoned pro. At least you won't be bored?

Do not waste years of your time, resources, and money

I hope that after reading this blog you go into the next meeting with a different mindset. There is no shortcut to migrate from one platform to another. If licensing concerns are an issue, it is good to know that the Omnissa Horizon license includes vSphere, as it always did. While it may sound like another platform is as good as vSphere, please do your own investigation, as they rarely are. There may be valid reasons to migrate; Omnissa supporting a new platform or moving to the cloud could be two. The point is not to take this lightly and not to move out of emotion. Move only if the financial benefits outweigh the time and cost spent on the move. In summary, the total cost of a project to migrate from one platform to another is very high, and although there are valid reasons to go down that path, chances are the costs will match or exceed the license costs you are concerned about. Think twice and make a good calculation; do not paint sunny pictures of a project. There is no quick path to success; it takes blood, sweat, and tears, to use a quote. Reach out to your Omnissa representative if you have any questions.
  2. I have good news to start the year with: the fix for your issue has made it into 2412.
  3. Over 30 years of user experience, from roaming profiles to DEX. User experience is nothing new; we as an EUC (End User Computing) industry have cared deeply about user experience for decades. We did not have the tools to measure it, but we did everything in our power to optimize the experience. Yes, that included the crazy scripts we wrote. KiXtart forever. In the early days of published desktops and applications, admins would set up a user environment based on what users required. Think of printers based on location, registry keys to preconfigure applications, or removing unwanted popups. Employee experience back then was focused primarily on virtual desktops; the endpoint was not managed properly at that time. That would take at least another 15 years. The tools available back then could deliver and adapt, but not measure or mitigate. With time this has improved to where we are now: measuring and mitigating user experience and issues. This blog will dive deeper into the decades that led to DEX (Digital Employee eXperience), and why you need a comprehensive solution to accomplish good user experience, management, and delivery. We start in the early days, the 90s.

Figure 1: Timeline of EUC and user management and experience.

Roaming profiles

We start with roaming profiles: the worst solution possible, but also the only one at that time. Roaming profiles were introduced with Microsoft Windows NT in 1993. I started my "career" in IT in 1993; looking back, I'm sure they introduced this to pester me. The concept of roaming profiles is great: you capture everything the user does on their desktop and save it for the next time they log on. It made it possible for users to use more than just one workstation; their profile would travel along, they could roam. The alternative was less appealing: a local profile, or even a mandatory profile. A mandatory profile was also roaming, but one the user could not change.

IT professionals and employees were not fans of roaming profiles. A bad network connection, a user profile that grew beyond earthly expectations, or a computer that just would not function properly were all reasons for a profile not to save your last changes. Roaming profiles could take ages to save, or they got corrupted. Talk about bad user experience.

Figure 2: Roaming profile error. Copyright Patrick Hoban.

The way roaming profiles captured everything is like FSLogix: they just captured everything. The difference is that roaming profiles copied it all over a congested network back to a hidden folder in the user's home folder. FSLogix does not copy; FSLogix is a disk solution, connected at logon and disconnected at logoff, so there is no copying involved. Copying created issues: a corrupt profile was quite common, and the only solution was a fresh new one. The biggest issue was that a large profile, and all profiles grow large, would cause terribly slow logons and thus unbelievably bad user experience. Imagine that everything you do, for days, weeks, and months, sits on a disk but is never cleaned up. That is a roaming profile. The best analogy is to go into your teenager's room and look around; that was once a clean room as well. You now understand all the fun we had in the 90s managing and securing user profiles. The fact that we often could not save someone's profile once it was corrupted did not help make IT admins popular. Let's move on to User Environment Management, the tool that fixed these issues.
User Environment Management to the rescue

The market reacted to the mayhem created by roaming profiles: User Environment Management, abbreviated UEM (until Unified Endpoint Management raised its head), entered the market. We enter 1999, when two solutions that would become market leaders started out: RES Software and AppSense. User Environment Management is based on the principle of not downloading and saving everything the user does on the desktop, but just what they need. Think of it as a just-in-time system that also works context-aware.

Figure 3: Context awareness of user environment management solutions.

Think of it like this: when you log on, you do not need a printer; you cannot print at logon. The system would not connect printers until the logon had finished. Connecting a printer at logon takes time; the desktop needs to communicate with the printer and set up a connection. The same goes for drive mappings: no one needs those at logon, you need them right after logon. That is what User Environment Management accomplished: a smart, just-in-time environment for users.

But there was more to it. IT administrators, and certainly users (although they didn't know it was called roaming profiles), wanted to get rid of roaming profiles. With User Environment Management this was possible: local profiles were used, profiles that are created when you log on. User Environment Management would then add whatever was needed to that profile and save everything on a schedule or at logoff. No more huge profiles to copy back.

The end of the 90s also saw a different phenomenon: the way people worked started to change. Users suddenly started moving around, roaming from one location to another; even working from home was a thing in the 00s. With the change of location, their security posture changed, and the devices close to them changed. Think of printers: you want a connection to the closest printer. You also want to make sure that someone is not accessing sensitive information from a fast-food restaurant. Employees would move from one secure location to a secure place in the office. Coping with all the changes in how users work is why User Environment Management was such a hit. It took the weaknesses of roaming profiles away.

Figure 4: Omnissa DEM.

Omnissa Dynamic Environment Manager (DEM) is also a User Environment Management solution. It started in 2008 in The Netherlands under the name Immidio. The downside of User Environment Management was that it took time to set up. Not surprisingly: you need to know what to save, what to copy back, where applications save settings, how to block popups, and how to make sure the first-time setup is handled. Everything to create a perfect user experience. This is one of the reasons organizations are looking at FSLogix; there is no setup required, it is a dumping ground for profiles.

Building for the best user experience

The goal of a User Environment Management solution is to deliver the best user experience possible. That is why you tune the system not to connect a printer or a drive at logon, and why on-demand delivery of applications is amazing. Move anything that slows down the logon process to a time when it makes sense. Tuning Windows is a task in itself. Windows is designed to run everything, not to be lean and mean. In a virtual desktop environment, you want a system that is lean and mean. Tuning Windows means stopping unwanted services, removing Xbox and the like from the system, and making sure that applications do not update during the day.
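To make the just-in-time, context-aware idea concrete, here is an illustrative sketch; it is not any vendor's actual rule engine, and the locations, triggers, and actions are all made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str   # e.g. "HQ-floor2", "branch-LA", "home"
    trigger: str    # "logon" or "post-logon"

def actions_for(ctx: Context) -> list[str]:
    """Hypothetical rules: defer slow work (printers, drives) past logon."""
    actions = []
    if ctx.trigger == "logon":
        # Keep logon lean: only what is needed to show a desktop.
        actions.append("load user settings for shell")
        return actions
    # Post-logon: connect context-aware resources just in time.
    actions.append("map home drive")
    if ctx.location.startswith("HQ"):
        actions.append(f"connect nearest printer on {ctx.location}")
    elif ctx.location == "home":
        actions.append("skip printer, enforce stricter data policy")
    return actions

for trigger in ("logon", "post-logon"):
    print(trigger, "->", actions_for(Context("HQ-floor2", trigger)))
```

The point of the sketch is the split: the logon path stays minimal, and everything location-dependent happens just in time afterwards.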
Letting applications update during the day on a non-persistent instant clone would be especially pointless: we are destroying that desktop anyway, so why keep all these useless services running? Building for the best user experience is the goal, but how do you measure it? That was always the challenge.

Figure 5: Instant clone, smart provisioning.

Before DEX came around, we measured by looking at logon times or application launch times. Deeper than that was not possible, and asking a user about their experience is not a very reliable metric. That is where DEX (Digital Employee Experience) comes in. You now understand User Environment Management and why it has its place in EUC. Where roaming profiles just copied profiles around, User Environment Management added logic, automation, and just-in-time delivery, everything needed for a world that was moving faster and faster.

UEM, but now as Unified Endpoint Management

In the mid-2010s, endpoint management matured and started to use the acronym UEM. This confused the market for a while; it was sorted out (not), and now we have two solutions called UEM.

Figure 6: Workspace ONE UEM.

We need to discuss UEM (Unified Endpoint Management) here as well, as it enabled proper management of endpoints. Endpoints really became part of the play when we were able to manage them effectively. Why is that important, you ask? User experience starts and ends at the endpoint; a badly performing endpoint will never deliver a good user experience. Omnissa Workspace ONE UEM is one of the leaders in this market. In the past couple of years, we have seen a movement towards more managed desktops. Where the first 25 years of modern EUC were focused on virtual apps and desktops, endpoint management brought the possibility to properly offer managed desktops. Without elaborating too much on UEM, I feel we entered a new era of EUC when endpoint management matured. It is one thing to manage endpoints like inventory; it is another to see them as proper endpoints that can live side by side with a virtual solution or even replace it. We are there now; they are on an equal level.

Because of proper management of endpoints, we can make sure their performance is optimal. An optimally performing endpoint will perform better when the user connects to a virtual environment. You cannot have one without the other, at least not if you like to do it right. We are in fact going a step further today: UEM solutions are now also managing virtual desktops. We've come full circle, it seems. Workspace ONE UEM supports management of persistent virtual desktops.

If we want to monitor and measure the user experience, leaving out the endpoint would be a crucial mistake. The experience starts at the endpoint; there is no virtual app or desktop without an endpoint. Managing but also monitoring that endpoint is therefore crucial. But what if we only monitor the virtual environment? That will be enough, right? In my working life I have been involved in monitoring, and the one thing I still take from it is that there is no such thing as an isolated component. If you want to know the user experience, the app performance, and what not, you need to honor the fact that it is all connected. Read my blog on troubleshooting to get a deeper understanding of the connectivity between components. Only when you can deliver a complete digital workspace can you honestly say that you deliver, monitor, measure, and mitigate.
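To illustrate why an isolated dashboard can mislead, here is a toy example that folds endpoint and session metrics into one score; the metric names and thresholds are invented for the example and not any product's actual scoring model:

```python
# Illustrative only: combine endpoint and virtual-session metrics so a
# problem on either side shows up in one experience score.
endpoint = {"cpu_pct": 92, "wifi_signal_pct": 41}        # from an endpoint agent
session  = {"latency_ms": 38, "protocol_loss_pct": 0.2}  # from a VDI agent

def experience_score(endpoint: dict, session: dict) -> int:
    """Start at 100 and subtract penalties; thresholds are arbitrary."""
    score = 100
    if endpoint["cpu_pct"] > 85:
        score -= 25  # busy endpoint
    if endpoint["wifi_signal_pct"] < 50:
        score -= 20  # weak Wi-Fi at the endpoint
    if session["latency_ms"] > 60:
        score -= 25  # slow link to the virtual desktop
    if session["protocol_loss_pct"] > 1:
        score -= 20  # lossy display protocol
    return max(score, 0)

# The session metrics look healthy here; only the combined view reveals
# that the endpoint itself is dragging the experience down.
print(experience_score(endpoint, session))  # -> 55
```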
UEM has a solid foundation in EUC, and I'm sure you see why you can't have one without the other. There is no virtual environment without an endpoint, but endpoints are useless without an application environment or a virtual environment. Everything is connected, and that is how we need to treat it. That brings us to the last section of this exceptionally long blog: DEX.

DEX, because who does not want their users to be happy

Digital Employee Experience management is a logical step forward in this market. We want to know whether what we are building, delivering, and securing is actually helping users. To make decisions you need data; a wise person once said, "Without data, you're just someone with an opinion." I could not agree more. You need data from various agents and components, and you let it all swim in a data lake. What DEX brings to EUC is that it fishes in that data lake from multiple angles. Without DEX we had a VDI monitoring dashboard, a UEM dashboard, and, if you were lucky, you could integrate them a bit. You would never get the insight that we have today. Based on all the data in the data lake, the system can determine what needs tuning, what a user's experience is, how an application is performing, how healthy a system is, and what not. Gathering data from endpoints and virtual desktops enables a holistic view of user experience. It shows an admin a complete view of the environment and how the user is experiencing it. No longer just focusing on VDI or just on the endpoint: a complete view for a complete understanding of user experience. There are other blogs that go deeper into DEX; I would refer you to them for a more in-depth look at user experience monitoring.

Figure 7: Omnissa DEX.

Omnissa DEX also uses metrics to determine the user experience. Our Horizon agent measures metrics from virtual desktops. It does so by default, and all that data goes into a data lake. The Omnissa Horizon client and Intelligent Hub also capture data, all within the boundaries of GDPR and other privacy laws. This data also goes into the data lake. If Omnissa Workspace ONE UEM is managing and protecting the endpoint, that data is collected into the Workspace ONE data lake. We have a whole lake filled with metrics, and with that we can determine how a user session is going, how the endpoint is behaving, and how well the virtual desktop is doing.

This ends the journey for now. I hope you found it enjoyable to read and perhaps picked up some knowledge. Sections could have been written far more extensively, but the blog was long enough as it was. Let's finish with a short view of what Omnissa is offering in this space.

Omnissa, because you want to deliver, manage, protect, measure, and mitigate

We discussed delivery of user profiles with User Environment Management; Omnissa has your back with Dynamic Environment Manager (DEM). We discussed managing and securing endpoints with Unified Endpoint Management; Omnissa has your back there as well with Workspace ONE UEM. We discussed how DEX came into EUC, and again Omnissa has your back with Omnissa DEX and Freestyle Orchestrator. We also discussed delivering applications on demand, and even there Omnissa has your back with App Volumes Apps on Demand.

Figure 8: Omnissa platform.
Omnissa is the most complete solution for organizations looking to deliver desktops and apps to virtual and physical devices, to secure them, to manage them, and to measure and mitigate user experience. Check out the other blog posts on these topics or visit Tech Zone for more information.
  4. Compliance certificates: marketing tool or necessity? Recently, I conducted a LinkedIn poll asking, "Do you look at certifications of solutions before you select one?" While the response volume wasn't large, it revealed an interesting perspective: many view compliance and security certifications as primarily marketing tools. I am not surprised by that answer; of course vendors will go to great lengths for marketing purposes. If compliance and security certifications were simply up for grabs, they too would be used for marketing, but they are not. They really do have a purpose, they are not a walk in the park, and they are something to look for. Let me explain why. This blog is not about the certificates we as Omnissa have obtained or are about to obtain; it is about why certificates are important to look for when selecting a solution.

We want our data to be secure and safe

If we asked customers what they want from any solution, this would be high on the list. Customers want to know they can trust a solution, a vendor. They want to be sure a vendor does not have a backdoor in their system, unused ports open, or a well-known hacked version of a component in use. Customers these days are even more worried about their personal data: what is being saved, how long it is saved, and who has access to it. This seems like an easy call, but it is exactly why validation is important. Only with validation can a vendor show customers that they are safe to use. Certifications are the way to show that validation. Let's dive in a bit deeper.

You don't trust your money to just anyone, do you?

The analogy I like to use here is banking: you don't trust your money to just anyone, do you? Banks have strict regulations on how to manage your money; you trust them to manage it correctly and not invest it in untrustworthy stocks or businesses. They are audited yearly by external independent auditors, and there is testing during the year as well, all based on specific rules and procedures for how a bank should operate. You put your money in a bank that has a good name and has passed the test. You do not want to lose your savings because of the fancy name of a bank; you want to be sure it is a well-run bank.

Figure 1: "Secure" access.

As a vendor, to work in specific markets, we need to show that our procedures, products, and processes are up to the standard of that market. If we do not have a specific certification, we are not allowed to be used in that market, because our solution would not be guaranteed to prevent leaking sensitive data, for example. They say "a chain is only as strong as its weakest link," and it is very applicable here. If everything is audited but the software used is not certified for the field it is used in, what is the point of auditing? Certification matters, it truly does. Let's look at some well-known certifications and what they stand for. There is a wide range of certifications; I will only touch on a few to give you an idea. I encourage you to look at the certifications for your specific market, as it may prevent the next hack or data leak.

The ISO family

There is the ISO family; we have all heard of them, but what are they used for? The ISO 27000 family comprises information security standards published by the International Organization for Standardization (ISO). ISO 27001 is a certification focused on information security management systems. That alone should get you interested.
It outlines best practices for protecting sensitive information and managing the security risks associated with it. Who would need ISO 27001, you may wonder? Any organization that manages sensitive data would be interested in it, and if you want to comply with data protection regulations, ISO 27001 is an important one. The ISO family expanded, and 27017 was added; this one addresses cloud security. It is important for cloud providers and for customers using their services. Data stored in the cloud is perhaps even more in need of good guardrails. ISO 27017 addresses things like data encryption, incident management, and access management. The next family member, and it is a large family, is 27018. This is an interesting one because it narrows things down to privacy-related information. Think of PII (Personally Identifiable Information). 27018 is interesting if you think of GDPR and similar regulations. For any organization managing personal data in the cloud, ISO 27018 is important to obtain. I don't want my data in the wrong hands or sold on the black market, do you?

Figure 2: ISO certified Omnissa products.

NIAP certification

The ISO family is bigger than just the 27000 group; an important member is ISO 15408, the Common Criteria standard used for evaluating IT products. To get NIAP certification, a solution is tested against NIAP Protection Profiles (PP). Omnissa products are verified against the following Protection Profiles. Next year the renewal will take place; the versions shown were used in the previous test.

- PKG_TLS_V1.1: Horizon Connection Server 8 2209, Horizon Agent 8 2209, Horizon Client 8 2209
- PP_App_V1.4: Horizon Connection Server 8 2209, Horizon Agent 8 2209, Horizon Client 8 2209 (more info is found here)
- CPP_ND_V2.2E: Unified Access Gateway 2209
- PP_MDM_V4.0: Workspace ONE UEM 2209
- MOD_MDM_AGE_V1.0: Workspace ONE UEM 2209

Why is NIAP important? NIAP is a cybersecurity product certification required by federal procurement rules for use in US NSS (National Security Systems). Simply said, it verifies that commercial products can be trusted to handle sensitive data. Omnissa Horizon is the only VDI solution ever to receive this certification. Sure, you may not be in the federal business, but that is not the point. Federal business sets a high requirements bar for software, and everybody else benefits from it. The bar is raised because we want to comply with the highest standards, and you all benefit.

The SOC family

The SOC family is a family of three; the most interesting member is SOC 2 Type II. SOC 2, but also SOC 1 (but who cares about that one), are attestation reports: the company attests that specific security controls are in place. There are five trust services criteria (TSC) included in a SOC 2 report:

- Security (protecting data from unauthorized access)
- Availability of the system
- Confidentiality (limiting access to information, storing it only when needed)
- Processing integrity (verifying the operational status of systems)
- Privacy (guarding sensitive data from unauthorized access)

Organizations concerned about the privacy and security of their information and data find reassurance in knowing a company has SOC 2 attestation. The guardrails and frameworks are in place to safeguard sensitive information. SOC 2 Type II is more comprehensive, as it evaluates the existence and effectiveness of controls over time. Cool that you can show me now how you deal with this, but let's come back in a year, and another year, and see if you still act as you promised.
SOC 3 is a summary of the SOC 2 attestation. The reason it exists is that a SOC 2 report details confidential information about the organization; SOC 3 is there to provide a customer-friendly report.

Figure 3: SOC certified Omnissa products.

The process of validation and auditing

To give a little insight into the process of validating and auditing, let's look at how one obtains NIAP certification. A vendor, in this case Omnissa, chooses an approved Common Criteria Testing Lab (CCTL). That lab will conduct the product evaluation against the applicable NIAP-approved Protection Profiles. The vendor writes and proposes a security target for the Protection Profiles. That security target is submitted to the CCTL, which proposes it to NIAP. Once accepted, the CCTL evaluates the product with oversight, validation, and approval from NIAP. Once the tests are completed successfully, the product is posted to the NIAP Product Compliant List and the Common Criteria portal. The flow diagram below shows the process.

Figure 4: NIAP evaluation process.

Different certifications have different processes; the goal of this short NIAP description is to show that it is not a stamp you collect at the counter. Products are thoroughly tested before they get certified. If you look at ISO 27001, for instance, you need to set up an ISMS (Information Security Management System) before you can register to get certified, and the ISMS must meet all the requirements of the standard. Conformity with ISO 27001 (and the others) means that an organization or business has put in place a system to manage risks related to the security of data owned or handled by the organization. There are more certifications whose processes we could look at: FedRAMP, C5, G-Cloud, Cyber Essentials Plus, and many more. FedRAMP, for one, requires a penetration test of your solution. Cyber Essentials Plus requires internal scans of system configuration and patches, tests on your gateways and public-facing servers, and external scans of any public-facing system. Tests and validations are thorough, and continuous monitoring of systems is expected to keep the certification. It certainly isn't a walk in the park.

Where can I find more about Omnissa compliance, security, and privacy topics?

Omnissa has a dedicated web page for all our privacy, security, compliance, and resiliency topics, found under the header Omnissa Trust Center. If you are interested in privacy-related matters, Omnissa has a PDF to download; on the Trust Center page under Privacy, more links to various documents are found. Please read about our security programs and policies; they are found here. You will find information about our security development lifecycle and also our third-party vendor management. If you find a vulnerability in our software, [email protected] is available for you to contact. We will handle the conversation with confidentiality and will make sure that, where applicable, the researcher receives acknowledgement for their efforts. Finally, compliance: the page is split into two, the cloud solution compliance page and the on-premises certifications page. There you will find all the information about our current certifications. One "mind you": we just became a new company, so several certifications are still under our previous name and will be renewed in due time. I trust that this blog helped you understand why certifications are important, and that you see that Omnissa takes security, privacy, resilience, and compliance very seriously.
Take a good look before you select a "we offer it at half price" solution; your data is worth a secure solution. Watch out for blogs from my co-worker Andrew Osborn about the certifications we have obtained; they could be a life-saver when it comes to data safety.

** Any reference to testing in this blog is based on data found on the websites covering the topic; if errors are found, please report them and we will update the blog accordingly. **
  5. High availability, because downtime is not on my agenda today. Power goes out, Internet connectivity goes down; it is just a way of life. We have more trustworthy grids these days, but outages do happen occasionally, and in some areas more than others. With outages come loss of productivity and negative user experience. Omnissa Horizon 8 is designed to make sure outages do not impact your experience and productivity; learn how in this blog.

Keep your friends close; keep your data closer

Employees are not just working from the headquarters (HQ) office but also from branch offices, and employees in branch offices need access to data just as employees at HQ do. In the EUC industry there is a well-known design principle that says employees should be close to the data they work with. The reason is that everything that travels distance will start to see latency, and latency equals bad user experience. A quick lesson on latency: "Distance equals latency, and with latency comes lower throughput. Keep your desktops and data close. TCP is unforgiving."

To access data stored at HQ, branch employees could connect to a desktop in the main datacenter and work with the data there. But let's be honest, that would just be silly. The design principles that apply to data also apply to virtual desktops: virtual desktops and apps should be in proximity to the users, who should be close to the data. Anything to keep the latency monster at bay. Because of this, we designed Omnissa Horizon 8 to work in a multi-datacenter setup with Cloud Pod Architecture (CPA). We will call it CPA from here on.

Latency and user experience aren't the only reasons for a multi-datacenter design. Think of downtime and the impact that has on productivity. What if you could take the risk of downtime away by building independent datacenters? Independent but connected, or connected but independent, if we look at it from a downtime perspective. Omnissa Horizon 8 CPA is an independent-but-connected / connected-but-independent high availability solution built for enterprises. Let's dig a bit deeper.

What is the magic behind this?

Omnissa Horizon 8 is deployed in what is called a POD: a grouping of components that delivers brokering and gateway functionality while provisioning desktops and apps. Multiple PODs can be grouped into sites, purely for logical management reasons.

Figure 1: Omnissa Horizon POD design.

A POD is, at a minimum, a collection of the following components:

- Brokering servers, called Connection Servers
- Access gateways, called Unified Access Gateways
- A platform, with management, to run your workloads on

Together these components enable you to create and deploy desktops and applications and allow users to connect to them from any device, any location, at any time. The diagram below shows the basics: employees connect either through a gateway or directly to the broker, depending on their whereabouts. Requests are brokered, and desktops or apps are assigned to employees.

Figure 2: Omnissa Horizon logical components.

What about high availability?

The question, of course, is: how do I make sure that resources are available, even when there are connectivity issues between datacenters? We keep it simple; Horizon 8 enables you to deploy up to seven (7) Connection Servers. Whether you need seven is up to you; fewer will most certainly be able to manage the load as well, but depending on your high availability requirements, you might want to deploy more.
There is no conclusive answer; N+1 is the minimum, but you need to ask yourself how impactful downtime would be. If it is impactful, deploy more and spread them over the hosts. Connection Servers share a database to keep track of everything going on in the Horizon 8 environment, and you can add or remove a Connection Server very quickly if required. One or more Connection Servers going down (depending on the number you have deployed) will not affect the working of the environment. While we never design for maximums (there is nowhere to go once you reach them), they are good to know: one Connection Server can manage 4000 instant clone desktops at a logon rate of 1 user per second, and it can manage 4000 active sessions on its own. Adding more Connection Servers quickly raises the number of sessions the environment can service. Check out this page for all config maximums.
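As a back-of-the-envelope illustration of N+1 sizing against those maximums (a sketch, not official sizing guidance):

```python
import math

MAX_SESSIONS_PER_CS = 4000  # config maximum per Connection Server
MAX_CS_PER_POD = 7          # Horizon 8 limit per POD

def connection_servers_needed(sessions: int, spares: int = 1) -> int:
    """N+spares sizing: enough servers for the load, plus failover headroom."""
    needed = math.ceil(sessions / MAX_SESSIONS_PER_CS) + spares
    if needed > MAX_CS_PER_POD:
        raise ValueError("Load exceeds one POD; split across PODs/sites (CPA).")
    return needed

# 9000 concurrent sessions -> 3 servers for the load, +1 spare = 4
print(connection_servers_needed(9000))      # -> 4
# Stricter HA requirement: +2 spares
print(connection_servers_needed(9000, 2))   # -> 5
```

Remember the advice above: do not scale to the maximums; leave headroom.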
The same goes for the Unified Access Gateways: if your users are primarily located internally, then high availability beyond N+1 isn't really necessary. If your users are hybrid, things change. There are good Tech Zone documents about this topic; please familiarize yourself with them before deploying. Horizon 8 is a very robust solution, but it still needs a proper design. Tech Zone resources are shared at the end of the blog post.

Cloud Pod Architecture (CPA), where the magic happens

In the previous section I explained the components and how they work in one POD (for ease of explanation, say, in one datacenter). But we live in a bigger world, and organizations are larger and ever expanding. Let's say we have a NY office with a branch office in LA. Desktops, apps, and data are located in both datacenters. How do we connect them to make sure users can work from either location, or use desktops locally but also remotely in the other datacenter? This is when Cloud Pod Architecture walks on stage.

Figure 3: Omnissa Horizon Cloud Pod Architecture.

The diagram shows the most basic form of CPA. Both sites have independent Connection Servers running. CPA makes it possible to link multiple PODs together into a single large desktop and application brokering and management environment. It adds a Global Data Layer that enables inter-POD communication between the Connection Servers of the different PODs. By adding CPA, you can entitle employees to desktops in multiple PODs; your world suddenly gets bigger, and management is done centrally. No more need for locally assigned desktops. For more information about the abbreviations used in the diagram, check out the sources listed at the end of the blog post.

Remote agents

There is one more feature to talk about, and it goes beyond what we discussed before: the Omnissa Horizon 8 remote agent, for when you want to connect outside of your POD to a remote location. The Omnissa Horizon 8 remote agent supports connecting to resources running in locations without a Connection Server. This can be used when you need extra capacity fast.

Figure 4: Omnissa Horizon remote agent design.

Enough talk already, how does it work when my connection is down?

Let's go into a real-life scenario and play a game of "what happens when...?" What happens when my primary datacenter and my branch office datacenter get disconnected? As you read earlier in this blog, the beauty of CPA is that it is connected but independent. That means that in case of a loss of connection, nothing happens to either environment. Nothing, absolutely nothing. Desktops and applications running in LA won't be reachable from NY, but that is because the connection is down. Desktops and applications in NY will be available for employees in NY, and similarly the LA desktops and applications remain available for local employees. The only thing that is lost is the connection between the two PODs: there is no global assignment anymore. The possibility to connect to desktops on the other side is gone, but for local usage, nothing changes.

The difference could not be bigger. Consider our competition, where the management console and the ability to create assignments are no longer accessible. When the connection between two Horizon PODs goes down, everything in each POD remains functional; it is just the connection to the other POD, and thus global connections, that are gone. The management console is still available, all virtual machines are still manageable and can be refreshed and deployed, and assignments can still be made. There is nothing in that local POD that stops working with Omnissa Horizon. A vastly different outcome compared to our competition, where management consoles and new assignments are unavailable and machines end up in an unknown power state.

Is latency going to be an issue?

The question was asked whether latency would be an issue with CPA. Latency is always an issue when your solution depends on connectivity with the main site, and as mentioned before, distance has a negative impact on latency. The good news is that Omnissa Horizon 8 CPA does not have that limitation: no direct communication is required to keep the localized environments up and running. As said, "connected but independent."

What is the catch?

There is no catch. The limitations in a Cloud Pod Architecture are limits on connections, sessions, PODs, and total Connection Servers. But guess what? That isn't your issue, because these are the limits and you won't reach them. As mentioned before, please don't scale for maximums; leave headroom. And imagine the scalability when you can connect NY with LA, London, Berlin, Amsterdam, Atlanta, and so on: 50 PODs over 15 sites, all connected but independent.

What if all my brokers go down?

I can see the concern; that would be bad, right? We need to go back in time and talk about the birth of hypervisors and how they introduced a feature called high availability. When deploying multiple Connection Server instances, you make sure they do not reside on one host; that is rule 1. If a host experiences a failure, the hypervisor's high availability feature will move the VMs on that host to another host. You can set priorities so certain VMs are moved first; that is rule 2. Remember, you can deploy seven, and you have high availability with your hypervisor. Rule 3 is that you deploy more than one (N+1), a number that honors your HA requirements, with a maximum of seven. To summarize:

- Rule 1: Spread your Connection Servers over your hosts; not all eggs in one basket. Let the hypervisor's high availability feature do its job.
- Rule 2: Set priority to move the Connection Servers early on.
- Rule 3: Deploy enough Connection Servers to honor your HA requirements.

If all hosts are down? Then you have bigger issues; chances are your datacenter is down as well. Even if this happens, we have a solution: Horizon Cloud could be the way to go. With the On-Ramp feature you can have desktops assigned in Horizon Cloud, so with your on-premises datacenters down, cloud desktops with universal brokering are available.
We run heavy applications in our London datacenter, is that an issue?

An Omnissa Horizon 8 POD is an independently running environment; there is no dependency on any other POD. We know that other solutions require different setups depending on the applications that are deployed, but we don't want to make life harder than it is: you can run whatever you like in a POD. Just make sure your data is close enough for a good user experience.

Extra infrastructure is worse for my TCO, right?

Any solution out there that enables employees to keep working locally when the connection is gone will have infrastructure deployed on site. There is no magic in the world that connects an endpoint to a virtual desktop without components to manage that. The components required to set this up for Omnissa Horizon 8 are just two Connection Servers, that's all. You can extend that with a database for event logging and a Unified Access Gateway for external connections, but you don't have to. It will work with just two Connection Servers, even with one if you like to gamble on availability. Competing solutions require multiple gateway servers for internal access, connectors to manage the brokering when the broker is no longer reachable, and external proxy servers for external access. That, to me, sounds like a bigger hit to your TCO than with Horizon 8.

Enterprise ready, designed to operate under any condition

In this blog we discussed Omnissa Horizon 8 and its high availability design: how it operates, how it deals with a disaster, and how to scale your environment. Omnissa Horizon 8 is built to deal with outages and disconnects; deploy it and evaluate it.

Resources on Tech Zone

Useful resources are available on Tech Zone; learn how the architecture works before you deploy it.

- Horizon 8 configuration
- Horizon for Citrix practitioners
- Horizon 8 architecture
- Environment infrastructure design
  6. Troubleshooting, an art to master. One thing that has always baffled me is how people troubleshoot an issue. I may have unconventional ideas on this topic; it is truly an art to master. Why am I writing a blog on troubleshooting? Most posts in this forum are about issues that need a fix. In this blog I will explain my view on troubleshooting and share links to Omnissa articles to use, hoping that you find the same joy in troubleshooting as I do. Crazy as it sounds, it was one of the things I enjoyed as a consultant, next to design workshops.

There are, in general, two types of people here. There are people who look for a (quick) solution to get going, I call them the fast-forward people, and there are people who want to understand the issue, the need-to-know-why people. Fast-forward and need-to-know (my analogy to explain it to you) are also known as top-down and bottom-up. It reflects that GSS/Support/Engineering folks focus on logs and work upwards, while people on the deployment/architecture side work top-down.

Figure 1: Troubleshooting approaches.

I am in the second group, the need-to-know group. I need to know how things work, why things break, what is causing it, and how we can prevent it from happening again. I understand that issues that halt production need a direct fix, but that should not stop IT from diving into the why. I did my fair share of issue resolving and troubleshooting in my 25 years of consultancy. I missed birthday parties, beach vacations, and weekends. Let us get into the art to master: troubleshooting.

Fast-forward solving

The fast-forward method in a nutshell: when something stops working, we restart it, kick it, restart services, restart computers, unplug cables, whatever it takes, hoping it will magically start working again. Or we create scripts to work around the issue at hand: if a component expects value x, I will make sure value x is present when the service starts. It does solve the issue of something not working, I must give them that. Under pressure of management demanding a fix, any fix is a fix. It will not win the prize for the most beautiful solution, but we are moving again.

The root cause is not going away

Something is simmering in the background, and we applied a band-aid so it keeps running. That simmering fire in the background may grow, may flare up again, and the band-aid may not be big enough this time. Nothing happens out of nothing, nothing. Something causes the service to stop, the computer to fail, the connection to drop, and all you did was create a workaround. Any time soon it could happen again, from a different angle, with more impact. If you do not understand why it happened, the issue is not resolved. It is like replacing a power cable that caught fire due to overheating without removing the devices that caused the overheating. Not fixing the root cause is like blocking a river and hoping the water does not just flow around it. It will come back to haunt you, and the next issue could be more damaging to production. The image of the Krka in Croatia shows how water does not care about obstacles. I often think of fixes as a beaver dam: it can block the river, but the water behind it is not going away; it is blocked and will cause an issue elsewhere.

Figure 2: Water will find a way to flow. Photo by author.

Root cause analysis is crucial in monitoring and troubleshooting; sadly, it takes time to set up and understand.
The whiteboard is your friend in getting this art degree

Root cause analysis is something to prepare for. Do not put the sprinklers up when the building is on fire; do that before it catches fire. The same goes for network/VDI monitoring: map out every connection and dependency of your network while there is no blocking issue. Below is a lightboard example of a diagram mapping.

Figure 3: Lightboard diagram. Photo by author.

Components, services, computers, network devices, and endpoints all talk to one another. They receive information, and they acknowledge or request information. It gets more complicated: every component depends on other components, often without a direct connection. Think of how DNS and AD play a vital role in making sure components work, without those components directly requesting or receiving anything from them. Without DNS your network is an island; without AD you are all standing at the gates, unable to log in. Map every component and its dependencies: who talks to whom, who listens to whom, and which component depends on which other component to work? That will give you a spider web of components and connections. If you can add the data received or sent as well, you will be a root cause expert and soon receive your art degree.

I included a diagram courtesy of eG Innovations; it shows the complexity of a network. Sure, this one is scary, but it shows how complex it can become. It also shows why finding the root cause is difficult. Without mapping the whole diagram, how do you know that one server is the reason the frontend is not working properly?

Figure 5: Courtesy of eG Innovations, network topology view.

If you encounter an issue, you can now check sections and connections of the spider web. (The examples below focus on a Horizon environment, not the diagram above.)

- Can the client reach the gateway?
- Can the gateway reach the connection server?
- Is my desktop getting an IP address?
- Is the virtual desktop getting a Horizon license?
- Is the virtual desktop getting an RDS CAL?

We call them baby steps: break the whole connection down into smaller sections and check each individual section, as in the sketch below. This often involves other experts (hypervisor, DNS, SQL), but as a team you can help each other. On its own, every component looks like a well-behaved child; put them all in a room and you will see the change in behavior. Once you break down the complex diagram, you notice it is not so complex anymore. It becomes smaller, understandable pieces of communication, and one by one you build that out. You will find something out of order: data not received, data not as expected, or a route that just goes into the woods. That will guide you to another flow or a component where someone with specific expertise can help you. Together you can solve a root cause; alone you are fixing an issue.
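Here is a minimal sketch of those baby steps in code. The hostnames are hypothetical, and the ports are common defaults (443 for UAG and Connection Server, 22443 for Blast); verify the values for your own environment:

```python
import socket

# Hypothetical hops between a client and a virtual desktop; replace the
# hostnames and ports with your own environment's values.
HOPS = [
    ("UAG reachable",               "uag.example.com", 443),
    ("Connection Server reachable", "cs01.example.internal", 443),
    ("Virtual desktop reachable",   "vdi-042.example.internal", 22443),  # Blast
]

def check(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        addr = socket.gethostbyname(host)          # baby step 1: DNS lookup
    except OSError:
        return "DNS lookup failed"
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return "ok"                            # baby step 2: TCP connect
    except OSError:
        return f"no TCP connection to {addr}:{port}"

for label, host, port in HOPS:
    print(f"{label}: {check(host, port)}")
```

The first hop that fails tells you which section of the spider web to zoom into, and which expert to pull into the room.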
Resources to help you out

Here are resources that may be helpful in troubleshooting once you have identified the culprit.

· https://techzone.omnissa.com/resource/understand-and-troubleshoot-horizon-connections
· https://kb.omnissa.com/s/article/89455 - How to read Blast Extreme logs and determine packet loss
· https://kb.omnissa.com/s/article/90139 - Troubleshooting Display Issues with the Horizon Blast Protocol - Black or grey screens on connect
· https://kb.omnissa.com/s/article/90243 - Guidelines when Troubleshooting Horizon Blast Protocol Performance Concerns
· https://kb.omnissa.com/s/article/83088 - Unified Access Gateway (UAG): Troubleshooting Intermittent Blast Connection Issues
· https://kb.omnissa.com/s/article/87457 - Horizon Blast Disconnect Codes
· https://kb.omnissa.com/s/article/91181 - Horizon Client Blast Error Troubleshooting: VDPCONNECT_PEER_ERROR
· https://www.stephenwagner.com/2020/04/04/vmware-horizon-blank-black-screen/

Enjoy the mapping and understanding of your environment, and do not hesitate to ask for help with any issue in this forum. In future blogs we will deep-dive into specific areas of Omnissa products and how to troubleshoot them.
9. Hence why you need multiple connection servers in each site... (not that you didn't know that 😉)
10. Justin Johnson recently published a very good blog about RESTful APIs. I think it is very applicable to many of you... enjoy the read. https://www.evengooder.com/2024/09/horizon-server-api.html
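If you want a feel for what the blog covers before reading it, here is a minimal Python sketch of the typical flow: authenticate against a Connection Server, then make a read-only call. The hostname and credentials are placeholders, and the endpoint paths follow the Horizon server REST API as I recall it, so verify them against Justin's blog and the API Explorer.

```python
import requests

# Placeholder values - replace with your own Connection Server and credentials.
CS = "https://cs01.example.com"
CREDS = {"domain": "example", "username": "svc-api", "password": "secret"}

# Log in to obtain a bearer token (POST /rest/login returns access/refresh tokens).
session = requests.Session()
session.verify = False  # lab only; use a trusted CA certificate in production
resp = session.post(f"{CS}/rest/login", json=CREDS, timeout=10)
resp.raise_for_status()
session.headers["Authorization"] = f"Bearer {resp.json()['access_token']}"

# Example read-only call: list the connection servers in the pod.
servers = session.get(f"{CS}/rest/monitor/connection-servers", timeout=10)
servers.raise_for_status()
for srv in servers.json():
    print(srv.get("name"), srv.get("status"))
```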
11. Just heard there is no JSON value to set this 😞
12. I think you replied to me saying that it could perhaps be handled by Google management; I removed that part as I was wrong there. It can't... I'm not aware of a JSON value, but I asked the devs if they know of any.
13. A hot patch will be created (it may already be in progress), but it is not ready yet.
16. And quick help from Michael Erb gave me the answer: https://omnissa.link/horizonclient
17. The answer is yes... let me check what we are doing there, but I remember a conversation about it where we were deciding which shortener to use.
18. Application life cycle management

Applications are the gateway to the data employees work with. Proper functioning of applications is key for any organization, and issues with applications are felt instantly. It is therefore important that applications are patched, updated, and maintained to ensure they are secure, performant, and available. Application life cycle management is key in this: you do not want applications patched in a production environment without proper testing. Recently two incidents took place where this occurred, and the result was outages that spanned hours to days.

Application and image management require rigorous testing, with multiple phases of testing to determine whether an updated version or patch works as expected. Skipping these phases and deploying straight into production is like jumping off a cliff blindfolded: it may work, but there comes a day when there is no water below the cliff.

How does this application life cycle work?

Applications go through a life cycle. The first phase is the Develop phase, where technical tests are conducted and the application is installed by an admin: standard technical tests to see that it opens, that no errors pop up, and that it does not break other applications. The Develop phase is when the application is introduced into the organization; it is seen as a playground environment, one in which to get to know the application and its quirks.

The second phase of life cycle management is the Test phase, where the application, update, or patch is evaluated against some dummy data (or an old backup of the production data). Simple queries, a printout, and opening reports are done here. If all goes well, it is handed over to the acceptance phase.

The third phase of life cycle management is the Acceptance phase, in my view the most crucial phase of life cycle management. Here key users of the application work through a complete functional test plan to evaluate all functionality used during the day. Only when this phase is signed off is it released to production.

If all previous phases went well, the application, whether new, updated, or patched, rolls into Production. When a patch or a newer version is released, it starts all over. The TAP (Test, Acceptance, Production) cycle is ongoing for every application, all year long.

Application life cycle management may look like a lot of effort "just" to update or patch an application. It is a lot of work, but it guarantees proper functioning of applications and ensures no loss of productivity when deploying a new, updated, or patched application. With the right test procedure and detailed functional test plans, it is a manageable process to go through. Preparation is key: getting your test plans in order is a one-time write-up that will lift your application management to enterprise level.

How often is a production environment image updated?

That is a relevant question when thinking about application life cycle management. Can you update the desktop image when a vendor releases an important update or patch, or do you have to wait until the end of the month for the image update? Each dot on the chart is an update or patch: minor or important; functional, critical, or security focused. No customer will update their image each time a patch is released for an application. That would be impossible, as images contain dozens, if not hundreds, of applications. You would be updating around the clock.
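As an illustration of the phase flow described above, here is a small Python sketch; the class, the sign-off mechanics, and the example names are my own simplification, not an Omnissa tool. The one rule it encodes is the important one: a new patch always restarts the cycle at the first phase.

```python
from dataclasses import dataclass, field

# Lifecycle phases as described in the blog: a patch or new version
# must pass each phase in order before reaching production.
PHASES = ["Develop", "Test", "Acceptance", "Production"]

@dataclass
class AppVersion:
    name: str
    version: str
    phase: str = PHASES[0]
    signoffs: list = field(default_factory=list)

    def sign_off(self, approver: str) -> None:
        """Record a sign-off for the current phase and promote to the next."""
        self.signoffs.append((self.phase, approver))
        idx = PHASES.index(self.phase)
        if idx + 1 < len(PHASES):
            self.phase = PHASES[idx + 1]

    def new_patch(self, version: str) -> None:
        """A new patch restarts the whole cycle from the first phase."""
        self.version = version
        self.phase = PHASES[0]
        self.signoffs.clear()

app = AppVersion("ReportTool", "2.1")
app.sign_off("admin")       # Develop -> Test
app.sign_off("test team")   # Test -> Acceptance
app.sign_off("key users")   # Acceptance -> Production
print(app.phase)            # Production
app.new_patch("2.2")        # back to Develop; the cycle never ends
```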
The dilemma of every organization: how do I keep my application landscape up to date without updating my image around the clock?

App Volumes takes application management out of the image

With Omnissa App Volumes, application management is no longer part of image management. The image can be managed on a regular cadence: bi-weekly, monthly, quarterly, whatever works best. Applications are managed and delivered by App Volumes, even on demand with Apps on Demand. Certain applications will still be installed in the image, but the vast majority will be delivered on demand. Your application life cycle management is enhanced with App Volumes.

Within App Volumes an application has four packaging stages, named New, Tested, Published, and Retired. You can overlay these with the application life cycle phases: the App Volumes "Tested" stage maps to the "Test" phase of application life cycle management. The package is handed off to acceptance, and once all tests are successfully completed, it is "Published". At the end of each application life cycle phase an App Volumes stage begins: once the application has been through the Acceptance phase it is Published (App Volumes stage), which corresponds to production until it is replaced by a newer version. When the newer version enters production, the outdated version is Retired (App Volumes stage). The diagram below shows this process more clearly. Older versions are retired but can still be assigned if needed, and a rollback is easy to do. App Volumes has a 99%+ compatibility record, verified with customers who have over a thousand apps deployed with App Volumes. The sketch below illustrates this stage mapping.
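A minimal sketch of that overlay, assuming a simple dictionary of packages: the stage names come from App Volumes as described above, but the mapping table and the retire-on-publish logic are my illustration, not the product's actual behavior.

```python
# Hypothetical mapping from lifecycle phase to App Volumes packaging stage.
PHASE_TO_STAGE = {
    "Develop":    "New",        # freshly captured package
    "Test":       "Tested",     # passed technical tests
    "Acceptance": "Tested",     # still under functional evaluation
    "Production": "Published",  # assignable to users
}

def promote(packages: dict, app: str, version: str, phase: str) -> None:
    """Set a package's App Volumes stage from its lifecycle phase,
    retiring the previously published version when a new one goes live."""
    stage = PHASE_TO_STAGE[phase]
    if stage == "Published":
        for (name, ver), current in packages.items():
            if name == app and current == "Published":
                packages[(name, ver)] = "Retired"  # kept around for easy rollback
    packages[(app, version)] = stage

packages = {}
promote(packages, "ReportTool", "2.1", "Production")
promote(packages, "ReportTool", "2.2", "Production")
print(packages)
# {('ReportTool', '2.1'): 'Retired', ('ReportTool', '2.2'): 'Published'}
```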
Application life cycle management, unique in the market

When we look at the VDI (Virtual Desktop Infrastructure) and DaaS (Desktop as a Service) solutions in the market, there is no equivalent to what Omnissa is offering. There are solutions that offer (near-real-time) application delivery or application layering, but none of them ship life cycle management functionality with their products. That means admins need to keep track of testing and packaging phases themselves, which is fine for one application but becomes a nightmare with hundreds or thousands of applications. Looking at the broader EUC (End User Computing) market, outside the VDI/DaaS vendors, we see vendors with application delivery solutions, but again they lack life cycle management features. Omnissa App Volumes is unique in this market for its life cycle management, its on-demand delivery of applications, and its place in a digital workspace that includes UEM (Unified Endpoint Management) to manage and deploy applications outside the VDI/DaaS realm.

Benefit to TCO (Total Cost of Ownership)

Besides the fact that you can patch faster and are thus less exposed to security threats (that should be reason enough on its own), you will see a positive effect on your TCO. Think about it: with most applications outside of the image, you need fewer images and fewer desktop pools. Fewer images and pools relieve an admin of repetitive work; admins can be assigned to other tasks than just managing desktop images and pools. Production is hindered by image updates for application patches: every update of the image requires an update to the pools, and no matter how you spin that, it impacts production. With non-persistent desktops it is perhaps easier, but with persistent desktops it does impact user experience. With the applications out of the image, the impact is minimal.

It is a win-win situation: better security, better user experience, and better TCO.

What is next?

If we extend application life cycle management into modern management, the number of steps or phases grows. We define eight activities of modern management that correspond to application life cycle management and extend it. The eight activities of modern management deserve a blog of their own; they level up a customer's application management and deployment. We will discuss this in a future blog post.

Summary

App Volumes and Apps on Demand, available with most Horizon subscriptions, form a unique solution for organizations that want to take application delivery and management to the next level. There is no other VDI (Virtual Desktop Infrastructure) or DaaS (Desktop as a Service) solution out there offering this level of management and delivery. If you are a Horizon customer, you may already have access to the feature today; if not, check out Horizon by contacting us at https://www.omnissa.com/contact-us/.
19. I haven't seen anything around that; I'm tracking the issue internally.