Cisco Collaboration Infrastructure Requirements

Introduction

General

This page summarizes hardware requirements and virtualization software requirements for Cisco Collaboration applications. See Cisco Collaboration Virtualization and click on an application name to view its virtualization support.

Application support

Cisco Collaboration applications do not support non-virtualized / physical / bare-metal installation on any physical hardware except where specifically indicated. Support requires an active Collaboration Flex subscription or a Software Support Service contract on perpetual licenses that include UCM. "Support" means isolation of symptoms to the application itself or to something external, such as (non-exhaustive) the hypervisor, the physical hardware, the network, or the phone/endpoint. If a UC app is listed as "Supported with Caveats", then support is as described in its caveats. Note that support varies by app and version, and the recommendations on this page apply only to single-cluster deployments. Also observe application-specific rules and restrictions on co-residency (e.g. Cisco Webex Meetings Server has application-specific co-residency rules independent of physical CPU), and see any other application-specific rules.

Virtualization licensing

Embedded virtualization licenses are purchased from and supported by Cisco (e.g. with a paid-up software support contract at an active service level such as ECMU or ISV1). They are physically delivered as a factory preload on the appliance, and multiple appliances will use the same initial license key. An embedded license is not a VMware Partner Activation Code (PAC) and is not manageable via myvmware.com; it is also not a cisco.com Product Authorization Key (PAK). Active SWSS is required; see the Business Edition 6000 or Business Edition 7000 Ordering Guide (partner-level access). If the virtualization license is instead purchased from and supported by a 3rd party, see the 3rd-party's documentation for what is supported.

CPU requirements

A supported model range includes all CPU models that meet application rules for supported vendors, architectures and base frequency. Unlisted model ranges are not supported even if the parent architecture is supported. "Max Turbo Frequency" may NOT be used to meet the base-frequency requirement: "Turbo Mode" represents temporary resources that are only available when other physical CPU cores are less busy, so it is not sufficient for Cisco app needs. A redundant power supply may be selected. For other hardware options, unless a Cisco Collaboration app specifically disallows an option (to date this has never been the case), if the hardware vendor supports it, it is allowed even if unlisted.
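As an illustration of the base-frequency rule, the sketch below is a hypothetical helper (not Cisco tooling); the CPU figures and the 2.2 GHz threshold are just example values. It checks whether a candidate CPU can be counted toward an application's minimum frequency, deliberately ignoring turbo.

    # Hypothetical sketch: check a CPU against an application's base-frequency rule.
    # Only the sustained base frequency counts; turbo never does.
    from dataclasses import dataclass

    @dataclass
    class Cpu:
        model: str
        cores: int
        base_ghz: float
        turbo_ghz: float  # informational only; never used for qualification

    def meets_frequency_rule(cpu: Cpu, required_base_ghz: float) -> bool:
        """Return True only if the CPU's base frequency meets the requirement.

        Turbo frequency is ignored: it is a temporary resource available only
        when other cores are less busy, so it cannot satisfy the minimum.
        """
        return cpu.base_ghz >= required_base_ghz

    if __name__ == "__main__":
        xeon_4114 = Cpu("Intel Xeon Silver 4114", cores=10, base_ghz=2.2, turbo_ghz=3.0)
        # Assumed requirement for the example: a "small" capacity point needing 2.2 GHz base.
        print(meets_frequency_rule(xeon_4114, required_base_ghz=2.2))  # True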
VM configuration requirements (OVAs)

Customers must download and use the Cisco-provided OVA template files for the initial install, as they cover items such as supported capacity levels and any required OS/VM/SAN "alignment". To download the OVA files, refer to the Collaboration Virtualization Sizing guidelines. UCM cluster nodes require fixed capacity points with fixed-configuration VMs in the Cisco-provided OVA for UCM. If capacity points are mixed in the same UCM cluster, the maximum scale per cluster and the maximum density per node continue to be limited by the capacity points in use (see the UCM sizing rules). Which VMs are active, and how many are active simultaneously, depends on how the CUCM cluster nodes are set up with respect to service activation, redundancy groups, etc.

For a given capacity point (such as the 10K-user VM), the virtual hardware specs represent the minimum for that capacity point. Customers who wish to add vCPU and/or vRAM beyond this minimum to improve performance may do so, but note the following: vCPU/vRAM increases alone do not increase supported capacity, maximum density per cluster node or maximum scale per cluster, and adding vRAM requires the VM to be shut down first. When deployed on a BE6000S server with version 11.0 or higher, capacity is limited to 150 users / 300 devices and the design must follow BE6000S requirements.

New installs of 9.1 and above must use a 1x110 GB vDisk; older versions used 2x80 GB or other configurations as shown in the sizing tables. The 80 GB disk configuration can be used when performing a migration (via DRS backup/restore) from a non-virtualized/physical server with a disk capacity of 80 GB or less. To migrate from bare-metal servers (e.g. Cisco 7800 Series Media Convergence Server) to UC on UCS, the supported procedure is to upgrade to an 8.x software version on the bare-metal server and then restore from backup into the new VM.

vCenter and ESXi

Cisco Collaboration applications do not require their own dedicated vCenter, and an ESXi cluster can contain ESXi hosts running Cisco Collaboration alongside other workloads. VMware vCenter is mandatory when deploying on UC on UCS Specs-based and 3rd-party Specs-based infrastructure. vCenter Statistics Level 4 logging is mandatory so that Cisco TAC can provide effective support on issues older than one hour where the Cisco Collaboration app is not suspected as the root cause but the customer requests root-cause analysis. Disable LRO if on ESX 4.1 and the application version is below 8.6.

It is important to understand that the UC application is not tied to the version of ESXi it is running on. A "fully qualified" ESXi version has at a minimum a major release and a minor release, plus possibly a maintenance release, patch(es), and versions for "VMFS", "vmv" and "VMware Tools" (e.g. ESXi 6.7 U2, vmfs6, vmv15, vmtools 10.3.10); for convenience this is called the "app's ESXi version". Note that ESXi 6.0 only supports VMFS5, and version downgrades are not supported. Not all features in a given major/minor release of VMware vSphere ESXi may be licensed/enabled; before reading the best practices below, verify support at Supported Editions and Features of VMware vSphere ESXi, VMware vCenter and VMware vSphere Client. Per-application tables list the supported versions of VMware vSphere ESXi (e.g. 6.7 and 7.0 U1) with links to the Design Guide and Upgrade Guide for each component and capacity point; some entries support only application versions 12.x and under.
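To make the "fully qualified ESXi version" idea concrete, the sketch below uses a hypothetical data model (not a Cisco or VMware tool) to record the pieces of a qualified version and to check that a VM's vmv does not exceed what the host's ESXi release can run. The max-vmv map holds assumed example values; confirm real figures against VMware's published compatibility listings.

    # Hypothetical "fully qualified ESXi version" record and a vmv compatibility check.
    # The max-vmv numbers are illustrative; verify against vmware.com before use.
    from dataclasses import dataclass

    # Assumed mapping of ESXi release -> newest VM hardware version (vmv) it can run.
    # Older vmv values remain supported on newer ESXi; downgrades are not supported.
    ASSUMED_MAX_VMV = {"6.0": 11, "6.5": 13, "6.7": 14, "6.7 U2": 15, "7.0 U1": 18}

    @dataclass
    class QualifiedEsxiVersion:
        release: str   # e.g. "6.7 U2"
        vmfs: str      # e.g. "vmfs6"
        vmv: int       # e.g. 15
        vmtools: str   # e.g. "10.3.10"

    def vm_runs_on_host(vm_vmv: int, host: QualifiedEsxiVersion) -> bool:
        """A VM can run if its vmv is no newer than the newest vmv the host supports."""
        return vm_vmv <= ASSUMED_MAX_VMV.get(host.release, host.vmv)

    host = QualifiedEsxiVersion(release="6.7 U2", vmfs="vmfs6", vmv=15, vmtools="10.3.10")
    print(vm_runs_on_host(13, host))  # True: an older vmv on a newer ESXi is fine
    print(vm_runs_on_host(18, host))  # False: vmv newer than the host supports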
Virtual Machine Version (vmv)

Cisco-provided/required OVA files are built for the specific vmv used when testing the ESXi major/minor version (e.g. OVAs for ESXi 5.x include vmv7 and vmv8). Cisco only provides application OVAs for the required minimum vmv; if a newer vmv is needed, deploy the OVA with the old vmv and then upgrade the vmv. Otherwise, unless a Cisco Collaboration app indicates NOT to, customers are free to manually upgrade the vmv to a newer vmv supported by the ESXi version. New ESXi versions may increase the latest available vmv, but they continue to support older vmv versions (see vmware.com for information on compatibility of old vmv versions with new ESXi versions, such as the vmware.com KB article on ESXi/vmv compatibility). For customers using vSphere Client instead of vCenter, it is NOT recommended to upgrade to a newer vmv; for example, at the time of this writing, VMs using vmv10 will not work with the free vSphere Client, only with the chargeable vCenter.

VMware Tools

VMware Tools are specialized drivers for virtual hardware that are installed in the UC applications when they run virtualized. VMware Tools may be either "VMware-native" (provided by VMware ESXi) or "open-vmtools" (provided by the guest OS); usually, the VMware Tools tar file is called linux.iso. A virtual machine can be configured to automatically check the tools version during each VM power-on and automatically upgrade the tools if they are not up to date. Later, a CLI command was created to make the upgrades easier.

VMware Update Manager (VUM)

Cisco Unified Communications application upgrades, patches and updates cannot be delivered through VMware Update Manager; UC apps continue to use their existing methods of software installation and upgrade. Using VUM to patch and update the guest OS is supported only by some applications and some versions, and that is what is shown on this page when referring to VUM support. Before applying a VMware upgrade or update to a host, always verify compatibility with each Cisco Collaboration app (see the At a Glance table at http://www.cisco.com/go/virtualized-collaboration) and always verify with the server vendor that the update is compatible with the server model's BIOS/firmware/driver state (follow your compute vendor's instructions for compatibility).
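The fragment below is a minimal sketch, assuming the open-source pyVmomi library and a reachable vCenter, of setting a VM's tools-upgrade policy so VMware Tools are checked and upgraded at each power-on. The host name, credentials and VM name are placeholders, and the change should be validated in a lab before production use.

    # Minimal pyVmomi sketch (assumes: pip install pyvmomi, vCenter reachable).
    # Sets the VMware Tools upgrade policy to check/upgrade at each power-on.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def set_tools_upgrade_at_power_on(vm: vim.VirtualMachine) -> None:
        spec = vim.vm.ConfigSpec()
        spec.tools = vim.vm.ToolsConfigInfo()
        spec.tools.toolsUpgradePolicy = "upgradeAtPowerCycle"  # alternative: "manual"
        vm.ReconfigVM_Task(spec=spec)  # returns a task; wait on it in real code

    if __name__ == "__main__":
        ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
        si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                          pwd="changeme", sslContext=ctx)  # placeholder credentials
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.name == "CUCM-PUB":  # placeholder VM name
                    set_tools_upgrade_at_power_on(vm)
        finally:
            Disconnect(si)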
The Cisco DocWiki platform was retired on January 25, 2019; that content has been moved to Cisco Collaboration Infrastructure Requirements.

VMware feature support

Feature support varies by app and version, so check each application's feature support table (for example, VMware Feature Support for Contact Center covers Unified CCE, Cisco Unified Intelligence Center, Cisco MediaSense and Cisco SocialMiner) and see the documentation for the UC application or UC appliance software for what is supported. If a feature is listed as "Supported with Caveats", support is as described in the caveats.

VMware vMotion: This feature migrates a live, running virtual machine (VM) from one physical server to another. The following applies to any use of vMotion with UC apps. For some applications, migration of UC VMs that are live and processing live traffic is supported, but note that Cisco testing cannot cover every possible operational scenario; UC apps are more sensitive than general workloads to infrastructure issues such as latency, ESXi scheduler VM swaps, interruptions/freezes, the "IO blender" and "noisy neighbor" effects. During the vMotion cutover the system is paused, which for real-time UC apps creates a service interruption that can degrade voice quality for calls in progress. Other applications are "maintenance mode only": vMotion by definition operates on live VMs, but the VM running the UC app must be "live but quiescent", i.e. moved in a maintenance window, not in production and not processing live traffic. The destination physical server must not end up with over-subscribed hardware after the migration. In ESXi 5.1+ vMotion also allows "DAS to DAS" moves, i.e. migration between hosts that use local DAS rather than shared storage. Another alternative is manual virtual machine shutdown and migration: fast manual server moves (e.g. for planned maintenance on the server or VMware software, or during troubleshooting to move software off of a physical server having issues) and fast manual server recovery (e.g. restore from backup). For applications without vMotion support, the only supported scenario is moving a shut-down VM to a different server during a maintenance window. See vMotion for what is supported.

VMware Long Distance vMotion: Not supported for UC apps. Long Distance vMotion is a joint Cisco and VMware validated architecture for using the vMotion feature across data centers (site to site); for more information, see http://blogs.cisco.com/datacenter/comments/cisco_and_vmware_validated_architecture_for_long_distance_vmotion/ and http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns836/white_paper_c11-557822.pdf.

VMware High Availability (HA): This feature automatically restarts a virtual machine (VM) on the same physical server or a different physical server. It does not protect against faults with the SAN or network hardware, and failovers to other servers must not result in an unsupported deployment model (e.g. over-subscribed hardware or unsupported co-residency). See VMware High Availability for what is supported.

VMware Distributed Resource Scheduler (DRS): For automated movement of VMs via DRS, a few applications have caveated support (see Caveated Support for VMware CPU Reservations and Distributed Resource Scheduler for details); otherwise it is not supported.

VMware Site Recovery Manager (SRM): This feature provides an automated disaster recovery solution that works on a "site to site" basis, where a "site" comprises physical servers, VMware and SAN storage. Cisco recommends powering off the VMs before the SAN replication occurs, and always ensuring a DRS (Disaster Recovery System) backup of the Cisco Collaboration applications is available in case there are issues with the replicated VMs.

VMware Snapshots: This feature effectively provides a method to do full system backup/restore, take system images, or revert changes to software versions, user data and configuration changes. Support varies by app and version.

Copy Virtual Machine: This allows VMs to be copied and then subsequently modified or shut down. If a VM copy is uploaded as a "whole system restore", clustered UC applications such as CUCM will probably require their replication to be manually "fixed" via a CLI command. See also VMware Data Recovery and Copy Virtual Machine.

Storage vMotion: This "customer convenience" feature provides easy migration of a live system from one SAN to another SAN. For UC apps, an easier suggested alternative is to perform a manual VM shutdown and migration to the new SAN.

VMware Consolidated Backup (VCB) and vSphere Data Protection (VDP): These provide integration with 3rd-party backup utilities so that they can non-disruptively back up the OS and application in a virtual machine. Support varies by app and version; see VMware Consolidated Backup and vSphere Data Protection for what is supported.

vSphere Storage Appliance (VSA): VSA is not really a "feature" but rather a storage product from VMware. If VSA is to be used as shared storage for a virtualized Cisco Collaboration deployment, it must meet the storage requirements for UC on UCS Specs-based or 3rd-party Server Specs-based deployments (e.g. HCL, latencies, application VM capacity and performance needs).

VMware Fault Tolerance: Not supported.

VMware Boot from SAN and VMware vCenter Converter: See the per-application feature tables for what is supported.

Public cloud: 3rd-party public cloud offers are not supported, whether based on VMware Cloud Foundation (e.g. IBM Cloud) or not (e.g. Amazon Web Services [AWS], Microsoft Azure, Google Cloud Platform).
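Before a manual move or an HA failover lands a VM on another host, it helps to confirm the destination will not be over-subscribed. The sketch below is an illustrative check only; the host and VM names, their sizes and the strict 1-vCPU-per-core / 1:1 vRAM policy are assumptions, and actual placement rules come from the application sizing tables.

    # Illustrative pre-check: will the destination host be over-subscribed after a move?
    # Assumption for the sketch: 1 vCPU per physical core and 1:1 vRAM to physical RAM.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Vm:
        name: str
        vcpu: int
        vram_gb: int

    @dataclass
    class Host:
        name: str
        physical_cores: int
        ram_gb: int
        vms: List[Vm]

    def fits_on_host(host: Host, candidate: Vm) -> bool:
        used_cores = sum(vm.vcpu for vm in host.vms)
        used_ram = sum(vm.vram_gb for vm in host.vms)
        # Hypervisor overhead headroom is omitted here for brevity.
        return (used_cores + candidate.vcpu <= host.physical_cores
                and used_ram + candidate.vram_gb <= host.ram_gb)

    dest = Host("esxi-02", physical_cores=20, ram_gb=96,
                vms=[Vm("CUCM-SUB1", 4, 8), Vm("CUC-1", 4, 8)])
    print(fits_on_host(dest, Vm("IMP-1", 4, 8)))  # True in this assumed example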
Storage requirements

IOPS and storage system performance requirements: This section provides the IOPS data for a Cisco Unified Communications Manager system under load. For VM configurations supporting more than 1,000 users, plan for 160 IOPS during steady state. Nightly backup (usually on the Publisher's VM only) adds 50 IOPS, and trace collection adds 100 IOPS (it occurs on all VMs for which tracing is enabled). Enabling CAR continuous loading results in around 300 IOPS average on the system, while CUCM sending CDR/CMR to an external billing server does not incur any additional IOPS.

Considerations specific to local DAS: Plan for one HDD per physical CPU core, with 4-6 disks per RAID5 array (more disks per volume are discouraged, as this increases the risk of long rebuild times if there is ever a multiple-disk failure). For latency/performance, a rule of thumb would be 14 disks (1 HDD per physical CPU core). For cache, system and boot disks, any option is usually fine.

SAN/NAS shared storage: The requirements cover the adapter for storage access and the transport network (FC, iSCSI, NFS, etc.), including HCL compliance, latencies, and application VM capacity and performance needs.

Storage latency: In esxtop, "GAVG/cmd" (Guest Average Latency per command) is the sum of two other latency values (GAVG = KAVG + DAVG): "KAVG/cmd" (VM Kernel Average Latency per command), an indicator of CPU resources/performance, where values greater than 2 ms may cause performance problems, and "DAVG/cmd" (Device Average Latency per command), which reflects the storage device and transport. See the VMware documentation for monitoring, performance and esxtop for more details on viewing and interpreting latency values.
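As a planning aid, the sketch below tallies the CUCM IOPS figures quoted above for a hypothetical cluster. Only the per-activity IOPS numbers come from this page; the VM counts and the assumption that the activities overlap are made up for the example.

    # Rough IOPS planning sketch using the per-activity figures quoted above.
    # The cluster layout and activity overlap are assumptions for the example.
    STEADY_STATE_IOPS = 160        # per VM supporting more than 1,000 users
    NIGHTLY_BACKUP_IOPS = 50       # usually the Publisher VM only
    TRACE_COLLECTION_IOPS = 100    # per VM with tracing enabled
    CAR_CONTINUOUS_LOADING_IOPS = 300  # system average when enabled

    def worst_case_iops(cucm_vms: int, vms_with_tracing: int,
                        backup_running: bool, car_loading: bool) -> int:
        total = STEADY_STATE_IOPS * cucm_vms
        total += TRACE_COLLECTION_IOPS * vms_with_tracing
        if backup_running:
            total += NIGHTLY_BACKUP_IOPS
        if car_loading:
            total += CAR_CONTINUOUS_LOADING_IOPS
        return total

    # Assumed example: 3 CUCM VMs, tracing on 2 of them, nightly backup running, CAR enabled.
    print(worst_case_iops(cucm_vms=3, vms_with_tracing=2,
                          backup_running=True, car_loading=True))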
Networking requirements

Cisco physical and virtual networking infrastructure is presumed transparent to Collaboration workloads; virtual examples (non-exhaustive) include Cisco Nexus 1000V, AVS, CSR 1000V and Enterprise NFV. Network traffic is switched from the physical NICs to the "vNICs" of the virtual machines via either a VMware vSwitch or the Cisco Nexus 1000V. Cisco Collaboration application support does not include consulting on or debugging virtual switch configuration beyond the networking guidance in the design guides; see the Cisco network and data center Preferred Architectures for best practices on network element selection and configuration, and the Solution Reference Network Design Guide for UC security for what is supported. Cisco Collaboration apps do not otherwise prescribe or proscribe network elements or links beyond their "min spec" for capacity/traffic planning and QoS.

Multiple physical NICs and vNICs: Customers can use multiple NICs for VM network traffic, VMware console access, or management "back doors" for administrative access, backups, software updates or other traffic that should be segregated from the VM network traffic. Redundant physical network access links are permitted where supported by the VMware Compatibility Guide and the hardware provider's instructions.

QoS and traffic planning: The UC applications and their operating systems cannot set Layer 2 COS markings, and Cisco UCS 6x00 does not currently support Layer 3 to Layer 2 COS markings. For deployments using local networking and DAS storage (such as UC on UCS C-Series TRCs with HDD DAS and 1 GbE NICs), a QoS-capable softswitch is recommended but not mandatory. If using HyperFlex or 3rd-party shared storage (HCI, or FCoE/iSCSI/NFS-attached), remember that the same network links carry both VM vNIC traffic and vDisk (storage) traffic, so factor both traffic types into capacity and QoS planning.
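To illustrate that last point, the sketch below adds assumed per-VM network and storage bandwidth against a shared link. Every number here is a made-up placeholder; real values must come from application traffic planning and the storage platform's sizing guidance.

    # Illustrative capacity check for a shared link carrying vNIC + vDisk traffic.
    # All bandwidth numbers are placeholders, not Cisco guidance.
    LINK_CAPACITY_MBPS = 10_000  # assumed 10 GbE uplink

    vm_network_mbps = {"CUCM-PUB": 120, "CUCM-SUB1": 120, "CUC-1": 200}  # assumed
    vm_storage_mbps = {"CUCM-PUB": 300, "CUCM-SUB1": 300, "CUC-1": 450}  # assumed

    total = sum(vm_network_mbps.values()) + sum(vm_storage_mbps.values())
    headroom = LINK_CAPACITY_MBPS - total
    print(f"planned load {total} Mbps, headroom {headroom} Mbps "
          f"({100 * total / LINK_CAPACITY_MBPS:.1f}% of link)")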
Virtualization for Cisco Unified Communications Manager (CUCM)

Old versions of Call Manager (CUCM) ran on Windows servers; current releases are installed only as virtual machines. As a virtualization platform, it is strongly recommended to stick to VMware products, since they pass the hardware checks of the CUCM installer. CUCM deployments include the following main components:
- Cisco Unified Communications Manager Publisher (PUB)
- Cisco Unified Communications Manager Subscriber (SUB)
- Cisco Unified Communications Manager IM & Presence (CUPS)
- Cisco Unity Connection (UC)

Example design: assumptions, required applications, and application sizing

Requirements are calling & messaging: dial tone, voicemail, and enterprise instant messaging & presence for 100% of users. The deployment is USA-based, so it requires compliance with the US FCC Kari's Law / Ray Baum's Act. This yields the following applications and application sizing, with applications sized with redundant VMs. Assume the WAN between sites already exists and satisfies the applications' clustering-over-WAN requirements.

Appliance option: Assume Xeon 4114, which supports the required applications' "small" capacity point and VM configurations. The BE6000M (M5) appliance uses the Intel Xeon 4114 CPU (10C/2.2 GHz), so select that; all applications' small capacity points support a Xeon with 2.20 GHz base frequency, and for simplicity this example uses a pair of Intel Xeon 4114 CPUs. Minimum required memory is 30 GB; the BE6000M (M5) ships with 48 GB to accommodate typical scenarios with other apps that might run on this hardware besides the specific app/VM mix in this example. Plan to have 2+ hardware nodes for redundancy; for better change management and outage mitigation, a third appliance could be added to provide N+1 redundancy.

Larger appliance option: Assume Xeon 6126, which supports the required applications' "medium" capacity point and VM configurations. For the particular VM mix in this example, dual Intel Xeon 6126 (12C/2.6 GHz) could also have been used, but that would not have met the requirement for capacity headroom for expansion, change management and outage mitigation; the BE7000M (M5) appliance uses the Intel Xeon 6132 CPU (14C/2.6 GHz). Memory sizing notes from the worked examples: one VM mix requires a minimum of ~50 GB, with UCS DIMM population rules giving 6x16 GB = 96 GB (ignore options like Memory Mirroring); a larger mix requires ~114 GB minimum, but we'll align with the BE7000H (M5) and spec 192 GB.

HyperFlex option: The required VM count for software redundancy will fit on 2 HX Edge nodes, or on 3 HyperFlex nodes per site for a geo-redundant design (6 nodes total, within the limits of what a HyperFlex cluster can support). Each HyperFlex node will require a HyperFlex Data Platform (HXDP) storage controller VM of 8 vCPU in addition to the application VMs. Follow HX DIMM population rules for 4x32 GB = 128 GB.

UCS C-Series with local DAS option: A C240 M5SX chassis will be used for its up to 24 hard-disk slots, which allows lower-cost HDDs to be used (vs. SSD or NVMe) at high enough quantities to still meet the application DAS guidelines (see Storage Requirements, considerations specific to local DAS). This will force inclusion of the Cisco UCS VIC 1457 (4x 10/25GE), which is more than enough for the typical network load of this VM mix.

Geographic redundancy: If this design needed to handle two-site geographic redundancy, it could do so with additional VMs, a higher required infrastructure footprint, and the imposition of "clustering over WAN" network requirements. Three blade servers per site are sufficient to provide hardware redundancy and geographic redundancy (note that each site will require its own blade server chassis).

VM placement: If we evenly split the application VMs across three cluster nodes, then we'll need at least 29 physical CPU cores (29C) on each node. In the Quote Collab tool, model the VM placement and note the tallied hardware requirements for each hardware node.
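The per-node core figure comes from splitting the VM mix's total vCPU demand evenly across the nodes. The sketch below reproduces that arithmetic with an assumed VM list; the vCPU counts are placeholders chosen so the result lands on the 29-core figure quoted above, not the actual OVA values, and the 8 vCPU HXDP controller is added per node as described above.

    # Placement arithmetic sketch: split application VMs evenly across nodes and
    # tally the physical cores each node must provide (1 vCPU : 1 physical core).
    # The VM list below is a placeholder, not the actual OVA capacity points.
    import math

    app_vm_vcpus = [8, 8, 8, 8, 8, 8, 5, 5, 5]  # assumed application VM mix (63 vCPU total)
    nodes = 3
    hxdp_controller_vcpu = 8  # per HyperFlex node, in addition to the application VMs

    per_node_app_vcpu = math.ceil(sum(app_vm_vcpus) / nodes)
    per_node_cores = per_node_app_vcpu + hxdp_controller_vcpu
    print(f"each node needs at least {per_node_cores} physical cores")  # 29 in this example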