One of the software-defined technologies that have gained tremendous momentum in the past couple of years is software-defined storage. Most hardware and software companies are jumping on the bandwagon to offer software-defined storage solutions. Software-defined storage offers great benefits for housing your production virtualized workloads. The dominant leader in the enterprise data center when it comes to software-defined storage is VMware vSAN.
VMware vSAN continues to pick up new customers with each release, and the features and functionality of each release keep raising the bar for competing products. VMware pioneered the Software-Defined Data Center (SDDC) concept, which it coined to describe virtualizing all aspects of the data center, including compute, network, and storage.
The software-defined data center is the mindset moving forward, allowing your business to be as agile, automated, and abstracted from the underlying hardware infrastructure as possible. As one of the foundational pillars of your SDDC, software-defined storage is at the heart of your data.
vSAN Technology Overview
By using automation and storage pooling, vSAN abstracts the hardware from the storage solution and easily pools resources between servers. This gives your environment many benefits, including scale-up and scale-out mechanisms built right into the vSAN solution and, in essence, into vSphere itself as part of the hypervisor and vCenter Server.
VMware vSAN allows you to move beyond the traditional concept of storage LUNs that were extremely labor-intensive to change, grow, or reconfigure. In times past, this generally meant destroying the original LUN and reprovisioning a new LUN with new characteristics.
When a virtual machine is provisioned, you can choose a VM storage policy to support the application running on that virtual machine. With the appropriate storage policy selected, Storage Policy-Based Management (SPBM) makes sure the virtual machine is allocated the resources it needs and is provisioned on the correct tier of storage based on performance and other characteristics.
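As a rough illustration of how an SPBM compatibility check works, the sketch below matches a policy's rules against a datastore's advertised capabilities. The rule and capability names here are hypothetical examples, not the actual vSphere API.

```python
# Illustrative sketch of Storage Policy-Based Management (SPBM) matching a
# VM storage policy against datastore capabilities. Rule/capability names
# are made up for the example, not real vSphere API identifiers.

def is_compliant(policy_rules: dict, datastore_capabilities: dict) -> bool:
    """A datastore is compatible if it meets or exceeds every policy rule."""
    for rule, required in policy_rules.items():
        offered = datastore_capabilities.get(rule)
        if offered is None or offered < required:
            return False
    return True

gold_policy = {"failuresToTolerate": 1, "stripeWidth": 2}
vsan_datastore = {"failuresToTolerate": 3, "stripeWidth": 12}

print(is_compliant(gold_policy, vsan_datastore))  # True
```

The same check, run at provisioning time, is what lets vCenter list only compatible datastores for a given policy.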
VMware vSAN is an object-based storage solution. Each virtual machine is made up of underlying objects (such as the VM home namespace, virtual disks, and swap), and each object is an individual block storage device. Components are the pieces of those objects that are stored on particular cache or capacity devices, and there is a per-host limit on the number of components. How are the objects and components laid out physically across the vSAN environment?
VMware vSAN takes care of the layout of vSAN objects automatically, basing placement decisions on a number of factors, including the assigned storage policy and the available capacity and balance of devices across hosts. This provides tremendous benefits when configuring and provisioning the vSAN solution. It is a native part of VMware vSphere; as mentioned previously, vSAN is integrated into the hypervisor as a kernel-based solution.
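One of the biggest inputs to placement is the failures-to-tolerate (FTT) setting of the assigned storage policy. For RAID-1 mirroring, the arithmetic can be sketched as follows; this is a simplified illustration of the well-known formulas, not vSAN's internal logic.

```python
# Back-of-the-envelope sketch of vSAN RAID-1 (mirroring) placement math.
# With failuresToTolerate (FTT) = n, vSAN keeps n + 1 full replicas of an
# object and needs at least 2n + 1 hosts so that a majority of votes
# survives any n host failures.

def mirror_layout(ftt: int) -> dict:
    if ftt < 0:
        raise ValueError("FTT must be non-negative")
    replicas = ftt + 1
    min_hosts = 2 * ftt + 1
    return {
        "replicas": replicas,
        "min_hosts": min_hosts,
        # raw capacity consumed per byte of VM data
        "capacity_multiplier": replicas,
    }

print(mirror_layout(1))
# {'replicas': 2, 'min_hosts': 3, 'capacity_multiplier': 2}
```

This is why the common FTT=1 policy doubles raw capacity consumption and requires a minimum of three hosts (or two hosts plus a witness).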
This means no additional installation is required; it is part of vSphere. How does VMware vSAN provide the availability and performance of a software-defined storage solution in a vSphere environment?
This allows VMware vSAN to withstand failures of the underlying infrastructure and still maintain the resiliency of your data. The failure could be anything from a failed disk drive (cache or capacity) to a failure in the network (a network card, for example).

The answer to that question has radically changed over the last couple of years.
Hyper-converged storage is now the new kid on the block! VSAN is a fairly new VMware product that combines its already well-established hypervisor with hyper-converged storage features.
A VSAN implementation consists of a minimum of 3 hosts, each contributing local disks organized into disk groups. These disk groups are committed to the Virtual SAN storage, and VSAN presents a shared datastore to all hypervisors in that cluster. Since we needed a new hardware platform capable of running our ever-expanding lab, we took a closer look to see if this solution could meet our needs.
The hardware had a few minuses on which I will elaborate later in this article. Throughout the migration from our previous lab platform to the new VSAN cluster, we encountered a number of issues.
Unfortunately, this broke and corrupted the existing VSAN configuration.
Obviously, we were not satisfied with these results, so we tried again with a different method: first, we migrated a single host from the old cluster to the new VSAN cluster. That initial host registers with the new vCenter and applies the cluster-enabled features. Subsequently, we moved the other 3 hosts to the cluster. This time, the cluster did not report any VSAN partition-related issues as it did in the previous attempt (see illustration above).
The cluster was healthy again. The trick is to add one host first and once that has been done, you bulk add the remaining ESXi hosts and you will not have an issue. This is handy if you are trying to automate this process.
As I said earlier, there were some drawbacks to the hardware we chose. We suffered an HDD failure in one of our nodes, and at that moment we saw the true power of VSAN: the failure resulted in the automatic rebuilding of the failed VSAN objects on a new physical disk elsewhere in the cluster.
A VSAN object is a logical volume that distributes its data and metadata across the entire cluster and grants access to that data. The only thing we lost at that moment was raw storage capacity for VSAN, which was resolved by adding a new spare disk to the chassis; afterwards, VSAN was completely healthy again. Overall, I would definitely say that VSAN's handling of a capacity device failure and the subsequent rebuild is very robust and intuitive. All I had to do manually was remove the failed device from the disk group and add the new HDD to the same disk group.
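The rebuild behavior described above can be approximated with a toy placement check: pick a target host that does not already hold a replica of the affected object and that has enough free capacity. The data structures below are illustrative only, not vSAN internals.

```python
# Simplified sketch of the rebuild decision vSAN makes after a capacity
# device fails: each affected component is re-created on another host that
# (a) does not already hold a replica of the same object, preserving
#     host-level anti-affinity between replicas, and
# (b) has enough free capacity.

def pick_rebuild_host(component, hosts):
    for host in hosts:
        if host["name"] in component["replica_hosts"]:
            continue  # keep replicas on distinct hosts
        if host["free_gb"] >= component["size_gb"]:
            return host["name"]
    return None  # no target: object stays degraded until capacity is added

hosts = [
    {"name": "esx01", "free_gb": 500},
    {"name": "esx02", "free_gb": 50},
    {"name": "esx03", "free_gb": 800},
]
failed_component = {"size_gb": 100, "replica_hosts": {"esx01"}}
print(pick_rebuild_host(failed_component, hosts))  # esx03
```

The `None` branch mirrors what we saw in the lab: until the spare disk was added, the only thing lost was raw capacity, and the object stayed degraded but accessible.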
This can be solved by adding more disk groups. The possibility to add more than one SSD to the caching tier would be a very useful feature.
With the overall lab running smoothly, everything looked good again. Unfortunately, features such as deduplication and compression are not possible because we opted for a hybrid configuration (SSD for the cache tier, HDD for the capacity tier). Since we had enabled those features on our former platform, and since we designed our new VSAN platform with about the same raw capacity, we completely filled up our VSAN datastore without the benefit of dedup, compression, or erasure coding.
That is definitely something to remember when you migrate from an old storage infrastructure to a new hybrid VSAN environment.

In this post, I will cover some of the main big-ticket items that have been included in this release of vSAN 7. To ensure that user files are protected and that users can only see their own shares, this release adds integration with Kerberos authentication for NFS and Active Directory for SMB.
One of the most common pieces of feedback we have heard over the years with vSAN is the fact that when there is available capacity on one vSAN cluster, it cannot be easily used by another vSAN cluster.
There are some requirements around networking which will be made clear in the official docs, and there are also some scaling limits in this first version.

This is another feature that our customers have been requesting for some time. Prior to this release, the deduplication and compression space efficiency features were combined; you could not enable one without the other. So even workloads that did not benefit from deduplication needed to have it enabled on the vSAN datastore if they wanted the compression feature.
This also had some issues for availability, since the deduplication hash table was striped across all of the disks in the disk group. Should a disk failure occur when deduplication was enabled, the failure impacted the whole of the disk group.
Having an option to enable compression only in vSAN 7 addresses both of these concerns. Customers who have deployed 2-node vSAN clusters will be very much aware of the requirement to use a witness appliance: for each 2-node vSAN cluster deployed, an additional witness appliance also needed to be deployed. A single vSAN 7 witness appliance can now be shared among multiple 2-node clusters. Note that the shared witness appliance is only available for 2-node vSAN clusters at this time.

I believe most readers at this point will be well aware of the shift in application development towards a more cloud native approach, typically involving containers and most likely orchestrated by Kubernetes.
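The witness exists to keep quorum: an object stays accessible only while more than half of its votes are reachable. A minimal sketch of that rule for a 2-node cluster, with illustrative vote counts:

```python
# Sketch of the quorum rule a witness enforces in a 2-node vSAN cluster:
# an object is accessible only while more than half of its votes are
# reachable. With one data replica per node plus one witness vote, losing
# any single site still leaves 2 of 3 votes.

def object_accessible(votes_reachable: int, votes_total: int) -> bool:
    return votes_reachable * 2 > votes_total

votes_total = 3  # node A replica + node B replica + witness component

print(object_accessible(2, votes_total))  # True: one node down
print(object_accessible(1, votes_total))  # False: a node AND the witness down
```

Sharing one witness appliance across clusters does not change this math per cluster; it just consolidates where the tiebreaker votes live.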
With this in mind, VMware is continuously enhancing vSAN to be a platform for both container workloads and virtual machine workloads. At the same time, we want to ensure that these applications can run as optimally as possible from a storage perspective. Lastly, these applications will have the built-in smarts to understand what action to take when there is an event on the underlying vSphere infrastructure, e.g., a host entering maintenance mode, an upgrade, or a host failure.
We are currently working with a handful of design partners on the initial release. Some readers will be aware that we have provided limited support for Shared Nothing Architectures (SNA) in the past, but this meant we had to take various steps, such as disabling clustering features for the application.
Deploying directly onto the vSAN datastore like this is fully supported with the Data Persistence platform, but there is another option available as well. To facilitate a high-performance data path for these applications, the Data Persistence platform also introduces a new construct for storage called vSAN-Direct.
However these local storage devices are still under the control of HCI management, so that health, usage and other pertinent information about the device is bubbled up to the vSphere client. The primary goal here is to allow cloud native applications to be seamlessly deployed onto vSAN, but at the same time have those applications understand infrastructure operations such as maintenance mode, upgrades and indeed host failures.
As mentioned, we have partnered with a number of cloud native application vendors who will create bespoke Kubernetes operators that work with the Data Persistence platform. Partners can then define how their application should behave in response to those infrastructure events.
I will write more about the Data Persistence platform as our design partners come online. As you can see, there is lots of new goodness in this vSAN 7 release. There are lots of features here that customers have been requesting for some time, but also significant improvements in enabling vSAN to become the platform for both container and virtual machine workloads. Note that there is a range of additional features and enhancements in this release which I have not spoken about; please check out the official vSAN 7 documentation for the full details.
The success of vSAN can be attributed to many factors, such as performance, flexibility, ease of use, robustness, and pace of innovation. Paradigms associated with traditional infrastructure deployment, operations, and maintenance involve various disaggregated tools and often specialized skill sets. The hyperconverged approach of vSphere and vSAN simplifies these tasks, using familiar tools to deploy, operate, and manage private-cloud infrastructure.
vSAN is a great fit for large and small deployments, with options ranging from a 2-node cluster for small implementations to multiple clusters, each with as many as 64 nodes, all centrally managed by vCenter Server. Whether you are deploying traditional or container-based applications, vSAN delivers developer-ready infrastructure, scales without compromise, and simplifies operations and management tasks.
It abstracts and aggregates locally attached disks in a vSphere cluster to create a storage solution that can be provisioned and managed from vCenter and the vSphere Client. Also, vSAN accommodates a stretched cluster topology to serve as an active-active disaster recovery solution.
This allows greater flexibility to scale storage and compute independently. VM storage provisioning and day-to-day management of storage SLAs can all be controlled through VM-level policies that can be set and modified on the fly. Each host contains flash drives (all-flash configuration) or a combination of magnetic disks and flash drives (hybrid configuration) that contribute cache and capacity to the vSAN distributed datastore.
Each host has one to five disk groups. Each disk group contains one cache device and one to seven capacity devices. In all-flash configurations, the flash devices in the cache tier are used primarily for writes but can also serve as read cache for the buffered writes.
Two grades of flash devices are commonly used in an all-flash vSAN configuration: lower-capacity, higher-endurance devices for the cache layer and more cost-effective, higher-capacity, lower-endurance devices for the capacity layer.
Writes are performed at the Cache layer and then de-staged to the Capacity layer, as needed. This helps maintain performance while extending the usable life of the lower endurance flash devices in the capacity layer. In hybrid configurations, one flash device and one or more magnetic drives are configured as a disk group. A disk group can have up to seven drives for capacity.
One or more disk groups are used in a vSphere host depending on the number of flash devices and magnetic drives contained in the host. Flash devices serve as read cache and write buffer for the vSAN datastore while magnetic drives make up the capacity of the datastore.
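Putting the disk-group limits together, here is a quick sketch of the raw capacity a single host contributes to the datastore. The device sizes are made-up sample numbers; the key point is that cache devices never count toward datastore capacity.

```python
# Rough raw-capacity math for a vSAN host, following the limits described
# above: up to five disk groups per host, each with exactly one cache
# device and one to seven capacity devices. Cache devices do not
# contribute to datastore capacity.

def host_raw_capacity_gb(disk_groups):
    if not 1 <= len(disk_groups) <= 5:
        raise ValueError("a host supports 1-5 disk groups")
    total = 0
    for dg in disk_groups:
        capacity_devices = dg["capacity_gb"]
        if not 1 <= len(capacity_devices) <= 7:
            raise ValueError("a disk group holds 1-7 capacity devices")
        total += sum(capacity_devices)  # cache device is deliberately excluded
    return total

# Example: two disk groups, each one 400 GB SSD cache + four 2 TB drives.
groups = [{"cache_gb": 400, "capacity_gb": [2000] * 4} for _ in range(2)]
print(host_raw_capacity_gb(groups))  # 16000
```

Remember this is raw capacity; usable capacity is further reduced by the storage policy (e.g., FTT=1 mirroring halves it) and by slack space for rebuilds.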
VMware is always looking for ways to not only improve the performance of vSAN but also improve the consistency of its performance, so that applications can meet their service level requirements. Storage in a hyperconverged infrastructure (HCI) requires computing resources that have traditionally been offloaded to dedicated storage arrays.
Nearly all other HCI solutions require the deployment of storage virtual appliances on some or all hosts in the cluster. These appliances provide storage services to each host.

Basically, I had a cluster on which I wanted to configure an All-Flash vSAN instance; however, the cluster in question had already been configured in that state once before. Once the hosts had been split up and redeployed, I went back in to configure a couple of different instances of vSAN.
As you know, the first step is marking those undetected SSDs as flash within vSAN; however, when I attempted to do so, the following error was displayed. A bit more digging into the actual event revealed a little more. Now knowing that, for some reason, the disk is still claimed on the actual host, we can at least determine that the problem is within the host itself.
Running the following command on the host shows that the disk in question does indeed still belong to a vSAN disk group. Hmm, "Unable to complete Sysinfo operation". OK!
Well, this is much better! The disk group in question was still kicking around and, in turn, was configured with compression and deduplication. This is done by specifying the --uuid or -u option with the same command, but pointing at the actual disk group UUID instead of an individual disk. As you can see, we no longer list any devices, disks, or disk groups within vSAN.
This should hopefully mean that our disks are available once again for re-use in other scenarios — in my case, vSAN again! Heading back into the vSphere client and attempting to mark the disk as flash now will succeed!
Just wash, rinse, repeat on the remaining hosts you wish to re-use and you should be good to go! Thanks for reading!
What is VMware vSAN Disk Group?
I cannot overstate this, but make sure you have all the firmware and drivers up to date as specified in the HCL.

All virtual disks must be removed or deleted.
Hot spare disks must be removed or re-purposed. All foreign configurations must be cleared or removed. All physical disks in a failed state must be removed. Any local security key associated with SEDs must be deleted. I followed these steps: put the host into maintenance mode with full data migration. You have to select full data migration since we will be deleting the disk group. This process can be monitored in RVC using the command vsan.
In a typical datacenter, it is common for server disks to fail. Most disks in modern servers are hot-swappable, and with RAID systems there is no need for server downtime while replacing a failed disk.
Disks are very important to a vSAN implementation, and replacing failed disks in a server is a common task. In this article, I will explain the detailed step-by-step procedure to remove a capacity disk from a vSAN disk group before attempting to replace the failed disk. Log in to the vCenter Server using the vSphere Client. Select the disk group, then select the disk you want to remove from it. Pre-check evacuation gives you an idea of the impact of removing the disk from the disk group.
You can select the option based on your need. Since this is a lab environment, it shows that only a small amount of data (in MB) will be moved; in a production environment, it could be huge. This is fine if you are replacing the failed disk with a new disk.
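Conceptually, the evacuation options trade safety against data movement. The toy model below (with made-up component sizes and simplified mode names) shows why full data migration moves the most data, while evacuating only enough to ensure accessibility moves less and no evacuation moves nothing.

```python
# Illustrative model of the three evacuation modes offered when removing a
# disk or disk group: "full" moves every component off the device,
# "ensureAccessibility" moves only components whose last remaining replica
# lives on the device, and "none" moves nothing. Sizes are sample data.

def data_to_move_gb(components, mode):
    if mode == "full":
        return sum(c["size_gb"] for c in components)
    if mode == "ensureAccessibility":
        return sum(c["size_gb"] for c in components if c["last_replica"])
    if mode == "none":
        return 0
    raise ValueError(f"unknown mode: {mode}")

components = [
    {"size_gb": 40, "last_replica": False},  # another replica exists elsewhere
    {"size_gb": 25, "last_replica": True},   # sole remaining copy
]

print(data_to_move_gb(components, "full"))                 # 65
print(data_to_move_gb(components, "ensureAccessibility"))  # 25
```

Full data migration is the safe choice before deleting a disk group for good, since components left behind under the lighter modes would need to be rebuilt later.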
Once the new disk is added, non-compliant objects will be rebuilt. Click Close. Once you have understood the impact of removing the disk from the disk group, select the appropriate option. Now I only have 3 disks in the disk group. You can replace the failed disk and re-add it to the disk group. We are done; I hope this is informative for you. Thanks for reading! Be social and share this on social media if you feel it is worth sharing.