Had a challenge today trying to use NetApp disks that were provisioned in Clustered Data ONTAP 8.2 on a system reverted to 8.1.4. I found a useful tidbit here, and easily cleared the labels, making the disks usable again.
- filercluster::>node run -node <filer>
- filer> disk assign <diskid>
- filer> priv set diag
- filer*> labelmaint isolate <diskid>
- filer*> label wipe <diskid>
- filer*> label wipev1 <diskid>
- filer*> label makespare <diskid>
- filer*> labelmaint unisolate
- filer*> priv set
Clean up all of the old data
- # pkg remove -a -f
- # rm -rf /var/db/pkg/* /var/db/ports/* /usr/local/*
Reinstall ports data
- # portsnap fetch
- # portsnap extract
- # pkg info
- You’ll be prompted to reinstall pkg; press “y” and Enter
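The steps above can be collected into a single script. The script itself and its DRY_RUN guard are my own additions, not part of the original note; it defaults to only printing the commands, because the rm is destructive. Set DRY_RUN=0 (as root, on FreeBSD) to actually execute.

```shell
#!/bin/sh
# Sketch of the cleanup/reinstall sequence above (FreeBSD).
# DRY_RUN is an assumption added here for safety: it defaults to 1,
# so commands are printed rather than executed. Set DRY_RUN=0 to run.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Clean up all of the old data (destructive!)
run pkg remove -a -f
run rm -rf /var/db/pkg/* /var/db/ports/* /usr/local/*

# Reinstall ports data; 'pkg info' then prompts to re-bootstrap pkg
run portsnap fetch
run portsnap extract
run pkg info
```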
By raising the following cluster heartbeat thresholds you can likely prevent unexpected failover of the DAG.
cluster.exe /prop SameSubnetDelay=2000:DWORD
cluster.exe /prop CrossSubnetDelay=4000:DWORD
cluster.exe /prop CrossSubnetThreshold=10:DWORD
cluster.exe /prop SameSubnetThreshold=10:DWORD
To change back to the defaults…
cluster.exe /prop SameSubnetDelay=1000:DWORD
cluster.exe /prop CrossSubnetDelay=1000:DWORD
cluster.exe /prop CrossSubnetThreshold=5:DWORD
cluster.exe /prop SameSubnetThreshold=5:DWORD
To view your current settings:
cluster.exe /prop
Working with a variety of virtualization and cloud architectures over the last few years, I’ve come to realize that the majority of environments offer little in savings over their physical counterparts from years past. I believe one of the major reasons for this is over-engineering of the architecture. One of my favorite examples is storage multipathing. It’s a generally accepted truth that multipathing allows for high availability, and it’s also commonly accepted that it provides additional performance. Unfortunately, it seems the desire for n+1+1+1-style redundancy and multiple active connections causes one to lose sight of the mathematics involved. For example, if a five-node VMware vSphere 5 cluster is attached via 10Gb iSCSI to a 10Gb-attached NetApp filer that can only sustain 4Gbit/s, active/active multipathing and interface bonding do nothing to increase throughput; the filer itself is the bottleneck, so the extra paths only unnecessarily complicate the configuration. A better approach would be to look at the overall requirements for the solution and meet them with the least complex configuration possible.
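The arithmetic behind that example can be sketched in a few lines of shell; the numbers are the ones from the paragraph above (five hosts, 10Gb links, a filer that can sustain about 4Gbit/s):

```shell
# Back-of-the-envelope throughput check for the example above.
hosts=5
host_link_gbit=10
filer_gbit=4

# Aggregate host-side bandwidth vastly exceeds what the filer can serve,
# so additional active paths add complexity, not throughput.
aggregate=$((hosts * host_link_gbit))
echo "aggregate host links: ${aggregate} Gbit/s"
echo "filer ceiling:        ${filer_gbit} Gbit/s"

# Fair per-host share of the filer when all five hosts are loaded:
awk -v f="$filer_gbit" -v h="$hosts" \
    'BEGIN { printf "per-host share: %.1f Gbit/s\n", f / h }'
```

Even a single 10Gb link per host already exceeds the filer’s ceiling, which is the point: the bottleneck sits behind the paths, not in them.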
It seems Oracle isn’t the only one interested in progressing ZFS beyond where Sun had left it. The addition of SCSI UNMAP support to Nexenta and later the Illumos project, and the lack of said support in Solaris proper, seems to be the “first blood” in the battle between Oracle’s closed ways and the community’s desire to maintain ZFS as a viable and progressive alternative to commercial storage technologies. In the end it all proves one thing: competition is good.
Why would you choose to partner with a provider that relentlessly annoys your customers? I’ve been getting calls daily for almost 2 weeks now from Sirius about the expired trial service on my wife’s Taurus that I’ve no interest in renewing. Asking them not to call seems to do nothing, so I’ve resorted to filing a complaint with the FCC. Strangely enough they don’t bother me about my Audi… maybe someone over at Ford should tell Sirius to stop annoying its customers.
After reading through the manpage for mdadm and struggling with attempts to kill the resync processes manually, I came across the checkarray command. To stop an active resync of all md RAID devices: “/usr/share/mdadm/checkarray -xa”. To stop a specific device: “/usr/share/mdadm/checkarray -x /dev/md”.
- Download the ESXi installable ISO.
- Double-click the ISO to mount it (an icon will appear on your desktop). From there, navigate the contents of the ISO image to find VMware-VMvisor-big-3.5.0-xxxx.i386.dd.bz2 and copy it out of the ISO image into a separate folder.
bzcat <path to VMware-VMvisor file> | dd of=/dev/disk1
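The same bzcat-into-dd pattern can be rehearsed safely against a scratch file before pointing it at a real device; the file names below are made up for illustration, and /dev/disk1 stands in for whatever your system actually shows.

```shell
# Safe rehearsal of the bzcat | dd pattern using a scratch file
# instead of a real disk device. File names here are invented.
echo "pretend ESXi image payload" > image.dd
bzip2 -k image.dd                       # produces image.dd.bz2, keeps original

# Same shape as: bzcat VMware-VMvisor-...dd.bz2 | dd of=/dev/disk1
bzcat image.dd.bz2 | dd of=restored.dd 2>/dev/null

# Confirm the decompressed copy matches the original
cmp image.dd restored.dd && echo "round trip OK"
```

Before writing to the real device, double-check which disk is which with `diskutil list` and unmount it with `diskutil unmountDisk /dev/disk1` — dd to the wrong device is unrecoverable.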