Blog posts for February 2014

Remove old Solaris boot snapshots

Removing old boot images is useful from time to time. Deleting old, unneeded entries from the boot menu is one advantage, but recovering disk space is the more significant benefit.

Each boot menu item, like 'OpenIndiana-1', or 'OpenIndiana' has a corresponding Boot Environment. Each of these Boot Environments is realized with a ZFS snapshot of the root filesystem. The following utilizes OpenIndiana for the example environment, but the utility works the same way in OpenSolaris and Oracle Solaris.

To remove the ZFS snapshot, we first want to remove the boot menu entry. Managing Boot Environments, as the Solaris documentation calls them, is not difficult -- the beadm utility does the work.

First obtain a list of existing Boot Environments. Decide which one(s) to keep and which ones to expunge.

root@castor:/# beadm list -a
BE/Dataset/Snapshot                             Active Mountpoint Space Policy Created
   rpool/ROOT/openindiana                       -      -          13.2M static 2012-11-04 17:39
   rpool/ROOT/openindiana-1                     -      -          18.0M static 2012-11-04 20:32
   rpool/ROOT/openindiana-2                     -      -          20.7M static 2013-09-16 20:18
   rpool/ROOT/openindiana-3                     NR     /          9.20G static 2013-12-07 14:44
   rpool/ROOT/openindiana-3@2012-11-05-02:32:06 -      -          156M  static 2012-11-04 20:32
   rpool/ROOT/openindiana-3@2013-09-17-01:18:25 -      -          1.47G static 2013-09-16 20:18
   rpool/ROOT/openindiana-3@2013-12-07-20:44:51 -      -          1.13G static 2013-12-07 14:44
   rpool/ROOT/openindiana-3@install             -      -          68.5M static 2012-11-04 17:44

Note that the corresponding ZFS snapshot names are shown, indented, below each boot environment (boot menu entry) name.

Make sure the boot entry / boot environment you choose does not have "NR", "R", or "N" in the Active column of "beadm list". Those are your bootable (N = active now) or soon-to-be-bootable (R = active on reboot) environments!
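To pick removal candidates mechanically, one could filter on the Active column. Below is a sketch run against inlined sample data (on a real system, pipe the actual "beadm list" output into the same awk filter):

```shell
# Sketch: print BEs whose Active column is "-" (no N or R flag),
# i.e. candidates that may be safe to destroy.  Sample data is
# inlined here; on a real system use:  beadm list | awk '...'
cat <<'EOF' | awk 'NR > 1 && $2 == "-" { print $1 }'
BE            Active Mountpoint Space Policy Created
openindiana   -      -          13.2M static 2012-11-04 17:39
openindiana-1 -      -          18.0M static 2012-11-04 20:32
openindiana-2 -      -          20.7M static 2013-09-16 20:18
openindiana-3 NR     /          9.20G static 2013-12-07 14:44
EOF
# prints: openindiana, openindiana-1, openindiana-2
```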

In this example, suppose we want to remove the original 'openindiana' boot environment.

root@castor:/# beadm destroy openindiana
Are you sure you want to destroy openindiana?
This action cannot be undone (y/[n]): y
Destroyed successfully

One could repeat this step as needed to remove other unneeded boot environments.  
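Removing several at once can be scripted. A cautious sketch follows (the BE names are illustrative; the echo makes this a dry run that only prints the commands for review -- drop it to actually destroy, and note that beadm's -F flag skips the confirmation prompt):

```shell
# Dry run: print the destroy commands for review before executing.
# Remove 'echo' to perform the destruction (irreversible!).
for be in openindiana openindiana-1 openindiana-2; do
    echo beadm destroy -F "$be"
done
```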

For the paranoid, to verify the boot environment and its snapshot were truly deleted and the disk space recovered, list the boot environments again and confirm the snapshot no longer appears:

root@castor:/# beadm list -a
BE/Dataset/Snapshot                 Active Mountpoint Space Policy Created
   rpool/ROOT/openindiana-3         NR     /          6.07G static 2013-12-07 14:44
   rpool/ROOT/openindiana-3@install -      -          1.93G static 2012-11-04 17:44

For further reading, refer to Oracle's Solaris documentation on beadm.

NoSQL database acronyms

In my over-simplified view of the world, here is my summary of distributed database acronyms, in the order I remember their introduction.


CRUD

What every web application has to do, repeatedly.

  • Create
  • Read
  • Update
  • Delete


ACID

  • Atomicity      (transaction context where operations all fail or all succeed)
  • Consistency    (data and relationships are valid -- think check constraints and foreign key constraints)
  • Isolation      (one client's uncommitted transaction is invisible to others)
  • Durability     (committed writes survive power loss and other perils)

CAP (aka Brewer's Theorem)

Choose two.

  • Consistency            (all nodes see the same data at the same time)
  • Availability           (every request receives a success/failure response)
  • Partition tolerance    (system keeps operating despite lost or delayed messages between nodes)


BASE

  • Basically Available    (the system always responds, though the answer may be stale)
  • Soft state             (state may change over time, even without a request arriving)
  • Eventual consistency   (the system is asynchronous, stream-like; it becomes consistent after requests cease)


Paxos

  • helps solve the "last writer" problem due to the "speed of light race condition"
  • forms consensus in a network of unreliable processors (e.g. in the presence of packet loss)
  • Paxos is a consensus protocol, a state machine
  • roles: proposer (asserts), acceptor (votes), learner (takes action upon consensus)

datastax-agent tar installation

If you are like me, you sometimes want to install a product in an arbitrary location and not pollute your root filesystem (Linux, UNIX, and Mac OS X).

Normal package-management-based installers (.deb and .rpm formats) require adding the DataStax repository to your list of authorized repositories; see the DataStax installation documentation for details.

The automated install (initiated from the OpsCenter management web application) configures the IP address for the OpsCenter daemon's STOMP listener for you. If you perform a tar-based installation of DSE or OpsCenter, you will need to take some additional steps to configure your datastax-agent (formerly known as "opscenter-agent").

step one - configure stomp listener endpoint

After you untar the dse-x.y.z-bin.tar.gz archive, locate the dse-x.y.z/datastax-agent/conf directory relative to the install directory.  Substitute the DSE version for x.y.z, e.g. dse-4.0.0/datastax-agent/conf.

Edit dse-x.y.z/datastax-agent/conf/address.yaml and set the IP address of the host where you have installed OpsCenter (this could be a different node or host).
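A minimal address.yaml entry might look like the following; the 10.0.0.5 address is illustrative, so substitute the IP of your own OpsCenter host:

```yaml
# address.yaml -- datastax-agent configuration
# Reachable IP address of the host running opscenterd (illustrative value)
stomp_interface: 10.0.0.5
```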


step two - configure path to cassandra.yaml

This step is tricky because the DataStax documentation is misleading. When a tar-based install is conducted, datastax-agent cannot locate the DSE / Cassandra configuration files using a relative path (as of DSE 4.0.0). You will get the following error message:

datastax-agent startup error
ERROR [Initialization] 2014-02-27 07:57:24,306 Exception in thread "Initialization"
ERROR [StompConnection receiver] 2014-02-27 07:57:24,306 failed calling listener
clojure.lang.ExceptionInfo: throw+: {:type :bad-permissions, :message "Unable to locate the cassandra.yaml configuration file. If your configuration file is not located with the Cassandra install, please set the 'conf_location' option in the Cassandra section of the OpsCenter cluster configuration file and restart opscenterd. Checked the following directories: [\"/etc/dse/cassandra/cassandra.yaml\" \"/etc/cassandra/conf/cassandra.yaml\" \"/etc/cassandra/cassandra.yaml\" \"/Users/resources/cassandra/conf/cassandra.yaml\"]"} {:object {:type :bad-permissions, :message "Unable to locate the cassandra.yaml configuration file. If your configuration file is not located with the Cassandra install, please set the 'conf_location' option in the Cassandra section of the OpsCenter cluster configuration file and restart opscenterd. Checked the following directories: [\"/etc/dse/cassandra/cassandra.yaml\" \"/etc/cassandra/conf/cassandra.yaml\" \"/etc/cassandra/cassandra.yaml\" \"/Users/resources/cassandra/conf/cassandra.yaml\"]"}, :environment {tar-location "/Users/resources/cassandra/conf/cassandra.yaml", conf nil, checked-files ["/etc/dse/cassandra/cassandra.yaml" "/etc/cassandra/conf/cassandra.yaml" "/etc/cassandra/cassandra.yaml" "/Users/resources/cassandra/conf/cassandra.yaml"]}}
        at opsagent.util.cassandra_util$cassandra_conf_location.invoke(cassandra_util.clj:118)
        at opsagent.util.cassandra_util$get_cassandra_conf.invoke(cassandra_util.clj:130)
        at opsagent.opsagent$create_thrift_conf_vars.invoke(opsagent.clj:52)
        at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:95)
        at clojure.lang.RestFn.invoke(
        at opsagent.conf$handle_new_conf.invoke(conf.clj:171)
        at opsagent.messaging$message_callback$fn__5192.invoke(messaging.clj:30)
        at opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown Source)
        at org.jgroups.client.StompConnection.notifyListeners(

The workaround for this error is to tell datastax-agent explicitly where to find cassandra.yaml. Relative to your DSE installation directory, the file is at: resources/cassandra/conf/cassandra.yaml

Edit datastax-agent/conf/address.yaml again and add the following directive. Note that this is undocumented and the official documentation is misleading at this time. This should be fixed in an upcoming release, but the workaround will help the impatient in the meantime.

Assuming you have installed DSE 4.0.0 to /apps/dse-4.0.0:

cassandra_conf: /apps/dse-4.0.0/resources/cassandra/conf/cassandra.yaml
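Equivalently, assuming your shell's current directory is the DSE installation directory, the directive can be appended without hand-editing (the path is computed with pwd; adjust if your layout differs):

```shell
# Run from the DSE installation directory (e.g. /apps/dse-4.0.0).
# Appends the absolute cassandra.yaml path to the agent configuration.
echo "cassandra_conf: $(pwd)/resources/cassandra/conf/cassandra.yaml" \
    >> datastax-agent/conf/address.yaml
```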

Restart your datastax-agent using "datastax-agent/bin/datastax-agent", run from the DSE installation directory (/apps/dse-4.0.0 in this example).

Created by Administrator on 07/09/2013