Blog posts for April 2011

OpenWRT 10.03 dnsmasq.conf for gPXE

Using OpenWRT (I'm running Backfire 10.03) and dnsmasq, a MAC-address-based dnsmasq configuration enables diskless network boot using gPXE and iSCSI.  gPXE is the new generation of Etherboot, a network boot PROM.
gPXE supports iSCSI by hooking the BIOS INT 13h disk services and presenting the SAN disk as drive 0x80.  I played with pxelinux also; it has nice menu capabilities but no iSCSI.  I ended up putting pxelinux on a USB flash drive and using gPXE for diskless PC network booting.

# An example of dhcp-boot with an external TFTP server: the name and IP
# address of the server are given after the filename.
# Can fail with old PXE ROMS. Overridden by --pxe-service.

# required by old versions of gPXE (e.g. the one that comes with OracleVM)

# MAC-specific tags
dhcp-mac=net:ECS,00:14:aa:bb:cc:dd       # ECS mobo / Socket AM2 Athlon
dhcp-mac=net:SLI,00:11:bb:cc:dd:ee       # Asus A8N-SLI mobo forcedeth / Socket 939 Athlon X2

dhcp-match=gpxe,175            # tags the request with net:gpxe if the gPXE option was supplied in DHCP request
dhcp-option=175,8:1:1          # turn on the keep-san option to allow iSCSI-capable OS installation


# Last match wins?
#dhcp-boot=pxelinux.0                  # ONLY SET IF want gPXE to run script or chain-load pxelinux
dhcp-boot=net:#gpxe,gpxe.0             # Here #gpxe means 'not gpxe': i.e. the tag is not set        
#dhcp-boot=net:ECS,sanboot-test.gpxe   # run gpxe script
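For reference, a sanboot script like the sanboot-test.gpxe mentioned above might look like this (a sketch only; the target IP and iqn string are made-up placeholders, not values from this post):

```shell
#!gpxe
# Bring up the first NIC via DHCP, then boot from the iSCSI target.
# iSCSI root-path format: iscsi:<server>:<protocol>:<port>:<lun>:<iqn>
dhcp net0
sanboot iscsi:192.168.1.10::::iqn.2010-04.org.example:volMercury00
```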

Solaris 11 iSCSI target configuration

One can implement a virtualized I/O scheme by building a ZFS volume-based iSCSI SAN using Solaris 11 Express without too much gnashing of teeth.  To minimize confusion, note that some COMSTAR administration commands were updated in Solaris 11 Express and differ from their OpenSolaris / OpenIndiana counterparts.

ZFS volumes (sparse or not) form the block devices that will be exported via iSCSI.  ZFS volume snapshots can be made and rolled back to (after first taking the related iSCSI target offline), and ZFS volumes can be cloned and promoted, resulting in easy-to-manage virtual machine duplication.
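A snapshot rollback cycle might be sketched like this (the target iqn and snapshot name are illustrative, not taken from a real setup):

```shell
# Take the iSCSI target offline before rolling the backing volume back
stmfadm offline-target iqn.2010-04.org.example:mercury
zfs rollback rpool/iSCSI/volMercury00@snap1
stmfadm online-target iqn.2010-04.org.example:mercury
```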

This post is part of my how-to guide for configuring a diskless-boot, gPXE, iSCSI SAN solution.  The second half of the solution involves using gPXE to convince your PC to load a Master Boot Record over the network using iSCSI.  The gPXE stuff that defines the target ID (iqn string) based on the PC's MAC address is in this post.

Solaris 11 COMSTAR server (iSCSI target) cheatsheet

svcadm enable iscsi/target

sbdadm list-lu

stmfadm list-lu -v
stmfadm list-view 
stmfadm list-target

stmfadm create-tg
stmfadm add-tg-member -g 
stmfadm list-tg -v

stmfadm offline-target
stmfadm online-target
stmfadm offline-lu
stmfadm online-lu

stmfadm delete-lu
stmfadm create-lu
stmfadm remove-view
stmfadm add-view -h

itadm list-target
itadm list-target -v
itadm delete-target
itadm create-target

Create iSCSI share on Solaris 11 Express (iSCSI server)

create ZFS dataset and volume
zfs create -o mountpoint=none rpool/iSCSI
zfs create -V 8g -s -o shareiscsi=on rpool/iSCSI/volMercury00

create LUN
sbdadm create-lu /dev/zvol/rdsk/rpool/iSCSI/volMercury00
stmfadm list-lu -v

create target group
stmfadm create-tg firstTargetGroup

create target
itadm create-target
itadm offline-target

associate target and target group
stmfadm add-tg-member -g firstTargetGroup
stmfadm list-tg -v

associate target group and LUN
stmfadm add-view -t firstTargetGroup 600144F008002705295A4D86C17F0001
stmfadm list-view -l 600144F008002705295A4D86C17F0001

Clone an iSCSI target using ZFS snapshot

zfs list -t snapshot
zfs snapshot rpool/iSCSI/volMercury00@snap1
zfs clone rpool/iSCSI/volMercury00@snap1  rpool/iSCSI/volPluto00
zfs promote rpool/iSCSI/volPluto00
stmfadm list-lu -v
sbdadm create-lu /dev/zvol/rdsk/rpool/iSCSI/volPluto00
stmfadm create-tg secondTargetGroup
stmfadm add-tg-member -g secondTargetGroup
stmfadm add-view -t secondTargetGroup 600144f05800223e8c054d9e58390001
itadm create-target
itadm list-target

Side Effect

Oops.  Since I defined no host groups and no target groups, an unintended side effect is that the existing target exports the existing LUN as device 0 and the new LUN as device 1 (think /dev/sda and /dev/sdb).
The second target I created shows the existing and new LUNs also.  LUNs 0 and 1 are visible on both target IDs, which I didn't expect.
The impact of this side effect is a shared iSCSI block device (LUN), accessible over two different iSCSI target IDs.
If I recall, I made a couple of snapshots of the ZFS volume, then cloned one snapshot into a new volume and deleted the other snapshot.  I then made a view and target from the new volume.  But when my initiator accesses the new LUN it is actually updating data in the original volume!  Running both corresponding PCs at the same time resulted in lots of filesystem antics and quick corruption.

Must Use Target or Host Filter

To avoid this tangle, we *must* constrain each view to avoid the "shared LUN" scenario.
One approach would be to configure a host group and constrain the view to members of the host group.  A host group member might be declared using the initiator's iSCSI iqn identifier string.  In my gPXE use case, the iqn string created contains the hostname deduced from the DHCP-supplied IP address, but this is a different iqn string than the Linux Open-iSCSI initiator iqn string used when the kernel attaches the iSCSI LUN.  The host group would need an entry for each iqn.
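The host-group approach might be sketched as follows (the group name and initiator iqn strings are made up; the LU GUID is the one from the earlier add-view example):

```shell
# Create a host group and add both initiator iqns (gPXE's and the OS's)
stmfadm create-hg mercuryHosts
stmfadm add-hg-member -g mercuryHosts iqn.2000-09.org.etherboot:mercury
stmfadm add-hg-member -g mercuryHosts iqn.1993-08.org.debian:01:abcdef123456
# Drop the unconstrained view, then re-add it restricted to the host group
stmfadm remove-view -l 600144F008002705295A4D86C17F0001 -a
stmfadm add-view -h mercuryHosts 600144F008002705295A4D86C17F0001
```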
Another approach is to configure a target group and constrain the view to members of the target group.  This approach associates one view with only one target.  One target is created for each view in this approach.  This is a similar pattern to traditional FC-AL attached SAN storage.

IPv6 in 2011?


InteropNet's IPv6 Plans
Posted by Mike Fratto, Editor Network Computing
April 8, 2011

One of the more interesting aspects of attending Interop is seeing the demonstrations that the InteropNet team is putting on. At the upcoming show, the InteropNet is running several IPv6-capable networks that are supporting both exhibitors and attendees. This marks the first show since Interop returned its Class A address space to the American Registry of Internet Numbers (ARIN) in 2010. If you are at Interop, check out the various InteropNet locations offering IPv6. I caught up with some of the InteropNet team by phone to talk about IPv6 while they were hot-staging the network prior to the show.

Gunter Van de Velde, senior technical leader, IPv6, with Cisco and chair of IPv6 Council-Belgian Chapter, says, "The good news is that the equipment interoperates extremely well, and the problems with particular operating systems are well-known. We spent a lot more time getting people familiar with IPv6. Once you know it, configuring for IPv6 is no harder than IPv4."

InteropNet is in a somewhat unique position: A new network is built out for every show, meaning there are no legacy equipment constraints nor concerns about growing the network over time. The Interop team sets up and tears down the network in about 45 days. However, InteropNet also has to support a variety of devices from exhibitors and show attendees. You may not have the luxury of being able to create your networks from scratch, but as Van de Velde points out, transitioning to IPv6 provides an opportunity to take what you have learned managing IPv4 and better plan your IPv6 address space along logical lines.

InteropNet's IPv6 addressing is based, in part, on taking the address allocation received from ARIN and mapping the hexadecimal equivalents of existing IPv4 subnets into the network portion of the IPv6 address. IPv4 subnets and corresponding IPv6 subnets are then run concurrently. IPv6 hosts will either be statically configured (such as critical servers like DNS, syslog, etc.), or hosts will use IPv6 auto-config with EUI-64 to set up their addressing. EUI-64 conversion expands a 48-bit MAC address to 64 bits by inserting the 16-bit value "FFFE" between the OUI and NIC portions of the MAC address. EUI-64 ensures that the host portion of the IPv6 address is unique during auto-config.
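The EUI-64 expansion can be sketched in shell (the MAC value below is made up; note that the modified EUI-64 used by stateless auto-config also flips the universal/local bit of the first octet, a detail the paragraph above omits):

```shell
# Build a modified EUI-64 interface identifier from a MAC address.
mac="00:14:aa:bb:cc:dd"            # illustrative MAC, not from this article
IFS=: read -r a b c d e f <<EOF
$mac
EOF
# Flip the universal/local bit (0x02) of the first octet, then insert
# ff:fe between the OUI (first three octets) and the NIC (last three).
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))
eui64="${a}${b}:${c}ff:fe${d}:${e}${f}"
echo "$eui64"   # 0214:aaff:febb:ccdd
```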

Host addressing will also use stateless auto-config. Auto-config relies on routers sending out router advertisements that provide IPv6 addressing and subnet masks. Auto-config uses the network information sent in the router advertisement and the EUI-64 address based on the host's own MAC address to create a full IPv6 address.  DHCPv6 is used to tell hosts where critical servers are, such as DNS and network time--something that isn't transmitted in router advertisements. To protect the IPv6 network from rogue router advertisements, which are used to create man-in-the-middle situations, RA-Guard, a feature found on many modern IPv6-capable switches, will allow router advertisements only from specific ports and block all the rest.

InteropNet is creating three IP networks. The main network will be a dual-stack IPv4/IPv6 network that will be available to exhibitors and show attendees. Scott Leibrand, content delivery network architect with LimeLight Networks, who has worked on several dual-stack IPv4/IPv6 networks, says, "Dual-stack networks just work and are totally transparent to the end user. Anything that is IPv6 goes over the IPv6 network; everything else goes IPv4. Turn it on and no one sees a difference." Since dual-stack is so seamless, InteropNet wanted to set up additional networks that attendees can use to see what happens with network translation and in a pure IPv6 world.

The other two IPv6 demonstration networks will be WLANs in an area designated for hands-on experimentation. One will be an IPv6 network with NAT64 to perform translation between IPv6 hosts and IPv4 networks. This will give you a taste for what it will be like to transition to IPv6. The other network in the same area will be IPv6 without NAT64--so a pure IPv6 network. On this network, you will be able to see some of the problems that hosts will have. For example, Microsoft Windows XP supports IPv6, but can only make queries via IPv4. If you have Windows XP computers in an IPv6-only network, they won't be able to resolve DNS addresses. You're going to have to pry your fingers off your Windows XP systems if you haven't already.

Talking with Leibrand and Van de Velde, it's not surprising when they say getting IPv6 running isn't hard if you understand IPv6 and have a plan to follow. The latter is the critical part--having a plan. I bet in a number of ways, IPv6 networking is going to be easier than you think. Getting up to speed on IPv6 and laying out migration plans is going to be a big focus area for Network Computing in the coming years. You can bet I will be checking out InteropNet's IPv6 features with whatever gear I can get my hands on. Hope you will, too.

Debug Windows 7 iSCSI boot

Having determined that, for a new target with syslinux mbr.bin added, gPXE or my Asus BIOS fails to return/continue to the next bootable device after PXE runs gPXE (Etherboot), I decided to try the "image copy" approach.  More pain ensued.

I installed Win7 to a SATA disk with a 16 GB partition.  I then transferred this image to my iSCSI target (ZFS volume).  The target is large enough to contain the 16 GB plus the bootloader (0x200 bytes or so).
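The transfer itself can be sketched from a Linux host with the target attached via Open-iSCSI (a sketch only; the device names are assumptions, so verify them before running dd):

```shell
# /dev/sda: local SATA disk holding the Win7 install
# /dev/sdb: the iSCSI-attached ZFS volume (check lsscsi/dmesg first!)
dd if=/dev/sda of=/dev/sdb bs=1M conv=fsync
```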

Windows starts booting via gPXE.  Unfortunately the boot process doesn't complete, and I suspect incomplete iSCSI configuration is the cause.  I did nothing special to configure Windows 7 with the iSCSI target string like I did with Linux.

Tracing packets shows good progress up to a point, then the initiator/client disappears/goes offline.  I've also noticed the boot progress stall at points where the initiator spits out many TCP RSTs (resets).

Ugh.  Do I really have to run msdbg?

Charles McDonald III sings Folgers Jingle

My cousin Charlie has a rich voice, not unlike Folgers coffee.  I bet he wins the contest for their new jingle.

XWiki Syntax Etymology

XWiki 1.0 syntax was inspired by both Radeox and TWiki.

Starting with XWiki Enterprise version 1.7 the default markup is XWiki Syntax v2.0.

Authors rationalized the XWiki markup by using at least doubled characters everywhere.  This resolved certain ambiguities, e.g. distinguishing a bold word at the start of a line from a bulleted list item.  Resolving the ambiguities enabled a deterministic WYSIWYG editor.
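For example, the doubled characters in XWiki Syntax 2.0 make the two cases unambiguous:

```text
**bold text** starting a line
* a bulleted list item
```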

XWiki Syntax v2.0 moves closer to the Creole 1.0 syntax which is becoming a standard for wiki syntax.  The Creole community has taken the time to analyze all the existing wiki syntaxes before deciding on symbols.  The choices made are thus very good.

The XWiki rendering engine is a superset wrapper around Wikimodel and Doxia and supports these popular wiki syntaxes:

  • MediaWiki
  • Confluence
  • JSPWiki
  • Creole
  • TWiki
  • XWiki v1 & v2

Groovy Tutorial

Read and type in each example using a Groovy console or a Groovy web application like XWiki.

Getting Started

Differences From Java

Things to remember

Things to forget

New features added to Groovy not available in Java

  • closures
  • native syntax for lists and maps
  • GroovyMarkup and GPath support
  • native support for regular expressions
  • polymorphic iteration and powerful switch statement
  • dynamic and static typing is supported - so you can omit the type declarations on methods, fields and variables
  • you can embed expressions inside strings
  • lots of new helper methods added to the JDK
  • simpler syntax for writing beans for both properties and adding event listeners
  • safe navigation using the ?. operator, e.g. "variable?.field" and "variable?.method()" - no more nested ifs to check for null clogging up your code
Created by Administrator on 07/09/2013