Difference between a customer and a client

There’s a difference between being a customer and being a client.  People on both ends of a relationship always get into trouble when they don’t understand this.

A customer – is always right

A client – expects to hear the truth, especially when it’s unpleasant

This is true in professional relationships.  Doctors, lawyers and accountants are expected to give honest advice; in that respect we are their clients.  But we are also their customers, and customers are always right: reports should be on time, of high quality, and delivered at a fair price.

The same goes for religious relationships.  When we are in the worship service, we are the clients of the person leading the show.  But we are also customers, and customers are right: the coffee should taste good, the roof shouldn’t leak, and we have every right to be treated with respect.

In higher education the student customer should get good value for their tuition dollar, grades should arrive on time, campus services should be superb and their teachers should be engaged and respectful.  But the student is the client of the teacher when it comes to grades and feedback on their work.

The relationship between the regulated and the regulator is a mix of services (a customer relationship) and policy and compliance (stakeholder and client relationships).

Be careful when you cross the streams.



ICANN participants

I do these scratch-pad posts for really narrow audiences; the rest of you will find them a bit bewildering.  Sorry about that.  This one is aimed at the GNSO Council, as we ponder the question “how do we increase the pool of PDP working-group volunteers?”

Broadening the bottom of the bottom-up process is a critical need at ICANN right now. But at least in the part of ICANN where I live (GNSO policy-making working groups) the conversations that take place are very nuanced and do require a good deal of background and experience before a person is going to be an effective contributor to the conversation.

So I think that we/ICANN need to develop a clearer understanding of the many different roles that people play as they progress toward becoming an effective participant in the process. And then put the resources and process in place to encourage them along the way.  This is my current picture…



Here’s a starter-kit list of roles that people play. I’m putting them in pairs because nobody can do this by themselves — we all need the help of others as we progress. I’ve also built a little drawing which puts these in a never ending circle because we’re always turning back into newcomers as we explore another facet of the organization. I decided to beat the term “translation” to death in these descriptions.  I think ICANN needs to “translate” what it does for a wide range of audiences to make it easier for them to participate.

Newcomer <-> Recruiter

A newcomer is likely to be just as bewildered by that experience as most of the rest of us have been. They need a “recruiter” to greet them, welcome them into the flow, translate what’s going on into terms they can understand, find out what their interests and goals are, and get them introduced to a few “guides” who can take them to the next step.

Explorer <-> Guide

As the newcomer finds their place, they will want to explore information and conversations that are relevant to their interests and they need a “guide” to call on to translate their questions into pointers toward information or people that they’re trying to find.

Student <-> Teacher

As the person progresses they need a positive, low-risk way to learn the skills and knowledge they need in order to be able to contribute. And, like any student, they need a teacher or two. I’ve always thought that we are missing a huge opportunity in the GNSO Constituencies by not consciously using the process of preparing public comments as a place for less experienced members to develop their policy-making skills in a more intimate, less risky environment than a full-blown working-group. I’d love to see newer members of Constituencies consciously brought into progressively richer roles in the teams that write public comments for Constituencies.

Researcher <-> Expert

Another person who needs a very specific kind of partner is a person who comes to ICANN to research — either to find an answer to a policy-related question, find the best way to handle a problem or complaint that they have with a provider, or to discover whether there is data within the ICANN community that can help with formal academic research. Again, here is a person with fairly clear questions who needs help sifting and sorting through all the information that’s available here — another form of translation, this time provided by a librarian or an “expert” in my taxonomy.  This person may not want to build new skills, they’re just here for answers.  But filling that “expert” role could be a great opportunity for somebody who’s already here.

Teammate <-> Coach

A person who is experiencing a policy-making drafting-team (e.g. within a constituency) or working group for the first few times has a lot of things to learn, and many of those things aren’t obvious right at the start. And this person may not feel comfortable asking questions of the whole group for a wide variety of reasons. They would benefit from a “coach” — a person who makes it clear that they are available to answer *any* question, no matter how small. This person is translating a sometimes-mysterious team process for a teammate who is learning the ropes.

Leader <-> Mentor

As our person progresses, they eventually take up a leadership role, and once again could use the help of others to navigate new duties — yet another form of translation, this time delivered by a mentor who helps the emerging leader be effective in their chosen role.


I also think there are all kinds of information assets that participants use and access in different ways depending on what their role is at the moment. Another kind of translation! 🙂 Here’s another starter-kit list:

  • Organizational structures
  • Documents
  • Transcripts
  • Email archives
  • Models
  • Processes
  • Tools and techniques
  • Outreach materials

I think there’s a gigantic opportunity to make this “career progression” and “information discovery” easier and more available to people wanting to participate at the bottom of the bottom-up process. I’m not sure that there’s much need for new technology to do all this — my thoughts run more toward setting goals, rewarding people who help, etc. But a dab of cool tech here and there might help…

Using Ableton Live to drive Logic Pro X

This is another scratchpad post to remind myself how I set up two of my favorite digital audio workstations (Ableton Live and Apple Logic Pro X) to run at the same time.  I like facets of each of these systems and want to have the best of both worlds — the live-performance flexibility of Live and the instruments and signal processing of Logic.  In some perfect future, Logic will run as a Rewire slave and a fella won’t have to do all this goofy stuff.  Until then, this is a set of notes on how I do it.  Your mileage may vary.  I’ll try to respond to your questions as best I can (click HERE to contact me) — but I’ll be sluggish; don’t count on a reply in anything less than 24 hours.


The goal is to use MIDI coming from Live to control instruments in Logic, and get that audio back into Live.  This is where you’re headed and this diagram may be all you need.

Update – March, 2017

This revised version of the post…

Changes the audio transport mechanism from Soundflower to Loopback (by Rogue Amoeba Software).

Soundflower is no longer being actively supported and I haven’t been able to get it working properly since OSX 10.9.  Loopback is great stuff and I heartily recommend it — but it isn’t free.

Revises the Logic and Live templates to reflect this change in audio processing.

Advises against using this approach if your Live workflow includes controllers such as Ableton Push or NI’s Komplete Kontrol.

Remember, all this post does is describe how to convert Logic into a software instrument that’s available to Live (as if Logic were a Rewire slave to Live).  There is a strong presumption that Live is the “Master” and Logic is the “Slave.”

Logic breaks controllers like Push and Komplete Kontrol because it grabs them away from Live as it starts up.  Komplete Kontrol works fine under Logic, but it’s completely disabled in Live (except for MIDI).  Push shuts down entirely as soon as Logic is running.  Rats.

If you rely on controllers that don’t use MIDI to communicate with the DAW (like Push) my suggestion is to use this dual-DAW configuration in a separate (controller-free) session, capture the Logic sound in Live audio tracks, dump out of Logic and complete your work in Live-only sessions with your controllers back in the workflow.


16-channel project templates for Live and Logic

Here are links to two project files which you are welcome to try out as a template.  They’re set up to do 14 channels of audio and MIDI.  Why not 16, you ask?  Because this template includes a B3 organ instrument in Logic, which consumes 3 MIDI channels all by itself.  The configuration steps to set up the environment are still required, but you should then be able to load these up as a starting point.

Zip archive of the (revised) Logic and Live templates

Excellent video introduction to the Loopback software

I highly recommend you watch this video which, at about minute 3, walks you through setting up a two-channel audio connection between Logic and Live that is exactly the same as what this tutorial shows.


Quick Checklist

After a few times through, this checklist may serve as a useful shorthand reminder of the steps that are required.  It’s basically a table of contents of the rest of the post.

Disconnect all external MIDI devices

Install Loopback

Set up the IAC bus

Click the “Device is online” button

Optional: rename the port

Open Live first.

Configure Preferences in Live

Configure Audio Preferences in Live to recognize Loopback as its audio input

Configure MIDI Preferences in Live to recognize the IAC Driver

Open a new or existing project in Live

Drag External Instruments into empty MIDI tracks

Configure the External Instrument MIDI output(s) to send it to Logic via the IAC driver

Configure the External Instrument’s Audio input to receive audio back from Logic

Open Logic

Configure global preferences in Logic

Un-tick “Control surface follows track selection” in Logic Pro > Control Surface > Preferences

Set the global Audio Output device to Loopback

Open a new or existing Logic project

Set project-level configuration preferences (only required for multitrack work)

Select “Auto demix by channel” if multitrack recording

Configure the project to only listen to MIDI from the IAC MIDI input (this is an essential step — skipping this will result in all sorts of weird errors as MIDI flows directly from sources rather than through Live)

Open the Environment window

Select the “Click and ports” layer

Delete the connection between the “sum” and “input notes” objects

Create a connection between the inbound IAC MIDI port and the “input notes” object

Create or select Software Instrument track(s)

Assign MIDI channels to correspond with the MIDI-To settings in Live

Record-arm the track(s)

Switch back to Live

Test the configuration

Reestablish external MIDI controllers in Live


Assign B3 Organ instruments FIRST, and only to MIDI channels 1,2 and 3

Drummer tracks don’t respond to external MIDI


Controller (e.g. Ableton Push or Komplete Kontrol) stops working in Live. 

IAC Driver can’t be selected as a “MIDI to” destination in Live (it’s greyed out)

All channels sound in Logic if any channel is record-armed in Live

Two channels sound in Logic, the external-keyboard channel and the record-armed one

Instruments sound in Logic even if no Live tracks are record-armed

Instruments stop responding to MIDI as new channel strips are added in Logic

Step by Step:

Disconnect all external MIDI devices

First get your template working without the complications of stray MIDI coming from your devices (use the computer-keyboard to generate notes in Live), then add external sources of MIDI back in one at a time and debug any conflicts.

Install Loopback

Download Loopback HERE.

Configure Loopback

Loopback provides several ways to get this job done.  I’ve chosen to set it up as a simple “loopback” device (no audio source) with 32 channels (16 stereo pairs, added manually) and a label to help me identify it when setting preferences in the DAWs.  Here’s how it looks in Loopback:

And here’s how it looks in Audio MIDI Setup:

Set up the IAC bus (used to pass MIDI signals from Live to Logic).

It’s in the MIDI window of Audio MIDI Setup.  If this is the first time the IAC bus has been used, the IAC icon will likely be greyed out.


Tick the “Device is online” box to bring it online

Optional: rename the port (by clicking on the name and waiting for it to turn into an edit box).

It will have 16 MIDI in/out ports even though the “Connectors” boxes are greyed out.  Here’s the way the IAC Driver Properties dialog will look when it has been put online and the port has been renamed (note: this is the name used in the template files; either rename the port or revise the MIDI routing in Live and Logic to match).



Open Live first.

Opening Logic first may cause Logic to launch as a Rewire host, and Live will then automatically open as a Rewire slave.  The whole goal of this exercise is to have Live act as the master, not Logic.

Configure Preferences in Live

Configure Audio Preferences in Live to recognize Loopback as its audio input.  This tutorial uses the 32-channel Loopback device configured above.  Use a 2-channel Loopback device for single-instrument configurations; multi-voice setups need the 32-channel option if mixing is going to be done in Live.  A 2-channel Loopback device will work if multi-channel audio is going to be mixed in Logic before it is brought into Live.  Use smaller buffer sizes if latency becomes an issue for live performance.  Note that sample sizes and sample rates are set in Loopback, Live and Logic.

Note for multi-channel configurations:  To make some or all of the channels of the Loopback device visible to External Instruments, toggle them in Preferences > Audio > Input Config
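On the buffer-size point: the latency contributed by the audio buffer follows directly from the buffer size and the sample rate.  Here’s a quick back-of-the-envelope sketch (plain Python; the buffer sizes are just illustrative values):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Latency contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# A 256-sample buffer at 44.1 kHz adds about 5.8 ms per buffer;
# dropping to 64 samples cuts that to about 1.5 ms.
for size in (64, 128, 256, 512):
    print(f"{size:4d} samples @ 44100 Hz -> {buffer_latency_ms(size, 44100):.1f} ms")
```

Keep in mind the full round trip (Live out, Loopback, Logic, Loopback, Live in) stacks several buffers, so small settings matter more here than in a single-DAW setup.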

Configure MIDI Preferences in Live to recognize the IAC Driver.  Only enable the IAC driver for MIDI Output to Logic.  Getting MIDI Input data from Logic risks creating MIDI loops, so leave that option turned off.


Open a new or existing project in Live.

Feel free to download the Live template I’ve posted in the “Resources” section near the beginning of this post.

Drag External Instruments into empty MIDI tracks

Configure the External Instrument MIDI output(s) to send it to Logic via one of the MIDI channels of the IAC driver (Channel 1 in the picture below).  In multichannel configurations this is the Live end of the MIDI mapping to Logic – these channel assignments are mapping the MIDI from Live into the corresponding channel in Logic.

Configure the External Instrument’s Audio input to receive audio back from Logic.  Since Loopback has been selected as the global audio-input source for Live in Preferences, the channel selections will all refer to Loopback.  The single-number options refer to single channels; the options with two vertical bars refer to stereo pairs.  The stereo pairs are the likely choice in most situations.
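In multichannel setups it’s easy to lose track of which Live track drives which MIDI channel in Logic.  Here’s a toy bookkeeping sketch (plain Python; the track names and channel numbers are invented for illustration) of the Live-track-to-IAC-channel mapping, including the B3 organ’s three reserved channels:

```python
# Hypothetical mapping: each Live External Instrument track sends on the
# listed IAC MIDI channel(s); a record-armed Logic software-instrument
# track listens on the same channel.  B3-style organs reserve 1-3.
live_to_iac = {
    "B3 Organ": (1, 2, 3),  # assigned FIRST, uses three channels
    "E-Piano":  (4,),
    "Brass":    (5,),
}

def channels_in_use(mapping):
    """Flatten the mapping into the full list of MIDI channels used."""
    return [ch for chans in mapping.values() for ch in chans]

def has_conflict(mapping):
    """True if two Live tracks are aimed at the same MIDI channel."""
    used = channels_in_use(mapping)
    return len(used) != len(set(used))
```

A table like this (even on paper) makes the later Logic-side channel assignments, and the B3 troubleshooting notes below, much easier to follow.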


Open Logic.

Ignore the warning that another Rewire host is running – this is the correct  behavior, we don’t want Logic to be the host.

Configure global preferences in Logic

Un-tick “Control surface follows track selection” in Logic Pro > Control Surface > Preferences.


Set the global Audio Output device to the Loopback device.  The 32-channel version is used in this setup.

Open a new or existing Logic project.

Feel free to download the Logic template I’ve posted in the “Resources” section near the beginning of this post.

Set project-level configuration preferences

The steps in this section are to overcome the somewhat wonky multi-channel MIDI routing in Logic and are not required for driving a single channel in Logic from Live

Select “Auto demix by channel if multitrack recording” in File > Project Settings > Recording


Configure the project to only listen to MIDI from the IAC MIDI input.  Projects default to listening to all instruments.  This causes endless trouble with MIDI loops.  These steps force Logic to only listen to the MIDI being sent from Live.  Here are the steps:

Open the Environment window – Window > Environment
Select the “Click and ports” layer
Delete the connection between the “sum” and “input notes” objects
Make a connection only between the inbound IAC MIDI port (which is where the MIDI events from Live will be coming from) and the “input notes” object.

Click on this photo to get a full-sized version — it’s hard to see, but the second little triangle, coming from IAC Live -> Logic, is the only one that’s connected to the “Input Notes” object


Set up the tracks in Logic

Create or select Software Instrument track(s). 

Assign MIDI channels to correspond with the MIDI To settings in Live

Record-arm the track (or tracks)

Click on this photo to get a full-sized version — note that all channels are armed for recording, and each has a different MIDI channel assigned as seen with the light number in brackets immediately to the right of each track name.


Use the Mixer window to assign audio channels to tracks in Logic.  This is only required for multi-channel configurations. Click this picture to expand it to full size and take a hard look at the “Output” row, that’s where the assignments are made.


Switch back to Live

Test the configuration.  Logic should now respond to MIDI events from Live.  Enable the Computer MIDI Keyboard function in Live, record-arm a track or two in Live and type a few A’s, S’s and D’s on the computer keyboard.  Notes should sound in Logic on the tracks corresponding to the ones that have been record-armed in Live.

Reestablish external MIDI controllers in Live.   Bring each external controller back into the Live configuration one at a time and iron out any wrinkles that may appear.  In general problems will be caused either if MIDI events leak into Logic directly rather than being forced to pass through Live first or because Logic “took over” a controller (e.g. Push or Komplete Kontrol).  Debugging all possible problems with external controllers is beyond the scope of this post.  But likely fixes will be in Logic’s MIDI Environment.


Assign B3 Organ instruments FIRST, and only to MIDI channels 1, 2 and 3.  B3-type instruments (e.g. Vintage B3 Organ or the Legacy series of B3 organs) require more than one MIDI channel, and those channel assignments default to MIDI channels 1, 2 and 3.  Adding one of these instruments “on top” of already-assigned instruments causes unusual breakage, so it’s a good idea to avoid these channels for anything except B3 instruments.

Drummer tracks don’t respond to external MIDI – either export the track to an audio file for use in Live, or follow these steps to create a Software Instrument track that mimics the Drummer track but that will respond to MIDI

Select/create a Software Instrument track

Copy the channel strip settings from the Drummer track (right-click on the name of the channel strip and select “Copy Channel Strip Setting”)

Paste the channel strip settings into the Software Instrument Track

Adjust MIDI and audio settings in the new channel strip

Bonus – copy/paste regions from the Drummer track into the new Software Instrument track to get MIDI renditions of the region – which can then be exported into Live and used to trigger the drummer

Here’s a drawback to staying with MIDI rather than exporting to audio: the “automatic hi-hat” components of the Drummer don’t come across, because they’re not embedded in the MIDI.  Best to turn that feature off in the Drummer settings if MIDI is the format you’re using.


Controller (e.g. Ableton Push or Komplete Kontrol) stops working in Live.  This is true.  Logic grabs those controllers away from Live when it starts.  I have no fix, only a workaround: use this dual-DAW configuration sparingly, then shut down Logic and complete your production in Live by itself.

IAC Driver can’t be selected as a “MIDI to” destination in Live (it’s greyed out)

  • use the Audio Midi Setup app to confirm that the IAC Driver “device is online” box is ticked
  • confirm that the IAC output MIDI ports are enabled in Live -> Preferences -> MIDI

Two channels sound in Logic, the external-keyboard channel and the record-armed one – check the project’s Environment window to make sure that Logic is only receiving MIDI from the IAC driver.

Instruments sound in Logic even if no Live tracks are record-armed – check the project’s Environment window to make sure that Logic is only receiving MIDI from the IAC driver

Instruments stop responding to MIDI as new channel strips are added in Logic – check to make sure that the “disappearing” instruments aren’t assigned to MIDI channels 1, 2 or 3 if there’s a B3 organ instrument in the mix (which uses all three of those channels).  One fix is to delete or mute the MIDI channel 2 and 3 strips in Live if a B3 organ is part of the mix in Logic.

Name Collisions II — A call for research

This post is a heads up to all uber-geeks about a terrific research initiative to try to figure out causes and mitigation of name-collision risk.  There’s a $50,000 prize for the first-place paper, a $25,000 prize for the second-place paper and up to five $10,000 prizes for third-place papers.  That kind of money could buy a lot of toys, my peepul.  And the presentation of those papers will be in London — my favorite town for curry this side of India.  Interested?  Read on.  Here’s a link to the research program — you can skip the rest of this post and get right to the Real Deal by clicking here:


Background and refresher course — what is the DNS name-collision problem?

Key points

  • Even now, after months of research and public discussion, I still don’t know what’s going to happen
  • I still don’t know what the impact is going to be, but in some cases it could be severe
  • Others claim to know both of those things but I’m still not convinced by their arguments right now
  • Thus, I still think the best thing to do is learn more
  • That’s why I’m so keen on this research project.

Do note that there is a strong argument raging in the DNS community about all this.  There are some of us (myself included) who had never met, or even heard from, the DNS purists who now maintain that this whole problem is our fault and that none of it would have happened if we’d all configured our private networks with fully-qualified domain names right from the start.

Where were those folks in 1995 when I opened my first shrink-wrapped box of Windows NT and created the name that would ultimately become the root of a huge Active Directory network with thousands of nodes?  Do you know how hard it was to get a domain name back then?  The term “registrar” hadn’t been invented yet.  All we were trying to do was set up a shared file, print and mail server, for crying out loud.  The point is that there are lots of legacy networks that look like the one depicted below; some of them are going to be very hard and expensive to rename, and some of them are likely to break (perhaps catastrophically) when second-level names in new gTLDs hit the root.  m’Kay?


Private networks, the way we’ve thought about them for a decade

Here’s my depiction of the difference between a private network (with all kinds of domain names that don’t route on the wider Internet) and the public Internet (with the top-level names you’re familiar with) back in the good old days before the arrival of 1400 new gTLDs.



Private networks, the way they may look AFTER 1400 new gTLDs get dropped into the root

The next picture shows the namespace collision problem that the research efforts should be aimed at addressing.  This depiction is still endorsed by nobody, your mileage may vary, etc. etc.  But you see what’s happening.  At some random point in the future, when a second-level name matching the name of your highly-trusted resource gets delegated, there’s the possibility that traffic which has consistently been going to the right place in your internal network will suddenly be routed to an unknown, untrusted destination on the worldwide Internet.



The new TLDs may unexpectedly cause traffic that you’re expecting to go to your trusted internal networks (or your customer’s networks) to suddenly start being routed to an untrusted external network, one that you didn’t anticipate.  Donald Rumsfeld might call those external networks “unknown unknowns” — something untrusted that you don’t know about in advance.

Think of all the interesting and creative ways your old network could fail.  Awesome to contemplate, no?  But wait…

What if the person who bought that matching second-level name in a new gTLD is a bad-actor?  What if they surveyed the error traffic arriving at that new gTLD and bought that second-level name ON PURPOSE, so that they could harvest that error traffic with the intention of doing harm?  But wait…

What if you have old old old applications that are hard-coded to count on a consistent NXDOMAIN response from a root server?  Suppose that the application gets a new response when the new gTLD gets delegated (and thus the response from the root changes from the expected NXDOMAIN to an unexpected pointer to the registry).  What if the person that wrote that old old old application is long gone and the documentation is…  um…   sketchy?  But wait…
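To make that failure mode concrete, here’s a tiny simulation (plain Python; no real DNS is queried, and the names, addresses and fallback logic are all invented for illustration) of an application that treats NXDOMAIN as its signal to use an internal server:

```python
# Simulated resolver: a set of delegated TLDs stands in for the root zone.
NXDOMAIN = None

def resolve(name, delegated_tlds):
    """Toy lookup: answer only if the name's TLD exists in the 'root'."""
    tld = name.rsplit(".", 1)[-1]
    if tld not in delegated_tlds:
        return NXDOMAIN
    return "203.0.113.7"  # some external host we never meant to reach

def legacy_app_target(delegated_tlds):
    """Old app logic: try the internal name, fall back on NXDOMAIN."""
    answer = resolve("fileserver.corp", delegated_tlds)
    if answer is NXDOMAIN:
        return "10.0.0.5"  # hard-coded internal fallback
    return answer          # now silently routed to the public Internet

before = legacy_app_target({"com", "org", "de"})           # internal fallback
after = legacy_app_target({"com", "org", "de", "corp"})    # external host
```

Before the matching TLD is delegated, the function returns the internal address; afterwards the identical code path hands back an address on the public Internet, with no error raised anywhere.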

To top it all off, with this rascal, problems may look like a gentle random rain of breakage over the next decade or so as 2nd-level names get sold.  It’s not going to happen on gTLD-delegation day, it’s going to happen one domain at a time.  Nice isolated random events sprinkled evenly across the world.  Hot damn.  But wait…

On the other end of the pipe, imagine the surprise when some poor unsuspecting domain-registrant lights up their shiny new domain and is greeted by a flood of email from network operators who are cranky because their networks just broke.  What are THEY going to be able to do about those problems?  Don’t think it can happen?  Check out my www.corp.com home page — those cats are BUSY.  That domain gets 2,000,000 error hits A DAY.  Almost all of it from Microsoft Active Directory sites.

So argue all you want.  From my perch here on the sidelines it looks like life’s going to get interesting when those new gTLDs start rolling into the root.  And that, dear reader, is an introduction to the Name Collision problem.


Mitigation approaches.

Once upon a time, 3 or 4 months ago when I was young and stupid, I thought this might be a good way to approach this problem.  I’m going to put it in this post as well, but then I’m going to tell you why it won’t work.  It’s another reason why we need this research, and we need it now.

Start here:

If you have private networks that use new gTLDs (look on this list) best start planning for a future when those names (and any internal certificates using those names) may stop working right. 

A bad solution:

In essence, I thought the key to this puzzler was to take control of when the new gTLDs become visible to your internal network.  It’s still not a terrible idea, but I’ve added a few reasons why it won’t work down at the end.  Here’s the scheme that I cooked up way back then.

By becoming authoritative for new gTLDs in your DNS servers now, before ICANN has delegated them, you get to watch the NXD error traffic right now rather than having to wait for messages from new registries.  Here’s a list of the new gTLDs to use in constructing your router configuration.
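One concrete way to become authoritative, as a sketch: declare an empty zone for each new gTLD in your local resolver, so those queries are answered (and can be logged) internally instead of escaping to the root.  This is a hypothetical BIND-style fragment; the zone name and file path are placeholders, and the empty zone file needs only an SOA and an NS record:

```
// named.conf fragment (illustrative only).
// Claim local authority for a new gTLD so queries never leave your network.
zone "shop" {
    type master;
    file "db.empty";   // empty zone: just an SOA and an NS record
};
```

Repeat one zone stanza per gTLD on the list; your resolver’s query log then becomes the NXD-traffic report described above.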



This is the part where you look at the NXD traffic and find the trouble spots.  Then, with a mere wave of my hand and one single bullet point, I encourage you to fix all your networks.  Maybe you’ve got a few hundred nodes of a distributed system all over the world that you need to touch?  Shouldn’t be a problem, right?



This is the Good Guy part of this approach.  Because we all subscribe to the One World, One Internet, Everybody Can Reach Everything credo, we will of course remember to remove the preventative blocking from our routers just as soon as possible.  Right?  Right?


The reasons why this won’t work:

The first thing that blows my idea out of the water is that you probably don’t have complete control over the DNS provider your customers use.  I still think this is a pretty good idea in tightly-run corporate shops that don’t permit end users to modify the configuration of their machines.  But in this Bring Your Own Device world we live in, there’s going to be a large population of people who configure their machines to point at DNS providers who aren’t blocking the names that conflict with your private network space.

Let’s assume for a minute that everything is fine in the internal network, and the corporate DNS resolver is blocking the offending names while repairs are being made (hopefully cheaply).  Suppose a road warrior goes out to Starbucks and starts using a laptop that’s configured to point at Google’s DNS resolver.  In the old days before new gTLDs, the person would fire up their computer, go to the private name, the query would fail, and they would be reminded to fire up the VPN to get to those resources.  Tomorrow, with a conflicting new gTLD in the root, that query might succeed, but they wouldn’t be going to the right place.



Here’s the second problem.  My tra-la-la scheme above assumes that most mitigation will be easy, and successful.  But what if it’s not?  What if you have a giant Active Directory tree which, by all accounts, is virtually impossible to rename without downtime?  What if you have to “touch” a LOT of firmware in machines that are hard-wired to use new gTLDs?  What if vendors haven’t prepared fixes for the devices that are on your network looking at a new gTLD with the presumption that it won’t route to the Internet (yet now it does)?  Or the nightmare scenario — something breaks that has to be diagnosed and repaired in minutes?


The research project

See why we need you to look hard at this problem?  Like, right now??  ICANN is already delegating these domains into the root.  Here’s a page that lists the ones that have already been delegated.


If you see one of your private network names on THIS list, you’re already in the game.  Hooyah!  So this is moving FAST.  This research should have been done years ago, long before we got to this stage.  But here we are.  We, the vast galaxy of network operators and administrators who don’t even know this is coming, need your help.  Please take a look at the NameCollisions.net site and see if you can come up with some cool ideas.  I hope you win — because you’ll help the rest of us a lot.  I’ll buy you a curry.



Commentary on Fadi Chehadé’s Montevideo Statement


I love toiling at the bottom of the bottom-up ICANN process.  And it’s also quite entertaining to watch senior ICANN “managers” running wild and free on the international stage. The disconnect between those two things reminds me of the gulf that usually exists between the faculty and administration in higher education institutions.  Both sides think they run the joint.  That same gulf exists in ICANN and, while I was hopeful for a while that the new guy (Fadi Chehadé) was going to grok the fullness, it’s starting to slide into the same old pattern.

The picture above is of the last guy (Rod Beckstrom)

The audio file linked below is a 2-minute mashup of the new guy Fadi’s (quite unsatisfactory) answer to Kristina Rosette’s recent question about whether he has community and Board air cover for a recent (pretty controversial) statement he made.  Pretty inside baseball for all you regulars, but it may map pretty well to your situation even though the details differ.



Click HERE to listen to the 2-minute clip (but turn the volume down if you do it at work).


Here’s a link to the public post that points at the various transcripts of the call


Interestingly, while the audio transcript is still available, the links to the written transcripts that are contained in that email have disappeared.  Since I have copies of those files from when I downloaded them earlier today, I’ve posted them to this site.  Here are links to those missing documents.

Word document — full transcript — click HERE

Word document — Adobe Chat transcript — Click HERE

What if people stop trusting the ICANN root?

Courtesy FreeDigitalPhotos.net

So once upon a time I worked at a terrific ISP in St. Paul, MN.  Back then, before the “grand bargain” that led to the shared hallucination known as ICANN, there were several pretty-credible providers of DNS that later (somewhat disparagingly) became known as “alternate” root providers.

In those days, we offered our customers a choice.  You could use our “regular” DNS that pointed at what later became the ICANN-managed root, or you could use our “extended” DNS servers that added the alternates.  No big deal, you choose, your mileage may vary, if you run into trouble we’d suggest that you switch back to “regular” and see if things go better, let us know how you like it, etc.

Well.  Fast forward almost 20 years…

The ICANN root is getting ready to expand.  A lot — like 1200 new extensions.  Your opinion about this can probably be discerned from your answer to the following question.  Is that expansion of the number of top-level names (a la .com, .org, .de) more like:

  • going from 20 to 1200 kinds of potato chips, or
  • going from 20 to 1200 kinds of beer?

Whatever.  The interesting thing is that suddenly the ICANN root is starting to look a lot more like our old “extended” DNS.  Kinda out there.  Kinda crazy.  Not quite as stable.  And that rascal may cause ISPs and network administrators a lot of headaches.  I’ve written a post about the name-collision issue that describes one of these puzzlers.

If those kinds of problems crop up unexpectedly, ISPs and their network administrator customers are going to look for a really quick fix (“today!”…  “now!”…).  If you’re the network admin and your whole internal network goes haywire one day, you don’t have time to be nice.  The bosses are screaming at you.  You need something that will fix that whole problem right now.  You’ll probably call your ISP for help, so they need something that will help you — right now.

One thing that would fix that is a way to get back the old “regular” DNS, the one we have now, before all those whizbang new extensions.  You know, like Coke Classic.  I know that at our ISP, we’d probably be looking for something like that.  We’d either find one, or roll our own, so we could offer it to customers with broken networks.
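For what it’s worth, a modern resolver can fake up that “Coke Classic” view with a couple of lines of configuration.  Here’s a sketch assuming Unbound; the two strings are arbitrary examples, and you’d generate one line per new gTLD you wanted to suppress:

```
# unbound.conf fragment: answer NXDOMAIN locally for selected new gTLDs,
# so clients see the root roughly the way it looked before the expansion.
# "guru" and "ninja" are examples only, not a recommendation.
server:
    local-zone: "guru." always_nxdomain
    local-zone: "ninja." always_nxdomain
```

Clients pointed at this resolver would simply never see those TLDs resolve, which is exactly the emergency behavior I’m describing.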

We’d be all good tra-la-la network citizens about it — “don’t forget to switch back to the ICANN root when your networks are fixed” and so forth.  But it would get the emergency job done (unless your DNS is being bypassed by applications, but that’s a topic for another post).

That means that the ICANN root might not forever be the first-choice most-trusted root any more.  Gaining trust is slow and hard.  Losing trust can happen in a heartbeat.  I can’t speak for today’s ISPs, but back in the day we were not shy about creative brute force solutions to problems.

We might stop trusting the ICANN root and go looking for a better one.

Oh, and one more thing.  We might put “certified NSA-free” on the shopping list.  Just sayin’


Disclaimer: I was an “ICANN insider” when I wrote this post.  I don’t exactly know when that happened, but there you go.  I was a member of the ISPCP (ISP and Connectivity Provider constituency) of the GNSO (Generic Names Supporting Organization) where I participated in a lot of policy working groups and (briefly) represented the constituency on the GNSO Council.

I’m also a domain registrant – I registered a gaggle of really-generic domain names back before the web really took off.  I think I’m going to challenge John Berryhill to a Calvinball debate as to whether new gTLDs help or hurt the value of those old .com names.  Back to the “chips vs. beer” argument.

New gTLD preparedness project

Another scratchpad post — this time about what a “get ready for new gTLDs” project might look like.  I’ll try to write these thoughts in a way that scales from your own organization up to world-wide.

I’m doing this with an eye toward putting it in front of ICANN and the new-gTLD applicants and saying “y’know, you really should be leading the charge on this.  This is your ‘product’ after all.”  Maybe we could channel a few of those “Digital Engagement” dollars into doing something useful?  You know, actually engage people?  Over a real issue?  Just sayin’

Here we go…

Why we need to do this




  • Impacts of the arrival of some new gTLDs could be very severe for some network operators and their customers
  • There may not be a lot of time to react
  • Progress on risk-assessment and mitigation-planning is poor (at least as I write this)
  • Fixes may not be identified before delegation
  • Thus, getting ready in advance is the prudent thing to do
  • We benefit from these preparations even if it turns out we don’t need them for the new gTLD rollout

The maddening thing is, we may not know what’s really going to happen until it’s too late to prepare — so we may have to make guesses.

New gTLD impacts could be very broad and severe, especially for operators of private networks that were planned and implemented long before new gTLDs were conceived of.  ISPs and connectivity providers may be similarly surprised.  Click HERE to read a blog post that I wrote about this — but here are some examples:

  • Microsoft Active Directory installations may need to be renamed and rebuilt
  • Internal certificates may need to be replaced
  • Long-stable application software may need to be revised
  • New attack vectors may arise
  • And so forth…

The key point here is that in the current state of play, these risks are unknown.  Studies that would help understand this better are being lobbied for, but haven’t been approved or launched as I write this.

A “get ready” effort seems like a good idea

Given that we don’t know what is going to happen, and that some of us may be in a high-risk zone, it seems prudent to start helping people and organizations get ready.

  • If there are going to be failures, preparedness would be an effective way to respond
  • The issues associated with being caught by surprise and being under-prepared could be overwhelming
  • “Hope for the best, prepare for the worst” is a strategy we often use to guide family decisions — that rule might be a good one for this situation as well
  • Inaction, in the face of the evidence that is starting to pile up, could be considered irresponsible.

Looking on the bright side, it seems to me that there are wide-ranging benefits to be had from this kind of effort even if mitigation is never needed.

  • We could improve the security, stability and resiliency of the DNS for all, by making users and providers of those services more nimble and disaster resistant
  • If we “over prepare” as individuals and organizations, we could be in a great position to help others if they encounter problems
  • Exercise is good for us.  And gives all factions a positive focal point for our attention.  I’ll meet you on that common ground.

Here’s a way to define success

I’m not sure this part is right, but I like having a target to shoot at when I’m planning something, and this seems like a good start.


  • Minimize the impact of new-gTLD induced failures on the DNS, private and public networks, applications, and Internet users.
  • Make technical-community resources robust enough to respond in the event of a new-gTLD induced disruption
  • Maximize the speed, flexibility and effectiveness of that response.

Who does what

This picture is trying to say “everybody can help.”  I got tired of adding circles and connecting-lines, so don’t be miffed if you can’t find yourself on this picture.  I am trying to make the point that it seems to me that ICANN and the contracted parties have a different role to play than those of us who are on the edge, especially since they’re the ones benefiting financially from this new-gTLD deal.

Note my subtle use of color to drive that home.  Also note that there’s a pretty lively conversation about who should bear the risks.



How do we get from here to there?  If I were in complete command of the galaxy, here’s a high level view of how I’d break up the work.


As I refine this Gantt chart, it becomes clear to me that a) this is something that can be done, but b) it’s going to take some planning, some resources and (yes, dearly beloved) some time.  Hey!  I’m just the messenger.

We should get started

So here you are at the end of this picture book and mad fantasy.  Given all this, here’s what I’d do if this puzzler were left up to me.


And here are the things I’d start doing right away:

  • Agree that this effort needs attention, support and funding
  • Get started on the organizing
  • Establish a focal point and resource pool
  • Broaden the base of participation
  • Start tracking what areas are ready, and where there are likely to be problems

There you go.  If you would like this in slide-deck form to carry around and pitch to folks, click HERE for an editable PowerPoint version of this story.  Carry on.


Disclaimer:  While the ICANN community scrambles to push this big pile of risk around, everybody should be careful to say where they’re coming from.  I’m a member of the ISPCP constituency at ICANN, and represent a regional IXP (MICE) there.  I don’t think this issue generates a lot of risk for MICE because we don’t provide recursive resolver services and thus won’t be receiving the name-collision notifications being proposed by ICANN staff.  I bet some of our member ISPs do have a role to play, and will be lending a hand.

I am also a first-generation registrant of a gaggle of really-generic domain names.  New gTLDs may impact the value of those names but experts are about evenly divided on which way that impact will go.  I’m retired, and can’t conceive of how I’ll be making money from any activity in this arena.

New gTLDs and namespace collision

This is another scratch-pad post that’s aimed at a narrow audience —  network geeks, especially in ISPs and corporations.  The first bit is a 3-minute read, followed by a 20-minute “more detail” section.  If you’re baffled by this, but maybe a little concerned after you read it, please push this page along to your network-geek friends and colleagues and get their reaction.  Feel free to repost any/all of this.

Key points before we get started

  • I don’t know what’s going to happen
  • I don’t know what the impact is going to be, but in some cases it could be severe
  • Others claim to know both of those things but I’m not convinced by their arguments right now
  • Thus, I think the best thing to do is learn more, hope for the best and prepare for the worst
  • My goal with this post is just to give you a heads-up

If I were you, I’d:

  • Scan my private network and see if any of my names collide with the new gTLDs that are coming
  • Check my recursive DNS server logs and see if any name collisions are appearing there
  • Start thinking about remediation now
  • Participate in the discussion of this topic at ICANN, especially if you foresee major impacts
  • Spread the word that this is coming to friends and colleagues
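The log check in that second bullet is easy to automate.  Here’s a minimal sketch, assuming a BIND-style query log (adapt the regex to whatever your resolver actually writes); every name, address, and string in it is invented for illustration:

```python
# Sketch: count queries in a recursive-resolver log whose top-level label
# matches an applied-for new gTLD string. Assumes BIND-style log lines
# like "... query: mail.corp IN A ..."; adjust QNAME_RE for your format.
import re
from collections import Counter

QNAME_RE = re.compile(r"query: (\S+) IN")

def find_collisions(log_lines, new_gtld_strings):
    """Return a Counter of queried names whose TLD is a new-gTLD string."""
    gtlds = {s.strip(".").lower() for s in new_gtld_strings}
    hits = Counter()
    for line in log_lines:
        m = QNAME_RE.search(line)
        if not m:
            continue
        name = m.group(1).rstrip(".").lower()
        if name.rsplit(".", 1)[-1] in gtlds:
            hits[name] += 1
    return hits
```

Feed it the applied-for strings from ICANN’s list and a day or two of query logs; any name that shows up belongs on your remediation list.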

Do note that there is a strong argument raging in the DNS community about all this.  There are DNS purists who currently maintain that this whole problem is our fault, and that none of this would have happened if we’d all configured our private networks with fully-qualified domain names right from the start.  Some of us (myself included) never met, or even heard of, those purists back when we were building these networks.

Where were those folks in 1995 when I opened my first shrink-wrapped box of Windows NT and created the name that would become the root of a huge Active Directory network with thousands of nodes?  Do you know how hard it was to get a domain name back then?  The term “registrar” hadn’t been invented yet.  All we were trying to do was set up a shared file, print and mail server, for crying out loud.  The point is that there are lots of legacy networks that look like the one depicted below, they’re going to be very hard and expensive to rename, and some of them are likely to break when new gTLDs hit the root.  m’Kay?

Private networks, the way we’ve thought about them for a decade

Here’s my depiction of the difference between a private network (with all kinds of domain names that don’t route on the wider Internet) and the public Internet (with the top-level names you’re familiar with) back in the good old days before the arrival of 1400 new gTLDs.


This next picture shows the namespace collision problem.  This depiction is still endorsed by nobody, your mileage may vary, etc. etc.  But you see what’s happening.  At some random point in the future, when a second-level name matching the name of one of your highly-trusted resources gets delegated, there’s the possibility that traffic which has consistently been going to the right place in your internal network will suddenly be routed to an unknown, untrusted destination on the worldwide Internet.

But wait, there are more bad things that might happen.  What if the person who bought that matching second-level name in a new gTLD is a bad actor?  What if they surveyed the error traffic arriving at that new gTLD and bought that second-level name ON PURPOSE, so that they could harvest that error traffic with the intention of doing you harm?

But wait, there’s more.  What if you have old, old applications that count on a consistent NXDOMAIN response from a root server?  Suppose the application was written in such a way that it falls over when the new gTLD gets delegated (and thus the response from the root changes from the expected NXDOMAIN to an unexpected pointer to the registry).  Does this start to feel a little bit like Y2K?
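That failure mode can be sketched in a few lines.  This is a toy model, not anybody’s real application: dicts stand in for zones, None stands in for NXDOMAIN, and every name and address is invented.

```python
# Toy model of an application that treats NXDOMAIN from the public DNS
# as a guarantee that a name only exists internally.

def resolve(name, zone):
    """Return an address, or None (our stand-in for NXDOMAIN)."""
    return zone.get(name)

def find_fileserver(public_dns, internal_dns):
    # Legacy logic, written when "corp" could never resolve publicly:
    # if the public lookup does NOT come back NXDOMAIN, believe it.
    answer = resolve("files.corp", public_dns)
    if answer is not None:
        return answer  # surprise: traffic leaves the building
    return resolve("files.corp", internal_dns)

# Before .corp is delegated: the public lookup is NXDOMAIN, internal wins.
before = find_fileserver({}, {"files.corp": "10.1.2.3"})

# After .corp is delegated and someone registers the matching name:
after = find_fileserver({"files.corp": "203.0.113.9"},
                        {"files.corp": "10.1.2.3"})
```

Nothing changed in the application or on the internal network, yet the day the name appears in the public DNS the answer flips.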

Well, one of the good things about Y2K was that most of the “breakage” events would have happened on the same day — with this rascal, things might look more like a gentle random rain of breakage over the next decade or so as second-level names get sold.

Imagine the surprise when some poor unsuspecting domain-registrant wakes up to a flood of email from network operators who are cranky because their networks just broke.  Don’t think it can happen?  Check out my www.corp.com home page — those cats are BUSY.  That domain gets 2,000,000 error hits A DAY.  Almost all of it from Microsoft Active Directory sites.


The new TLDs may unexpectedly cause traffic that you’re expecting to go to your trusted internal networks (or your customer’s networks) to suddenly start being routed to an untrusted external network, one that you didn’t anticipate.  Donald Rumsfeld might call those external networks “unknown unknowns” — something untrusted that you don’t know about in advance.  The singular goal of this post is to let you know about this possibility in advance.  Here’s the key message:

If you have private networks that use TLDs on this list, best start planning for a future when those names (and any internal certificates using those names) are going to stop working right. 

That’s it.  If you want, you can quit reading here.  I’m going to stick updates in this section, followed by the “More detail” part at the bottom.

Update 1 — Mikey’s first try at a near-term mitigation plan

After conversations with a gaggle of smart people, I’ve decided that the following three pictures are a relatively low-impact way to address this problem in a network that you control.

In essence, I think the key to this approach is to take control of when the new gTLDs become visible to your internal network.  By becoming authoritative for new gTLDs in your DNS servers now, before ICANN has delegated them, you get to watch the NXD error traffic right now rather than having to wait for messages from the new registries.  Here’s a list of the new gTLDs to use in constructing your resolver configuration.
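By way of illustration, here’s the shape this might take in BIND.  This is a sketch only; “corp” and “home” stand in for whichever strings actually matter on your network:

```
// named.conf fragment: answer authoritatively for colliding strings so
// queries stay (and get logged) inside your network instead of going
// to the root. "corp" and "home" are examples only.
zone "corp" { type master; file "empty.zone"; };
zone "home" { type master; file "empty.zone"; };
```

The referenced empty.zone file would contain just an SOA and an NS record, so every lookup beneath the apex gets NXDOMAIN from your server rather than from the root, and the queries show up in your logs.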






More detail

Note: all the color, bold, highlighting in this section is mine — just to draw your eye to things that I find interesting.

There are over 1000 names on that list I linked to above.  Here is a shorter list drawn from Interisle Consulting Group’s 2 August 2013 report entitled “Name Collisions in the DNS” [PDF, 3.34 MB].  This list is the top 100 names, in order of the frequency of queries that they saw in their study.  I’ve taken the liberty of highlighting a few that might be interesting for you to keep an eye out for on your network or your customers’ networks.


1 home 21 mail 41 abc 61 yahoo 81 gmail
2 corp 22 star 42 youtube 62 cloud 82 apple
3 ice 23 ltd 43 samsung 63 chrome 83 thai
4 global 24 google 44 hot 64 link 84 law
5 med 25 sap 45 you 65 comcast 85 taobao
6 site 26 app 46 ecom 66 gold 86 show
7 ads 27 world 47 llc 67 data 87 itau
8 network 28 mnet 48 foo 68 cam 88 house
9 cisco 29 smart 49 tech 69 art 89 amazon
10 group 30 orange 50 free 70 work 90 ericsson
11 box 31 web 51 kpmg 71 live 91 college
12 prod 32 msd 52 bet 72 ifm 92 bom
13 iinet 33 red 53 bcn 73 lanxess 93 ibm
14 hsbc 34 telefonica 54 hotel 74 goo 94 company
15 inc 35 casa 55 new 75 olympus 95 sfr
16 dev 36 bank 56 wow 76 sew 96 man
17 win 37 school 57 blog 77 city 97 pub
18 office 38 movistar 58 one 78 center 98 services
19 business 39 search 59 top 79 zip 99 page
20 host 40 zone 60 off 80 plus 100 delta

Here’s the executive summary of the Interisle report.

Executive Summary — Interisle Consulting Report

Names that belong to privately-defined or “local” name spaces often look like DNS names and are used in their local environments in ways that are either identical to or very similar to the way in which globally delegated DNS names are used. Although the semantics of these names are properly defined only within their local domains, they sometimes appear in query names (QNAMEs) at name resolvers outside their scope, in the global Internet DNS.

The context for this study is the potential collision of labels that are used in private or local name spaces with labels that are candidates to be delegated as new gTLDs. The primary purpose of the study is to help ICANN understand the security, stability, and resiliency consequences of these collisions for end users and their applications in both private and public settings.

The potential for name collision with proposed new gTLDs is substantial.  Based on the data analyzed for this study, strings that have been proposed as new gTLDs appeared in 3% of the requests received at the root servers in 2013. Among all syntactically valid TLD labels (existing and proposed) in requests to the root in 2013, the proposed TLD string home ranked 4th, and the proposed corp ranked 21st. DNS traffic to the root for these and other proposed TLDs already exceeds that for well-established and heavily-used existing TLDs.

Several options for mitigating the risks associated with name collision have been identified.  For most of the proposed TLDs, collaboration among ICANN, the new gTLD applicant, and potentially affected third parties in the application of one or more of these risk mitigation techniques is likely to substantially reduce the risk of delegation.

The potential for name collision with proposed new gTLDs often arises from well-established policies and practices in private network environments. Many of these were widely adopted industry practices long before ICANN decided to expand the public DNS root; the problem cannot be reduced to “people should have known better.”

The delegation of almost any of the applied-for strings as a new TLD label would carry some risk of collision.  Of the 1,409 distinct applied-for strings, only 64 never appear in the TLD position in the request stream captured during the 2012 “Day in the Life of the Internet” (DITL) measurement exercise, and only 18 never appear in any position. In the 2013 DITL stream, 42 never appear in the TLD position, and 14 never appear in any position.

The risk associated with delegating a new TLD label arises from the potentially harmful consequences of name collision, not the name collision itself.  This study was concerned primarily with the measurement and analysis of the potential for name collision at the DNS root. An additional qualitative analysis of the harms that might ensue from those collisions would be necessary to definitively establish the risk of delegating any particular string as a new TLD label, and in some cases the consequential harm might be apparent only after a new TLD label had been delegated

The rank and occurrence of applied-for strings in the root query stream follow a power-law distribution.  A relatively small number of proposed TLD strings account for a relatively large fraction of all syntactically valid non-delegated labels observed in the TLD position in queries to the root.

The sources of queries for proposed TLD strings also follow a power-law distribution. For most of the most-queried proposed TLD strings, a relatively small number of distinct sources (as identified by IP address prefixes) account for a relatively large fraction of all queries.

A wide variety of labels appear at the second level in queries when a proposed TLD string is in the TLD position. For most of the most-queried proposed TLD strings, the number of different second-level labels is very large, and does not appear to follow any commonly recognized empirical distribution.

Name collision in general threatens the assumption that an identifier containing a DNS domain name will always point to the same thing. Trust in the DNS (and therefore the Internet as a whole) may erode if Internet users too often get name-resolution results that don’t relate to the semantic domain they think they are using. This risk is associated not with the collision of specific names, but with the prevalence of name collision as a phenomenon of the Internet experience.

The opportunity for X.509 public key certificates to be erroneously accepted as valid is an especially troubling consequence of name collision. An application intended to operate securely in a private context with an entity authenticated by a certificate issued by a widely trusted public Certification Authority (CA) could also operate in an apparently secure manner with another equivalently named entity in the public context if the corresponding TLD were delegated at the public DNS root and some party registered an equivalent name and obtained a certificate from a widely trusted CA. The ability to specify wildcard DNS names in certificates potentially amplifies this risk.

The designation of any applied-for string as “high risk” or “low risk” with respect to delegation as a new gTLD depends on both policy and analysis. This study provides quantitative data and analysis that demonstrate the likelihood of name collision for each of the applied-for strings in the current new gTLD application round and qualitative assessments of some of the potential consequences. Whether or not a particular string represents a delegation risk that is “high” or “low” depends on policy decisions that relate those data and assessments to the values and priorities of ICANN and its community; and as Internet behavior and practice change over time, a string that is “high risk” today may be “low risk” next year (or vice versa).

For a broad range of potential policy decisions, a cluster of proposed TLDs at either end of the delegation risk spectrum are likely to be recognizable as “high risk” and “low risk.” At the high end, the cluster includes the proposed TLDs that occur with at least order-of-magnitude greater frequency than any others (corp and home) and those that occur most frequently in internal X.509 public key certificates (mail and exchange in addition to corp). At the low end, the cluster includes all of the proposed TLDs that appear in queries to the root with lower frequency than the least-frequently queried existing TLD; using 2013 data, that would include 1114 of the 1395 proposed TLDs.

And here is their list of risk-mitigation options.

9 Name collision risk mitigation

ICANN and its partners in the Internet community have a number of options available to mitigate the risks associated with name collision in the DNS. This section describes each option; its advantages and disadvantages; and the residual risk that would remain after it had been successfully implemented.

The viability, applicability, and cost of different risk mitigation options are important considerations in the policy decision to delegate or not delegate a particular string. For example, a string that is considered to be “high risk” because risk assessment finds that it scores high on occurrence frequency or severity of consequences (or both), but for which a very simple low-cost mitigation option is available, may be less “risky” with respect to the delegation policy decision than a string that scores lower during risk assessment but for which mitigation would be difficult or impossible.

It is important to note that in addition to these strategies for risk mitigation, there is a null option to “do nothing”—to make no attempt to mitigate the risks associated with name collision, and let the consequences accrue when and where they will. As a policy decision, this approach could reasonably be applied, for example, to strings in the “low risk” category and to some or all of the strings in the “uncalculated risk” category.

It is also important to note that this study and report are concerned primarily with risks to the Internet and its users associated with the occurrence and consequences of name collision—not risks to ICANN itself associated with new TLD delegation or risk mitigation policy decisions.

9.1 Just say no

An obvious solution to the potential collision of a new gTLD label with an existing string is to simply not delegate that label, and formally proscribe its future delegation—e.g., by updating [15] to permanently reserve the string, or via the procedure described in [9] or [16]. This approach has been suggested for the “top 10” strings by [ ], and many efforts have been made over the past few years to add to the list of formally reserved strings [15] other non-delegated strings that have been observed in widespread use [1] [9] [10] [16].

A literal “top 10” approach to this mitigation strategy would be indefensibly arbitrary (the study data provide no answer to the obvious question “why 10?”), but a policy decision could set the threshold at a level that could be defended by the rank and occurrence data provided by this study combined with a subjective assessment of ICANN’s and the community’s tolerance for uncertainty.

9.1.1 Advantages
A permanently reserved string cannot be delegated as a TLD label, and therefore cannot collide with any other use of the same string in other contexts. A permanently reserved string could also be recommended for use in private semantic domains.

9.1.2 Disadvantages
There is no disadvantage for the Internet or its users. The disadvantages to current or future applicants for permanently proscribed strings are obvious. Because the “top N” set membership inclusion criteria will inevitably change over time, this mitigation strategy would be effective beyond the current new gTLD application round only if those criteria (and the resulting set membership) were periodically re-evaluated.

9.1.3 Residual risk
This mitigation strategy leaves no residual risk to the Internet or its users.

9.2 Further study

For a string in the “uncalculated risk” or “calculated risk” category, further study might lead to a determination that the “severity of consequences” factor in the risk assessment formula is small enough to ensure that the product of occurrence and severity is also small.

9.2.1 Advantages
Further study might shift a string from the “uncalculated risk” to the “calculated risk” category by providing information about the magnitude of the “severity of consequences” factor. It might also reduce the uncertainty constant in the risk assessment formula, facilitating a policy decision with respect to delegation of the string as a new TLD.

9.2.2 Disadvantages
Further study obviously involves a delay that may or may not be agreeable to applicants, and it may also require access to data that are not (or not readily) available. Depending on the way in which a resolution request arrives at the root, it may be difficult or impossible to determine the original source; and even if the source can be discovered, it might be difficult or impossible (because of lack of cooperation or understanding at the source) to determine precisely why a particular request was sent to the root.

The “further study” option also demands a termination condition: “at what point, after how much study, will it be possible for ICANN to make a final decision about this string?”

9.2.3 Residual risk
Unless further study concludes that the “severity of consequences” factor is zero, some risk will remain.

9.3 Wait until everyone has left the room

At least in principle, some uses of names that collide with proposed TLD strings could be eliminated: either phased out in favor of alternatives or abandoned entirely. For example, hardware and software systems that ship pre-configured to advertise local default domains such as home could be upgraded to behave otherwise. In these cases, a temporary moratorium on delegation, to allow time for vendors and users to abandon the conflicting use or to migrate to an alternative, might be a reasonable alternative to the permanent “just say no.” Similarly, a delay of 120 days before activating a new gTLD delegation could mitigate the risk associated with internal name certificates described in Sections 6.2 and 7.2.

9.3.1 Advantages
A temporary injunction that delays the delegation of a string pending evacuation of users from the “danger zone” would be less restrictive than a permanent ban.

9.3.2 Disadvantages
Anyone familiar with commercial software and hardware knows that migrating even a relatively small user base from one version of the same system to another—much less from one system to a different system—is almost never as straightforward in practice as it seems to be in principle. Legacy systems may not be upgradable even in principle, and consumer-grade devices in particular are highly unlikely to upgrade unless forced by a commercial vendor to do so. The time scales are likely to be years—potentially decades—rather than months.

Embracing “wait until…” as a mitigation strategy would therefore require policy decisions with respect to the degree of evacuation that would be accepted as functionally equivalent to “everyone” and a mechanism for coordinating the evacuation among the many different agents (vendors, users, industry consortia, etc.) who would have to cooperate in order for it to succeed.

9.3.3 Residual risk
Because no evacuation could ever be complete, the risks associated with name collision would remain for whatever fraction of the affected population would not or could not participate in it.

9.4 Look before you leap
Verisign [4] and others (including [8]) have recommended that before a new TLD is permanently delegated to an applicant, it undergo a period of “live test” during which it is added to the root zone file with a short TTL (so that it can be flushed out quickly if something goes wrong) while a monitoring system watches for impacts on Internet security or stability.

9.4.1 Advantages
A “trial run” in which a newly-delegated TLD is closely monitored for negative effects and quickly withdrawn if any appear could provide a level of confidence in the safety of a new delegation comparable to that which is achieved by other product-safety testing regimes, such as pharmaceutical and medical-device trials or probationary-period licensing of newly trained skilled craftsmen.

9.4.2 Disadvantages
The practical barriers to instrumenting the global Internet in such a way as to effectively perform the necessary monitoring may be insurmountable. Not least among these is the issue of trust and liability—for example, would the operator of a “live test” be expected to protect Internet users from harm during the test, or be responsible for damages that might result from running the test?

9.4.3 Residual risk
No “trial run” (particularly one of limited duration) could perfectly simulate the dynamics of a fully-delegated TLD and its registry, so some risk would remain even after some period of running a live test.

9.5 Notify affected parties
For some proposed TLDs in the current round, it may be possible to identify the parties most likely to be affected by name collision, and to notify them before the proposed TLD is delegated as a new gTLD.

9.5.1 Advantages
Prior notice of the impending delegation of a new gTLD that might collide with the existing use of an identical name string could enable affected parties to either change their existing uses or take other steps to prepare for potential consequences.

9.5.2 Disadvantages
Notification increases awareness, but does not directly mitigate any potential consequence of name collision other than surprise. For many proposed TLDs it might be difficult or impossible to determine which parties could be affected by name collision. Because affected parties might or might not understand the potential risks and consequences of name collision and how to manage them, either in general or with respect to their own existing uses, notification might be ineffective without substantial concomitant technical and educational assistance.

9.5.3 Residual risk
In most cases at least some potentially affected parties will not be recognized and notified; and those that are recognized and notified may or may not be able to effectively prepare for the effects of name collision on their existing uses, with or without assistance.

Here are some of the tasty bits from a risk-mitigation proposal issued by ICANN staff several days later (5-August, 2013).



The Study establishes a low-risk profile for 80% of the strings. ICANN proposes to move forward with its established processes and procedures with delegating strings in this category (e.g., resolving objections, addressing GAC advice, etc.) after implementing two measures in an effort to mitigate the residual namespace collision risks.

First, registry operators will implement a period of no less than 120 days from the date that a registry agreement is signed before it may activate any names under the TLD in the DNS. This measure will help mitigate the risks related to the internal name certificates issue as described in the Study report and SSAC Advisory on Internal Name Certificates. Registry operators, if they wish, may allocate names during this period, i.e., accept registrations, but they will not activate them in DNS. If a registry operator were to allocate names during this 120-day period, it would have to clearly inform the registrants about the impossibility to activate names until the period ends.

Second, once a TLD is first delegated within the public DNS root to name servers designated by the registry operator, the registry operator will not activate any names under the TLD in the DNS for a period of no less than 30 days. During this 30-day period, the registry operator will notify the point of contacts of the IP addresses that issue DNS requests for an un-delegated TLD or names under it. The minimum set of requirements for the notification is described in Appendix A of this paper. This measure will help mitigate the namespace collision issues in general. Note that both no-activate-name periods can overlap.

The TLD name servers may see DNS queries for an un-delegated name from recursive resolvers – for example, a resolver operated by a subscriber’s ISP or hosting provider, a resolver operated by or for a private (e.g., corporate) network, or a global public name resolution service. These queries will not include the IP address of the original requesting host, i.e., the source IP address that will be visible to the TLD is the source address of the recursive resolver. In the event that the TLD operator sees a request for a non-delegated name, it must request the assistance of these recursive resolver operators in the notification process as described in Appendix A.


ICANN considers that the Study presents sufficient evidence to classify home and corp as high-risk strings. Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk based on the criteria described above. An applicant for one of these strings would have the option to withdraw its application, or work towards resolving the issues that led to its categorization as high risk (i.e., those described in section 7 of the Study report). An applicant for a high-risk string can provide evidence of the results from the steps taken to mitigate the name collision risks to an acceptable level. ICANN may seek independent confirmation of the results before allowing delegation of such string.


For the remaining 20% of the strings that do not fall into the low or high-risk categories, further study is needed to better assess the risk and understand what mitigation measures may be needed to allow these strings to move forward. The goal of the study will be to classify the strings as either low or high-risk using more data and tests than those currently available. While this study is being conducted, ICANN would not allow delegation of the strings in this category. ICANN expects the further study to take between three and six months. At the same time, an applicant for these strings can work towards resolving the issues that prevented their proposed string from being categorized as low risk (e.g., those described in section 7 of the Study report). An applicant can provide evidence of the results from the steps taken to mitigate the name collision risks to an acceptable level. ICANN may seek independent confirmation of the results before allowing delegation of such string. If and when a string from this category has been reclassified as low-risk, it can proceed as described above for the low-risk category strings.


ICANN is fully committed to the delegation of new gTLDs in a secure and stable manner. As with most things on the Internet, it is not possible to eliminate risk entirely. Nevertheless, ICANN would only proceed to delegate a new gTLD when the risk profile of such string had been mitigated to an acceptable level. We appreciate the community’s involvement in the process and look forward to further collaboration on the remaining work.


Registry operator will notify the point of contact of each IP address block that issue any type of DNS requests (the Requestors) for names under the TLD or its apex.  The point of contact(s) will be derived from the respective Regional Internet Registry (RIR) database. Registry operator will offer customer support for the Requestors or their clients (origin of the queries) in, at least, the same languages and mechanisms the registry plans to offer customer support for registry services. Registry operator will avoid sending unnecessary duplicate notifications (e.g. one notification per point of contact).

The notification should be sent, at least, over email and must include, at least the following elements: 1) the TLD string; 2) why the IP address holder is receiving this email; 3) the potential problems the Requestor or its clients could encounter (e.g., those described in section 6 of the Study report); 4) the date when the gTLD signed the registry agreement with ICANN, and the date of gTLD delegation; 5) when the domain names under the gTLD will first become active in DNS; 6) multiple points of contact (e.g. email address, phone number) should people have questions; 7) will be in English and may be in other languages the point of contact is presumed to know; 8) ask the Requestors to pass the notification to their clients in case the Requestors are not the origin of the queries, e.g., if they are providers of DNS resolution services; 9) a sample of timestamps of DNS request in UTC to help identify the origin of queries; 10) email digitally signed with valid S/MIME certificate from well-known public CA.

It’s that last appendix, where people are going to get notified, that really caught my eye.  I can imagine a day when an ISP gets notifications from all kinds of different registry operators, each listing the IP addresses of the ISP’s customer-facing recursive DNS servers.  The notification will say that the ISP’s customers are generating this kind of error traffic, but it leaves the puzzle of figuring out which customer to the ISP.  That means combing through DNS logs to ferret out which customer it actually was, carrying the bad news to that customer, and presumably dealing with the outraged fallout.  In other cases these notifications will go directly to corporate network operators, with the same result.  In either case, ponder the implications of a 30-day lead time to fix these things.  Maybe easy.  Maybe not.
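Just to make the chore concrete (the log format, file name, and addresses below are all invented; real resolver logs vary), the hunt might look something like this:

```shell
# Invented BIND-style query log; real formats and paths will differ.
cat > /tmp/query.log <<'EOF'
12-Aug-2013 04:11:02 client 10.1.2.3#53427: query: fileserver.home IN A
12-Aug-2013 04:11:09 client 10.9.8.7#49152: query: www.example.com IN A
12-Aug-2013 04:12:44 client 10.1.2.3#53991: query: printer.corp IN A
EOF
# Which source addresses are asking for names under the colliding TLDs?
grep -E 'query: [^ ]+\.(home|corp) ' /tmp/query.log \
  | sed -E 's/.*client ([0-9.]+)#.*/\1/' \
  | sort -u
```

That only narrows things to a source address.  Mapping the address to a customer, and the query to whatever gadget on the customer’s network is making it, is the genuinely hard part.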

What’s next?  Where do we go from here?

For me, “learning more” and “spreading the word” are the next steps.  People on all sides of the argument are weighing in but, as Interisle points out, there is a lot of analysis still to be done.  They were able to identify the number of queries, the new TLDs being queried, and the range of IP addresses the queries came from.  What we don’t know, and need to, is the impact of all those queries.  How bad would the breakdowns be?  Opinions are loudly stated, but facts are scarce.

If you want to learn more, the best place to get started is probably ICANN’s “Public Comment” page on this issue.  You’ll have some reading to do, but right now (until 17-September, 2013) you have the opportunity to submit comments.  The more of you that do that the better.  The spin-doctors on all sides are hard at work — it’s very difficult to find unbiased information. There aren’t very many comments as I write this in mid-August, but they should make interesting reading as they come in — and you can read them too.

Click HERE for the ICANN public-comment page

That’s more than enough for one blog post.  Sorry this “little bit more detail” section got so long.  There’s plenty more if you want to dig further.

DISCLAIMER:  Be aware that almost everybody in this debate is conflicted in one way or another (including me – here’s a link to my “Statement of Interest” on the ICANN site).  I participate in ICANN as the representative of a regional internet exchange point (MICE) and also as the owner of a gaggle of really generic .COM domains (click HERE for that story).  I haven’t got a clue what the impact of new gTLDs will be on my domains.  I also don’t know what the impact will be on ISPs and corporate network operators but I am very uneasy right now.  I may write some more opinionated posts about that unease, once I understand better what’s going on.


Repairing the road


So here’s a new thing for me to obsess about.  The condition of the road in the summer time.  This spring was especially tough on our road because the rain. never. stopped.  So our road, which was already getting pretty ratty, turned into a nightmare this year.

Here’s a picture from last year – note the gravel-free tracks through grass.  This is not what a gravel road is supposed to look like.  It’s supposed to have gravel in it, not grass.



Here’s a picture of that same segment of road as of this morning.  See?  Gravel, not grass.  Much better.  In essence this is what I’ve been fiddling with every dry day for the last month.  There have been precious few of those, so this project has taken a lot longer than I thought it would.


This is a piece of road we hardly ever use.  It was built so that semis can turn around when they get in here (useful for when we were building the house, and for grain trucks when we were still renting the land for row crops).  But most of the time it just sits there, and you can see that it likes to be covered with grass.


But here’s a picture of it this spring.  One trip across it with a truck and there are giant divots in the road.


So this was my first experiment with the land plane.  It’s starting to get grassy again because I fixed this chunk about a month ago and it’s been raining pretty much ever since.  But you can see how the divot is gone.


Now let’s take a look at some areas that got really bad this spring.  This first one never ever gets this bad.  And never this long a stretch needing to be repaired.


Here’s what it looked like after a few passes of the land plane.  This was the “dang, I’ve really messed this up” picture.  I was thinking that I might be doing more damage than good when I took this shot.  But fear not!  It has to get ugly before it can get pretty.  Pulling all that grass out makes a mess for a while.


See?  This is that same segment after the very last pass.


Here’s another view of that segment.  My first approach, before using the land plane, was to use the bucket on my other tractor.  That’s all I’ve done in prior years, but you can see that I wasn’t really making much of a dent — mostly because there was so much damage over a really long piece of the road.  I was pretty unhappy with the results.


Here’s that same “first few passes with the land plane” shot.


And here’s the “after last pass” shot.  It should be noted that to get through this whole project, I’ve taken something like 10-15 passes across the road.  I changed the settings a few times to try things out and have some ideas that you’ll find in the “Tips” section at the end of the post.


THIS part of the road is always nasty — it’s going through a really wet area and is always soft.  There’s a “redo this section of the road with road fabric” project in my future here.  But you can see just how bad things got this spring.  This shot was taken AFTER I’d worked on this area with the bucket for a while.


And here’s that last-pass shot…  It looks pretty good, but it’s still really fragile.  This smoothyness won’t last long, especially if a few trucks go over it before the rain stops.


Another “before” shot.  Same part of the road, just a little bit around the corner and looking out into the wetland.


And the “after” shot.  This part was really hard to do.  There’s a lot of dirt and not much gravel to dig up along here.  But even with all that, the gravel came back pretty well.  Again, the gravel along here will be pounded back into the road as the summer progresses.  The “redo with road-cloth” project is going to have to extend into this part of the road too.


Here’s the implement — a Woods land plane, hanging on the 3-point hitch of my Kubota M-6800.  This is a really slick deal.  The two edges adjust up and down, and tilt, independently.  See the four bolts at the bottom left?  Loosening them allows that shoe at the bottom to be adjusted up and down.  I fiddled with variations of “low in front, low on one side, etc.” and have a few ideas about how to do that.  You’re looking at my “last pass” configuration — low in front, high in the back, symmetrical side to side.   This doesn’t cut into the road at all, it just rides through the loose gravel and makes it flat.  My goal when running this configuration was to have a nice amount of gravel caught by the front blade and no gravel going over the top of the back blade.  That’s why the road’s so smoothy.  But this configuration is no good for actually repairing the road, only for dressing up the gravel at the end.


Here’s another view of the land plane, showing how the blades are on a diagonal.  In theory, this means that the gravel moves from one side to the other.  It probably does a little bit, but it’s certainly no replacement for a real rear blade if you need to move a lot of gravel from one part of the road to another.



OK, you’re probably really interested in this stuff if you made it this far through the post.  Here are some lessons I learned that I’m documenting for me, since I probably won’t do this project again until next spring and will likely forget some of this stuff.

Clearing grass

The box will clog up during early grass-pulling, dirt-removing passes.  Just raise it a little bit and back up.  That’ll smooth the dirt and grass out and after a few days it’ll have dried enough that it’ll break up rather than clogging the works in a subsequent pass (have I mentioned lots of passes??).  At first I was pushing that stuff off to the side, or pulling it out by hand.  Way too hard.


I ran the scarifiers right at the same level as the front blade for a while, but eventually pulled them off (they aren’t on the land plane in the pictures).  I think they would probably be really important if you were using this to stir up gravel when the road is really dry, but it’s wet here right now and the land plane did a better job of smoothing the ruts without them.

Removing ruts

I set the whole thing up at its mid-points all around and level (front and back, side to side, 3-point hitch level) while I was taking the ruts and grass out.  That worked OK, but I think next time I’ll try a slightly less aggressive version of this next setting.

Crowning and removing ruts

Towards the end of the project I wanted to put a little more crown in the road while removing some ruts that came in after a rain.  I set the “leading side” side of the land plane as low as it would go, front and back.  The “trailing side” got set as high as it would go.  I made the leading side bite even more by lowering that side of the box on the 3-point hitch.  So my goal was to bevel the road, with the leading side doing the cutting and then allowing the material to move over and escape out the trailing side.

Finishing and dressing the gravel

Those first two settings are fine for working divots out of the road, but they leave a lumpy surface, because a lot of material goes over the second blade.  I would try to keep that at a minimum by raising and lowering the 3-point but there’s almost no way to avoid it, because my goal was to remove ruts not leave a perfect surface.  But the last couple passes I just wanted to smooth out the gravel, not change the contour of the road.  For this setting, my goal is NO gravel going over the rear blade — that’s how I got that really smoothy surface.  So this setting was level side to side (both on the land plane and the 3-point), low in front and high in the back (to grab gravel easily with the front blade but not let much escape over the back blade).


A great project.  I borrowed the land plane from my friend Danny, but I think I’ll have to buy it from him.  He’s gonna have to pry this thing out of my cold dead hands.  I can imagine taking another pass or two several more times this summer, just to pull the grass.  Darn nifty.





A blog post from Fargo – a new gizmo

Dave Winer has a cool new gizmo (Fargo) that I’ve been messing around with for the last week or so (don’t get me all wrapped up in a time warp here).

Why I loves Fargo

  • I loves this gizmo because I’m addicted to outlining and I’m always on the hunt for simpler, more approachable ways to do it (and recruit other addicts). For the most part, I’ve gotten pretty solidly into the “mind mapping” groove, but that’s just a habit. When you boil my use of mind-mapping software down you find that all I’m really doing is outlining. Enough about why Fargo attracts me.

Problems this WordPress connector solves

  • The problem I was running into with Fargo was “well gee, in many cases I will eventually want to slurp it out of Fargo and push it into a traditional word processor and turn it into a report of some kind — how I do dat?”
  • Another problem I was running into was “how can I keep non-addicts up to date on the outline without forcing them into something that makes them uneasy?”

This connector between Fargo and WordPress may just be the ticket. So here’s a first-try blog post that I’ll then come back and edit a bit to test out how this gizmo works.


  • I coulda sworn I saw one of these outlines posted to a WP site in a way that the expanding/shrinking triangles came along too.
  • That would be good to know how to do — ’cause some of my outlines get
  • really big and it would be nice to allow people to open/close parts of
  • it rather than seeing the whole thing. I wonder if that’s done in CSS,
  • or if it’s a theme thing, or a plugin? Ah… maybe can do that with a public link to the post? Eeeauuu… That’s pretty homely. What about a link to view this post in Reader?
  • oops – lost all the links in that ‘graph. tried to pull them back in by copy/pasting from the WP version but the links didn’t come with.
  • i’m making hash out of this. where did all those extra Returns come from when i pasted the text back in (tried copy/paste of a portion of the paragraph)
  • hm… dragging does something. but not sure what. dragged a big chunk to the bottom of the page and it disappeared. where’s “undo” when i need it? 😉
  • How do I chop over-long paragraphs (like this one is getting to be) into chunks so I can reorganize them? Hitting Return in the middle of my long ‘graph gets me a new one at the bottom. Shift-Return? cmd-Return? alt-Return? ctr-Return? Enter? nope. Hmm. I’m constantly taking notes and tidying up afterward. Gotta be a way… Maybe it’s just a drafting habit I need to learn
    • But I think it would be nice to have a “split this headline” command. place cursor at split point, issue “split” command and wind up with the headline divided in two.
  • Ahhh. Firefox and Safari. That’s the source of my troubles. Safari is a lot nicer experience. I can cut/paste sections of headlines without getting a whole series of headlines.
    Repainting the SC430

    OK, I admit it.  I’m kind of a lame car guy.  I love cars, but I am old and tired and hate being uncomfortable.  So about 5 years ago I bought a year-1 (2002) Lexus SC430 that had been rode hard and put away wet for the princely sum of $17,000.  I’ve been bringing it back from an early grave ever since.  The first few years were devoted to repairing the driving stuff — replacing bent wheels, struts, etc.

    I also did some exterior work on my own, because the black paint (pity me, I own a black car) had gotten a really bad case of the swirlys from many years of bad car washes.  Plus the headlights had gotten really fogged, so I cleaned them up.

    But this year is the year to do what I’ve been dreaming of ever since I bought the car — a complete repainting job.  Mostly to cure all the battered-paint troubles, but also to slightly change the color to an extremely dark blue.  I’m hoping to get that effect where it looks black unless you really look at it in direct sun, at which point the blue metallic will show up.

    This is a post to chronicle the project.

    The folks who did it

    Will and Robert Latuff — of Latuff Brothers Autobody.  They look displeased, no?


    Rick, Dan, Brandon, Don and Tim — the guys that did the heavy lifting.  They look unhappy too.  Maybe they’re feeling crummy about the terrible job they did?  Or maybe they just don’t get along with each other very well.


    Huge hole in this post, waiting for a picture of Kim and Steve from Dick and Rick’s Auto Interiors in Bloomington — the folks who redid the upholstery.


    Ridiculous wallpaper photos (click on them — these thumbnails don’t do them justice)

    Being a big believer in eating my dessert first, here are some “ridiculous wallpaper photos” of the completed project, taken here at the farm.

    sc430 wallpaper 1

    sc430 wallpaper 2

    sc430 wallpaper 3


    sc430 wallpaper 5

    “Before” pictures of the body

    Click on the photos to get the full huge versions so you can see the nasties that I’m trying to fix. Dings, chips, swirlies.  The complete catastrophe.





    Not unexpectedly, this 12-year-old car had some extra projects hidden inside it.  Like this crimped thingy.  I’ll have to ask Robert what it is.  My guess is that it’s one of the hoses for the headlight washers.


    This was a good one.  When one of the prior repairs was made, the people at the body shop GLUED the front bumper onto the car.  No wonder it didn’t line up right.


    “In progress” pictures of the body

    Robert Latuff shared a whole boatload of documentation shots that he took along the way.  Thanks Robert!

    There was all kinds of detail work to do.


    And repairs to badly-done prior repairs.  This car has been through a lot, mostly at the hands of the prior 3 owners.


    There was some pretty rough hail damage, especially on the roof…


    The rear bumper needed to be reworked…


    Even the doors needed to be returned to something more closely approximating their original shape.


    Poor car, so many dents and troubles to be smoothed out.



    Here’s a series of pictures showing the car in various stages of being taken apart, repaired, primed, etc.  Again, these pictures are mostly courtesy of Robert Latuff, although there are a few of mine sprinkled in from the day Robert let me look in on the car while it was in progress.

    Ever wondered what your car looks like with all the soft cushy bits removed?


    It seems silly, but that’s where the “back seat” of an sc430 goes…



    I embarrassed Robert and forced him to stand in one of my pictures.








    Bits and pieces are coming off to get painted


    I don’t think this is street legal, but it looks like it might be fun to drive — if it had seats.


    Redoing the seats

    Speaking of seats, another part of this project was to redo those.  It started to feel like a good time to do it about halfway into the repainting, since the seats had already been yanked out of the car.  So they went off to Dick and Rick’s Auto Interiors in Bloomington for a re-do.  Here are some pictures of the way they looked when we started…

    One of the prior owners must have been a cowboy that drove this car with his spurs on.  Really hard on the lower edge of the seat.



    One of the “rear seat” belts had been taped down to keep it from flapping in the wind…  Nice, huh?


    Driver’s seat didn’t look too bad from this angle, but the leather was pretty much on its last legs



    This is a weird sc430 problem that lots of owners have.  The “headrests” in the “rear seat” get clobbered by the sun, shrink, and pull away from their underlying frames.  Homely.


    Here’s the back of the “rear seat” after it’s been removed from the car — in all of its duct-tape glory.


    And here are the front seats.


    Here’s a shot that Steve took over at the upholstery shop showing another surprise.  I wonder what took that bite out of the upper-left corner of the seat foam.  A bear?

    VINYL 006

    “After” pictures

    These are some more utilitarian pictures — not quite as snazzy as the ridiculous wallpaper pictures at the top of the post, but more documentation of this great project.  Nah, I don’t like it.  Ick.  What a misguided effort this was.

    This is one of the “before” pictures from up above, with a similar “after” picture right behind it.  Oh, one other change this year — when the old tires wore out I went to smaller wheels (from 18″ to 17″) and higher-profile tires to bring the total diameter back up to roughly what it had been before.  If you’re thinking about this, I can tell you I couldn’t be happier.  It’s easy to see the comparison in these two shots.  Old = skinny tires.  New = slightly fatter tires.  It’s also pretty easy to see the slight change in color — from black to dark blue.



    Here’s another “before/after” comparison, again showing the difference in color and tires.  If you click on these thumbnails, you’ll be able to really see the difference in the paint.  Also note the lovely job that the lads did on fixing up the beat-up mirror shrouds.




    I forgot to take a “before” picture of all the road rash on the front of the car.  But it’s all gone now.


    Another thing the folks at Latuff fixed was a funky gas cap cover.  It used to stick out in a weird crooked way.  Fixed.


    Hail damage to the trunk?  Fixed.


    Marcie liked the view of the clouds and the trees reflected in the hood.  I do too.


    And here are the seats!

    Here’s a “before” shot, just as a reminder…


    Are these nifty or what?  Steve and Kim over at Dick and Rick’s steered me straight on this one.  I told them that I was going for the color of an old Mercedes SL convertible and this is where we wound up.


    Note the way that the rear “head rests” look now that Steve’s been at it.


    Everybody was a little edgy about whether the remaining old black interior and seat frames were going to work with the new, different-color, upholstery.  I think they work great — I like the way they set each other off.


    The end

    So there you have it.  The Great 2013 Redo of a 2002 Lexus SC430.  I couldn’t be happier — thanks to all who helped!

    One last Ridiculous Wallpaper Picture to send you on your way.  Happy trails!

    sc430 wallpaper 4

    ICANN Intersessional meeting — LA — March, 2013

    A few photos from a “between meetings” ICANN meeting of the non-contracted parties house of the GNSO.  Click on the pictures for full-sized versions.










    You’ll definitely want to click on this panorama and take a look at the full-sized version.  This was an informal session with members of the Board who were arriving for meetings the following day.


    Front row seats




    Migrating from Snow Leopard Server to OSX Server (Mountain Lion)

    Back in late 2011 I wrote this scratchpad post to document my efforts to move from Snow Leopard Server to Lion Server.  I ran into some configuration problems that stumped the 2nd-level folks at Apple and eventually I abandoned the project and stayed on Snow Leopard.

    When Mountain Lion came out, and went through an update or two to iron the kinks out, I decided to have another go at it.  I’m crossing my fingers here, but I’ve been on OSX Server (the new/old name under Mountain Lion) for about a month now and things look pretty stable.  So here’s another scratchpad post to document what I did to put back a few things that were removed from the standard OSX Server environment.


    Stability and Reliability

    Upgrade memory

    I found that the standard 4 gBytes of memory that shipped with my server started to get very tight as I turned on the various Python-based services (Calendar, Contacts, Wiki, etc.).  In fact, by the time I had all those services running, the machine would lock up and crash after being unreachable for a while.  I upgraded the memory to 16 gBytes (not officially supported).  Looking at this memory-use graph out of Server, you can see why the server was having trouble with 4 gBytes, but it looks like 8 gBytes would work OK as well.


    Nightly auto-restarts

    I know, real men are supposed to run their servers for decades without restarting them.  But I’ve found that having the server reboot itself every night in the wee hours of the morning clears out a lot of memory-leak cruft and, combined with the added memory, has made the machine quite stable.  System Preferences/Energy Saver/Schedule is the place to do that.
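For what it’s worth, that schedule can also be set from the command line with pmset.  This sketch just prints the invocation (it needs sudo, and only makes sense on the server itself):

```shell
# Restart every night at 4:00 AM, all seven days ("MTWRFSU").
# Afterwards, "pmset -g sched" shows the active schedule.
echo "sudo pmset repeat restart MTWRFSU 04:00:00"
```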


    Set up VPN

    I hardly ever use it, but the idea of a completely-under-my-control VPN appeals to my tin-foil-hat privacy side.  Setting it up is a little tricky and I found this guide to setting up VPN on a Mac Mini server that’s running Mountain Lion to be really helpful.  I stepped through the process exactly as they described it and it worked.  I love that.

    Replace features that were removed

    Replace firewall capability

    The nifty firewall in Snow Leopard (IPFW) was replaced with the newer packet filter (PF) firewall in Mountain Lion.  And all of the firewall-management features were removed from Server Manager.  Most likely because the presumption is that these servers are running on a network that is already behind a firewall — and because these rascals are tricky and hard for Apple to support.  But I needed to run the PF firewall on this machine.  Doing that by hand is Too Hard, so here’s what I did.

    • Consider using IceFloor, a PF front end — http://www.hanynet.com/icefloor/index.html
    • Note: firewall logging gets turned on every time you reload the settings.  Logging can be disabled (once you’ve got a stable set of rules) by editing the config file from the main rules tree.
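    For reference, a PF ruleset is just a text file of rules.  Here’s a minimal sketch (not my actual config; IceFloor generates something far more complete) of the kind of thing involved, assuming a server exposing ssh, mail and web:

```pf
# /etc/pf.conf sketch -- adjust the port list to the services you actually run
set skip on lo0                      # never filter loopback
block in all                         # default deny inbound
pass out all keep state              # allow all outbound, track state
pass in proto tcp to any port { 22 25 80 443 993 } keep state   # ssh, smtp, web, imaps
```

    A hand-written file like this gets loaded with sudo pfctl -f /etc/pf.conf and the filter enabled with sudo pfctl -e.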

    Restore MySQL

    Apple dropped MySQL from their distribution (licensing issues would be my guess).  But all of the family web sites run WordPress on top of MySQL, so I needed to add that back.  Here’s what I did:
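    The short version: install the community-edition DMG from mysql.com (package plus the optional startup item), then wire it up from the command line.  This is a sketch, and the paths assume the standard /usr/local/mysql install location:

```shell
# After installing the MySQL .pkg from the DMG:
sudo /usr/local/mysql/support-files/mysql.server start     # fire it up

# Put the client tools on the PATH for future shells
echo 'export PATH=/usr/local/mysql/bin:$PATH' >> ~/.bash_profile

# Set a root password and remove the test databases
/usr/local/mysql/bin/mysql_secure_installation
```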

    Webmail and email aliases

    Webmail is in the “nice to have in travel emergencies” category.  But the Roundcube webmail is also the best place I’ve found to replace some of the email-forwarding, email-exploder capabilities that went away in the transition from Snow Leopard to Mountain Lion.  So I put it back.  Conceptually, it’s an email client running on the server that can talk to the mail server just like any other client.  It just happens to use the web as its user interface.  Here are useful links to get you started.

    • A useful step-by-step guide – http://diymacserver.com/mail/mountain-lion/install-roundcube-webmail/
    • I had the devil’s own time getting authentication to work properly.  In fact, the only scheme that worked for me was allowing “Cleartext” as an authentication option in Server, and using LOGIN as the IMAP_AUTH setting in the RoundCube config file (main.inc.php).  Here’s a thread that gives more detail around this, although the fix in that thread didn’t work for me — https://discussions.apple.com/thread/4424570?tstart=0
    • Here’s how to add the “filters” capability (the most important part, for me).  https://discussions.apple.com/thread/4153247?tstart=0  The only thing to keep an eye on is that the example changes are being made to the main.inc.php.DIST file rather than the main.inc.php file.  I think this is just an error — but there may be super-cleverness going on there.  In any event, I made the changes to the live main.inc.php file and it’s working.  ymmv
    • I had to do a lot of debugging on this one.  The log/error files (in the /webmail directory where RoundCube is installed) are of great help in figuring out what’s going on.
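    If you need to do the same debugging, the logs are easy to watch live.  The install path here is an assumption (wherever you unpacked Roundcube in your web root):

```shell
# From the directory where Roundcube is installed
cd /Library/Server/Web/Data/Sites/Default/webmail    # assumption: default site root
tail -f logs/errors logs/sendmail                    # auth failures and SMTP attempts
```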

    Once Roundcube is running and supporting filters, you can…

    Replace “group” emails (in other words, create multi-recipient email aliases)

    Here are the steps I would go through to create an alias, friends@bgnws.com:

    • Set up the alias in Server Manager as a local user named “friends”
    • Use WorkGroup Manager (download here – https://support.apple.com/kb/DL1567) to add additional email domains, if you need to.  In this example the “friends” user needs to have friends@bgnws.com added because I host multiple email domains on this server and it would only answer to friends@cloudmikey.com if I didn’t.
    • Log into Roundcube with the “friends” user credentials to establish the filter that will redirect the mail to the real recipients
      • Go to Settings/Filters
      • Create a new filter
      • Select the “all messages” option for the filter
      • Execute “Send message copy to” rule for each target address (there may be a limit on the number, I only use this for small lists)
      • Execute “Redirect message to” for the last addressee on your list if you don’t want to keep copies of the messages in the “friends” IMAP account on your server
      • Execute “Stop evaluating rules”
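    Under the hood (assuming the usual managesieve setup), Roundcube’s filter plugin is just writing a Sieve script on the server.  The steps above end up producing something like this, with placeholder addresses:

```sieve
# Sieve script the filter steps above roughly translate to
require ["copy"];
redirect :copy "alice@example.com";     # "Send message copy to" rules
redirect :copy "bob@example.com";
redirect "carol@example.com";           # plain redirect for the last addressee: no copy kept
stop;                                   # "Stop evaluating rules"
```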

    Replace mailing list (Mailman) capability

    This was one of the hardest debugging jobs in the whole transition.  Now that I’ve been through the manual install of this system, I can see why Apple dropped it.  It must be a support nightmare for them.  But I host a couple of very active lists and I have to have this capability; losing it in the migration is a non-starter for me.

    Most of you can stop here.  Your email lists will be working on your new server.

    I wanted to run parallel lists under two domains, keeping the lists running under the old domain name until I had the new version up and tested on the new server and then cutting all the list members over to the new list.  If you have a low-priority list where participants can be down for a while, this is probably overkill.  Just let them know that things are going to be broken for a few days, take the lists across, redirect the domain when you change the main DNS MX entry for email, and be done with it.  But I was trying for 100% uptime during the transition.  I bounced my users over a few rocks during this process, but we were up all the time.

    To do email lists under multiple domains in Mailman, you have to pay attention to Alias Maps.

    • I used two different sources to piece together a working configuration:
    • The first page, from Apple, gives you the right syntax for the changes you need to make to the Mailman config file (mm_cfg.py).  The rest of the steps are useful too, except they are pointing to an older location for the Mailman installation (the files are now in /usr/local/mailman rather than /usr/share/mailman).
    • Here are the key lines in my live mm_cfg.py file, using my real domains.  The main server domain is cloudmikey.com, the other three are used for testing or delivering mailing lists.  Every goofy quote and comma matters here.
      • ##################################################
        # Put your site-specific settings below this line.
        MTA = 'Postfix'
        DEFAULT_EMAIL_HOST = 'server.cloudmikey.com'
        DEFAULT_URL_HOST = 'server.cloudmikey.com'
        POSTFIX_STYLE_VIRTUAL_DOMAINS = [ 'dissembling.com', 'bgnws.com', 'haven2.com' ]
    • Note: do not use <angle brackets> around any of these entries.  It took me a week to realize that all the documentation was trying to do was look pretty.  But putting <angle brackets> around some of those domain name entries breaks Mailman in a really subtle way.  It works fine at receiving and sending posts to the lists.  But notification-emails to list-owners and list-admins are malformed and get rejected by the SMTP server.
    • That second link, from the GNU documentation, got me to working entries in the Postfix main.cf files.  Again, here are the two real working entries from my server.  They’re buried in the file, but that second post explains what you’re about:
      • virtual_alias_maps = $virtual_maps hash:/Library/Server/Mail/Config/postfix/virtual_users,hash:/usr/local/mailman/data/virtual-mailman
      • alias_maps = hash:/etc/aliases,hash:/usr/local/mailman/data/aliases
    • Now that all the plumbing is in place to create email lists under multiple domains, there’s one more trick.  The web-based front end to Mailman is fine if you’re creating lists in a single domain.  But it doesn’t allow you to specify which domain the list will be created in, so if you want to create a list in a domain other than the server’s default domain name, you have to use the command-line command to create the list.  It’s not hard, here’s how.
      • Enter the command line
      • Go to the following directory — you have to be in this directory in order to launch the program.  It will fail if you try it from anywhere else.
        • $ cd /usr/local/mailman/
      • Launch the newlist program and follow the prompts.  The key thing is to include the domain name in the name of the list when you’re prompted — that’s the bit that’s missing from the web front end.  Again, I’ll use live entries that work with the config stuff above.  Your typed responses follow each prompt below.
        • sudo bin/newlist
          Enter the name of the list: bgnws-testing@bgnws.com
          Enter the email of the person running the list: mike@haven2.com
          Initial bgnws-testing password:
          Hit enter to notify bgnws-testing owner...
      • To restart mailman
        • sudo bin/mailmanctl restart
      • Finally, once the new list is created, here are the steps I went through to keep people on the air during the transition period.  My goal was to have the old list keep working while the new one was being built, and then end up with people able to send notes to either the old or the new address of the list and have them land in the same place.  This may be needlessly complicated, but it’s the way I did it.
        • Create an email alias in WG manager on the old server – same name, but forwards to the new-server address.  This alias won’t work until the old list is deleted with the rmlist command, coming up in a second.  (note, different domain names are needed for this to work, because I don’t want to migrate all the email/lists at the same time – this would be much easier if you’re just cutting over from an old server to new)
        • Create a forwarding account on the new server – NOT the same name as the new list (so it doesn’t conflict with the new list) but with an alias to the OLD domain name.  Use Roundcube forwarding to push old-domain posts along to the new-domain address of the list.
        • Create a duplicate list on the new server, along with all members and settings
        • Delete the old-server list – now the alias on the old server will kick in and redirect mail to the new-server address.
        • Transition is complete when old-server DNS is moved to new-server – list continues to answer to either new or old domain name because of the forwarding done by the alias account on the new server.
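    One more plumbing note: Postfix reads those hash: tables in compiled form, and Mailman maintains its own aliases and virtual-mailman files.  Whenever the alias tables change, something like this (paths matching the config above) rebuilds and reloads everything:

```shell
# Recompile the hand-maintained virtual-user table
sudo postmap /Library/Server/Mail/Config/postfix/virtual_users

# Have Mailman regenerate its aliases and virtual-mailman files
sudo /usr/local/mailman/bin/genaliases

# Tell Postfix to pick up the changes
sudo postfix reload
```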



    Update, late 2013:  Preparing for the NEXT upgrade — the road to Mavericks

    I’ve started a thread over on the Apple Support Community to see if there are any impacts on these additions from an in-place upgrade to Mavericks.  It took me a really long time to get from Snow Leopard to Mountain Lion (my attempt to get to Lion never succeeded).  I’m hoping that the road won’t be quite as bumpy this time, but we’ll see.  Here’s a link to the thread.


    So far it looks like Roundcube may need to be updated, although the update looks pretty cool.  One of the appealing things is that address books may be available in the Roundcube environment.  That alone makes it intriguing.

    Loading a Comodo free email cert into the Mac OSX Mail.app and iOS

    The previous post was all about self-signed certs on my Mac.  Worked fine until I tried to export the cert to my iPhone.  Then I ran into the dreaded “no valid certificates” problem when trying to authorize the profile to sign and encrypt outbound mail.  My homebrew cert worked fine for enabling s/MIME on the device, but it was crippled.  So I ran off and got me a Comodo free email cert and pounded that in.

    Get the cert — using your Mac

    Go HERE — but don’t use Firefox, use Safari on your Mac.  If your default browser is Firefox, copy and paste this link into Safari.  You’ll thank me later.  The download works fine in Firefox, but Firefox doesn’t install the cert in a way that Mail can actually use.  Comodo’s download process is highly automated and there’s breakage along the way.

    Follow the steps on the Comodo site and keep your fingers crossed, by the end of the normal process the cert will be correctly installed.  Go look in Downloads for the Collectccc.p7s file if the Comodo site stalls on the “attempting to collect and install your free certificate” step.  Double-click that file and the Keychain Access app will pop up and start prompting for the password you created when you configured the cert at Comodo.

    Click HERE for more detail on managing email certs in the Keychain app.  I deleted the old cert once I had the new cert installed and Mail.app added to the cert-key access-control tab.

    Put a reminder on your calendar to renew the cert before it expires in a year.

    Configuring Mail to use the cert, on the Mac

    If the cert has been properly loaded, restart Mail and the signing and encrypting buttons should show up when launching a new email message.  Note that they’re toggles — pay attention to what state they’re in.  Otherwise you’ll be signing or encrypting all your mail which may make your recipients a little crazy.

    Configuring iOS to use the cert

    I sure hope this post never goes away.  That’s what I used to learn how to load the cert on my iPhone.  I’m going to put a shorthand version here, just to preserve it (since I’m going to need to repeat this every year when I renew the cert).

    Find the Comodo cert in the Keychain Access app.  UPDATE: Open the Keychain Access app, Click the “My Certificates” choice in Category, select the cert with your email address.  This will solve the “.p12 option greyed out” problem that PY Schobbens noted in the comments.

    Export it in Personal Information Exchange (.p12) format.  Pay attention to the password you put on the export file, you’ll need it on the other end.
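    If the Keychain Access UI is being balky, the same export can be done from the command line with the security tool.  This is a sketch I haven’t leaned on myself, so treat the flags as a starting point and check man security:

```shell
# Export identities (cert + private key) from the login keychain as PKCS#12
# -P sets the passphrase you'll be asked for on the iOS side
security export -k login.keychain -t identities -f pkcs12 \
    -P 'export-password' -o ~/Desktop/mycert.p12
```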

    Email the exported cert (drag it into a Mail message to yourself) to the iOS device that’s using the same email address as your Mac.

    Open the attached cert on the iOS device and blast through the “Unsigned Profile” warning.  This is where that password will come in handy.

    Enable s/MIME on the phone (Settings/Mail, Contacts, Calendars/<your email account>/Advanced).  Check to make sure that the signing and encrypting options actually find your cert.  Then take care to back up a layer and tap “Done” to actually write the change to the account.   Note:  this bold highlighting is mostly a message to myself — surely you won’t skip that last step.  But if you send yourself a test message from your phone and it isn’t signed, that’s probably the cause.

    Note: with the arrival of iOS 8, the toggles for encrypting have changed.  So now the “encrypt” option is available at email-sending-time even when “Encrypt by default” is toggled off for the account — much better arrangement for those of us who only encrypt to a few people.

    Notes: adding and using a self-signed s/MIME email certificate to OSX Mail in Mountain Lion

    This is just a scratchpad post to remind myself what I did to get a self-signed cert into Mail under OSX Mountain Lion.

    This first post is all about using a self-generated cert — which will work fine unless you ALSO want to use it on an iOS device.  In which case, skip to the NEXT post, where I cracked the code of getting a Comodo cert installed on my Mac and my iPhone.  Sheesh, this is harder than it needs to be.

    Generating a self-signed certificate

    Click HERE to read the post that laid out the step-by-step process I followed to create that self-signed cert.  That post goes through the openssl commands to do the deed.  The instructions are written for a Windows user, so I’ve rewritten them for a Mountain Lion user.

    • Note: openssl is already installed on Mountain Lion, so you shouldn’t need to do any installation
    • Make sure to create the cert with the email address you are using in Mail.  In addition, I used that email address as the answer to the “common name” request during the prompting that happens in the Certificate Request part of the process (Steps 2 and 3 below).  I’m not sure that’s required, but it’s part of the formula that worked for me.

    Here are the command-line commands (mostly lifted from the blog post)

    1.    Generate an RSA Private Key in PEM format

    Type openssl (one time, just to drop into the interactive openssl environment), then enter:



    genrsa -out my_key.key 2048


    my_key.key  is the desired filename for the private key file
    2048  is the desired key length of either 1024, 2048, or 4096

    2.    Generate a Certificate Signing Request:


    req -new -key my_key.key -out my_request.csr


    my_key.key is the input filename of the previously generated private key
    my_request.csr  is the output filename of the certificate signing request

    3.    Follow the on-screen prompts for the required certificate request information.

    4.    Generate a self-signed public certificate based on the request.


    x509 -req -days 3650 -in my_request.csr -signkey my_key.key -out my_cert.crt


    my_request.csr  is the input filename of the certificate signing request
    my_key.key is the input filename of the previously generated private key
    my_cert.crt  is the output filename of the public certificate
    3650 is the number of days the certificate is valid. In this case, it is 10 years (10 x 365 days)
    x509 is the X.509 Certificate Standard that we normally use in S/MIME communication

    This essentially signs your own public certificate with your own private key. In this process, you are now acting as the CA yourself!

    5.    Generate a PKCS#12 file:


    pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx -name "my-name"


    my_cert.crt  is the input filename of the public certificate, in PEM format
    my_key.key  is the input filename of the private key
    my_pkcs12.pfx  is the output filename of the pkcs#12 format file
    my-name  is the desired name that will sometimes be displayed in user interfaces.

    6.    (Optional) You can delete the certificate signing request (.csr) file and the private key (.key) file.

    7.    Now you can import your PKCS#12 file to your favorite email client, such as Microsoft Outlook or Thunderbird. You can now sign an email you send out using your own generated private key. For the public certificate (.crt) file, you can send this to others when requesting them to send an encrypted message to you.
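    For next time, here’s the whole sequence as a single cut-and-paste script, run from a regular shell rather than the interactive openssl prompt (so each line gets the openssl prefix).  The -subj argument pre-answers the step 3 prompts, and -passout avoids the interactive export-password prompt; swap in your own email address and a real password:

```shell
# 1. Generate the RSA private key
openssl genrsa -out my_key.key 2048

# 2 & 3. Certificate signing request, with the prompts answered on the command line
#    (emailAddress should match the address you use in Mail)
openssl req -new -key my_key.key -out my_request.csr \
    -subj "/CN=you@example.com/emailAddress=you@example.com"

# 4. Self-signed public certificate, good for ten years
openssl x509 -req -days 3650 -in my_request.csr -signkey my_key.key -out my_cert.crt

# 5. PKCS#12 bundle for import into Keychain Access
openssl pkcs12 -export -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES \
    -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx \
    -name "my-name" -passout pass:changeme
```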

    Importing a self-signed certificate into the OSX Keychain Access application

    I double-clicked the .pfx (PKCS) file that I’d just created.  That fired up the Keychain Access app and loaded the cert into the keychain.   I told it to trust the cert when it asked about that.

    Getting OSX Mountain Lion Mail to recognize the self-signed certificate

    Part of what derailed me in this process was that the transition from Lion to Mountain Lion eliminated the account-setup option to select a cert.  It’s automatic now.  So if the email address that’s in the cert matches the email address of the account, the s/MIME capability simply appears when composing a new message.  But in order for this to work, there’s one step needed in order to pull the cert in:

    • restart the Mail app