WiiMote -> OSCulator -> Wekinator -> OSCulator -> Ableton Live

This is a scratchpad post to remind myself how to put together a machine-learning system on a Mac.  This won’t work on a PC as some of the software is Mac-only.  In this configuration a WiiMote (input device) is connected to Wekinator (real time interactive machine-learning software) through OSCulator (OSC bridging and routing software).  Wekinator outputs are mapped to MIDI to drive Ableton Live through another instance of OSCulator.

Here is a block diagram (clicking on it makes it bigger):

WiiMote OSCulator Wekinator Ableton Live block diagram

Before beginning

Grab a copy of the following .oscd templates to simplify the connection from OSCulator to Wekinator — https://github.com/fiebrink1/wekinator_examples/tree/master/inputs/Wiimote/WiimoteViaOsculator_MacOnly .  This example uses the 3-input template.

WiiMote to OSCulator

WiiMote to OSCulator is a built-in feature of OSCulator.   Open OSCulator with the 3-input template linked above.  Open the sliding panel on the right side of the main OSCulator page, turn on the WiiMote, and click “Start Pairing”.   Here’s the way it looks when it’s working.

WiiMote to OSCulator

OSCulator to Wekinator

This first instance of OSCulator translates the motions of the WiiMote into OSC messages and pushes them to Wekinator on Wekinator’s default UDP port (6448).  If you’re using the example .oscd file, this should be working now.  I’ve included the mapping below in case you’re building this from an empty OSCulator file.

OSCulator to Wekinator

If you are starting from an empty OSCulator file, here is what the Parameters page looks like in this first instance of OSCulator.   If Wekinator is running, these entries should be available for selection through the drop-down menus.

OSCulator to Wekinator parameters page

Wekinator to OSCulator (this is the second instance of OSCulator)

The default Wekinator output port is 12000.  The second instance of OSCulator (instantiated through File/New) is set to listen on port 12000.  If Wekinator is running, OSCulator will pick up the Wekinator outputs and they should be displayed in the Messages column of OSCulator.

Wekinator to OSCulator 2nd instance

2nd instance of OSCulator to Ableton Live

OSCulator has been configured to convert the OSC messages from Wekinator to MIDI CC messages in this example.  I picked those message numbers because they’re within a range (85-90) that’s generally not used by other devices.

Once OSCulator is producing MIDI, Ableton Live can be trained to apply those MIDI signals in the normal way.  Turn on the MIDI Map Mode switch (blue, in the upper right corner, says “MIDI”), click on the control that should receive the MIDI signals and toggle the device on and off in OSCulator.  The mappings will appear in the box on the left (under “MIDI Mappings”) as they’re added.  I found it useful to turn all the devices off (untick the boxes in the left column) before starting the mapping.

OSCulator 2nd instance to Ableton Live

Notes and Tips

  • I found that controls sometimes wouldn’t respond.  It turned out that some controls were set to a higher value than the maximum coming to them from OSCulator, so the control wouldn’t “pick up” the MIDI signal.  Setting the unresponsive control (e.g. “Warmth”) to zero solves that problem.
  • Signals coming into Live were quite jittery at first.   Cranking up the “Smoothing” settings in the 1st instance of OSCulator fixed that.

Electrical repair of Power Trac PT-1850

A scratchpad post as I diagnose and repair an electrical fault in our Power Trac PT-1850.  Pretty sparse right now, just starting.

The Problem:

  1. Reset circuit breaker
  2. Turn ignition on without starting motor — I get the normal beeps, flashing strobe and 12.4 volts on the gauge (normal for a 12 volt battery at rest)
  3. Crank and start the motor
  4. Voltage drops to around 11 volts (indicating to me that something is really pulling hard, maybe a short) and stays there for about 15 to 30 seconds, then the breaker pops and the voltage jumps up to 13.2 (pretty normal alternator-charging voltage, battery seems to be getting charged up).
  5. With the breaker open/popped the strobe stops flashing. But tilt seat, emergency seat switch and draft control are still operational and the tractor made it back to the barn (about a mile).
  6. Use the ignition switch to shut off the engine (which is weird because the breaker is right in front of the ignition switch in the wiring diagram, so why does it still work?)
  7. The ignition switch is dead *after* I shut the engine off — no strobe, no beeps, no voltage on the gauge when I key it on.
  8. Return to number 1 above

My current theory is that I have a short (first project — see if I can figure out which circuit it’s in, and where). One option is to replace the alternator (I have one coming, but at $600 I’d like to avoid opening the box if I can).

Wiring Diagrams

Here’s a PDF I got from Power Trac (it’s newer than my machine and doesn’t quite match up with the next one – figuring out which one is right is another task for today)

1850 Wiring diagram 6-1-20110001

Here’s an older GIF (I need to retrace my steps to figure out where I found this on the ‘net)



MySQL repair or replacement on OSX Server (Yosemite)


Another scratchpad post.  This one is a reminder of what I did to repair MySQL on OSX Server after the upgrade from Mavericks to Yosemite kinda broke things.

I was working to solve two problems: intermittent “unable to connect to database” errors on all our WordPress sites, and the dreaded “unable to update PID” errors when starting and stopping MySQL.

  • I think the “unable to connect” errors are caused by intruders trying to brute-force break the passwords on my (roughly 35) web sites.  This problem can possibly be cured just by doing the “tuning” steps at the end of the cookbook.
  • “Unable to update PID…” type problems are more symptomatic of a broken MySQL implementation and probably require the whole process.

Neither of these was terrible (a system restart every few days kept things more or less in check), so I limped along for a while after upgrading from Mavericks to Yosemite, but it finally drove me crazy and I decided to upgrade MySQL and rebuild all the databases.

The cookbook

I tried several approaches and finally landed on one which I can reliably repeat in the future (as long as the good folks at Mac Mini Vault continue to provide their magnificent script).


I used backups from all sorts of places.  I thought I was being a little over the top, but things went wrong and I was really glad to have all these safety nets.  Here were the backups I had available:

  • Time Machine backups of the /usr/local/mysql/ directory
  • Pre-upgrade copies of the /usr/local/mysql/data/ directory (and their Time Machine backups)
  • Historical (nightly) MYSQLDUMPs of all the databases (and their Time Machine backups).  Use this command to write each database to a text file.  I have a script that does all 35 of my databases at once, every night.
    sudo mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql
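The nightly script boils down to a loop over the database names.  Here’s a minimal sketch (the database names, password placeholder, and backup path are stand-ins, not my actual script); it echoes the commands so you can eyeball them before removing the echo:

```shell
#!/bin/sh
# Sketch of a nightly dump loop -- mydbN, [root_password], and the
# backup path are placeholders for your own values.
BACKUP_DIR="${BACKUP_DIR:-$HOME/mysql-backups/$(date +%Y-%m-%d)}"
mkdir -p "$BACKUP_DIR"

for DB in mydb1 mydb2 mydb3; do
  # Drop the leading 'echo' to actually run the dumps
  echo mysqldump -u root -p'[root_password]' "$DB" ">" "$BACKUP_DIR/$DB.sql"
done
```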

clear out the previous installation of MySQL:

This was by far the hardest part to get right.  MySQL doesn’t have an “uninstall” script and reacts badly when little odds and ends are left over from previous installations.  My OSX Server has been running MySQL since OSX Lion and there was a fair amount of cruft left behind that was causing some of the trouble.  Here’s my current list of things to move or remove (although not all of them will exist on any given machine):

  • move the old data directory (/usr/local/mysql/data/) to someplace else (yet another backup)
  • rename the old base MySQL directory (this is the directory with the version-number that the mysql alias points to – I renamed rather than deleted as a backup)
  • remove the /usr/local/mysql alias (it’s going to get recreated during the install, pointing at the new/correct base directory)
  • move MySQL.com out of /Library/StartupItems/
  • move My* out of /Library/PreferencePanes/
  • move My* out of your account’s /Library/PreferencePanes/  (I had two mismatched ones of these, one lurking in /Users/admin/Library/PreferencePanes that was really old)
  • edit /etc/hostconfig and remove the line MYSQLCOM=-YES- (If it’s still there — this was left over from the days when MySQL shipped as part of OSX Server)
  • remove entries for receipts for mysql in /Library/Receipts/InstallHistory.plist (I edited the plist with a text editor to do this)
  • remove receipts for mysql in /private/var/db/receipts/
  • remove mysql-related scripts from /Library/LaunchDaemons/
  • remove any aliases for mysql.sock from /var/mysql/, /etc/ and /private/var/mysql/ (I’ve had good luck leaving the directories in place and just deleting the aliases – ymmv)
  • If the Mac Mini Vault (MMV) script has been run before, here are a couple more things:
    • remove the MYSQL_PASSWORD item from the desktop
    • remove MySQL-install.plist from your /Downloads/ directory
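Since most of these are file moves and deletions, I find it safer to look before leaping.  A read-only sketch like this (paths taken from the checklist above; adjust for your machine) just reports which leftovers exist, without touching anything:

```shell
#!/bin/sh
# Read-only leftover check -- lists which items from the checklist above
# are present on this machine.  Nothing is moved or deleted.
FOUND=0
for P in \
  /usr/local/mysql/data \
  /usr/local/mysql \
  /Library/StartupItems/MySQL.com \
  /Library/PreferencePanes/MySQL.prefPane \
  "$HOME/Library/PreferencePanes/MySQL.prefPane" \
  /etc/hostconfig \
  /private/var/db/receipts \
  /var/mysql/mysql.sock \
  /private/var/mysql/mysql.sock ; do
  if [ -e "$P" ]; then
    echo "check: $P"
    FOUND=$((FOUND + 1))
  fi
done
echo "$FOUND item(s) to review"
```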

install MySQL using the MMV script:

NOTE: the folks who maintain the script only support it for a clean install of MySQL on a freshly-loaded version of OSX.  So this cookbook is OUTSIDE the bounds of what they support — please don’t complain to them if things break.  Instead thank them for sharing this script publicly, and consider buying some of their services.

Click HERE to read an introduction to the script on MMV’s site

Click HERE to read the important documentation of the script

Last step:  change the MySQL root password:

sudo mysqladmin -u root -p'MMV-generated-password' password 'mypasswd'

MySQL should now be running properly.  I restarted the server to make sure MySQL started up on a reboot.  I also started and stopped MySQL a few times from the command line to make sure that the “unable to update PID…” problems were solved:

sudo /usr/local/mysql/support-files/mysql.server stop
sudo /usr/local/mysql/support-files/mysql.server start

I do NOT TRUST the MySQL preference-pane that is installed in OSX System Preferences and don’t use it – that may have been another source of dreaded “failure to update PID…” errors.  Just sayin’

import the databases:

I chose to rebuild my databases from the nightly dumps.  I tried various versions of “moving the data directory back and using the Update command” and had a rough time with all of them.  Besides, rebuilding the databases for the first time in many years seemed like a good housekeeping item.  I have about 35 databases — it took about an hour.  Note: change all the ‘mydb’ ‘myuser’ ‘mypasswd’ to values that match your environment.

  • Log into mysql as MySQL’s root user:
mysql -u root -p'mypasswd'
  • Create the databases in mysql using this command:
create database mydb;
  • Create a user for each database in mysql using this command (btw Sequel Pro 1.0.2 is crashing when it creates a user — a known bug, just do it from the command line).  Note: I’m assuming that you’re only using MySQL for WordPress sites like I am, and only need one user per database — this process will get a lot more tedious if you have multiple users per database.
grant all privileges on mydb.* to myuser@localhost identified by 'mypasswd';
  • Import the text-dump of each database into the newly-created empty one using Sequel Pro — File > Import
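If you’d rather script the create/grant/import cycle than type it 35 times, something like this loop works (mydbN, myuser and mypasswd are placeholders; it echoes the commands as a dry run, and you empty the DRYRUN variable to run for real):

```shell
#!/bin/sh
# Dry-run sketch of the create/grant/import cycle above.
# mydbN / myuser / mypasswd are placeholders; set DRYRUN= to run for real.
DRYRUN="${DRYRUN-echo}"

for DB in mydb1 mydb2 mydb3; do
  $DRYRUN mysql -u root -p'mypasswd' -e \
    "create database $DB; grant all privileges on $DB.* to myuser@localhost identified by 'mypasswd';"
  # When running for real, use an actual redirect:  mysql ... "$DB" < "$DB.sql"
  $DRYRUN mysql -u root -p'mypasswd' "$DB" "<" "$DB.sql"
done
```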


Two things have really helped with the brute-force attacks: opening up MySQL a bit and changing a setting in PHP.

To give MySQL a little more oxygen, I followed the guidelines in a sample .cnf file that came with the Mac Mini Vault script.  I slightly changed the settings, mostly to make them conform to MySQL standards (I’m not sure whether this matters).

  • edit /usr/local/mysql/my.cnf and add these lines at the very bottom (these are just a starting point, feel free to fiddle with them a bit):
  • edit /private/etc/php.ini and turn off persistent links.  Here’s the way my file looks right now:
; Allow or prevent persistent links.  
; http://php.net/mysql.allow-persistent 
mysql.allow_persistent = Off
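Back to the my.cnf bullet for a second: the kind of lines in question look something like this.  These values are purely illustrative (I’m not reproducing MMV’s actual sample here), so tune them to your own load:

```
[mysqld]
# Illustrative values only -- not MMV's actual numbers.
# More headroom for connection floods from brute-force login attempts:
max_connections    = 300
max_connect_errors = 10000
# Drop idle connections sooner so they don't pile up:
wait_timeout       = 180
```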

Image courtesy of Stuart Miles at FreeDigitalPhotos.net



Drone links

This is a scratchpad post for links to Drone articles that have caught my eye

Click here – to get back to the “Aerial Favorites” page on PrairieHaven.com

Why the Drone Revolution Can’t Get off the Ground

Someone Crashed a Drone on The White House Lawn

NYTimes article about the White House drone

Of Guns and Drones

Giving the Drone Industry Leeway to Innovate

Days of Wine and Droning – Gail Collins

Audubon Society – How Drones Affect Birds

The Verge – Deadmau5 on Drones (Youtube video)

IEEE Spectrum – California’s No-Drone Zone

HackADay – 1 hour home-brew drone build project

Bloomberg – FAA says small drones will provide significant benefits

Politico – President Obama executive order on drone privacy due soon

NYTimes – New to the archeologist’s toolkit, the drone

NYTimes Editorial – Regulating the drone economy

IEEE Spectrum – What Might Happen If an Airliner Hit a Small Drone?

Bloomberg – What the French know about drones that Americans don’t


Life after ICANN

Several of my ICANN pals have asked “how are you doing??” recently and I decided to write a little blog post to make it easier to describe the current sorry state of affairs.

Basically, dropping the ICANN stuff has freed up a lot of time.  What follows is a sampling of how I’m filling it.

I’ve been doing a *lot* more music.  Just learning how all the new electronic instruments work, and work together, is probably a full-time thing.  I’m looking forward to winter when I’ll have a bit more time indoors to get at that.  But today it rained and I fooled around with some new software (Komplete Kontrol) that rolled in.  This is a completely unedited snapshot of what I was doing — except this is only a couple minutes out of several hours of playing around.


The farm is indeed taking up a really large part of my time right now.  Spring and fall are busy seasons for me and all my machines (Marcie is busy all the time, that story is over on her blog — PrairieHaven.com).  Right now I’m out on the tractor pretty much any chance I can get.  The last few days have focused on mowing invasive Aspen trees out of one of the prairies that Marcie has planted.  Here’s an action shot taken from my seat on the tractor…


And here’s a picture of the field when I was done later that afternoon — the goal is to mow as little as possible, while still knocking out the little brushy trees.  A few years ago I had to mow that whole field, so all the little un-mowed places are definitely a step in the right direction.  The last few years of ICANN, there wasn’t enough time to do this right.


I’ve been playing with a new toy — a drone.  Here’s a picture of two of them — the big one is the one that takes pictures and video, the little one is the one I’ve been using to teach myself to fly (it’s much better to crash a $40 drone than a $1300 one, although I’ve crashed the big one a bunch of times and it seems to be holding up OK).

IMG_2540 copy

And here’s a picture I took from the big drone yesterday, showing the mist moving out of our valley that morning.  Marcie has posted a few of the videos on her web site ( http://www.prairiehaven.com/?page_id=24997 )


Then, there’s the continuing battle against frac-sand mining here.  All those beautiful bluffs you can see above the mist are the target of the miners — they’d like to peel the tops off of them, extract about 20 meters of sand, and then pile all the stuff back up again (thus completely wrecking the environment around here).  I’ve been fighting them since 2010 and we’ve been pretty successful at keeping them out of this area.  I’ve been keeping track of a lot of that stuff here on this web site.


There’s been a big change in the makeup of the County Board, mostly due to all the activism that sprouted up around the frac sand issue.  One upshot of that is that I’ve just been appointed to a committee that keeps an eye on the land-information system that the County is responsible for (stuff like survey-quality section and quarter-section markers, online property descriptions, deeds, and the like).  Just as my interest in ICANN focused on the proper tracking and treatment of domain names, I’m really interested in this land-information stuff.  So I still have a hand in the policy-making game.

Another hobby that suffered while I over-committed to ICANN was woodworking.  I have a pretty nifty woodworking shop here at the farm and I used to do pretty elaborate projects down there.  The last few years of ICANN really chewed into that and I’m looking forward to getting back into it this winter.  I went wild about a week ago and started cleaning up the shop and getting things reorganized.  Here’s a link to a blog post that describes a typical “big” (all-winter) project.  This was a dresser I built for Marcie just before I got caught up in ICANN-madness.  You can read the whole blog post (showing some of the intermediate steps) HERE


There’s more, but this is probably enough to give you the picture.

I recently read a quote somewhere that a person should have no more than two hobbies.  I’ve got a ways to go to get there, but I think it’s good advice.


Adding SSL to my OSX server

I decided it was time to make a little statement and add “always on” encryption to this completely innocuous site.  The online equivalent of moving a lemonade stand inside a bank vault.  Now when you read about refurbishing my car, or fixing a seed drill, you’ll be doing it over an encrypted connection.

This is another scratchpad post for folks who run an OSX Server and want to use a multi-domain UC (unified communications) SSL certificate.  The rest of you can stop here — this is probably the most boring post of all time.

UPDATE – May 2016 – Cloudflare Origin Cert on an OSX Server:

This section describes using Cloudflare Origin Certificates, the following section is the original post where I was installing a Godaddy cert.

I’ve taken to using Cloudflare for all my sites.  If you haven’t come across them, I heartily recommend you take a look — they’re a pretty nifty gang.  Somewhere along there they added SSL to all the connections from end-users to their servers but that left the link from Cloudflare to my sites unencrypted.

They now support several ways to secure that connection – most of which are free.  Free is good, since commercial certs to cover the 20 or so websites I host start to add up.  I decided to try implementing their preferred approach where they issue me an “origin cert” (rather than using a self-signed cert which wouldn’t give end-users as much confidence).

Doing that on OSX Server is dead simple.  Here are the steps

Create a new cert-request on OSX Server.


We’ll create one for my buddy Foo (at bar.com).


Which results in a cert request that looks like this


Go to Cloudflare (I’m assuming the site is already established there) and submit the cert request.  Note that I’ve elected to submit my own CSR.  Cloudflare has a pretty interesting process to do it on their own but I decided I needed to generate the CSR within OSX Server in order to have a socket for the cert when it is issued.


Cloudflare generates the cert and provides it in a variety of formats.  I elected PEM format and the certificate appears in the window.  I copied/pasted/saved that text into a new text file (demo-cert.pem in this example) and saved it to the desktop of the server.
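A quick sanity check on the saved file doesn’t hurt (this is my own habit, not part of Cloudflare’s instructions).  The snippet below generates a throwaway self-signed cert standing in for demo-cert.pem so the command has something to inspect; with the real file, just run the last line against it:

```shell
#!/bin/sh
# Build a throwaway self-signed cert standing in for demo-cert.pem
# (bar.com is the hypothetical domain from this example).
CERT=/tmp/demo-cert.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out "$CERT" -days 1 -subj "/CN=bar.com" 2>/dev/null

# The actual check: does the PEM parse, and what name/dates does it carry?
openssl x509 -in "$CERT" -noout -subject -dates
```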


Back to OSX Server now.  The CSR shows up as a pending cert in the Certificates window.  Double-clicking it results in this screen.   Drag the newly-saved demo-cert.pem file into the Certificate Files box and all is complete.


Create a new SSL web site, use the newly-installed cert, point it at the same directory as the port-80 cleartext site, do a redirect to the port-443 site to complete the job.  Don’t forget to tick the “allow overrides using .htaccess files” box in Advanced Settings for the site so’s the permalinks work.

Original post – August 2014 – Godaddy Cert

I’m a happy Godaddy customer, so the examples in this post are Godaddy-oriented.  But the theory should apply to any Unified Communications (UC) cert vendor.

Single-domain cert

Here is Godaddy’s list of steps for installing a standard single-domain cert.   Click here to view the help page these came from.  The process for a multi-domain cert is almost the same, but let’s start with “vanilla.”

To Generate a CSR with Mac OS X 10.7-10.9

  1. On the Mac, launch Server.
  2. Click Certificates on the left.
  3. Click +.
  4. Click Next.
  5. Complete the on-screen fields, and then click Next
  6. Either copy the CSR that displays, or click Save and save the file locally.

After you have successfully requested a certificate, you need to install the certificate files we provide on your server.

To Install an SSL Certificate with Mac OS X 10.9

  1. Download your certificate’s files — you can use the files we offer for Mac OS 10.6 (more info).
  2. On the Mac, launch Server.
  3. Click Certificates on the left.
  4. Double-click the common name of the certificate you requested.
  5. Click and drag the certificate and bundle files into the Certificate Files section.
  6. Click OK.

This installs the certificate on your server. To verify its installation, you should see your certificate’s common name listed in the Settings menu.

Multi-domain Unified Communications (UC) cert

There are two things that are different when using a UC cert.

Change #1) Use one CSR to request the cert

  • Create one certificate signing request (CSR) in the OSX Server app, no matter how many domains are going to be covered by the UC cert.  The CSR is just creating a socket into which the certificate is going to be installed by OSX Server and only one such socket is needed.
  • All of the domains added through Godaddy’s “manage Subject Alternative Names (SAN)” process will work once the cert is installed.
  • Take care in choosing the domain name when creating the CSR.  This will be the “common name” on the cert and is the only domain name that cannot change later.  This is the apex of the hierarchy of the cert and is the only one that will appear if site-visitors view the cert.
  • The picture below is an example of the Godaddy management interface looking at a (prior version of) the cert that secures this page.  That cert appears in OSX Server’s list of Trusted Certificates as “server.cloudmikey.com” — that name came from the CSR I generated in OSX Server.
  • The alternate domains that will also work with this version of the cert are “cloudmikey.com” and “server.haven2.com” but those names are entered at the Godaddy end, NOT through CSRs from OSX Server.
  • To restate — just create one CSR and add the rest of the domains through the cert-vendor’s Subject Alternative Name process.  In my case, the domain in the CSR was for “server.cloudmikey.com”


Change #2) Add domains to the cert BEFORE downloading it to OSX Server

  • Don’t download the cert that’s created from the CSR just yet.  It will only have the Common Name and doesn’t yet include the other domains that the cert will cover
  • In the case of the cert shown above, the SANs “cloudmikey.com” and “server.haven2.com” were added through Godaddy’s cert-management interface before I downloaded/installed the cert.

Follow the vendor-provided download/install steps. 

Now that the cert has the proper names added, it installs the same way a single-domain cert does (see above).

To recap Godaddy’s instructions: download the OSX 10.6 version of the files, unzip them, click on the pending cert request in Server, drag the two unzipped files into the CSR when it asks for them, click OK and wait a bit while the cert installs.

Verify that the cert covers the domain-names that are needed

Once the cert has been installed, review (double-click) the cert on OSX Server to make sure that all the needed domains are there.  The list of domains is in the middle of the cert; each entry is titled “DNS Name”.  If they’re all there, jump ahead and start assigning the cert to web pages and services.
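The same check can be done from the command line if you export the cert as a PEM (again my habit, not an official step).  The demo below creates a self-signed cert carrying the two SANs from the example so there’s something to list; the -addext and -ext flags need OpenSSL 1.1.1 or newer:

```shell
#!/bin/sh
# Self-signed stand-in carrying the two SANs from the example above
# (requires OpenSSL 1.1.1+ for -addext / -ext).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/san-key.pem \
  -out /tmp/san-cert.pem -days 1 -subj "/CN=server.cloudmikey.com" \
  -addext "subjectAltName=DNS:cloudmikey.com,DNS:server.haven2.com" 2>/dev/null

# Each covered domain should show up as a "DNS:" entry
openssl x509 -in /tmp/san-cert.pem -noout -ext subjectAltName
```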

If the names listed on the installed cert don’t match what’s needed, add the missing domains before using it

  • Delete the cert from OSX server (it’s OK, it’ll be downloaded again in a jiffy)
  • Return to Godaddy and modify the Subject Alternative Names (SANs) to get the domains right
  • Create a new CSR on OSX Server – again, this is just a socket into which the cert will install.
  • Download/install/verify the cert

Note: The cert will install correctly as long as the domain in the new CSR matches one of the domains covered by the cert.  But it will always appear under the common name on the cert, which confused me.  I surprised myself by installing this cert under a “haven2.com” CSR — it installed just fine, but its name changed to “server.cloudmikey.com” on the list of Trusted Certificates in OSX Server.  Best to avoid confusion by creating the replacement CSR under the common name.

Once the cert is right, associate the cert with web pages and services.

  • Web pages and services will operate correctly as long as the domain of the web-page or service matches one of the domains on the cert.
  • It doesn’t matter that the common name of the cert (server.cloudmikey.com in this case) doesn’t match the domain of the web page (haven2.com).

That concludes my report.  This web page is running under a later version of that cert — you can see what it looks like by double-clicking the “lock” in the URL bar of your browser.

Renew the cert

A year has passed and it’s time to renew the cert.  Here’s a checklist:

  • Launch the Server app, open the cert that is coming up for renewal, click the “renew” button, generate a CSR.
  • Renew the cert at the cert-provider, using the newly-generated CSR (this is a copy/paste operation at Godaddy)
  • Download the certs from the vendor once you have been validated
  • Open up the “pending” cert again in the Server app and drag the newly-downloaded cert files from the vendor into the box that’s displayed.
  • There should now be two certs in the Server app list — the current one and the new one.  Update the cert configuration to point at the newly-renewed cert.
  • Test with the new cert and once all is working and verified, delete the expiring one

More bottom!

Samantha Dickinson Tweeted this photo from the ICANN meeting today and tagged it #VolunteerFatigue.  I’m living proof.


Let’s say that each of those 7 working groups needs 4 volunteers — that’s almost 30 people.  Just from the ccNSO.  Just for upcoming working groups.  Never mind the GNSO, ALAC, SSAC and GAC.  A rough extrapolation puts the total required at over 100 volunteer community members just to handle the IANA/Accountability/Transition work.

ICANN is dangerously thin at the bottom of the bottom-up process.  Are there that many people with the experience/time/expertise/will available?  What happens to all the other Working Group work in the meantime?


Difference between a customer and a client

There’s a difference between being a customer and being a client.  People on both ends of a relationship always get into trouble when they don’t understand this.

A customer – is always right

A client – expects to hear the truth, especially when it’s unpleasant

This is true in professional relationships.  Doctors, lawyers and accountants are expected to deliver honest advice; that’s the client side of the relationship.  But on the service side their customers are always right: reports should be on time, of high quality, and delivered at a fair price.

The same goes for religious relationships.  When we are in the worship service, we are the clients of the person leading the show.  But we are customers too: the coffee should taste good, the roof shouldn’t leak, and we have every right to be treated with respect.

In higher education the student customer should get good value for their tuition dollar, grades should arrive on time, campus services should be superb and their teachers should be engaged and respectful.  But the student is the client of the teacher when it comes to grades and feedback on their work.

The relationship between the regulated and the regulator is a mix of services (a customer relationship), policy and compliance (which are stakeholder and client relationships).

Be careful when you cross the streams.



Are projections of registration growth in generic top-level domains realistic?

I was reading the Appendices of a recent ICANN report when I came across an interesting assumption built into their analysis.  The boffins that did this portion of the study are projecting 22% annual growth in total domains for the next five years.

Here’s the reference that caught my eye:

Growth assumptions

You can find this on page 150 of the newly-released Whois study — done by the Expert Working Group on gTLD Directory Services.  Here’s the link to the study — https://www.icann.org/en/system/files/files/final-report-06jun14-en.pdf

The thing that struck me was that whopping 22% annual growth rate.  Partly this is due to some pretty optimistic assumptions about the growth in the number of new gTLDs (I’m willing to bet any money that the beginning of 2015 will not see 200,000 gTLD domains in the root).  But leaving that aside, 22% year over year growth isn’t something we’ve seen since pre-bubble glory days.

Here’s a little chart I put together to show you the history.  We’re running about 4% growth right now.  A long way from 22%.  If I were building budgets for ICANN, I’d be seriously researching where these projections are coming from and applying a pretty serious quantity of cold water.  Just saying…

gTLD growth rate
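To put numbers on how aggressive that assumption is, remember that growth compounds: 22% a year for five years multiplies the base, it doesn’t just add up.  A pair of one-liners shows the gap between their assumption and what we’re actually seeing:

```shell
# Five years of compound growth at the assumed 22% vs the observed ~4%
awk 'BEGIN { printf "22%%/yr for 5 yrs: %.2fx\n", 1.22^5 }'   # 2.70x
awk 'BEGIN { printf " 4%%/yr for 5 yrs: %.2fx\n", 1.04^5 }'   # 1.22x
```

In other words, the study assumes the domain base nearly triples in five years, while the current trend would grow it by about a fifth.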

Notes to the nit-pickers:

  • The assumptions are in a scoping study done by IBM to determine how much a system would cost — so these numbers could be pretty old, given how long the EWG has been running
  • These numbers do not include ccTLDs (since IBM didn’t) — they’d be a lot prettier if I had, since that’s where all the growth has been for the last 5 years or so
  • Verisign seems to be quite far behind in publishing their quarterly statistics reports, so I had to finish up with RegistrarStats data.  No warranty expressed or implied.
  • Ping me if you want the spreadsheet that this chart is built on — visit the Get In Touch page to find out how

ICANN participants

I do these scratch-pad posts for really narrow audiences, the rest of you will find them a bit bewildering.  Sorry about that.  This one is aimed at the GNSO Council, as we ponder the question “how do we increase the pool of PDP working-group volunteers?”

Broadening the bottom of the bottom-up process is a critical need at ICANN right now. But at least in the part of ICANN where I live (GNSO policy-making working groups) the conversations that take place are very nuanced and do require a good deal of background and experience before a person is going to be an effective contributor to the conversation.

So I think that we/ICANN need to develop a clearer understanding of the many different roles that people play as they progress toward becoming an effective participant in the process. And then put the resources and process in place to encourage them along the way.  This is my current picture…



Here’s a starter-kit list of roles that people play. I’m putting them in pairs because nobody can do this by themselves — we all need the help of others as we progress. I’ve also built a little drawing which puts these in a never ending circle because we’re always turning back into newcomers as we explore another facet of the organization. I decided to beat the term “translation” to death in these descriptions.  I think ICANN needs to “translate” what it does for a wide range of audiences to make it easier for them to participate.

Newcomer <-> Recruiter

A newcomer is likely to be just as bewildered by that experience as most of the rest of us have been. They need a “recruiter” to greet them, welcome them into the flow, translate what’s going on into terms they can understand, find out what their interests and goals are, and introduce them to a few “guides” who can take them to the next step.

Explorer <-> Guide

As the newcomer finds their place, they will want to explore information and conversations that are relevant to their interests and they need a “guide” to call on to translate their questions into pointers toward information or people that they’re trying to find.

Student <-> Teacher

As the person progresses they need a positive, low-risk way to learn the skills and knowledge they need in order to be able to contribute. And, like any student, they need a teacher or two. I’ve always thought that we are missing a huge opportunity in the GNSO Constituencies by not consciously using the process of preparing public comments as a place for less experienced members to develop their policy-making skills in a more intimate, less risky environment than a full-blown working-group. I’d love to see newer members of Constituencies consciously brought into progressively richer roles in the teams that write public comments for Constituencies.

Researcher <-> Expert

Another person who needs a very specific kind of partner is a person who comes to ICANN to research — either to find an answer to a policy-related question, find the best way to handle a problem or complaint that they have with a provider, or to discover whether there is data within the ICANN community that can help with formal academic research. Again, here is a person with fairly clear questions who needs help sifting and sorting through all the information that’s available here — another form of translation, this time provided by a librarian or an “expert” in my taxonomy.  This person may not want to build new skills, they’re just here for answers.  But filling that “expert” role could be a great opportunity for somebody who’s already here.

Teammate <-> Coach

A person who is experiencing a policy-making drafting-team (e.g. within a constituency) or working group for the first few times has a lot of things to learn, and many of those things aren’t obvious right at the start. And this person may not feel comfortable asking questions of the whole group for a wide variety of reasons. They would benefit from a “coach” — a person who makes it clear that they are available to answer *any* question, no matter how small. This person is translating a sometimes-mysterious team process for a teammate who is learning the ropes.

Leader <-> Mentor

As our person progresses, they eventually take up a leadership role, and once again could use the help of others to navigate new duties — yet another form of translation, this time delivered by a mentor who helps the emerging leader be effective in their chosen role.


I also think there are all kinds of information assets that participants use and access in different ways depending on what their role is at the moment. Another kind of translation! 🙂 Here’s another starter-kit list:

  • Organizational structures
  • Documents
  • Transcripts
  • Email archives
  • Models
  • Processes
  • Tools and techniques
  • Outreach materials

I think there’s a gigantic opportunity to make this “career progression” and “information discovery” easier and more available to people wanting to participate at the bottom of the bottom-up process. I’m not sure that there’s much need for new technology to do all this — my thoughts run more toward setting goals, rewarding people who help, etc. But a dab of cool tech here and there might help…

Using Ableton Live to drive Logic Pro X

This is another scratchpad post to remind myself how I set up two of my favorite digital audio workstations (Ableton Live and Apple Logic Pro X) to run at the same time.  Y’see, I like facets of each of these systems and want to have the best of both worlds — the live-performance flexibility of Live and the instruments and signal processing of Logic.  In some perfect future, Logic will run as a Rewire slave and a fella won’t have to do all this goofy stuff.  Until then, this is a set of notes on how I do it.  Your mileage may vary.  I’ll leave comments open and will try to respond to your questions as best I can — but I’ll be sluggish, don’t count on a reply in anything less than 24 hours.


Here’s a diagram to show what’s going on.  The goal is to use MIDI coming from Live to control instruments in Logic, and get that audio back into Live.  There are some interesting tricky bits, but this is where you’re headed and this diagram may be all you need.  By the way, click HERE if you’d like to see my pile of gear.  The pictures get more current as you roll down the page.



Here are links to two project files which you are welcome to try out as a template.  They’re set up to do 14 channels of audio and MIDI.  Why not 16, you ask?  Because this template includes a B3 organ instrument in Logic, which consumes 3 MIDI channels all by itself.  The configuration steps to set up the environment are still required, but you should then be able to load these up as a starting point.
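The channel arithmetic behind that “why not 16” answer can be sketched in a couple of lines (the 16-channel limit is MIDI’s, the 3-channel cost is the B3’s):

```python
TOTAL_MIDI_CHANNELS = 16   # a single MIDI port carries 16 channels
B3_CHANNELS = 3            # a B3 organ instrument occupies channels 1, 2 and 3

# One B3 instrument, plus one instrument on each remaining channel
max_instruments = 1 + (TOTAL_MIDI_CHANNELS - B3_CHANNELS)
print(max_instruments)  # 14
```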

Zip archive of the Logic and Live templates

(I had to fiddle with file permissions — let me know if you run into problems opening the files)

Here’s a link to the Soundflower audio shim that you need to install to get sound back from Logic into Live

Soundflower:  http://cycling74.com/products/soundflower/

Quick Checklist

After a few times through, this checklist may serve as a useful shorthand reminder of the steps that are required.  It’s basically a table of contents of the rest of the post.

Disconnect all external MIDI devices

Install Soundflower

Set up the IAC bus

Click the “Device is online” button

Optional: rename the port

Open Live first.

Configure Preferences in Live

Configure Audio Preferences in Live to recognize Soundflower as its audio input

Configure MIDI Preferences in Live to recognize the IAC Driver

Open a new or existing project in Live

Drag External Instruments into empty MIDI tracks

Configure the External Instrument MIDI output(s) to send it to Logic via the IAC driver

Configure the External Instrument’s Audio input to receive audio back from Logic

Open Logic

Configure global preferences in Logic

Un-tick “Control surface follows track selection” in Logic Pro > Control Surface > Preferences

Set the global Audio Output device to Soundflower

Open a new or existing Logic project

Set project-level configuration preferences (only required for multitrack work)

Select “Auto demix by channel if multitrack recording”

Configure the project to only listen to MIDI from the IAC MIDI input

Open the Environment window

Select the “Click and ports” layer

Delete the connection between the “sum” and “input notes” objects

Create a connection between the inbound IAC MIDI port and the “input notes” object

Create or select Software Instrument track(s)

Assign MIDI channels to correspond with the MIDI-To settings in Live

Record-arm the track(s)

Switch back to Live

Test the configuration

Reestablish external MIDI controllers in Live


Assign B3 Organ instruments FIRST, and only to MIDI channels 1,2 and 3

Drummer tracks don’t respond to external MIDI


All channels sound in Logic if any channel is record-armed in Live

Two channels sound in Logic, the external-keyboard channel and the record-armed one

Instruments sound in Logic even if no Live tracks are record-armed

Instruments stop responding to MIDI as new channel strips are added in Logic

Step by Step:

Disconnect all external MIDI devices

First get your template working without the complications of stray MIDI coming from your devices (use the computer-keyboard to generate notes in Live), then add external sources of MIDI back in one at a time and debug any conflicts.

Install Soundflower

Download Soundflower HERE.  The Audio Devices screen in Audio MIDI Setup should look like this once you’ve completed the installation.


Set up the IAC bus (used to pass MIDI signals from Live to Logic).

It’s in the MIDI window of Audio MIDI Setup.   If this is the first time the IAC bus has been used, the IAC icon will likely be greyed out.


Tick the “Device is online” box to bring it online

Optional: rename the port (by clicking on the name and waiting for it to turn into an edit box).

It will have 16 MIDI in/out channels even though the “Connectors” boxes are greyed out.  Here’s the way the IAC Driver Properties dialog will look once it has been put online and the port has been renamed (note: this is the name used in the template files — either rename the port to match, or revise the MIDI routing in Live and Logic).



Open Live first.

Opening Logic first may cause Logic to launch as a Rewire host, and Live will then automatically open as a Rewire slave – the whole goal of this exercise is to have Live act as the master, not Logic.

Configure Preferences in Live

Configure Audio Preferences in Live to recognize Soundflower as its audio input.  This tutorial uses the 64-channel Soundflower driver.  Use the 2-channel driver for a single-instrument configuration; multi-instrument configurations need the 64-channel option if mixing is going to be done in Live.  The 2-channel driver will also work if multi-channel audio is going to be mixed in Logic before it is brought into Live.  Use smaller buffer sizes if latency becomes an issue for live performance.   Note that buffer sizes are set in both Live and Logic.


Note for multi-channel configurations:  Only the first four channels of the Soundflower (64ch) driver are enabled by default.  To make some or all of the remaining channels visible to External Instruments, toggle them in Preferences > Audio > Input Config
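On the buffer-size point: the latency contribution of one audio buffer is just its length divided by the sample rate. A back-of-envelope calculator (the numbers here are illustrative; your audio interface adds its own latency on top):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Rough latency contribution of one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

print(round(buffer_latency_ms(512, 44100), 1))  # 11.6 ms
print(round(buffer_latency_ms(128, 44100), 1))  # 2.9 ms
```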

Configure MIDI Preferences in Live to recognize the IAC Driver – enable the IAC driver only for MIDI Output to Logic.  Taking MIDI Input from Logic risks creating MIDI loops — so leave that option turned off.


Open a new or existing project in Live.

Feel free to download the Live template I’ve posted in the “Resources” section near the beginning of this post.

Drag External Instruments into empty MIDI tracks

Configure the External Instrument MIDI output(s) to send MIDI to Logic via one of the MIDI channels of the IAC driver (Channel 1 in the picture below).  In multichannel configurations this is the Live end of the MIDI mapping to Logic – each channel assignment maps the MIDI from Live into the corresponding channel in Logic.

Configure the External Instrument’s Audio input to receive audio back from Logic.  Since Soundflower has been selected as the global audio-input source for Live in Preferences, the channel selections will all refer to Soundflower.  The single-number options refer to single channels; the options showing two channel numbers refer to stereo pairs.  The stereo pairs are the likely choice in most situations.
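If you dedicate one Soundflower stereo pair per External Instrument (an assumption for this sketch, not a requirement — route however you like), the pair numbering follows a simple pattern:

```python
def soundflower_pair(track_number):
    """Left/right Soundflower channel numbers for 1-based track n,
    assuming one stereo pair per External Instrument (1/2, 3/4, ...)."""
    left = 2 * track_number - 1
    return (left, left + 1)

for n in (1, 2, 3):
    print(n, soundflower_pair(n))  # 1 (1, 2) / 2 (3, 4) / 3 (5, 6)
```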


Open Logic.

Ignore the warning that another Rewire host is running – this is the correct behavior; we don’t want Logic to be the host.

Configure global preferences in Logic

Un-tick “Control surface follows track selection” in Logic Pro > Control Surface > Preferences.


Set the global Audio Output device to Soundflower.  The 64-channel version is used in this setup.


Open a new or existing Logic project.

Feel free to download the Logic template I’ve posted in the “Resources” section near the beginning of this post.

Set project-level configuration preferences

The steps in this section overcome the somewhat wonky multi-channel MIDI routing in Logic; they are not required for driving a single channel in Logic from Live.

Select “Auto demix by channel if multitrack recording” in File > Project Settings > Recording


Configure the project to only listen to MIDI from the IAC MIDI input.  Projects default to listening to all instruments.  This causes endless trouble with MIDI loops.  These steps force Logic to only listen to the MIDI being sent from Live.  Here are the steps:

Open the Environment window – Window > Environment
Select the “Click and ports” layer
Delete the connection between the “sum” and “input notes” objects
Make a connection only between the inbound IAC MIDI port (which is where the MIDI events from Live will be coming from) and the “input notes” object.

Click on this photo to get a full-sized version — it’s hard to see, but the second little triangle, coming from IAC Live -> Logic, is the only one that’s connected to the “Input Notes” object


Set up the tracks in Logic

Create or select Software Instrument track(s). 

Assign MIDI channels to correspond with the MIDI To settings in Live

Record-arm the track (or tracks)

Click on this photo to get a full-sized version — note that all channels are armed for recording, and each has a different MIDI channel assigned as seen with the light number in brackets immediately to the right of each track name.


Use the Mixer window to assign audio channels to tracks in Logic.  This is only required for multi-channel configurations. Click this picture to expand it to full size and take a hard look at the “Output” row, that’s where the assignments are made.


Switch back to Live

Test the configuration.  Logic should now respond to MIDI events from Live.  Enable the Computer MIDI Keyboard function in Live, record-arm a track or two in Live and type a few A’s, S’s and D’s on the computer keyboard.  Notes should sound in Logic on the tracks corresponding to the ones that have been record-armed in Live.

Reestablish external MIDI controllers in Live.   Bring each external controller back into the Live configuration one at a time and iron out any wrinkles that may appear.  In general problems will be caused if MIDI events leak into Logic directly, rather than being forced to pass through Live first.  Debugging all possible problems with external controllers is beyond the scope of this post.  But a likely fix is to change Logic’s MIDI Environment to listen exclusively to the IAC driver rather than the sum of all incoming MIDI (described above).


Assign B3 Organ instruments FIRST, and only to MIDI channels 1, 2 and 3 – B3-type instruments (e.g. Vintage B3 Organ or the Legacy series of B3 organs) require more than one MIDI channel, and those channel assignments default to MIDI channels 1, 2 and 3.  Adding one of these instruments “on top” of already-assigned instruments causes unusual breakage, so it’s a good idea to avoid these channels for anything except B3 instruments.

Drummer tracks don’t respond to external MIDI – either export the track to an audio file for use in Live, or follow these steps to create a Software Instrument track that mimics the Drummer track but that will respond to MIDI

Select/create a Software Instrument track

Copy the channel strip settings from the Drummer track (right-click on the name of the channel strip, select “Copy Channel Strip Setting”)

Paste the channel strip settings into the Software Instrument Track

Adjust MIDI and audio settings in the new channel strip

Bonus – copy/paste regions from the Drummer track into the new Software Instrument track to get MIDI renditions of the region – which can then be exported into Live and used to trigger the drummer

Here’s a drawback to staying MIDI rather than exporting to audio – the “automatic hi-hat” components of the Drummer don’t come across, because they’re not embedded in MIDI.  Best to turn that feature off in the Drummer settings if MIDI is the format you’re using.


All channels sound in Logic if any channel is record-armed in Live – check to make sure that “Auto demix by channel if multitrack recording” is enabled in Logic’s File > Project Settings > Recording.

Two channels sound in Logic, the external-keyboard channel and the record-armed one – check the project’s Environment window to make sure that Logic is only receiving MIDI from the IAC driver.

Instruments sound in Logic even if no Live tracks are record-armed – check the project’s Environment window to make sure that Logic is only receiving MIDI from the IAC driver

Instruments stop responding to MIDI as new channel strips are added in Logic – check to make sure that the “disappearing” instruments aren’t assigned to MIDI channels 1, 2 or 3 if there’s a B3 organ instrument in the mix (which uses all three of those channels).  A good safeguard is to delete or mute the MIDI 2 and 3 channel strips in Live if a B3 organ is part of the mix in Logic.

Name Collisions II — A call for research

This post is a heads up to all uber-geeks about a terrific research initiative to try to figure out causes and mitigation of name-collision risk.  There’s a $50,000 prize for the first-place paper, a $25,000 prize for the second place paper and up to five $10,000 prizes for third-place papers.  That kind of money could buy a lot of toys, my peepul.  And the presentation of those papers will be in London — my favorite town for curry this side of India.  Interested?  Read on.  Here’s a link to the research program — you can skip the rest of this post and get right to the Real Deal by clicking here:


Background and refresher course — what is the DNS name-collision problem?

Key points

  • Even now, after months of research and public discussion, I still don’t know what’s going to happen
  • I still don’t know what the impact is going to be, but in some cases it could be severe
  • Others claim to know both of those things but I’m still not convinced by their arguments right now
  • Thus, I still think the best thing to do is learn more
  • That’s why I’m so keen on this research project.

Do note that there is a strong argument raging in the DNS community about all this.  There are some (myself included) who had never met or even heard of the DNS purists who now maintain that this whole problem is our fault and that none of this would have happened if we’d all configured our private networks with fully-qualified domain names right from the start.

Where were those folks in 1995 when I opened my first shrink-wrapped box of Windows NT and created the name that would ultimately become the root of a huge Active Directory network with thousands of nodes?  Do you know how hard it was to get a domain name back then?  The term “registrar” hadn’t been invented yet.  All we were trying to do was set up a shared file, print and mail server, for crying out loud.  The point is that there are lots of legacy networks that look like the one depicted below; some of them are going to be very hard and expensive to rename, and some of them are likely to break (perhaps catastrophically) when second-level names in new gTLDs hit the root.  m’Kay?


Private networks, the way we’ve thought about them for a decade

Here’s my depiction of the difference between a private network (with all kinds of domain names that don’t route on the wider Internet) and the public Internet (with the top-level names you’re familiar with) back in the good old days, before the arrival of 1400 new gTLDs.



Private networks, the way they may look AFTER 1400 new gTLDs get dropped into the root

The next picture shows the namespace collision problem that the research efforts should be aimed at addressing.  This depiction is still endorsed by nobody, your mileage may vary, etc. etc.  But you see what’s happening.  At some random point in the future, when a second-level name matching the name of your highly-trusted resource gets delegated, there’s the possibility that traffic which has consistently been going to the right place in your internal network will suddenly be routed to an unknown, untrusted destination on the worldwide Internet.



The new TLDs may unexpectedly cause traffic that you’re expecting to go to your trusted internal networks (or your customer’s networks) to suddenly start being routed to an untrusted external network, one that you didn’t anticipate.  Donald Rumsfeld might call those external networks “unknown unknowns” — something untrusted that you don’t know about in advance.

Think of all the interesting and creative ways your old network could fail.  Awesome to contemplate, no?  But wait…

What if the person who bought that matching second-level name in a new gTLD is a bad-actor?  What if they surveyed the error traffic arriving at that new gTLD and bought that second-level name ON PURPOSE, so that they could harvest that error traffic with the intention of doing harm?  But wait…

What if you have old old old applications that are hard-coded to count on a consistent NXDOMAIN response from a root server?  Suppose the application gets a new response when the new gTLD gets delegated (and thus the response from the root changes from the expected NXDOMAIN to an unexpected pointer to the registry).  What if the person who wrote that old old old application is long gone and the documentation is…  um…   sketchy?  But wait…
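To make that failure mode concrete, here’s a hedged sketch of the kind of branch a legacy application might contain — resolve a name, and treat resolution failure as “this is our internal resource.”  The function name and hostnames are made up for illustration:

```python
import socket

def resolves_publicly(name):
    """True if `name` resolves in whatever DNS this host uses; False on an
    NXDOMAIN-style failure.  A legacy app branching on this logic breaks the
    day a matching name starts resolving on the public Internet."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

# ".invalid" is reserved and never resolves, so this branch is stable...
print(resolves_publicly("fileserver.invalid"))  # False
# ...but a name like "fileserver.corp" could flip from False to True
# the day a matching gTLD (and second-level name) gets delegated.
```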

To top it all off, with this rascal, problems may look like a gentle random rain of breakage over the next decade or so as 2nd-level names get sold.  It’s not going to happen on gTLD-delegation day, it’s going to happen one domain at a time.  Nice isolated random events sprinkled evenly across the world.  Hot damn.  But wait…

On the other end of the pipe, imagine the surprise when some poor unsuspecting domain-registrant lights up their shiny new domain and is greeted by a flood of email from network operators who are cranky because their networks just broke.  What are THEY going to be able to do about those problems?  Don’t think it can happen?  Check out my www.corp.com home page — those cats are BUSY.  That domain gets 2,000,000 error hits A DAY.  Almost all of it from Microsoft Active Directory sites.

So argue all you want.  From my perch here on the sidelines it looks like life’s going to get interesting when those new gTLDs start rolling into the root.  And that, dear reader, is an introduction to the Name Collision problem.


Mitigation approaches.

Once upon a time, 3 or 4 months ago when I was young and stupid, I thought I had a good way to approach this problem.  I’m going to put it in this post as well, but then I’m going to tell you why it won’t work — another illustration of why we need this research, and need it now.

Start here:

If you have private networks that use new gTLDs (look on this list) best start planning for a future when those names (and any internal certificates using those names) may stop working right. 
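A quick way to triage is to compare the rightmost label of your internal names against the applied-for list.  A minimal sketch — the hostnames and the tiny gTLD list are made up; the real list runs to hundreds of entries:

```python
# Hypothetical internal hostnames and a made-up slice of the applied-for list
internal_hosts = ["mail.corp", "wiki.global", "printer.office.home", "db.example.com"]
applied_for = {"corp", "global", "home", "shop"}

def at_risk(hosts, gtlds):
    """Return the hosts whose rightmost label collides with an applied-for gTLD."""
    return [h for h in hosts if h.rsplit(".", 1)[-1] in gtlds]

print(at_risk(internal_hosts, applied_for))
# ['mail.corp', 'wiki.global', 'printer.office.home']
```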

A bad solution:

In essence, I thought the key to this puzzler was to take control of when the new gTLDs become visible to your internal network.  It’s still not a terrible idea, but I’ve added a few reasons why it won’t work down at the end.  Here’s the scheme that I cooked up way back then.

By becoming authoritative for new gTLDs in your DNS servers now, before ICANN has delegated them, you get to watch the NXD error traffic right now rather than having to wait for messages from new registries.  Here’s a list of the new gTLDs to use in constructing your router configuration.
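One way to “become authoritative” on a resolver like Unbound is a local-zone entry per gTLD that pins the pre-delegation NXDOMAIN answer.  Here’s a sketch that generates those config lines — the three gTLDs are placeholders, and the `always_nxdomain` zone type is Unbound’s; check your own resolver’s equivalent:

```python
new_gtlds = ["shop", "home", "corp"]  # placeholders; use the real applied-for list

def unbound_block_lines(gtlds):
    """Emit one Unbound local-zone line per gTLD, forcing the NXDOMAIN
    answer the root gave before delegation."""
    return ['local-zone: "%s." always_nxdomain' % t for t in gtlds]

print("\n".join(unbound_block_lines(new_gtlds)))
```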



This is the part where you look at the NXD traffic and find the trouble spots.  Then, with a mere wave of my hand and one single bullet point, I encourage you to fix all your networks.  Maybe you’ve got a few hundred nodes of a distributed system all over the world that you need to touch?  Shouldn’t be a problem, right?



This is the Good Guy part of this approach.  Of course, because we all subscribe to the One World, One Internet, Everybody Can Reach Everything credo, we will of course remember to remove the preventative blocking from our routers just as soon as possible.  Right?  Right?


The reasons why this won’t work:

The first thing that blows my idea out of the water is that you probably don’t have complete control over the DNS provider your customers use.  I still think this is a pretty good idea in tightly-run corporate shops that don’t permit end users to modify the configuration of their machines.  But in this Bring Your Own Device world we live in, there’s going to be a large population of people who configure their machines to point at DNS providers who aren’t blocking the names that conflict with your private network space.

Let’s assume for a minute that everything is fine in the internal network, and the corporate DNS resolver is blocking the offending names while repairs are being made (hopefully cheaply).  Suppose a road warrior goes out to Starbucks and starts using a laptop that’s configured to point at Google’s DNS resolver.  In the old days before new gTLDs, the person would fire up their computer, go to the private name, the query would fail and they would be reminded to fire up the VPN to get to those resources.  Tomorrow, with a conflicting new gTLD in the root, that query might succeed, but they wouldn’t be going to the right place.

Slide 22


Here’s the second problem.  My tra-la-la scheme above assumes that most mitigation will be easy, and successful.  But what if it’s not?  What if you have a giant Active Directory tree which, by all accounts, is virtually impossible to rename without downtime?  What if you have to “touch” a LOT of firmware in machines that are hard-wired to use new gTLDs?  What if vendors haven’t prepared fixes for the devices that are on your network looking at a new gTLD with the presumption that it won’t route to the Internet (yet now it does)?  Or the nightmare scenario — something breaks that has to be diagnosed and repaired in minutes?

Slide 23

The research project

See why we need you to look hard at this problem?  Like, right now??  ICANN is already delegating these domains into the root.  Here’s a page that lists the ones that have already been delegated.


If you see one of your private network names on THIS list, you’re already in the game.  Hooyah!  So this is moving FAST.  This research should have been done years ago, long before we got to this stage.  But here we are.  We, the vast galaxy of network operators and administrators who don’t even know this is coming, need your help.  Please take a look at the NameCollisions.net site and see if you can come up with some cool ideas.  I hope you win — because you’ll help the rest of us a lot.  I’ll buy you a curry.



Commentary on Fadi Chehadé’s Montevideo Statement


I love toiling at the bottom of the bottom-up ICANN process.  And it’s also quite entertaining to watch senior ICANN “managers” running wild and free on the international stage. The disconnect between those two things reminds me of the gulf that usually exists between the faculty and administration in higher education institutions.  Both sides think they run the joint.  That same gulf exists in ICANN and, while I was hopeful for a while that the new guy (Fadi Chehadé) was going to grok the fullness, it’s starting to slide into the same old pattern.

The picture above is of the last guy (Rod Beckstrom)

The audio file linked below is a 2-minute mashup of the new guy Fadi’s (quite unsatisfactory) answer to Kristina Rosette’s recent question about whether he has community and Board air cover for a recent (pretty controversial) statement he made.  Pretty inside baseball for all you regulars, but it may map pretty well to your situation even though the details differ.



Use the player above to listen to the 2-minute clip (but turn the volume down if you do it at work).


Here’s a link to the public post that points at the various transcripts of the call


Interestingly, while the audio transcript is still available, the links to the written transcripts that are contained in that email have disappeared.  Since I have copies of those files from when I downloaded them earlier today, I’ve posted them to this site.  Here are links to those missing documents.

Word document — full transcript — click HERE

Word document — Adobe Chat transcript — Click HERE

What if people stop trusting the ICANN root?

Courtesy FreeDigitalPhotos.net

So once upon a time I worked at a terrific ISP in St. Paul, MN.  Back then, before the “grand bargain” that led to the shared hallucination known as ICANN, there were several pretty-credible providers of DNS that later (somewhat disparagingly) became known as “alternate” root providers.

In those days, we offered our customers a choice.  You could use our “regular” DNS that pointed at what later became the ICANN-managed root, or you could use our “extended” DNS servers that added the alternates.  No big deal, you choose, your mileage may vary, if you run into trouble we’d suggest that you switch back to “regular” and see if things go better, let us know how you like it, etc.

Well.  Fast forward almost 20 years…

The ICANN root is getting ready to expand.  A lot — like 1200 new extensions.  Your opinion about this can probably be discerned by choosing between the following.  Is that expansion of the number of top-level names (à la .com, .org, .de) more like:

  • going from 20 to 1200 kinds of potato chips, or
  • going from 20 to 1200 kinds of beer?

Whatever.  The interesting thing is that suddenly the ICANN root is starting to look a lot more like our old “extended” DNS.  Kinda out there.  Kinda crazy.  Not quite as stable.  And that rascal may cause ISPs and network administrators a lot of headaches.  I’ve written a post about the name-collision issue that describes one of these puzzlers.

If those kinds of problems crop up unexpectedly, ISPs and their network administrator customers are going to look for a really quick fix (“today!”…  “now!”…).  If you’re the network admin and your whole internal network goes haywire one day, you don’t have time to be nice.  The bosses are screaming at you.  You need something that will fix that whole problem right now.  You’ll probably call your ISP for help, so they need something that will help you — right now.

One thing that would fix that is a way to get back the old “regular” DNS, the one we have now, before all those whizbang new extensions.  You know, like Coke Classic.  I know that at our ISP, we’d probably be looking for something like that.  We’d either find one, or roll our own, so we could offer it to customers with broken networks.

We’d be all good tra-la-la network citizens about it — “don’t forget to switch back to the ICANN root when your networks are fixed” and so forth.  But it would get the emergency job done (unless your DNS is being bypassed by applications, but that’s a topic for another post).

That means that the ICANN root might not forever be the first-choice most-trusted root any more.  Gaining trust is slow and hard.  Losing trust can happen in a heartbeat.  I can’t speak for today’s ISPs, but back in the day we were not shy about creative brute force solutions to problems.

We might stop trusting the ICANN root and go looking for a better one.

Oh, and one more thing.  We might put “certified NSA-free” on the shopping list.  Just sayin’


Disclaimer: I was an “ICANN insider” when I wrote this post.  I don’t exactly know when that happened, but there you go.  I was a member of the ISPCP (ISP and Connectivity Provider constituency) of the GNSO (Generic Name Supporting Organization) where I participated in a lot of policy working groups and (briefly) represented the constituency on the GNSO Council.

I’m also a domain registrant – I registered a gaggle of really-generic domain names back before the web really took off.  I think I’m going to challenge John Berryhill to a Calvin-Ball debate as to whether new gTLDs help or hurt the value of those old .com names.  Back to the “chips vs beer” argument.

New gTLD preparedness project

Another scratchpad post — this time about what a “get ready for new gTLDs” project might look like.  I’ll try to write these thoughts in a way that scales from your own organization up to world-wide.

I’m doing this with an eye towards pushing this towards ICANN and new-gTLD applicants and saying “y’know, you really should be leading the charge on this.  This is your ‘product’ after all.”  Maybe we could channel a few of those “Digital Engagement” dollars into doing something useful?  You know, actually engage people?  Over a real issue?  Just sayin’

Here we go…

Why we need to do this

  • Impacts of the arrival of some new gTLDs could be very severe for some network operators and their customers
  • There may not be a lot of time to react
  • Progress on risk-assessment and mitigation-planning is poor (at least as I write this)
  • Fixes may not be identified before delegation
  • Thus, getting ready in advance is the prudent thing to do
  • We benefit from these preparations even if it turns out we don’t need them for the new gTLD rollout

The maddening thing is, we may not know what’s really going to happen until it’s too late to prepare — so we may have to make guesses.

New gTLD impacts could be very broad and severe, especially for operators of private networks that were planned and implemented long before new gTLDs were conceived of.  ISPs and connectivity providers may be similarly surprised.  Click HERE to read a blog post that I wrote about this — but here are some examples:

  • Microsoft Active Directory installations may need to be renamed and rebuilt
  • Internal certificates may need to be replaced
  • Long-stable application software may need to be revised
  • New attack vectors may arise
  • And so forth…
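One concrete piece of advance preparation the list above suggests: find out whether the internal name suffixes your network depends on have started resolving in public DNS (the basic name-collision scenario). A minimal sketch in Python; the suffix list and probe label are hypothetical placeholders for your own organization's internal names:

```python
# Preparedness check: flag internal name suffixes that have become
# publicly resolvable -- a possible new-gTLD name collision.
import socket

# Example internal suffixes -- substitute your own organization's list.
INTERNAL_SUFFIXES = ["corp", "home", "mail", "internal"]


def resolves_publicly(name, lookup=socket.getaddrinfo):
    """Return True if `name` resolves with the given lookup function."""
    try:
        lookup(name, None)
        return True
    except socket.gaierror:
        return False


def collision_report(suffixes, lookup=socket.getaddrinfo):
    """Map each suffix to whether a probe name under it resolves."""
    return {s: resolves_publicly("collision-probe." + s, lookup)
            for s in suffixes}
```

The `lookup` parameter is injectable so you can point the check at a specific resolver library (or a stub, for testing) instead of the system default; run it periodically and any suffix that flips from False to True is a candidate for the renaming and certificate work described above.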

The key point here is that in the current state of play, these risks are unknown.  Studies that would help understand this better are being lobbied for, but haven’t been approved or launched as I write this.

A “get ready” effort seems like a good idea

Given that we don’t know what is going to happen, and that some of us may be in a high-risk zone, it seems prudent to start helping people and organizations get ready.

  • If there are going to be failures, preparedness would be an effective way to respond
  • The issues associated with being caught by surprise and being under-prepared could be overwhelming
  • “Hope for the best, prepare for the worst” is a strategy we often use to guide family decisions — that rule might be a good one for this situation as well
  • Inaction, in the face of the evidence that is starting to pile up, could be considered irresponsible.

Looking on the bright side, it seems to me that there are wide-ranging benefits to be had from this kind of effort even if mitigation is never needed.

  • We could improve the security, stability and resiliency of the DNS for all, by making users and providers of those services more nimble and disaster resistant
  • If we “over prepare” as individuals and organizations, we could be in a great position to help others if they encounter problems
  • Exercise is good for us, and it gives all factions a positive focal point for our attention.  I’ll meet you on that common ground.

Here’s a way to define success

I’m not sure this part is right, but I like having a target to shoot at when I’m planning something, and this seems like a good start.


  • Minimize the impact of new-gTLD induced failures on the DNS, private and public networks, applications, and Internet users.
  • Make technical-community resources robust enough to respond in the event of a new-gTLD induced disruption
  • Maximize the speed, flexibility and effectiveness of that response.

Who does what

This picture is trying to say “everybody can help.”  I got tired of adding circles and connecting-lines, so don’t be miffed if you can’t find yourself on this picture.  I am trying to make the point that it seems to me that ICANN and the contracted parties have a different role to play than those of us who are on the edge, especially since they’re the ones benefiting financially from this new-gTLD deal.

Note my subtle use of color to drive that home.  Also note that there’s a pretty lively conversation about who should bear the risks.



How do we get from here to there?  If I were in complete command of the galaxy, here’s a high level view of how I’d break up the work.


As I refine this Gantt chart, it becomes clear to me that a) this is something that can be done, but b) it’s going to take some planning, some resources and (yes, dearly beloved) some time.  Hey!  I’m just the messenger.

We should get started

So here you are at the end of this picture book and mad fantasy.  Given all this, here’s what I’d do if this puzzler were left up to me.


And here are the things I’d start doing right away:

  • Agree that this effort needs attention, support and funding
  • Get started on the organizing
  • Establish a focal point and resource pool
  • Broaden the base of participation
  • Start tracking what areas are ready, and where there are likely to be problems

There you go.  If you would like this in slide-deck form to carry around and pitch to folks, click HERE for an editable PowerPoint version of this story.  Carry on.


Disclaimer:  While the ICANN community scrambles to push this big pile of risk around, everybody should be careful to say where they’re coming from.  I’m a member of the ISPCP constituency at ICANN, and represent a regional IXP (MICE) there.  I don’t think this issue generates a lot of risk for MICE because we don’t provide recursive resolver services and thus won’t be receiving the name-collision notifications being proposed by ICANN staff.  I bet some of our member ISPs do have a role to play, and will be lending a hand.

I am also a first-generation registrant of a gaggle of really-generic domain names.  New gTLDs may impact the value of those names but experts are about evenly divided on which way that impact will go.  I’m retired, and can’t conceive of how I’ll be making money from any activity in this arena.