Soundscapes — audio recorders compared

I decided to compare some digital recorders for the purposes of recording soundscapes here at Prairie Haven.  I’ve got two of them, bought at different times for different purposes, and I was curious to see how different they were and whether it was worth carrying one of them instead of just using my phone.  The nice thing about the phone is that it’s with me all the time, but I’d start carrying one of the others if they were better.

The contestants

Apple iPhone 6s — this is what I’ve used to record most of the soundscapes here.  It’s always available, dead simple to use, but it’s mono and doesn’t pretend to be anything fancy.

Sony ICD-SX712 — a handy digital recorder for meetings, stereo.

Tascam DR-40 — a fancier digital recorder I bought as a “logging” recorder for music-studio stuff, also stereo.

The process

I sat on the porch after the Evening Walk tonight, plunked all three recorders down on the table and hit the record button on all three.   I took the resulting digital audio files and trimmed them down to the same 10 second “scene” and made them the same loudness.  Two of the recorders are stereo and for fairness I only used one channel of their recordings so all of these are mono.  All three of them can be heard on this 30 second MP3 recording.
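
I did the trimming and loudness-matching by hand in an audio editor, but if you’d rather script that step, here’s a rough pydub sketch (the filenames and start time are placeholders, and pydub needs ffmpeg installed to write the MP3):

from pydub import AudioSegment

files = ["iphone.wav", "sony.wav", "tascam.wav"]   # placeholder filenames
start_ms = 15_000                                  # where the shared 10-second "scene" starts
target_dbfs = -20.0                                # common loudness target

clips = []
for f in files:
    seg = AudioSegment.from_file(f)
    clip = seg[start_ms:start_ms + 10_000]         # same 10-second scene from each recording
    if clip.channels > 1:
        clip = clip.split_to_mono()[0]             # keep one channel so everything is mono
    clip = clip.apply_gain(target_dbfs - clip.dBFS)  # match average loudness
    clips.append(clip)

# string the three clips together into one 30-second comparison file
(clips[0] + clips[1] + clips[2]).export("comparison.mp3", format="mp3")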

Listen to the samples and compare the results.

If you really want to play along, play the sample and decide which recording you like best before decoding which one is which.  Here are the three test clips.

And here’s how you tell which is which.   Start at zero and do the arithmetic for the one you liked (first, second or third).

1st recording is the best? — Add 11.  Add 14.  Subtract 20.

2nd recording is the best? — Add 7.  Subtract 19.  Add 22.

3rd recording is the best?  — Add 5.  Add 12.  Subtract 13.

If you came up with “10” as your answer, you like the Sony the best ($100 device)

If you came up with “4” as your answer, you like the iPhone the best (“free” as in beer)

If you came up with “5” as your answer, you like the Tascam the best ($180 device)

 

Are projections of registration growth in generic top-level domains realistic?

Intro

Exactly five years ago today, I published this little rant about the growth rates projected for the new “generic top level domains” that were being introduced by ICANN at the time.  You know, domain names that end in things like .run or .lol or .bot (yep, those are all real alternatives to .com or .org if you’d like to strike out into new territory).

I decided to update it with the way things have turned out.

Original post – March 7, 2014

I was reading the Appendices of a recent ICANN report when I came across an interesting assumption built into their analysis.  The boffins who did this portion of the study are projecting 22% annual growth in total domains for the next five years.

Here’s the reference that caught my eye:

Growth assumptions

You can find this on page 150 of the newly-released Whois study — done by the Expert Working Group on gTLD Directory Services.  Here’s the link to the study — https://www.icann.org/en/system/files/files/final-report-06jun14-en.pdf

The thing that struck me was that whopping 22% annual growth rate.  Partly this is due to some pretty optimistic assumptions about the growth in the number of new gTLDs (I’m willing to bet any money that the beginning of 2015 will not see 200,000 gTLD domains in the root).  But leaving that aside, 22% year-over-year growth isn’t something we’ve seen since the pre-bubble glory days.

Here’s a little chart I put together to show you the history.  We’re running about 4% growth right now.  A long way from 22%.  If I were building budgets for ICANN, I’d be seriously researching where these projections are coming from and applying a pretty serious quantity of cold water.  Just saying…
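
For a sense of scale, here’s a quick back-of-the-envelope comparison in plain Python (the starting count is just illustrative) of where 22% a year gets you after five years versus the roughly 4% we’re actually seeing:

base = 100_000_000   # illustrative starting count of domains
for label, rate in [("22% projected", 0.22), ("4% actual-ish", 0.04)]:
    total = base * (1 + rate) ** 5
    print(f"{label}: {total / base:.2f}x after 5 years")
# 22% compounds to about 2.7x the starting number; 4% gets you about 1.22x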

gTLD growth rate

UPDATE: March 7, 2018

I decided to update this post given that we’re at the end-point of that projection that caught my eye (start of 2018).  Here’s my last version, as of the end of 2017 – the first two chunks are the same, the reddish one is what happened.

I think this falls in the “not even close” zone.  New gTLDs never approached 22% a year growth and no new rounds of gTLDs have been added since the first batch of 1000 or so hit.   Yep, toward the end, the growth rate was headed for negative territory as lots of the aggressive “first year for free” domain names weren’t renewed.  The world needs a zillion new subsequent-round gTLDs exactly… why?

Notes to the nit-pickers:
  • The assumptions were in a scoping study done by IBM to determine how much a system would cost — so these numbers could have been pretty old, given how long the EWG had been running.  I worried at the time that ICANN was using similarly optimistic numbers in their budget projections.  Looks like they did, as they’re in retrenchment mode right now.
  • These numbers do not include ccTLDs (since IBM didn’t include them)
  • Verisign was quite far behind in publishing their quarterly statistics reports, so I finished up with RegistrarStats data.  No warranty expressed or implied, especially since the site was decommissioned just as that annual growth rate started to approach zero.
  • Here’s a link to the 2018 version of the file that creates the charts.  The payday tab is the first (far left) one — cleverly named “Sheet 1” — which contains the data, the calculations and the charts.   Warnings: the layout is ugly and the documentation is sparse.   The “Notes” page has URLs for the data sources, although they may not work any more.  The rest of the tabs are (some of) the (erratic, sort-of-monthly) downloads from RegistrarStats.  Click HERE for the file (about 1 MB).

Wide tires on a Polaris Ranger EV

We’ve been noticing that the Ranger has been pretty tough on our trails here at Prairie Haven.  Our pet theory is that the EV (plug-in electric) version of the Ranger is quite a bit heavier than a normal one and that the standard (narrow, aggressive-tread) tires add to the problem.

The Mission: wider tires for the Ranger EV

We’ve just mounted four Carlisle 25x11x12 Multi-Trac (574369) turf tires.  These are a little wider than the standard tires and have a much less aggressive tread pattern.  Here’s Marcie on her test drive — early returns are positive.

You can see that the footprint is much wider than the standard tires if you click on the picture and look at where the tires are dirty from Marcie’s 200 yard test drive.  I think these may be a little over-inflated as well.  Taking them down to about 7 psi may improve this even more.

The Tricky Bit: front fitment

The back tires are just mounted on the standard rims.  They’re a little too wide for the rims so we added inner tubes (we’d already done that to the stock tires, so those just moved over to the new ones).

The front tires are also mounted on the stock rims, but they need spacers to clear the suspension.  You can track down the 2-inch spacers we used by including “WP024” in your search for 2-inch spacers for a Ranger ATV/UTV.  Four of them (we only used two) will cost about $100 on Amazon/eBay.  Here’s what they look like on our Ranger.

 

Pro tip – my half-inch-drive sockets were too big to drive the spacer’s lug nuts.  The lug nuts that come with the spacer need an 11/16-inch or 17 mm deep socket, and they reside at the bottom of deep wells in the spacer (take a look at the first photo — the empty holes are the wells I’m talking about; there are lug nuts at the bottom).  1/2-inch-drive sockets are too big/thick to insert into those deep wells.  I bought a 3/8-inch-drive, 11/16-inch deep socket at the hardware store that fit fine.  Take the spacer along on the shopping trip — all this will make more sense once the spacer (and the lug nuts that come with it) are in your hands.

Here’s how it looks with the tire mounted…

The key clearance is between the tire and the front suspension.  Mounted without spacers, the tires don’t quite clear.  Here’s a picture showing the clearance now that they’re mounted with the spacer.

And here’s a picture that shows that the wide tires still clear the fender at full lock — by about an inch.  Cozy, but not a problem for our laid-back use of the Ranger.  I wouldn’t want to race on this rig.

Push custom light guides and knobs from Kontakt to Komplete Kontrol

A scratchpad post to remind myself how to configure a Kontakt instrument so that light guides and knobs will show up correctly in Komplete Kontrol.  There’s a video walkthrough at the end of this post.

Here’s a picture of the destination – the light guides appear on the keyboard and Komplete Kontrol knobs are mapped to the patch in two banks.

Light Guides

Use the Factory/Utilities/Set Key Color script to set the key colors.  This is saved as part of the instrument, in Kontakt.

Mapping to knobs on Komplete Kontrol

Use “host automation” within Komplete Kontrol to map knobs to controls within the Kontakt patch.  These mappings are saved as user presets within Komplete Kontrol, not Kontakt.

Use the Factory/Utilities/6 MIDI Controllers script to “make controls visible” if they’re not directly available for host automation.  This takes two “save” actions.  The script is saved into the Kontakt instrument, the host automation mapping is saved as a user preset within Komplete Kontrol.

Here’s a video walkthrough.

Scratchpad post: clearing up a failed nameserver transfer between Godaddy and Cloudflare

This one’s going to get the least hits ever, I bet.

I transferred the authoritative nameservers of a domain from Godaddy to Cloudflare and things got stuck.  The NS records propagated pretty well, but they never got picked up by Google’s or Verisign’s public DNS (check with https://www.whatsmydns.net).  Since my ISP uses Google’s 8.8.8.8 server for customer DNS, I couldn’t reach my sites and mail got goofy.

The problem turned out to be outdated DS records that lingered at Godaddy after I tried their DNSSEC product, had all sorts of problems and turned it off.   DS records aren’t deleted automatically in that process — they need to be deleted manually on the Domain Details/Settings tab.  Who knew?  Why should I have to know??

Google (and Cloudflare, the destination authoritative server) saw the outdated DS records and ruled the domain bogus.  In the case of Cloudflare, it never completed the setup process (constantly rescanning the nameservers and saying “Pending Nameserver Update”).
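
If you want to check for leftover DS records yourself, here’s a minimal dnspython sketch (dnspython 2.x; example.com is a placeholder for your domain) that asks Google’s resolver whether the parent zone is still publishing DS records:

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]          # Google public DNS

domain = "example.com"                      # placeholder -- use your domain
try:
    answer = resolver.resolve(domain, "DS") # DS records are served by the parent zone
    print(f"DS records still published for {domain}:")
    for rr in answer:
        print(" ", rr)
    print("If DNSSEC is turned off at the new host, these need to be deleted at the registrar.")
except dns.resolver.NoAnswer:
    print(f"No DS records for {domain} -- no DNSSEC chain of trust is expected.")
except dns.resolver.NXDOMAIN:
    print(f"{domain} doesn't exist at all.")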

cloudflare_error

Google’s public DNS simply wouldn’t resolve the names and returned SERVFAIL.  Here’s an example of the dig command when it was failing (note the period at the end of the command).

dig @8.8.8.8 www.example.com.

; <<>> DiG 9.8.3-P1 <<>> @8.8.8.8 www.example.com.
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 42587
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.example.com. IN A

Here’s the result of a DNSViz.net query for the name.  It’s pretty incomprehensible, but if you get a page that looks like this, you’ve probably got the same problem I had.

dnsvis_error

This was the page that cracked the case for me:

https://developers.google.com/speed/public-dns/docs/troubleshooting

One last thing.  Here’s the page where you can flush the Google DNS cache for a domain.

https://developers.google.com/speed/public-dns/cache

WiiMote -> OSCulator -> Wekinator -> OSCulator -> Ableton Live

This is a scratchpad post to remind myself how to put together a machine-learning system on a Mac.  This won’t work on a PC as some of the software is Mac-only.  In this configuration a WiiMote (input device) is connected to Wekinator (real time interactive machine-learning software) through OSCulator (OSC bridging and routing software).  Wekinator outputs are mapped to MIDI to drive Ableton Live through another instance of OSCulator.

Here is a block diagram (clicking on it makes it bigger)

WiiMote OSCulator Wekinator Ableton Live block diagram

Before beginning

Grab a copy of the following .oscd templates to simplify the connection from OSCulator to Wekinator — https://github.com/fiebrink1/wekinator_examples/tree/master/inputs/Wiimote/WiimoteViaOsculator_MacOnly .  This example uses the 3-input template.

WiiMote to OSCulator

WiiMote to OSCulator is a built-in feature of OSCulator.   Open OSCulator with the 3-input template linked above.  Open the sliding panel on the right side of the main OSCulator page, turn on the WiiMote, and click “Start Pairing”.   Here’s the way it looks when it’s working.

WiiMote to OSCulator

OSCulator to Wekinator

This first instance of OSCulator translates the motions of the WiiMote into OSC messages and pushes them to Wekinator on Wekinator’s default UDP port (6448).  If you’re using the example .oscd file, this should be working now.  I’ve included the mapping if you are building this from an empty OSCulator file.
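
If you want to sanity-check the Wekinator side without a WiiMote in hand, here’s a small python-osc sketch that sends three fake input values to port 6448 (the /wek/inputs address and the three-float payload are Wekinator’s defaults as far as I know, so treat them as assumptions):

import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)       # Wekinator's default input port

# send a slow sine wave on three inputs, standing in for WiiMote pitch/roll/yaw
for i in range(200):
    t = i / 20.0
    values = [math.sin(t), math.sin(t / 2), math.sin(t / 3)]
    client.send_message("/wek/inputs", values)    # Wekinator's default input address
    time.sleep(0.05)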

OSCulator to Wekinator

If you are starting from an empty OSCulator file, here is what the Parameters page looks like in this first instance of OSCulator.   If Wekinator is running, these entries should be available for selection through the drop-down menus.

OSCulator to Wekinator parameters page

Wekinator to OSCulator (this is the second instance of OSCulator)

The default Wekinator output port is 12000.  The second instance of OSCulator (instantiated through File/New) is set to listen on port 12000.  If Wekinator is running, OSCulator will pick up the Wekinator outputs and they should be displayed in the Messages column of OSCulator.

Wekinator to OSCulator 2nd instance

2nd instance of OSCulator to Ableton Live

In this example, the second instance of OSCulator has been configured to convert the OSC messages from Wekinator into MIDI CC messages.  I picked CC numbers in the 85-90 range because that range is generally not used by other devices.
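
For what it’s worth, the job this second OSCulator instance is doing can be sketched in a few lines of Python with python-osc and mido.  This is only a rough stand-in, not a replacement for the OSCulator setup described here: /wek/outputs and port 12000 are Wekinator’s defaults, and the MIDI port is whatever your system opens by default.

import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

midi_out = mido.open_output()                    # default MIDI output port

def on_wek_output(address, *values):
    # map each Wekinator output (expected 0.0-1.0) onto CC 85, 86, 87, ...
    for i, v in enumerate(values):
        cc_value = max(0, min(127, int(v * 127)))
        midi_out.send(mido.Message("control_change", control=85 + i, value=cc_value))

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wek_output)    # Wekinator's default output address

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)  # Wekinator's default output port
server.serve_forever()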

Once OSCulator is producing MIDI, Ableton Live can be mapped to those MIDI signals in the normal way.  Turn on the MIDI Map Mode switch (blue, in the upper right corner, says “MIDI”), click on the control that should receive the MIDI signals, and toggle the device on and off in OSCulator.  The mappings will appear in the box on the left (under “MIDI Mappings”) as they’re added.  I found it useful to turn all the devices off (untick the boxes in the left column) before starting the mapping.

OSCulator 2nd instance to Ableton Live

Notes and Tips

  • I found that controls sometimes wouldn’t work.  It turned out that the control was sometimes sitting at a higher value than the maximum value coming to it from OSCulator, so it wouldn’t “pick up” the MIDI signal.  Setting the unresponsive control (e.g. “Warmth”) to zero solves that problem.
  • Signals coming into Live were quite jittery at first.   Cranking up the “Smoothing” settings in the 1st instance of OSCulator fixed that.
    Smoothing

Drone links

This is a scratchpad post for links to Drone articles that have caught my eye.

Click here – to get back to the “Aerial Favorites” page on PrairieHaven.com

Why the Drone Revolution Can’t Get off the Ground

Someone Crashed a Drone on The White House Lawn

NYTimes article about the White House drone

Of Guns and Drones

Giving the Drone Industry Leeway to Innovate

Days of Wine and Droning – Gail Collins

Audubon Society – How Drones Affect Birds

The Verge – Deadmau5 on Drones (YouTube video)

IEEE Spectrum – California’s No-Drone Zone

HackADay – 1 hour home-brew drone build project

Bloomberg – FAA says small drones will provide significant benefits

Politico – President Obama executive order on drone privacy due soon

NYTimes – New to the archeologist’s toolkit, the drone

NYTimes Editorial – Regulating the drone economy

IEEE Spectrum – What Might Happen If an Airliner Hit a Small Drone?

Bloomberg – What the French know about drones that Americans don’t

 

More bottom!

Samantha Dickinson Tweeted this photo from the ICANN meeting today and tagged it #VolunteerFatigue.  I’m living proof.

ccIANA

Let’s say that each of those 7 working groups needs 4 volunteers — that’s almost 30 people.  Just from the ccNSO.  Just for upcoming working groups.  Never mind the GNSO, ALAC, SSAC and GAC.  A rough extrapolation puts the total required at over 100 volunteer community members just to handle the IANA/Accountability/Transition work.

ICANN is dangerously thin at the bottom of the bottom-up process.  Are there that many people with the experience/time/expertise/will available?  What happens to all the other Working Group work in the meantime?

 

Difference between a customer and a client

There’s a difference between being a customer and being a client.  People on both ends of a relationship always get into trouble when they don’t understand this.

A customer – is always right

A client – expects to hear the truth, especially when it’s unpleasant

This is true in professional relationships.  We are the clients of doctors, lawyers and accountants, who are expected to give us good advice, especially when it’s unpleasant.  But we are also their customers, and customers are always right: reports should be on time, of high quality, and delivered at a fair price.

The same goes for religious relationships.  When we are in the worship service, we are the clients of the person leading the show.  But we are also customers, and customers are right — the coffee should taste good, the roof shouldn’t leak and we have every right to be treated with respect.

In higher education the student customer should get good value for their tuition dollar, grades should arrive on time, campus services should be superb and their teachers should be engaged and respectful.  But the student is the client of the teacher when it comes to grades and feedback on their work.

The relationship between the regulated and the regulator is a mix of services (a customer relationship) and policy and compliance (stakeholder and client relationships).

Be careful when you cross the streams.

 

 

Name Collisions II — A call for research

This post is a heads-up to all uber-geeks about a terrific research initiative to try to figure out causes and mitigation of name-collision risk.  There’s a $50,000 prize for the first-place paper, a $25,000 prize for the second-place paper and up to five $10,000 prizes for third-place papers.  That kind of money could buy a lot of toys, my peepul.  And the presentation of those papers will be in London — my favorite town for curry this side of India.  Interested?  Read on.  Here’s a link to the research program — you can skip the rest of this post and get right to the Real Deal by clicking here:

www.NameCollisions.net

Background and refresher course — what is the DNS name-collision problem?

Key points

  • Even now, after months of research and public discussion, I still don’t know what’s going to happen
  • I still don’t know what the impact is going to be, but in some cases it could be severe
  • Others claim to know both of those things but I’m still not convinced by their arguments right now
  • Thus, I still think the best thing to do is learn more
  • That’s why I’m so keen on this research project.

Do note that there is a strong argument raging in the DNS community about all this.  There are some of us (myself included) who, at the time, had never met or even heard of the DNS purists who now maintain that this whole problem is our fault and that none of this would have happened if we’d all configured our private networks with fully-qualified domain names right from the start.

Where were those folks in 1995 when I opened my first shrink-wrapped box of Windows NT and created the name that would ultimately become the root of a huge Active Directory network with thousands of nodes?  Do you know how hard it was to get a domain name back then?  The term “registrar” hadn’t been invented yet.  All we were trying to do was set up a shared file, print and mail server for crying out loud.  The point is that there are lots of legacy networks that look like the one depicted below, some of them are going to be very hard and expensive to rename, and some of them are likely to break (perhaps catastrophically) when second level names in new gTLDs hit the root.  m’Kay?

 

Private networks, the way we’ve thought about them for a decade

Here’s my depiction of the difference between a private network (with all kinds of domain names that don’t route on the wider Internet) and the public Internet (with the top level names you’re familiar with) back in the good old days before the arrival of 1400 new gTLDs.

Slide1

 

Private networks, the way they may look AFTER 1400 new gTLDs get dropped into the root

The next picture shows the namespace collision problem that the research efforts should be aimed at addressing.  This depiction is still endorsed by nobody, your mileage may vary, etc. etc.  But you see what’s happening.  At some random point in the future, when a second-level name matching the name of your highly-trusted resource gets delegated, there’s the possibility that traffic which has consistently been going to the right place in your internal network will suddenly be routed to an unknown, untrusted destination on the worldwide Internet.

Slide2

 

The new TLDs may unexpectedly cause traffic that you’re expecting to go to your trusted internal networks (or your customer’s networks) to suddenly start being routed to an untrusted external network, one that you didn’t anticipate.  Donald Rumsfeld might call those external networks “unknown unknowns” — something untrusted that you don’t know about in advance.

Think of all the interesting and creative ways your old network could fail.  Awesome to contemplate, no?  But wait…

What if the person who bought that matching second-level name in a new gTLD is a bad-actor?  What if they surveyed the error traffic arriving at that new gTLD and bought that second-level name ON PURPOSE, so that they could harvest that error traffic with the intention of doing harm?  But wait…

What if you have old old old applications that are hard-coded to count on a consistent NXDOMAIN response from a root server?  Suppose that the application gets a new response when the new gTLD gets delegated (and thus the response from the root changes from the expected NXDOMAIN to an unexpected pointer to the registry).  What if the person who wrote that old old old application is long gone and the documentation is…  um…   sketchy?  But wait…

To top it all off, with this rascal, problems may look like a gentle random rain of breakage over the next decade or so as 2nd-level names get sold.  It’s not going to happen on gTLD-delegation day; it’s going to happen one domain at a time.  Nice isolated random events sprinkled evenly across the world.  Hot damn.  But wait…

On the other end of the pipe, imagine the surprise when some poor unsuspecting domain-registrant lights up their shiny new domain and is greeted by a flood of email from network operators who are cranky because their networks just broke.  What are THEY going to be able to do about those problems?  Don’t think it can happen?  Check out my www.corp.com home page — those cats are BUSY.  That domain gets 2,000,000 error hits A DAY.  Almost all of it from Microsoft Active Directory sites.

So argue all you want.  From my perch here on the sidelines it looks like life’s going to get interesting when those new gTLDs start rolling into the root.  And that, dear reader, is an introduction to the Name Collision problem.

 

Mitigation approaches

Once upon a time, 3 or 4 months ago when I was young and stupid, I thought I had a good way to approach this problem.  I’m going to put it in this post as well, but then I’m going to tell you why it won’t work.  It’s another illustration of why we need this research, and we need it now.

Start here:

If you have private networks that use new gTLDs (look on this list), you’d best start planning for a future when those names (and any internal certificates using those names) may stop working right.

A bad solution:

In essence, I thought the key to this puzzler was to take control of when the new gTLDs become visible to your internal network.  It’s still not a terrible idea, but I’ve added a few reasons why it won’t work down at the end.  Here’s the scheme that I cooked up way back then.

By becoming authoritative for new gTLDs in your DNS servers now, before ICANN has delegated them, you get to watch the NXD error traffic right now rather than having to wait for messages from new registries.  Here’s a list of the new gTLDs to use in constructing your router configuration.
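
Here’s a rough Python sketch of the “watch the NXD error traffic” step.  It assumes a BIND-style query log where each line contains “query: <name> IN <type>”, plus a hand-made list of the applied-for strings you care about; adjust both for your own resolver and the full list linked above.

import re
from collections import Counter

QUERY_RE = re.compile(r"query: (\S+) IN ")
NEW_GTLDS = {"home", "corp", "mail", "global", "site", "network", "group"}  # sample strings only

hits = Counter()
with open("query.log") as log:                    # placeholder path to your resolver's query log
    for line in log:
        match = QUERY_RE.search(line)
        if not match:
            continue
        qname = match.group(1).rstrip(".").lower()
        tld = qname.rsplit(".", 1)[-1]
        if tld in NEW_GTLDS:
            hits[tld] += 1

for tld, count in hits.most_common():
    print(f".{tld}: {count} queries from inside the network")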

Slide3

 

This is the part where you look at the NXD traffic and find the trouble spots.  Then, with a mere wave of my hand and one single bullet point, I encourage you to fix all your networks.  Maybe you’ve got a few hundred nodes of a distributed system all over the world that you need to touch?  Shouldn’t be a problem, right?

Slide4

 

This is the Good Guy part of this approach.  Because we all subscribe to the One World, One Internet, Everybody Can Reach Everything credo, we will of course remember to remove the preventative blocking from our routers just as soon as possible.  Right?  Right?

Slide5

The reasons why this won’t work:

The first thing that blows my idea out of the water is that you probably don’t have complete control over the DNS provider your customers use.  I still think this is a pretty good idea in tightly-run corporate shops that don’t permit end users to modify the configuration of their machines.  But in this Bring Your Own Device world we live in, there’s going to be a large population of people who configure their machines to point at DNS providers who aren’t blocking the names that conflict with your private network space.

Let’s assume for a minute that everything is fine in the internal network, and the corporate DNS resolver is blocking the offending names while repairs are being made (hopefully cheaply).  Suppose a road warrior goes out to Starbucks and starts using a laptop that’s configured to point at Google’s 8.8.8.8 DNS resolver.  In the old days before new gTLDs, the person would fire up their computer, go to the private name, the query would fail, and they would be reminded to fire up the VPN to get to those resources.  Tomorrow, with a conflicting new gTLD in the root, that query might succeed, but they wouldn’t be going to the right place.

Slide 22

 

Here’s the second problem.  My tra-la-la scheme above assumes that most mitigation will be easy and successful.  But what if it’s not?  What if you have a giant Active Directory tree which, by all accounts, is virtually impossible to rename without downtime?  What if you have to “touch” a LOT of firmware in machines that are hard-wired to use new gTLDs?  What if vendors haven’t prepared fixes for the devices that are on your network looking at a new gTLD with the presumption that it won’t route to the Internet (yet now it does)?  Or the nightmare scenario — something breaks that has to be diagnosed and repaired in minutes?

Slide 23

The research project

See why we need you to look hard at this problem?  Like, right now??  ICANN is already delegating these domains into the root.  Here’s a page that lists the ones that have already been delegated.

http://newgtlds.icann.org/en/program-status/delegated-strings

If you see one of your private network names on THIS list, you’re already in the game.  Hooyah!  So this is moving FAST.  This research should have been done years ago, long before we got to this stage.  But here we are.  We, the vast galaxy of network operators and administrators who don’t even know this is coming, need your help.  Please take a look at the NameCollisions.net site and see if you can come up with some cool ideas.  I hope you win — because you’ll help the rest of us a lot.  I’ll buy you a curry.

 

 

Commentary on Fadi Chehadé’s Montevideo Statement

Beckstrom

I love toiling at the bottom of the bottom-up ICANN process.  And it’s also quite entertaining to watch senior ICANN “managers” running wild and free on the international stage.  The disconnect between those two things reminds me of the gulf that usually exists between the faculty and administration in higher education institutions.  Both sides think they run the joint.  That same gulf exists in ICANN and, while I was hopeful for a while that the new guy (Fadi Chehadé) was going to grok the fullness, it’s starting to slide into the same old pattern.

The picture above is of the last guy (Rod Beckstrom)

The audio file linked below is a 2-minute mashup of the new guy Fadi’s (quite unsatisfactory) answer to Kristina Rosette’s recent question about whether he has community and Board air cover for a recent (pretty controversial) statement he made.  Pretty inside baseball for all you regulars, but it may map pretty well to your situation even though the details differ.

Click

>HERE<

to listen to the 2-minute clip (but turn the volume down if you do it at work).

UPDATE:

Here’s a link to the public post that points at the various transcripts of the call

http://gnso.icann.org/mailing-lists/archives/council/msg15117.html

Interestingly, while the audio transcript is still available, the links to the written transcripts that are contained in that email have disappeared.  Since I have copies of those files from when I downloaded them earlier today, I’ve posted them to this site.  Here are links to those missing documents.

Word document — full transcript — click HERE

Word document — Adobe Chat transcript — Click HERE

What if people stop trusting the ICANN root?

Courtesy FreeDigitalPhotos.net

So once upon a time I worked at a terrific ISP in St. Paul, MN.  Back then, before the “grand bargain” that led to the shared hallucination known as ICANN, there were several pretty-credible providers of DNS that later (somewhat disparagingly) became known as “alternate” root providers.

In those days, we offered our customers a choice.  You could use our “regular” DNS that pointed at what later became the ICANN-managed root, or you could use our “extended” DNS servers that added the alternates.  No big deal, you choose, your mileage may vary, if you run into trouble we’d suggest that you switch back to “regular” and see if things go better, let us know how you like it, etc.

Well.  Fast forward almost 20 years…

The ICANN root is getting ready to expand.  A lot — like 1200 new extensions.  Your opinion about this can probably be discerned by choosing between the following.  Is that expansion of the number of top level names (a la .com, .org, .de) more like:

  • going from 20 to 1200 kinds of potato chips, or
  • going from 20 to 1200 kinds of beer?

Whatever.  The interesting thing is that suddenly the ICANN root is starting to look a lot more like our old “extended” DNS.  Kinda out there.  Kinda crazy.  Not quite as stable.  And that rascal may cause ISPs and network administrators a lot of headaches.  I’ve written a post about the name-collision issue that describes one of these puzzlers.

If those kinds of problems crop up unexpectedly, ISPs and their network administrator customers are going to look for a really quick fix (“today!”…  “now!”…).  If you’re the network admin and your whole internal network goes haywire one day, you don’t have time to be nice.  The bosses are screaming at you.  You need something that will fix that whole problem right now.  You’ll probably call your ISP for help, so they need something that will help you — right now.

One thing that would fix that is a way to get back the old “regular” DNS, the one we have now, before all those whizbang new extensions.  You know, like Coke Classic.  I know that at our ISP, we’d probably be looking for something like that.  We’d either find one, or roll our own, so we could offer it to customers with broken networks.

We’d be all good tra-la-la network citizens about it — “don’t forget to switch back to the ICANN root when your networks are fixed” and so forth.  But it would get the emergency job done (unless your DNS is being bypassed by applications, but that’s a topic for another post).

That means that the ICANN root might not forever be the first-choice most-trusted root any more.  Gaining trust is slow and hard.  Losing trust can happen in a heartbeat.  I can’t speak for today’s ISPs, but back in the day we were not shy about creative brute force solutions to problems.

We might stop trusting the ICANN root and go looking for a better one.

Oh, and one more thing.  We might put “certified NSA-free” on the shopping list.  Just sayin’

____________

Disclaimer: I was an “ICANN insider” when I wrote this post.  I don’t exactly know when that happened, but there you go.  I was a member of the ISPCP (ISP and Connectivity Provider constituency) of the GNSO (Generic Name Supporting Organization) where I participated in a lot of policy working groups and (briefly) represented the constituency on the GNSO Council.

I’m also a domain registrant – I registered a gaggle of really-generic domain names back before the web really took off.  I think I’m going to challenge John Berryhill to a Calvin-Ball debate as to whether new gTLDs help or hurt the value of those old .com names.  Back to the “chips vs beer” argument.

New gTLD preparedness project

Another scratchpad post — this time about what a “get ready for new gTLDs” project might look like.  I’ll try to write these thoughts in a way that scales from your own organization up to world-wide.

I’m doing this with an eye toward pushing it to ICANN and new-gTLD applicants and saying “y’know, you really should be leading the charge on this.  This is your ‘product’ after all.”  Maybe we could channel a few of those “Digital Engagement” dollars into doing something useful?  You know, actually engage people?  Over a real issue?  Just sayin’

Here we go…

Why we need to do this

gtld-get-ready1

 

 

  • Impacts of the arrival of some new gTLDs could be very severe for some network operators and their customers
  • There may not be a lot of time to react
  • Progress on risk-assessment and mitigation-planning is poor (at least as I write this)
  • Fixes may not be identified before delegation
  • Thus, getting ready in advance is the prudent thing to do
  • We benefit from these preparations even if it turns out we don’t need them for the new gTLD rollout

The maddening thing is, we may not know what’s really going to happen until it’s too late to prepare — so we may have to make guesses.

New gTLD impacts could be very broad and severe, especially for operators of private networks that were planned and implemented long before new gTLDs were conceived of.  ISPs and connectivity providers may be similarly surprised.  Click HERE to read a blog post that I wrote about this — but here are some examples:

  • Microsoft Active Directory installations may need to be renamed and rebuilt
  • Internal certificates may need to be replaced
  • Long-stable application software may need to be revised
  • New attack vectors may arise
  • And so forth…

The key point here is that in the current state of play, these risks are unknown.  Studies that would help understand this better are being lobbied for, but haven’t been approved or launched as I write this.

A “get ready” effort seems like a good idea

Given that we don’t know what is going to happen, and that some of us may be in a high-risk zone, it seems prudent to start helping people and organizations get ready.

  • If there are going to be failures, preparedness would be an effective way to respond
  • The issues associated with being caught by surprise and being under-prepared could be overwhelming
  • “Hope for the best, prepare for the worst” is a strategy we often use to guide family decisions — that rule might be a good one for this situation as well
  • Inaction, in the face of the evidence that is starting to pile up, could be considered irresponsible.

Looking on the bright side, it seems to me that there are wide-ranging benefits to be had from this kind of effort even if mitigation is never needed.

  • We could improve the security, stability and resiliency of the DNS for all, by making users and providers of those services more nimble and disaster resistant
  • If we “over prepare” as individuals and organizations, we could be in a great position to help others if they encounter problems
  • Exercise is good for us.  And gives all factions a positive focal point for our attention.  I’ll meet you on that common ground.

Here’s a way to define success

I’m not sure this part is right, but I like having a target to shoot at when I’m planning something, and this seems like a good start.

Objectives:

  • Minimize the impact of new-gTLD induced failures on the DNS, private and public networks, applications, and Internet users.
  • Make technical-community resources robust enough to respond in the event of a new-gTLD induced disruption
  • Maximize the speed, flexibility and effectiveness of that response.

Who does what

This picture is trying to say “everybody can help.”  I got tired of adding circles and connecting-lines, so don’t be miffed if you can’t find yourself on this picture.  I am trying to make the point that it seems to me that ICANN and the contracted parties have a different role to play than those of us who are on the edge, especially since they’re the ones benefiting financially from this new-gTLD deal.

Note my subtle use of color to drive that home.  Also note that there’s a pretty lively conversation about who should bear the risks.

gtld-get-ready2

Approach

How do we get from here to there?  If I were in complete command of the galaxy, here’s a high level view of how I’d break up the work.

gtld-get-ready3

As I refine this Gantt chart, it becomes clear to me that a) this is something that can be done, but b) it’s going to take some planning, some resources and (yes, dearly beloved) some time.  Hey!  I’m just the messenger.

We should get started

So here you are at the end of this picture book and mad fantasy.  Given all this, here’s what I’d do if this puzzler were left up to me.

gtld-get-ready4

And here are the things I’d start doing right away:

  • Agree that this effort needs attention, support and funding
  • Get started on the organizing
  • Establish a focal point and resource pool
  • Broaden the base of participation
  • Start tracking what areas are ready, and where there are likely to be problems

There you go.  If you would like this in slide-deck form to carry around and pitch to folks, click HERE for an editable Powerpoint version of this story.  Carry on.

__________________

Disclaimer:  While the ICANN community scrambles to push this big pile of risk around, everybody should be careful to say where they’re coming from.  I’m a member of the ISPCP constituency at ICANN, and represent a regional IXP (MICE) there.  I don’t think this issue generates a lot of risk for MICE because we don’t provide recursive resolver services and thus won’t be receiving the name-collision notifications being proposed by ICANN staff.  I bet some of our member ISPs do have a role to play, and will be lending a hand.

I am also a first-generation registrant of a gaggle of really-generic domain names.  New gTLDs may impact the value of those names but experts are about evenly divided on which way that impact will go.  I’m retired, and can’t conceive of how I’ll be making money from any activity in this arena.

New gTLDs and namespace collision

This is another scratch-pad post that’s aimed at a narrow audience —  network geeks, especially in ISPs and corporations.  The first bit is a 3-minute read, followed by a 20-minute “more detail” section.  If you’re baffled by this, but maybe a little concerned after you read it, please push this page along to your network-geek friends and colleagues and get their reaction.  Feel free to repost any/all of this.

Key points before we get started

  • I don’t know what’s going to happen
  • I don’t know what the impact is going to be, but in some cases it could be severe
  • Others claim to know both of those things but I’m not convinced by their arguments right now
  • Thus, I think the best thing to do is learn more, hope for the best and prepare for the worst
  • My goal with this post is just to give you a heads-up

If I were you, I’d:

  • Scan my private network and see if any of my names collide with the new gTLDs that are coming (there’s a quick scanning sketch just after this list)
  • Check my recursive DNS server logs and see if any name collisions are appearing there
  • Start thinking about remediation now
  • Participate in the discussion of this topic at ICANN, especially if you foresee major impacts
  • Spread the word that this is coming to friends and colleagues
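
Here’s a minimal Python sketch of that first “scan my private network” step.  It assumes you’ve exported your internal hostnames to hostnames.txt (one name per line, from AD, DHCP, an inventory system, whatever) and saved the applied-for strings (from the list linked later in this post) to new_gtlds.txt.

with open("new_gtlds.txt") as f:                  # applied-for strings, one per line
    new_gtlds = {line.strip().lower().lstrip(".") for line in f if line.strip()}

collisions = []
with open("hostnames.txt") as f:                  # your internal names, one per line
    for line in f:
        name = line.strip().rstrip(".").lower()
        if not name:
            continue
        tld = name.rsplit(".", 1)[-1]             # right-most label is the would-be TLD
        if tld in new_gtlds:
            collisions.append(name)

print(f"{len(collisions)} internal names end in an applied-for gTLD:")
for name in sorted(collisions):
    print(" ", name)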

Do note that there is a strong argument raging in the DNS community about all this.  There are some of us (myself included) who, at the time, had never met or even heard of the DNS purists who now maintain that this whole problem is our fault and that none of this would have happened if we’d all configured our private networks with fully-qualified domain names right from the start.

Where were those folks in 1995 when I opened my first shrink-wrapped box of Windows NT and created the name that would become the root of a huge Active Directory network with thousands of nodes?  Do you know how hard it was to get a domain name back then?  The term “registrar” hadn’t been invented yet.  All we were trying to do was set up a shared file, print and mail server for crying out loud.  The point is that there are lots of legacy networks that look like the one depicted below, they’re going to be very hard and expensive to rename, and some of them are likely to break when new gTLDs hit the root.  m’Kay?

Private networks, the way we’ve thought about them for a decade

Here’s my depiction of the difference between a private network (with all kinds of domain names that don’t route on the wider Internet) and the public Internet (with the top level names you’re familiar with) back in the good old days before the arrival of 1400 new gTLDs.

Slide1

This next picture shows the namespace collision problem.  This depiction is still endorsed by nobody, your mileage may vary, etc. etc.  But you see what’s happening.  At some random point in the future, when a second-level name matching the name of one of your highly-trusted resources gets delegated, there’s the possibility that traffic which has consistently been going to the right place in your internal network will suddenly be routed to an unknown, untrusted destination on the worldwide Internet.

But wait, there are more bad things that might happen.  What if the person who bought that matching second-level name in a new gTLD is a bad-actor?  What if they surveyed the error traffic arriving at that new gTLD and bought that second-level name ON PURPOSE, so that they could harvest that error traffic with the intention of doing you harm?

But wait, there’s more.  What if you have old old applications that are counting on a consistent NXDOMAIN response from a root server?  Suppose that the application was written in such a way that it falls over when the new gTLD gets delegated (and thus the response from the root changes from the expected NXDOMAIN to an unexpected pointer to the registry).  Does this start to feel a little bit like Y2K?

Well, one of the good things about Y2K was that most of the “breakage” events would have all happened on the same day — with this rascal, things might look more like a gentle random rain of breakage over the next decade or so as 2nd-level names get sold.
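
If you want to watch for the moment a string you care about flips from NXDOMAIN to delegated, a tiny dnspython check like this will do it (dnspython 2.x; the list of strings is just illustrative):

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]                # Google public DNS

for tld in ["home", "corp", "mail"]:              # strings you care about (illustrative)
    try:
        resolver.resolve(tld + ".", "NS")         # a delegated TLD answers with NS records
        print(f".{tld} is delegated -- the NXDOMAIN assumption is broken")
    except dns.resolver.NXDOMAIN:
        print(f".{tld} still returns NXDOMAIN")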

Imagine the surprise when some poor unsuspecting domain-registrant wakes up to a flood of email from network operators who are cranky because their networks just broke.  Don’t think it can happen?  Check out my www.corp.com home page — those cats are BUSY.  That domain gets 2,000,000 error hits A DAY.  Almost all of it from Microsoft Active Directory sites.

Slide2

The new TLDs may unexpectedly cause traffic that you’re expecting to go to your trusted internal networks (or your customer’s networks) to suddenly start being routed to an untrusted external network, one that you didn’t anticipate.  Donald Rumsfeld might call those external networks “unknown unknowns” — something untrusted that you don’t know about in advance.  The singular goal of this post is to let you know about this possibility in advance.  Here’s the key message:

If you have private networks that use TLDs on this list, you’d best start planning for a future when those names (and any internal certificates using those names) are going to stop working right.

That’s it.  If you want, you can quit reading here.  I’m going to stick updates in this section, followed by the “More detail” part at the bottom.

Update 1 — Mikey’s first try at a near-term mitigation plan

After conversations with a gaggle of smart people, I’ve decided that the following three pictures are a relatively low-impact way to address this problem in a network that you control.

In essence, I think the key to this approach is to take control of when the new gTLDs become visible to your internal network.  By becoming authoritative for new gTLDs in your DNS servers now, before ICANN has delegated them, you get to watch the NXD error traffic right now rather than having to wait for messages from new registries.  Here’s a list of the new gTLDs to use in constructing your router configuration.

Slide3

 

Slide4

 

Slide5

More detail

Note: all the color, bold, and highlighting in this section is mine — just to draw your eye to things that I find interesting.

There are over 1000 names on the list I linked to above.  Here is a shorter list drawn from Interisle Consulting Group’s 2 August 2013 report entitled “Name Collisions in the DNS” [PDF, 3.34 MB].  This list is the top 100 names in order of the frequency of queries that they saw in their study.  I’ve taken the liberty of highlighting a few that might be interesting for you to keep an eye out for on your network or your customers’ networks.

 

  1 home       21 mail         41 abc       61 yahoo      81 gmail
  2 corp       22 star         42 youtube   62 cloud      82 apple
  3 ice        23 ltd          43 samsung   63 chrome     83 thai
  4 global     24 google       44 hot       64 link       84 law
  5 med        25 sap          45 you       65 comcast    85 taobao
  6 site       26 app          46 ecom      66 gold       86 show
  7 ads        27 world        47 llc       67 data       87 itau
  8 network    28 mnet         48 foo       68 cam        88 house
  9 cisco      29 smart        49 tech      69 art        89 amazon
 10 group      30 orange       50 free      70 work       90 ericsson
 11 box        31 web          51 kpmg      71 live       91 college
 12 prod       32 msd          52 bet       72 ifm        92 bom
 13 iinet      33 red          53 bcn       73 lanxess    93 ibm
 14 hsbc       34 telefonica   54 hotel     74 goo        94 company
 15 inc        35 casa         55 new       75 olympus    95 sfr
 16 dev        36 bank         56 wow       76 sew        96 man
 17 win        37 school       57 blog      77 city       97 pub
 18 office     38 movistar     58 one       78 center     98 services
 19 business   39 search       59 top       79 zip        99 page
 20 host       40 zone         60 off       80 plus      100 delta

Here’s the executive summary of the InterIsle report.

Executive Summary — InterIsle Consulting Report

Names that belong to privately-defined or “local” name spaces often look like DNS names and are used in their local environments in ways that are either identical to or very similar to the way in which globally delegated DNS names are used. Although the semantics of these names are properly defined only within their local domains, they sometimes appear in query names (QNAMEs) at name resolvers outside their scope, in the global Internet DNS.

The context for this study is the potential collision of labels that are used in private or local name spaces with labels that are candidates to be delegated as new gTLDs. The primary purpose of the study is to help ICANN understand the security, stability, and resiliency consequences of these collisions for end users and their applications in both private and public settings.

The potential for name collision with proposed new gTLDs is substantial.  Based on the data analyzed for this study, strings that have been proposed as new gTLDs appeared in 3% of the requests received at the root servers in 2013. Among all syntactically valid TLD labels (existing and proposed) in requests to the root in 2013, the proposed TLD string home ranked 4th, and the proposed corp ranked 21st. DNS traffic to the root for these and other proposed TLDs already exceeds that for well-established and heavily-used existing TLDs.

Several options for mitigating the risks associated with name collision have been identified.  For most of the proposed TLDs, collaboration among ICANN, the new gTLD applicant, and potentially affected third parties in the application of one or more of these risk mitigation techniques is likely to substantially reduce the risk of delegation.

The potential for name collision with proposed new gTLDs often arises from well-established policies and practices in private network environments. Many of these were widely adopted industry practices long before ICANN decided to expand the public DNS root; the problem cannot be reduced to “people should have known better.”

The delegation of almost any of the applied-for strings as a new TLD label would carry some risk of collision.  Of the 1,409 distinct applied-for strings, only 64 never appear in the TLD position in the request stream captured during the 2012 “Day in the Life of the Internet” (DITL) measurement exercise, and only 18 never appear in any position. In the 2013 DITL stream, 42 never appear in the TLD position, and 14 never appear in any position.

The risk associated with delegating a new TLD label arises from the potentially harmful consequences of name collision, not the name collision itself.  This study was concerned primarily with the measurement and analysis of the potential for name collision at the DNS root. An additional qualitative analysis of the harms that might ensue from those collisions would be necessary to definitively establish the risk of delegating any particular string as a new TLD label, and in some cases the consequential harm might be apparent only after a new TLD label had been delegated.

The rank and occurrence of applied-for strings in the root query stream follow a power-law distribution.  A relatively small number of proposed TLD strings account for a relatively large fraction of all syntactically valid non-delegated labels observed in the TLD position in queries to the root.

The sources of queries for proposed TLD strings also follow a power-law distribution. For most of the most-queried proposed TLD strings, a relatively small number of distinct sources (as identified by IP address prefixes) account for a relatively large fraction of all queries.

A wide variety of labels appear at the second level in queries when a proposed TLD string is in the TLD position. For most of the most-queried proposed TLD strings, the number of different second-level labels is very large, and does not appear to follow any commonly recognized empirical distribution.

Name collision in general threatens the assumption that an identifier containing a DNS domain name will always point to the same thing. Trust in the DNS (and therefore the Internet as a whole) may erode if Internet users too often get name-resolution results that don’t relate to the semantic domain they think they are using. This risk is associated not with the collision of specific names, but with the prevalence of name collision as a phenomenon of the Internet experience.

The opportunity for X.509 public key certificates to be erroneously accepted as valid is an especially troubling consequence of name collision. An application intended to operate securely in a private context with an entity authenticated by a certificate issued by a widely trusted public Certification Authority (CA) could also operate in an apparently secure manner with another equivalently named entity in the public context if the corresponding TLD were delegated at the public DNS root and some party registered an equivalent name and obtained a certificate from a widely trusted CA. The ability to specify wildcard DNS names in certificates potentially amplifies this risk.

The designation of any applied-for string as “high risk” or “low risk” with respect to delegation as a new gTLD depends on both policy and analysis. This study provides quantitative data and analysis that demonstrate the likelihood of name collision for each of the applied-for strings in the current new gTLD application round and qualitative assessments of some of the potential consequences. Whether or not a particular string represents a delegation risk that is “high” or “low” depends on policy decisions that relate those data and assessments to the values and priorities of ICANN and its community; and as Internet behavior and practice change over time, a string that is “high risk” today may be “low risk” next year (or vice versa).

For a broad range of potential policy decisions, a cluster of proposed TLDs at either end of the delegation risk spectrum are likely to be recognizable as “high risk” and “low risk.” At the high end, the cluster includes the proposed TLDs that occur with at least order-of-magnitude greater frequency than any others (corp and home) and those that occur most frequently in internal X.509 public key certificates (mail and exchange in addition to corp). At the low end, the cluster includes all of the proposed TLDs that appear in queries to the root with lower frequency than the least-frequently queried existing TLD; using 2013 data, that would include 1114 of the 1395 proposed TLDs.

And here is their list of risk-mitigation options.

9 Name collision risk mitigation

ICANN and its partners in the Internet community have a number of options available to mitigate the risks associated with name collision in the DNS. This section describes each option; its advantages and disadvantages; and the residual risk that would remain after it had been successfully implemented.

The viability, applicability, and cost of different risk mitigation options are important considerations in the policy decision to delegate or not delegate a particular string. For example, a string that is considered to be “high risk” because risk assessment finds that it scores high on occurrence frequency or severity of consequences (or both), but for which a very simple low-cost mitigation option is available, may be less “risky” with respect to the delegation policy decision than a string that scores lower during risk assessment but for which mitigation would be difficult or impossible.

It is important to note that in addition to these strategies for risk mitigation, there is a null option to “do nothing”—to make no attempt to mitigate the risks associated with name collision, and let the consequences accrue when and where they will. As a policy decision, this approach could reasonably be applied, for example, to strings in the “low risk” category and to some or all of the strings in the “uncalculated risk” category.

It is also important to note that this study and report are concerned primarily with risks to the Internet and its users associated with the occurrence and consequences of name collision—not risks to ICANN itself associated with new TLD delegation or risk mitigation policy decisions.

9.1 Just say no

An obvious solution to the potential collision of a new gTLD label with an existing string is to simply not delegate that label, and formally proscribe its future delegation—e.g., by updating [15] to permanently reserve the string, or via the procedure described in [9] or [16]. This approach has been suggested for the “top 10” strings by [ ], and many efforts have been made over the past few years to add to the list of formally reserved strings [15] other non-delegated strings that have been observed in widespread use [1] [9] [10] [16].

A literal “top 10” approach to this mitigation strategy would be indefensibly arbitrary (the study data provide no answer to the obvious question “why 10?”), but a policy decision could set the threshold at a level that could be defended by the rank and occurrence data provided by this study combined with a subjective assessment of ICANN’s and the community’s tolerance for uncertainty.

9.1.1 Advantages
A permanently reserved string cannot be delegated as a TLD label, and therefore cannot collide with any other use of the same string in other contexts. A permanently reserved string could also be recommended for use in private semantic domains.

9.1.2 Disadvantages
There is no disadvantage for the Internet or its users. The disadvantages to current or future applicants for permanently proscribed strings are obvious. Because the “top N” set membership inclusion criteria will inevitably change over time, this mitigation strategy would be effective beyond the current new gTLD application round only if those criteria (and the resulting set membership) were periodically re-evaluated.

9.1.3 Residual risk
This mitigation strategy leaves no residual risk to the Internet or its users.

9.2 Further study

For a string in the “uncalculated risk” or “calculated risk” category, further study might lead to a determination that the “severity of consequences” factor in the risk assessment formula is small enough to ensure that the product of occurrence and severity is also small.

9.2.1 Advantages
Further study might shift a string from the “uncalculated risk” to the “calculated risk” category by providing information about the magnitude of the “severity of consequences” factor. It might also reduce the uncertainty constant in the risk assessment formula, facilitating a policy decision with respect to delegation of the string as a new TLD.

9.2.2 Disadvantages
Further study obviously involves a delay that may or may not be agreeable to applicants, and it may also require access to data that are not (or not readily) available. Depending on the way in which a resolution request arrives at the root, it may be difficult or impossible to determine the original source; and even if the source can be discovered, it might be difficult or impossible (because of lack of cooperation or understanding at the source) to determine precisely why a particular request was sent to the root.

The “further study” option also demands a termination condition: “at what point, after how much study, will it be possible for ICANN to make a final decision about this string?”

9.2.3 Residual risk
Unless further study concludes that the “severity of consequences” factor is zero, some risk will remain.

9.3 Wait until everyone has left the room

At least in principle, some uses of names that collide with proposed TLD strings could be eliminated: either phased out in favor of alternatives or abandoned entirely. For example, hardware and software systems that ship pre-configured to advertise local default domains such as home could be upgraded to behave otherwise. In these cases, a temporary moratorium on delegation, to allow time for vendors and users to abandon the conflicting use or to migrate to an alternative, might be a reasonable alternative to the permanent “just say no.” Similarly, a delay of 120 days before activating a new gTLD delegation could mitigate the risk associated with internal name certificates described in Sections 6.2 and 7.2.

9.3.1 Advantages
A temporary injunction that delays the delegation of a string pending evacuation of users from the “danger zone” would be less restrictive than a permanent ban.

9.3.2 Disadvantages
Anyone familiar with commercial software and hardware knows that migrating even a relatively small user base from one version of the same system to another—much less from one system to a different system—is almost never as straightforward in practice as it seems to be in principle. Legacy systems may not be upgradable even in principle, and consumer-grade devices in particular are highly unlikely to upgrade unless forced by a commercial vendor to do so. The time scales are likely to be years—potentially decades—rather than months.

Embracing “wait until…” as a mitigation strategy would therefore require policy decisions with respect to the degree of evacuation that would be accepted as functionally equivalent to “everyone” and a mechanism for coordinating the evacuation among the many different agents (vendors, users, industry consortia, etc.) who would have to cooperate in order for it to succeed.

9.3.3 Residual risk
Because no evacuation could ever be complete, the risks associated with name collision would remain for whatever fraction of the affected population would not or could not participate in it.

9.4 Look before you leap
Verisign [4] and others (including [8]) have recommended that before a new TLD is permanently delegated to an applicant, it undergo a period of “live test” during which it is added to the root zone file with a short TTL (so that it can be flushed out quickly if something goes wrong) while a monitoring system watches for impacts on Internet security or stability.

9.4.1 Advantages
A “trial run” in which a newly-delegated TLD is closely monitored for negative effects and quickly withdrawn if any appear could provide a level of confidence in the safety of a new delegation comparable to that which is achieved by other product-safety testing regimes, such as pharmaceutical and medical-device trials or probationary-period licensing of newly trained skilled craftsmen.

9.4.2 Disadvantages
The practical barriers to instrumenting the global Internet in such a way as to effectively perform the necessary monitoring may be insurmountable. Not least among these is the issue of trust and liability—for example, would the operator of a “live test” be expected to protect Internet users from harm during the test, or be responsible for damages that might result from running the test?

9.4.3 Residual risk
No “trial run” (particularly one of limited duration) could perfectly simulate the dynamics of a fully-delegated TLD and its registry, so some risk would remain even after some period of running a live test.

9.5 Notify affected parties
For some proposed TLDs in the current round, it may be possible to identify the parties most likely to be affected by name collision, and to notify them before the proposed TLD is delegated as a new gTLD.

9.5.1 Advantages
Prior notice of the impending delegation of a new gTLD that might collide with the existing use of an identical name string could enable affected parties to either change their existing uses or take other steps to prepare for potential consequences.

9.5.2 Disadvantages
Notification increases awareness, but does not directly mitigate any potential consequence of name collision other than surprise. For many proposed TLDs it might be difficult or impossible to determine which parties could be affected by name collision. Because affected parties might or might not understand the potential risks and consequences of name collision and how to manage them, either in general or with respect to their own existing uses, notification might be ineffective without substantial concomitant technical and educational assistance.

9.5.3 Residual risk
In most cases at least some potentially affected parties will not be recognized and notified; and those that are recognized and notified may or may not be able to effectively prepare for the effects of name collision on their existing uses, with or without assistance.
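
Before we get to ICANN’s response, here’s my back-of-the-envelope sketch of the “occurrence times severity” arithmetic the study keeps referring to.  The function and every number in it are mine, invented only to show the shape of the idea; section 9.2’s “further study” is mostly about shrinking the uncertainty term and pinning down severity.

    # My back-of-the-envelope sketch of the "occurrence x severity" idea that
    # runs through the study, not the report's actual formula.  Every number
    # below is invented.

    def risk_score(occurrence, severity, uncertainty=0.0):
        """occurrence: how often the string shows up at the root (normalized 0..1)
        severity: how bad a collision would be (normalized 0..1)
        uncertainty: padding for what nobody knows yet"""
        return occurrence * severity + uncertainty

    # With invented numbers, further study that pins severity near zero and
    # shrinks the uncertainty term drags the score way down:
    before = risk_score(occurrence=0.6, severity=0.5, uncertainty=0.3)   # 0.60
    after  = risk_score(occurrence=0.6, severity=0.1, uncertainty=0.05)  # 0.11
    print(before, after)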

Here are some of the tasty bits from a risk-mitigation proposal issued by ICANN staff several days later (5-August, 2013).

[ICANN Staff] PROPOSAL TO MITIGATE RISK

LOW-RISK

The Study establishes a low-risk profile for 80% of the strings. ICANN proposes to move forward with its established processes and procedures with delegating strings in this category (e.g., resolving objections, addressing GAC advice, etc.) after implementing two measures in an effort to mitigate the residual namespace collision risks.

First, registry operators will implement a period of no less than 120 days from the date that a registry agreement is signed before it may activate any names under the TLD in the DNS. This measure will help mitigate the risks related to the internal name certificates issue as described in the Study report and SSAC Advisory on Internal Name Certificates. Registry operators, if they wish, may allocate names during this period, i.e., accept registrations, but they will not activate them in DNS. If a registry operator were to allocate names during this 120-day period, it would have to clearly inform the registrants about the impossibility to activate names until the period ends.

Second, once a TLD is first delegated within the public DNS root to name servers designated by the registry operator, the registry operator will not activate any names under the TLD in the DNS for a period of no less than 30 days. During this 30-day period, the registry operator will notify the point of contacts of the IP addresses that issue DNS requests for an un-delegated TLD or names under it. The minimum set of requirements for the notification is described in Appendix A of this paper. This measure will help mitigate the namespace collision issues in general. Note that both no-activate-name periods can overlap.

The TLD name servers may see DNS queries for an un-delegated name from recursive resolvers – for example, a resolver operated by a subscriber’s ISP or hosting provider, a resolver operated by or for a private (e.g., corporate) network, or a global public name resolution service. These queries will not include the IP address of the original requesting host, i.e., the source IP address that will be visible to the TLD is the source address of the recursive resolver. In the event that the TLD operator sees a request for a non-delegated name, it must request the assistance of these recursive resolver operators in the notification process as described in Appendix A.

HIGH-RISK

ICANN considers that the Study presents sufficient evidence to classify home and corp as high-risk strings. Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk based on the criteria described above. An applicant for one of these strings would have the option to withdraw its application, or work towards resolving the issues that led to its categorization as high risk (i.e., those described in section 7 of the Study report). An applicant for a high-risk string can provide evidence of the results from the steps taken to mitigate the name collision risks to an acceptable level. ICANN may seek independent confirmation of the results before allowing delegation of such string.

UNCALCULATED-RISK

For the remaining 20% of the strings that do not fall into the low or high-risk categories, further study is needed to better assess the risk and understand what mitigation measures may be needed to allow these strings to move forward. The goal of the study will be to classify the strings as either low or high-risk using more data and tests than those currently available. While this study is being conducted, ICANN would not allow delegation of the strings in this category. ICANN expects the further study to take between three and six months. At the same time, an applicant for these strings can work towards resolving the issues that prevented their proposed string from being categorized as low risk (e.g., those described in section 7 of the Study report). An applicant can provide evidence of the results from the steps taken to mitigate the name collision risks to an acceptable level. ICANN may seek independent confirmation of the results before allowing delegation of such string. If and when a string from this category has been reclassified as low-risk, it can proceed as described above for the low-risk category strings.

CONCLUSION

ICANN is fully committed to the delegation of new gTLDs in a secure and stable manner. As with most things on the Internet, it is not possible to eliminate risk entirely. Nevertheless, ICANN would only proceed to delegate a new gTLD when the risk profile of such string had been mitigated to an acceptable level. We appreciate the community’s involvement in the process and look forward to further collaboration on the remaining work.

APPENDIX A – NOTIFICATION REQUIREMENTS

Registry operator will notify the point of contact of each IP address block that issue any type of DNS requests (the Requestors) for names under the TLD or its apex.  The point of contact(s) will be derived from the respective Regional Internet Registry (RIR) database. Registry operator will offer customer support for the Requestors or their clients (origin of the queries) in, at least, the same languages and mechanisms the registry plans to offer customer support for registry services. Registry operator will avoid sending unnecessary duplicate notifications (e.g. one notification per point of contact).

The notification should be sent, at least, over email and must include, at least, the following elements:

1) the TLD string;
2) why the IP address holder is receiving this email;
3) the potential problems the Requestor or its clients could encounter (e.g., those described in section 6 of the Study report);
4) the date when the gTLD signed the registry agreement with ICANN, and the date of gTLD delegation;
5) when the domain names under the gTLD will first become active in DNS;
6) multiple points of contact (e.g. email address, phone number) should people have questions;
7) will be in English and may be in other languages the point of contact is presumed to know;
8) ask the Requestors to pass the notification to their clients in case the Requestors are not the origin of the queries, e.g., if they are providers of DNS resolution services;
9) a sample of timestamps of DNS request in UTC to help identify the origin of queries;
10) email digitally signed with valid S/MIME certificate from well-known public CA.

It’s that last appendix, where people are going to get notified, that really caught my eye.  I can imagine a day when an ISP gets notifications from all kinds of different registry operators listing the IP addresses of its customer-facing recursive DNS servers.  The notification says that its customers are generating this kind of error traffic, but it leaves the puzzle of figuring out which customer to the ISP.  That means combing through DNS logs to ferret out which customer it actually was, carrying the bad news to that customer, and then dealing with the outraged fallout.  In other cases these notifications will go directly to corporate network operators, with the same result.  In either case, ponder the implications of a 30-day lead time to fix these things.  Maybe easy.  Maybe not.
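
If I were the ISP on the receiving end, the first chore would be combing the resolver query logs to figure out which clients are actually asking for names under the proposed TLD.  Here’s a rough sketch of where I’d probably start.  The log format (one “client_ip queried_name” pair per line) and the file name are made up; every resolver (BIND, Unbound, whatever) logs queries differently.

    # Rough sketch of the log-combing chore, using a made-up log format of
    # "client_ip queried_name" per line.  Real resolver logs look different,
    # and the file name is invented.

    from collections import Counter

    def clients_querying_tld(log_path, tld):
        """Tally which client IPs asked for names under a given proposed TLD."""
        tld = tld.lower()
        suffix = "." + tld
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                parts = line.split()
                if len(parts) < 2:
                    continue
                client_ip, name = parts[0], parts[1].rstrip(".").lower()
                if name == tld or name.endswith(suffix):
                    counts[client_ip] += 1
        return counts

    # Which customers are leaking queries for "corp"?
    for ip, n in clients_querying_tld("resolver-queries.log", "corp").most_common(10):
        print(ip, n)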

What’s next?  Where do we go from here?

For me, “learning more” and “spreading the word” are the next steps.  People on all sides of the argument are weighing in, but as InterIsle points out, there is a lot of analysis still to be done.  They were able to identify the number of queries, which proposed TLDs were being queried, and the range of IP addresses the queries came from.  What they point out we don’t yet know (and need to) is the impact of those collisions.  How bad would the breakdowns be?  Opinions are loudly stated, but facts are scarce.

If you want to learn more, the best place to get started is probably ICANN’s “Public Comment” page on this issue.  You’ll have some reading to do, but right now (until 17-September, 2013) you have the opportunity to submit comments.  The more of you that do that the better.  The spin-doctors on all sides are hard at work — it’s very difficult to find unbiased information. There aren’t very many comments as I write this in mid-August, but they should make interesting reading as they come in — and you can read them too.

Click HERE for the ICANN  public-comment page

That’s more than enough for one blog post.  Sorry this “little bit more detail” section got so long.  There’s plenty more if you want to dig further.

DISCLAIMER:  Be aware that almost everybody in this debate is conflicted in one way or another (including me – here’s a link to my “Statement of Interest” on the ICANN site).  I participate in ICANN as the representative of a regional internet exchange point (MICE) and also as the owner of a gaggle of really generic .COM domains (click HERE for that story).  I haven’t got a clue what the impact of new gTLDs will be on my domains.  I also don’t know what the impact will be on ISPs and corporate network operators but I am very uneasy right now.  I may write some more opinionated posts about that unease, once I understand better what’s going on.

 

Repairing the road

 

So here’s a new thing for me to obsess about.  The condition of the road in the summer time.  This spring was especially tough on our road because the rain. never. stopped.  So our road, which was already getting pretty ratty, turned into a nightmare this year.

Here’s a picture from last year – note the gravel-free tracks through grass.  This is not what a gravel road is supposed to look like.  It’s supposed to have gravel in it, not grass.

 

road-project01

Here’s a picture of that same segment of road as of this morning.  See?  Gravel, not grass.  Much better.  In essence this is what I’ve been fiddling with every dry day for the last month.  There have been precious few of those, so this project has taken a lot longer than I thought it would.

road-project02

This is a piece of road we hardly ever use.  It was built so that semis can turn around when they get in here (useful for when we were building the house, and for grain trucks when we were still renting the land for row crops).  But most of the time it just sits there, and you can see that it likes to be covered with grass.

road-project03

But here’s a picture of it this spring.  One trip across it with a truck and there are giant divots in the road.

road-project04

So this was my first experiment with the land plane.  It’s starting to get grassy again because I fixed this chunk about a month ago and it’s been raining pretty much ever since.  But you can see how the divot is gone.

road-project05

Now let’s take a look at some areas that got really bad this spring.  This first one never ever gets this bad, and never over this long a stretch.

road-project06

Here’s what it looked like after a few passes of the land plane.  This was the “dang, I’ve really messed this up” picture.  I was thinking that I might be doing more damage than good when I took this shot.  But fear not!  It has to get ugly before it can get pretty.  Pulling all that grass out makes a mess for a while.

road-project07

See?  This is that same segment after the very last pass.

road-project08

Here’s another view of that segment.  My first approach, before using the land plane, was to use the bucket on my other tractor.  That’s all I’ve done in prior years, but you can see that I wasn’t really making much of a dent — mostly because there was so much damage over a really long piece of the road.  I was pretty unhappy with the results.

road-project09

Here’s that same “first few passes with the land plane” shot.

road-project10

And here’s the “after last pass” shot.  It should be noted that to get through this whole project, I’ve taken something like 10-15 passes across the road.  I changed the settings a few times to try things out and have some ideas that you’ll find in the “Tips” section at the end of the post.

road-project11

THIS part of the road is always nasty — it’s going through a really wet area and is always soft.  There’s a “redo this section of the road with road fabric” project in my future here.  But you can see just how bad things got this spring.  This shot was taken AFTER I’d worked on this area with the bucket for a while.

road-project12

And here’s that last-pass shot…  It looks pretty good, but it’s still really fragile.  This smoothyness won’t last long, especially if a few trucks go over it before the rain stops.

road-project13

Another “before” shot.  Same part of the road, just a little bit around the corner and looking out into the wetland.

road-project14

And the “after” shot.  This part was really hard to do.  There’s a lot of dirt and not much gravel to dig up along here.  But even with all that, the gravel came back pretty well.  Again, the gravel along here will be pounded back into the road as the summer progresses.  The “redo with road-cloth” project is going to have to extend into this part of the road too.

road-project15

Here’s the implement — a Woods land plane, hanging on the 3-point hitch of my Kubota M-6800.  This is a really slick deal.  The two edges adjust up and down, and tilt, independently.  See the four bolts at the bottom left?  Loosening them allows that shoe at the bottom to be adjusted up and down.  I fiddled with variations of “low in front, low on one side, etc.” and have a few ideas about how to do that.  You’re looking at my “last pass” configuration — low in front, high in the back, symmetrical side to side.   This doesn’t cut into the road at all, it just rides through the loose gravel and makes it flat.  My goal when running this configuration was to have a nice amount of gravel caught by the front blade and no gravel going over the top of the back blade.  That’s why the road’s so smoothy.  But this configuration is no good for actually repairing the road, only for dressing up the gravel at the end.

road-project32

Here’s another view of the land plane, showing how the blades are on a diagonal.  In theory, this means that the gravel moves from one side to the other.  It probably does a little bit, but it’s certainly no replacement for a real rear blade if you need to move a lot of gravel from one part of the road to another.

road-project33

TIPS:

OK, you’re probably really interested in this stuff if you made it this far through the post.  Here are some lessons I learned that I’m documenting for me, since I probably won’t do this project again until next spring and will likely forget some of this stuff.

Clearing grass

The box will clog up during early grass-pulling, dirt-removing passes.  Just raise it a little bit and back up.  That’ll smooth the dirt and grass out and after a few days it’ll have dried enough that it’ll break up rather than clogging the works in a subsequent pass (have I mentioned lots of passes??).  At first I was pushing that stuff off to the side, or pulling it out by hand.  Way too hard.

Scarifiers

I ran the scarifiers right at the same level as the front blade for a while, but eventually pulled them off (they aren’t on the land plane in the pictures).  I think they would probably be really important if you were using this to stir up gravel when the road is really dry, but it’s wet here right now and the land plane did a better job of smoothing the ruts without them.

Removing ruts

I set the whole thing up at its mid-points all around and level (front and back, side to side, 3-point hitch level) while I was taking the ruts and grass out.  That worked OK, but I think next time I’ll try a slightly less aggressive version of this next setting.

Crowning and removing ruts

Towards the end of the project I wanted to put a little more crown in the road while removing some ruts that came in after a rain.  I set the “leading side” side of the land plane as low as it would go, front and back.  The “trailing side” got set as high as it would go.  I made the leading side bite even more by lowering that side of the box on the 3-point hitch.  So my goal was to bevel the road, with the leading side doing the cutting and then allowing the material to move over and escape out the trailing side.

Finishing and dressing the gravel

Those first two settings are fine for working divots out of the road, but they leave a lumpy surface, because a lot of material goes over the second blade.  I would try to keep that to a minimum by raising and lowering the 3-point, but there’s almost no way to avoid it, because the goal at that stage was to remove ruts, not to leave a perfect surface.  But for the last couple of passes I just wanted to smooth out the gravel, not change the contour of the road.  For this setting, my goal was NO gravel going over the rear blade — that’s how I got that really smoothy surface.  So this setting was level side to side (both on the land plane and the 3-point), low in front and high in the back (to grab gravel easily with the front blade but not let much escape over the back blade).

Summary

A great project.  I borrowed the land plane from my friend Danny, but I think I’ll have to buy it from him.  He’s gonna have to pry this thing out of my cold dead hands.  I can imagine taking another pass or two several more times this summer, just to pull the grass.  Darn nifty.