Commentary on Fadi Chehadé's Montevideo Statement


I love toiling at the bottom of the bottom-up ICANN process.  And it’s also quite entertaining to watch senior ICANN “managers” running wild and free on the international stage. The disconnect between those two things reminds me of the gulf that usually exists between the faculty and administration in higher education institutions.  Both sides think they run the joint.  That same gulf exists in ICANN and, while I was hopeful for a while that the new guy (Fadi Chehadé) was going to grok the fullness, it’s starting to slide into the same old pattern.

The picture above is of the last guy (Rod Beckstrom)

The audio file linked below is a 2-minute mashup of the new guy Fadi’s (quite unsatisfactory) answer to Kristina Rosette’s recent question about whether he has community and Board air cover for a recent (pretty controversial) statement he made.  Pretty inside baseball for all you regulars, but it may map pretty well to your situation even though the details differ.



Click HERE to listen to the 2-minute clip (but turn the volume down if you do it at work).


Here’s a link to the public post that points at the various transcripts of the call.

Interestingly, while the audio transcript is still available, the links to the written transcripts that are contained in that email have disappeared.  Since I have copies of those files from when I downloaded them earlier today, I’ve posted them to this site.  Here are links to those missing documents.

Word document — full transcript — click HERE

Word document — Adobe Chat transcript — Click HERE

What if people stop trusting the ICANN root?


So once upon a time I worked at a terrific ISP in St. Paul, MN.  Back then, before the “grand bargain” that led to the shared hallucination known as ICANN, there were several pretty-credible providers of DNS that later (somewhat disparagingly) became known as “alternate” root providers.

In those days, we offered our customers a choice.  You could use our “regular” DNS that pointed at what later became the ICANN-managed root, or you could use our “extended” DNS servers that added the alternates.  No big deal, you choose, your mileage may vary, if you run into trouble we’d suggest that you switch back to “regular” and see if things go better, let us know how you like it, etc.

Well.  Fast forward almost 20 years…

The ICANN root is getting ready to expand.  A lot — like 1200 new extensions.  Your opinion about this can probably be discerned by choosing between the following.  Is that expansion of the number of top-level names (à la .com, .org, .de) more like:

  • going from 20 to 1200 kinds of potato chips, or
  • going from 20 to 1200 kinds of beer?

Whatever.  The interesting thing is that suddenly the ICANN root is starting to look a lot more like our old “extended” DNS.  Kinda out there.  Kinda crazy.  Not quite as stable.  And that rascal may cause ISPs and network administrators a lot of headaches.  I’ve written a post about the name-collision issue that describes one of these puzzlers.

If those kinds of problems crop up unexpectedly, ISPs and their network administrator customers are going to look for a really quick fix (“today!”…  “now!”…).  If you’re the network admin and your whole internal network goes haywire one day, you don’t have time to be nice.  The bosses are screaming at you.  You need something that will fix that whole problem right now.  You’ll probably call your ISP for help, so they need something that will help you — right now.

One thing that would fix that is a way to get back the old “regular” DNS, the one we have now, before all those whizbang new extensions.  You know, like Coke Classic.  I know that at our ISP, we’d probably be looking for something like that.  We’d either find one, or roll our own, so we could offer it to customers with broken networks.

We’d be all good tra-la-la network citizens about it — “don’t forget to switch back to the ICANN root when your networks are fixed” and so forth.  But it would get the emergency job done (unless your DNS is being bypassed by applications, but that’s a topic for another post).

That means that the ICANN root might not forever be the first-choice most-trusted root any more.  Gaining trust is slow and hard.  Losing trust can happen in a heartbeat.  I can’t speak for today’s ISPs, but back in the day we were not shy about creative brute force solutions to problems.

We might stop trusting the ICANN root and go looking for a better one.

Oh, and one more thing.  We might put “certified NSA-free” on the shopping list.  Just sayin’


Disclaimer: I was an “ICANN insider” when I wrote this post.  I don’t exactly know when that happened, but there you go.  I was a member of the ISPCP (ISP and Connectivity Provider constituency) of the GNSO (Generic Names Supporting Organization), where I participated in a lot of policy working groups and (briefly) represented the constituency on the GNSO Council.

I’m also a domain registrant – I registered a gaggle of really-generic domain names back before the web really took off.  I think I’m going to challenge John Berryhill to a Calvinball debate as to whether new gTLDs help or hurt the value of those old .com names.  Back to the “chips vs beer” argument.

New gTLD preparedness project

Another scratchpad post — this time about what a “get ready for new gTLDs” project might look like.  I’ll try to write these thoughts in a way that scales from your own organization up to world-wide.

I’m doing this with an eye towards pushing this towards ICANN and new-gTLD applicants and saying “y’know, you really should be leading the charge on this.  This is your ‘product’ after all.”  Maybe we could channel a few of those “Digital Engagement” dollars into doing something useful?  You know, actually engage people?  Over a real issue?  Just sayin’

Here we go…

Why we need to do this




  • Impacts of the arrival of some new gTLDs could be very severe for some network operators and their customers
  • There may not be a lot of time to react
  • Progress on risk-assessment and mitigation-planning is poor (at least as I write this)
  • Fixes may not be identified before delegation
  • Thus, getting ready in advance is the prudent thing to do
  • We benefit from these preparations even if it turns out we don’t need them for the new gTLD rollout

The maddening thing is, we may not know what’s really going to happen until it’s too late to prepare — so we may have to make guesses.

New gTLD impacts could be very broad and severe, especially for operators of private networks that were planned and implemented long before new gTLDs were conceived of.  ISPs and connectivity providers may be similarly surprised.  Click HERE to read a blog post that I wrote about this — but here are some examples:

  • Microsoft Active Directory installations may need to be renamed and rebuilt
  • Internal certificates may need to be replaced
  • Long-stable application software may need to be revised
  • New attack vectors may arise
  • And so forth…

The key point here is that in the current state of play, these risks are unknown.  Studies that would help understand this better are being lobbied for, but haven’t been approved or launched as I write this.

A “get ready” effort seems like a good idea

Given that we don’t know what is going to happen, and that some of us may be in a high-risk zone, it seems prudent to start helping people and organizations get ready.

  • If there are going to be failures, preparedness would be an effective way to respond
  • The issues associated with being caught by surprise and being under-prepared could be overwhelming
  • “Hope for the best, prepare for the worst” is a strategy we often use to guide family decisions — that rule might be a good one for this situation as well
  • Inaction, in the face of the evidence that is starting to pile up, could be considered irresponsible.

Looking on the bright side, it seems to me that there are wide-ranging benefits to be had from this kind of effort even if mitigation is never needed.

  • We could improve the security, stability and resiliency of the DNS for all, by making users and providers of those services more nimble and disaster resistant
  • If we “over prepare” as individuals and organizations, we could be in a great position to help others if they encounter problems
  • Exercise is good for us.  And gives all factions a positive focal point for our attention.  I’ll meet you on that common ground.

Here’s a way to define success

I’m not sure this part is right, but I like having a target to shoot at when I’m planning something, and this seems like a good start.


  • Minimize the impact of new-gTLD induced failures on the DNS, private and public networks, applications, and Internet users.
  • Make technical-community resources robust enough to respond in the event of a new-gTLD induced disruption
  • Maximize the speed, flexibility and effectiveness of that response.

Who does what

This picture is trying to say “everybody can help.”  I got tired of adding circles and connecting-lines, so don’t be miffed if you can’t find yourself on this picture.  I am trying to make the point that it seems to me that ICANN and the contracted parties have a different role to play than those of us who are on the edge, especially since they’re the ones benefiting financially from this new-gTLD deal.

Note my subtle use of color to drive that home.  Also note that there’s a pretty lively conversation about who should bear the risks.



How do we get from here to there?  If I were in complete command of the galaxy, here’s a high level view of how I’d break up the work.


As I refine this Gantt chart, it becomes clear to me that a) this is something that can be done, but b) it’s going to take some planning, some resources and (yes, dearly beloved) some time.  Hey!  I’m just the messenger.

We should get started

So here you are at the end of this picture book and mad fantasy.  Given all this, here’s what I’d do if this puzzler were left up to me.


And here are the things I’d start doing right away:

  • Agree that this effort needs attention, support and funding
  • Get started on the organizing
  • Establish a focal point and resource pool
  • Broaden the base of participation
  • Start tracking what areas are ready, and where there are likely to be problems

There you go.  If you would like this in slide-deck form to carry around and pitch to folks, click HERE for an editable PowerPoint version of this story.  Carry on.


Disclaimer:  While the ICANN community scrambles to push this big pile of risk around, everybody should be careful to say where they’re coming from.  I’m a member of the ISPCP constituency at ICANN, and represent a regional IXP (MICE) there.  I don’t think this issue generates a lot of risk for MICE because we don’t provide recursive resolver services and thus won’t be receiving the name-collision notifications being proposed by ICANN staff.  I bet some of our member ISPs do have a role to play, and will be lending a hand.

I am also a first-generation registrant of a gaggle of really-generic domain names.  New gTLDs may impact the value of those names but experts are about evenly divided on which way that impact will go.  I’m retired, and can’t conceive of how I’ll be making money from any activity in this arena.

New gTLDs and namespace collision

This is another scratch-pad post that’s aimed at a narrow audience —  network geeks, especially in ISPs and corporations.  The first bit is a 3-minute read, followed by a 20-minute “more detail” section.  If you’re baffled by this, but maybe a little concerned after you read it, please push this page along to your network-geek friends and colleagues and get their reaction.  Feel free to repost any/all of this.

Key points before we get started

  • I don’t know what’s going to happen
  • I don’t know what the impact is going to be, but in some cases it could be severe
  • Others claim to know both of those things but I’m not convinced by their arguments right now
  • Thus, I think the best thing to do is learn more, hope for the best and prepare for the worst
  • My goal with this post is just to give you a heads-up

If I were you, I’d:

  • Scan my private network and see if any of my names collide with the new gTLDs that are coming
  • Check my recursive DNS server logs and see if any name collisions are appearing there
  • Start thinking about remediation now
  • Participate in the discussion of this topic at ICANN, especially if you foresee major impacts
  • Spread the word that this is coming to friends and colleagues
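To make the first two bullets concrete, here’s a minimal sketch of a log scan.  It assumes BIND-style query-log lines, and the `APPLIED_FOR` set is a tiny made-up subset standing in for the full applied-for list — swap in the real list before using this in anger.

```python
import re

# Tiny illustrative subset of the applied-for gTLD strings -- replace with
# the full list published by ICANN before doing this for real.
APPLIED_FOR = {"home", "corp", "mail", "network", "office", "zip"}

# BIND-style query-log lines look roughly like:
#   ... client 10.0.0.5#53123: query: fileserver.corp IN A +
QUERY_RE = re.compile(r"query: (\S+) IN")

def colliding_queries(log_lines):
    """Yield (qname, tld) pairs whose top-level label is an applied-for string."""
    for line in log_lines:
        m = QUERY_RE.search(line)
        if not m:
            continue
        qname = m.group(1).rstrip(".")
        tld = qname.rsplit(".", 1)[-1].lower()
        if tld in APPLIED_FOR:
            yield qname, tld

sample = [
    "01-Aug-2013 queries: info: client 10.0.0.5#53123: query: fileserver.corp IN A +",
    "01-Aug-2013 queries: info: client 10.0.0.9#53124: query: www.example.com IN A +",
]
print(list(colliding_queries(sample)))  # → [('fileserver.corp', 'corp')]
```

Point it at a day or two of your recursive resolver’s query logs and you’ll have a first cut at your exposure.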

Do note that there is a strong argument raging in the DNS community about all this.  There are some (myself included) who never met or even heard of the DNS purists who currently maintain that this whole problem is our fault and that none of this would have happened if we’d all configured our private networks with fully-qualified domain names right from the start.

Where were those folks in 1995 when I opened my first shrink-wrapped box of Windows NT and created the name that would become the root of a huge Active Directory network with thousands of nodes?  Do you know how hard it was to get a domain name back then?  The term “registrar” hadn’t been invented yet.  All we were trying to do was set up a shared file, print, and mail server, for crying out loud.  The point is that there are lots of legacy networks that look like the one depicted below, they’re going to be very hard and expensive to rename, and some of them are likely to break when new gTLDs hit the root.  m’Kay?

Private networks, the way we’ve thought about them for a decade

Here’s my depiction of the difference between a private network (with all kinds of domain names that don’t route on the wider Internet) and the public Internet (with the top-level names you’re familiar with) back in the good old days before the arrival of 1400 new gTLDs.


This next picture shows the namespace collision problem.  This depiction is still endorsed by nobody, your mileage may vary, etc. etc.  But you see what’s happening.  At some random point in the future, when a second-level name matching the name of one of your highly-trusted resources gets delegated, there’s the possibility that traffic which has consistently been going to the right place in your internal network will suddenly be routed to an unknown, untrusted destination on the worldwide Internet.

But wait, there are more bad things that might happen.  What if the person who bought that matching second-level name in a new gTLD is a bad actor?  What if they surveyed the error traffic arriving at that new gTLD and bought that second-level name ON PURPOSE, so that they could harvest that error traffic with the intention of doing you harm?

But wait, there’s more.  What if you have old applications that are counting on a consistent NXDOMAIN response from a root server?  Suppose that the application was written in such a way that it falls over when the new gTLD gets delegated (and thus the response from the root changes from the expected NXDOMAIN to an unexpected pointer to the registry).  Does this start to feel a little bit like Y2K?
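That failure mode can be sketched in a few lines.  Everything here is hypothetical (the stub resolver just stands in for real DNS), but it shows how “NXDOMAIN means internal” logic flips the day a TLD gets delegated.

```python
# Hypothetical sketch of the failure mode: an old application that uses
# "NXDOMAIN means this is one of our internal names" as a routing decision.

def make_resolver(delegated_tlds):
    """Return a toy resolver; None stands in for an NXDOMAIN response."""
    def resolve(name):
        tld = name.rsplit(".", 1)[-1]
        if tld in delegated_tlds:
            return "192.0.2.1"   # some answer from the new registry (TEST-NET address)
        return None              # NXDOMAIN
    return resolve

def route_legacy(name, resolve):
    # Legacy logic: a name that does NOT resolve publicly must be internal.
    return "internal" if resolve(name) is None else "external"

before = make_resolver(delegated_tlds=set())       # today's root
after  = make_resolver(delegated_tlds={"corp"})    # after .corp is delegated

print(route_legacy("fileserver.corp", before))  # internal -- works as designed
print(route_legacy("fileserver.corp", after))   # external -- the app just broke
```

Nothing in the application changed; the world underneath it did.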

Well, one of the good things about Y2K was that most of the “breakage” events would have happened on the same day — with this rascal, things might look more like a gentle random rain of breakage over the next decade or so as second-level names get sold.

Imagine the surprise when some poor unsuspecting domain-registrant wakes up to a flood of email from network operators who are cranky because their networks just broke.  Don’t think it can happen?  Check out my home page — those cats are BUSY.  That domain gets 2,000,000 error hits A DAY.  Almost all of it from Microsoft Active Directory sites.


The new TLDs may unexpectedly cause traffic that you’re expecting to go to your trusted internal networks (or your customers’ networks) to suddenly start being routed to an untrusted external network, one that you didn’t anticipate.  Donald Rumsfeld might call those external networks “unknown unknowns” — something untrusted that you don’t know about in advance.  The singular goal of this post is to let you know about this possibility in advance.  Here’s the key message:

If you have private networks that use TLDs on this list, best start planning for a future when those names (and any internal certificates using those names) are going to stop working right. 
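One quick way to act on that message is to compare an inventory of your internal names (hosts, AD domains, certificate subjectAltNames) against the applied-for strings.  The names and the tiny TLD subset below are made up for illustration.

```python
# Rough inventory check. APPLIED_FOR is a made-up subset of the real
# applied-for gTLD list; internal_names is a made-up export of internal FQDNs.
APPLIED_FOR = {"home", "corp", "mail", "global", "network"}

internal_names = [
    "dc01.ad.mycompany.corp",
    "mail.mycompany.home",
    "printer.office.mycompany.com",
]

# Flag every name whose top-level label is on the applied-for list.
at_risk = [n for n in internal_names if n.rsplit(".", 1)[-1] in APPLIED_FOR]
print(at_risk)  # → ['dc01.ad.mycompany.corp', 'mail.mycompany.home']
```

Anything on the `at_risk` list is a candidate for renaming, or at least for a line item in your remediation plan.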

That’s it.  If you want, you can quit reading here.  I’m going to stick updates in this section, followed by the “More detail” part at the bottom.

Update 1 — Mikey’s first-try at a near-term mitigation plan

After conversations with a gaggle of smart people, I’ve decided that the following three pictures are a relatively low-impact way to address this problem in a network that you control.

In essence, I think the key to this approach is to take control of when the new gTLDs become visible to your internal network.  By becoming authoritative for new gTLDs in your DNS servers now, before ICANN has delegated them, you get to watch the NXD error traffic right now rather than having to wait for messages from new registries.  Here’s a list of the new gTLDs to use in constructing your resolver configuration.
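Here’s one hedged sketch of that “become authoritative now” idea: generate empty-zone stanzas for a BIND-style resolver, so queries for those TLDs get answered (with NXDOMAIN) and logged locally instead of leaking to the root.  The TLD list and zone-file path are placeholders.

```python
# Sketch: emit BIND "empty zone" stanzas so your resolvers answer
# authoritatively for new gTLDs before ICANN delegates them, keeping
# that error traffic -- and its logs -- inside your network.

TLDS = ["home", "corp", "mail"]  # placeholder; use the full applied-for list

def empty_zone_config(tlds, zonefile="/etc/bind/db.empty"):
    """Return named.conf zone stanzas for each TLD, all pointing at one
    empty zone file (path is a placeholder for your own)."""
    stanzas = []
    for tld in tlds:
        stanzas.append(
            'zone "%s" {\n'
            "    type master;\n"
            '    file "%s";\n'
            "};" % (tld, zonefile)
        )
    return "\n".join(stanzas)

print(empty_zone_config(TLDS))
```

Drop the output into your resolver config, turn on query logging, and you’re watching your own collision traffic on your own schedule.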






More detail

Note: all the color, bold, highlighting in this section is mine — just to draw your eye to things that I find interesting.

There are over 1000 names on that list I linked to above.  Here is a shorter list drawn from Interisle Consulting Group’s 2 August 2013 report entitled “Name Collisions in the DNS” [PDF, 3.34 MB].  This list is the top 100 names in order of frequency of queries that they saw in their study.  I’ve taken the liberty of highlighting a few that might be interesting for you to keep an eye out for on your network or your customers’ networks.


1 home 21 mail 41 abc 61 yahoo 81 gmail
2 corp 22 star 42 youtube 62 cloud 82 apple
3 ice 23 ltd 43 samsung 63 chrome 83 thai
4 global 24 google 44 hot 64 link 84 law
5 med 25 sap 45 you 65 comcast 85 taobao
6 site 26 app 46 ecom 66 gold 86 show
7 ads 27 world 47 llc 67 data 87 itau
8 network 28 mnet 48 foo 68 cam 88 house
9 cisco 29 smart 49 tech 69 art 89 amazon
10 group 30 orange 50 free 70 work 90 ericsson
11 box 31 web 51 kpmg 71 live 91 college
12 prod 32 msd 52 bet 72 ifm 92 bom
13 iinet 33 red 53 bcn 73 lanxess 93 ibm
14 hsbc 34 telefonica 54 hotel 74 goo 94 company
15 inc 35 casa 55 new 75 olympus 95 sfr
16 dev 36 bank 56 wow 76 sew 96 man
17 win 37 school 57 blog 77 city 97 pub
18 office 38 movistar 58 one 78 center 98 services
19 business 39 search 59 top 79 zip 99 page
20 host 40 zone 60 off 80 plus 100 delta

Here’s the executive summary of the Interisle report.

Executive Summary — Interisle Consulting Report

Names that belong to privately-defined or “local” name spaces often look like DNS names and are used in their local environments in ways that are either identical to or very similar to the way in which globally delegated DNS names are used. Although the semantics of these names are properly defined only within their local domains, they sometimes appear in query names (QNAMEs) at name resolvers outside their scope, in the global Internet DNS.

The context for this study is the potential collision of labels that are used in private or local name spaces with labels that are candidates to be delegated as new gTLDs. The primary purpose of the study is to help ICANN understand the security, stability, and resiliency consequences of these collisions for end users and their applications in both private and public settings.

The potential for name collision with proposed new gTLDs is substantial.  Based on the data analyzed for this study, strings that have been proposed as new gTLDs appeared in 3% of the requests received at the root servers in 2013. Among all syntactically valid TLD labels (existing and proposed) in requests to the root in 2013, the proposed TLD string home ranked 4th, and the proposed corp ranked 21st. DNS traffic to the root for these and other proposed TLDs already exceeds that for well-established and heavily-used existing TLDs.

Several options for mitigating the risks associated with name collision have been identified.  For most of the proposed TLDs, collaboration among ICANN, the new gTLD applicant, and potentially affected third parties in the application of one or more of these risk mitigation techniques is likely to substantially reduce the risk of delegation.

The potential for name collision with proposed new gTLDs often arises from well-established policies and practices in private network environments. Many of these were widely adopted industry practices long before ICANN decided to expand the public DNS root; the problem cannot be reduced to “people should have known better.”

The delegation of almost any of the applied-for strings as a new TLD label would carry some risk of collision.  Of the 1,409 distinct applied-for strings, only 64 never appear in the TLD position in the request stream captured during the 2012 “Day in the Life of the Internet” (DITL) measurement exercise, and only 18 never appear in any position. In the 2013 DITL stream, 42 never appear in the TLD position, and 14 never appear in any position.

The risk associated with delegating a new TLD label arises from the potentially harmful consequences of name collision, not the name collision itself.  This study was concerned primarily with the measurement and analysis of the potential for name collision at the DNS root. An additional qualitative analysis of the harms that might ensue from those collisions would be necessary to definitively establish the risk of delegating any particular string as a new TLD label, and in some cases the consequential harm might be apparent only after a new TLD label had been delegated.

The rank and occurrence of applied-for strings in the root query stream follow a power-law distribution.  A relatively small number of proposed TLD strings account for a relatively large fraction of all syntactically valid non-delegated labels observed in the TLD position in queries to the root.

The sources of queries for proposed TLD strings also follow a power-law distribution. For most of the most-queried proposed TLD strings, a relatively small number of distinct sources (as identified by IP address prefixes) account for a relatively large fraction of all queries.

A wide variety of labels appear at the second level in queries when a proposed TLD string is in the TLD position. For most of the most-queried proposed TLD strings, the number of different second-level labels is very large, and does not appear to follow any commonly recognized empirical distribution.

Name collision in general threatens the assumption that an identifier containing a DNS domain name will always point to the same thing. Trust in the DNS (and therefore the Internet as a whole) may erode if Internet users too often get name-resolution results that don’t relate to the semantic domain they think they are using. This risk is associated not with the collision of specific names, but with the prevalence of name collision as a phenomenon of the Internet experience.

The opportunity for X.509 public key certificates to be erroneously accepted as valid is an especially troubling consequence of name collision. An application intended to operate securely in a private context with an entity authenticated by a certificate issued by a widely trusted public Certification Authority (CA) could also operate in an apparently secure manner with another equivalently named entity in the public context if the corresponding TLD were delegated at the public DNS root and some party registered an equivalent name and obtained a certificate from a widely trusted CA. The ability to specify wildcard DNS names in certificates potentially amplifies this risk.

The designation of any applied-for string as “high risk” or “low risk” with respect to delegation as a new gTLD depends on both policy and analysis. This study provides quantitative data and analysis that demonstrate the likelihood of name collision for each of the applied-for strings in the current new gTLD application round and qualitative assessments of some of the potential consequences. Whether or not a particular string represents a delegation risk that is “high” or “low” depends on policy decisions that relate those data and assessments to the values and priorities of ICANN and its community; and as Internet behavior and practice change over time, a string that is “high risk” today may be “low risk” next year (or vice versa).

For a broad range of potential policy decisions, a cluster of proposed TLDs at either end of the delegation risk spectrum are likely to be recognizable as “high risk” and “low risk.” At the high end, the cluster includes the proposed TLDs that occur with at least order-of-magnitude greater frequency than any others (corp and home) and those that occur most frequently in internal X.509 public key certificates (mail and exchange in addition to corp). At the low end, the cluster includes all of the proposed TLDs that appear in queries to the root with lower frequency than the least-frequently queried existing TLD; using 2013 data, that would include 1114 of the 1395 proposed TLDs.

And here is their list of risk-mitigation options.

9 Name collision risk mitigation

ICANN and its partners in the Internet community have a number of options available to mitigate the risks associated with name collision in the DNS. This section describes each option; its advantages and disadvantages; and the residual risk that would remain after it had been successfully implemented.

The viability, applicability, and cost of different risk mitigation options are important considerations in the policy decision to delegate or not delegate a particular string. For example, a string that is considered to be “high risk” because risk assessment finds that it scores high on occurrence frequency or severity of consequences (or both), but for which a very simple low-cost mitigation option is available, may be less “risky” with respect to the delegation policy decision than a string that scores lower during risk assessment but for which mitigation would be difficult or impossible.

It is important to note that in addition to these strategies for risk mitigation, there is a null option to “do nothing”—to make no attempt to mitigate the risks associated with name collision, and let the consequences accrue when and where they will. As a policy decision, this approach could reasonably be applied, for example, to strings in the “low risk” category and to some or all of the strings in the “uncalculated risk” category.

It is also important to note that this study and report are concerned primarily with risks to the Internet and its users associated with the occurrence and consequences of name collision—not risks to ICANN itself associated with new TLD delegation or risk mitigation policy decisions.

9.1 Just say no

An obvious solution to the potential collision of a new gTLD label with an existing string is to simply not delegate that label, and formally proscribe its future delegation—e.g., by updating [15] to permanently reserve the string, or via the procedure described in [9] or [16]. This approach has been suggested for the “top 10” strings by [ ], and many efforts have been made over the past few years to add to the list of formally reserved strings [15] other non-delegated strings that have been observed in widespread use [1] [9] [10] [16].

A literal “top 10” approach to this mitigation strategy would be indefensibly arbitrary (the study data provide no answer to the obvious question “why 10?”), but a policy decision could set the threshold at a level that could be defended by the rank and occurrence data provided by this study combined with a subjective assessment of ICANN’s and the community’s tolerance for uncertainty.

9.1.1 Advantages
A permanently reserved string cannot be delegated as a TLD label, and therefore cannot collide with any other use of the same string in other contexts. A permanently reserved string could also be recommended for use in private semantic domains.

9.1.2 Disadvantages
There is no disadvantage for the Internet or its users. The disadvantages to current or future applicants for permanently proscribed strings are obvious. Because the “top N” set membership inclusion criteria will inevitably change over time, this mitigation strategy would be effective beyond the current new gTLD application round only if those criteria (and the resulting set membership) were periodically re-evaluated.

9.1.3 Residual risk
This mitigation strategy leaves no residual risk to the Internet or its users.

9.2 Further study

For a string in the “uncalculated risk” or “calculated risk” category, further study might lead to a determination that the “severity of consequences” factor in the risk assessment formula is small enough to ensure that the product of occurrence and severity is also small.

9.2.1 Advantages
Further study might shift a string from the “uncalculated risk” to the “calculated risk” category by providing information about the magnitude of the “severity of consequences” factor. It might also reduce the uncertainty constant in the risk assessment formula, facilitating a policy decision with respect to delegation of the string as a new TLD.

9.2.2 Disadvantages
Further study obviously involves a delay that may or may not be agreeable to applicants, and it may also require access to data that are not (or not readily) available. Depending on the way in which a resolution request arrives at the root, it may be difficult or impossible to determine the original source; and even if the source can be discovered, it might be difficult or impossible (because of lack of cooperation or understanding at the source) to determine precisely why a particular request was sent to the root.

The “further study” option also demands a termination condition: “at what point, after how much study, will it be possible for ICANN to make a final decision about this string?”

9.2.3 Residual risk
Unless further study concludes that the “severity of consequences” factor is zero, some risk will remain.

9.3 Wait until everyone has left the room

At least in principle, some uses of names that collide with proposed TLD strings could be eliminated: either phased out in favor of alternatives or abandoned entirely. For example, hardware and software systems that ship pre-configured to advertise local default domains such as home could be upgraded to behave otherwise. In these cases, a temporary moratorium on delegation, to allow time for vendors and users to abandon the conflicting use or to migrate to an alternative, might be a reasonable alternative to the permanent “just say no.” Similarly, a delay of 120 days before activating a new gTLD delegation could mitigate the risk associated with internal name certificates described in Sections 6.2 and 7.2.

9.3.1 Advantages
A temporary injunction that delays the delegation of a string pending evacuation of users from the “danger zone” would be less restrictive than a permanent ban.

9.3.2 Disadvantages
Anyone familiar with commercial software and hardware knows that migrating even a relatively small user base from one version of the same system to another—much less from one system to a different system—is almost never as straightforward in practice as it seems to be in principle. Legacy systems may not be upgradable even in principle, and consumer-grade devices in particular are highly unlikely to upgrade unless forced by a commercial vendor to do so. The time scales are likely to be years—potentially decades—rather than months.

Embracing “wait until…” as a mitigation strategy would therefore require policy decisions with respect to the degree of evacuation that would be accepted as functionally equivalent to “everyone” and a mechanism for coordinating the evacuation among the many different agents (vendors, users, industry consortia, etc.) who would have to cooperate in order for it to succeed.

9.3.3 Residual risk
Because no evacuation could ever be complete, the risks associated with name collision would remain for whatever fraction of the affected population would not or could not participate in it.

9.4 Look before you leap
Verisign [4] and others (including [8]) have recommended that before a new TLD is permanently delegated to an applicant, it undergo a period of “live test” during which it is added to the root zone file with a short TTL (so that it can be flushed out quickly if something goes wrong) while a monitoring system watches for impacts on Internet security or stability.

9.4.1 Advantages
A “trial run” in which a newly-delegated TLD is closely monitored for negative effects and quickly withdrawn if any appear could provide a level of confidence in the safety of a new delegation comparable to that which is achieved by other product-safety testing regimes, such as pharmaceutical and medical-device trials or probationary-period licensing of newly trained skilled craftsmen.

9.4.2 Disadvantages
The practical barriers to instrumenting the global Internet in such a way as to effectively perform the necessary monitoring may be insurmountable. Not least among these is the issue of trust and liability—for example, would the operator of a “live test” be expected to protect Internet users from harm during the test, or be responsible for damages that might result from running the test?

9.4.3 Residual risk
No “trial run” (particularly one of limited duration) could perfectly simulate the dynamics of a fully-delegated TLD and its registry, so some risk would remain even after some period of running a live test.

9.5 Notify affected parties
For some proposed TLDs in the current round, it may be possible to identify the parties most likely to be affected by name collision, and to notify them before the proposed TLD is delegated as a new gTLD.

9.5.1 Advantages
Prior notice of the impending delegation of a new gTLD that might collide with the existing use of an identical name string could enable affected parties to either change their existing uses or take other steps to prepare for potential consequences.

9.5.2 Disadvantages
Notification increases awareness, but does not directly mitigate any potential consequence of name collision other than surprise. For many proposed TLDs it might be difficult or impossible to determine which parties could be affected by name collision. Because affected parties might or might not understand the potential risks and consequences of name collision and how to manage them, either in general or with respect to their own existing uses, notification might be ineffective without substantial concomitant technical and educational assistance.

9.5.3 Residual risk
In most cases at least some potentially affected parties will not be recognized and notified; and those that are recognized and notified may or may not be able to effectively prepare for the effects of name collision on their existing uses, with or without assistance.

Here are some of the tasty bits from a risk-mitigation proposal issued by ICANN staff several days later (5-August, 2013).



The Study establishes a low-risk profile for 80% of the strings. ICANN proposes to move forward with its established processes and procedures with delegating strings in this category (e.g., resolving objections, addressing GAC advice, etc.) after implementing two measures in an effort to mitigate the residual namespace collision risks.

First, registry operators will implement a period of no less than 120 days from the date that a registry agreement is signed before it may activate any names under the TLD in the DNS. This measure will help mitigate the risks related to the internal name certificates issue as described in the Study report and SSAC Advisory on Internal Name Certificates. Registry operators, if they wish, may allocate names during this period, i.e., accept registrations, but they will not activate them in DNS. If a registry operator were to allocate names during this 120-day period, it would have to clearly inform the registrants about the impossibility to activate names until the period ends.

Second, once a TLD is first delegated within the public DNS root to name servers designated by the registry operator, the registry operator will not activate any names under the TLD in the DNS for a period of no less than 30 days. During this 30-day period, the registry operator will notify the point of contacts of the IP addresses that issue DNS requests for an un-delegated TLD or names under it. The minimum set of requirements for the notification is described in Appendix A of this paper. This measure will help mitigate the namespace collision issues in general. Note that both no-activate-name periods can overlap.

The TLD name servers may see DNS queries for an un-delegated name from recursive resolvers – for example, a resolver operated by a subscriber’s ISP or hosting provider, a resolver operated by or for a private (e.g., corporate) network, or a global public name resolution service. These queries will not include the IP address of the original requesting host, i.e., the source IP address that will be visible to the TLD is the source address of the recursive resolver. In the event that the TLD operator sees a request for a non-delegated name, it must request the assistance of these recursive resolver operators in the notification process as described in Appendix A.


ICANN considers that the Study presents sufficient evidence to classify home and corp as high-risk strings. Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk based on the criteria described above. An applicant for one of these strings would have the option to withdraw its application, or work towards resolving the issues that led to its categorization as high risk (i.e., those described in section 7 of the Study report). An applicant for a high-risk string can provide evidence of the results from the steps taken to mitigate the name collision risks to an acceptable level. ICANN may seek independent confirmation of the results before allowing delegation of such string.


For the remaining 20% of the strings that do not fall into the low or high-risk categories, further study is needed to better assess the risk and understand what mitigation measures may be needed to allow these strings to move forward. The goal of the study will be to classify the strings as either low or high-risk using more data and tests than those currently available. While this study is being conducted, ICANN would not allow delegation of the strings in this category. ICANN expects the further study to take between three and six months. At the same time, an applicant for these strings can work towards resolving the issues that prevented their proposed string from being categorized as low risk (e.g., those described in section 7 of the Study report). An applicant can provide evidence of the results from the steps taken to mitigate the name collision risks to an acceptable level. ICANN may seek independent confirmation of the results before allowing delegation of such string. If and when a string from this category has been reclassified as low-risk, it can proceed as described above for the low-risk category strings.


ICANN is fully committed to the delegation of new gTLDs in a secure and stable manner. As with most things on the Internet, it is not possible to eliminate risk entirely. Nevertheless, ICANN would only proceed to delegate a new gTLD when the risk profile of such string had been mitigated to an acceptable level. We appreciate the community’s involvement in the process and look forward to further collaboration on the remaining work.


Registry operator will notify the point of contact of each IP address block that issue any type of DNS requests (the Requestors) for names under the TLD or its apex.  The point of contact(s) will be derived from the respective Regional Internet Registry (RIR) database. Registry operator will offer customer support for the Requestors or their clients (origin of the queries) in, at least, the same languages and mechanisms the registry plans to offer customer support for registry services. Registry operator will avoid sending unnecessary duplicate notifications (e.g. one notification per point of contact).

The notification should be sent, at least, over email and must include, at least the following elements: 1) the TLD string; 2) why the IP address holder is receiving this email; 3) the potential problems the Requestor or its clients could encounter (e.g., those described in section 6 of the Study report); 4) the date when the gTLD signed the registry agreement with ICANN, and the date of gTLD delegation; 5) when the domain names under the gTLD will first become active in DNS; 6) multiple points of contact (e.g. email address, phone number) should people have questions; 7) will be in English and may be in other languages the point of contact is presumed to know; 8) ask the Requestors to pass the notification to their clients in case the Requestors are not the origin of the queries, e.g., if they are providers of DNS resolution services; 9) a sample of timestamps of DNS request in UTC to help identify the origin of queries; 10) email digitally signed with valid S/MIME certificate from well-known public CA.

It’s that last appendix, where people are going to get notified, that really caught my eye.  I can imagine a day when an ISP gets notifications from all kinds of different registry operators listing the IP addresses of its customer-facing recursive DNS servers.  The notification will say that the ISP’s customers are generating this kind of error traffic, but it leaves the puzzle of figuring out which customer to the ISP.  That means combing through DNS logs to ferret out which customer it actually was, carrying the bad news to that customer, and presumably dealing with the outraged fallout.  In other cases these notifications will go directly to corporate network operators, with the same result.  In either case, ponder the implications of a 30-day lead-time to fix these things.  Maybe easy.  Maybe not.
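That ISP-side detective work would presumably amount to scanning recursive-resolver query logs for names under the flagged strings.  Here’s a minimal sketch in Python of what that might look like; the log format, IP addresses, and watched strings are all invented for illustration:

```python
from collections import Counter

# Hypothetical resolver log lines: "timestamp client-ip qname qtype"
LOG_LINES = [
    "2013-08-05T12:00:01Z 192.0.2.10 printer.home. A",
    "2013-08-05T12:00:02Z 192.0.2.10 mail.corp. A",
    "2013-08-05T12:00:03Z 198.51.100.7 www.example.com. A",
    "2013-08-05T12:00:04Z 192.0.2.44 nas.home. AAAA",
]

# Strings flagged as collision-prone (home and corp, per the study)
WATCHED_TLDS = {"home", "corp"}

def clients_leaking_queries(lines, watched=WATCHED_TLDS):
    """Map each watched TLD to a Counter of client IPs that queried under it."""
    hits = {tld: Counter() for tld in watched}
    for line in lines:
        _ts, client, qname, _qtype = line.split()
        # take the rightmost label of the query name as the TLD
        tld = qname.rstrip(".").rsplit(".", 1)[-1].lower()
        if tld in hits:
            hits[tld][client] += 1
    return hits

leaks = clients_leaking_queries(LOG_LINES)
```

With real logs the hard part isn’t the scan, it’s the volume — and the registry’s notification only names the resolver, so the ISP still has to map those client addresses back to individual customers.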

What’s next?  Where do we go from here?

For me, “learning more” and “spreading the word” are the next steps.  People on all sides of the argument are weighing in but, as InterIsle points out, there is a lot of analysis still to be done.  They were able to identify the number of queries, the new TLDs that were queried, and the range of IP addresses the queries came from.  What they point out we don’t know (and need to) is the impact of those queries: how bad would the breakdowns be?  Opinions are loudly stated, but facts are scarce.

If you want to learn more, the best place to get started is probably ICANN’s “Public Comment” page on this issue.  You’ll have some reading to do, but right now (until 17-September, 2013) you have the opportunity to submit comments.  The more of you who do that, the better.  The spin-doctors on all sides are hard at work and it’s very difficult to find unbiased information.  There aren’t very many comments as I write this in mid-August, but they should make interesting reading as they come in, and you can read them too.

Click HERE for the ICANN public-comment page

That’s more than enough for one blog post.  Sorry this “little bit more detail” section got so long.  There’s plenty more if you want to dig further.

DISCLAIMER:  Be aware that almost everybody in this debate is conflicted in one way or another (including me – here’s a link to my “Statement of Interest” on the ICANN site).  I participate in ICANN as the representative of a regional internet exchange point (MICE) and also as the owner of a gaggle of really generic .COM domains (click HERE for that story).  I haven’t got a clue what the impact of new gTLDs will be on my domains.  I also don’t know what the impact will be on ISPs and corporate network operators but I am very uneasy right now.  I may write some more opinionated posts about that unease, once I understand better what’s going on.


Repairing the road


So here’s a new thing for me to obsess about.  The condition of the road in the summer time.  This spring was especially tough on our road because the rain. never. stopped.  So our road, which was already getting pretty ratty, turned into a nightmare this year.

Here’s a picture from last year – note the gravel-free tracks through grass.  This is not what a gravel road is supposed to look like.  It’s supposed to have gravel in it, not grass.



Here’s a picture of that same segment of road as of this morning.  See?  Gravel, not grass.  Much better.  In essence this is what I’ve been fiddling with every dry day for the last month.  There have been precious few of those, so this project has taken a lot longer than I thought it would.


This is a piece of road we hardly ever use.  It was built so that semis can turn around when they get in here (useful for when we were building the house, and for grain trucks when we were still renting the land for row crops).  But most of the time it just sits there, and you can see that it likes to be covered with grass.


But here’s a picture of it this spring.  One trip across it with a truck and there are giant divots in the road.


So this was my first experiment with the land plane.  It’s starting to get grassy again because I fixed this chunk about a month ago and it’s been raining pretty much ever since.  But you can see how the divot is gone.


Now let’s take a look at some areas that got really bad this spring.  This first one never gets this bad, and never over this long a stretch.


Here’s what it looked like after a few passes of the land plane.  This was the “dang, I’ve really messed this up” picture.  I was thinking that I might be doing more damage than good when I took this shot.  But fear not!  It has to get ugly before it can get pretty.  Pulling all that grass out makes a mess for a while.


See?  This is that same segment after the very last pass.


Here’s another view of that segment.  My first approach, before using the land plane, was to use the bucket on my other tractor.  That’s all I’ve done in prior years, but you can see that I wasn’t really making much of a dent — mostly because there was so much damage over a really long piece of the road.  I was pretty unhappy with the results.


Here’s that same “first few passes with the land plane” shot.


And here’s the “after last pass” shot.  It should be noted that to get through this whole project, I’ve taken something like 10-15 passes across the road.  I changed the settings a few times to try things out and have some ideas that you’ll find in the “Tips” section at the end of the post.


THIS part of the road is always nasty — it’s going through a really wet area and is always soft.  There’s a “redo this section of the road with road fabric” project in my future here.  But you can see just how bad things got this spring.  This shot was taken AFTER I’d worked on this area with the bucket for a while.


And here’s that last-pass shot…  It looks pretty good, but it’s still really fragile.  This smoothyness won’t last long, especially if a few trucks go over it before the rain stops.


Another “before” shot.  Same part of the road, just a little bit around the corner and looking out into the wetland.


And the “after” shot.  This part was really hard to do.  There’s a lot of dirt and not much gravel to dig up along here.  But even with all that, the gravel came back pretty well.  Again, the gravel along here will be pounded back into the road as the summer progresses.  The “redo with road-cloth” project is going to have to extend into this part of the road too.


Here’s the implement — a Woods land plane, hanging on the 3-point hitch of my Kubota M-6800.  This is a really slick deal.  The two edges adjust up and down, and tilt, independently.  See the four bolts at the bottom left?  Loosening them allows that shoe at the bottom to be adjusted up and down.  I fiddled with variations of “low in front, low on one side, etc.” and have a few ideas about how to do that.  You’re looking at my “last pass” configuration — low in front, high in the back, symmetrical side to side.   This doesn’t cut into the road at all, it just rides through the loose gravel and makes it flat.  My goal when running this configuration was to have a nice amount of gravel caught by the front blade and no gravel going over the top of the back blade.  That’s why the road’s so smoothy.  But this configuration is no good for actually repairing the road, only for dressing up the gravel at the end.


Here’s another view of the land plane, showing how the blades are on a diagonal.  In theory, this means that the gravel moves from one side to the other.  It probably does a little bit, but it’s certainly no replacement for a real rear blade if you need to move a lot of gravel from one part of the road to another.



OK, you’re probably really interested in this stuff if you made it this far through the post.  Here are some lessons I learned that I’m documenting for me, since I probably won’t do this project again until next spring and will likely forget some of this stuff.

Clearing grass

The box will clog up during early grass-pulling, dirt-removing passes.  Just raise it a little bit and back up.  That’ll smooth the dirt and grass out and after a few days it’ll have dried enough that it’ll break up rather than clogging the works in a subsequent pass (have I mentioned lots of passes??).  At first I was pushing that stuff off to the side, or pulling it out by hand.  Way too hard.


I ran the scarifiers right at the same level as the front blade for a while, but eventually pulled them off (they aren’t on the land plane in the pictures).  I think they would probably be really important if you were using this to stir up gravel when the road is really dry, but it’s wet here right now and the land plane did a better job of smoothing the ruts without them.

Removing ruts

I set the whole thing up at its mid-points all around and level (front and back, side to side, 3-point hitch level) while I was taking the ruts and grass out.  That worked OK, but I think next time I’ll try a slightly less aggressive version of this next setting.

Crowning and removing ruts

Towards the end of the project I wanted to put a little more crown in the road while removing some ruts that came in after a rain.  I set the “leading side” side of the land plane as low as it would go, front and back.  The “trailing side” got set as high as it would go.  I made the leading side bite even more by lowering that side of the box on the 3-point hitch.  So my goal was to bevel the road, with the leading side doing the cutting and then allowing the material to move over and escape out the trailing side.

Finishing and dressing the gravel

Those first two settings are fine for working divots out of the road, but they leave a lumpy surface, because a lot of material goes over the second blade.  I would try to keep that to a minimum by raising and lowering the 3-point, but there’s almost no way to avoid it, because my goal was to remove ruts, not leave a perfect surface.  But for the last couple of passes I just wanted to smooth out the gravel, not change the contour of the road.  For this setting, my goal is NO gravel going over the rear blade — that’s how I got that really smoothy surface.  So this setting was level side to side (both on the land plane and the 3-point), low in front and high in the back (to grab gravel easily with the front blade but not let much escape over the back blade).


A great project.  I borrowed the land plane from my friend Danny, but I think I’ll have to buy it from him.  He’s gonna have to pry this thing out of my cold dead hands.  I can imagine taking another pass or two several more times this summer, just to pull the grass.  Darn nifty.





A blog post from Fargo – a new gizmo

Dave Winer has a cool new gizmo (Fargo) that I’ve been messing around with for the last week or so (don’t get me all wrapped up in a time warp here).

Why I loves Fargo

  • I loves this gizmo because I’m addicted to outlining and I’m always on the hunt for simpler, more approachable ways to do it (and recruit other addicts). For the most part, I’ve gotten pretty solidly into the “mind mapping” groove, but that’s just a habit. When you boil my use of mind-mapping software down you find that all I’m really doing is outlining. Enough about why Fargo attracts me.

Problems this WordPress connector solve

  • The problem I was running into with Fargo was “well gee, in many cases I will eventually want to slurp it out of Fargo and push it into a traditional word processor and turn it into a report of some kind — how I do dat?”
  • Another problem I was running into was “how can I keep non-addicts up to date on the outline without forcing them into something that makes them uneasy?”

This connector between Fargo and WordPress may just be the ticket. So here’s a first-try blog post that I’ll then come back and edit a bit to test out how this gizmo works.


  • I coulda sworn I saw one of these outlines posted to a WP site in a way that the expanding/shrinking triangles came along too.
  • That would be good to know how to do — ’cause some of my outlines get
  • really big and it would be nice to allow people to open/close parts of
  • it rather than seeing the whole thing. I wonder if that’s done in CSS,
  • or if it’s a theme thing, or a plugin? Ah… maybe can do that with a public link to the post? Eeeauuu… That’s pretty homely. What about a link to view this post in Reader?
  • oops – lost all the links in that ‘graph. tried to pull them back in by copy/pasting from the WP version but the links didn’t come with.
  • i’m making hash out of this. where did all those extra Returns come from when i pasted the text back in (tried copy/paste of a portion of the paragraph)
  • hm… dragging does something. but not sure what. dragged a big chunk to the bottom of the page and it disappeared. where’s “undo” when i need it? 😉
  • How do I chop over-long paragraphs (like this one is getting to be) into chunks so I can reorganize them? Hitting Return in the middle of my long ‘graph gets me a new one at the bottom. Shift-Return? cmd-Return? alt-Return? ctr-Return? Enter? nope. Hmm. I’m constantly taking notes and tidying up afterward. Gotta be a way… Maybe it’s just a drafting habit I need to learn
    • But I think it would be nice to have a “split this headline” command. place cursor at split point, issue “split” command and wind up with the headline divided in two.
  • Ahhh. Firefox and Safari. That’s the source of my troubles. Safari is a lot nicer experience. I can cut/paste sections of headlines without getting a whole series of headlines.
Repainting the SC430

    OK, I admit it.  I’m kind of a lame car guy.  I love cars, but I am old and tired and hate being uncomfortable.  So about 5 years ago I bought a year-1 (2002) Lexus SC430 that had been rode hard and put away wet, for the princely sum of $17,000.  I’ve been bringing it back from an early grave ever since.  The first few years were devoted to repairing the driving stuff — replacing bent wheels, struts, etc.

    I also did some exterior work on my own, because the black paint (pity me, I own a black car) had gotten a really bad case of the swirlys from many years of bad car washes.  Plus the headlights had gotten really fogged, so I cleaned them up.

    But this year is the year to do what I’ve been dreaming of ever since I bought the car — a complete repainting job.  Mostly to cure all the battered-paint troubles, but also to slightly change the color to an extremely dark blue.  I’m hoping to get that effect where it looks black unless you really look at it in direct sun, at which point the blue metallic will show up.

    This is a post to chronicle the project.

    The folks who did it

    Will and Robert Latuff — of Latuff Brothers Autobody.  They look displeased, no?


    Rick, Dan, Brandon, Don and Tim — the guys that did the heavy lifting.  They look unhappy too.  Maybe they’re feeling crummy about the terrible job they did?  Or maybe they just don’t get along with each other very well.


    Huge hole in this post, waiting for a picture of Kim and Steve from Dick and Rick’s Auto Interiors in Bloomington — the folks who redid the upholstery.


    Ridiculous wallpaper photos (click on them — these thumbnails don’t do them justice)

    Being a big believer in eating my dessert first, here are some “ridiculous wallpaper photos” of the completed project, taken here at the farm.

    sc430 wallpaper 1

    sc430 wallpaper 2

    sc430 wallpaper 3


    sc430 wallpaper 5

    “Before” pictures of the body

    Click on the photos to get the full huge versions so you can see the nasties that I’m trying to fix. Dings, chips, swirlies.  The complete catastrophe.





    Not unexpectedly, this 12-year-old car had some extra projects hidden inside it.  Like this crimped thingy.  I’ll have to ask Robert what it is.  My guess is that it’s one of the hoses for the headlight washers.


    This was a good one.  When one of the prior repairs was made, the people at the body shop GLUED the front bumper onto the car.  No wonder it didn’t line up right.


    “In progress” pictures of the body

    Robert Latuff shared a whole boatload of documentation shots that he took along the way.  Thanks Robert!

    There was all kinds of detail work to do.


    And repairs to badly-done prior repairs.  This car has been through a lot, mostly at the hands of the prior 3 owners.


    There was some pretty rough hail damage, especially on the roof…


    The rear bumper needed to be reworked…


    Even the doors needed to be returned to something more closely approximating their original shape.


    Poor car, so many dents and troubles to be smoothed out.



    Here’s a series of pictures showing the car in various stages of being taken apart, repaired, primed, etc.  Again, these pictures are mostly courtesy of Robert Latuff, although there are a few of mine sprinkled in from the day Robert let me look in on the car while it was in progress.

    Ever wondered what your car looks like with all the soft cushy bits removed?


    It seems silly, but that’s where the “back seat” of an sc430 goes…



    I embarrassed Robert and forced him to stand in one of my pictures.








    Bits and pieces are coming off to get painted


    I don’t think this is street legal, but it looks like it might be fun to drive — if it had seats.


    Redoing the seats

    Speaking of seats, another part of this project was to redo those.  It started to feel like a good time to do it about halfway into the repainting, since the seats had already been yanked out of the car.  So they went off to Dick and Rick’s Auto Interiors in Bloomington for a re-do.  Here are some pictures of the way they looked when we started…

    One of the prior owners must have been a cowboy that drove this car with his spurs on.  Really hard on the lower edge of the seat.



    One of the “rear seat” belts had been taped down to keep it from flapping in the wind…  Nice, huh?


    Driver’s seat didn’t look too bad from this angle, but the leather was pretty much on its last legs.



    This is a weird sc430 problem that lots of owners have.  The “headrests” in the “rear seat” get clobbered by the sun, shrink, and pull away from their underlying frames.  Homely.


    Here’s the back of the “rear seat” after it’s been removed from the car — in all of its duct-tape glory.


    And here are the front seats.


    Here’s a shot that Steve took over at the upholstery shop showing another surprise.  I wonder what took that bite out of the upper-left corner of the seat foam.  A bear?

    VINYL 006

    “After” pictures

    These are some more utilitarian pictures — not quite as snazzy as the ridiculous wallpaper pictures at the top of the post, but more documentation of this great project.  Nah, I don’t like it.  Ick.  What a misguided effort this was.

    This is one of the “before” pictures from up above, with a similar “after” picture right behind it.  Oh, one other change this year: when the tires wore out I replaced them, going to smaller wheels (from 18″ to 17″) and higher-profile tires to bring the total diameter back up to roughly what it had been before.  If you’re thinking about doing this, I can tell you I couldn’t be happier.  It’s easy to see the comparison in these two shots.  Old = skinny tires.  New = slightly fatter tires.  It’s also pretty easy to see the slight change in color, from black to dark blue.



    Here’s another “before/after” comparison, again showing the difference in color and tires.  If you click on these thumbnails, you’ll be able to really see the difference in the paint.  Also note the lovely job that the lads did on fixing up the beat up mirror shrouds.




    I forgot to take a “before” picture of all the road rash on the front of the car.  But it’s all gone now.


    Another thing the folks at Latuff fixed was a funky gas cap cover.  It used to stick out in a weird crooked way.  Fixed.


    Hail damage to the trunk?  Fixed.


    Marcie liked the view of the clouds and the trees reflected in the hood.  I do too.


    And here are the seats!

    Here’s a “before” shot, just as a reminder…


    Are these nifty or what?  Steve and Kim over at Dick and Rick’s steered me straight on this one.  I told them that I was going for the color of an old Mercedes SL convertible and this is where we wound up.


    Note the way that the rear “head rests” look now that Steve’s been at it.


    Everybody was a little edgy about whether the remaining old black interior and seat frames were going to work with the new, different-color, upholstery.  I think they work great — I like the way they set each other off.


    The end

    So there you have it.  The Great 2013 Redo of a 2002 Lexus SC430.  I couldn’t be happier — thanks to all who helped!

    One last Ridiculous Wallpaper Picture to send you on your way.  Happy trails!

    sc430 wallpaper 4

    ICANN Intersessional meeting — LA — March, 2013

    A few photos from a “between meetings” ICANN meeting of the non-contracted parties house of the GNSO.  Click on the pictures for full-sized versions.










    You’ll definitely want to click on this panorama and take a look at the full-sized version.  This was an informal session with members of the Board who were arriving for meetings the following day.


    Front row seats




    Migrating from Snow Leopard Server to OSX Server (Mountain Lion)

    Back in late 2011 I wrote this scratchpad post to document my efforts to move from Snow Leopard Server to Lion Server.  I ran into some configuration problems that stumped the 2nd-level folks at Apple and eventually I abandoned the project and stayed on Snow Leopard.

    When Mountain Lion came out, and went through an update or two to iron the kinks out, I decided to have another go at it.  I’m crossing my fingers here, but I’ve been on OSX Server (the new/old name under Mountain Lion) for about a month now and things look pretty stable.  So here’s another scratchpad post to document what I did to put back a few things that were removed from the standard OSX Server environment.


    Stability and Reliability

    Upgrade memory

    I found that the standard 4 GB of memory that shipped with my server started to get very tight as I turned on the various Python-based services (Calendar, Contacts, Wiki, etc.).  In fact, by the time I had all those services running, the machine would go unreachable for a while and then lock up and crash.  I upgraded the memory to 16 GB (not officially supported).  Looking at this memory-use graph out of Server, you can see why the server was having trouble with 4 GB, though it looks like 8 GB would work OK as well.


    Nightly auto-restarts

    I know, real men are supposed to run their servers for decades without restarting them.  But I’ve found that having the server reboot itself every night in the wee hours of the morning clears out a lot of memory-leak cruft and, combined with the added memory, has made the machine quite stable.  System Preferences/Energy Saver/Schedule is the place to do that.
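    For the command-line inclined, the same schedule can be set with pmset.  This is a hedged sketch, not my actual settings: the weekday string and times are examples, and the Energy Saver pane remains the canonical place to manage this.  Note that pmset's documented repeat types include shutdown and poweron/wakeorpoweron, so one way to get a nightly "restart" is a shutdown followed by a power-on a few minutes later:

```shell
# Sketch only: nightly shutdown in the wee hours, power back on shortly after
# (days/times here are examples, not a recommendation)
sudo pmset repeat shutdown MTWRFSU 04:00:00 wakeorpoweron MTWRFSU 04:05:00

# Confirm what's scheduled
pmset -g sched
```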


    Set up a VPN

    I hardly ever use it, but the idea of a completely-under-my-control VPN appeals to my tin-foil-hat privacy side.  Setting it up is a little tricky, and I found this guide to setting up VPN on a Mac Mini server running Mountain Lion to be really helpful.  I stepped through the process exactly as they described it and it worked.  I love that.

    Replace features that were removed

    Replace firewall capability

    The nifty firewall in Snow Leopard (IPFW) was replaced with the newer packet filter (PF) firewall in Mountain Lion, and all of the firewall-management features were removed from Server Manager.  Most likely that's because the presumption is that these servers run on a network that's already behind a firewall, and because these rascals are tricky and hard for Apple to support.  But I needed to run the PF firewall on this machine.  Doing that by hand is Too Hard, so here's what I did.

    • Consider using IceFloor, a PF front end —
    • Note: firewall logging gets turned on every time you reload the settings.  Logging can be disabled (once you’ve got a stable set of rules) by editing the config file from the main rules tree.
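    If you'd like to see the sort of thing IceFloor is managing for you under the hood, here's a minimal hand-rolled sketch.  The rules, ports, interface name, and file path are illustrative assumptions, not my actual ruleset:

```shell
# Write a tiny example ruleset.  PF is last-match-wins, so the pass
# rules below carve exceptions out of the initial block.
sudo tee /etc/pf.anchors/my.rules <<'EOF'
block in all
pass in on en0 proto tcp to any port { 22, 80, 443 } keep state
pass out all keep state
EOF

sudo pfctl -nf /etc/pf.anchors/my.rules   # -n: syntax-check only
sudo pfctl -f /etc/pf.anchors/my.rules    # load the ruleset
sudo pfctl -e                             # enable PF
```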

    Restore MySQL

    Apple dropped MySQL from their distribution (licensing issues would be my guess).  But all of the family web sites run WordPress on top of MySQL so I need to add that back.  Here’s what I did:
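    The general shape of it looks something like this.  This is a hedged sketch of one common route (via Homebrew; the official installer package from mysql.com works just as well), not necessarily the exact packages I used:

```shell
# One common way to put MySQL back on an OSX server (sketch)
brew install mysql                 # or run the official .dmg installer
brew services start mysql          # have launchd keep mysqld running
mysql_secure_installation          # set a root password, drop the test bits
# WordPress then connects to localhost/the socket just as it always has
```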

    Webmail and email aliases

    Webmail is in the “nice to have in travel emergencies” category.  But the Roundcube webmail is also the best place I’ve found to replace some of the email-forwarding, email-exploder capabilities that went away in the transition from Snow Leopard to Mountain Lion.  So I put it back.  Conceptually, it’s an email client running on the server that can talk to the mail server just like any other client.  It just happens to use the web as its user interface.  Here are useful links to get you started.

    • A useful step-by-step guide –
    • I had the devil’s own time getting authentication to work properly.  In fact the only scheme that works for me is allowing “Cleartext” as an authentication option in Server, and using LOGIN as the IMAP_AUTH setting in the RoundCube config file.  Here’s a thread that gives more detail around this, although the fix in that thread didn’t work for me —
    • Here’s how to add the “filters” capability (the most important part, for me).  The only thing to keep an eye on is that the example changes are made to a different config file than the live one.  I think this is just an error — but there may be super-cleverness going on there.  In any event, I made the changes to the live file and it’s working.  YMMV.
    • I had to do a lot of debugging on this one.  The log/error files (in the /webmail directory where RoundCube is installed) are of great help in figuring out what’s going on.
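    For reference, the setting the authentication bullet above is talking about looks like this in the Roundcube config.  This is just a fragment; the array name varies by Roundcube version, and LOGIN is simply the value that worked for me:

```php
// config.inc.php on newer Roundcube -- older versions use main.inc.php
// and the $rcmail_config array name instead of $config
$config['imap_auth_type'] = 'LOGIN';  // force LOGIN instead of auto-detect
```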

    Once Roundcube is running and supporting filters, you can…

    Replace “group” emails (in other words, create multi-recipient email aliases)

    Here are the steps I would go through to create an alias:

    • Set up the alias in Server Manager as a local user named “friends”
    • Use WorkGroup Manager (download here –) to add additional email domains, if you need to.  In this example the “friends” user needs the extra domain added, because I host multiple email domains on this server and it would only answer to the default one if I didn’t.
    • Log into Roundcube with the “friends” user credentials to establish the filter that will redirect the mail to the real recipients
      • Go to Settings/Filters
      • Create a new filter
      • Select the “all messages” option for the filter
      • Execute “Send message copy to” rule for each target address (there may be a limit on the number, I only use this for small lists)
      • Execute “Redirect message to” for the last addressee on your list if you don’t want to keep copies of the messages in the “friends” IMAP account on your server
      • Execute “Stop evaluating rules”
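    Under the hood, the filter those steps build amounts to a small Sieve script, something like the fragment below.  This is a sketch and the addresses are hypothetical; Roundcube's filters plugin generates the real thing for you:

```sieve
# "all messages" filter: copy to each recipient, then plain-redirect to
# the last one so no copies pile up in the "friends" IMAP account
require ["copy"];
redirect :copy "alice@example.com";
redirect :copy "bob@example.com";
redirect "carol@example.com";
stop;
```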

    Replace mailing list (Mailman) capability

    This was one of the hardest debugging jobs in the whole transition.  Now that I’ve been through the manual install of this system, I can see why Apple dropped it.  It must be a support nightmare for them.  But I host a couple of very active lists and have to have this capability; losing it in the migration was a non-starter for me.

    For most of you, you can stop here.  Your email lists will be working on your new server.

    I wanted to run parallel lists under two domains: keep the lists running under the old domain name until I had the new version up and tested on the new server, and then cut all the list members over to the new list.  If you have a low-priority list where participants can be down for a while, this is probably overkill.  Just let them know that things are going to be broken for a few days, take the lists across, redirect the domain when you change the main DNS MX entry for email, and be done.  But I was trying for 100% uptime during the transition.  I bounced my users over a few rocks during this process, but we were up all the time.

    To do email lists under multiple domains in Mailman, you have to pay attention to Alias Maps.

    • I used two different sources to piece together a working configuration:
    • The first page, from Apple, gives you the right syntax for the changes you need to make to the Mailman config file.  The rest of the steps are useful too, except they point to an older location for the Mailman installation (the files are now in /usr/local/mailman rather than /usr/share/mailman).
    • Here are the key lines in my live file, using my real domains.  The main server domain comes first; the other three are used for testing or delivering mailing lists.  Every goofy quote and comma matters here.
      • ##################################################
        # Put your site-specific settings below this line.
        MTA = 'Postfix'
        DEFAULT_EMAIL_HOST = ''
        DEFAULT_URL_HOST = ''
        POSTFIX_STYLE_VIRTUAL_DOMAINS = [ '', '', '' ]
    • Note: do not use <angle brackets> around any of these entries.  It took me a week to realize that all the documentation was trying to do was look pretty.  Putting <angle brackets> around some of those domain-name entries breaks Mailman in a really subtle way: it works fine at receiving and sending posts to the lists, but notification emails to list owners and list admins are malformed and get rejected by the SMTP server.
    • That second link, from the GNU documentation, got me to working entries in the Postfix config.  Again, here are the two real working entries from my server.  They’re buried in the config file, but that second post explains what you’re about:
      • virtual_alias_maps = $virtual_maps hash:/Library/Server/Mail/Config/postfix/virtual_users,hash:/usr/local/mailman/data/virtual-mailman
      • alias_maps = hash:/etc/aliases,hash:/usr/local/mailman/data/aliases
    • Now that all the plumbing is in place to create email lists under multiple domains, there’s one more trick.  The web-based front end to Mailman is fine if you’re creating lists in a single domain.  But it doesn’t allow you to specify which domain the list will be created in, so if you want to create a list in a domain other than the server’s default domain name, you have to use the command-line command to create the list.  It’s not hard, here’s how.
      • Enter the command line
      • Go to the following directory — you have to be in this directory in order to launch the program.  It will fail if you try it from anywhere else.
        • $ cd /usr/local/mailman/
      • Launch the newlist program and follow the prompts.  The key thing is to include the domain name in the name of the list when you’re prompted — that’s the bit that’s missing from the web front end.  Again, I’ll use live entries that work with the config stuff above.  You type the stuff in bold.
        • sudo bin/newlist
          Enter the name of the list:
          Enter the email of the person running the list:
          Initial bgnws-testing password:
          Hit enter to notify bgnwstest owner...
      • To restart mailman
        • sudo bin/mailmanctl restart
      • Finally, once the new list is created, here are the steps I went through to keep people on the air during the transition period.  My goal was to have the old list keep working while the new one was being built, and to end up with people able to send notes to either the old or the new address of the list and have them wind up in the same place.  This may be needlessly complicated, but it’s the way I did it.
        • Create an email alias in WG manager on the old server – same name, but forwards to the new-server address.  This alias won’t work until the old list is deleted with the rmlist command, coming up in a second.  (note, different domain names are needed for this to work, because I don’t want to migrate all the email/lists at the same time – this would be much easier if you’re just cutting over from an old server to new)
        • Create a forwarding account on the new server – NOT the same name as the new list (so it doesn’t conflict with the new list) but with an alias to the OLD domain name.  Use Roundcube forwarding to push old-domain posts along to the new-domain address of the list.
        • Create a duplicate list on the new server, along with all members and settings
        • Delete the old-server list – now the alias on the old server will kick in and redirect mail to the new-server address.
        • Transition is complete when old-server DNS is moved to new-server – list continues to answer to either new or old domain name because of the forwarding done by the alias account on the new server.
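    One follow-up to the Alias Maps step above: after editing those Postfix entries, the hash maps have to be (re)compiled and Postfix told to pick up the change.  A sketch, using the paths quoted earlier:

```shell
# Rebuild the Postfix hash map for the virtual users table
sudo postmap /Library/Server/Mail/Config/postfix/virtual_users

# Regenerate Mailman's aliases and virtual-mailman maps
# (genaliases runs postalias/postmap on them for you)
sudo /usr/local/mailman/bin/genaliases

# Tell Postfix to re-read its configuration
sudo postfix reload
```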



    Update, late 2013:  Preparing for the NEXT upgrade — the road to Mavericks

    I’ve started a thread over on the Apple Support Community to see if there are any impacts to these additions with an in-place upgrade to Mavericks.  It took me a really long time to get from Snow Leopard to Mountain Lion (my attempt to get to Lion never succeeded).  I’m hoping that the road won’t be quite as bumpy this time, but we’ll see.  Here’s a link to the thread.

    So far it looks like Roundcube may need to be updated, although the update looks pretty cool.  One of the appealing things is that address books may be available in the Roundcube environment.  That alone makes it intriguing.

    Loading a Comodo free email cert into Mac OSX and iOS

    The previous post was all about self-signed certs on my Mac.  Worked fine until I tried to export the cert to my iPhone.  Then I ran into the dreaded “no valid certificates” problem when trying to authorize the profile to sign and encrypt outbound mail.  My homebrew cert worked fine for enabling s/MIME on the device, but it was crippled.  So I ran off and got me a Comodo free email cert and pounded that in.

    Get the cert — using your Mac

    Go HERE — but don’t use Firefox, use Safari on your Mac.  If your default browser is Firefox, copy and paste this link into Safari.  You’ll thank me later.  The download appears to work fine in Firefox, but it doesn’t install the cert in a way that Mail can actually use.  Their download is highly automated and there’s breakage along the way.

    Follow the steps on the Comodo site and keep your fingers crossed; by the end of the normal process the cert will be correctly installed.  If the Comodo site stalls on the “attempting to collect and install your free certificate” step, go look in Downloads for the Collectccc.p7s file.  Double-click that file and the Keychain Access app will pop up and start prompting for the password you created when you configured the cert at Comodo.

    Click HERE for more detail on managing email certs in the Keychain app.  I deleted the old cert once I had the new cert installed and included in the cert-key access-control tab.

    Put a reminder on your calendar to renew the cert before it expires in a year.

    Configuring Mail to use the cert, on the Mac

    If the cert has been properly loaded, restart Mail and the signing and encrypting buttons should show up when you launch a new email message.  Note that they’re toggles — pay attention to what state they’re in.  Otherwise you’ll be signing or encrypting all your mail, which may make your recipients a little crazy.

    Configuring iOS to use the cert

    I sure hope this post never goes away.  That’s what I used to learn how to load the cert on my iPhone.  I’m going to put a shorthand version here, just to preserve it (since I’m going to need to repeat this every year when I renew the cert).

    Find the Comodo cert in the Keychain Access app.  UPDATE: Open the Keychain Access app, click the “My Certificates” choice in Category, and select the cert with your email address.  This will solve the “.p12 option greyed out” problem that PY Schobbens noted in the comments.

    Export it in Personal Information Exchange (.p12) format.  Pay attention to the password you put on the export file, you’ll need it on the other end.

    Email the exported cert (drag it into a Mail message to yourself) to the iOS device that’s using the same email address as your Mac.

    Open the attached cert on the iOS device and blast through the “Unsigned Profile” warning.  This is where that password will come in handy.

    Enable s/MIME on the phone (Settings/Mail, Contacts, Calendars/<your email account>/Advanced).  Check to make sure that the signing and encrypting options actually find your cert.  Then take care to back up a layer and tap “Done” to actually write the change to the account.   Note:  this bold highlighting is mostly a message to myself — surely you won’t skip that last step.  But if you send yourself a test message from your phone and it isn’t signed, that’s probably the cause.

    Note: with the arrival of iOS 8, the toggles for encrypting have changed.  Now the “encrypt” option is available at email-sending time even when “Encrypt by default” is toggled off for the account — a much better arrangement for those of us who only encrypt to a few people.

    Notes: adding and using a self-signed s/MIME email certificate to OSX Mail in Mountain Lion

    This is just a scratchpad post to remind myself what I did to get a self-signed cert into Mail under OSX Mountain Lion.

    This first post is all about using a self-generated cert — which will work fine unless you ALSO want to use it on an iOS device.  In which case, skip to the NEXT post, where I cracked the code of getting a Comodo cert installed on my Mac and my iPhone.  Sheesh, this is harder than it needs to be.

    Generating a self-signed certificate

    Click HERE to read the post that laid out the step-by-step process I followed to create that self-signed cert.  That post goes through the openssl commands to do the deed.  The instructions are written for a Windows user, so I’ve rewritten them for a Mountain Lion user.

    • Note: openssl is already installed on Mountain Lion, so you shouldn’t need to do any installation
    • Make sure to create the cert with the email address you are using in Mail.  In addition, I used that email address as the answer to the “common name” prompt in the Certificate Request part of the process (Steps 2 and 3 below).  I’m not sure that’s required, but it’s part of the formula that worked for me.

    Here are the command-line commands (mostly lifted from the blog post):

    1.    Generate an RSA Private Key in PEM format

    Type openssl (one time, just to drop into the openssl environment), then:


    genrsa -out my_key.key 2048


    my_key.key  is the desired filename for the private key file
    2048  is the desired key length of either 1024, 2048, or 4096

    2.    Generate a Certificate Signing Request:


    req -new -key my_key.key -out my_request.csr


    my_key.key is the input filename of the previously generated private key
    my_request.csr  is the output filename of the certificate signing request

    3.    Follow the on-screen prompts for the required certificate request information.

    4.    Generate a self-signed public certificate based on the request.


    x509 -req -days 3650 -in my_request.csr -signkey my_key.key -out my_cert.crt


    my_request.csr  is the input filename of the certificate signing request
    my_key.key is the input filename of the previously generated private key
    my_cert.crt  is the output filename of the public certificate
    3650 is the duration of validity of the certificate, in days.  In this case, it is 10 years (10 x 365 days)
    x509 is the X.509 Certificate Standard that we normally use in S/MIME communication

    This essentially signs your own public certificate with your own private key. In this process, you are now acting as the CA yourself!

    5.    Generate a PKCS#12 file:


    pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx -name "my-name"


    my_cert.crt  is the input filename of the public certificate, in PEM format
    my_key.key  is the input filename of the private key
    my_pkcs12.pfx  is the output filename of the pkcs#12 format file
    my-name  is the desired name that will sometimes be displayed in user interfaces.

    6.    (Optional) You can delete the certificate signing request (.csr) file and the private key (.key) file.

    7.    Now you can import your PKCS#12 file to your favorite email client, such as Microsoft Outlook or Thunderbird. You can now sign an email you send out using your own generated private key. For the public certificate (.crt) file, you can send this to others when requesting them to send an encrypted message to you.
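    Steps 1 through 5 can also be run non-interactively as a single script, with each command prefixed by openssl.  This is a sketch: the filenames, the example email address, and the export password are placeholders you'd swap for your own.

```shell
# Non-interactive version of steps 1-5 above (placeholder names/values)
set -e

# 1. Generate the RSA private key
openssl genrsa -out my_key.key 2048

# 2-3. Generate the certificate signing request, answering the
#      on-screen prompts up front via -subj
openssl req -new -key my_key.key -out my_request.csr \
  -subj "/CN=user@example.com/emailAddress=user@example.com"

# 4. Self-sign a public certificate, good for roughly 10 years
openssl x509 -req -days 3650 -in my_request.csr -signkey my_key.key \
  -out my_cert.crt

# 5. Bundle key + cert into a PKCS#12 file (password supplied
#    non-interactively here; pick your own)
openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export \
  -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx \
  -name "my-name" -passout pass:changeme

# Sanity check: show the subject we baked into the cert
openssl x509 -noout -subject -in my_cert.crt
```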

    Importing a self-signed certificate into the OSX Keychain Access application

    I double-clicked the .pfx (PKCS) file that I’d just created.  That fired up the Keychain Access app, which loaded the cert into the keychain.  I told it to trust the cert when it asked about that.

    Getting OSX Mountain Lion Mail to recognize the self-signed certificate

    Part of what derailed me in this process was that the transition from Lion to Mountain Lion eliminated the account-setup option to select a cert.  It’s automatic now.  So if the email address that’s in the cert matches the email address of the account, the s/MIME capability simply appears when composing a new message.  But in order for this to work, there’s one step needed in order to pull the cert in:

    • restart the Mail app



    Test shots from my new camera (Sony HX30V)


    Hi all.  This is partly a test of the new photo-uploader in WordPress 3.5 — along with a chance to show off pictures from the new camera.  Click through the photos if you want to pixel-peep the ginormous originals.

    This first one is a test of the panorama feature…  This shot is looking from the point behind the house down Pat’s Prairie towards Highway 88 (south).  The nice thing is how it’s picking up the two valleys that form the point I’m standing on.

    Indian Grass Panorama

    This next one is an HDR shot, just using the HDR in the camera instead of shooting 3 bracketed shots and then post-processing them in Photomatix Pro.

    Marcie’s workstation

    This one is a macro shot, taken in really low light, of a gizmo sitting on my desk last night.  I did nothing — this is Superior Auto (super-idiot) mode.

    Stupid macro

    This next one is completely silly macro.  Push camera up against the screen of the computer and pull the trigger.  I don’t think there really is a “minimum distance” for macro focus.

    Silly macro

    The rest of this series is just “stuff that sits behind my desk” in town.  All these shots were taken at night with available light, in Superior Idiot (er, Auto) mode.

    This is one of Dad’s sewing-thread weavings.  It’s double-weave, which means both pieces of cloth are woven at the same time.  REALLY small threads, peepul.

    Paul O’Connor bead double weave

    A cute little turtle that Mom found somewhere in her travels.

    Granny Pat’s turtle

    And an elephant…

    Granny Pat’s elephant

    And two little folks.  This shot was taken from across the room — they’re about half an inch tall.  This is a good example of 20x zoom at work.

    Granny Pat’s couple

    Same deal here — shot from across the room.  The bell is a couple inches high.

    Granny Pat’s bell

    Another across the room shot…

    Granny Pat’s paper Xmas tree

    The Flying Spaghetti Monster, who has unfortunately lost an eye…

    Flying Spaghetti Monster

    Ah HA!  Houston, we have the “unexpected rotation” problem.  This picture of folding paper art is rotated 90 degrees to the left.

    Folding paper art

    This next one is a really memory-laden picture by an old friend of my parents.

    Ried Hastie painting

    Mom made baskets for us kids — with pithy phrases worked in the bottoms of them.  This is one of my favorites — “Reach into life”

    Granny Pat basket – Reach Into Life













    Logic Pro runs lots better on my new MacBook Pro

    This is mostly a post that only Logic Pro music-software users will like.  But hey.

    I just went through an agonized decision-making process to upgrade my MBP.  I knew that the cool new CPUs were coming this summer, but then Apple threw the “Retina” curve-ball at me.  I chewed and chewed on that and finally went with the old-style machine because:

    • It’s cheaper (I saved enough to buy a Thunderbolt display)
    • It’s just as fast (how fast?  see below)
    • It’s got lotsa ports (btw, my Axiom Pro keyboard works fine on the USB 3.0 port)
    • My aging eyes don’t benefit from the Retina display
    • It’s easier to upgrade and repair

    So I finally got it home and just ran a “run Logic Pro” comparison between the 2012 MBP and the 2010 MBP and the results are exactly what I had hoped for.  Logic nicely spreads the load across all the cores in the new CPU.  The result is a MUCH cooler-running machine, whereas the 2010 MBP kicks its fans on and starts heating up right away.

    Here’s the picture that tells the tale.  Same song, at the same place, running on the two machines — these are screen-grabs of the CPU displays.  Sure, your mileage may vary.  But I’m really pleased with the first impression.  This is seeming like a machine I can really truly rely on in a live setting.


    I did make one choice that I’d avoid if I were doing it over — I’d skip the hi-resolution screen upgrade.  Yep, even on the old-style MBP you can opt for a slightly denser 1680 x 1050 display rather than the normal 1440 x 900 one.  It turns out my 60+ year old eyes struggle with the now-smaller font size.  Probably means it’s time to upgrade the trifocals.  🙂


    DSSA — DNS Security and Stability Analysis working group

    I’ve been spending a fair amount of time working on an ICANN cross-constituency working group that’s taking a look at the risks to the DNS.  Our gang just posted a major report today and I thought I’d splash this up here so I can brag about our work on Twitter and Facebook.

    That first picture is a summary of the methodology we built (we had to build a lot of stuff in order to get our work done).  It’s basically a huge compound sentence that you read from left to right in order to assess risk.  By the way, click on the pictures if they’re too small/fuzzy to read.

    This second picture shows where we, the DSSA gang, might fit in a much larger DNS security context.  We also had a lot of stuff to puzzle through about where we “fit” in the larger DNS security landscape.

    And that last picture is a super high-level summary of what we found.  There are lots more ideas and pictures in our report — but these three give you kind of a taste of what we’ve been working on.  I think it’s darn nifty.

    If you’re interested in the whole scoop, head over to the DSSA web site.  You’ll find links to the full report, a cool Excel worksheet that crams the whole methodology on to one page (complete with scoring) and more.



    Grinnell Reunion 2012 — a life of happy accidents

    I gave a talk at my Grinnell College reunion last weekend and decided to build this post to share a bunch of links to things that I talked about.  This ain’t a’gonna make any sense to the rest of you.  But the stuff is interesting.  🙂

    This is a story of rivers of geeks.  I described the rivers that I swam in during my career, but these are by no means all of the species of geeks that ultimately built the Internet.  I was lucky to be part of a gang of tens, maybe hundreds of thousands, of geeks that came together in the giant happy accident that resulted in this cool thing that we all use today.  But don’t be confused — it was a complete accident, at least for me and probably for all of us.  Here’s a diagram…


    The opening “bookend” of the talk was to introduce the idea of “retrospective sense-making,” which I first learned about from Karl Weick when I was getting my MBA at the Cornell business school.

    I talked a little bit about what it was like as an Asperger guy showing up at Grinnell in the fall of 1968 — when everything was changing.  We Asperger folks have a pretty rough time dealing with changes.  Several people spoke with me about this part of the talk later in the weekend.  The really-short version of my reply was “just give us more runway.”  Many of the geeks that built the Internet are Asperger folks.

    Another giant gaggle of geeks is the “community radio” gang that I was part of.  That part of the talk opened with a discussion of Lorenzo Milam, one of the folks who inspired many of us community-radio organizers to go out and do ridiculous impossible things.

    • These days Lorenzo hangs out at Mho and Mho Works (and Ralph Magazine)
    • He put the word “sex” in the title of his handbook about starting a community radio station, Sex and Broadcasting, just to get your attention, and this was the book that got a lot of us going

    Which led into a discussion of my involvement with the community radio movement — Tom Thomas, Terry Clifford and Bill Thomas are all still very much involved in public and community radio these days.

    Then there was a musical interlude (you cannot believe how much the music went off the rails — almost all the technology failed — oh well).

    The next series of accidents revolved around the “learn my chops in brand-name consulting organizations” part of the saga.  Another of the rivers of geeks: many of the Internet’s construction workers came from big firms like Arthur Andersen and Coopers and Lybrand, the two places I worked.  Probably the biggest things I learned there were structured programming and project management.  And this…

    The next accidents ran this Forrest Gump type guy through a couple of now long-dead mainframe companies, another BIG source of internet-building geeks.  First ETA Systems, the hapless wannabe competitor to Cray.  Then Control Data, where I learned how to do mass layoffs in an imploding manufacturing company.  Ugh.

    I was an early personal-computer enthusiast, as were almost all Internet geeks.  I live in the Midwest, so I missed out on the Homebrew Computer Club in Silicon Valley.  Dang.  But relatively cheap modems showed up about that time, which led to the rise of the Bulletin Board System (BBS) movement, which in turn provided the gathering places for a lot of us Internet geeks.  Boardwatch Magazine, published by Jack Rickard, was the glue that held us together — Jack inspired me much the same way that Lorenzo Milam did.  The arrival of FidoNet allowed email to flow beyond the local boundaries of a BBS and brought a lot of us geeks together for the first time.

    Another giant pile of Internet geeks came from the ham radio movement.  My call is KZ0C and I’m completely lame — I hardly do anything ham-radio related these days.  But a whole giant tradition of “makers” comes out of that gang.  We hams were darn early adopters of the packet networking protocols that underpin the Internet.  We turned that stuff into packet radio.

    So there’s the list of pre-Internet geek communities that I was a part of in one way or another.  No wonder some of my friends call me a Forrest Gump of Internet technology.  So what happened next?  This is what happened next…


    That’s a picture of the first four-node ARPANET network in the late 60’s.  The network grew slowly over the next couple decades and by the mid-80’s had been opened up to include institutions of higher education.  I worked at the University of Minnesota which, when I was there, was home to the Gopher protocol and the POP3 email protocol — another great gaggle of geeks.  I was a Dreaded Administrator, there to fix a financial system problem, but I loved those geeks ’cause they were the ones that turned me on to the Internet.

    The next kind of geeks that still play a huge role in the Internet are the folks who work at Internet Service Providers (ISPs).  Ralph Jenson and I started an ISP in my basement.  That project grew into an amazing gang that eventually got rolled up as the ISP market consolidated in the late 90’s and thereafter.  Lots of the geeks I’ve described in this post were involved in starting those early pioneering ISPs — what a time…

    The last geek that I mentioned in my talk is Hubert Alyea, the role-model for the Disney films about the Absent Minded Professor.  Professor Alyea was another great Asperger geek who was quite emphatic in telling me about lucky accidents, great discoveries and the prepared mind.  Click HERE to see movies of some of his lectures on Youtube — they’re astounding.

    What are Mike and Marcie obsessing about now?

    The rest of this post is a series of links to projects that I mentioned during the talk.

    The final thing I need to throw into this post is three little graphs I made up to describe the half-life of knowledge — in which I choose to view the glass as half full.  As the half-life shortens, it takes less and less time to become an expert!