Loading a Comodo free email cert into Mac OSX Mountain Lion and iOS

January 26th, 2013

The previous post was all about self-signed certs on my Mac.  Worked fine until I tried to export the cert to my iPhone.  Then I ran into the dreaded "no valid certificates" problem when trying to authorize the profile to sign and encrypt outbound mail.  My homebrew cert worked fine for enabling s/MIME on the device, but it was crippled.  So I ran off and got me a Comodo free email cert and pounded that in.

Get the cert -- using your Mac

Go HERE -- but don't use Firefox, use Safari on your Mac.  If your default browser is Firefox, copy and paste this link into Safari.  You'll thank me later.  The process appears to complete in Firefox, but it doesn't install the cert in a way that Mail can actually use.  Their download is highly automated and there's breakage along the way.

Follow the steps on the Comodo site and keep your fingers crossed, by the end of the normal process the cert will be correctly installed.  Be not dismayed if the site stalls on the "attempting to collect and install your free certificate" step.  Go look in Downloads -- it's there, with a name like Collectccc.p7s.  Double-click that file and you will find that the Keychain Access app will pop up and start prompting for the password you created when you configured the cert at Comodo.

Configuring Mail to use the cert, on the Mac

Nothing to it.  If the cert has been properly loaded, all you should have to do is restart Mail and the signing and encrypting buttons should show up when you launch a new email message.  Note that they're toggles -- so you'll want to pay attention to what state they're in.  Otherwise you'll be signing or encrypting all your mail which may make your recipients a little crazy.

Configuring iOS to use the cert

I sure hope this post never goes away.  That's what I used to learn how to load the cert on my iPhone.  I'm going to put a shorthand version here, just to preserve it (since I'm going to need to repeat this every year when I renew the cert).

Find the Comodo cert in the Keychain Access app.

Export it in Personal Information Exchange (.p12) format.  Pay attention to the password you put on the export file, you'll need it on the other end.
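If you'd rather script this step (handy for the yearly renewal), the macOS security tool can do the same export from Terminal.  This is a sketch: the keychain path, the output filename, and the "changeme" password are placeholders, and on newer systems the keychain file may be named login.keychain-db.

```shell
# Export identities (cert + private key) from the login keychain as a
# PKCS#12 bundle.  "changeme" is a placeholder password -- iOS will ask
# for it again when you open the attachment.
security export -k ~/Library/Keychains/login.keychain \
  -t identities -f pkcs12 -P changeme -o comodo-cert.p12
```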

Email the exported cert (drag it into a Mail message to yourself) to the iOS device that's using the same email address as your Mac.

Open the attached cert on the iOS device and blast through the "Unsigned Profile" warning.  This is where that password will come in handy.

Enable s/MIME on the phone (Settings/Mail, Contacts, Calendars/<your email account>/Advanced) and check to make sure that the signing and encrypting options actually find your cert.  Then take care to back up a layer and tap "Done" to actually write the change to the account.

Note: you may want to leave the "signing" and "encrypting" options turned off most of the time.  Otherwise all of your outbound mail will be signed with your private key, and anybody who you've "trusted" will get encrypted mail.  On the other hand, maybe that's not such a bad arrangement?  I don't know.


Notes: adding and using a self-signed s/MIME email certificate to OSX Mail in Mountain Lion

January 26th, 2013

This is just a scratchpad post to remind myself what I did to get a self-signed cert into Mail under OSX Mountain Lion.

This first post is all about using a self-generated cert -- which will work fine unless you ALSO want to use it on an iOS device.  In which case, skip to the NEXT post, where I cracked the code of getting a Comodo cert installed on my Mac and my iPhone.  Sheesh, this is harder than it needs to be.

Generating a self-signed certificate

Click HERE to read the post that laid out the step by step process I followed to create that self-signed cert.  That post goes through the openssl commands to do the deed.  The instructions are written for a Windows user so I've rewritten them for a Mountain Lion user.

  • Note: openssl is already installed on Mountain Lion, so you shouldn't need to do any installation
  • make sure to create the cert with the email address you are using in Mail.  In addition, I used that email address as the answer to the "common name" request during the prompting that happens in the Certificate Request part of the process (Steps 2 and 3 below).  I'm not sure that's required, but it's part of the formula that worked for me.

Here are the command-line commands (mostly lifted from the blog post)

1.    Generate an RSA Private Key in PEM format

Type openssl (one time, just to drop into the openssl environment), then enter:



genrsa -out my_key.key 2048


my_key.key  is the desired filename for the private key file
2048  is the desired key length of either 1024, 2048, or 4096

2.    Generate a Certificate Signing Request:


req -new -key my_key.key -out my_request.csr


my_key.key is the input filename of the previously generated private key
my_request.csr  is the output filename of the certificate signing request

3.    Follow the on-screen prompts for the required certificate request information.

4.    Generate a self-signed public certificate based on the request.


x509 -req -days 3650 -in my_request.csr -signkey my_key.key -out my_cert.crt


my_request.csr  is the input filename of the certificate signing request
my_key.key is the input filename of the previously generated private key
my_cert.crt  is the output filename of the public certificate
3650 is the duration of validity of the certificate, in days. In this case, it is 10 years (10 x 365 days)
x509 is the X.509 Certificate Standard that we normally use in S/MIME communication

This essentially signs your own public certificate with your own private key. In this process, you are now acting as the CA yourself!
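At this point it's worth a quick sanity check that the cert actually carries your email address and the ten-year validity window.  Using the filenames from the steps above:

```shell
# Print the subject and validity window of the freshly-signed cert.
# Expect your email address in the subject line and a notAfter date
# roughly ten years out.
openssl x509 -in my_cert.crt -noout -subject -dates
```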

5.    Generate a PKCS#12 file:


pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx -name "my-name"


my_cert.crt  is the input filename of the public certificate, in PEM format
my_key.key  is the input filename of the private key
my_pkcs12.pfx  is the output filename of the pkcs#12 format file
my-name  is the desired name that will sometimes be displayed in user interfaces.

6.    (Optional) You can delete the certificate signing request (.csr) file and the private key (.key) file.

7.    Now you can import your PKCS#12 file to your favorite email client, such as Microsoft Outlook or Thunderbird. You can now sign an email you send out using your own generated private key. For the public certificate (.crt) file, you can send this to others when requesting them to send an encrypted message to you.
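Since I'll be repeating this whole dance when the cert expires, here's the sequence as one non-interactive script run straight from the shell (each command prefixed with openssl instead of using the interactive environment).  The email address, the -subj string that stands in for the Step 3 prompts, and the "changeme" export password are all placeholders:

```shell
#!/bin/sh
# Steps 1-5 in one shot.  Replace EMAIL and the pass:changeme export
# password with your own values before running.
EMAIL="me@example.com"

# 1. RSA private key
openssl genrsa -out my_key.key 2048

# 2-3. Certificate signing request, answering the prompts via -subj
openssl req -new -key my_key.key -out my_request.csr \
  -subj "/CN=$EMAIL/emailAddress=$EMAIL"

# 4. Self-signed public certificate, good for ten years
openssl x509 -req -days 3650 -in my_request.csr \
  -signkey my_key.key -out my_cert.crt

# 5. PKCS#12 bundle for import into Keychain Access
openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export \
  -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx \
  -name "my-name" -passout pass:changeme
```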

Importing a self-signed certificate into the OSX Keychain Access application

I double-clicked the .pfx (PKCS) file that I'd just created.  That fired up the Keychain Access app and loaded it into the keychain.   I told it to trust the cert when it asked about that.
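The double-click can also be done from Terminal with the macOS security tool, in case you ever want to script the whole renewal.  A sketch: the keychain path and password are placeholders (use the export password from step 5), and newer systems may name the file login.keychain-db.

```shell
# Import the PKCS#12 bundle (cert + private key) into the login
# keychain.  -P is the export password set when the .pfx was created.
security import my_pkcs12.pfx -k ~/Library/Keychains/login.keychain \
  -f pkcs12 -P changeme
```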

Getting OSX Mountain Lion Mail to recognize the self-signed certificate

Part of what derailed me in this process was that the transition from Lion to Mountain Lion eliminated the account-setup option to select a cert.  It's automatic now.  So if the email address that's in the cert matches the email address of the account, the s/MIME capability simply appears when composing a new message.  But in order for this to work, there's one step needed in order to pull the cert in:

  • restart the Mail app



Test shots from my new camera (Sony HX30V)

January 24th, 2013


Hi all.  This is partly a test of the new photo-uploader in WordPress 3.5 -- along with a chance to show off pictures from the new camera.  Click through the photos if you want to pixel-peep the ginormous originals.

This first one is a test of the panorama feature...  This shot is looking from the point behind the house down Pat's Prairie towards Highway 88 (south).  The nice thing is how it's picking up the two valleys that form the point I'm standing on.

Indian Grass Panorama


This next one is an HDR shot, just using the HDR in the camera instead of shooting 3 bracketed shots and then post-processing them in Photomatix Pro.

Marcie's workstation


This one is a macro shot, taken in really low light, of a gizmo sitting on my desk last night.  I did nothing -- this is Superior Auto (super-idiot) mode.

Stupid macro


This next one is completely silly macro.  Push camera up against the screen of the computer and pull the trigger.  I don't think there really is a "minimum distance" for macro focus.

Silly macro


The rest of this series is just "stuff that sits behind my desk" in town.  All these shots were taken at night with available light, in Superior Idiot (er, Auto) mode.

This is one of Dad's sewing-thread weavings.  It's double-weave, which means both pieces of cloth are woven at the same time.  REALLY small threads, peepul.

Paul O'Connor bead double weave


A cute little turtle that Mom found somewhere in her travels.

Granny Pat's turtle


And an elephant...

Granny Pat's elephant


And two little folks.  This shot was taken from across the room -- they're about half an inch tall.  This is a good example of 20x zoom at work.

Granny Pat's couple


Same deal here -- shot from across the room.  The bell is a couple inches high.

Granny Pat's bell


Another across the room shot...

Granny Pat's paper Xmas tree


The Flying Spaghetti Monster, who has unfortunately lost an eye...

Flying Spaghetti Monster


Ah HA!  Houston, we have the "unexpected rotation" problem.  This picture of folding paper art is rotated 90 degrees to the left.

Folding paper art


This next one is a really memory-laden picture by an old friend of my parents.

Ried Hastie painting


Mom made baskets for us kids -- with pithy phrases worked in the bottoms of them.  This is one of my favorites -- "Reach into life"

Granny Pat basket - Reach Into Life



Logic Pro runs lots better on my new MacBook Pro

July 13th, 2012

This is mostly a post that only Logic Pro music-software users will like.  But hey.

I just went through an agonized decision-making process to upgrade my MBP.  I knew that the cool new CPUs were coming this summer, but then Apple threw the "Retina" curve-ball at me.  I chewed and chewed on that and finally went with the old-style machine because:

  • It's cheaper (I saved enough to buy a Thunderbolt display)
  • It's just as fast (how fast?  see below)
  • It's got lotsa ports (btw, my Axiom Pro keyboard works fine on the USB 3.0 port)
  • My aging eyes don't benefit from the Retina display
  • It's easier to upgrade and repair

So I finally got it home and just ran a "run Logic Pro" comparison between the 2012 MBP and the 2010 MBP and the results are exactly what I had hoped for.  Logic nicely spreads the load across all the cores in the new CPU.  The result is a MUCH cooler-running machine, whereas the 2010 MBP kicks its fans on and starts heating up right away.

Here's the picture that tells the tale.  Same song, at the same place, running on the two machines -- these are screen-grabs of the CPU displays.  Sure, your mileage may vary.  But I'm really pleased with the first impression.  This is seeming like a machine I can really truly rely on in a live setting.


I did make one choice that I'd avoid if I were doing it over -- I'd skip the hi-resolution screen upgrade.  Yep, even on the old-style MBP you can opt for a slightly denser 1680 x 1050 display rather than the normal 1440 x 900 one.  It turns out my 60+ year old eyes struggle with the now-smaller font size.  Probably means it's time to upgrade the trifocals.  :-)


DSSA — DNS Security and Stability Analysis working group

June 11th, 2012

I've been spending a fair amount of time working on an ICANN cross-constituency working group that's taking a look at the risks to the DNS.  Our gang just posted a major report today and I thought I'd splash this up here so I can brag about our work on Twitter and Facebook.

That first picture is a summary of the methodology we built (we had to build a lot of stuff in order to get our work done).  It's basically a huge compound sentence that you read from left to right in order to assess risk.  By the way, click on the pictures if they're too small/fuzzy to read.

This second picture shows where we, the DSSA gang, might fit in a much larger DNS security context.  We also had a lot of stuff to puzzle through about where we "fit" in the larger DNS security landscape.

And that last picture is a super high-level summary of what we found.  There's lots more ideas and pictures in our report -- but these three give you kindof a taste of what we've been working on.  I think it's darn nifty.

If you're interested in the whole scoop, head over to the DSSA web site.  You'll find links to the full report, a cool Excel worksheet that crams the whole methodology on to one page (complete with scoring) and more.



Grinnell Reunion 2012 — a life of happy accidents

June 6th, 2012

I gave a talk at my Grinnell College reunion last weekend and decided to build this post to share a bunch of links to things that I talked about.  This ain't a'gonna make any sense to the rest of you.  But the stuff is interesting.  :-)

This is a story of rivers of geeks.  I described the rivers that I swam in during my career, but these are by no means all of the species of geeks that ultimately built the Internet.  I was lucky to be a part of a gang of 10's maybe 100's of thousands of geeks that came together in the giant happy accident that resulted in this cool thing that we all use today.  But don't be confused -- it was a complete accident, at least for me and probably for all of us.  Here's a diagram...


The opening "bookend" of the talk was to introduce the idea of "retrospective sense-making" which I first learned about from Karl Weick when I was getting my MBA at the Cornell business school.

I talked a little bit about what it was like as an Asperger guy showing up at Grinnell in the fall of 1968 -- when everything was changing.  We Asperger folks have a pretty rough time dealing with changes.  Several people spoke with me about this part of the talk later in the weekend.  The really-short version of my reply was "just give us more runway."  Many of the geeks that built the Internet are Asperger folks.

Another giant gaggle of geeks is the "community radio" gang that I was part of.  That part of the talk opened with a discussion of Lorenzo Milam, one of the folks who inspired many of us community-radio organizers to go out and do ridiculous impossible things.

  • These days Lorenzo hangs out at Mho and Mho Works (and Ralph Magazine)
  • He put the word "sex" in the title of his handbook about starting a community radio station, Sex and Broadcasting, just to get your attention -- and this was the book that got a lot of us going

Which led into a discussion of my involvement with the community radio movement -- Tom Thomas, Terry Clifford and Bill Thomas are all still very much involved in public and community radio these days.

Then there was a musical interlude (you cannot believe how much the music went off the rails -- almost all the technology failed -- oh well).

The next series of accidents revolved around the "learn my chops in brand-name consulting organizations" part of the saga.  Another of the rivers of geeks -- many of the Internet construction workers came from big firms like Arthur Andersen and Coopers and Lybrand, the two places I worked.  Probably the biggest things I learned there were Structured Programming and project management.  And this...

The next accidents ran this Forrest Gump type guy through a couple of now long-dead mainframe companies, another BIG source of internet-building geeks.  First ETA Systems, the hapless wannabe competitor to Cray.  Then Control Data, where I learned how to do mass layoffs in an imploding manufacturing company.  Ugh.

I was an early personal computer enthusiast as were almost all Internet geeks.  I live in the Midwest, so I missed out on the Homebrew Computer Club in Silicon Valley.  Dang.  But relatively cheap modems showed up about that time which led to the rise of the Bulletin Board System (BBS) movement which provided the gathering places for a lot of us Internet geeks. Boardwatch Magazine, published by Jack Rickard, was the glue that held us together -- Jack inspired me much the same way that Lorenzo Milam did.  The arrival of FidoNet allowed email to flow beyond the local boundaries of a BBS and brought a lot of us geeks together for the first time.

Another giant pile of Internet geeks came from the ham radio movement.  My call is KZ0C and I'm completely lame -- I hardly do anything ham radio related these days.  But a whole giant tradition of "makers" comes out of that gang.  We hams were darn early adopters of the packet networking protocols that underpin the Internet.  We turned that stuff into packet radio.

So there's the list of pre-Internet geek communities that I was a part of in one way or another.  No wonder some of my friends call me a Forrest Gump of Internet technology.  So what happened next?  This is what happened next...


That's a picture of the first four-node ARPANET network in the late 60's.  The network grew slowly over the next couple decades and by the mid-80's had been opened up to include institutions of higher education.  I worked at the University of Minnesota which, when I was there, was home to the Gopher protocol and the POP3 email protocol -- another great gaggle of geeks.  I was a Dreaded Administrator, there to fix a financial system problem, but I loved those geeks 'cause they were the ones that turned me on to the Internet.

The next kind of geeks that still play a huge role in the Internet are the folks that work at Internet Service Providers (ISPs).  Ralph Jenson and I started an ISP in my basement and called it gofast.net.  That project grew into an amazing gang that eventually got rolled up as the ISP market consolidated in the late 90's and thereafter.  Lots of the geeks I've described in this post were involved in starting those early pioneering ISPs -- what a time...

The last geek that I mentioned in my talk is Hubert Alyea, the role-model for the Disney films about the Absent Minded Professor.  Professor Alyea was another great Asperger geek who was quite emphatic in telling me about lucky accidents, great discoveries and the prepared mind.  Click HERE to see movies of some of his lectures on Youtube -- they're astounding.

What are Mike and Marcie obsessing about now?

The rest of this post is a series of links to projects that I mentioned during the talk.

The final thing I need to throw into this post is three little graphs I made up to describe the half life of knowledge -- in which I choose to view the glass as half full.  As the half-life shortens, it takes less and less time to become an expert!


Frac sand mining

March 4th, 2012

Pore old Haven2.  'lil old blog's being neglected.  I was going to blog about frac sand mining here but the issue kinda exploded into such a big deal that it needed a URL all its own and I forgot to cross-post a link to the new site back here at the ranch.

So here's a link for those of you that follow me on this blog.  Sorry about that.  Things got a little crazy there for a while and I'm just now circling back to do the housekeeping.


this is a teaser from the front page over there...

Here’s an overview of the concerns that have been raised about frac sand mining.  We’ll glue this to the front page, update it as we go and write more-detailed posts on specific topics.  But here’s a list to summarize things;

  • Community — The old quote “if we don’t hang together, we’ll hang separately” comes to mind.  Will trusted leaders emerge to get us through this?
  • Economic development — Who is thinking through the tradeoff between new jobs versus old, short-term jobs versus long-term ones, money that stays in the region versus fortunes made elsewhere at our expense?
  • Environmental — Is anybody keeping an eye on air emissions and pollution, impact on groundwater, loss of natural and agricultural land, impact on forest projects, nuisance noise and dust, etc.?
  • Health — Who’s minding the impacts of silica dust in the air and processing-plant chemicals in water supplies?
  • Infrastructure — Who will pay for road repairs and will those commitments be honored?
  • Leadership — Town and County officials are used to deciding issues like where to site a farmer’s barn.  Are they ready to handle the onslaught of a sophisticated billion-dollar industry?
  • Prices — Are local sand-producers getting a fair price for their sand, or are they getting ripped off too — by slick mining-company representatives?  Do you know what that sand is worth?
  • Property values — Who’s keeping an eye out for the innocent bystanders who can’t escape the blight because their savings are tied up in nearby land?
  • Regional development — This isn’t a one-county conversation or even a one-state conversation.  Who’s reaching out to make sure that one unprepared county doesn’t become the easy “target of opportunity”?
  • Restoration – How does the land get repaired once the sand is gone and designed-to-disappear local mining-companies have vanished?
  • Road Safety — What’s the impact of 100′s or even 1000′s of heavy trucks (on a tight schedule) running across our sub-par roads?  Who’s going to be accountable when a schoolbus full of kids gets in an accident with a runaway sand truck on one of our dugway roads?
  • Transparency — Who is publishing good, fair, accurate, non-confrontational information about what’s going on?  Is “good information” going to be available only for the insiders at the expense of all others?  Rumors breed in the dark.


Adding capabilities to Mac OS X Lion Server

December 18th, 2011


I never converted to Lion Server.  You can sortof see things unraveling in the middle of this post.  I'm taking another run at it now that Mountain Lion Server (now renamed back to OSX Server) is getting stable.  I sympathize with what Apple is trying to do.  If you're kindof the power user in the office, the newer version of Server is much better for you.  But for those of us who were using the server to do slightly more complicated stuff, it's been a long hard road.

I'll write another post pretty soon that summarizes how I put stuff back into Mountain Lion Server.  It's still not easy, but it's going better -- at least so far.  For now, just ignore the rest of this post.  It's out of date, and it didn't result in a working server.



This is another "scratchpad" post as I make the transition from Snow Leopard to Lion on my little family cloud server.  Here's why the struggle is worth it for me;

  • Staying with the current release means Apple is updating my platform, which in turn means...
  • Better security/stability
  • Better compatibility with the iGadgets
  • Ease of use

The design philosophy for Server changed just a bit from Snow Leopard to Lion.  Lion Server is built on pretty much the same foundation, but the user-interface has been dramatically thinned out with the aim of making Server something that regular people could use.  I get that, and think it's a rational decision by Apple.  I was astounded to learn however that I'm in the "advanced user" category and lost some capabilities when this happened.  Who'da thunk it??  :-)

So I've got to go looking for ways to "put back" some of the things I use the server for.  My goal is to either find work-arounds within Lion Server or find bits and pieces of software that I can run on top of Lion to do those things.

This post will be the place where I post my findings -- both about installing and configuring Lion, and solving the little work-around problems.  Should be fun.

Installation puzzlers

Running Lion in a VMWare virtual machine

Turns out that VMWare 4 brought in support for running instances of Lion in a virtual machine.  Kewl!  So I ran off and bought Lion Onna Stick (USB flash drive) from Apple, plugged it into my MacBook Pro, pointed VMWare Fusion at it, accepted the defaults, took a nap and when I came back I had me a Lion machine running on top of Snow Leopard.  Things to do differently from just accepting defaults;

  • Give the VM at least two cores in the CPU (runs a lot better -- I may bump it to four the next time around).  Once Server is installed, my little Lion VM runs just dandy on the 2009 MacBook Pro -- consumes about 5% of the CPU when idle.  Sweet.
  • When building Lion (not server, just Lion) pick a user/computer name that's not a real personal type name -- I ran into conflicts with my personal name in Open Directory because I'd already used it for the core Lion account.
  • Pay attention to networking -- you'll be using the Ethernet adapter a lot more rigorously than the default NAT configuration in VMWare -- I set mine to go directly to the gateway router rather than using the default virtual-NAT.
  • Since we're configuring the basis for a server here (especially if you want it to run Open Directory), this is the best time to get the DNS stuff sorted out.  I waited until later the first few times and the Server install vacuumed up a bunch of wrong-settings as a result.  I think I'll do a little "Networking and DNS" section about all that.  Open Directory's auto-configuration/startup process will break badly if DNS isn't set up right.  I never figured out how to fix it after the fact -- clean install with proper DNS was my path to success.
  • Take lots of snapshots of the VM.  The basic Lion install was pretty clean (except for the wrong-DNS stuff, see below), but I had to fall back to it several times before I got Server settled in properly (especially Open Directory).  The nice thing is that the App Store was quite happy to let me re-download the Server stuff and re-install it once I'd bought it.  I don't know if there's a limit, but I've re-installed Server on top of my clean Lion at least five times so far.  The word "Doh!" covers the reasons-why pretty well.

Networking and DNS for Lion Server

One of the things that really caught me was installing Lion Server behind an at-home gateway router.  In the past I'd always used a data-center router as the gateway, and DNS was a no-brainer -- just set up an A Record pointing at the server in DNS and go.  But home routers have a different job to do and those differences got pulled into the configuration of the server in ways that I wasn't expecting.  Here are lessons-learned.

  • I'd never paid attention to the network name of my home router because in normal circumstances it doesn't matter.  But since I am now using it as a gateway out to the "real" internet, it does.
  • My router thought it was in the "lan" domain -- which is fine for a NAT-providing home router.  The trouble came when Lion Server pulled that domain into the name of the server when it talked to Lion during install.  Lion had in turn pulled in that "lan" domain through DHCP during install and built the computer-name with it (Mikes-Mac.lan or somesuch).  Again, this normally doesn't matter, but that's not a good name for a machine that is going to be put out on the public Internet.
  • My solution was to pound the real domain into the home router (CloudMikey.com in my case) before building Lion (yes Lion -- don't wait for the Server install -- many headaches avoided).  That way all the computer-name bits and bobbins will have a real internet-routeable name instead of a non-routeable name.
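A couple of quick Terminal checks can confirm the fix took, before Server gets installed.  This is a sketch: "server.cloudmikey.com" stands in for your machine's real name, and en0 for your active network interface.

```shell
# The name the machine thinks it has -- should be the real, routeable
# name, not something ending in ".lan"
scutil --get HostName

# Forward DNS should resolve that name to this machine's address,
# and the reverse lookup should agree
host server.cloudmikey.com
host "$(ipconfig getifaddr en0)"
```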

Replacing Functionality

The good news about Lion Server is that it's built on the same platform as all the earlier versions of Server.  The bad news is that the user interface has been redesigned with a different user in mind.  Not complaining, I get why they did this and it makes sense to me.  But I need to hunt around a bit to "add back" some of the tools that disappeared.  Here's where I'll take notes about that -- my first pass will be based on scouring the Apple discussion-list for Lion Server and then I'll see where I go from there.

Mail -- Mail-forwarding and email-group accounts

My use of the mail server is pretty standard, but I have a few accounts which forward mail to a different address (mostly family members that retrieve their mail from their ISP's server but want a consistent email address, or multiple people instead of just one).  I used the "Mail" tab in Workgroup Manager to do this on Snow Leopard, but that tab is missing in the Lion version of Workgroup Manager.

  • In Lion -- build a filter using the webmail interface.  Once the account has been set up in the Workgroup Manager, log into the account with webmail and add filters that redirect messages to the downstream addresses.  One filter per address (rather than multiple addresses, separated by commas).  There's a limit of 4 destinations per account, which is fine for me -- most of mine are single-destination forwarding accounts.  There's a hack to expand that 4-destination limitation but I haven't had to use it.

Mail -- Hosting multiple domains for email

I use several domains for email.  Under Snow Leopard I would add them as either Local Host Aliases or Virtual Domains in the Mail/Advanced/Hosting tab of Server Admin.  Doh!  They're still there in the new version.  I was looking at Server rather than Server Admin.  Silly me.

Mail -- Email aliases

These work the same as before -- Workgroup Manager.

Web -- SSL on sites

Initial post:

SSL encryption is pretty important to me, especially on web-based versions of wiki, mail, calendar, contacts, etc.  Don't want people logging into those over an unencrypted connection, thank you very much.  So we gotta turn SSL on for some sites, but not all.

Argh.  I struggled with this for far too long.  Did all kinds of fooling around with the files in the Apache "sites" folder, only to watch them get overwritten by Server each time I restarted it.  Worked all the way into the "readme" file in the Apache folder, on and on.  Terrible pain in the neck.  Nothing worked.

Then I discovered the "Help" system in the Server app (not Server Admin, although the help system is fine there too).  SSL for virtual sites is done in a different place -- which Help told me.  Bah.  Went to the "Hardware/Server/Settings/SSL Certificate/Edit" menu, picked a certificate for the virtual site (and maybe restarted the web service) and it was set.  Does exactly the right thing too -- when somebody goes to an SSL-enabled virtual site, they're automatically redirected to the SSL version.
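Once a cert is assigned, you can verify from the outside that the virtual site is really serving it.  A sketch, with wiki.example.com standing in for one of your SSL-enabled virtual sites:

```shell
# Handshake with the SSL-enabled virtual site and print the cert it
# actually serves.  -servername matters when several virtual sites
# share one IP address (SNI).
echo | openssl s_client -connect wiki.example.com:443 \
  -servername wiki.example.com 2>/dev/null \
  | openssl x509 -noout -subject -dates
```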


Unfortunately, this returns to the "open issue, broken" status.  I've managed to wedge the Server app so that there are three states:

  • State 1 -- everything turned off in the Server app including "web"
      • httpd daemon is running (sites respond to external requests, but with the /var/empty folder)
      • no functionality
      • relatively quiet logs (sample: Jan  9 01:05:32--Jan  9 05:05:31)
      • something odd going on with MySQL (probably unrelated):
        Jan  9 01:06:29 server SubmitDiagInfo[4016]: Submitted shutdown stall report: file://localhost/Library/Logs/DiagnosticReports/ipfwloggerd,mysqld,sh_2012-01-01-080056_localhost.shutdownStall
      • something odd going on with xscertd (once an hour):
        1/9/12 6:05:24.632 AM sandboxd: ([6369]) xscertd(6369) deny job-creation
  • State 2 -- "web" turned on, but NO SSL certificates assigned
      • httpd daemon is running (sites respond to external requests, but with the /var/empty folder)
      • no functionality
      • quiet logs -- check logs around 6:52:28 AM for startup messages.  Here are interesting ones:

1/9/12 6:52:28.713 AM xscertd: Starting xscertd/1.0.0 (MacOS X Server)
1/9/12 6:52:28.721 AM sandboxd: ([6723]) xscertd(6723) deny job-creation
1/9/12 6:52:31.176 AM servermgrd: servermgr_web: waiting for pid, file /private/var/run/httpd.pid.

  • State 3 -- "web" turned on AND an SSL certificate is assigned
  • httpd daemon is NOT running (browser returns "problem loading page" and "unable to connect" errors)
  • To get to this state -- 1) shut down "web" in Server.app at 7:00:08 2) assign cert at 7:01:16 3) restart "web" at 7:03:46 4) shut off "web" again at 7:29:19 5) removed cert at 7:30:43
  • Here's an extract of the interesting log messages.  Shut down "web" in Server app at 7:00:08:
    Jan  9 07:00:08 server sandboxd[6807] ([6806]): xscertd(6806) deny job-creation
    Jan  9 07:00:09 server servermgrd[808]: servermgr_web: Disabling port forwarding for port 80
    Jan  9 07:00:11 server servermgrd[808]: servermgr_web: waiting for pid, file /private/var/run/httpd.pid.
    Jan  9 07:00:12 server servermgrd[808]: servermgr_web: Enabling port forwarding for port 80
    Jan  9 07:01:10 server CoreCollaborationServer[6852]: [main.m:103 40a280 +0ms] HTTP server listening at loopback:4444
    Jan  9 07:01:10 server com.apple.collabd[6852]: Jan  9 07:01:10 server.cloudmikey.com CoreCollaborationServer[6852] <Warning>: [main.m:103 40a280 +0ms] HTTP server listening at loopback:4444
    Jan  9 07:01:10 server com.apple.launchd[1] (com.apple.collabd[6852]): Tried to setup shared memory more than once
    Jan  9 07:01:10 server wikiadmin[6858]: Updating schema...
    Jan  9 07:01:10 server com.apple.collabd[6852]: 2012-01-09 07:01:10.231 wikiadmin[6858:307] Updating schema...
    Jan  9 07:01:10 server wikiadmin[6858]: Schema updates completed.
    Jan  9 07:01:10 server com.apple.collabd[6852]: 2012-01-09 07:01:10.235 wikiadmin[6858:307] Schema updates completed.
    Jan  9 07:01:15 server servermgrd[808]: servermgr_notification[I]: External configuration change detected, re-loading: c2s.xml
    Jan  9 07:01:15 server servermgrd[808]: servermgr_notification[I]: External configuration change detected, re-loading:
    Jan  9 07:01:17 server com.apple.launchd[1] (org.apache.httpd[6892]): Exited with code: 1
    Jan  9 07:01:17 server com.apple.launchd[1] (org.apache.httpd): Throttling respawn: Will start in 10 seconds
    Jan  9 07:01:17 server servermgrd[808]: servermgr_notification[N]: jabberd service startup completed.
    Jan  9 07:01:18 server jabberd_notification/router[6886]: [, port=57627] connect
    Jan  9 07:01:18 server com.apple.APNBridge[6901]: http server appears to have started
    Jan  9 07:01:18 server com.apple.APNBridge[6901]: Connected to XMPP server
    Jan  9 07:01:18 server jabberd_notification/router[6886]: [, port=57627] authenticated as apn.server.cloudmikey.com
    Jan  9 07:01:18 server jabberd_notification/router[6886]: [apn.server.cloudmikey.com] online (bound to, port 57627)
    Jan  9 07:01:18 server jabberd_notification/router[6886]: [, port=57628] connect
    Jan  9 07:01:18 server jabberd_notification/router[6886]: [, port=57628] authenticated as pubsub.server.cloudmikey.com
    Jan  9 07:01:18 server jabberd_notification/router[6886]: [pubsub.server.cloudmikey.com] online (bound to, port 57628)
  • restart "web" at 7:03:46:
    Jan  9 07:03:09 server xscertd-helper[6808]: idle timer triggered, exiting
    Jan  9 07:03:46 server servermgrd[808]: servermgr_web: enabling
    Jan  9 07:03:48 server sandboxd[6979] ([6978]): xscertd(6978) deny job-creation
    Jan  9 07:03:49 server servermgrd[808]: servermgr_web: Disabling port forwarding for port 443
    Jan  9 07:03:50 server servermgrd[808]: servermgr_web: waiting for pid, file /private/var/run/httpd.pid.
    Jan  9 07:03:55: --- last message repeated 3 times ---
    Jan  9 07:03:55 server servermgrd[808]: servermgr_web: Enabling port forwarding for port 443
    Jan  9 07:03:55 server servermgrd[808]: servermgr_web: Cannot confirm Apache was started; missing or invalid pid file
    Jan  9 07:07:25 server xscertd-helper[6980]: idle timer triggered, exiting
  • shut off "web" again at 7:29:19:
    Jan  9 07:29:19 server servermgrd[808]: servermgr_web: Disabling port forwarding for port 443
    Jan  9 07:29:20 server servermgrd[808]: servermgr_web: waiting for pid, file /private/var/run/httpd.pid.
    Jan  9 07:29:20 server com.apple.launchd[1] (org.apache.httpd[7792]): Exited with code: 1
    Jan  9 07:29:20 server com.apple.launchd[1] (org.apache.httpd): Throttling respawn: Will start in 10 seconds
    Jan  9 07:29:21 server servermgrd[808]: servermgr_web: waiting for pid, file /private/var/run/httpd.pid.
    Jan  9 07:29:25: --- last message repeated 3 times ---
    Jan  9 07:29:25 server servermgrd[808]: servermgr_web: Enabling port forwarding for port 443
    Jan  9 07:29:25 server servermgrd[808]: servermgr_web: Cannot confirm Apache was started; missing or invalid pid file
  • removed cert at 7:30:43:
    1/9/12 6:52:37.981 AM com.apple.SecurityServer: setupThread failed rcode=-2147418111

UPDATE 12-Jan:

The road to recovery.  I spoke with Apple Support and worked my way up to a Tier-2 support person who helped me out a lot.  He gave me a bunch of great pointers which I'll post here as I use them.  He was very careful to point out that some of this is for experienced folks only, your mileage may vary, if you break it you bought it and some of this may result in something that's so broken that it falls outside the normal free telephone support.  Be careful!

The problem seems to be caused by the way I set the server up.  Y'see, I built the server at the farm and then moved it to the data center.  So the IP address changed.  That IP address gets "baked in" to a bunch of things, and especially the SSL certificate that gets created when the server is first configured.  Moving the server to a new IP-address puts it out of sync with the information in the certificate and that's very likely what's causing the problem.

Step 1 -- Set the Web server back to defaults.

Here's a link to the page in the Advanced Administration guide for Lion Server -- https://help.apple.com/advancedserveradmin/mac/10.7/#apd163efc3a-1465-4a44-ad2d-c76094144512

My sequence of steps was this;

  • Toggled off all the services in the Server application and turned off the SSL cert
  • Ran "sudo serveradmin command web:command=restoreFactorySettings" (omit the quotes) repeatedly, while at the same time watching the logs in Server.  The command failed several times because it couldn't find copies of various default versions of config files in the /var/apache2/sites/ folder.  Fortunately, I have backup copies of those files, so I just replaced them one at a time until the command ran to the end successfully.

Step 2 -- Create a new SSL cert

  • Created a new SSL certificate in the Server application (Hardware/YourServerName/Settings/"Edit" SSL certificate/select the "gears" dropdown/select "manage certificates"/click the "+" button to add a new certificate/select "create a certificate identity"/accept the defaults)

Step 3 -- Cycle the server and cross fingers

  • Rebooted the server
  • Waited for the logs to quiet down
  • Started the Web service and watched it create its config files in the apache2/sites folder -- logs were still quiet
  • Assigned the newly-created SSL cert (I wish I could delete the old one but I can't) -- logs are still quiet
  • Turned on the Wiki service -- logs are still quiet
  • So far so good!  I think I'll leave things like this for a while before adding back the other services and the custom web sites.  More updates to follow.

Web -- MySQL

Lion switched from MySQL to Postgres (rumbles of Oracle lawsuits, no doubt) so I've got to start running a "real" version of MySQL so that all the little WordPress sites continue to function.

  • Hm.  MySQL only supports OS X through Snow Leopard -- looks like we're kinda out here on our own.  <shrug, what could go wrong?>
  • Downloads are here  - http://dev.mysql.com/downloads/mysql/ (scroll down to the DMG file -- way easier install)
  • Installation instructions are here - http://dev.mysql.com/doc/mysql-macosx-excerpt/5.5/en/macosx-installation.html
  • Documentation is here - http://dev.mysql.com/doc/index.html (haven't used it yet)
  • PHP needs to be tweaked - https://support.apple.com/kb/HT4844 (I only did the "change-sockets to /tmp/mysql.sock" thingy)
  • Installed Sequel Pro (http://www.sequelpro.com/) and tested the installation by creating and dropping a database.
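The PHP socket tweak from that Apple note boils down to a few lines in php.ini.  The three directive names below are the standard PHP ones; the exact file location and which of the three your install actually uses may vary, so check against your own php.ini:

```ini
; point PHP's MySQL extensions at the socket the new MySQL actually creates
mysql.default_socket = /tmp/mysql.sock
mysqli.default_socket = /tmp/mysql.sock
pdo_mysql.default_socket = /tmp/mysql.sock
```

Restart the web service after saving so PHP picks up the change.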

Web - loading up a WordPress site

Let's see how much of the Lion stuff I can use...

  • Point a domain at the server (an A record in DNS)
  • Create a new site in the Server app (using the same domain name)
  • Copy in WordPress files (download them from http://www.wordpress.org)
  • Give ownership to the _www user (cd into the folder *above* your site's folder and type "sudo chown _www your-site's-foldername" in Terminal)
  • Propagate ownership to all files in the folder (Finder/Get Info/Unlock/Permissions/Apply to enclosed items)
  • Create a database (I use Sequel Pro -- create an empty database and a user that has full rights to the database)
  • Create the wp-config.php that points at the database
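For reference, the database half of wp-config.php is just four defines.  The names and password here are placeholders -- use the database and user created in the previous step, and copy the rest of the file from WordPress's wp-config-sample.php:

```php
<?php
// wp-config.php -- database settings only (everything else per wp-config-sample.php)
define('DB_NAME',     'example_db');     // the empty database created above
define('DB_USER',     'example_user');   // the user with full rights to it
define('DB_PASSWORD', 'change-me');
define('DB_HOST',     'localhost');      // MySQL running on the same box
```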

Web -- point multiple URLs at the same site

I don't do this often, but sometimes I point more than one variant of a domain at a site.

  • Lion way -- create an additional site in the Server app -- new URL, pointed at the same content directory as the first site.  Works fine.  Oops...  things get sticky when doing this -- I wound up with a bunch of Apache site configuration files, and thus the opportunity for conflicts.  Better way...
  • Set the site up in the Server app with *just* the domain name (leave the "www" variant for the next step)
  • Edit the site configuration (file /etc/apache2/sites/ip-address-stuff_port-number_domain-name.conf) and add ServerAlias records at the very bottom of the file, just before the closing </VirtualHost> entry.
  • Like this:

ServerAlias www.example.com
ServerAlias good.example.com
ServerAlias bad.example.com
  • Restart the web server (and clear the browser cache) to check
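Zoomed out, the tail end of the site file winds up looking something like this sketch (example.com stands in for the real domain, and everything above the aliases is whatever Server generated -- leave it alone):

```apache
<VirtualHost *:80>
    ServerName example.com
    # ... all the Server-generated directives stay as-is ...
    ServerAlias www.example.com
    ServerAlias good.example.com
    ServerAlias bad.example.com
</VirtualHost>
```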

Web -- redirects

I like to throw redirects into sites from time to time.  In Snow Leopard, this was easily done through Server Administrator but that's gone in Lion.  Adding them into the Apache files isn't too bad though.  Here's how.

  • Open the site file (/etc/apache2/sites/ip-address-stuff_port-number_domain-name.conf -- I like the TextWrangler editor for this kind of stuff)
  • Insert a section that looks like this (I lifted this from my Haven2.com file on the Snow Leopard file and stuck it into my Dissembling.com test site);

<IfModule mod_alias.c>
Redirect temp "/rss.xml" "http://feeds.feedburner.com/Haven"
</IfModule>

  • You only need one set of bracketed "IfModule" statements -- stick in as many "Redirect temp" statements as needed inside it.
  • I'll probably just copy these sections over from their files on the Snow Leopard server and see how they work out.
  • Restart the web server (toggle Web off and back on in the Server app)

Web -- separate log files

Some of my domains get a lot of traffic and it's handy to be able to strain out their stuff into a separate log file.  Not a show-stopper but handy.  Once again, the site files in Apache seem to be the place to do this.

  • Open the site file (/etc/apache2/sites/ip-address_port-number_domain-name.conf)
  • Change the CustomLog and ErrorLog statements to point at a unique file rather than the default
  • Restart the web server
  • Check to make sure things are working by looking in var/log/apache2 for the new files after the restart
  • Best to open the log files with the Console app -- lots easier to read the files (and get real-time updates)
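A minimal sketch of what the two changed lines look like (the filenames and the "combined" format name are assumptions -- keep whatever format argument the generated file already uses):

```apache
# inside the site's <VirtualHost> block in its .conf file
CustomLog "/var/log/apache2/example.com_access_log" combined
ErrorLog  "/var/log/apache2/example.com_error_log"
```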

Web -- rotate log files

I like to have the log files break themselves up into weekly chunks so I can go clear out the old ones every once in a while.  In Snow Leopard, this was easy -- just tick the little box and it did it.  Lion makes me work harder.

  • Open the site file (/etc/apache2/sites/ip-address_port-number_domain-name.conf)
  • Change the CustomLog from this:

CustomLog "/var/log/apache2/example_access_log"

  • To this:

CustomLog '|/usr/sbin/rotatelogs "/var/log/apache2/example_access_log" 604800 -360' "%h %l %u %t \"%r\" %>s %b"

  • Change the ErrorLog from this:

ErrorLog "/var/log/apache2/example_error_log"

  • To this:

ErrorLog '|/usr/sbin/rotatelogs "/var/log/apache2/example_error_log" 604800 -360'

For the record: 604800 is one week in seconds (the rotation interval), and -360 is the offset from UTC in minutes, which lines the rollover up with US Central time.  One wonders if making these changes to the default version of the configuration file would drive this stuff in automagically.  Might just research that some day.

Web -- permalinks in WordPress sites

WordPress has the ability to change the format of the URLs for posts and pages from the ugly PHP link to a prettier "permalink" structure.  Apache needed to be tweaked in Snow Leopard to make this work right, and it still does in Lion.  Here's how.

  • The /etc/apache2/httpd.conf file needs to be changed (only once, the first time through) so that the "AllowOverride" statement in the "/Library/WebServer/Documents/" section reads "AllowOverride All" (there are several AllowOverride statements in httpd.conf -- pay attention to which one is being changed).  Note: I'm not sure this step is really required -- my testing was a little horked up and I'm too lazy to repeat it to verify
  • Open the site file (/etc/apache2/sites/ip-address_port-number_domain-name.conf)
  • Change the statement "AllowOverride None" to "AllowOverride All" in the "Directory" section
  • Create a .htaccess file in the site directory (use Terminal, cd to the site directory, "sudo touch .htaccess")
  • Change ownership of the .htaccess file to the "_www" user ("sudo chown _www .htaccess") -- this lets WordPress modify the .htaccess file with the permalink rules.
  • Restart the web service in the Server app
  • When all else fails (I had a heck of a time getting the server to write the .htaccess file correctly -- although restarting Finder [Apple-menu/Force-quit.../Finder/Restart] may have cured that problem) I manually edit the .htaccess file.   Here's the code that needs to be in it:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
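For orientation, the two AllowOverride edits described above both look something like this (the Directory path is a placeholder -- match whatever path is already in the file you're editing):

```apache
<Directory "/Library/WebServer/Documents/example.com">
    # was: AllowOverride None
    AllowOverride All
</Directory>
```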


Well, none of this is real tough -- so I think I'm about ready to start moving stuff over to the Lion environment.  I'll probably wind up running it under a virtual machine until I've converted everything.  Then I'll explore moving it out of the virtual machine back into a native Lion install on my tiny little server.  Or maybe not.  That's for another day.

The Cadillac

September 6th, 2011

Dearly beloved, once I was young.  I know it's hard to believe, but it's true.  And once I owned this Cadillac...

Taking out a beaver dam

August 21st, 2011


Uh oh...  That swirl in the water?  That's a beaver.  That grass at my feet?  That's our culvert.  Beavers like to build dams just on the upstream side of our culvert, which leads to trouble like The Big Flood.  So this beaver-project must be removed...

Here's one of the culprits...  Making a getaway...

Here's their handiwork.  This got built in one night (I know this 'cause Marcie and I took a dam out yesterday in exactly the same spot -- the beavers like it so much they rebuilt it overnight).  So this one turned out to be really small and really easy to take out (unlike the one before which was a lot more challenging).

The compleat beaver-dam extraction module.  Note the smile on her face -- taking out beaver dams is a boatload of fun.  I had a little too much fun on the one we took out yesterday and wiped out my finger.  So Marcie sidelined me and took over the job.

This is the "before" picture -- Marcie's getting ready to attack.

Here's the "after" picture.  All gone.  Along with another little one that they'd started just downstream from the culvert (an equally big problem for flooding).

Here's Marcie holding a giant Angelica stem in the downstream-dam.  That's not a little tree.  That's a flower stalk!

Here's the dam -- loaded on Trakdor for disposal.  We'll see whether they try again tomorrow.


Fold-out circular table

August 17th, 2011


This is a series of pictures of our dining-room table.  The cool thing about it is how it folds out -- so most of the time it's a modest little table that four people can sit around.  But folded out, we've crammed twelve people around it.  Also great for poker.  This series of pictures shows how it's put together.

Here's the table, in its 4-person folded-up configuration.

This is how it looks halfway in between -- the four extensions have been pulled out.

Here's the "large" configuration.  Eight people sit in style, a little squishing and you can get twelve around it.

OK, so how does it work?  Here's a picture of the halfway-out view, with the center surface removed so you can see all the wiring.

A closer look at the inner workings, this time with the extensions pushed back into the "small" configuration.

Even closer.  There's some pretty amazing geometry in there.

Here's one extension pulled out -- to show the relationship between pushed-in and pulled out inner workings.  As extensions come out, they also need to rise vertically so that they're bringing that foldout piece of the table up into the same plane as the center part.

Another view -- showing how the "east west" extensions are different than the "north south" ones.

Here's one extension pulled all the way out of its little track -- see the shape under there, that's the trick to the "rise vertically" solution.

Close in view of the complex shape of the extension-support -- the geometry is different on all four of these in order for them to fit together when the table is closed.


Here's the trough that the extension rides in -- more rise-vertically geometry.

Trough II, the sequel.

Now all the extensions are extended.

Leg detail.  About the least complicated part, but look at those matching inner and outer curves.  Sheesh, it would take me years to get that right.

Here's a detail of the hinge on the extension.

And here's a shot of how the "flopping down" end of the extension mates with its neighbor.

The bottom of the top -- showing the two big pegs that align it properly.

Detail of the pegs.


Broadband connection improvements — avoiding DNS-interception and “buffer bloat”

August 13th, 2011

This whole saga started when I read an Ars-Technica article called "Small ISPs use "malicious" DNS servers to watch web searches, earn cash."  Here's the lede that got my attention:

Nearly 2 percent of all US Internet users suffer from "malicious" domain name system (DNS) servers that don't properly turn website names like google.com into the IP addresses computers need to communicate on the 'Net. And, to make matters worse, the problem isn't caused by hackers or malware, but by the local ISPs people pay for access to the Internet.

As I read more about this issue, I came across the ICSI Netalyzr, which is a nifty network-diagnostic tool that tests a bunch of dimensions of a broadband connection and will detect this DNS-interception if it's happening.  The good news is that none of my broadband connections have this problem.  BUT, the Netalyzr did discover another problem called "buffer bloat" on my connection at the farm, which explains some of the erratic network behavior here.  The rest of this post is the saga of a delightful geek project to get this fixed -- and documentation to remind me what I did plus provide some goodies for anybody who'd like to follow along.

Buffer-bloat mitigation -- Background

First up -- what is "buffer bloat?"  I came across a post by Jim Gettys called "Mitigations and Solutions of Bufferbloat in Home Routers and Operating Systems" which is mostly focused on a strategy to fix the problem (and is the basis for the stuff I've done here).  Fersure read this post -- but if you're a geek who's interested in understanding what the problem is, also read the "surrounding" posts on his blog.  I'm left pretty completely in the dust by the technical discussion, but I follow it enough to share Jim's concern that this could become a really interesting puzzle.
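To get a feel for why an over-sized buffer turns into seconds of lag, here's a toy back-of-the-envelope calculation.  The buffer and link numbers are made up for illustration (not measured from my gear): the worst-case delay is just the buffer size divided by the link rate.

```python
def queue_delay_ms(buffer_kbytes: float, link_kbps: float) -> float:
    """Worst-case time, in milliseconds, to drain a completely full buffer."""
    bytes_per_second = link_kbps * 1000 / 8          # link rate in bytes/second
    seconds = (buffer_kbytes * 1024) / bytes_per_second
    return seconds * 1000

# A hypothetical 256 KB device buffer sitting on a 384 kbps uplink:
print(round(queue_delay_ms(256, 384)), "ms")         # about 5461 ms
```

Numbers in that ballpark are why one big upload can stall everything else on the line -- every packet, including the VoIP ones, waits behind seconds of queued data.  QoS works by keeping that queue from ever filling.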

The short version of what I'm doing with this project is to protect the Internet from my over-eager home computers by putting my own traffic meter (just like the one at a freeway on ramp) on my Internet connection.  I will tell you true -- taking 10-20% off the top speed of my Internet connection makes it "feel" a WHOLE LOT faster.  Formerly-unusable video streaming (Vimeo streams were the worst, but YouTube was pretty crummy too) is now just fine.  My VoIP phone service from Vonage is now rock solid even when we're doing lots of other uploading/downloading, etc.  I like it a lot and based on this experience I'm going to do the same thing at my other connections as well.

Ingredients -- a new router and Gargoyle

I have been interested in the idea of putting open-source software on a consumer router for a long time, but hadn't had a good reason until I read Jim's piece.  Unfortunately, the Apple AirPort Extreme sitting in the basement isn't on the list of routers that can be treated that way (and, interestingly, also doesn't provide any bandwidth-shaping capability).  So it was off to the Gargoyle site to do some shopping for a new router, one that would be a good target for an upgrade to Gargoyle.   I wound up getting a TP-Link TL-WR1043ND because it's cool looking with its 3 antennae and has lots of CPU-horsepower and memory so performance was likely to be spiffy.

Installation tips

It's always a little nerve-wracking to venture into a whole new realm of activity for me, so I took it pretty slow and easy on the actual set-up process.  I set the new router up with a completely "standard" configuration and ran it that way for a day or two before getting into the exciting Gargoyle stuff.  One thing that interested me was that the TP-Link router software had bandwidth-shaping capability already and I wanted to see if I could mitigate the buffer-bloat just using that.  That didn't work -- see "Tests" below -- but it provided some good entertainment for a day or two, running the tests.  Here's what I did after that:

  • Upgrade the router software.  I went out to TP-Link's web site and pulled down the latest version of the WR1043ND firmware and updated the firmware in the router to the current release.  This had the added bonus of providing me with a "factory" copy of firmware if I needed to fall back from the Gargoyle software.
  • Install Gargoyle on the router.  I followed these instructions for loading Gargoyle on the WR1043ND that are published on the Gargoyle site.  There are two things to note.  The first is that those drop-down menus aren't really drop-down menus, they're just pictures of them.  To actually get the software, follow your nose through the download section until you get to the place that's described by those graphics.  But here's the other note -- the graphics are a little old and point to the 1.3.14 version of Gargoyle -- I jumped ahead to the 1.3.16 version and it's been fine (for the big 24 hours that I've been running it).  The rest of the installation went without a hitch -- I used the "firmware upgrade" function on the standard software, pointed at the Gargoyle file I'd just downloaded, had a couple sips of a beverage and the router rebooted itself into Gargoyle.
  • Test the "fallback to factory software" scenario.  Before messing around with Gargoyle, I tested rolling the router back to a standard configuration.  I used the slightly-modified "factory" software from the Gargoyle page, ran it through Gargoyle's "Update Firmware" process and scared the heck out of myself when the upgrade didn't complete.  I thought I'd turned the router into a brick -- but it turns out that the web-interface just isn't smart enough to know that the router has rebooted itself.  I logged back into the router and found factory screens rather than Gargoyle screens.  Whew.  Then I upgraded the software to the software I'd downloaded from TP-Link and got myself back to a completely-factory router again.  Once I'd gotten through all that I repeated the process of loading Gargoyle on the router and that's where it sits today.
  • By the way, Gargoyle's default password is "password", not the typical "admin" -- just a note to save time the next time I upgrade the firmware.

Tests and observations.

One nice thing about Netalyzr is that it leaves permanent copies of the results out on the net so's you can refer anybody to them.  Here's the series of tests I ran at the farm.  Unfortunately, I forgot to capture the permalink of the very first couple of tests (with the Apple router and the WR1043 in default configurations).  Dang.  So I'll skip forward to a series of tests running the Gargoyle software on the new router.

1st test -- New router, Gargoyle software, default configuration, QoS turned off.  Note the Red-bordered part of the results -- which show 5400ms of buffering on the uplink and 509ms on the downlink.  This is bad -- this is what got me started on this project in the first place.

2nd test -- New router, Gargoyle software, default configuration, QoS turned on.  Buffer-bloat is dramatically lower -- uplink is 220ms and downlink is 440ms.  BUT, there's a cost.  The default settings in Gargoyle limit the speed of the connection to 300k upstream and 3000k downstream, which is almost cutting the bandwidth in half.  On the other hand, it proves that buffering can be mitigated.

3rd test -- New router, Gargoyle software, bandwidth QoS settings increased to 500k upstream x 5000k downstream, QoS turned on.  Uplink buffering remains around 220ms (same as before -- this is good), downstream buffering is starting to creep up at 680ms.  This is where I've left it for now -- more experimentation to follow, but this gives you a sense of the thing.  Upstream buffering is less than half what it was, downstream buffering is reduced almost ten-fold.

IPHouse test -- You want to see a perfect score on the Netalyzr test?  I ran the test from my little server over at IPHouse.  Perfection -- no flags at all.  What else would you expect from IPHouse?  It proves that you CAN configure a network correctly and eliminate buffer-bloat.

So there you have it.  The "real world" results are still coming in, but so far the connection here at the farm "feels" more solid.  I downloaded a few videos and they don't stutter the way they used to.  The Vonage line is now getting top priority in QoS and should be less subject to disruption when we're doing a lot of uploading (although that will have to wait for a teleconference for confirmation).  All good, an easy project and a neat new router/software combo in the basement.

Image: jscreationzs / FreeDigitalPhotos.net

Mikey in the high branches.

June 22nd, 2011

This is a post that most readers of this blog are going to scratch their heads over.  I volunteered a fair amount of my time to ICANN (the organization that works on the domain-name and numbering systems that underpin the Internet).  Until yesterday.  I got pretty cranky over an email exchange that I (as a working-group member at the bottom of ICANN's bottom-up policy-making process) had with a couple Big Kids on the Council that manages our working-group-based policy-making process.  I loudly resigned over this -- here's a link to my grouchy email to the community.

Kieren McCarthy wrote a great article that places my resignation in context and that article kinda went viral in the community yesterday afternoon.  A bunch of people have asked me "hey Mikey, what the heck put you up in the high branches like that??"  So I've decided to post the email dialog that so got me going.  Sorry to those of you regular readers who will be scratching your heads over this weird post.

Cast of characters in the tragedy;

Mikey -- that would be me

Tim Ruiz -- one of the people who represents Registrars on the GNSO Council.  Tim works for GoDaddy.com, which is by far the largest registrar (essentially the Wal-Mart of domain-name registration outfits).  With those two hats, Tim pulls considerable weight in the organization.

Stéphane Van Gelder -- another Registrar representative and also Chair of the GNSO Council.  Another heavy hitter.

The Dialog;

Mikey: hi all,

i'm just lobbing a suggestion into the "locking during UDRP"-recommendation discussion that's going on in advance of the Council meeting coming up later today.  this note is primarily aimed at my Councilors, colleagues in the BC and fellow members of the IRTP-WG, but i've copied a few others just because i can.

as a member of a working group that's wrapping up two years of work on this stuff, i am hoping that the Council will not rewrite our recommendations on its own.  this is a repeat of the "i'm trainable" comment i made in SFO.  what i'm hoping is that the Council will vote the recommendation up or down and, if it would like, sends the defeated recommendation back to the working group for refinement.  you can even include suggestions if you like.  but please don't make changes to our recommendations without giving us a chance to participate in the process.

you can invoke all the historic "Council should be *managing* the policy process, not being a legislative body" arguments in this paragraph if you like.

i'm still trainable.  :-)

Tim Ruiz: My goal is not to derail the rest of the work over this since that rec was already acted on. The locking question has already been picked up in the UDRP issues report (done in response to the RAP report).

Mikey: yep -- i get that Tim.  i'm really zeroed in on the process, though.  it would be fine to push it back to the WG with your comment as annotation.  this issue is the perfect one to use as a test-case for the very reasons you describe.  my worry is that some day we'll get to a tough/complex issue  on a WG report and the Council will roar off and try to fix it on the fly rather than pushing it back to the people who've devoted the time to get up to speed on the nuances.

as a WG member i'd much rather hear "hey WG folks, can you fix this?" than "we fixed it for you."

Tim Ruiz: There is nothing for the WG  to fix and the Council is not changing any recs. We just want to consider that one with the UDRP issue it is already tied in with. I am all for process, but we can protect that without duplicating efforts.

Mikey: you folks get to do whatever you want to do -- but like i said, i'm trainable.  if you as the Council are going to make that call, without engaging the WG in the conversation, you're setting precedents that the Council may come to regret when it is trying to recruit volunteers to devote years of their lives to efforts like that in the future.

all you have to do is ask us, rather than telling us.

Tim Ruiz: Mikey,

My record is pretty clear on process. I defend it fiercly. But you are really blowing this out of proportion. If you are trainable, let it show. Let's discuss further F2F.


Mikey: Tim, i'd much rather have this conversation over a limited-scope test-case issue that's relatively straightforward to resolve than a really hard one.

if working groups are the place where policy gets made, then let the WG fix this minor problem for you rather than fixing it yourselves.

Tim Ruiz: I'd rather not. I've explained it to you. You either don't get it or don't want to. If you want to discuss F2F let me know.

Stéphane Van Gelder: Mikey,

I think the GNSO Council has a clear understanding of its role in the policy development process.



Mikey: yep.  and so does this volunteer WG member.  i'm now fully trained.


I'm calming down (and was much appreciative of all of you who reached out to help me with that).  So I'm clambering down out of the high branches (while sitting in the Tokyo airport transit lounge on the trip home -- not exactly the best place for reflective writing).  I'll write you direct notes tomorrow, after I'm back in the Midwest.

Music workstation

June 3rd, 2011

I decided to take a picture of the current state of the music workstation.  I wish I'd done this a few times in the past so I could reflect on how it evolves, but there you go.  Anyway, here's the first in a series.

List of Stuff

Computer -- home-brew PC (hidden behind a wall so's not to noise-pollute the mic when I'm podcasting)

Software -- SONAR 8.5 Producer, Jamstix


Keyboards --

Yamaha PSR-1500 (my favorite for banging around in a jam session)

Yamaha S-08 (the "serious hard-core" keyboard)

Edirol PCR-300 (the little one -- super handy for composition)

Tenori-On -- a gizmo I'm still trying to figure out

Audio -- Crown Powertech 3.1 (500 watts/channel at 8 ohms) into EV Sx300 speakers, couple Behringer mixers, MXL-2001 mics

Update -- about a year later -- June 3rd, 2011


My goodness what a difference a year makes.  Here's the current state of affairs.  I finally switched back to the Mac for music-making after a long time away.  I gave up on the PC -- the platform was just too unstable.  Yes, I changed everything (hardware, software, peripherals, cables) trying to diagnose the repeated-crashing/freezing problems.  Don't want to go there.  The Mac "just works" and it's on a laptop so I can haul it around with me.  I'm now enjoying a much higher ratio of "making music" to "fixing the setup" time.

The new additions:

MacBook Pro

Logic Pro

M-Audio Axiom Pro

The old Roland JV880 (hanging on the music stand down there under the S08)

I'm liking this new rig a lot.


UPDATE: 27-February, 2012

Another Big Rearrangement.  Here's the picture (click on it to get a full-sized version).

The big change is the arrival of an OnStage Stands WS8700 that holds all this stuff up.  I'm still ironing out the kinks, but I really like having all the stuff in one place.  The computer "commutes" from my desk (where all of the "office type stuff" like printers, backup drives, and so forth are plugged into a USB hub) over to this pile o'wires where all the music peripherals are hooked together in a USB hub.  It takes about a minute to move the laptop and I'm all set.

The trouble with this layout is that it doesn't go on the road -- it takes about 4 hours to set it up.  So I'm going to have to come up with a thinner version for gigs, but it's great for working at home.

WordPress gallery can’t save or link to external URLs

May 28th, 2011

UPDATE:  Several years have passed and this problem still exists.  However, now there is a nice simple plugin that fixes it.  It's called WP Gallery Custom Links.  It's working for me.  Hopefully it will work for you too.  I updated the first of the three images in my broken example gallery to an external URL as a test.

Sorry about this lame-o post right in the middle of my blog, but this is a bug that's best documented with a post so's the WordPress folks can see what's going on.

I'm running the current version of WordPress here (3.1.3 as of this writing).

I, and many others, would like to be able to insert a gallery of pictures into our posts and specify external links for each picture in the little gallery-editor that comes with WordPress. The problem used to be that WordPress users could not make the Save function work: the Link URL wasn't being saved in the editor. That problem is documented in the WordPress bug-tracking system as problem number 13429.

Sergey came up with a work-around plugin that people can add to their WordPress install, which solves part of the problem.  Downloading and enabling this temporary plug-in indeed fixes the "I can't save external URLs in the gallery editor" problem.  I've linked all of the images on this page to three of my domains (www.geezercast.com, www.kz0c.com and www.bar.com) to illustrate the problem as it stands right now.

If I add all the images individually, they will all point at external URLs, like this:




BUT that's not what people want to do -- they want to be able to post the whole gallery (in this case all three images) at the same time, and have the thumbnails in the gallery point to external links, not to the image file or an attachment page.  I'll insert the gallery this time, using the "attachment page" option as an example.  What people want to happen is exactly what happened above -- 3 pictures pointing, in this case, to geezercast.com, kz0c.com and bar.com.  What happens instead is this:


Changing that "Attachment page" option to "Image file" just makes the thumbnails link to the image files.  So in neither case can a person use the WordPress gallery to link a series of pictures to external URLs.
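In shortcode terms, those two options are the only link styles the stock gallery offers (assuming I'm remembering the shortcode syntax right for this version of WordPress):

```
[gallery]              (thumbnails link to each image's attachment page)
[gallery link="file"]  (thumbnails link directly to the image files)
```

There's no attribute that accepts an arbitrary URL per image -- which is exactly the gap.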

So this may be a combination of a bug and a feature.  The "can't save Link URLs" problem is solved with Sergey's little shim, which will hopefully make it into a production release soon.  But the real problem, "I can't point gallery-thumbnails at external URLs," still exists.
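I haven't read the source of the plugins that solve this, but the general shape of a fix is pretty simple: hook WordPress's `wp_get_attachment_link` filter and swap in a URL stored on the attachment.  Here's a minimal sketch, not a tested plugin -- the `_external_link_url` custom-field name is made up for illustration (real plugins use their own key names):

```php
<?php
/*
 * Sketch only: point gallery thumbnails at an external URL.
 * Assumes each attachment stores its target URL in a custom
 * field named '_external_link_url' (a made-up key name).
 */
add_filter( 'wp_get_attachment_link', 'my_external_gallery_link', 10, 2 );

function my_external_gallery_link( $link_html, $attachment_id ) {
    $url = get_post_meta( $attachment_id, '_external_link_url', true );
    if ( ! empty( $url ) ) {
        // Replace the href WordPress generated (attachment page or
        // image file) with the URL stored on the attachment.
        $link_html = preg_replace(
            '/href=([\'"]).*?\1/',
            'href="' . esc_url( $url ) . '"',
            $link_html
        );
    }
    return $link_html;
}
```

Drop something like that into a tiny plugin, add the custom field to each image, and the gallery thumbnails should follow the external links no matter which "link to" option is selected.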

Here's a gallery that behaves "the right way" -- using this great hack of the NextGen Gallery and NextGen Gallery Custom Fields plugins.


Any chance that we could get that ability in the normal gallery?  The hack wasn't hard, but it's pretty intimidating for "normal" users, and this seems like an easy addition to the existing Gallery function.