macOS 10.14.5 beta, Notarization and Stapling Review

Editor’s Note: Once again, this post is a moment frozen in time, written to document a passing state of affairs. This post is one of a series, so please be sure to read the other posts in this series, and recognize that things are changing constantly.

Related posts

macOS 10.14.5 beta 2: Kernel Extension Notarization, UAMDM, Whitelisting & You


Last time on this blog, I talked about a new requirement that is present in the early betas of macOS 10.14.5. Kernel extensions that are installed on a 10.14.4 system that is upgraded to 10.14.5 may not operate correctly if they are not notarized by Apple. In this situation, if the kernel extension is whitelisted via User-Approved Kernel Extension Loading (UAKEL) under a user-accepted MDM (UAMDM), you have nothing to worry about for now. If you’re not using UAKEL and UAMDM, and you are installing kernel extensions that are not signed and notarized by Apple, you’re going to have a bad time. These extensions will not load, and the applications that depend on them will not operate, if they are built and signed after the demarcation date, which is currently 11 March 2019 but may change in the future.

An Example

Recently, DisplayLink released a new version of their kernel extension:

The release notes state:

Software package notarized by Apple as required for macOS 10.14.5 onwards.

DisplayLink Release Notes

However, should one download the software, and inspect it, one might find that things are lacking:

Persephone:Downloads tom$ stapler validate -v DisplayLink\ USB\ Graphics\ Software\ for\ macOS\ 5.1.1.dmg 
Processing: /Users/tom/Downloads/DisplayLink USB Graphics Software for macOS 5.1.1.dmg
Properties are {
    NSURLIsDirectoryKey = 0;
    NSURLIsPackageKey = 0;
    NSURLIsSymbolicLinkKey = 0;
    NSURLLocalizedTypeDescriptionKey = "Disk Image";
    NSURLTypeIdentifierKey = "";
    "_NSURLIsApplicationKey" = 0;
Creating synthetic cdHash for unsigned disk image, DisplayLink USB Graphics Software for macOS 5.1.1.dmg. Humanity must endure.
Signing information is {
    cdhashes =     (
        <fd2d35b7 cea70fab 2e22850b 3f39070d a7fa0f52>
    "cdhashes-full" =     {
        2 = <fd2d35b7 cea70fab 2e22850b 3f39070d a7fa0f52 781113f0 7b8686a8 7803c116>;
    cms = <>;
    "digest-algorithm" = 2;
    "digest-algorithms" =     (
    flags = 2;
    format = "disk image";
    identifier = ADHOC;
    "main-executable" = "file:///Users/tom/Downloads/DisplayLink%20USB%20Graphics%20Software%20for%20macOS%205.1.1.dmg";
    source = "explicit detached";
    unique = <fd2d35b7 cea70fab 2e22850b 3f39070d a7fa0f52>;
Stored Codesign length: 12 number of blobs: 0
Total Length: 12 Found blobs: 0
DisplayLink USB Graphics Software for macOS 5.1.1.dmg does not have a ticket stapled to it.

Well, they didn’t staple the DMG file. How about the kext itself?

Persephone:Extensions tom$ stapler validate -v DisplayLinkDriver.kext/
Processing: /Library/Extensions/DisplayLinkDriver.kext
Properties are {
    NSURLIsDirectoryKey = 1;
    NSURLIsPackageKey = 1;
    NSURLIsSymbolicLinkKey = 0;
    NSURLLocalizedTypeDescriptionKey = "Kernel Extension";
    NSURLTypeIdentifierKey = "dyn.ah62d4qmuhk2x445ftb4a";
    "_NSURLIsApplicationKey" = 0;
Props are {
    cdhash = <c90f6a0c 1076a443 e73cf694 9fe11422 f63f383e>;
    digestAlgorithm = 2;
    flags = 65536;
    secureTimestamp = "2019-04-12 09:34:45 +0000";
    signingId = "com.displaylink.driver.DisplayLinkDriver";
    teamId = 73YQY62QM3;
DisplayLinkDriver.kext does not have a ticket stapled to it.

Nope, no joy there, either. How about the package inside the DMG?

Persephone:Extensions tom$ stapler validate -v /Volumes/DisplayLink\ Installer/DisplayLink\ Software\ Installer.pkg 
Processing: /Volumes/DisplayLink Installer/DisplayLink Software Installer.pkg
Properties are {
    NSURLIsDirectoryKey = 0;
    NSURLIsPackageKey = 0;
    NSURLIsSymbolicLinkKey = 0;
    NSURLLocalizedTypeDescriptionKey = "Installer package";
    NSURLTypeIdentifierKey = "";
    "_NSURLIsApplicationKey" = 0;
Sig Type is RSA. Length is 3
Sig Type is CMS. Length is 3
Package DisplayLink Software Installer.pkg uses a checksum of size 20
We do not know how to deal with trailer version 41376. Exepected 1
DisplayLink Software Installer.pkg does not have a ticket stapled to it.

Well, if they notarized any of the parts, they didn’t complete the process in a way that allows us to verify it offline.

When I ran the installer package on my machine, I did receive a UAKEL alert during install, indicating that the payload was being blocked until I accepted the kext. That tells us the kext was notarized, just not stapled.

So, what would lead a developer to think they have notarized their kernel extension successfully, while the operating system believes otherwise? I can’t be sure what happened in DisplayLink’s case, but one possibility is that it was built on an airgapped system where Xcode could compile the code, and that when the software was later submitted to Apple for signing and notarization, the final step of stapling the returned ticket to the shipping artifact was never completed. Even without a stapled ticket, Gatekeeper can still recognize a notarized object, because Gatekeeper can talk with Apple and ask whether a ticket exists for it.
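For context, the developer-side flow that appears to have been left unfinished looks roughly like this in the Xcode 10 era. This is a sketch: the bundle ID, Apple ID, and file name are hypothetical placeholders, and the RequestUUID placeholder stands in for the value altool returns.

```shell
# Sketch of the Xcode 10-era notarization flow. All names below are
# hypothetical placeholders, not DisplayLink's actual values.
notarize_and_staple() {
  # Submit the artifact to Apple's notary service; this prints a RequestUUID
  xcrun altool --notarize-app \
      --primary-bundle-id "com.example.driver" \
      --username "dev@example.com" \
      --file "DriverInstaller.pkg"
  # ...poll `xcrun altool --notarization-info <RequestUUID>` until approved...
  # Then the step that appears to have been skipped here:
  xcrun stapler staple "DriverInstaller.pkg"
}
```

The function is illustrative and not invoked; the point is that stapling is a separate, final step after Apple returns the ticket, and it is easy to omit.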

Apple’s Developer Documentation says:

Notarization produces a ticket that tells Gatekeeper that your app is notarized. After notarization completes successfully, the next time any user attempts to run your app on macOS 10.14 or later, Gatekeeper finds the ticket online. This includes users who downloaded your app before notarization.

So, if you deliver an unstapled object, as DisplayLink has, it may still pass muster, but only if the machine can talk with Apple at the time of install. If you are operating a network that relies on 802.1X user certificates, and you install software at the login window (with Munki, say), you may run into a circumstance where the software really is notarized by Apple, but without the stapled ticket and without a network path to Apple, you can’t prove it. This will result in a failed install.
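A pre-flight check in your deployment tooling can tell you whether a given installer will need that online conversation. This is a sketch: the path is a placeholder, and it leans on stapler’s exit status, which is nonzero in the unstapled transcripts above.

```shell
# Sketch: does this installer carry its own stapled ticket? `stapler
# validate` exits nonzero when no ticket can be found; offline, only a
# stapled object should pass. The path below is a placeholder.
staple_status() {
  if xcrun stapler validate "$1" >/dev/null 2>&1; then
    echo "stapled"
  else
    echo "not stapled"
  fi
}
staple_status "/tmp/Example Installer.pkg"
```

Running this against your repository before distribution would flag packages that will fail on network-restricted clients.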

So, Who Do You Need To Talk To?

According to Apple:

In addition, stapler uses CloudKit to download tickets, which requires access to the following IP address ranges, all on port 443:

If you can’t open up your network to those segments, be aware that you won’t be able to verify notarization online, and the software your Mac endpoints depend on may fail to install or run.
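A quick probe from inside a restricted network can confirm whether that conversation is even possible. This is a sketch: the hostname is my assumption, and Apple’s documentation remains the authority for the exact ranges to open.

```shell
# Sketch: probe CloudKit reachability on port 443 from a restricted
# network segment. The hostname is an assumption on my part; check
# Apple's published IP ranges for the authoritative list.
can_reach_cloudkit() {
  if curl --silent --head --max-time 5 "https://api.apple-cloudkit.com/" \
      >/dev/null; then
    echo "reachable"
  else
    echo "blocked"
  fi
}
```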

So, What Can I Do?

Well, you might be able to try stapling on your own. If an object has been validated by Apple during a notarization process, but the distributed resources are unstapled, you may be able to “fix” that by stapling the necessary objects yourself. They’re notarized, after all, just not by you!

xcrun stapler staple /path/to/DisplayLinkDriver.pkg

Stapling produces a different result when you validate the package:

Persephone:Extensions tom$ stapler validate -v ~/Desktop/DisplayLink\ Software\ Installer.pkg 
Processing: /Users/tom/Desktop/DisplayLink Software Installer.pkg
Properties are {
    NSURLIsDirectoryKey = 0;
    NSURLIsPackageKey = 0;
    NSURLIsSymbolicLinkKey = 0;
    NSURLLocalizedTypeDescriptionKey = "Installer package";
    NSURLTypeIdentifierKey = "";
    "_NSURLIsApplicationKey" = 0;
Sig Type is RSA. Length is 3
Sig Type is CMS. Length is 3
Package DisplayLink Software Installer.pkg uses a checksum of size 20
Terminator Trailer size must be 0, not 2073
{magic: t8lr, version: 1, type: 2, length: 2073}
Found expected ticket at 7812133 with length of 2073
JSON Data is {
    records =     (
            recordName = "2/1/5362032c46062ca6e74bab1bf6ce672f6a578989";
 Headers: {
    "Content-Type" = "application/json";
Domain is
Response is <NSHTTPURLResponse: 0x7f85265134a0> { URL: } { Status Code: 200, Headers {
    "Apple-Originating-System" =     (
    Connection =     (
    "Content-Encoding" =     (
    "Content-Type" =     (
        "application/json; charset=UTF-8"
    Date =     (
        "Thu, 18 Apr 2019 20:23:51 GMT"
    Server =     (
    "Strict-Transport-Security" =     (
        "max-age=31536000; includeSubDomains;"
    "Transfer-Encoding" =     (
    Via =     (
        "icloudedge:sv05p01ic-ztde010811:7401:19RC85:San Jose"
    "X-Apple-CloudKit-Version" =     (
    "X-Apple-Request-UUID" =     (
    "X-Responding-Instance" =     (
    "access-control-expose-headers" =     (
        "X-Apple-Request-UUID, X-Responding-Instance",
    "apple-seq" =     (
    "apple-tk" =     (
} }
Size of data is 3377
JSON Response is: {
    records =     (
            created =             {
                deviceID = 2;
                timestamp = 1555062296808;
                userRecordName = "_d28c74d190a3782e89496b0a13437fef";
            deleted = 0;
            fields =             {
                signedTicket =                 {
                    type = BYTES;
                    value = "snipped for simplicity.";
            modified =             {
                deviceID = 2;
                timestamp = 1555062296808;
                userRecordName = "_d28c74d190a3782e89496b0a13437fef";
            pluginFields =             {
            recordChangeTag = judvxvj5;
            recordName = "2/1/5362032c46062ca6e74bab1bf6ce672f6a578989";
            recordType = DeveloperIDTicket;
Downloaded ticket has been stored at file:///var/folders/tk/qhvvt21x7z3fzt125dpgjlym0000gp/T/95f1738a-0da3-441e-abe4-982d57970d51.ticket.
The validate action worked!

This means that, as admins, if we want to install notarized software in circumstances where network access won’t permit a conversation with Apple’s CloudKit servers, we need to make sure the notarization ticket is stapled to the installer. That may require changes to our workflows, and now’s a good time to start thinking about what it will mean for the automated download and handling of installers.
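One way to bake that into a workflow is a sweep over your repository before distribution. This is a sketch, and the repo path is a hypothetical placeholder; run it somewhere with open network access to Apple.

```shell
# Sketch: sweep a repo of installer packages and staple any that lack a
# ticket, so clients never need the online check at install time. The
# repo path is a hypothetical placeholder.
staple_repo() {
  for pkg in "$1"/*.pkg; do
    [ -e "$pkg" ] || continue                   # empty repo: nothing to do
    if ! xcrun stapler validate "$pkg" >/dev/null 2>&1; then
      xcrun stapler staple "$pkg" && echo "stapled: $pkg"
    fi
  done
}
staple_repo "/Volumes/munki_repo/pkgs"
```

Note that stapling only succeeds for objects Apple has actually notarized; anything else will fail and should be raised with the vendor.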

Thanks as always to the gang from #notarization on the Mac Admins Slack for providing good discussion of a difficult topic.

macOS 10.14.5 beta 2, Kernel Extension Notarization, UAMDM, Whitelisting and You

Editor’s Note: This is an evolving topic and by the time you come across this in a search engine, circumstances may have changed. Treat this post as a frozen moment in time, things may have evolved for better or worse in the intervening weeks.

BLUF: If you are whitelisting kernel extensions on Macs with UAMDM, by Team ID, or by Team ID and Bundle ID, notarization is not necessarily required as of beta 2 of macOS 10.14.5. Those without UAMDM-defined kernel extension whitelists will need to make sure that kernel extensions are installed with valid signatures and, where the signing secureTimestamp falls after the cutoff date, Apple notarization as well.

Kernel Extension Signing in macOS 10.14.5 beta 2

Let’s begin with the recitals: as of macOS 10.14.5’s release, kernel extension signing is no longer sufficient on its own. Kernel extensions updated after March 11th, 2019, or created for the first time after that date, will need to be notarized as well as signed. This means that your application and all attendant parts must be signed with your Developer ID and notarized by Apple. Here is how Apple explains this:

Notarization gives users more confidence that the Developer ID-signed software you distribute has been checked by Apple for malicious components. Notarization is not App Review. The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly. If there are no issues, the notary service generates a ticket for you to staple to your software; the notary service also publishes that ticket online where Gatekeeper can find it.

Notarizing Your App Before Distribution, Apple Developer Documentation

We had two easy tests for how this operated. Once macOS 10.14.5 beta 2 was installed on my daily driver, I downloaded updates to two of the apps we use that have kernel extensions and had been updated after March 11th: VMware Fusion Pro 11.0.3 and Kerio’s VPN Client 9.3.0.

On install of the new VPN Client, I received the following dialog:

Rejection Dialog from macOS for an invalid kernel extension

Kerio’s VPN Client was now dead in the water, no matter what I tried. An inspection (which requires Xcode 10.2, not just the Command Line Tools) of the kvnet.kext file in /Library/Extensions indicated I no longer had a valid kernel extension:

Persephone: tom$ stapler validate -v /Library/Extensions/kvnet.kext/
Processing: /Library/Extensions/kvnet.kext
Properties are {
    NSURLIsDirectoryKey = 1;
    NSURLIsPackageKey = 1;
    NSURLIsSymbolicLinkKey = 0;
    NSURLLocalizedTypeDescriptionKey = "Kernel Extension";
    NSURLTypeIdentifierKey = "dyn.ah62d4qmuhk2x445ftb4a";
    "_NSURLIsApplicationKey" = 0;
Props are {
    cdhash = <5bf723ec 9f7a0027 4592266d 0514db04 5f1760bb>;
    digestAlgorithm = 1;
    flags = 0;
    secureTimestamp = "2019-04-08 12:34:03 +0000";
    signingId = "com.kerio.kext.kvnetnew";
    teamId = 7WC9K73933;
kvnet.kext does not have a ticket stapled to it.

Without a valid ticket stapled to the kext, I was going to have a problem running it, as the secureTimestamp value is after 2019-03-11.
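The decision the OS is making here can be approximated with a simple date comparison. This sketch uses the secureTimestamp reported above and the current demarcation date:

```shell
# Sketch: is this kext's signing timestamp past Apple's notarization
# cutoff? The secureTimestamp is the value from the stapler output
# above; the cutoff is the current demarcation date.
cutoff="2019-03-11"
secure_ts="2019-04-08 12:34:03 +0000"   # from `stapler validate -v`
ts_date="${secure_ts%% *}"              # keep just the YYYY-MM-DD part
if [[ "$ts_date" > "$cutoff" ]]; then
  echo "notarization required"
else
  echo "signature alone suffices"
fi
```

ISO-style dates compare correctly as plain strings, which is why a lexical comparison works here.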

Well crap. I need that kernel extension for my VPN connections to client locations to work, so how am I going to get around this? Thanks to #notarization on the Mac Admins Slack, to Allen Golbig at NASA Glenn, to Graham Pugh, and to the help of others, the answer was already in our hands: User-Accepted Mobile Device Management and Team ID whitelisting in MDM’s Kernel Extension Whitelisting payload.

If you have a Mac with UAMDM (either via actual user acceptance, or via implied acceptance through Automated Enrollment), and you are specifying the Team IDs of the kernel extensions you want whitelisted, the new requirement does not apply to those extensions: no check is made of the kernel extension’s notarization, as its signature alone is sufficient for privileged execution.
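For illustration, the heart of such a whitelist payload looks roughly like this. This is a fragment of a Kernel Extension Policy profile, with the surrounding profile keys omitted, using the Kerio Team ID from the stapler output above:

```xml
<!-- Fragment of a com.apple.syspolicy.kernel-extension-policy payload;
     wrap in a full configuration profile before deploying via MDM. -->
<key>PayloadType</key>
<string>com.apple.syspolicy.kernel-extension-policy</string>
<key>AllowUserOverrides</key>
<true/>
<key>AllowedTeamIdentifiers</key>
<array>
    <string>7WC9K73933</string>
</array>
```

Delivered over UAMDM, this whitelists every kernel extension signed by that Team ID; the AllowedKernelExtensions key can narrow it to specific bundle IDs if you prefer.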

MacADUK 2019: Highlights & What I’m Taking Forward

St. Paul’s and the Thames

This year’s MacADUK Conference is in the books, and I’ve made it back to the States in one piece. It was a busy week, full of socializing and engaging with colleagues, as well as learning about new topics in client management and deployment workflows, encryption details, and security philosophy. My sincerest thanks to Ben Toms and James Ridsdale from Datajar who chaired this year’s conference, and to the team at Amsys that handled logistics and details.


Park nearby Prospero House

This year’s conference had some great sessions. When the videos are out, I would strongly recommend seeing the following sessions on the small screen:

Armin Briegel, Deployment: Modern deployment workflows for business

Deployment is a source of opportunity for every IT department out there. It’s literally your coworkers’ first impression of your operation, so why aren’t you putting your best foot forward with customized deployment via Automated Enrollment and Mobile Device Management? Figuring out how to replace older ASR-based imaging with new deployment strategies is a challenge worth embracing.

Chris Chapman, macOS in a Docker Container for Development

There’s no question that Docker and Kubernetes are key components of modern software development stacks, especially for web-oriented applications. Chris Chapman of MacStadium has taken this to a whole new level by writing a boot loader for Kubernetes and Docker on Apple hardware, allowing you to deploy a macOS image through orchestration and Docker. The more I think about this, the crazier it is, but it demonstrates a flexibility that wasn’t possible before. I’m sure this is completely unsupported, but what a phenomenal way to think about the underlying tool chains we build from. It’s called Orka, and MacStadium is looking for beta sites.

David Acland, All about digital signatures

We spend a whole lot of our admin life making sure that signatures align and are approved, but how does that process actually happen? What’s the working relationship between a hash and a signature? What’s the actual cryptographic process used to take a hash and sign it as a measure of identification integrity? David took us through the details, and it was a real pleasure. And my head didn’t explode.

Ben Goodstein, Working with APIs: Power up your scripts with real time information

APIs in scripts are table stakes for adminry these days, and where better to get a refresher than with a low-stakes custom API that Ben wrote for accepting data from a script. He also told us about Insomnia, a GUI app for practicing API calls, reviewing what comes back, and gathering information more effectively. It was a great session, and I learned a lot of useful things to iterate against.

Commit No Nuisance


I had a few big thought lines that came back repeatedly during the conference, and led to some noodling in my head on walks through London. We’re once again at an inflection point in macOS administration, much as we were in the 10.8/10.9 period, the 10.7 period, and the 10.5 period. There are changes to our management structures that are no longer flashes in the pan:

MDM is not optional.

Deployment should be automated.

Manage as little as you need to retain a presence on the platform.

Managing more than you need to results in Shadow IT and Loopholes.

IT Operations relies on trust. Not just the mechanized and automated trust chains established through TLS certificates and certificate authorities, but the human trust that is implicit between Management and IT, and between IT and the end users, your coworkers. For any IT policy to succeed, it must come with buy-in from your coworkers, not just in your department, but across your whole organization. Systems that are deemed too complicated will be ignored. Systems that are deemed too cumbersome to operate will spark grudges. Systems that are deemed impossible to personalize will result in shadow IT usage on personal equipment.

The balance between security, usability, and management philosophy remains the single most important challenge of any IT environment, large or small. If you have a bad balance, your coworkers will fight with you, resent you, and eventually work around you and cut you out.

Having a light hand on your workstations will be challenged by internal and external security guidelines, though, and you’ll need to be ready with justifications based on feedback in the event that your choices are questioned. Obviously, there are some guidelines you can’t ignore. But the security of the platform needs to be part of your process: not bolted on, not thought of after, but holistically part of your deliberations and choices. Self-healing management is a part of that, as are centralized reporting mechanisms designed to track the state of a machine.

If IT isn’t working to enhance the culture of your organization by extending and embracing systems of participation and training, your value will be subsumed by internal groups that are doing these things. That means providing good guard rails, but also providing knowledge and power at the workstation level to enhance your colleagues’ ability to do their jobs.

IT is a human-facing department in 2019. We serve the people. We just also serve the machines they use.

Update Your Understanding of Wi-Fi: Workshop Opportunities

As an Apple-focused admin, one of the tools that most belongs in your toolbox is an understanding of how Wi-Fi works at a fundamental level. You need to know how devices interact with this network layer at least as well as you know how they interact with TCP/IP and other Layer 3 technologies. In our day-long workshop, Chris Dawe and I are going to be talking about how Wi-Fi works, from a history of the technology (now in its third decade!), to how your Apple devices interact with Wi-Fi, to how to troubleshoot networks and design better ones.

We’ll be doing this workshop at X World in Sydney in late June, and then again at Mac Admins in July. We’re thrilled to be working with two of the Apple Admin world’s best conferences.

We’ll be starting with a thorough history of Wi-Fi, moving to the nuts and bolts of how Wi-Fi works between a client and an access point, then on to introductory sections on network design, network troubleshooting, and network security, and an overview of the survey and analysis tools. We’ll wrap it all up with how Apple devices interact with all of the above, and all the specialized knowledge you’ll need to make sure that your networks are tuned properly for your fleets.

Physics Always Wins is the name of the workshop, and a stern reminder that there are good rules of the road to follow for good Wi-Fi. Discerning what the best settings are for your environment feels like an artisan’s job these days, given the layers of marketing speak and incomplete understandings of the radio frequency world. We’re hoping to give more Apple Admins a firmer understanding of how it all works, so you can make the right engineering choices.

Apple Ships Blindingly Fast New iMacs With Old, Slow Storage

I will guarantee you that the single greatest bottleneck in terms of speed on the base 4K iMac is that slow spinning disk drive. People who spend $1299 for a 4K iMac in 2019 deserve not to see a spinning beach ball—but they probably will. This is one case where Apple should either take the hit on profit margin or just raise the price if it has to.

Jason Snell, Six Colors, “The iMac and spinning-disk disappointment”

The 2019 iMacs have at their core incredible Intel processors, large amounts of RAM, market-leading displays and powerful 3D cards. These are machines that can game with the best, display beautiful movies and photographs with incredible color fidelity, and rip through even the most complicated processing needs in a bare minimum amount of time.

And they also ship by default with old, slow 5400rpm hard disks that came to the marketplace in 2007 in 1TB capacities. When Hitachi released the first Deskstar with 1TB that year, at a whopping price of $399, they boasted a cost-per-GB of $0.40. Now you can have a SATA SSD for less than $0.25 per GB, and an M.2 SSD for $0.35 per GB.

Sure, some of the newly released 1TB drives in the iMac are mated to small SSDs as Fusion Drives, but a Fusion Drive isn’t a substitute for a full-size SSD. The speed just isn’t there. The maximum throughput of a spinning disk is around 1.5Gbps, and that’s achieved only under the best conditions. Most of the time it’s under 1Gbps for a 7200rpm drive, let alone a 5400rpm drive, which will top out around 800Mbps under perfect conditions. A SATA-based SSD can do four to six times that throughput, topping out at 6Gbps, and those drives are readily available from OWC at retail.

If you move up to an M.2 SSD, similar to the kind used in the 2014–2015 MacBook Pros, the price increases. So does the speed: up to about 16 times the average read and write speed of the 5400rpm drives, topping out at 16Gbps. The current generation of MacBook Pros tops out closer to 20Gbps.
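As a rough illustration of what those figures mean in practice, here is a back-of-the-envelope conversion of each throughput into the time needed to read a full 1TB disk. The speeds are the approximate ones above, not benchmark results:

```shell
# Back-of-the-envelope: seconds to read 1 TB (8000 gigabits) at the
# rough sustained throughputs discussed above.
for entry in "5400rpm:0.8" "SATA SSD:6" "M.2 SSD:16"; do
  name="${entry%:*}"          # label before the colon
  gbps="${entry##*:}"         # throughput in Gbps after the colon
  secs=$(awk -v g="$gbps" 'BEGIN { printf "%.0f", 8000 / g }')
  echo "$name: ~${secs}s"
done
```

At these rough numbers, a full read takes close to three hours on the 5400rpm drive versus under ten minutes on the M.2 part, which is the gap a base-model buyer is living with.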

Apple has a hard job, to serve a wide clientele with varying needs, from home users, to education marketplaces, to corporate fleets, to small businesses and more. However, I can’t imagine that a 5400rpm drive on the desk of an Apple Executive, senior or otherwise, would last more than an afternoon. Why should it land on the desk of your average staffer, when they’re often the heart of an enterprise’s productivity? Why should it land in our schools’ computer labs, or on a creative’s desk?

We live in an era of unrivaled external and internal storage options and speed, with external disks coming in faster, better, larger, and stronger, and cloud storage available without limits. An era where there are not one but two 40Gbps buses on the back of the 2019 iMac, ready to plug into a 1TB SanDisk SSD available for under $200.

But we also live in an era where every iMac’s base configuration still has a 5400rpm drive at its heart, just like 2007. Apple is selling us a sports car with a monster V-12, only they’re disabling six of the cylinders to save a few bucks. Given the importance of the iMac’s design to Apple’s brand, this seems an odd choice.

Three-toed sloth is a photo by Magnus Bråth and used under a Creative Commons Attribution License.

A Hero’s Return

One of Technolutionary’s first purchases in 2006 was this Mac mini, which today returned to our offices after 13 years of duty in co-location at Solid Space in North Carolina. It’s been a file server, a mail server, and more, and we absolutely, positively got our money’s worth on this beauty.

Thanks for the service, Mac mini, your retirement awaits. Thank you Apple, for building technology we can depend on.

A sunset over Puget Sound, from the offices of Dropbox Seattle

Mac Admins Podcast 112: Live From Seattle

This past week, I traveled to Seattle, Washington to join the Apple Admins of Seattle and the Greater Northwest at their monthly meetup, and to record an episode of the Mac Admins Podcast with my friends Chris Dawe, Jonathan Spiva and Vi Lynk, as well as my new friend Ashley Smith. The topic was fairly simple: let’s talk about career paths and career trajectories and all the crazy things a life in IT can bring.

We talk a lot about technology in our jobs, but we don’t talk a lot about our jobs in technology, and it was great to sit down and chat about how we’ve gotten to where we are, where we’re headed, and what we’re learning about working with people, machines, and applications. In particular, I found Vi’s points about relationships mattering in IT to be illuminating. How we fit our departments and businesses into each other is so important. It made me go watch her talk from Penn State again, to remind myself of who my internal folks are and who my external folks are, so I can close the loop with many of those people in the near term.

Ashley Smith reminded me of the importance of being willing to do the legwork on a topic when you don’t know the answer, and that the best response to a question you don’t know the answer to is “I don’t know, but I’ll find out.” We grind so many people through the grist mill of Tier 1 support, but we don’t spend time letting them learn, in favor of metrics that likely don’t have a good backing in objective reality. As part of managing service desks, we need to make sure that we’re not blindly adhering to metrics over the development of our people.

This week’s episode is a break from the minutiae of the job, in favor of some of the bigger picture. It’s worth your time to listen in the browser, or out on Overcast.

Seattle was a marvelous city to visit, even in the midst of winter, and I had so many incredible meals (Seven Stars Pepper! Harbor City! Jack’s! Arctic Club! Beer Star!) and conversations that it will remain in my heart. My thanks to organizers of the event, and to everyone that I got to see while I was out there for three days. Getting out to meet people in Mac IT all over the country is the best part of my job, and I can’t wait to do it again in March.

Everything I Know (Now) About The 13-inch MacBook Pro (non Touch Bar) Solid-State Drive Service Program

This fall, Apple announced a service program for the non-Touch Bar MacBook Pros (also known as the MacBook Escape, for the hardware Esc key that they still have), specifically around the solid-state drive that stores the operating system and user data. Think of a service program a lot like a car’s technical service bulletin: it is designed to identify a potential failing of a given make and model of machine, and resolve that defect before it turns serious.

The Apple documentation for this repair is clear: the machine will have all of its data wiped during the firmware fix. Apple states: “Prior to service, it’s important to do a full back up of your data because your drive will be erased as part of the service process.” This means that you must back up the data before you take the machine to Apple. In our case, where Time Machine backups exist, we will perform a final update to the backup before the machine goes in. Where one does not exist, we will use Carbon Copy Cloner to back up to a disk image.
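If you’re scripting that final backup pass across a fleet, a minimal sketch might look like this. The function is illustrative, assumes macOS tooling, and is not invoked here:

```shell
# Sketch: one last backup before a machine goes in for the service
# program. Uses Time Machine when a destination is configured;
# otherwise, prompts the tech to image the disk instead.
final_backup() {
  if tmutil destinationinfo >/dev/null 2>&1; then
    tmutil startbackup --block    # run one backup and wait for completion
  else
    echo "No Time Machine destination; image the disk (e.g. Carbon Copy Cloner)"
  fi
}
```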

Today, I got to watch as a technician completed this process on a client computer, and I wanted to catalog what happened, as there’s not a step-by-step guide available for admins. In this case, I had three affected machines, and a Genius Bar appointment. Two of the machines failed the diagnostic portion of the firmware fix, and one was successful, which gave me a look at both cases of the SSD Firmware Update.

The Basics of the Solid-State Service Program

Before the process began, each of our machines was inspected to make sure it was in operating condition. After a brief check to determine OS level and functioning status, the machine was restarted, its PRAM zapped, and then it was run through the standard onboard diagnostics (i.e., hold Shift-D at boot). Our friendly Genius also reminded us for the third time that all data should be backed up at this point, or forever hold your peace. Now the machine was ready for the next step.

The firmware update process was handled in a NetBoot environment, as these machines are not T2 machines and thus can be NetBooted. A specifically created NBI was used by the Genius to boot the machine to a single-use tool. The appearance of this tool was very similar to booting into recovery: a standard window appears and offers a single tool, the SSD Firmware Update.

The actual process of running the SSD Firmware Update is quick. I clocked it at well less than three minutes. If there’s a failure, it’s even faster.

In The Event of a Failure

If the mechanism doesn’t pass muster, a failure dialog is displayed, advising that the machine’s SSD needs to be replaced. This was not something Apple was ready to do on the spot; the machine would need to go to a depot for repairs. There was a silver lining here: the existing volume was preserved with its information intact. This allowed us to take the machine back, do a direct transfer of data to an alternate loaner machine, and schedule the depot repair at our convenience. In short, the machine is ready to go back into use for the time being, and you’ve got a good backup.

In The Event of Success

If the mechanism does pass muster, you get one last confirmation before everything is wiped from the drive. This was the fourth time I was asked whether there was a backup of the volume. There was, so we proceeded.

After a short period, three to five minutes by my recollection, the firmware was updated and we could proceed. The machine was then booted into Internet Recovery, and we used Disk Utility to create a new APFS volume on the otherwise-vacant SSD. After the firmware update there was nothing on the disk, not even an empty volume, so a volume had to be created before the OS could be reinstalled.

Once that was completed, the OS was reloaded, and twenty minutes later we had a working machine again.

Summary And Opinions

The process here was, thankfully, fairly painless. The machines that failed the upgrade weren’t erased and can go gingerly back into the hands of their users until we can identify sufficient loaners. The machine that succeeded is now deemed cured and shouldn’t have this problem again. But that brings us to the scale of the problem. We had 40 MacBook Pros that fit the description of the service program, and something like 22 of them have to go to Apple in the coming months. I feel particularly awful for the company where 11 of their 18 machines have to go in.

The fact that this service program occasionally requires a depot repair is also deeply unfortunate, because how many loaners is a 15-person company supposed to keep around? In this case, it should be possible for an org to arrange to have these machines replaced in their entirety. Machines with this defect can simply stop working altogether, leaving a trusted member of your staff facing a nightmare recovery scenario. Worse, depot repair takes 5-7 days.

To bolster goodwill, I would hope that Apple would consider a straight machine swap for these units, handled in a way that’s more respectful of the time of Mac admins and Apple customers in general. It is also quite frustrating to arrange these firmware fixes with Apple en masse: it takes an hour to prepare a machine, an hour to transport it to Apple and wait in the store, and then another hour or two to restore the operating system and user data. In addition, this service program requires Apple’s direct participation. For shops using internal Apple-certified technicians, the tool is apparently not available via Global Service Exchange (GSX). That means you either find an AASP willing to help, who will still require you to bring the machines to their bench, or you make Genius Bar appointments for these machines.

All of them.

This isn’t a good experience for the companies that pay to be part of GSX, or the organizations that can’t participate on that scale. And these machines are fairly popular, as they represented a good balance between cost and functionality in a world where the Touch Bar is still a bit of an unknown quantity.

Yes, this is a special situation. It’s unlikely that any future machine will need this fix, thanks to the migration of the storage controller into Apple’s T2 silicon. That, however, underscores the need for a better customer experience to fix this issue in the long term.

We now have to go back to users and request their permission to disrupt them again in the future, and that’s not a fun experience. Just swap out the defective hardware for new units, and populate the refurb store with the difference. It’s the least Apple could do.

Point to Point Wireless with LiteBeam

From time to time, we get asked a question like “Hey, I need to get signal to a building that’s not part of our regular building. Can you do that?” and the answer is usually, “Sure, we could bury a fiber, or fly a cable,” mostly because we hadn’t felt the loss in speed and signal quality was worth it. We recently had a situation that called out for a wireless point-to-point link, though, and that got us thinking.

Our client took a new space on an upper floor of a warehouse building, across the loading dock from their storage space. They have a staff of two or three on the far side of the gap, and they wanted to extend their current connection to this space without paying for a second internet connection or relying on cellular hotspots, and the building is such that a flown cable or trenched fiber was impractical.

They’re a Ubiquiti shop, and so we looked at our options. There are the NanoStation and NanoBeam options, but our reseller house of choice was badly backordered, so we ended up with a LiteBeam AC Gen2 setup. Given what we found regarding our mounting situation, I think it’s fortunate we ended up with the antenna geometry and power pairing the LiteBeam offers.

The LiteBeam gear is powered by 24V passive injectors, or, if your switch is capable, it can take 24V passive PoE directly off a switch. Most places aren’t going to have switches capable of 24V passive power, and it’s a real bummer that’s what this requires. I’m still scratching my head as to why it won’t just take standard 802.3af.

When we toured the space, the client suggested that we could mount the warehouse dish on the exterior of the building and “easily” plumb the cable into their space. On the office side, we could position the dish in the north-facing window. There was no roof access, and definitely no exterior penetrations permitted in their space. So through the looking glass we went.

The LiteBeam antennas are parabolic reflector dishes approximately 14″ wide by 10″ tall by 10″ deep. They come with adjustable mounting equipment, including a super-helpful hose-clamp mount.

Specifications of the LiteBeam Gen2

Assembly is fairly rapid. The dish ships in three panels which slot together nicely and are then screwed together; the feed receiver attaches via tension-tab mounts, and the antenna feed snaps into place. From there, you can attach the elevation and azimuth mounts, which then attach to the pole-mount kit.

But, what if we don’t have a pole to mount to?

It was off to the hardware store to talk to my friendly neighborhood Annie’s Ace Hardware folks about ways to handle this. What we settled on was a set of galvanized flanges and pipe joints, which easily allowed us to mount an elbowed pipe to the vertical wall of the warehouse, and an offset pipe mounted to a piece of 2×4 with lag bolts for screwing into the window frame. This gave us superb stability at a cost of less than $50.

Two LiteBeam dishes with attached mounting kits, resting on a dining room table. A LiteBeam dish hanging from a pipe mount beneath a 2x4

Having mounted the office side, we went to mount the warehouse side. After several broken concrete anchors, a trip for a bigger drill and better anchors, and a lot of creative cabling, we were able to get the second dish properly mounted. Time had come to set up and test.

Now, we’d laid the groundwork ahead of time: everything had been firmware-updated, tested, and prepared from inside the warm office before heading out into the cold. We knew these things should sync up easily; we just had to get there and get the dishes aligned.

LiteBeam Wireless Link mounted in its final position

If we’d been smart, I’d have picked up a green laser pointer to help with the alignment of the two dishes, but the Mark I Eyeball still does the job pretty well. On our first attempt, we got the dishes aligned closely enough for a functioning link without having to futz with the positioning:

An image from the setup UI showing a functional link

The patient lives! We were getting about 20Mbps through the link, on a connection that’s often twenty times that fast, so we knew we had work to do. We got the signal up to 40dB, and that was about as good as it would get. With the LiteBeam rated for kilometers of range, we knew we should be doing better at a distance of under 200 feet.

To test our theory, we unmounted the dish and stood outside with it, and sure enough, signal strength spiked back up to the top of the range. The window’s coating was messing with our signal. There was, unfortunately, no fix for that, as glaziers weren’t in the budget for the move, but we did get service on the far side of the link up to 50Mbps on our speed test, more than adequate for a staff of two primarily doing light streaming and office work.
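For the curious, some back-of-the-napkin math shows why we were suspicious: free-space path loss over a roughly 200-foot (60 m) hop at 5.8 GHz is modest next to what this class of gear can deliver, so a weak signal at that range implies something in the path. A rough sketch in Python (the transmit power and antenna gain figures below are my ballpark assumptions, not Ubiquiti’s published specs):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# A ~200 ft (60 m) hop at 5.8 GHz
loss = fspl_db(60, 5.8e9)
print(f"Free-space path loss: {loss:.1f} dB")  # roughly 83 dB

# Assuming ~23 dBm transmit power and ~23 dBi of dish gain at each end
# (illustrative numbers), the predicted receive level is strong; a big
# shortfall from this points at extra attenuation, like coated glass.
rx_dbm = 23 + 23 + 23 - loss
print(f"Predicted receive level: {rx_dbm:.1f} dBm")
```

The exact link-budget numbers don’t matter much here; the point is that at 60 m the physics leaves plenty of margin, so the window coating was the prime suspect.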

Lessons Learned:

Building penetrations are never as easy as they say they are.

Window glass can be a tougher barrier to signal than you’d think.

A laser sight of some sort is required for aligning point-to-point wireless.

Sometimes $50 at the hardware store is going to be plenty for creative mounting solutions.

The LiteBeam gear is pretty awesome, but you need 24V passive PoE to power it, which is not awesome.

Supraventricular Tachycardia: Or, A Trip to the ER with my Apple Watch

Overall, I’m a pretty healthy person. My blood pressure’s normal, my resting heart rate is in the low 70s, my cholesterol is normal, my blood sugar is normal, and I can go for a good long bike ride or walk without feeling winded. I’m heavy — my BMI is obese — but I’m in good health overall. (General reminder that BMI is BS.)

I bought my Apple Watch Series 4 when Apple announced it this summer, an upgrade from my Series 2. I was attracted by the fall detection (I’m an award-winning, accident-prone fellow) and also by the new ECG feature. I have a family history of atrial fibrillation, and I’m now 40, so some precautions seemed wise.

This afternoon, I was helping a client move offices, mostly just deconstructing a simple network rack and moving access points into new space. I was doing some physical work, but nothing anyone would mistake for exercise. But, then I felt it. My heart was pounding. I got dizzy. Tunnel vision. I had to sit down.

heart rate city

I took my heart rate on the watch and it was over 200. I spent five years as a competitive swimmer, and to my knowledge I never got above 195. Even riding up Box Hill on Zwift didn’t get me over 170 this winter. 200 is scary territory. I remembered the ECG functionality, and googled how it worked. I took a reading.


I didn’t know how to read it, but I knew I was in a bit of trouble, so I had a coworker take me up to MedStar Washington Hospital Center, a mile or two away. Triage saw me rapidly, and I unlocked my phone to show the nurse. She was setting up a more complicated EKG, but because my heart rate had dropped back toward normal, it might not show anything they could read beyond normal operation.

As soon as the tele-doc came on screen, the nurse rotated my phone and put it up to the camera to show the doctor the rapid rhythm from half an hour earlier.

“Oh, that’s an SVT,” he said immediately.

I didn’t see what it had to do with Ford’s Special Vehicle Team, but he clarified that he meant Supraventricular Tachycardia. They wanted to make sure labs were taken, and that nothing abnormal in my blood work showed a more troubling cause. But the diagnosis was there in an instant, thanks to my wrist watch.

Both the attending and her supervisor wanted a look before the day was done, and I was sent home with instructions to go see my doctor (don’t worry, I’m going on Thursday), but now I’ve got something to show my medical team, as well.

Sure, a lot of the time it feels like we live in a dystopian version of the future, and I’m still not sure where the flying cars are, but today I used my wrist computer — list price $399 — to take an ECG before arriving at the emergency room, where a doctor, appearing in my room via video conference, was able to read that medical diagnostic and make a snap judgment that I was probably going to be alright for now.

Apple remains a company that exists five to ten years into the future, building bridges back to the present. Touch ID and Face ID. Secure Enclave. Device Enrollment Program. Apple Watch Series 4 Health Tools. Perfect? No. Better than the rest? By miles and miles.

Thanks, Apple. My heart is in your hands, it seems.