macOS 10.14.5 beta 2, Kernel Extension Notarization, UAMDM, Whitelisting and You

Editor’s Note: This is an evolving topic, and by the time you come across this in a search engine, circumstances may have changed. Treat this post as a frozen moment in time; things may have evolved for better or worse in the intervening weeks.

BLUF: If you are whitelisting kernel extensions by Team ID, or by Team ID and Bundle ID, on Macs with UAMDM, notarization is not necessarily required as of beta 2 of macOS 10.14.5. Macs without UAMDM-delivered kernel extension whitelists will need kernel extensions that are both validly signed and, if their notarization secureTimestamp falls after March 11th, 2019, notarized.

Kernel Extension Signing in macOS 10.14.5 beta 2

Let’s begin with the recitals: as of macOS 10.14.5’s release, kernel extension signing alone is no longer sufficient. Kernel extensions updated after March 11th, 2019, or created for the first time after that date, will need to be notarized as well as signed. This means that your application and all attendant parts must have been signed by the developer and notarized by Apple. Here is how Apple explains this:

Notarization gives users more confidence that the Developer ID-signed software you distribute has been checked by Apple for malicious components. Notarization is not App Review. The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly. If there are no issues, the notary service generates a ticket for you to staple to your software; the notary service also publishes that ticket online where Gatekeeper can find it.

Notarizing Your App Before Distribution, Apple Developer Documentation
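
Before we get to the tests, here is a quick way for an admin to check notarization from the command line. This is a general sketch rather than Apple’s prescribed workflow, the paths are placeholders, and the exact output wording can vary by macOS version:

# Assess a signed installer package; a notarized package reports "source=Notarized Developer ID"
spctl --assess -vv --type install /path/to/Installer.pkg

# Check whether a notarization ticket has been stapled to an app or kext bundle
stapler validate -v /path/to/Some.app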

We had two easy tests for how this operated. Once macOS 10.14.5 beta 2 was installed on my daily driver, I downloaded updates to two of the apps we use that have kernel extensions and had been updated after March 11th: VMware Fusion Pro 11.0.3 and Kerio’s VPN Client 9.3.0.

On install of the new VPN Client, I received the following dialog:

Rejection Dialog from macOS for an invalid kernel extension

Kerio’s VPN Client was now dead in the water, no matter what I tried as a follow-up. An inspection of the kvnet.kext bundle in /Library/Extensions (which requires Xcode 10.2, not just the Command Line Tools) indicated I no longer had a valid kernel extension:

Persephone: tom$ stapler validate -v /Library/Extensions/kvnet.kext/
Processing: /Library/Extensions/kvnet.kext
Properties are {
    NSURLIsDirectoryKey = 1;
    NSURLIsPackageKey = 1;
    NSURLIsSymbolicLinkKey = 0;
    NSURLLocalizedTypeDescriptionKey = "Kernel Extension";
    NSURLTypeIdentifierKey = "dyn.ah62d4qmuhk2x445ftb4a";
    "_NSURLIsApplicationKey" = 0;
}
Props are {
    cdhash = <5bf723ec 9f7a0027 4592266d 0514db04 5f1760bb>;
    digestAlgorithm = 1;
    flags = 0;
    secureTimestamp = "2019-04-08 12:34:03 +0000";
    signingId = "com.kerio.kext.kvnetnew";
    teamId = 7WC9K73933;
}
kvnet.kext does not have a ticket stapled to it.

Without a valid ticket stapled to the kext, I was going to have a problem running it, as the secureTimestamp value is after 2019-03-11.

Well, crap. I need that kernel extension for my VPN connections to client locations to work, so how am I going to get around this? Thanks to the #notarization channel on the Mac Admins Slack, to Allen Golbig at NASA Glenn, to Graham Pugh, and to the help of others, the answer was already in our hands: User-Accepted Mobile Device Management and Team ID whitelisting in the Kernel Extension Whitelisting payload in MDM.

If you have a Mac with UAMDM (either via actual user acceptance, or via implied acceptance through Automated Enrollment), and you are specifying the Team IDs of the kernel extensions you want whitelisted, the new notarization requirement does not come into play: no check is made against the kernel extension’s notarization, because its signature alone is sufficient for its privileged execution.
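
For reference, here’s a minimal sketch of the relevant keys in a Kernel Extension Policy configuration profile, using Kerio’s Team ID and bundle ID from above. The surrounding PayloadContent wrapper, identifiers, and UUIDs are omitted here and would be generated by your MDM, and the payload only has this effect when delivered over UAMDM:

<key>PayloadType</key>
<string>com.apple.syspolicy.kernel-extension-policy</string>
<key>AllowUserOverrides</key>
<true/>
<key>AllowedTeamIdentifiers</key>
<array>
    <string>7WC9K73933</string>
</array>
<key>AllowedKernelExtensions</key>
<dict>
    <key>7WC9K73933</key>
    <array>
        <string>com.kerio.kext.kvnetnew</string>
    </array>
</dict>

Whitelisting by Team ID alone (AllowedTeamIdentifiers) covers every kext signed by that team; the AllowedKernelExtensions dictionary narrows that down to specific bundle IDs.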

MacADUK 2019: Highlights & What I’m Taking Forward

St. Paul’s and the Thames

This year’s MacADUK Conference is in the books, and I’ve made it back to the States in one piece. It was a busy week, full of socializing and engaging with colleagues, as well as learning about new topics in client management and deployment workflows, encryption details, and security philosophy. My sincerest thanks to Ben Toms and James Ridsdale from Datajar who chaired this year’s conference, and to the team at Amsys that handled logistics and details.

Highlights

Park nearby Prospero House

This year’s conference had some great sessions. When the videos are out, I would strongly recommend seeing the following sessions on the small screen:

Armin Briegel, Deployment: Modern deployment workflows for business

Deployment is a source of opportunity for every IT team out there. It’s literally your coworkers’ first impression of your operation, so why aren’t you putting your best foot forward with customized deployment via Automated Enrollment and Mobile Device Management? Figuring out how to replace older ASR-based imaging with new deployment strategies is a challenge worth embracing.

Chris Chapman, macOS in a Docker Container for Development

There’s no question that Docker and Kubernetes are key components of modern software development stacks, especially for web-oriented applications. Chris Chapman of MacStadium has taken this to a whole new level by writing a boot loader for Kubernetes and Docker on Apple hardware, allowing you to deploy a macOS image through orchestration and Docker. The more I think about this, the crazier it is, but it demonstrates a flexibility that wasn’t possible before. I’m sure this is completely unsupported, but what a phenomenal way to think about the underlying tool chains we build from. It’s called Orka, and MacStadium is looking for beta sites.

David Acland, All about digital signatures

We spend a whole lot of our admin life making sure that signatures align and are approved, but how does that process actually happen? What’s the working relationship between a hash and a signature? What’s the actual cryptographic process used to take a hash and sign it as a proof of identity and integrity? David took us through the details, and it was a real pleasure. And my head didn’t explode.
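
If you want to poke at the mechanics yourself before the video is posted, the core hash-then-sign idea can be reproduced with openssl in a few lines. This is a generic illustration, not anything from David’s slides, and the file names are placeholders:

# Generate a keypair for the demonstration
openssl genrsa -out signing_key.pem 2048
openssl rsa -in signing_key.pem -pubout -out signing_key.pub

# Hash the payload with SHA-256 and sign the digest with the private key
openssl dgst -sha256 -sign signing_key.pem -out payload.sig payload.bin

# Verify: recompute the hash and check it against the signature with the public key
openssl dgst -sha256 -verify signing_key.pub -signature payload.sig payload.bin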

Ben Goodstein, Working with APIs: Power up your scripts with real time information

APIs in scripts are table stakes for adminry these days, and where better to get a refresher than with a low-stakes custom API that Ben wrote for accepting data from a script? He also told us about Insomnia, a GUI app for exercising APIs, which helps you review what comes back from a call and work out what information to gather. It was a great session, and I learned a lot of useful things to iterate on.
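
For the script side of that, the pattern is usually no more than a curl call and a little parsing. Here’s a minimal sketch against an entirely hypothetical endpoint, since Ben’s demo API isn’t public; the URL, token, and field names are placeholders:

#!/bin/bash
# Grab this Mac's serial number, then ask a hypothetical inventory API about it
serial=$(system_profiler SPHardwareDataType | awk '/Serial/ {print $4}')
response=$(curl -s -H "Authorization: Bearer ${API_TOKEN}" "https://api.example.com/v1/devices?serial=${serial}")

# Pull a single field out of the JSON reply (python ships with macOS)
echo "$response" | /usr/bin/python -c 'import json,sys; print(json.load(sys.stdin)["status"])'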

Commit No Nuisance

Takeaways

I had a few big lines of thought that came back a few times during the conference and led to some noodling in my head on walks through London. We’re once again at an inflection point in macOS administration, much as we were in the 10.8/10.9 period, the 10.7 period, and the 10.5 period. There are changes to our management structures that are no longer flashes in the pan:

MDM is not optional.

Deployment should be automated.

Manage as little as you need to retain a presence on the platform.

Managing more than you need to results in shadow IT and loopholes.

IT operations relies on trust: not just the mechanized, automated trust chains established through TLS certificates and certificate authorities, but the human trust that is implicit between Management and IT, and between IT and the end users, your coworkers. For any IT policy to succeed, it must come with buy-in from your coworkers, not just in your department, but in your whole organization. Systems that are deemed too complicated will be ignored. Systems that are deemed too cumbersome to operate will spark grudges. Systems that are deemed impossible to personalize will result in shadow IT usage on personal equipment.

The balance between security, usability, and management philosophy remains the single most important challenge of any IT environment, large or small. If you have a bad balance, your coworkers will fight with you, resent you, and eventually work around you and cut you out.

A light hand on your workstations will run up against internal and external security guidelines, though, and you’ll need to be ready with justifications grounded in feedback in the event that your choices are questioned. Obviously, there are some guidelines you can’t ignore. But the security of the platform needs to be part of your process, not bolted on, not thought of after, but holistically part of your deliberations and choices. Self-healing management is a part of that, as are centralized reporting mechanisms designed to track the state of a machine.

If IT isn’t working to enhance the culture of your organization by extending and embracing systems of participation and training, your value will be subsumed by internal groups that are doing these things. That means providing good guard rails, but also providing knowledge and power at the workstation level to enhance your colleagues’ ability to do their jobs.

IT is a human-facing department in 2019. We serve the people. We just also serve the machines they use.

Update Your Understanding of Wi-Fi: Workshop Opportunities

As an Apple-focused admin, one of the tools that most belongs in your toolbox is an understanding of how Wi-Fi works at a fundamental level. You need to know how devices interact with this network layer as much as, or more than, you know how they interact with TCP/IP and other Layer 3 technologies. In our day-long workshop, Chris Dawe and I are going to be talking about how Wi-Fi works, from a history of the technology (now in its third decade!), to how your Apple devices interact with Wi-Fi, to how to troubleshoot networks and design better ones.

We’ll be doing this workshop at X World in Sydney in late June, and then again at Mac Admins in July. We’re thrilled to be working with two of the Apple Admin world’s best conferences.

We’ll be starting with a thorough history of Wi-Fi, moving to the nuts and bolts of how Wi-Fi works between a client and access point, and introductory sections on network design, network troubleshooting, and network security, and then an overview of the survey and analysis tools. We’ll wrap it all up with how Apple devices interact with all of the above, and all the specialized knowledge you’ll need to make sure that your networks are tuned properly for your fleets.
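
As a small taste of the troubleshooting material, macOS already ships a command-line tool for inspecting the current Wi-Fi association, buried inside a private framework. A quick sketch; the tool is unsupported and its output can change between releases:

# Report the current association: RSSI, noise, channel, PHY mode, and more
/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I

# Scan for nearby networks and list their channels and signal levels
/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -s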

Physics Always Wins is the name of the workshop, and a stern reminder that there are good rules of the road to follow for good Wi-Fi. Discerning what the best settings are for your environment feels like an artisan’s job these days, given the layers of marketing speak and incomplete understandings of the radio frequency world. We’re hoping to give more Apple Admins a firmer understanding of how it all works, so you can make the right engineering choices.

Apple Ships Blindingly Fast New iMacs With Old, Slow Storage

I will guarantee you that the single greatest bottleneck in terms of speed on the base 4K iMac is that slow spinning disk drive. People who spend $1299 for a 4K iMac in 2019 deserve not to see a spinning beach ball—but they probably will. This is one case where Apple should either take the hit on profit margin or just raise the price if it has to.

Jason Snell, Six Colors, “The iMac and spinning-disk disappointment”

The 2019 iMacs have at their core incredible Intel processors, large amounts of RAM, market-leading displays and powerful 3D cards. These are machines that can game with the best, display beautiful movies and photographs with incredible color fidelity, and rip through even the most complicated processing needs in a bare minimum amount of time.

And they also ship by default with old, slow 5400rpm hard disks that came to the marketplace in 2007 in 1TB capacities. When Hitachi released the first Deskstar with 1TB that year, at a whopping price of $399, they boasted a cost-per-GB of $0.40. Now you can have a SATA SSD for less than $0.25 per GB, and an M.2 SSD for $0.35 per GB.

Sure, some of the newly released 1TB drives in the iMac are mated to small SSDs, but the Fusion Drive isn’t a substitute for a full-size SSD. The speed just isn’t there. The maximum throughput of a spinning disk is around 1.5Gbps, and that’s rarely achieved outside of the best conditions. Most of the time it’s under 1Gbps for a 7200rpm drive, let alone a 5400rpm drive, which will top out around 800Mbps under perfect conditions. A SATA-based SSD can do four to six times that throughput, topping out at 6Gbps, and those drives are readily available from OWC at retail.

If you move up to an M.2 SSD, similar to the kind used in the 2014-2015 MacBook Pros, the price increases. So does the speed, up to about 16 times the average read and write speed of the 5400rpm drives, topping out at 16Gbps. The current generation of MacBook Pros tops out closer to 20Gbps.

Apple has a hard job, to serve a wide clientele with varying needs, from home users, to education marketplaces, to corporate fleets, to small businesses and more. However, I can’t imagine that a 5400rpm drive on the desk of an Apple Executive, senior or otherwise, would last more than an afternoon. Why should it land on the desk of your average staffer, when they’re often the heart of an enterprise’s productivity? Why should it land in our schools’ computer labs, or on a creative’s desk?

We live in an era of unrivaled external and internal storage options and speed, with external disks coming in faster/better/larger/stronger, and cloud storage available without limits. An era where there are not one but two 40Gbps buses on the back of the 2019 iMac, ready to plug into a 1TB SanDisk SSD available for under $200.

But we also live in an era where every iMac’s base configuration has a 5400rpm drive still at its heart, just like 2007. Apple is selling us a sports car with a monster V-12, only they’re disabling six of the cylinders to save a few bucks. Given the importance of the iMac’s design to Apple’s brand, and to their brand awareness, this seems an odd choice.

Three-toed sloth is a photo by Magnus Bråth and used under a Creative Commons Attribution License.

A Hero’s Return

One of Technolutionary’s first purchases in 2006 was this Mac mini, which today returned to our offices after 13 years of duty in co-location at Solid Space in North Carolina. It’s been a file server, a mail server, and more, and we absolutely, positively got our money’s worth on this beauty.

Thanks for the service, Mac mini, your retirement awaits. Thank you Apple, for building technology we can depend on.

A sunset over Puget Sound, from the offices of Dropbox Seattle

Mac Admins Podcast 112: Live From Seattle

This past week, I traveled to Seattle, Washington to join the Apple Admins of Seattle and the Greater Northwest at their monthly meetup, and to record an episode of the Mac Admins Podcast with my friends Chris Dawe, Jonathan Spiva and Vi Lynk, as well as my new friend Ashley Smith. The topic was fairly simple: let’s talk about career paths and career trajectories and all the crazy things a life in IT can bring.

We talk a lot about technology in our jobs, but we don’t talk a lot about our jobs in technology, and it was great to sit down and chat about how we’ve gotten to where we are, where we’re headed, and what we’re learning about working with people, machines, and applications. In particular, I found Vi’s conversation about relationships mattering in IT to be illuminating. How we fit our departments and businesses into each other is so important. It made me go watch her talk from Penn State again, to remind myself of who my internal folks are, who my external folks are, and so I can close the loop with so many of those people again in the near term.

Ashley Smith reminded me of the importance of being willing to do the legwork on a topic when you don’t know the answer, and that the best response to a question you don’t know the answer to is “I don’t know, but I’ll find out.” We grind so many people through the grist mill of Tier 1 support, but we don’t spend time letting them learn, in favor of metrics that likely don’t have a good backing in objective reality. As part of managing service desks, we need to make sure that we’re not blindly adhering to metrics over the development of our people.

This week’s episode is a break from the minutiae of the job, in favor of some of the bigger picture. It’s worth your time to listen in the browser, or out on Overcast.

Seattle was a marvelous city to visit, even in the midst of winter, and I had so many incredible meals (Seven Stars Pepper! Harbor City! Jack’s! Arctic Club! Beer Star!) and conversations that it will remain in my heart. My thanks to the organizers of the event, and to everyone I got to see while I was out there for three days. Getting out to meet people in Mac IT all over the country is the best part of my job, and I can’t wait to do it again in March.

Everything I Know (Now) About The 13-inch MacBook Pro (non Touch Bar) Solid-State Drive Service Program

This fall, Apple announced a service program for the non-Touch Bar MacBook Pros (also known as the MacBook Escape, for the hardware Esc key they still have), specifically around the solid-state drive that stores the operating system and user data. Think of a service program as much like a car’s technical service bulletin: it is designed to identify a potential failing of a given make and model of machine and resolve that defect before it turns serious.

The Apple documentation for this repair is clear: the machine will have all of its data wiped during the firmware fix. Apple states: “Prior to service, it’s important to do a full back up of your data because your drive will be erased as part of the service process.” This means that you must back up the data before you take the machine to Apple. In our case, where Time Machine backups exist, we will perform a final update to the backup before the machine goes in. Where one does not exist, we will use Carbon Copy Cloner to back up to a disk image.
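
Where Time Machine is in play, that final pre-service backup can be kicked off and verified from the Terminal. A minimal sketch, assuming a backup destination is already configured:

# Start a Time Machine backup now and block until it completes
tmutil startbackup --block

# Confirm the newest backup made it to the destination
tmutil latestbackup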

Today, I got to watch as a technician completed this process on a client computer, and I wanted to catalog what happened, as there’s not a step-by-step guide available for admins. In this case, I had three affected machines, and a Genius Bar appointment. Two of the machines failed the diagnostic portion of the firmware fix, and one was successful, which gave me a look at both cases of the SSD Firmware Update.

The Basics of the Solid-State Service Program

Before the process began, each of our machines was inspected and confirmed to be in operating condition. After a brief check to determine OS level and functional status, the machine was restarted, its PRAM zapped, and then run through standard onboard diagnostics (i.e., holding D at boot). Our friendly Genius also reminded us for the third time that all data should be backed up at this point, or forever hold your peace. Now the machine was ready for the next step.

The firmware update process was handled in a NetBoot environment, as these machines are not T2 machines and thus can be NetBooted. The Genius used a specifically created NBI to boot the machine to a single-use tool. Its appearance was very similar to booting into Recovery: a standard window appears and offers a single tool, the SSD Firmware Update.

The actual process of running the SSD Firmware Update is quick. I clocked it at well less than three minutes. If there’s a failure, it’s even faster.

In The Event of a Failure

If the drive doesn’t pass muster, a failure dialog is displayed advising that the machine’s SSD needs to be replaced. This is not something Apple was ready to do on the spot; the machine would need to go to the depot for repairs. There was a silver lining here: the existing volume was preserved with its data intact. This allowed us to take the machine back, do a direct transfer of data to an alternate loaner machine, and schedule the depot repair at our convenience. In short, the machine is ready to go back into use for the time being, and you’ve got a good backup.

In The Event of Success

If the drive does pass muster, you get one last confirmation before everything is wiped from it. This was the fourth time I was asked if there was a backup of the volume. There was, so we proceeded.

After a short period — three to five minutes by my recollection — the firmware was updated and we could proceed. The machine was then booted into Internet Recovery, and we used Disk Utility to create a new APFS volume on the otherwise-vacant SSD. After the firmware update, there was nothing on the disk, not even an empty volume, so a volume had to be created before the OS could be reinstalled.
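
Disk Utility handled this for us, but the same result can be had from Terminal in Recovery. A rough sketch with illustrative disk identifiers; always confirm yours with diskutil list first:

# Identify the internal SSD and any existing APFS container (identifiers vary per machine)
diskutil list

# If the firmware update left no container behind, create one on the SSD's data slice
diskutil apfs createContainer disk0s2

# Add a bootable volume to the newly synthesized container
diskutil apfs addVolume disk1 APFS "Macintosh HD"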

Once that was completed, the OS was reloaded, and twenty minutes later we had a working machine again.

Summary And Opinions

The process here was, thankfully, fairly painless. The machines that failed the upgrade weren’t erased and can go gingerly back into the hands of their users until we can identify sufficient loaners. The machine that succeeded is now deemed cured and shouldn’t have this problem again. But that brings us to the problem’s mere existence: we had 40 MacBook Pros that fit the description of the service program, and something like 22 of them have to go to Apple in the coming months. I feel particularly awful for the company where 11 of their 18 machines have to go in.

The fact that this service program occasionally requires a depot repair is also deeply unfortunate, because how many loaners is a 15-person company supposed to keep around? In this case, it should be possible for an org to arrange to have these machines replaced outright. Machines with this defect can simply stop working entirely, leaving a trusted member of your staff facing a nightmare recovery scenario. Worse, depot repair takes 5-7 days.

To bolster goodwill, I would hope that Apple would consider a straight machine swap for these units, to get them replaced in a way that is more respectful of the time of Mac Admins and Apple customers in general. It is also quite frustrating to arrange with Apple to do these firmware fixes en masse. It takes an hour to prepare a machine, an hour to transport it to Apple and wait in the store, and then another hour or two to restore the operating system and user data. In addition, this service program requires Apple’s participation. For shops using internal technicians who are Apple-certified, this tool is apparently not available via Global Service Exchange (GSX). That means you either have to find an AASP who will help you, though they will still require you to bring the machines to their bench, or you have to make Genius Bar appointments for these machines.

All of them.

This isn’t a good experience for the companies that pay to be part of GSX, or the organizations that can’t participate on that scale. And these machines are fairly popular, as they represented a good balance between cost and functionality in a world where the Touch Bar is still a bit of an unknown quantity.

Yes, this is a special situation. It’s unlikely that any future machine will need this fix, thanks to the migration of the storage controller into the T2 silicon that Apple now uses. That, however, underscores the need for a better customer experience to fix this issue for the long term.

We now have to go back to users and request their permission to disrupt them again in the future, and that’s not a fun experience. Just swap out the defective hardware for new, and populate the refurb store with the difference. It’s the least Apple could do.