Inbox Ten

Because 10 > 0

I’ve heard people recommend Inbox Zero as a way of managing your email – that you try to get and/or keep zero messages in your inbox.  I’ve tried it, but I’ve always struggled, because there always seem to be a couple of emails that I’m not ready to file or delete just yet.

So I’ve come up with a solution that has been working pretty well for me for over two years – once a week, I get my email inbox down to ten messages or fewer.

Is this a joke?

No, it’s not a joke – it’s a reimagining of the goal.  The point of Inbox Zero isn’t actually to get to zero emails, it’s to keep your inbox manageable.  Ten is just as manageable as zero, and so much easier to achieve.

What Stays?

For me, if I have a shipping notice for something that hasn’t arrived yet, I may keep that around as a reminder to look for a package.  If I have an email I need to respond to, that might stick around.  If I have an email with information that I’m going to need in a couple of days, I’ll leave that in the inbox until then.

Why does this help?

This helps because it’s more doable.  If I have 50 emails, there are probably 35 that are easy to process or file, 10 that take a few minutes but are doable, and 5 that are going to take some serious work, or are blocked.  If I can get through the 35 easy emails and 10 mediums, then I’m done.  I’ve winnowed my inbox back down to a manageable amount without having to get rid of emails that I actually want to keep around.

If it’s easy, you’re more likely to stick with it.

Give it a try

If you’ve tried Inbox Zero but never been able to make it stick, try getting to Inbox Ten this week. 

Reverse Engineering on the Device – Slides from Droidcon UK 2019

I had the opportunity to present an expanded version of my talk about reverse engineering Android apps on the device at this year’s Droidcon UK.

The video is available on the SkillsMatter website.


You can also see the slides & video from a shorter version of this talk at Droidcon NYC 2019.

Reverse Engineering on the Device – Slides from Droidcon NYC 2019

At this year’s Droidcon NYC, I had the chance to talk about some oddball techniques for reverse engineering Android apps from another app on a device.

Thanks to everyone who came out!  The slides are available now.

Save time and reduce risk with Gradle’s includeGroup

With the recently released version 5.1, Gradle has added a great, subtle new feature that lets you specify which dependencies should be pulled from which repositories.  To explain what this is, let’s start with the default behavior. In your Gradle file, you probably have multiple repositories defined, like this:
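For an Android project, that might look something like this (a sketch; the Fabric URL is an assumption based on Fabric’s setup docs, and your own list may differ):

```groovy
repositories {
    google()
    jcenter()
    maven { url 'https://maven.fabric.io/public' }
}
```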

When Gradle needs to find a dependency, it will search each of those repositories in the order they are declared.  So when it goes to download, for example, [com.android.support.constraint:constraint-layout:1.1.3], it will first check the Google repo, which has that artifact, so it’s done.  But then let’s say you want RxJava: [io.reactivex.rxjava2:rxjava:2.1.9]. Gradle checks for it in the Google repo, but Google responds with an HTTP 404 error, so Gradle moves on to the next repository, which is JCenter.  And on and on, for each dependency in your build.

This can lead to a couple of problems:

  • There’s a performance problem.  Since Gradle has to check each repository in order for each dependency, there are a lot of requests that return a 404, and you waste time and resources.  Wouldn’t it be nice if we could tell Gradle “oh, I know RxJava is on JCenter, so don’t bother checking the Google repository”?
  • If a repository that’s first in the list gives a bad response (like the time JCenter responded to Google artifacts with an HTTP 409 error), Gradle will give up and not check other repositories.  This will break your build, and leads to a lot of advice like “make sure you list the Google repository first!”
  • You’re vulnerable to a spoofing attack.  If you have a dependency that’s in your last repository (fabric in the example above), but a malicious actor uploads a library with the same group and artifact names to JCenter, for example, Gradle will download the JAR from JCenter since it’s higher in the list, and you’ll never know the difference.  JCenter doesn’t do much to verify that you are who you say you are, so this type of attack is a real risk.

So how do we resolve this?  Gradle 5.1 adds a new API so you can specify which groups to include or exclude in a repository.  A quick note about what I mean by groups: it’s the bit before the first colon in a Gradle coordinate.
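For example, in the RxJava coordinate above, the group is io.reactivex.rxjava2:

```
io.reactivex.rxjava2 : rxjava   : 2.1.9
       group           artifact   version
```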

So in practice, your build.gradle might now look something like this – note that you can match groups exactly or with a regular expression:
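Here’s a sketch of what such a configuration could look like (the URLs, groups, and regexes are my assumptions, chosen to match the dependencies discussed above):

```groovy
repositories {
    maven {
        url 'https://maven.google.com'
        content {
            // regex matches: covers com.android.support.constraint,
            // com.google.gms, and friends
            includeGroupByRegex "com\\.android.*"
            includeGroupByRegex "com\\.google.*"
        }
    }
    jcenter {
        content {
            // exact match
            includeGroup "io.reactivex.rxjava2"
        }
    }
    maven {
        url 'https://maven.fabric.io/public'
        content {
            includeGroup "io.fabric.sdk.android"
        }
    }
}
```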

Now, when Gradle goes to download ConstraintLayout, it will match the regex on the Google repo, and Gradle will never attempt to download it from another repository.  Likewise, even though the Google repository is listed first, Gradle won’t attempt to download RxJava from it, because its group isn’t listed in the include groups.

If you want to test this, try running a Gradle task with “--refresh-dependencies”, which will force Gradle to try to download all of your dependencies again.  If you get an error like this one, then you know you still need to work on your configuration.

> Could not resolve all files for configuration ':app:debugCompileClasspath'.
   > Could not find io.reactivex.rxjava2:rxjava:2.2.2.

The important thing to know about includes and excludes is that the behavior is defined per repository.

  • If you list an include – Gradle will only try to download the included groups for this repository.
  • If you list an exclude – Gradle will try any groups except the excluded groups for this repository.
  • If you list includes and excludes – Gradle will download only groups that are included and not excluded.
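As a sketch of the exclude case (the group name is hypothetical):

```groovy
repositories {
    jcenter {
        content {
            // Gradle will try any group against this repository
            // EXCEPT com.example
            excludeGroup "com.example"
        }
    }
}
```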

There’s one non-obvious thing that I want to really emphasize, though: if you specify includeGroup on one repo but don’t specify any groups on a second repo, Gradle will still try to download com.google artifacts from both of those repositories.  The options you declare for one repository don’t affect other repositories.
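A sketch of that trap (group names are illustrative):

```groovy
repositories {
    maven {
        url 'https://maven.google.com'
        content {
            includeGroup "com.google.gms"
        }
    }
    // No content filter here, so Gradle will still try com.google.gms
    // (and everything else) against this repository too.
    jcenter()
}
```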

This leads to my last, but perhaps most important piece of advice: whitelist every group.  The best way I can see to use this is to include every group in the appropriate repository, and make sure every listed repository has an includeGroup declared.  This will force Gradle to download each dependency from the right repository only.

What’s the point of Lockdown Mode in Android P?

Lockdown Mode is a new feature in Android P — it disables fingerprint login, forcing you to enter your PIN/passcode to unlock the device.  This is an important, but subtle, distinction. Ultimately, what’s the difference between your passcode and your fingerprint?

The difference comes down to legal precedents in the US.  Legally, the police can compel you to unlock a device using biometrics (e.g. face or fingerprint).  They can’t, however, force you to unlock the device using your passcode. This is due to the Fifth Amendment to the US constitution — you can’t be compelled to testify against yourself.  Courts have said that revealing your passcode is equivalent to providing testimony against yourself — but using your body to unlock the phone isn’t legally the same thing. I’m not going to try to explain the underlying legal theories in depth, but if you want to read more, check out these articles:

There are a lot more details and complications that I’m glossing over, but if you were curious about why Android bothered to create a new mode that forced one type of login while disabling another — I think this is why they did it.

Disclaimer: I’m not a lawyer and this isn’t legal advice — if police action is in your threat model, you need to talk to a real lawyer.

Your Smartphone is the Brain, Everything Else is an Appendage

At their I/O conference, Google announced Android Wear, Android Auto, and Android TV.  As I watched the series of announcements, I was struck with this metaphor: your smartphone is the brain, and everything else is an appendage.  Your watch will give you notifications at a glance and take voice commands, but it’s really just funneling that data back to your phone. Android Auto will bring navigation and music control into your car’s dashboard, but it’s just mirroring the maps and music streams from your phone.  Android TV will play movies and search through IMDB, but your phone is the remote control.

All Five Senses (well, 3 out of 5 ain’t bad!)

Our brain doesn’t see or feel or move us directly.  It processes signals from my nose that smell bacon, then sends signals back out to my legs that say “I want to go to there,” and I start walking.  Likewise, my phone doesn’t track my steps itself; it uses a sensor in an Android Wear device to log that info, then relays it back to the phone to make sense of it.  Then the phone/brain can say “well, it’s 6PM and you’ve only walked 6,000 steps today, it’s sunny and 75, so grab the dogs and go for a walk.”  It sends out a notification that interrupts me browsing Instagram and gets me moving.

We have virtually replicated three of the senses; we have cameras (sight), microphones (sound), and touchscreens (touch).  Our phones have sensors for all three, and these appendage devices tend to have at least two.

Input & Output

I got on board the Android train back with the G1, and I remember thinking how powerful this device was, even with that rough first-gen hardware.  In the office I was working on a simulator for a $25,000 drone platform for the Marine Corps, but for a couple hundred bucks I had just bought a computer fully integrated with a fast internet connection, location and orientation and acceleration sensors, hi-res E/O sensor (you’d call this a camera), proximity sensor, 2D touch sensor, and keyboard.  I could read the signals from all of these sensors, process them however I wanted, then feed signals back out using the display, LED lights, vibration, or sound.

Since then phones have added a few more inputs (fingerprint sensors, heart rate monitors, multiple cameras for 3D).  What we saw at I/O was a proliferation of these.  Android Wear adds inputs (voice, step counter, wrist motion, heart rate, touch) and outputs (display & vibration on your wrist).  Android Auto uses your car’s display and audio system for I/O.  Android TV uses your existing TV to add a 50″ display to your smartphone.

Why these new Appendages Matter

Some of these might seem like minor additions (why do I need a display on my wrist when my phone is right in my pocket?), but the power is in the context.  If I’m bored in the office, sure, I don’t mind pulling out my phone and reading through a stream.  But if I’m driving a car, looking down at my phone is a seriously dangerous distraction.  If, instead, I can tell my wrist “OK Google, navigate to my next meeting” without taking my eyes off the road, we’ve enabled the brains of your smartphone to help in a new context where the smartphone itself can’t get the job done.

This may not be as life-changing as the jump from feature phones to smartphones, but it is still an improvement.

Flipping the Network

I once worked in a lab where they wanted any user to be able to sit at any computer and start working.  So I was issued a hardware token (think of a smart card, or a USB plug – holding my private key) that I could plug into a computer; it would recognize me, load in my environment and preferences, and I could pick up working wherever I left off.  The central brain was in a server room, but I could use whatever computer was in front of me for I/O.  With Android, we’re flipping that around.  I can still use whatever device is handy for I/O, whether that’s my watch, my TV, my car; but now the central brain is sitting in my pocket with the phone, rather than in a server room.

So What

My point is that these new appendages shouldn’t be viewed on their own. They’re not replacements; you’re not going to get rid of your phone when you get a new watch.  Instead, the watch becomes additional input and output for your phone.  So don’t think of the watch’s utility as a watch, think of what it can do as extra I/O for your phone.

Jenkins CI broken on upgrade to Mac OS Mavericks

Putting this out there in case somebody has the same problem.  I upgraded my Jenkins box to OS X Mavericks, and Jenkins stopped responding; requests to localhost:8080 simply dropped.

After a bit of digging and dead ends, I found out that Java wasn’t installed.  Running

javac -version

from the command line failed and asked me to install Java.  I installed the latest JDK and restarted Jenkins with these commands:

sudo launchctl unload -w /Library/LaunchDaemons/org.jenkins-ci.plist
sudo launchctl load -w /Library/LaunchDaemons/org.jenkins-ci.plist

and everything seems to be back to normal.

Virginia Traffic outage

The Virginia Traffic app experienced an outage this weekend.  VDOT changed their website significantly, which broke the Virginia Traffic app’s reading of their data.

The app has mostly recovered; you should see incidents as before, though some incidents might not show up under the right regions.  I’m working on it.

The good news is that VDOT has added some very useful metadata to their data, so the app will be able to take advantage of this data in a future release.