Save time and reduce risk with Gradle’s includeGroup

With the recently released version 5.1, Gradle has added a great, subtle new feature that lets you specify which dependencies should be pulled from which repositories.  To explain what this is, let’s start with the default behavior. In your Gradle file, you probably have multiple repositories defined, like this:
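The original example was a screenshot; a representative setup, reconstructed from the repositories mentioned later in this post (Google, JCenter, and Fabric — the Fabric URL is the standard public one, but treat the exact list as illustrative), looks like:

```groovy
repositories {
    google()
    jcenter()
    maven { url 'https://maven.fabric.io/public' }
}
```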

When Gradle needs to find a dependency, it will search each of those repositories, in the order they are declared.  So when it goes to download, for example, [com.android.support.constraint:constraint-layout:1.1.3], it will first check the Google repo, which has that artifact, so it’s done.  But then let’s say you want RxJava: [io.reactivex.rxjava2:rxjava:2.1.9]. Gradle checks for it in the Google repo, but Google responds with an HTTP 404 error, so Gradle moves on to the next repository, which is JCenter.  And on and on, for each dependency in your build.

This can lead to a couple of problems:

  • There’s a performance problem.  Since Gradle has to check each repository in order for each dependency, there are a lot of requests that return a 404, wasting time and resources.  Wouldn’t it be nice if we could tell Gradle “oh, I know RxJava is on JCenter, so don’t bother checking the Google repository”?
  • If a repository that’s first in the list gives a bad response (like the time JCenter responded to requests for Google artifacts with an HTTP 409 error), Gradle will give up and not check other repositories.  This will break your build, and it leads to a lot of advice like “make sure you list the Google repository first!”
  • You’re vulnerable to a spoofing attack.  If you have a dependency that’s in your last repository (fabric in the example above), but a malicious actor uploads a library with the same group and artifact names to JCenter, for example, Gradle will download the JAR from JCenter since it’s higher in the list, and you’ll never know the difference.  JCenter doesn’t do much to verify that you are who you say you are, so this type of attack is a real risk.

So how do we resolve this?  Gradle 5.1 adds a new API so you can specify which groups to include or exclude in a repository.  A quick note about what I mean by groups: it’s the bit before the first colon in a Gradle coordinate.  For example, in [io.reactivex.rxjava2:rxjava:2.1.9], the group is io.reactivex.rxjava2.

So in practice, your build.gradle might now look something like this – note that you can match groups exactly or with a regular expression:

Now, when Gradle goes to download constraint-layout, it will match the regex on the Google repo, and Gradle will never attempt to download it from another repository.  Likewise, even though the Google Maven repository is listed first, Gradle won’t attempt to download RxJava from it, because its group isn’t in that repo’s include list.

If you want to test this, try running a Gradle task with --refresh-dependencies, which will force Gradle to try to download all of your dependencies again.  If you get an error like this one, then you know you still need to work on your configuration.

> Could not resolve all files for configuration ':app:debugCompileClasspath'.
   > Could not find io.reactivex.rxjava2:rxjava:2.2.2.

The important thing to know about includes and excludes is that the behavior is defined per repository.

  • If you list an include – Gradle will only try to download the included groups for this repository.
  • If you list an exclude – Gradle will try any groups except the excluded groups for this repository.
  • If you list includes and excludes – Gradle will download only groups that are included and not excluded.
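As a sketch of the exclude case (the group regex here is just an example), this would let a repository serve any group except com.google.* artifacts:

```groovy
repositories {
    maven {
        url 'https://jcenter.bintray.com'
        content {
            // this repo will be tried for every group EXCEPT these
            excludeGroupByRegex 'com\\.google.*'
        }
    }
}
```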

There’s one non-obvious thing that I want to really emphasize, though:  if you specify includeGroup on one repo but don’t specify any groups on a second repo, Gradle will still try to download com.google artifacts from both of those repositories.  The options you declare for one repository don’t affect other repositories.
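To make that pitfall concrete, in a setup like this (illustrative), the filter on the first repo does nothing to stop Gradle from also asking the second, unfiltered repo for com.google artifacts:

```groovy
repositories {
    maven {
        url 'https://maven.google.com'
        content {
            includeGroup 'com.google.gms'
        }
    }
    // no content filter here, so Gradle will still ask this repo
    // for com.google.gms artifacts too
    jcenter()
}
```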

This leads to my last, but perhaps most important piece of advice: whitelist every group.  The best way I can see to use this is to include every group in the appropriate repository, and make sure every listed repository has an includeGroup declared.  This will force Gradle to download each dependency from the right repository only.

What’s the point of Lockdown Mode in Android P?

Lockdown Mode is a new feature in Android P — it disables fingerprint login, forcing you to enter your PIN/passcode to unlock the device.  This is an important, but subtle, distinction. Ultimately, what’s the difference between your passcode and your fingerprint?

The difference comes down to legal precedents in the US.  Legally, the police can compel you to unlock a device using biometrics (e.g. face or fingerprint).  They can’t, however, force you to unlock the device using your passcode. This is due to the Fifth Amendment to the US constitution — you can’t be compelled to testify against yourself.  Courts have said that revealing your passcode is equivalent to providing testimony against yourself — but using your body to unlock the phone isn’t legally the same thing. I’m not going to try to explain the underlying legal theories in depth, but if you want to read more, there’s plenty of good coverage of the relevant court rulings on compelled decryption.

There are a lot more details and complications that I’m glossing over, but if you were curious about why Android bothered to create a new mode that forced one type of login while disabling another — I think this is why they did it.

Disclaimer: I’m not a lawyer and this isn’t legal advice — if police action is in your threat model, you need to talk to a real lawyer.

Your Smartphone is the Brain, Everything Else is an Appendage

At their I/O conference, Google announced Android Wear, Android Auto, and Android TV.  As I watched the series of announcements, I was struck by this metaphor: your smartphone is the brain, and everything else is an appendage.  Your watch will give you notifications at a glance and take voice commands, but it’s really just funneling that data back to your phone. Android Auto will bring navigation and music control into your car’s dashboard, but it’s just mirroring the maps and music streams from your phone.  Android TV will play movies and search through IMDB, but your phone is the remote control.

All Five Senses (well, 3 out of 5 ain’t bad!)

Our brain doesn’t see or feel or move us directly.  It processes signals from my nose that smell bacon, then sends signals back out to my legs that say “I want to go to there,” and I start walking.  Likewise, my phone doesn’t track my steps itself; it uses a sensor in an Android Wear device to log that info, then relays it back to the phone to make sense of it.  Then the phone/brain can say “well, it’s 6 PM and you’ve only walked 6,000 steps today, it’s sunny and 75, so grab the dogs and go for a walk.”  It sends out a notification that interrupts my Instagram browsing and gets me moving.

We have virtually replicated three of the senses; we have cameras (sight), microphones (sound), and touchscreens (touch).  Our phones have sensors for all three, and these appendage devices tend to have at least two.

Input & Output

I got on board the Android train back with the G1, and I remember thinking how powerful this device was, even with that rough first-gen hardware.  In the office I was working on a simulator for a $25,000 drone platform for the Marine Corps, but for a couple hundred bucks I had just bought a computer fully integrated with a fast internet connection, location and orientation and acceleration sensors, hi-res E/O sensor (you’d call this a camera), proximity sensor, 2D touch sensor, and keyboard.  I could read the signals from all of these sensors, process them however I wanted, then feed signals back out using the display, LED lights, vibration, or sound.

Since then phones have added a few more inputs (fingerprint sensors, heart rate monitors, multiple cameras for 3D).  What we saw at I/O was a proliferation of these.  Android Wear adds inputs (voice, step counter, wrist motion, heart rate, touch) and outputs (display & vibration on your wrist).  Android Auto uses your car’s display and audio system for I/O.  Android TV uses your existing TV to add a 50″ display to your smartphone.

Why these new Appendages Matter

Some of these might seem like minor additions (why do I need a display on my wrist when my phone is right in my pocket?), but the power is in the context.  If I’m bored in the office, sure, I don’t mind pulling out my phone and reading through a stream.  But if I’m driving a car, looking down at my phone is a seriously dangerous distraction.  If, instead, I can tell my wrist “OK Google, navigate to my next meeting” without taking my eyes off the road, we’ve enabled the brains of the smartphone to help in a new context where the smartphone itself can’t get the job done.

This may not be as life-changing as the jump from feature phones to smartphones, but it is still an improvement.

Flipping the Network

I once worked in a lab where they wanted any user to be able to sit at any computer and start working.  So I was issued a hardware token (think of a smart card, or a USB plug – holding my private key) that I could plug into a computer; it would recognize me, load in my environment and preferences, and I could pick up working wherever I left off.  The central brain was in a server room, but I could use whatever computer was in front of me for I/O.  With Android, we’re flipping that around.  I can still use whatever device is handy for I/O, whether that’s my watch, my TV, my car; but now the central brain is sitting in my pocket with the phone, rather than in a server room.

So What

My point is that these new appendages shouldn’t be viewed on their own. They’re not replacements; you’re not going to get rid of your phone when you get a new watch.  Instead, the watch becomes additional input and output for your phone.  So don’t think of the watch’s utility as a watch, think of what it can do as extra I/O for your phone.

Jenkins CI broken on upgrade to Mac OS Mavericks

Putting this out there in case somebody has the same problem.  I upgraded my Jenkins box to OS X Mavericks, and Jenkins stopped responding; requests to localhost:8080 simply dropped.

After a bit of digging and dead ends, I found out that Java wasn’t installed.  Running

javac -version

from the command line failed and asked me to install Java.  I installed the latest JDK, then restarted Jenkins with these commands:

sudo launchctl unload -w /Library/LaunchDaemons/org.jenkins-ci.plist
sudo launchctl load -w /Library/LaunchDaemons/org.jenkins-ci.plist

and everything seems to be back to normal.

Virginia Traffic outage

The Virginia Traffic app experienced an outage this weekend.  VDOT changed their website significantly, which broke the app’s reading of their data.

I’ve mostly recovered: you should see incidents as before, though some incidents might not show up under the right regions.  I’m working on it.

The good news is that VDOT has added some very useful metadata to their data, so the app will be able to take advantage of this data in a future release.

24 Game Solver

My wife teaches elementary-school math, and I’m somewhat of a math nerd, so the 24 game is right up our alley.  Basically, you’re given four numbers, and you have to find a series of operations that makes 24 from those numbers.  For example, given 1, 2, 3, and 4, you might respond that  1*2*3*4 = 24.  Some sets of numbers are harder than others (much harder).

One day, my wife and her class were having trouble solving a particularly hard set, so she emailed me for help.  It took me a while to find the answer, and all the while I was thinking to myself “Self, I’m a programmer.  Why am I doing this the hard way?”  So now I’ve created the easy (cheater) way.  Go to http://jebware.com/24, input your 4 numbers, and it will tell you how to make 24.

Right now it does addition, subtraction, multiplication, division, and exponentiation.  However, it doesn’t understand the commutative property, so you’ll get a lot of answers that are essentially the same, like (1*2)*(3*4) and (4*3)*(2*1). I wrote it in JavaScript, and if you want the source or you want to improve on it, I made a repository on GitHub.
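For the curious, the heart of a solver like this is a brute-force search: repeatedly pick two of the remaining numbers, combine them with an operator, and recurse until one number is left.  Here’s a simplified sketch of that idea (my own illustrative version, without the exponentiation the real one supports):

```javascript
// Brute-force 24 solver: combine two numbers at a time with +, -, *, /
// and recurse until one value remains. Each entry pairs a value with
// the expression string that produced it.
function solve24(nums) {
  const EPS = 1e-6;

  function search(items) {
    if (items.length === 1) {
      return Math.abs(items[0].value - 24) < EPS ? items[0].expr : null;
    }
    // Try every ordered pair of remaining items.
    for (let i = 0; i < items.length; i++) {
      for (let j = 0; j < items.length; j++) {
        if (i === j) continue;
        const rest = items.filter((_, k) => k !== i && k !== j);
        const a = items[i], b = items[j];
        const candidates = [
          { value: a.value + b.value, expr: `(${a.expr}+${b.expr})` },
          { value: a.value - b.value, expr: `(${a.expr}-${b.expr})` },
          { value: a.value * b.value, expr: `(${a.expr}*${b.expr})` },
        ];
        // Guard against division by (near-)zero.
        if (Math.abs(b.value) > EPS) {
          candidates.push({ value: a.value / b.value, expr: `(${a.expr}/${b.expr})` });
        }
        for (const c of candidates) {
          const found = search([...rest, c]);
          if (found) return found;
        }
      }
    }
    return null;
  }

  return search(nums.map(n => ({ value: n, expr: String(n) })));
}
```

Because it tries every ordered pair, this sketch has the same commutativity blind spot I mentioned above: it will happily report mirror-image answers as distinct.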

The “November Nor’easter”

That’s what the local news is calling it, at least. I went out yesterday and got some pictures. Keep in mind that I did this about five hours after high tide, so the water had already gone down a little bit from its morning peak.

These folks were trying to tow their minivan out of the water. I took this picture right after their rope snapped.
Police were blocking this street from both directions so that nobody would even try to get through. Which is probably good, because I saw people attempting some pretty stupid things.

Submerged cars. You’ll notice this becomes a theme.

These people were out taking pictures with their dog; I saw one of their pictures later on the local news’ website. I tried taking my dog along for this expedition, but she gave up after two blocks.

This vehicle wasn’t abandoned yet, I think the owner was still sitting in it.

Probably wishing you hadn’t parked your BMW on that particular block.

You want to know what’s sad? So far these pictures aren’t even tidal flooding. They’re just areas that don’t drain.
In the middle of this shot is Smith’s Creek. On the left is a road, Mowbray Arch a.k.a. the Smith’s Creek Annex. This is the only picture in this set with tidal flooding.

Remember that BMW? When I came back by a tow truck was fishing it out. I hear that the waiting list for tow trucks is getting pretty long.

“Neither rain, nor sleet, nor gloom of night”. What you can’t see in these pictures is that it was still raining hard. And the wind was blowing at 50 mph gusts.