PSA: Connecticut Tech is Getting Exciting

New Haven Ruby is going strong, and New Haven JavaScript is heating up fast. We had a great turnout last night, with about 40 people at the Digital Surgeons office, and some great talks. Meet-ups are full of energy, introductions, and catching up with friends. And, obviously, tech talk.

A broader New Haven tech community is emerging, combining NH.rb and NH.js with The Grove, Independent Software, and now the A100 Developer Community.

The Greater Hartford ACM chapter is spinning up, and doing very well so far, hosting talks and tours of Connecticut technology centers. Hartford JavaScript is starting and Hartford Ruby is making a come-back (which I’m especially happy to see).

Connecticut hackers are doing some great things. Come out and join us!

Direct Manipulation, and Text Editors

Hat-tip to Giles Bowkett for mentioning Bret Victor’s talk, Stop Drawing Dead Fish, about direct manipulation and computer interaction as a tool for illustrators and animators.

I paused the video part-way through, around 34:30, so I could Get Down to Work – right after Bret Victor mentioned David Hestenes’ idea that algebra and geometry help us model the world in similar ways, but that algebra uses our linguistic ability, while geometry uses our visual/spatial perception. Victor went on to say that tools for visual artists should use our visual/spatial abilities (direct manipulation), rather than our linguistic ones (code).

Like I said, it was time to Get Down to Work. I flipped over to Sublime Text 2, where I happened to have a block of text selected. Warming up, I idly hit command-I to search for a bit of text, only to realize that Sublime was searching just within that selected text. This is handy! I’ve been wanting something like this lately, when a method or variable name shows up all over the file, but I’m only working on one method.

Using the trackpad to select a bunch of text, and then working on it, feels a lot like I’m holding the text in one hand, and working on it with Sublime in the other. I discovered this accidentally, too – I’ve felt pretty productive with Sublime after the (tiny) initial bump, and I occasionally, gradually, get better as I learn new tricks.

I switched to Sublime after trying to learn Vim for a month or so. I’d been an emacs user for a few years, but I was under-using it, and didn’t need all the extra machinery. Vim seemed lighter, so I tried it out. But it felt like learning emacs all over again, just with different incantations: everything forced through the keyboard. And I get that! I’m a home-row Dvorak typist! But I still felt like emacs and vim were a step back from humbler tools like Programmer’s Notepad.

Bret Victor’s talk suggests an interesting explanation for that. He points out that some animations are a pain to create manually, and some behaviors are hard to code into the tool: so you divide and allocate tasks to the computer and the artist according to their abilities, and their cooperation produces the best effect.

Maybe this explains the appeal of these less-hardcore text editors. Sure, using the mouse and buttons for everything, a la Microsoft Word, is tedious, but so is forcing all interaction through the keyboard. Maybe a better allocation of tasks, a better balance of responsibilities between typist and tool, is what’s needed.

Cantor’s Snowflake

The Koch snowflake is a famous fractal.

The Koch Snowflake fractal

So is the Cantor set.

The Cantor set

Less famous, maybe, is Cantor dust, a version of the Cantor set made with squares instead of lines, which apparently earned it a much cooler name.

But as far as I know, we have no Cantor snowflake.

Since it’s Christmas, and since, in the odd quiet moments between holiday noise, Daniel Shiffman’s Nature of Code has been keeping me company, I wondered if we could make a Cantor snowflake.

Here’s what I came up with.

The Cantor snowflake

As a bonus, it contains the Koch snowflake inside of it! I didn’t expect that.

I also rendered a Cantor snowflake PDF, which has a couple extra generations. It could make a nice bookmark.

Here’s the source code, which is also running on openprocessing:

void setup() {
  size(1450, 300);

  background(255);
  noStroke();
  fill(0);

  cantorSnowflake(0, height/2, 140, 280);
}

void cantorSnowflake(float x, float y, float length, float sideStep) {
  if (length < 0.1) return;  // base case: stop once the hexagons get vanishingly small

  pushMatrix();

  hexagon(x, y, length);  // draws the hexagon and leaves the origin at its center

  translate(sideStep, 0);  // the next generation lives one column to the right

  // six child hexagons, one every 60 degrees, each a third the size
  for (int i = 0; i < 6; i++) {
    PVector point = vector(i * THIRD_PI, length * 2 / 3);
    cantorSnowflake(point.x, point.y, length / 3, sideStep);
  }

  popMatrix();
}

void hexagon(float centerX, float centerY, float length) {
  // note: translates the coordinate system to the hexagon's center, and leaves it there
  translate(centerX, centerY);

  beginShape();
  for (int i = 0; i < 6; i++) {
    hexPoint(vector(i * THIRD_PI, length));
  }
  endShape(CLOSE);
}

void hexPoint(PVector v) {
  vertex(v.x, v.y);
}

PVector vector(float rads, float length) {
  return new PVector(cos(rads) * length, sin(rads) * length);
}

Happy Christmas!

Symmetrical Portraits, Undone

Julian Wolkenstein’s Symmetrical Portraits project just made the rounds. I could’ve sworn I saw it on Brain Pickings, but I can’t find it now. Whatever, no matter.

It’s a weird-looking project: take a bunch of head-shots, cut them down the middle, and mirror each half, so one asymmetrical face becomes two symmetrical faces. It’s startling how much some of the pairs differ from each other. There’s a hypothesis that symmetry makes the people more attractive, but some of them are pretty uncanny:

So what’s a Processing goof-off going to do? Tear them apart, and put them back together. I don’t know whether the asymmetrical version is right, or whether it’s backwards, but I don’t think it really matters, unless you know the person in the photo. Click ‘em for big versions.

Here’s the code I used to de-symmetrize them. Note the mouse controls: I had to tweak some of them, especially that second one of the blond short-haired guy.

// 36_Wolkenstein_12.jpg
String[] files = new String[] {
  "01_v2", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12"
};
PImage[] origImgs;

int imgIndex = 0;

void setup() {
  PImage img = load(files[0]);
  size(ceil(img.width * 1.5), img.height);

  origImgs = new PImage[files.length];

  for (int i = 0; i < files.length; i++) {
    origImgs[i] = load(files[i]);
  }
}

void draw() {
  PImage orig = origImgs[imgIndex];
  image(orig, 0, 0);  // the symmetrical original, drawn on the left

  int placeLine = round(orig.width * 1.25);
  int cropLine = round(orig.width * 0.75);

  // mouseX nudges where the strip is placed; mouseY nudges where it's cropped from
  int placeOffset = round(map(mouseX, 0, width, -20, 20));
  int cropOffset = round(map(mouseY, 0, height, -20, 20));

  // the left half of the original, copied into the right portion of the canvas
  image(
    orig.get(0, 0, round(orig.width * 0.5), orig.height),
    orig.width, 0);

  // a quarter-width strip from the original's right side, drawn over the right half of that copy
  image(
    orig.get(
      cropLine + cropOffset,
      0, round(orig.width * 0.25), orig.height
    ),
    placeLine + placeOffset, 0);
}

void keyPressed() {
  if (key == ENTER) {
    save("fixed_" + files[imgIndex] + ".jpg");  // ENTER saves the current frame
  }
  imgIndex = (imgIndex + 1) % files.length;  // any key advances to the next portrait
}

PImage load(String chunk) {
  return loadImage("36_Wolkenstein_" + chunk + ".jpg");
}

Chaos, Order, and Software Development

Zach Dennis gave a very interesting, but not terribly well-received talk at RailsConf 2012, called “Sand Piles and Software.” (It’s on the schedule on Tuesday in Salon J, if you want to check it out.) Here are the slides (which are more suggestion than information), and here’s the synopsis:

This talk applies the concepts of chaos theory to software development using the Bak–Tang–Wiesenfeld sand pile model [PDF link] as the vehicle for exploration. The sand pile model, which is used to show how a complex system is attracted to living on the edge of chaos, will be used as both a powerful metaphor and analogy for building software. Software, it turns out, has its own natural attraction to living in its own edge of chaos. In this talk, we’ll explore what this means and entertain questions for what to do about it.

The TL;DR of the talk: as you build your software system and add features, you add complexity, and when it’s too complex, you can’t add anything more until you clean something up. So you clean up a bit, add more complexity, and it falls over again. It’s like dropping grains of sand onto a sand pile: each grain is tiny, hardly worth noting, but eventually one of them causes a slide.

That much rang very true with me.
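
If you want to play with the metaphor, here’s a tiny Ruby toy of mine – a one-dimensional take on the Bak–Tang–Wiesenfeld toppling rule, not anything from Zach’s talk. Drop grains at random; any cell holding two or more grains topples, sending one grain to each neighbor, and grains that roll off the ends are lost:

CELLS     = 11
THRESHOLD = 2  # in this 1-D toy, a cell topples at two grains

def topple(pile)
  slides = 0
  while (i = pile.index { |height| height >= THRESHOLD })
    pile[i] -= THRESHOLD
    pile[i - 1] += 1 if i > 0              # grains pushed off the ends just disappear
    pile[i + 1] += 1 if i < pile.size - 1
    slides += 1
  end
  slides
end

pile = Array.new(CELLS, 0)
30.times do |n|
  pile[rand(CELLS)] += 1   # one tiny grain, hardly worth noting...
  slides = topple(pile)    # ...and sometimes, a slide
  puts "drop #{n + 1}: #{slides} topplings  #{pile.inspect}"
end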

Zach’s advice, then, was to “fall in love with simplicity,” and “loathe unnecessary complication,” and there are some more slides about practices and values and refactoring, but I can’t remember the ideas for them; I’ll have to check my notes.

To me, that part sounded virtuous.

This morning, I turned again, for other reasons, to Dick Gabriel’s Mob Software: The Erotic Life of Code. (I’ll say it until I stop meeting programmers who haven’t read him: you are missing out.) I got to the part where he talks about swarms (he’s preparing to introduce us to the Mob, the open-source hackers), and complexity emerging from local actors with simple rules, and this part reminded me of Zach Dennis’ talk:

Chaos is unpredictability: Combinations that might have lasting value or interest don’t last—the energy for change is too high. Order is total predictability: The only combinations that exist are the ones that always have—the energy for stability is too high.

He goes on to quote Stuart Kauffman from “At Home in the Universe”:

It is a lovely hypothesis, with considerable supporting data, that genomic systems lie in the ordered regime near the phase transition to chaos. Were such systems too deeply into the frozen ordered regime, they would be too rigid to coordinate the complex sequences of genetic activities necessary for development. Were they too far into the gaseous chaotic regime, they would not be orderly enough.

…cell networks achieve both stability and flexibility…by achieving a kind of poised state balanced on the edge of chaos.

Is Zach telling us to stay where it’s safe and ordered? Are we stuck on this edge between chaos and order, if we want to write interesting software? I’d like my software to be both stable and flexible. If, to achieve this stability and flexibility, its behavior must be emergent, not guided by my brain, is that OK? Or is there a way for me to still specify requirements, and get this stability and flexibility? Is emergent design only able to produce certain kinds of software?

One of Zach's slides: reaching your software's critical point

Thanks to Ren for reviewing this!

ERMAHGERD, the Gem

I just published my first “official” gem, ermahgerd, and what an auspicious way to start my gem-author career! ERMAHGERD, I’M A RERL RERBER PRERGRERMAHR!

It’s (currently) totally based on J Miller Design’s translator, but there are some bits I’d like to tweak. We’ll see, it’s just for fun.

Get started with ERMAHGERD:

$ gem install ermahgerd
$ irb
ruby-1.9.3-p0 :001 > require 'ermahgerd'
 => true 
ruby-1.9.3-p0 :002 > Ermahgerd.translate("Goosebumps, my favorite books!")
 => "GERSBERMS, MAH FRAVRIT BERKS!" 

Rails 3: Selectively Override Email Recipients

It’s a common thing, in your test environments, to intercept out-going email, and stuff it in some dumpster out back, so you don’t bother your users. We do this at SeeClickFix, and we’re upgrading to Rails 3, so I went searching for the new way to do this, and found Rob Aldred’s handy post on the subject.

So we deployed our Rails 3 branch to a test environment and unleashed our QA staff on it, but they all knew that they’d never get email from that environment, so they never checked for them. Which was a problem, because all the emails were broken in QA. Oops.

It’d be nice to only dump certain messages (the ones to your normal users) and let through others (the ones to your QA staff). Can we do this? Let’s see.

ActionMailer::Base lets you register an interceptor, and every time it’s about to send an email, it’ll call your interceptor’s #delivering_email method with the email as an argument. All the examples I found register a class as an interceptor, with #delivering_email implemented as a class method, like this:

class FooInterceptor
  def self.delivering_email(message)
    message.to = "dump@example.com"
  end
end

ActionMailer::Base.register_interceptor(FooInterceptor)

Now that’s fine, but why pass a class with a class method? Why not an object with an instance method? Especially since a class is just an object, an instance of Class. Will ActionMailer::Base#register_interceptor do something funny with its argument? Try to call #new on it? Who knows?

I tried this just to see if it would work:

class FooBarRecipient
  def delivering_email(message)
    message.to = "dump@example.com"
  end
end

ActionMailer::Base.register_interceptor(FooBarRecipient.new)

And it does! Nice job, register_interceptor, not doing anything funky with it. Thanks!

This means we can create an interceptor object with a whitelist:

class WhitelistInterceptor

  def initialize(whitelist)
    @whitelist = whitelist
  end

  def delivering_email(message)
    message.to = Array(message.to).map { |address|
      if @whitelist.include?(address)
        address
      else
        "dump@example.com"
      end
    }
  end

end

Of course that’s really basic – you probably want to allow all email sent to your domain, for instance. And maybe you want the messages tagged somehow, so you can tell which test environment a message came from; you could give the WhitelistInterceptor the Rails environment to add as a message header. But that’s the idea. And my favorite part is that the class has no Rails dependencies, so it’s trivial to test.
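
Here’s a rough sketch of that version – the header name, the addresses, and the initializer path are placeholders, not our real setup:

class WhitelistInterceptor

  def initialize(whitelist, environment)
    @whitelist   = whitelist
    @environment = environment
  end

  def delivering_email(message)
    message['X-Test-Environment'] = @environment  # made-up header name; pick your own
    message.to = Array(message.to).map { |address|
      @whitelist.include?(address) ? address : "dump@example.com"
    }
  end

end

# config/initializers/mail_interceptor.rb
unless Rails.env.production?
  ActionMailer::Base.register_interceptor(
    WhitelistInterceptor.new(["qa@example.com"], Rails.env)
  )
end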

Is there any reason not to do this?

New Haven Ruby: First Thursday, Third Wednesday

The New Haven Ruby group is gonna start building some rhythm, meeting twice every month, on the first Thursday and the third Wednesday. Even months (June, August, October) are hack-nights; odd months are social nights.

We had our first one last Thursday, at the SeeClickFix offices, and had a great turnout – about 15 people! Even Denis came out to join us. We were hacking on web apps for coordinating tasks, on Ruby for reformatting other Ruby, and some of us were just discovering programming for the first time.

Our next one is Wednesday, June 20th, and will again be at SeeClickFix, where free parking is just around the corner, and good pizza delivers. New Haven’s newest hackerspace, MakeHaven, is also around the corner, and there’s talk of doing a visit at some point. I’ll be there, probably hacking on an app for printing fliers for user groups, or an IRC bot for the group, or a regular expression parser, or some Project Euler problems.

Hope to see you there!

Out of Love with Active Record

(I’m a newcomer to Rails. When I first found Ruby, and Rails, I liked the Ruby better. And I never found many Rails jobs near home anyway. So for years, Ruby flavored my C#, and C# is where I learned, among other things, to persist my domain aggregates with NHibernate. Now I’m a card-carrying Rails jobber, which is great, because I play with Ruby all day. And the Rails community is discovering domain-driven design, and ORMs…)

Steve Klabnik just posted about resisting the urge to factor your models into behavior-in-a-mixin and dumb-persistence-with-active-record. He nails it when he says:

Whenever we refactor, we have to consider what we’re using to evaluate that our refactoring has been successful. For me, the default is complexity. That is, any refactoring I’m doing is trying to reduce complexity… One good way that I think about complexity on an individual object level [is its] ‘attack surface.’ We call this ‘encapsulation’ in object oriented software design.

If you learn only one thing from his post, let it be that “mixins do not really reduce the complexity of your objects.” Greg Brown threw me when he said that mixins are just another form of inheritance, and I think he was getting at the same thing.
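
Here’s the kind of thing they’re getting at, with throwaway names: including a module splices it into the ancestor chain and grows every instance’s interface, just like inheriting from a superclass would.

module Sluggable
  def slug
    title.downcase.tr(" ", "-")
  end
end

class Post
  include Sluggable
  attr_accessor :title
end

Post.ancestors               # => [Post, Sluggable, Object, Kernel, BasicObject]
Post.new.respond_to?(:slug)  # => true -- the mixin is part of the object's attack surface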

Steve’s suggestion for separating persistence and behavior is to – duh, once you see it – separate them into different classes: a Post and a PostMapper, or a Post and a PostRepository. When I used C# and NHibernate, we loaded our Posts from the PostRepository, which used our PostMapper for data access. (Actually, our PostMapper was an XML mapping file.) You might call that overkill, but in a legacy app, it was nice to sheetrock our repositories over all the different data access technologies we’d acquired over the years, from the shiny new ORM to the crusty old Strongly-Typed DataSets.

When I was on that team, the thing that we worried about was, what grain should we build our repositories at? We didn’t have simple models, we had domain aggregates: we’d load a ThirdPartyAdministrator, which had many Clients, which each had a number of Accounts of different types, each of which had different options and sub-objects. So, what kind of repositories should we build, and what methods should they have? If we want to load the Client’s Accounts, should we load the ThirdPartyAdministrator, find the Client, and get its Accounts? load the Accounts directly? load the Client, and get its Accounts?

For a ridiculously simplified example, just to give you the flavor of it, say we load the ThirdPartyAdministrator, the aggregate root, and go from there:

class ThirdPartyAdministratorRepository
  def self.load_tpa(id)
    ...
  end
end

tpa = ThirdPartyAdministratorRepository.load_tpa(42)
client = tpa.clients[client_id]
accounts = client.accounts

That’s too coarse; do we really have to load the TPA before we can get the client we’re after?

class ClientRepository
  def self.load_client(id)
    ...
  end
end

class AccountRepository
  def self.load_account(id)
    ...
  end
end

client = ClientRepository.load_client(client_id)
accounts = client.account_ids.map { |id|
  AccountRepository.load_account(id)
}

That’s too fine a grain, too low-level; we don’t want to have to muck around with Account IDs.

client = ClientRepository.load_client(client_id)
accounts = client.accounts

That might be a good middle approach.

It comes down to knowing your application’s data-access patterns, and your domain’s constraints. If you often need a chunk of data, all together, you should probably have a repository for it. If one piece of data depends on another, your repository probably shouldn’t make you get them separately.
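
To make that concrete, here’s a toy, in-memory sketch of the middle grain – Client, Account, and the hard-coded rows are all stand-ins for real persistence. The repository’s whole job is to hand back a Client with its Accounts already attached, so callers never touch account IDs:

Account = Struct.new(:id, :client_id, :kind)
Client  = Struct.new(:id, :name, :accounts)

ACCOUNT_ROWS = [
  { id: 1, client_id: 42, kind: :hsa },
  { id: 2, client_id: 42, kind: :fsa },
]
CLIENT_ROWS = [{ id: 42, name: "Acme" }]

class ClientRepository
  def self.load_client(id)
    row      = CLIENT_ROWS.find { |r| r[:id] == id }
    accounts = ACCOUNT_ROWS.select { |r| r[:client_id] == id }.
                 map { |r| Account.new(r[:id], r[:client_id], r[:kind]) }
    Client.new(row[:id], row[:name], accounts)
  end
end

client = ClientRepository.load_client(42)
client.accounts.map(&:kind)  # => [:hsa, :fsa]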

With Rails’ ActiveRecord, all this is sorted out for you – you define your associations, it provides all those querying methods, and you use the right ones for what you need. With repositories, you have decisions to make – you have to design them, and design is choice. But choosing is work! And you can choose inconsistently! Sometimes it even makes sense to! I’m curious to see how the Rails community, with its culture of convention, tackles this. And for myself, I plan to check out DataMapper at some point.
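
For contrast, here’s roughly what that same access pattern looks like with ActiveRecord, assuming the usual models and schema:

class Client < ActiveRecord::Base
  has_many :accounts  # the association hands you client.accounts for free
end

class Account < ActiveRecord::Base
  belongs_to :client
end

client   = Client.find(client_id)
accounts = client.accounts

No grain to choose – the convention chooses it for you.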