Monday, April 21, 2014

Dear applicant

I wrote up a series of overly honest responses to job applicants a few years ago.  I didn't send them, of course; it was purely therapeutic writing.  Now that the job posting itself is stale and I don't even remember which company I was at when I wrote this, it's safe to post...

Dear applicant with broken links,

Thanks for sending your resume regarding our open position.  Unfortunately, we're looking for the experience we asked for in the job description.  I appreciated the personalized cover letter, but it would have been even nicer if it weren't formatted so oddly in HTML.

Dear applicant with 20 jobs in the last four years,

Thank you for your interest in our position posted on craigslist.  At least I think you're interested, but since you just cut and pasted your resume into an email without reformatting it, and didn't add any personalized cover letter, I'm not sure I really feel the interest.  You might like to know that your portfolio site has nothing on it but three static pages with a purple background.  I'm sure that was an oversight?

Dear new-grad applicant,

Thank you for your interest in our position.  I feel that at this time we're looking for somebody who has the experience they claim -- the "ten years experience" summarized in your cover letter looks more like four short-term, part-time contract jobs in the last four years when I look at your resume.  Also, anybody who feels "proficiency with Microsoft Office Suite" is a skill worth listing for a technical position has too little of substance to say about their skills.

I was intrigued to see that your portfolio site looks like it came from the nineties.  Is Web 2.0 passé now, and are retro static sites the new white canvas?  I find it fascinating that you recreated that limited, static feel with CSS and Javascript!

Monday, March 31, 2014

Streets are safer than ever

I recently looked up some data in response to a sad person thinking that "our streets" (the writer was writing from New York, but the perception is widespread) are less safe for kids than they were in her time. As long as I was doing the research I thought I should post it here too.  TL;DR: The streets are safer than ever.

Relatively speaking, more child deaths are now due to accidents and homicide than to illness, but that’s because childhood deaths from illness have plummeted. So a parent early in the last century was right to be much more worried about kids dying from measles than from guns. But the risk is down for all causes, so today a parent ought to be much less worried overall, AND less worried about each individual cause.
“For children older than 1 year of age, the overall decline in mortality during the 20th century has been spectacular. In 1900, more than 3 in 100 children died between their first and 20th birthday; today, less than 2 in 1000 die. At the beginning of the 20th century, the leading causes of child mortality were infectious diseases, including diarrheal diseases, diphtheria, measles, pneumonia and influenza, scarlet fever, tuberculosis, typhoid and paratyphoid fevers, and whooping cough. Between 1900 and 1998, the percentage of child deaths attributable to infectious diseases declined from 61.6% to 2%. Accidents accounted for 6.3% of child deaths in 1900, but 43.9% in 1998. Between 1900 and 1998, the death rate from accidents, now usually called unintentional injuries, declined two-thirds, from 47.5 to 15.9 deaths per 100 000.” from Pediatrics journal
The CDC data shows that death by accidental injury is several times higher than death by homicide, for ALL age groups, even for 15-24 year olds.  Accidental injury is mostly vehicular, so I tend to ask people if they drive on the freeway if they are worried about their kids playing outside or going trick-or-treating.
And in case one is worried about city streets, the drop in risk can't be attributed only to the suburbs and the country. Living in the city is now less dangerous. This article is about town vs country but also talks about overall safety in cities going way up.  I looked up data for New York county, the densest county in NY State, compared to other NY counties using CDC Wonder, and found that New York county had 22 deaths per year per 100,000 between the ages of 1 and 19. That's much closer to the lowest county, Westchester with 17 per 100,000, than to the highest, Sullivan county with 38.6 per 100,000. 

Monday, March 10, 2014

How I hire engineers for startups


I went looking for articles on how to interview programmers/engineers for startups.  I didn't like much of what I found (I did like Elad Blog's take), and none of them addressed engineering skills and new-technology choices the way I wanted, so I wrote my own guide (and, as usual, wrote too much).  My thanks to Mark Ferlatte and Rachel Gollub for reviewing a first draft of this.  Mistakes remain mine.

My thesis is that a startup needs coders who are good engineers too.  A startup that hires coders who are bad team players and bad engineers will quickly dig itself into a hole of technical debt.  A startup is in a worse position than most other employers to teach engineering skills like technical teamwork, using specifications, and unit testing; it is also in no position to spend five times as much time and money fixing bad code.

1.  How do you use specifications in your programming?  Please expand...

Look for mid-range answers here.  “Never” or "Specs are a waste of time" is bad — using specifications, wireframes or user stories is an important skill.  An engineer who doesn’t know how to review and use a specification will at best put their own assumptions and that specification's errors into code, which will turn up as bugs later. 

“I always need specs” may also be bad for a startup, unless the engineer takes some responsibility e.g. “If a spec doesn’t exist, I write down what I’m going to do and send it to the team."  

A good engineer should also be able to talk about reviewing specs and improving them, and possibly even writing them or teaching product people how to write them.  Look for information behind the quick answer, because companies and experiences vary hugely. A developer who can talk intelligently about how rapid iteration reduces need for elaborate specs sounds decent to me.

2.  How much do you rely on unit testing?   When and how do you write unit tests? 

Here the right point on the scale is at least 80% of the way towards “always”.  Unit testing is an important skill and art and another one you don't have time to teach.

Excuses are not that impressive here.  If a candidate said their past companies didn’t do unit testing I would wonder why the candidate didn’t learn and even promote unit testing on their own.

Good answers: “I rely on unit testing to help me think through a problem and I write unit tests sometimes before I write the function, but sometimes after, depending."
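As an illustration of the test-first habit that answer describes, here's a minimal sketch in minitest (the slugify function and its behavior are invented for this example):

require 'minitest/autorun'

# Written first: pin down what the function should do.
class TestSlugify < Minitest::Test
  def test_downcases_and_hyphenates
    assert_equal 'hello-world', slugify('Hello World')
  end

  def test_strips_punctuation
    assert_equal 'whats-new', slugify("What's New?")
  end
end

# Written second, to make the tests pass.
def slugify(title)
  title.downcase.gsub(/[^a-z0-9\s]/, '').strip.gsub(/\s+/, '-')
end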

3.  When do you decide to use a new technology? Give an example of learning a new technology, promoting it in your team, and executing it. 

Look for mid-range answers here too.  Candidates should not be on the bitter bleeding edge of technology.  There needs to be a drastic advantage to taking on the risk of brand-new technology.  Ask how they justified picking up something untested, and how they assessed and minimized the risk.  See my addendum below for examples of bleeding edge vs behind the curve.

On the other hand, candidates should be able to point to some new-ish technology (not just new to the candidate), year after year, that they learned and adopted.   Try to find out if a candidate eagerly learned the technology or had to.  Did the candidate choose and promote, or follow teammates’ beaten paths?

4.  What’s important to you in your work/employment?  

Most answers are probably not critical hire/no hire factors here.  However, you need to know what the candidate needs, in case 1 week later you’re saying “How do we convince Lisa to join us now” or in 6 months asking “Lisa is really awesome, how do we make sure we keep her?”.  You must treat tech candidates' needs respectfully.  You need them more than they need you. 

That said, there are some bad answers for startups: “Working with rock stars” (or ninjas).  “Stability”.   “A mentor” (or a hands-on education in engineering).  

Good answers: “Learning” (if self-directed).  “Taking on a variety of roles”.  “Being very responsible for something”.  “Moving fast” (if under control via unit tests, etc.).  “Being involved at the level of the business model.”  “Having some non-trivial ownership in a venture.”  “Gaining the experience to do my own startup.”  “Being in a position to develop a great engineering culture.”  “Working with a good team.”

5.  Can we see some samples of your code?  (Alternatives: solve problem code on the whiteboard or join for a trial day coding on the project)

For this part, you need two or more really good startup engineers advising you -- engineers so good you CAN’T hire them.  Have them do in-person or phone interviews or read source code to evaluate candidates.

If the candidate can’t point to some open source contributions, it may be possible to see some previous code under NDA; or the developer may have done some tutorial coding to learn some new system.   Looking at source code is an effective use of time both for the candidate and for your startup.  If this is not possible for the candidate, then a whiteboard or “homework” coding session is needed.  

Another option is to have them work on a project for a day or two.  Sometimes this can work and sometimes it can't.  A programmer who can't or won't dedicate full days of programming to prove themselves to a startup (see above: you need her more than she needs you) may still be the right person to hire.

What else to take notes on

While asking these questions, find out whether the candidate is nice and easy to get along with.  Startups are stressful enough already, and working with arrogant people makes them worse.  You need good communicators.  Techies who make assertions and give directions may sound knowledgeable at first, so dig deeper -- they need to be able to explain in detail how and why, too.

Try to find out if the candidate learns fast, both by asking about learning new technology, and ideally by making suggestions in the technical part of the interview (or after reviewing their code) that require the candidate to listen to another way of doing something and figure out how and why to do it that way.

The "new technologies" question should help you also answer: is a candidate narrow or broad, a generalist or a specialist. Rather than just hiring C++ engineers, startups need people who may write C++ and can deploy EC2 servers, do metrics in Python and debug some Javascript.

Try to find out if the candidate can handle uncertainty and change -- the spec question is a good time to address that as a side concern.


Addendum: Examples of bleeding edge, cutting edge and behind the curve

Bleeding edge: Almost no startup in 2009-2010 needed to or could afford to build its entire codebase on Node.js that year.  Don't hire the person who advocates this or something equivalent, especially if they haven't done it themselves (themselves personally, not their team).

Cutting edge: If the candidate learned Node.js in 2010, however, and decided that 2012 was the year to make it the basis of a product, and had a plan for hiring other engineers that know or want to know Node.js, that is a good sign. 

Behind the curve: They should not still be promoting Java servlets for brand-new code-bases (at least not without Hibernate and Spring, and even then only if Java is already needed for the startup).  Somebody whose only new technology learned in the last 5 years was Java servlets, because their manager assigned it to them, is not adopting new software technology at an appropriate pace or time.

You'll notice that my examples are older or heavily hedged.  This is a really tricky area.  I disagree with people I deeply respect about whether certain examples are a bad sign, a good sign, or "it depends" -- though I suspect that if we talked about a specific candidate and what that candidate had said about this question, we'd be able to come to a conclusion about the candidate, even if not about Java servlets.  Take really great notes so you can talk about the nuances with at least 2 of your tech advisors.

Wednesday, February 12, 2014

You must be this tall to ride the Elastic Beanstalk

Elastic Beanstalk seems like it’s meant to allow a startup to easily deploy Web sites using the most common Web frameworks, and scale those sites gracefully with an integrated AWS scaling group.  I’m judging it by that supposed purpose, and comparing it to Heroku.

Here's what we discovered using it in a startup. 

1.  Elastic Beanstalk deploys are inherently risky - delete working copy then cross your fingers

EB documentation instructs users to push new code to an existing environment in order to deploy a new version.  However, EB takes the site down in order to do this.  If something goes wrong with the new version, the old version is gone.  It should be possible to return to the previous version on the same beanstalk, but the environment can get into a corrupt state.  Experienced users advise always creating a new environment, testing it, redirecting URLs to it, and then throwing the old environment out.

Compare to Heroku, where if a deploy fails, Heroku stays with the old working site, which is not replaced until the new site works.  I also never experienced a failure when reverting to an older version.

2.  EB requires more labour, manual or automated

Because of EB’s risky deploys, best practice is commonly to create a new environment for each deploy, and test that environment before swapping the URLs with the running environment.  This is fine, but as soon as one has a handful of environment variables and custom settings like VPC membership, the creation of that new environment needs to be automated in scripts and config files.

There are still risks.  If one environment is not working, AWS refuses to swap the URLs!  Creating a new environment reduces the chance that production would be pointing to a broken environment, but doesn’t completely eliminate it.  If that happens, one has to use DNS to redirect the public URL to the working environment — because even if one deletes the broken environment, AWS doesn’t permit changing the URL of the working environment!  
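For the record, here's roughly what that scripted environment creation and URL swap can look like with the v1-era aws-sdk gem.  This is a sketch only: the application name, version label, solution stack and option settings are placeholders, and the option namespaces should be checked against the EB docs.

require 'aws-sdk'

eb = AWS::ElasticBeanstalk.new.client

# Create the new environment alongside the running one.
eb.create_environment(
  application_name: 'myapp',
  environment_name: "myapp-#{Time.now.to_i}",
  version_label: 'v42',
  solution_stack_name: '64bit Amazon Linux running Ruby 1.9.3',
  option_settings: [
    { namespace: 'aws:autoscaling:launchconfiguration',
      option_name: 'InstanceType', value: 'm1.small' }
    # ... VPC membership, environment variables, etc.
  ]
)

# Later, after testing the new environment, swap CNAMEs so it
# takes over the public URL -- the step AWS refuses to perform
# if either environment is unhealthy.
eb.swap_environment_cnames(
  source_environment_name: 'myapp-old',
  destination_environment_name: 'myapp-new'
)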

Even if a startup does all this automation, compare to Heroku, where neither manual environment swapping nor custom automation is required.  It's still good engineering practice to automate source control checkout, tagging and deploy, but the deploy itself would be one line of the script.

3.  AWS support is not what it should be

About half the time, AWS support was useless or worse.  
  • A common example of useless support is in asynchronous support, when the support representative repeatedly asks for information that was already given earlier in the conversation, or suggests trying things which one has already tried (and mentioned).  This happened to me several times.
  • An example of when it's worse than useless is when the support engineer suggests disruptive measures which do not help.  Once when our production site was unreachable because we'd misconfigured VPC membership, the support engineer didn't help figure this out.  Instead he had me repeatedly restart the environment in single instance mode then back to cluster mode; then restart the environment with a different machine image (AMI). It took hours and hours. He seemed knowledgeable but the things he knew were not the problem and he didn't seem to know how to confirm or rule out his hypotheses. 
My experience with Heroku is that I need support less than 10% as often as I did with AWS EB. 


4.  AWS documentation is not what it should be

If the service is complicated and doesn't always work the way one expects, then the documentation needs to be excellent.  The problem with the AWS documentation is that there's a big gap where the mid-level documentation should be.  There is high-level "Here are the awesome features" documentation and low-level stuff like field names and config file formats.  The missing parts are "Here's a mental model so you can understand what's happening" as well as "Here's an approach that we know works well for other users in this situation".

Without any documentation to supply good mental models of what's going on, or good practices for various common situations, it's really easy for inexperienced people to construct a setup that has unexpected drawbacks.  

Summary: Disappointed.  

While EB is a definite improvement over running machines manually or on a hosted service at the level of EC2, I am disappointed that it is not as easy to use or as well-supported as Heroku.  I have heard that with experienced, dedicated operations engineers, it's great -- but I also know of companies with experienced, dedicated operations engineers who run their own EC2 instances and manage scaling with auto-scaling groups, rather than let Elastic Beanstalk handle it for them.

I've heard the argument that AWS is cheaper than Heroku, but at a small scale, employing experienced AWS operations engineers is much more expensive.  So is disappointing users and gaining AWS experience the hard way.  If you must be on AWS eventually, do so when you're "tall enough" to really need the scale and hire the resources -- migrating between Heroku and EB itself is not that hard.

Wednesday, December 11, 2013

Testing Rails apps when using DynamoDB


Once we decided to use Dynamoid to replace ActiveRecord in our Rails project, we needed to figure out how to test it.

First off, testing against Amazon's live DynamoDB is out of the question.  I'm not going to be limited to network speeds to run unit tests, nor be required to have an Internet connection in the first place.  Just a bad idea.

Mocking out every 'save' and 'create' request on each Event and Plan interaction would have severely undermined the usefulness of any model tests and even many controller tests.  When we mock a component's behavior it's easy to make the wrong assumption about what it does.  The mock is unlikely to catch things like an integer field suddenly being written where a string used to be, even if that change would cause the real storage engine to fail.

Fake_dynamo is what I found to use.  So far I'm really impressed by its accuracy in behaving the way Amazon DynamoDB does.  I haven't found it to raise either false negatives (failing when the live service succeeds) or false positives (succeeding when the live service fails).  Fake_dynamo can be used to run unit tests, or just to run the Rails/DynamoDB system locally for development and ad-hoc testing.  I installed the gem on the system.  I did not include it in the Rails project because it's not called from anywhere in the project; it's run from the command line to listen on a local port (default 4567).

At this point it would be helpful to point out a command-line option I found useful:

fake_dynamo -d fake_dynamo.fdb 

The -d option tells the gem which file to save to rather than the default.  Unit tests can therefore use a different file than ad-hoc tests.  This is nice because for ad-hoc tests I might set up some long-lived data, whereas unit tests really only work well on a completely clean database, with no dependencies between runs.

So to clear out the database between runs I issue a network command to the local port to delete the whole database.  I combine this in one line with running the tests:

curl -X DELETE localhost:4567 ; rake

I should probably clear out the old db as part of the internal automation of the test run itself but haven't bothered yet.
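If I did, it would only be a few lines -- something like this in test_helper.rb (a sketch; it assumes fake_dynamo is listening on its default port):

require 'net/http'

# Wipe fake_dynamo's entire database before the tests run,
# the same effect as `curl -X DELETE localhost:4567`.
Net::HTTP.start('localhost', 4567) do |http|
  http.delete('/')
end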

The last challenge was how to do fixtures or factory data.  I tested Rails fixtures, FactoryGirl, and one other solution which I can't remember since this was over 2 months ago.  Dynamoid, unfortunately, did not work with any of them.  It turns out Dynamoid is missing too many ActiveRecord features to work well with these systems yet.  For example:

Attendee.delete(attendee_id)
Attendee.find(attendee_id).delete


The first is supposed to work in ActiveRecord but doesn't work in Dynamoid; the second really does work in Dynamoid. Most of the time it's easy to replace the non-working syntax with one that works, but not when using helpers like fixtures/factory systems.

Summary: fake_dynamo good; Dynamoid shown to need more work.

Wednesday, October 30, 2013

Using DynamoDB, work in progress


At work we're using Amazon Web Services' DynamoDB for a backend.  This is early days and a work in progress, but I thought I'd post about what we're doing so far because I've seen so little elsewhere about it.

Our Web framework is Ruby on Rails.  Rails is a system that favours convention over configuration.  Most RoR developers use ActiveRecord, Rails' built-in system for object modeling and abstracting away SQL database access.  If you stay on the rails, this works fantastically.  Rails automates or partially automates many tasks and systems, from migrating your data when the model changes, to setting up unit tests that conveniently set up and instantiate the things you want to test.  Building on top of this, many Ruby gems extend your Rails functionality in powerful ways (Web UI test automation, authentication and user management, access to social network sites).

As soon as a project starts to diverge from Rails conventions, trouble begins.  Trouble may be contained if the difference can be isolated and made as conformant as possible to the default components.  For example, when writing an API that serves RESTful JSON resources instead of HTML, it's best to figure out how to use views to serve the JSON in the same way that HTML views are generated (the topic of a few posts I did a couple of years ago).
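For instance, with the Jbuilder gem that Rails 4 ships with, a JSON "view" sits next to the HTML views and is rendered by the same controller machinery (the model and fields here are invented):

# app/views/events/show.json.jbuilder
json.extract! @event, :id, :name, :starts_at
json.url event_url(@event, format: :json)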

Which brings me to Dynamoid.  Amazon's Ruby gem for access to DynamoDB is very basic and exposes DynamoDB architecture directly.  That can be useful but it doesn't behave anything like ActiveRecord, and in order to use Rails' powerful tools and extensions, we need something that behaves as much like ActiveRecord as possible.  The only ActiveRecord replacement for DynamoDB that I could find, that was at all active, was Dynamoid.  So I'm pinning my hopes on it.  AFAICT so far, it is incomplete but has "good bones".  I've already fixed one tiny thing and submitted a pull request, and intend to continue contributing.
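To give a flavour of it, a Dynamoid model looks roughly like this (a sketch with invented fields; unlike ActiveRecord, fields are declared in the model because there's no SQL schema to introspect):

class Event
  include Dynamoid::Document   # instead of inheriting from ActiveRecord::Base

  field :name                  # string is the default type
  field :attendee_count, :integer
  field :starts_at, :datetime
end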

Next post will be about testing in this setup.

Monday, October 14, 2013

Correctness impedes expression

In kindergarten and grade one these days, teachers encourage kids to get their thoughts onto paper any old way.  They don't explain how to spell every word and they certainly don't stop kids and correct their spelling.  For a beginning writer, it will likely be all caps, and the teacher may not even suggest spaces between words.  Here's my first grader's recent work:

wanda-yi wan-t toa school 
and i wat to room 
four it was nis 
iasst the tishr wat 
harnamwas martha She 
was nisdat war day sH- 
Em

This means "One day I went to a school and I went to room four.  It was nice.  I asked the teacher what her name was, Martha.  She was nice that(?) were day she (unfinished?)"   Don't you love the phonetic "wandayi" for "One day I" ?  I do.   Note that the letter "I" is used for long sounds like in "nice", because that makes sense before one learns that that sound can be spelled many ways including "ie", "i?e", "aye", or "y".

Okay, cuteness aside, I flashed to thinking about XCode while Martha explained why they teach writing this way: it's hard enough for a kid, writing slowly and awkwardly, to get three words out onto paper, let alone a whole page of writing.  Many kids get intimidated by corrections and worry about mistakes.  Instead of answers, she gives them a whole bunch of resources: try sounding it out, think of a similar word, look somewhere else in your own writing, see if the word is somewhere else in the room.  Above all, she encourages practice and resourcefulness rather than perfection.

Unlike Martha, XCode is like the stereotypical teacher from 60 years ago who would stand over you constantly and warn if she even thinks you're about to make a mistake.  "That's wrong."  "No, it's still wrong".  "That's somewhat better, but still not good."  "Now that's right, but now this is wrong."

Maybe that's why I still use TextMate for Ruby.  If the code doesn't have the right syntax, I'll learn about it later.  (I write tests.)  But for getting an algorithm out of my head and onto the screen, I much prefer not to be corrected and warned constantly while I'm doing it.

Friday, October 04, 2013

AWS Persistence for Core Data

I like DynamoDB, and I like architecture that reduces the amount of backend engineering one needs to do in a company whose product is an app.  So I was quite interested to investigate AWS Persistence for Core Data (APCD, for lack of a better short name) in practice.

APCD Overview according to me

APCD is a framework you can install on an iOS app such that when the app wants to save an object to the cloud, it can set APCD to do so silently in the background.  Not only does it save the object to the cloud, but changes made in the cloud can be magically synched back to the app's Core Data.  There's a parallel framework for Android which is promising for supporting that platform with the same architecture.

On the server end, if the server needs to do some logic based on client data, the server can access the DynamoDB tables and view or modify objects created by the applications.  In theory one doesn't have to design a REST/other interface for synchronizing client data to the server or to other clients. That's a significant savings and acceleration of development, so we read up on APCD around the Web and implemented it.
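Concretely, the server side is just ordinary DynamoDB access.  A sketch with the v1-era aws-sdk gem, assuming an APCD-managed table named "Checkin" (the table name and schema are guesses):

require 'aws-sdk'

dynamo = AWS::DynamoDB.new
table = dynamo.tables['Checkin']
table.load_schema   # fetch the key schema before touching items

# View (or modify) objects that the mobile apps created via APCD.
table.items.each do |item|
  puts item.attributes.to_hash.inspect
end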

While there were a bunch of minor problems that we could have overcome, the primary one was: nowhere does Amazon seem to document how to architect an application to use AWS Persistence, or explain what it is for.  In the main article linked above, the sample code and objects are "Checkin" and "Location".  But where's the context?  Are these Checkin and Location objects in the same table?  Is there one giant table for all data?  Does each client have its own private table for a total of N tables?  Or are there two tables?  Or 2N?  It really helps if new-technology documentation includes some fully fleshed-out applications to give context.  Full source code isn't even what I'm talking about, but at least tell us what the application does, why it's architected the way it is to use the new technology, and some other examples of what the new technology is for.

What I think APCD is for

Well, we recently put together a couple of facts which suggest what APCD is for.

  • You can't have more than 256 tables in DynamoDB per account, even when using APCD.  This limitation is very relevant to architectural choices made with APCD.*
  • If an installed app has the key to access any part of a table, the app can access the whole table, all objects.  There are no object-level permissions yet, and because the app accesses the data on DynamoDB through APCD, the server can't intercede to add permission checking.
All right, so that tells us we can't architect the application so that each app instance saves its own table separate from other apps' tables.  We run out of table space at 256 installed users if not sooner.  It also tells us that if apps are going to share larger tables, the information in those tables has to be public information.  

So that suggests to me that APCD is for apps to synchronize shared public data.  For example, an application that crowd-sources information on safe, clean public bathrooms.

How my sample app would work

The crowd-sourced bathroom app could have all the bathrooms' data objects in one big table, and each instance of the application can contribute a new bathroom data object or modify an existing one.  A server can access the bathrooms data too, so Web developers could build a Web front-end that interoperates smoothly as long as the data model is stable.  

Now, to use the service, even if the whole dataset is too large to download and keep, an app could query for bathrooms within a range of X kilometers or within a city limit, and even synchronize local data for offline use.  When the app boots up it doesn't have to re-download local bathroom data if it has done so before; instead, APCD is supposed to fill in new data objects matching the query and update the client with changes.

For security, we have to trust each app to identify users so we can identify and block simple bad actors (somebody using the app interface to insert false information), and we have to have some backup for dealing with the contingency where the app is completely hacked, its key is used to access the bathroom data, and somebody quite malicious destroys all the useful bathroom data.  

What we did

We ended up not using APCD because what we're building does not involve a shared public database.  We have semi-private data objects shared among a small set of trusted users.  Doing that within APCD's limitations seemed too far off APCD's garden path of easy progress.

Is there a better way to use APCD? 


*   Yes, you can have the 256-table limit raised, but not by much.  Not, say, to 1 million.  That's not how DynamoDB is architected to work well.

Thursday, September 26, 2013

Opportunities arising in fall 2013

Working on a new project using cutting-edge AWS stuff and iOS 7, I note some opportunities.

1.  A really good Ruby library for working with AWS.

Although Amazon really should hire more Rubyists, this could also be done by outsiders.  AWS is powerful and Ruby is powerful.  Hooking them together properly would be sooooo nice.

2.  An iOS module or framework for higher-level use of AWS Persistence for Core Data (APCD)

APCD is intriguing but Amazon has a lot more work to do.  For example, objects can only have one properly-persisted relationship between them.  You can't synch an Event object with an "organizer" relationship to Users as well as a "creator" relationship to Users.

Whether Amazon does more work here or not, there are opportunities for people to build on top of this service, because it doesn't address problems like version skew between mobile app instances.  For that matter, I'd like some explanation of what it's for -- a real-time background table synch service is good for something, but what exactly did the architects have in mind?  Without knowing what it was built for and tested for, it's hard to know whether the service will work smoothly for the applications I'm thinking of.

3.  Documentation and examples for the new XCode unit testing

There's vast amounts of information out there on unit testing with Java, Python and Ruby.  There's blog posts upon blog posts of best practices, and many great questions and answers on Stack Overflow.  But when it comes to XCode, I can't successfully google for "unit test method for filling in a text field in ios".  Apple, why do you hate Web searches?

Ok.

Would somebody get on these please?

Thank you.

Systems thinking vs algorithm thinking


I was chatting with another programmer about our different styles.  He's an incredible algorithm solver.  He's done compression and encryption algorithms in school, and codecs and video processing and GUI animation effects since (he's 23).  I tried to explain the kind of problem that I'm attracted to, which none of those are, and used the word "systems problems".

"But isn't everything a system?".   Only in the most trivial sense.

What I was trying to distinguish by talking about systems problems and systems thinking in programming is modeling independent and interconnected agents.  I'm not the only one with this kind of definition.  In an interesting publication on managing social change, I saw the definition "Systems characterised by interconnected and interdependent elements and dimensions are a key starting point for understanding complexity science."  Very close.  Is there a better phrase for system-style solutions as opposed to algorithm-style solutions?

Another way I explain this approach when I'm being self-mockingly post-modern is to say "I'm interested in the liminal spaces in computer architecture", which is an arty/jargony way of saying I'm interested in the interfaces between agents: APIs and protocols, typically.   I also hear the words of a British-accented UWaterloo professor from 20 years ago, saying "Modularization!" and "Information hiding!" over and over.  (Systems thinking supports object-oriented design.)

I've worked with a ton of people who have the same mental models because I worked a lot in the IETF for ten years.  It's a necessary part of communications security in particular, because in addition to thinking of each client and each server as ideal agents, one must always think of bad actors and flawed agents.

Coming back to startups, I'm always somewhat surprised when I talk to a programmer who doesn't design and implement protocols and APIs, because they think so differently.  It's more justifiably shocking when I meet people who know about and implement REST and aren't used to systems thinking!

Monday, September 23, 2013

I'm reading Don't Make Me Think by Steve Krug, and just read the section on home page messages.  That's why I laughed out loud (surprising my cat) when I saw this home page:

Amazing, huh?  It's got simple pricing!  Free, pro or on-premise!  What does it do?  Not important!

If you scroll below the fold -- I swear this is exactly what I saw above the fold on my very first visit to the site -- there's a "Full Feature List".  Now (and you really can't make this stuff up) I can see that Teambox supports
  • Capacity (this is a header)
  • Users
  • Projects
  • Storage
  • Organizations
  • Hosting
  • Support
  • Premium Features (this is the next header)
  • Group Chat

In other words, I still have no clue except that Group Chat is apparently a premium feature.  Wow.  At this point I don't even want to know.  I feel like knowing what the service or product is would only detract from the delicious infinite possibilities of Teambox.  I don't want the service, mind you -- I'm happy with the inscrutable mystery of Teambox.  Some things are not meant to be known.

Friday, September 20, 2013

Kevin Liddle makes a case against cucumber in his blog post.  Since he doesn't have comments, I'll basically comment here.

I agree with Kevin that the idea that product managers will write cucumber tests is pretty weak.  They might well read them and understand them however.

I think, however, that the value of cucumber and its gherkin syntax is, as with many things, not exactly where the designer thought it would be.  The value is in using anything but ruby to describe what you're trying to accomplish with ruby.  Every so often I'll write or encounter a test written with the same narrow view with which the programmer wrote the code.  Such a test verifies that the code reliably does the wrong thing in the larger sense.  Using ruby to test ruby also encourages trivial tests where the implementation detail is verified, not the application logic (see Don't Unit Test Trivial Code).

My science-based (but not scientific) theory is based on how low-level thinking impedes high-level thinking.  As an example, this happens when you're driving, thinking about something low-level like adding distances, and get distracted and miss your freeway exit.  Cognitively, when you're in the middle of writing Ruby code and thinking about how to write Ruby, it's harder than normal to think "what should the code do?"  Switching to another language, gherkin or anything else, prompts the programmer to go meta.  Going meta means repeatedly re-loading the mental model of how what I'm doing fits into the larger system and its goals.

This effect is known in a couple different fields:

  •  Learning math: "Good problem solvers possess metacognitive skill, the ability to monitor and assess their thinking" (ref Support for Learning)
  • Corporate strategy: "A strategic thinker has a mental model of the complete end-to-end system of value creation, his or her role within it" (Wikipedia, Strategic Thinking)

So besides just changing to another language to avoid ruby-testing-ruby circularities, Gherkin is designed to make the programmer think in terms of wants and fulfilling user expectations.  The syntax "Given I am a new user, When I go to the home page, Then I should see the zero-content display" helps the developer pop from what she's trying to do to how she's trying to do it, and back, without losing the big picture.
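To make that concrete, here's that scenario as a feature file plus sketched Ruby step definitions (the page content and path helpers are invented, and the steps assume Capybara-style helpers):

# features/new_user.feature
Feature: First visit
  Scenario: New user sees the zero-content display
    Given I am a new user
    When I go to the home page
    Then I should see the zero-content display

# features/step_definitions/new_user_steps.rb
Given(/^I am a new user$/) do
  # nothing to set up: no session, no data
end

When(/^I go to the home page$/) do
  visit root_path
end

Then(/^I should see the zero-content display$/) do
  expect(page).to have_content("Create your first project")
end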

Sunday, September 08, 2013

An information coordinator is useful


An information coordinator is a useful person, and after two years of working with a partner or alone, this weekend was a nice reminder of that.

The classroom camping trip was Saturday night, and since I haven't been working full time this summer, I volunteered to organize it.  What I did:
  • Printed out last year's camping duty list and announced, two weeks ago, that I was posting it.
  • Collected the duty list and sent out email for the last few needed duties.
  • Advised the shopping volunteer what to buy
  • Sent out a public email answering the questions individuals had asked
  • Kept track of which spots were free and directed arrivals (sounds like work, but wasn't really, more like socializing)
  • Cued volunteers on when to begin lighting the grill and the campfire
  • Approved people's suggestions (people wanted to know if a suggestion would interfere with other plans so wanted some kind of coordination check, not really approval)
This didn't seem like work to me but it met with widespread gratitude.  What I really was: an organized point of information exchange, a timekeeper, and a maker of trivial decisions.  Software projects call these people project managers, but they exist in all kinds of domains, sometimes with different names, sometimes with specialized expertise.  Sometimes the project manager is just an organized person holding the clock, the task list, and the notebook.

Monday, August 19, 2013

Order of Operations

I recently taught my son to ride his bike without training wheels.  He was very resistant and afraid of falling down.  I discovered that the order of learning skills was very important.  Before he could get confidence balancing and moving, he needed to be confident that he could brake and put his feet down at any time.  This is a coordinated movement between hands and feet as well as body balance (which foot? which side? when?), so it's not as simple as it seems.

Over twenty minutes on two days, we practiced braking dozens of times: I would hold his bike up while he put his feet on the pedals, help him move forward pushing the pedals, and then either tell him to brake, or let go and he would wobble and brake on his own.  Eventually one time he forgot to brake and just kept going: breakthrough!  So the order of learning was

  1. Learn how to stop
  2. Learn how to go straight on his own
  3. Learn how to turn
  4. Learn how to start on his own
Despite having a sore lower back from holding up his bike so much, I thought this worked well.  But what a strange order to learn in!  Then I remembered how knitting is most often taught.  The teacher will cast on a bunch of stitches and do a few rows, so that the knitter can (1) go straight, then learn to (2) turn at the end of the row, then (3) bind off at the end, and finally someday (4) cast on a new beginning.

Wednesday, August 14, 2013

UX for Lean Startups required reading

UX for Lean Startups: Faster, Smarter User Experience Research and Design by Laura Klein
My rating: 5 of 5 stars

I loved Laura's book.  As I read it I kept on putting it down thinking "I need to put this down and go follow her advice IMMEDIATELY" and then I would pick it up because I wanted to learn more and hear more of her voice.   Since I know Laura I could hear her voice advising, explaining, and gently mocking commonly-held falsehoods.  The tone combines with the topic matter to break down pre-conceptions, to convince and teach.

Laura's advice is incredibly practical. Having just been through a startup I could immediately see what I could have applied, and working with other startups now I do get an opportunity to apply more ideas.  Many ideas are only obvious in retrospect (like testing fake features when you're operating on a shoestring budget) and then even once the idea is obvious, there's great advice for making the most of the idea.




Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.