Moving on to My Next Big Thing

I’ve started, edited, then deleted and started this post from scratch a few times now. Nothing clever worked. So, straight to the point: three weeks ago, I moved on from 2 years of building a very awesome hyperlocal news aggregation platform at OI to my Next Big Thing.

I’ve left to pursue a remarkable new opportunity that I can’t discuss quite yet; not because I don’t want to, but because, dammit, they won’t let me! It’s got some key facets that ultimately pulled me away from the killer team at OI: it’s in a space that will impact everyone I know, it challenges me to build apps that I will want to use every day, the scale will require some clever architecture that’s stretching my thinking, and it presents a true startup-business-building challenge for me. BTW, we’re hiring Scala and Ruby developers, so get in touch.

I am beyond proud of what we have accomplished at OI, and of what the team I’m leaving continues to build there. The OI platform is in great shape. The public API is live and impressive, one of the best out there, powering hyperlocal for some huge sites. The next phases of its evolution are in the best possible hands. I’m incredibly thankful for the experiences of the past 2 years, and grateful to the team that made them possible.

I’m happy to announce here that I have connected with a new opportunity that represents all of the things I’ve been wanting to do next. I am very excited to become CTO at OI, which has been my favorite NYC startup for a while now. Here’s why:

  • I don’t know what Web x.0 means anymore. But while we figure that out, the holy grail for everyday Web use is filtering and organization of data of interest, along axes that make sense. OI has nailed this from day 1, choosing geographical proximity as that axis.
  • The OI platform stack has to scale massively. In prior roles I’ve sometimes said that the technology I’m helping build has scaled beyond “consumer app numbers.” OI already has those numbers, and needs to cleverly grow fast as an aggregator, publisher, API provider, and Web app. This is my idea of fun.
  • The news business is shifting even as I write this. More agile, community-centric news delivery and filtering are already here, and traditional news businesses are adapting or facing extinction. OI offers Big News and all authors, small and large, the platform they will embrace for distribution and monetization of local information. In short, our timing is absolutely perfect.
  • Technologies in the worlds of Natural Language Processing and Semantics are coming of age, and OI is making practical use of them in very creative ways. I may even have a few ideas about this up my sleeve.
  • Amazing team! Founders Steven Johnson, John Geraci, and Cory Forsyth created an incredible startup energy, and our CEO Mark Josephson is taking that a few notches further. The dedication, smarts, and creativity of this group of people are awesome.
  • Mark and our Board are the ideal leadership team. I’ll save the “why” of this for another post about startup leadership, but it’s a big part of why I’m here and putting my all into it, every day.
  • I can walk to work!

So, I’ve been on the job 2 weeks, enjoying every minute of learning the technology and development process that VP of Engineering Cory has built to date, and getting to know the stellar team of engineers, product, and business folks. It’s also been enough time to validate all of the drivers behind my decision that this was the place for me, and that is truly a great feeling. As I have shared with the OI team, this is the “calm before the storm…” We are in the war room strategizing how to reveal some new technologies and business lines that have been brewing in HQ at 20 Jay Street, DUMBO. Stay tuned and get ready, I think we’re gonna turn this thing on its head!

ASSERT: Learn by Testing

My 6-year-old daughter received a microscope kit as a gift. It’s not the 50’s-style die-cast behemoth I had as a kid, with all its frustrating sample mounting and focus issues. No, this is 2009, and this perfect kids’ ‘scope requires no focusing and is small enough for her to wear as a necklace. And it’s pink, which makes the necklace prospect all the more attractive.

I’ve been teaching her a little bit about science and encouraging her questions. But really, I am passive, watching her formulate theories and test them. Our freezer is full of “samples” from the last Brooklyn snowfall.

She is learning to learn by testing, to come up with an assertion she believes to be correct, and then by observation rule it in or out. The application to programming and building systems is obvious, but easy to forget. I’ve sometimes fallen into the trap of assuming something is true, and relying on that assumption without a proper assertion and test. With the overwhelming amount of reading material available on what works and what doesn’t from “cloud computing” to scaling Ruby, from development environments to search algorithms, it’s easy to fall back on assumptions, most often and dangerously someone else’s assumptions. Watching my daughter’s inner scientist emerge has been a good reminder for me.

I now assert, and have proven, that even New York City snow is made of ice crystals and melts at room temperature.

What’s a Good API?

API, or Application Programming Interface, is a term that has nearly become synonymous with Web APIs of late. Simply put, an API of this sort offers interaction with a server system over HTTP, delivering data as XML or JSON. I was recently asked in a job interview, “What makes a good API?” I answered reasonably, but it got me thinking. Here’s a brief but more thoughtful answer, informed by recent real experience with Facebook, Flickr, Amazon, and Twitter.

A good API has to value consistency as near religion. An ideal API experience for a new client should require no documentation reading at all, assuming the client author knows what data she’s trying to pull from the provider. REST has gone a long way toward standardizing how this can, and perhaps should, be done. Twitter, for the moment, suffers a painful inconsistency between its standard REST API and the very useful search API. Granted, search is provided through the recent acquisition of Summize, and the combined Twitter and former Summize teams fully intend to address the inconsistency.
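The consistency point can be made concrete with a tiny sketch. Everything here is invented for illustration (the base URL and resource names are not a real service), but it shows what “no documentation needed” means: one URL rule covers every resource.

```ruby
# Hypothetical, for illustration: with a consistent REST API, every resource
# follows the same URL rule, so a client needs no per-method documentation.
BASE = "https://api.example.com/v1"  # made-up base URL

# One rule covers every collection and item in the API.
def collection_url(resource)
  "#{BASE}/#{resource}"
end

def item_url(resource, id)
  "#{BASE}/#{resource}/#{id}"
end

puts item_url("photos", 42)   # => https://api.example.com/v1/photos/42
puts collection_url("users")  # => https://api.example.com/v1/users
```

A client author who knows the data model can guess every endpoint, which is exactly the experience a consistent API buys you.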

Orthogonality is another cornerstone. There should be very little magic done behind an API call: if I ask for something, I should get back just that something (or a clean list of somethings), in a more-or-less canonical form. Magic is tempting to add for the convenience of clients, but it leads to overlap, and hence non-orthogonality. For maximum use and reuse, an interface might even require multiple calls, where one magic one would have sufficed, in order to retrieve the data a client needs. In essence, I mean that no side effects, good or bad, can result from a call for some data. I love the term “Principle of Least Astonishment” to embody the idea that no one should be surprised by anything your API does. Although the API may ultimately become large, orthogonality as a principle of design will keep it relatively compact and obvious to client developers. They most likely don’t have much interest in reading documentation, but a few well-written practical examples demonstrating this idea will be good enough for nearly all needs.
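A minimal Ruby sketch of the no-side-effects idea, with made-up names: the read path is a pure lookup, and the tempting “magic” variants are deliberately left out.

```ruby
# Illustrative only: an orthogonal read path. Asking for a story returns
# just that story, in canonical form, with no side effects.
Story = Struct.new(:id, :title)

STORIES = { 1 => Story.new(1, "Snow in Brooklyn") }  # stand-in data store

# Pure lookup: no view counters bumped, no "mark as read", no inlined extras.
def fetch_story(id)
  STORIES[id]
end

# The non-orthogonal version -- fetch-and-mark-read, or fetch with comments
# inlined for convenience -- is absent on purpose; those jobs get their own
# explicit calls, so no caller is ever astonished by a hidden write.
```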

Smart version handling and clear policies must be in place at the inception of a good API. It’s got to have a supportable version policy (that is, how many versions will be supported, and for how long?), and a simple version exchange protocol so that clients can report the version they support and expect, and servers can respond with their version and level of support for a particular client call. Further, friendly deprecation of API methods is a way to soft-fail calls that are going to be removed or repurposed in the next version.
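Here is one way that version exchange and soft-fail deprecation might look, sketched in Ruby with invented version numbers and method names:

```ruby
# Hypothetical version-handshake sketch: clients report the version they
# expect; unsupported versions fail clearly, and deprecated methods still
# work but carry a warning instead of hard-failing.
SUPPORTED_VERSIONS = ["1.0", "1.1"]
DEPRECATED = { "photos.search_v1" => "removed in 2.0; use photos.search" }

def handle(method, client_version)
  unless SUPPORTED_VERSIONS.include?(client_version)
    return { status: 400,
             error: "unsupported version #{client_version}",
             supported: SUPPORTED_VERSIONS }
  end
  response = { status: 200, server_version: SUPPORTED_VERSIONS.last }
  if (note = DEPRECATED[method])
    response[:warning] = "#{method} is deprecated: #{note}"  # soft-fail
  end
  response
end
```

The point is that both halves of the exchange are explicit from day one, so retiring a method later is a warning in a response, not a surprise outage for clients.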

Things ought to be named intuitively. When the team at Digital Railroad took on API design, we made sure to eat our own dog food and be the first users of whatever resulted. Naming proved to be an unexpected challenge. I found that if we couldn’t name it easily, something about the design was likely amiss. Flickr and Amazon in particular have done a nice job of simple, consistent naming in their REST APIs.

Errors should be informative and specific, but use standard HTTP error codes. I worked with a technology provider in the photo world, who shall remain nameless here, that built a reasonably large API. It violated nearly all of the points I put forth here, but the most egregious violation was in the area of error reporting. Imagine working with a method that takes no less than 20 parameters, then returns a “-1” upon failure in the body of an HTTP 200 response, with no additional diagnostic info! This is extreme, but it really happened, until we ultimately shelved the project due in large part to this kind of API nonsense.
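For contrast, here is a sketch (with invented error codes and messages) of what an informative failure looks like when it rides on a real HTTP status instead of a “-1” in a 200:

```ruby
require "json"

# Illustrative error shape: a real HTTP status plus a machine-readable body,
# so a client can branch on the status and surface a specific message.
def api_error(status, code, message)
  { status: status,  # e.g. 404, 422 -- standard HTTP codes, never 200
    body: JSON.generate(error: { code: code, message: message }) }
end

resp = api_error(422, "missing_param",
                 "required parameter 'photo_id' was not supplied")
# A diagnosable failure: the client knows what went wrong and which field,
# instead of fishing a bare "-1" out of an HTTP 200.
```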

There are more real-world principles of good API design, so shoot me yours.

OK, How to Start!

As a quick follow-up to the “How to Start Up” post, here is a pragmatic list of tools, tried and true via 2 recent projects of mine, and at least one more upcoming (which will also succeed ;)). The question I’m answering, of course, is: “what tools to use to bootstrap development on the cheap, and get yourself something launched with minimal headache and maximal features and reliability?” This is by no means a recipe for success or a religious treatise on how things must be done. It is a (hopefully) practical guide to what has worked for me in the context of real world “How to Start Up.” None of it substitutes for good engineering and smart entrepreneur-ing.

  • Linux: goes without saying. Oops, I’ve said it! I like deploying and developing on the same Linux distro, and for now that’s Ubuntu server. I find Aptitude to be the fastest, easiest package management system around. It does what it needs to and stays out of my way.
  • Rails: Yay, Ruby on Rails! Everyone loves it, right? It must be the shit! Well, feel free to join the pile-on. For a rapid, well-organized Web framework, it’s my choice right now, and I am a relative newcomer, still learning. I do love Ruby to get things done quickly and flexibly, and Rails is a natural extension.
  • Ruby: despite performance limitations, and single-threadedness for now, Ruby is the perfect “glue” code. If you need a server prototype or something asynchronous for your Web app, it’s a fine choice to start. For scripting and automating tasks, Ruby is right on. After all, that’s really what it was intended for.
  • Java: For any sort of real server heavy-lifting, Java and its infinite world of libraries and components are the obvious choice. All the complexity that you don’t need or want for rapid development of a Web app is much more palatable in this context, because the benefits of performance, type-safety, threading, and dogmatic separation of concerns are well worth it. Lucene and Hadoop are great examples of very active projects with a lot of momentum, wide applicability, and lots and lots of good old Java. A requirement in my toolbelt.
  • Netbeans: I’ve done battle with IDEs over the years. Even pitted one against the other. I’m fickle, easily distracted, and never satisfied when it comes to IDEs. Right now I find Netbeans to be a clear choice for having struck the near-perfect balance between responsiveness, sophisticated editing, plugin-friendliness, and applicability for Ruby, Rails, and Java work. It generally stays out of the way, but there is always something useful still to learn, and a zillion timesavers built by folks who understand what frustrates me!
  • Postgres: Another natural choice, given the Prototype Trap, and assuming you need a relational DB. It’s very likely that you do, but not a given. PostgreSQL is proven to scale and cluster. Databases are quirky by definition, so putting your time into learning one that offers a lot of mileage is the name of the game. And, when you’ve got your funding, you will have a relatively easy time finding a PostgreSQL DBA to hire for your team. Oh yes, and I love PGAdminIII, despite its unfortunately uninspired name.
  • Amazon EC2: I’m new to EC2, but pretty solidly behind it. At Digital Railroad I had the opportunity to speak with the EC2 folks and a number of competitors in varying degrees of depth. Remote virtualization is very clearly the way to deploy in the early stages, and depending on the application and resource needs it may be right for the long haul. In the case of a current project, I was able to very quickly find a suitable base Ubuntu Server image, bring it up, and customize it appropriately. I’m a sucker for slick tools, and I must say this sucked me in further: ElasticFox.
  • 16bugs: Even if there are only 2 of you on the project, paper and todo.txt files will get old (and useless) quickly. Task and bug tracking are a must, and both are decently done and free at 16bugs. Don’t expect Bugzilla or JIRA-level workflow. Simplicity and just enough to do the job is what 16bugs is for, and it’s working well for our tiny team.
  • Yammer: This one is growing on me. While I don’t think Yammer is an essential tool, it does accomplish some things for small, geographically distributed teams. Things that aren’t obviated by project tools, IM, and email. My advice is to try it on a real project. As they say, your mileage may vary.

So there you have it. Not an exhaustive list, by any measure, but perhaps the means to get started on a lean, mean trajectory. It’s working for me.
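The “Ruby as glue” entry in the list above deserves a concrete example. This is the sort of five-minute automation script I mean; the directory and the one-week retention policy are made up for illustration.

```ruby
require "fileutils"

# Hypothetical chore: archive week-old log files for a small app.
log_dir = "/tmp/myapp/logs"  # made-up path; adjust to your deployment
FileUtils.mkdir_p(log_dir)

Dir.glob(File.join(log_dir, "*.log")).each do |log|
  next unless File.mtime(log) < Time.now - 7 * 24 * 3600
  # Rename rather than delete, stamping the archive date.
  FileUtils.mv(log, "#{log}.#{Time.now.strftime('%Y%m%d')}.old")
end
```

Nothing clever, and that’s the point: the standard library does all the work, and the whole task stays small enough to read at a glance.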

How to Start Up

With the endless proclamations of the startup funding apocalypse inundating all our channels, it might be useful to consider how to start up a software thing and do it well on the cheap. I’m not talking about using the “cloud” or how you ought to look at open source… there’s plenty written about those and they pretty much fall into the “duh” category.

This is as good as any characterization of what we are facing. It’s not good. But there really is a lot of potential opportunity, heightened by the dearth of folks who can comfortably funnel some personal money into a project. Innovation marches on, and the gaps to fill with smart products, quickly developed, are accumulating at just about the normal rate. A project spun up with truly minimal cash and a real market, not a speculative one, will succeed if anything can in the next 12 to 24 months.

In the past couple of months, I have had the opportunity to advise and look at a number of Web/software technology ventures that meet these criteria but are crippled, or doomed, because they just don’t know how to get started efficiently and set the stage for great things. Here is some of the advice I have given, and some thoughts that have come to mind of late.

  1. The Prototype Trap
    Beware of building a throwaway prototype, something that appears to work but really is nothing more than carefully propped-up demoware. In the good old days, such a diorama might have raised funds, but building it cost entrepreneurs money and served as nothing more than a starting point for spec’ing the real product. Don’t build a prototype; build an alpha, and define “alpha” in terms that accomplish your goals. It’s infinitely better to choose platform(s), language(s), deployment environment, and base features than to build garbage demoware. The initial results won’t be as pretty as the polished garbage, but when you see something working, it’s really working. Your team has identified real risks and pitfalls rather than emptily theorizing about real costs and unknowns.
  2. Hire/Partner with a Leader
    If you aren’t technical, or aren’t technical enough, hire someone who is. If you are human like me and need an invested collaborator, find that person as one of your primary efforts. Don’t accept that your tech lead has to be part of your outsourcer’s organization, or that you, the most accomplished, well-connected business professional that you are, can manage the risk yourself. Find a technical collaborator who grooves with your vision and working style. Offer equity for participation, hire a consultant who has a real interest in what you are doing and who will come aboard, or, best of all, spend on this full-time hire, offer equity, and build a real partnership for execution. Spend more than 50% of your start-up time and energy identifying and exchanging ideas with this person. It is an obvious and critical investment that will yield staggeringly good returns.
  3. Start Microscopic
    Don’t bite off more than you can code! Sounds obvious, but it’s such a common mistake. If you start by identifying the bare minimum functional requirements to demonstrate your mission with real, working code you are comfortable handing over to alpha users, potential investors, potential partners, and staff, and build nothing more… well, that’s really the start-up app holy grail. Few achieve it, believe me. I think the tendency is to fail to find the perfect mix of focus and flexibility. If a project takes in the neighborhood of a month to implement, chances are very high that during that time the team will learn so much more about the problem, the market, potential users, and competition that changes will happen. I’ve got some rules of thumb for this, and I will share them in a follow-up post. I know; you’d think this is common sense, but I can assure you that very smart, savvy people with excellent ideas fall into all of the traps I’m writing about.
  4. Interest Alignment
    Regardless of who is building the alpha, make sure your respective interests are well-aligned. Your technical leader will push on topics such as framework choices, development environment, source control, and implementation time. Equally important are User Experience and performance, which will float or sink the idea in an instant on the monitor of your first alpha user! I like to set milestones that are small and very well understood, and use them as checkpoints for Interest Alignment. Are the tools and framework choices holding up? What hackery has taken place to build this iteration, and is it something I want to inherit and support in beta and beyond? What code or approach was “borrowed” from another project? Are the resources still the best possible and invested, or are we slipping to the back burner? What User Ex compromises have taken place or are about to take place? These questions and more are relevant when implementation is done by outsourcers, contractors, and even new employees. And one immutable fact is that if you are outsourcing, your outsourced team has fundamentally different interests than your own. The closest those interests align is in the goal of having your business survive. But you need to do much more to lead the market and innovate, while they need you to survive and continue providing a steady flow of income.
  5. Alpha Wise and Beta Foolish
    Go into it assuming you are going to succeed, where success here is defined as moving from alpha to beta and eventual launch with real users and enough funding to do it all justice while not starving. Getting apps to market, and a large component of the art of development, is compromise. Finding the optimal trade-offs as goals change and timelines compress is not scientific, despite what a lot of books, brochures, and coursework purport! You often have much more wiggle room with the alpha timeline than with subsequent ones. I always try to use this to my advantage, insisting on building enough of a foundation to support what we imagine will subsequently sit atop it all. It’s very important not to shortchange your initial implementation, and to have the confidence to build solidly. There will be compromise, but don’t rush these decisions; a quicker-to-build, shaky alpha will always cost more to revisit as you are sprinting toward beta and launch than the cost of building solid and letting the foundation have some settle time in alpha.
  6. Outsourcing: The Glass is Half Full and Evaporation
    Assume the worst when you are outsourcing. There are many, many strategies for managing outsourced engagements, but the best is to assume, at the outset, a pessimistic disposition. If things fail in some way, you will be prepared to minimize and deal with the consequences. Competent outsourcing managers and teams expect this, and it has little to do with your confidence in them, or their ability to earn your trust. Since Interest Alignment is implicitly handicapped when outsourcing, both parties must make the most of it. Excuse this analogy, but you are a client, and they are the server. Code defensively! Extending the glass analogy, I like the term “Evaporation” to describe what happens when that half-full glass sits around exposed to natural factors. Essentially, if you don’t put effort into ensuring a quality product via testing, frequent contact and brainstorming, and code inspection done in-house, you face Evaporation. Over time, that half-full glass gets emptier and emptier.
  7. Outsourcing: It’s Your Source Code
    Always have access to your source code and documentation. Ensure that code builds and works on your hardware, and subject it to your high standards. Know the dependencies yourself, and ensure that dependency bloat doesn’t occur on your watch. The unthinkable extreme case of source code kidnapping happens more often than most people realize. But death by a thousand cuts can occur when your team is not vigilant about quality. Your app will bleed to death from a zillion cut corners. Keep an eye on updates every day. After all, this is why you hired or partnered up with a rockstar technical leader.
  8. Live with It!
    Well before real users assail your system, make sure you and your team understand all of the implications of running it. How many machines or cloud slices do you really need? Does the code recover from failure? Does it care? What registers as a catastrophic crap-out worth waking someone up at 3 am, and who exactly gets that call? Rock-solid stability is not expected in alpha; after all, that’s why it’s alpha. But if you aim for better and achieve it, you will stand out. Live with the system you are deploying for a week before unleashing it. It will probably result in some work, and living with an improved system for another week, but the time burned is another kind of important investment you will not possibly regret.

Starting things up isn’t easy or straightforward. But it ought to be fun, predictable, and inexpensive.

DRR and “What Happened?”

I am the former CTO of Digital Railroad, Inc. With the very difficult shutdown of DRR behind us, I’d like to set the record straight where I feel it is ethically OK to do so. Photo industry commentators, outside the fray and at times without a good understanding of what happens when a business is in distress, have attempted to lay out a time line and answer the elusive question, “what happened?” I’m writing this to correct some of the misinformation, and put forth clearly that the creators of Digital Railroad did everything possible to prevent the difficulty that our loyal customers are now enduring.

Among the things I cannot do are single out any individuals or companies, neither to praise nor criticize them, and please know that both praise and criticism are well warranted. I also cannot correct or clarify all of the misstated and erroneous information on various blogs, but I will make a few important points.

I hope that among the readers of this post are some of you with whom I had direct interaction through this difficult period. Each of you knows how the small DRR team that remained to the end (and beyond) fought hard to bring about a better resolution than was ultimately possible.

The most well-intentioned, objective explanation was posted by Allen Murabayashi of Photoshelter this week, and you can read it here. Allen was an insider to a degree because of his good work helping push for salvaging DRR photographer archives, which would have benefited our members and Photoshelter, both. As he reported in other posts, he also was in touch directly with investors and the DRR management team during this time. Nonetheless, his account gets some of it wrong. He writes:

“Portions of this document might be factually incorrect – I don’t vouch for the complete veracity, I’m just trying to shed some light on the situation, so that photographers can gain some understanding of the situation.”

True and honest. Thanks also, Allen, for not speculating on how DRR came to face financing challenges prior to the publishing of John Harrington’s post. DRR was a company with significant revenue and also significant investment. The economic downturn presented us with a time line for additional financing against which we could not deliver a solution. This despite strategic opportunities that never came to light publicly but were very much in play. Any of those opportunities would have resulted in a remarkable future for DRR and our members. But, they were not to be realized.

When Diablo Management was retained, its mission was not to liquidate the company, but rather to identify and accelerate any of a number of potential deals which would have preserved the business and the platform. Shutting down the company was called for only after the funds allocated to pursue potential deals ran out. Note that employees were not all let go simultaneously, and even some who were let go volunteered time to continue efforts to salvage the business, or minimally to export photographer and agency assets.

“Potential acquirers were asking questions, but there was no one at the company who had intimate knowledge of the business. Diablo tried to assemble answers, but they didn’t really know the entire situation.”

That’s not correct. I personally dealt directly with most of the folks asking questions, and with others indirectly. I am aware of every potential deal and did not turn my attention away from them until all were exhausted. During this time the team also made contact with a number of DRR members who had very difficult business situations to resolve, and these members were able to do so with the help of the staff who remained. I cannot sufficiently emphasize the fact that the dedicated team our membership came to know over the years went well above and beyond the call during DRR’s final days. Of course, I know this means little for those who have been dealing with major headaches as a result of what transpired.

Finally, Photoshelter’s assessment of the technical issue encountered exporting images via FTP syndication is close to what transpired. The syndication subsystem simply could not keep up with demand during peak times of the day. DRR staff (or former staff) were monitoring to a degree and taking action, but keeping systems healthy became impossible when access to office equipment, VPN’s, and DRR engineering tools was no longer possible. At no time was access to images deliberately blocked, as has been claimed elsewhere (not by Allen or Photoshelter).

“It is easy to see how DRR’s demise can damage people’s belief in the entire space for online archiving, portfolio, and digital storefront providers.”

Agreed. It’s my hope that faith in the space and innovation around it are not damaged for the long term by our story. DRR’s fate is not a result of the business we were in, or the platform we built, but a consequence of strategies and gambles often taken by early-stage business in order to grow. In our case, accumulated risk and recent severe economic developments combined to close off one opportunity after another, until none remained.