The Null Terminator

Ethan Ram’s geeky blog on the seam of technology and product management.

The Angry Flying Chicken Rocks

A review of what makes Rovio’s Angry Birds so good

Yesterday I found myself buying Facebook Credits for the first time in my life. No app on Facebook has managed to tempt me into buying its goods up until this one. None of the addictive Zynga games or any of Playtika's slot games made me convert into a paying customer. Not even GameGround's tournaments. So how come I find myself paying for a virtual earthquake that shakes pig shelters???  Worse than that: I'm re-playing levels I already passed months ago, just to earn a couple of extra level points and beat my friends to the crown, at 3am. This is not like me at all…

These guys must be doing it right if they manage to convert a freebie-lover-non-ad-clicker like myself.

Actually I started thinking about this game several months ago, after a successful session of pirating my way past the registration requirements, when they had just released the PC version on the Intel App Store (Intel must have paid Rovio a fortune to get exclusive rights to this game as a standalone, install-based game on their app store. But I'll save that story for another post). It's the first time in years that I've hacked a game… and it really made me think – what makes it so successful? It wasn't following any of the recent trends in gaming. In fact, if I described its basic nature to any of the VCs I've come across, they would have dismissed me, saying I'm old-fashioned, even a dinosaur.

  • Its graphics are basic 2D.
  • It doesn't use any of the newer devices' capabilities like motion detection, movement sensors, light sensors, camera etc.

[up until their recent Facebook App launch…]

  • It has no social features to make it viral. Not even a friends-invite or a share-this. Nothing.
  • It’s an offline game: no user loyalty features, friends leaderboard or level downloads
  • No virtual goods or anything to buy inside the Game.
  • You have to pay to get it. No free lunch [at least until they released an ad-sponsored version on the Android App store]

So what is it? It seems to defy the common knowledge of how to be successful in an always-online world. How do they do it? How do they manage to be the #1 selling app world-wide for months and years? Two years after the game's release, and it's still growing.

This is the (secret) recipe:

  • A background story that appeals to all ages, classes and nationalities: Farm animals leading the cast; the heroes are fighting against the villains, willing to sacrifice their lives to save their unborn siblings.
  • It's dead-simple to play. A no-brainer, kill-them-all-before-you-get-to-the-next-level game flow where the learning curve is gradual. Anyone aged 2 and above can play this game…
  • A clear and distinctive comics-like theme, graphics and sound effects. One that makes you smile even when you perform miserably. Even after completing 100 levels.
  • It has a basic physics engine that makes all the flying things look somewhat real, with a bit of a supernatural feel, but certainly not sci-fi.
  • A truly casual game: one finger to play, with short game rounds of a few seconds each. This really makes it suitable to play anywhere and anytime – on the platform waiting for the commuter train, in a boring class, while waiting for a meeting to begin or while the stupid TV ads are keeping you from your favorite weekly episode. It works on any form factor and most devices too, from my 2.5” x10-mini mobile to a 50” media-center TV.
  • Levels are unlocked one at a time, adding new features and new required skills. That's the basic thing that intrigues you – it makes you want to finish yet another level, just to discover what the next one will bring.
  • Some surprises to spice up the stew: find all golden eggs, bonus levels, monster-pigs etc.

If this all sounds to you like a quote from a basic game-developer manual – you are right. Nothing here is big news. The true genius is in taking all the basic elements and combining them into a truly good game. This game is a combination of Pac-Man, Spider-Man comics and the movie Madagascar.    🙂

And now for the goodies in the Facebook version:

  • It adds new features to the game just when you've gotten bored of the offline version.
  • The friends leaderboard and level score system is excellent. You find friends who played the same level and scored more points than you did, but not much more, so you don't feel cheated, under-skilled or bot-deprived.
  • You buy extra skills, not "just" virtual goods. Things like better aiming ability or a bit more throwing power. This makes it easier to pass a hard level without going to YouTube to look for a solution.
  • They don’t push you into buying their stuff. You get some free extras, and then you have to buy more, if you want to use more.
  • They celebrate your high scores and leaderboard achievements, adding a new groove to the offline game.
  • Ok. Ok. I admit! It was the 'buy 5, pay for 1' promotion that they had. This was the straw that broke my extraordinary resistance to buying stuff on Facebook.

What an amazing promo!

I'm now a converted Facebook user and a happy gold-crown holder of Angry Birds Level 1. Let's see if you can beat me!

 

p.s. kudos to Jaakko Iisalo, the designer who came up with the Birds’ concept art and then played a key role in developing the early versions of the game.


Making Google Chrome’s search default to google.com

On the most useful hack I’m using too often…
 

This post is a bit unusual – I'm going to help you revert the single most annoying feature of Google Chrome: it forces you to use the local Google site when searching from the Omnibox. If you live outside the US I'm sure you've seen this thing – almost everyone is using a local version of Google. So, in my case, it's been defaulting to the Hebrew version, google.co.il (right to left!).

I cannot work like that!!!

I see so many ppl who have given up on clicking the 'Google.com in English' link every time they open their browser and have somehow gotten used to their search engine talking to them in Hebrew, whatever… Hopefully some day soon someone up there will put this setting in a decent checkbox in the options and let it sync to my profile. Pls pls pls.   (no chance!)

Recently it’s got worse: Chrome is now trying to convince me to google in Estonian…

Here's a simple 1-2-3 guide on how to get rid of it.

1-      Close your Chrome browser.

2-      I know you closed it. But it's most likely still running, so kill any remains of chrome.exe from Task Manager.


3-      Open Notepad then open the file Local State under the folder C:\Users\<user>\AppData\Local\Google\Chrome\User Data

It's there, so if you can't see it, change your file filter to All Files or search for '*'.

4-      Now edit the 2 lines near the beginning of the file where you see the local Google site. Change them to read ".com" instead of ".co.il" or whatever local version they put you on. (If you'd rather script this edit, see the sketch after these steps.)

5-      Save and exit

6-      Open Chrome. Search.
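If you prefer to script the edit instead of using Notepad, here's a minimal sketch in Node.js (assuming you have Node installed). It simply swaps the local domain for google.com everywhere in the file; the exact JSON keys inside Local State vary between Chrome versions, so a blunt search-and-replace is the safest bet. Change LOCAL to whatever domain Chrome forced on you.

    // fix-omnibox.js - run with: node fix-omnibox.js  (Chrome must be closed)
    var fs = require('fs');

    var path  = process.env.LOCALAPPDATA +
                '\\Google\\Chrome\\User Data\\Local State';
    var LOCAL = 'google.co.il';   // the local version Chrome put you on
    var WANT  = 'google.com';

    var raw = fs.readFileSync(path, 'utf8');
    fs.writeFileSync(path + '.bak', raw);                 // keep a backup
    fs.writeFileSync(path, raw.split(LOCAL).join(WANT));  // blunt replace
    console.log('Done - start Chrome and search.');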

Breathe deeply. Your life just got a whole lot better  🙂 🙂 🙂

p.s. Once in a while they will offer you to switch back – don't get tempted…

Soluto Frustrations

On why I was deeply unimpressed with Soluto’s product

The last week hasn't been an easy one… I've got a brand new Lenovo X220 – and it's giving me a hard time. For over 10 years now I've always had a ThinkPad laptop (except for a couple of years with a MacBook – but that's a story for another post) and I was always very happy with it. But this time…

Boot is stuck for over a minute between login and getting to see the desktop!

My laptop gets stuck forever 2-3 seconds after I go wireless on my workplace Wi-Fi network!!

Changing writing language too often doesn’t work – I’m getting stuck in Hebrew forever!!!

Fingerprint software is dead…

A PCI port is missing a driver and Windows keeps complaining about it…!

The Lenovo System Updater gets stuck forever when I run it to check whether I'm missing some updates… errr.

In general – I've gotten to the point where I have to reboot the computer 2-3 times a day… err err errrrr!

…And I'm a software guy, an expert in Windows internals – right? So I must be able to find the cause of these. But hell – it's a new computer running the latest software – I don't feel like spending a day resolving these. Or maybe return the thing to the IT department and let them break their heads over it (still, I'd have to take a day off or work on a temp PC that doesn't have my configuration… bad idea!)

WOW – after a couple of days I hit some major frustration, losing over half an hour of work. ((Remember, this was actually common in the '90s… but things have surely changed since… or maybe it's only me??))

Then I remembered the slogan of a young and apparently very successful startup from Tel Aviv called Soluto – "Soluto is bringing an end to PC user frustration …killer technology". They have millions of installations and have gotten so many positive reviews and prizes. It seems that Yishay Green, Roy Karthy and the other guys there are really managing to create a buzz and get some heavy funding. Actually, I remember I even tried one of their early betas after hearing one of their founders talk about his former startup successes… I had to give it a try. Maybe 15 minutes with this could save me the hours of digging into what's wrong with my PC.

I'm not going to write a full review of my experience. Just a few bits. They have a smooth installer and the UI looks great, although it is a bit slow to come up with answers. (I even took some time to help them define what some of the more obscure apps I'm running on my laptop are – after all, it's very much a community project.)

But then I hit this…

So they found that Google Chrome runs at boot, taking 29 seconds of my boot time. But they cannot do anything about it. WTF?? This is simply wrong. I actually ran a boot profiler myself (Sysinternals Procmon) and Chrome does not run at boot at all… And why do they say they cannot do anything about it? Can't they help me uninstall it?…

They offer to remove some 10 pieces of software from the boot process, most of them with no clear explanation of what they do, and each taking a full 0.1 seconds of my life… Who cares?! But check out this suggestion – "pause it unless you connect to a network on the internet using an Intel wireless network adapter" – even a pro like me got confused for a minute. This is cryptic to 99% of the world's population. I'm sure those poor 10% of people who chose to follow the advice and disabled their PC's wireless really have no frustrations now…

Then it offered to remove 8 of the 18 plugins I have in my Chrome browser. They say I have 2 Chrome toolbars and six more plugins that can safely be removed… WHAT??? Toolbars in Chrome?! There are no toolbars in Chrome like there are in (poor) Internet Explorer; and those 6 plugins I use are very useful and consume somewhere between ZERO and NOTHING in processing time. Why do they suggest I remove them? Why do they tell me that 26% of users disabled their Multimedia Plugin (and gave up media playback capabilities in the process)!?

I have a few more examples, but I think the point I'm making here is well understood. So I'll conclude with this strange behavior: it takes Soluto 6 seconds to open its own About dialog… at least the first time after every boot.

As you can understand, it did not find my issues or help me fix them, so we parted as friends (a somewhat frustrated PC-user friend, though).

OK – OK. I know I'm probably not the average guy, probably not their target customer, and/or my new PC is not their target PC because it's new. I can even think of some cases, when I was called in to help fix a dysfunctional computer, where this utility could actually do some good. Still, I felt I had wasted my time playing with it.

So here is my advice on what could be some excellent features for the next Soluto version – features that may actually make a difference. And if they don't add them to Soluto, you, my dear readers, can still follow them and fix your frustrating computer yourself.

  1. Enable Windows Update to automatically update your PC. 99.9% of ppl should absolutely automatically update their system. Always.
  2. Install a basic anti-virus and anti-malware tool and keep their signatures updated. Also enable the Windows firewall. In 90% of the PCs I was called in to help with, these were not set up right. Microsoft's free anti-virus/anti-malware and firewall are excellent alternatives here that require close to zero user interaction/intervention and get their updates from Microsoft as part of the Windows Update service. So simple.
  3. Uninstall most of the stuff that comes bundled with the original PC and with hardware drivers: the HP suite of tools and games that came bundled with the printer drivers, the Canon suite that came bundled with your latest digital camera, etc.
  4. Uninstall those Shell/Explorer extensions, Internet Explorer extensions and the rest of the known evil-doers. If you don't know which of them is evil, just delete them all – you won't miss them.
  5. Prevent about 15 Windows services from loading at boot. Some can be delay-loaded and some can be set to 'manual' start. Here's the list I use.
  6. Uninstall redundant drivers that haven't been used for a while. Same for applications that haven't been used. Things like that old printer you once had in the office, the Nokia suite that used to sync your late mobile, etc.
  7. Run a registry consistency and temporary-files cleanup tool like CCleaner, which I use regularly. (CCleaner actually fixed some of the issues I had – see above.)
  8. Analyze the system event log and saved mini-dumps to find problematic hardware and drivers causing issues with the PC. Just recently I’ve had to replace a display adapter that was blue-screening my home PC every other day.
  9. Assist in updating the display adapter drivers to the latest version – a very common problem.
  10. Rebuild the TCP/IP stack and remove any Winsock/network providers.

Enough said. I still have a couple of issues on my new laptop to resolve today, although I've already found and fixed most of them 🙂

One twist I really liked in Soluto: they have this little tray-icon menu where one can click "My PC Just Frustrated Me". I clicked it a couple of times, not knowing what would happen. It seems to do nothing – no dialog opened, no thank-you message. Nothing. Maybe they are just collecting data for their next version or something. Maybe a bug? Strangely, after clicking it I felt somewhat relieved. Like a small steam release.

On how to go Agile without missing your yearly quota/deadlines

Going Agile in a B2B Company – Part 2

This is the second post about going Agile in a B2B company. Read the 1st part – that's where you'll find the why. This post is about the how.

Well… So much has been written on how to go Agile and what an Agility project looks like; I'm not going to repeat it. I'll give some general key guidelines – so that when you go on reading about Agile methodologies you can look at them with the right perspective; when you go to your managers to ask for Agility-project funding and time, you come with a good plan; and when you speak with an Agile coach, you can see how well he fits into your Agility project, rather than him dictating what it should look like.

The first thing to know is that going Agile is not a one-time project. It's a management philosophy. But if you haven't been practicing Agility in your company, you should start with an Agile project with set goals – ppl like to have clear goals. Still, you should always remember that being Agile should be a general long-term goal: a goal to move fast, to be more productive, to beat the competition.

A working company cannot and should not stop everything (development, sales) to put an Agile project in place. No manager would approve that, even in a startup. Most development groups invest about 25% of their time in building infrastructure and refactoring existing code. The initial Agility project will certainly require more resources than that, but it should not stop all other functional development.

Agile projects are all different, because organizations are different and their Agility focus is different. Setting "standard" ultimate goals to achieve in a 6-month project is not realistic. Your managers will not understand where you are heading and what the urgency is (and you'll miss your quota/deadlines). It's better to educate your staff on the Agility concepts you want to embed in your process and set short-term goals that people understand. Then continue evolving in the Agile direction.

An Agility project should ultimately reduce development time, shorten release cycles, improve product quality and make your development team happier in general. But these are hard to quantify, and it's hard to "sell" an Agility project to higher management if you talk about these goals. So here is an idea – start by setting project goals where your product lacks the most, in ways that hurt the SALES cycle and OPS. If you manage to improve there, the benefit is measurable, immediate and noticeable. You'll also make some other department managers happy about your Agility project, and it'll be easier to set new Agility goals.

So if it's taking 5 days of work to configure each new customer environment – focus on resolving that with Agility thinking and tools in mind, maybe by automating the installation and configuration process. If your QA cycle takes 2 weeks from code freeze to release – that's an excellent place to start your test automation. If you're having trouble upgrading customers' databases every time a new version is released – set fixing this as one of the first goals of your Agility project, for instance by moving your database schema to the main code branch and building it as part of every build.

Introduce your managers (group leaders, team leaders) to the Agility philosophy and tools first. Set clear Agility goals for 2 months ahead and get your managers involved early in figuring out how to reach those goals in time. Prepare for some resistance – ppl don't like change, and in many cases they will see it as extra workload, even if they agree with your long-term goals. When you have a plan, gather everyone, explain the Agility 'why' and show them the plan.

Automation… Automation… Automation!

Many of the Agility tools and methodologies involve automating manual work – build servers, automatic unit testing, integration testing and QA automation – and all of them require scripting and batch files. You'll quickly find that your developers prefer coding in Java/C# to scripting in Ant/csh, and that only a few QA engineers can script. This means ppl will need to adjust, evolve and learn new stuff as part of an Agility project. I found that I had to replace some of my QA staff with people who had automation and scripting backgrounds. I had to give my developers some time to learn Ant scripting so that they could contribute to the Agility project in the short term and continue developing in an Agile way later on.

Task force

You'll surely be creating some new development infrastructure to enable the Agile process to happen: a continuous integration server, updated workspaces/projects for your developers that include built-in emulators for unit testing, a new product packaging/installer, etc. It's wise to have a task force in place to build this infrastructure, which will then be used by everyone else. Build the task force from engineers from all teams. This will help make sure that planning takes all relevant aspects into consideration, and that when the initial infrastructure is ready there are already engineers in every team who know how to use it and can teach the rest.

The new infrastructure will soon affect every engineer's daily work, so it had better be stable and solid enough that it doesn't break everyone's work and halt development completely. The task force should build the infrastructure and emulators, then the first few unit tests and scripts on top of them, to see that it all actually functions and can be used by everyone else as an example. You can also start using the new infrastructure and test automation in parallel with the older process and throw warnings on unit tests that fail, rather than block the process.

In GameGround, after the initial infra was developed, we took 80% of the engineers (everyone but a few left to fix blocker bugs) for a week of writing unit tests, so that everyone had a chance to experience writing unit tests for their code, use the new infrastructure and understand how development was going to change. It also meant that by the end of that week we had a significant amount of our code tested regularly by our continuous integration server – 10%-20% of functionality. A good start.

Well, I must say, in reality, it was (is) much harder to achieve… I’ll try to write about the hurdles I’ve had in one of my future posts.

p.s. the post really had little to do with "yearly quota" – but it goes to show that an Agile project can be done without derailing the company's yearly planning…

OPs / Production services review: Nagios, Kiwi Syslog and Limelight CDN

A Review of Services I’ve used in GameGround – Part IV 

This is the 4th part in a series of blog posts reviewing several 3rd-party products and services I've used in GameGround and my take on them. The basic approach I'm taking here is the applicability of the product for a lean startup that wants to move fast. In the last post I wrote about analytics and BI reporting tools for the marketing team. This post is about monitoring the health of the system's servers – for the OPS team. Next in the series – development infrastructure.

Nagios

Nagios probably is "The Industry Standard in IT Infrastructure Monitoring", as their slogan says. It's very popular among IT staff and can be configured to monitor and alert on up to 40-50 servers, so even a medium-size company can use it. It's free server software – basically a scheduler that executes service checks against installed agents and tests against network devices, reports back the results and raises alerts above predefined thresholds. There's also a comprehensive list of extensions, or plugins, written by the community that can be used to monitor just about anything you'll ever want.

It's easy to set up Nagios to watch server disk space, CPU and the existence of certain services. The difficult part is creating checks that alert you when internal parts of the software behave irrationally and users are not seeing what they should – e.g. certain transactions do not finish in time, server response time for certain requests is going up, users suddenly cannot see their friends list, etc. These are much harder to watch. To monitor them you'll need to write code on your back-end servers – special functions (REST/WSDL) that do some internal testing and return true/false accordingly. Nagios can call such functions periodically and alert if they fail. It's an evolving process: you'll see your system fail without Nagios alerting about it, and then add more of those checks until it functions well.
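To make this concrete, here's a minimal sketch of the pattern in JavaScript/Node – not GameGround's actual code. It assumes the back-end exposes a hypothetical /internal/health URL that runs internal tests (queue depth, DB round-trip, friends-list query…) and answers 200 only if all pass; Nagios then runs this small check script on schedule and reads the standard plugin exit codes (0 = OK, 2 = CRITICAL).

    // check_app_health.js - a Nagios plugin sketch; register it as a command
    // and Nagios will execute it periodically. Host and path are hypothetical.
    var http = require('http');

    http.get('http://app-server:8080/internal/health', function (res) {
      if (res.statusCode === 200) {
        console.log('OK - internal checks passed');
        process.exit(0);   // 0 = OK
      } else {
        console.log('CRITICAL - internal checks reported failure');
        process.exit(2);   // 2 = CRITICAL
      }
    }).on('error', function () {
      console.log('CRITICAL - health endpoint unreachable');
      process.exit(2);
    });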

So – it's wiser to add some testing functionality at design time: plan your server modules to expose Nagios testing APIs. You'll also need to watch that your 3rd-party providers are working right: if the A/B testing API you are using is down, then your site is probably down too. If your Content Delivery Network is down, ppl are not seeing your website, even though everything is functioning on your side.

Nagios – the CONS:

  1. Nagios was started in 1999; the core is written in C, with much of the plugin ecosystem in Perl. Although new versions have been released, it feels like an old product and things seem harder to achieve than what we're used to these days. Most of the checks have to be configured manually in config files, including thresholds for alerts, the number of times a check may fail before raising an alert, etc.
  2. To achieve the functional testing mentioned above, and to integrate with all the plugins for the different OS types and monitoring you'd want to do, you'll need to code some scripts on the Nagios side (or at least edit existing ones). That's Perl coding. Although Perl knowledge is still quite common, it's fast disappearing from the planet. It's already hard to find IT managers who can code it, and younger developers haven't even heard of it… We ended up getting outside help to create the basic setup. So much for free software…
  3. The learning curve is long. Expect the system to text-message you false alarms at 2am, telling you the system is down, for a few months, until you get the thresholds right. Expect your CEO to call you at 2am to tell you the system is down but there was no alert… Lots of things went wrong in the live environment of (only) 4 servers we had in production at GameGround – it took us about 3-4 months to get to a relatively solid Nagios setup that actually alerted us on most of the real problems.
  4. Some of the things to watch for ended up being certain errors written into the different servers' log files. These may be critical bugs and exceptions thrown from bad things happening down in your code stacks. You can set up Nagios to grep the log files for those strings, but this is very heavy on your servers and on the network. Better to have a proper central log server with alerts (see below). But then this actually means you're going to have 2 monitoring systems – one is Nagios and one is the central logging server.
  5. Nagios is good at giving you a green or red sign next to your servers/services. But in reality, managers want to know ahead of time that things are going in the wrong direction: queues are not emptying fast enough, response times on some requests are mounting. Nagios is no good for those tasks. You cannot use it to create graphs, and its dashboard is not flexible.
  6. You have to manually define each server and each service you want to monitor. This does not work for cloud-based environments where adding a server instance is done with a click, or even automatically.

I don’t know of a good alternative. But I would like to see something that combines system health alerts with Syslog analysis and a real-time configurable dashboard. Any ideas?

Kiwi Syslog

If you want good insight into what's actually happening on your servers you must check the different servers' logs. Getting all the logs from all the servers into one place and automating the search for errors, exceptions and irregularities is key to having a healthy production environment. The first product we checked, following warm recommendations from friends, was Splunk. It has an excellent, easy-to-use web interface and the setup is very easy (assuming your servers are written and configured to upload syslog/log4 output to a central server…). But Splunk is VERY expensive – even for a small server setup like ours they asked for something like $6000/year. The free version is only good for internal testing and running on top of QA systems; for production you'll need the enterprise version. It does not make sense to pay that much in a startup… So we checked Kiwi Syslog.

Kiwi Syslog is a relatively small piece of software made by a NZ company. Their main interface is a Windows-installed client, but they now also have a web-based dashboard that gives you the most important features. It's easy to set up and work with. It's cool. And it costs about 2% of Splunk's price. Go Kiwi Syslog! Go!

Limelight CDN

Working with a Content Delivery Network is an important factor in speeding up your pages' loading time. When we tested before-and-after, we saw a dramatic decrease in first-time page load, from 3-4 seconds to 2-2.5 seconds for US-based users. On later widgets and pages the load time was about 30% faster. This is a lot! The other reason you'd want a CDN is that it takes a large percentage of the traffic off your servers – so you'll end up needing fewer servers and paying less for traffic.

The basic service a CDN offers is speeding up the delivery of static content (images, CSS, JS files). The advanced services CDNs offer are media streaming and something called Whole Site Delivery – out of scope for this blog post. For a small site/service you're going to pay $1000-$2000/month for the basic CDN – that may not be too bad considering the reduced costs on servers and traffic.

If you know you're going to use a CDN, you can write your code and delivery procedures in a way that makes starting to use a CDN just a flip of a config file entry. If you already have a website/service functioning without a CDN, you'll probably need to do some work to separate and version the static files correctly and add proper configuration everywhere. So, with the right design, you should be able to integrate with a CDN, change CDNs or stop working with a CDN in a matter of minutes.
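Here's a minimal sketch of what that "flip of a config file entry" can look like: every template builds static-asset URLs through one helper, and a single config switch decides whether they point at the CDN or at your own servers. The host name and version string are made up for illustration.

    // assets.js - one config switch turns the CDN on or off for the whole site.
    var config = {
      useCdn:  true,
      cdnHost: 'http://cdn.example.com',  // hypothetical CDN hostname
      version: '1.0.42'                   // bumped per release so caches refresh
    };

    function assetUrl(path) {
      var base = config.useCdn ? config.cdnHost : '';
      // Versioned paths mean a new release never serves stale cached files.
      return base + '/static/' + config.version + path;
    }

    console.log(assetUrl('/css/site.css'));
    // CDN on:  http://cdn.example.com/static/1.0.42/css/site.css
    // CDN off: /static/1.0.42/css/site.css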

So the story goes like this: we decided we had to have a CDN because every millisecond of page load time is critical. This was before launching our initial service. We went shopping and were surprised – it seems that most of the bigger CDNs were not willing to work with us at this stage at all. Even the local rep of Cotendo (a startup sharing a VC with GameGround) never returned a phone call… Luckily, the local rep of Limelight was willing to take the deal, and after a couple of weeks of negotiations we switched on the config and it was working well (we did have a couple of config issues – minor faults on our side).

Q: Should a small lean-startup deploy a CDN as part of their initial release?

A: NO NO NO. It's expensive, and signing up with the local representative of a CDN will consume too much of your time.

Q: Should a lean-startup write their code with a CDN in mind?

A: Yes! Sure! This will allow you to speed up your site and offload traffic if and when your site/service is showing some signs of success. Coding with a CDN in mind won’t make it slower anyway.

Q: Can you give some hints on how to design it right to work with a CDN?

A: I promise to have a post about it later on…  << but if you have a specific Q – ask it in a comment below

Q: Are there no free/cheap alternatives?

A: There are! Check out this post about using Google App Engine as a free static-data CDN. Also – this post about using Dropbox as a free CDN solution. Note that if the delivery of resources from those unofficial CDNs is not faster than delivering them from your own site, then adding a CDN configuration might actually slow your site down. Beware!

Going Agile in a B2B Company

On why Agile is the right development methodology on non-internet software companies too

Of the 14 years I've been developing software, 10 were with companies doing B2B software (intended to be sold to another business, as opposed to B2C – software aimed directly at consumers online, etc.). In recent years the Agile development methodology has been growing strong, and a recent Forrester study shows that over 40% of development teams in the US now use some sort of Agile development methodology. I've heard of Agile projects in some of the larger companies and had a chance to "upgrade" my own development department to work in an Agile environment (we took Kanban as our preferred Agile approach). This blog post is not going to be about my experience with Agile, though. Instead I'm going to tell you about a talk I had with a friend who told me Agile was not for his (awesome) B2B software company, and my response to that. These are the reasons he thought Agile was not for him:

  1. The company's product release cycle is 3-6 months – they have a larger release and a couple of smaller releases, or Feature Packs, yearly. This is what their sales and channel guys are used to and can handle. They will not accept a daily/weekly/monthly release anyway – this is not how they work.
  2. Every other year they have at least one of those larger development projects where they tear apart large parts of the system and re-architect them. This takes several months to develop and runs in parallel to other projects. Those projects will not fit into an Agile development environment.
  3. They cannot afford the time it would take to go Agile. They are too tight on schedule to allow their devs to go back and add unit tests to existing code and build development infrastructure for automation – and test automation is key to any kind of Agile-ness. Right?
  4. Their QA manager is happy with the code he gets and the released product quality is fair.
  5. The company is doing well in general – why change?!

At first this all seemed logical to me, as I knew that the real power of Agile development lies in quick release cycles ("give something small to your customers often") and in cases where the software quality sucks. Anyway, on second thought, these are the questions I asked –

  • Are you, the product manager (or company manager in a small operation), happy with the time it takes from when you identify a clear customer need to the time you have a new feature you can sell to that customer? If I told you that you could potentially be in a place to define customer-specific releases that would have the new feature out in 3 weeks – how much would that be worth to you? ((AND – your R&D manager is not going to freak out about this last-minute change, because his yearly timeline is getting f**ked and his people don't like working Saturdays.))
  • How much time do you spend before every major release reviewing product requirements and development estimations, trying to fit 30 new features into the development time of 10? Prioritizing again and again, only to find out 4 weeks before release that 20% of the features you were guaranteed to have will not make it in time – or the deadline must be pushed.
  • How many times did you want something added to the upcoming release, because a large customer deal was pending on it, and your R&D managers said there's no time and you'll have to take something out of the release – or the deadline must be pushed…
  • Why is it taking the QA team 2 full weeks to test a new version release? Does it have to be like this: they first get a version that just doesn't work 4 weeks before the deadline; then comes the ping-pong between R&D and QA till the version stabilizes; the open bug count drops until the ultimate go/no-go meeting, where the QA manager signs off the version at the end of an extremely hectic and sleepless month. Is there no better way to meet the same goal?
  • It happens that the more lines of code we have (and we have more every day), the more QA personnel we need. We often get to the point where the QA manager tells me "…we haven't had the time to complete the testing of this version…", "…we need more QA engineers just to complete the STP in time…" – we often release without completing the QA cycle, and indeed we often need to patch the version a week or two after release because it has some major bugs. WTF???
  • The product manager often complains that the developers missed some of his instructions in the PRD and that a feature is f**ked as a result. Then the QA manager complains that the version has some features missing and some new features he did not know of, and one of the team leaders tells him that some things were changed and shows him an email thread from 2 months ago where those new features were detailed. Well, sort of detailed.
  • Your development manager is asking for 8 weeks of developer time to merge a new, already-tested feature into the main code branch. The Mac OS X product was written on a branch that was never merged back to the main branch, and devs keep complaining that it's time to move to a better/more modern/faster source control server because those XML files are never merged right and there's so much manual work with every code commit. There must be a better way.
  • And the competition is there. They are fast. How come they react faster than we do?

Well – you’re expecting this – go Agile.   🙂

I'm not saying it's magic. You'll have to invest time to make it happen. You'll have to give it a chance and believe it can greatly improve your performance.  Why "believe"??? We are engineers (or sales guys); we have targets and methodologies to work by. Why do we need to believe? Because you'll have to change the way you work. People don't like to change. People like to stick with processes they know. They mostly don't see the flaws. They find it hard to believe that it can be so much better.

This is why I think that the goals for an Agile project must be set by a high-ranking manager. The R&D manager is going to be involved for sure, but so are the marketing/product manager and the sales exec, as one of the main goals of an Agility project is ultimately to improve the sales cycle and time to market. So, it seems the division head or CEO is probably the person who should set the goals and allocate the resources for an Agility project.

OK. So you are saying, "the above is exactly the set of problems I see in my company, but I'm not the CEO. I'm merely a team leader in R&D…" – now what? Well, send this post to your CEO and ask for a meeting to discuss the topic. Come prepared. Bring along an Agile coach/consultant. They are used to talking high mgmt. into investing in Agile.

Next up – some guidelines on how to go Agile without missing your yearly quota / deadlines.

Qs and comments are welcome as always.

Web analytics and BI reporting services review: Google Analytics and SiSense Prism

A Review of Services I’ve used in GameGround – Part III – Marketing tools

This is the 3rd part in a series of blog posts reviewing several 3rd-party products and services I've used in GameGround and my take on them. The basic approach I'm taking here is the applicability of the product for a lean startup that wants to move fast. In the last post I wrote about community engagement tools for the marketing team: sending emails and engaging customers in a conversation. This post is about analytics and BI reporting. Next up – OPS tools and, of course, development infrastructure.

Google Analytics

This extremely popular free SaaS service by Google has become the de facto standard in website traffic analysis. 10 years ago I used to download my HTTP server logs and run a simple analysis tool that gave me most of the basic analysis features I needed, but this SaaS has some excellent analytics features like measuring page view time, campaign origin tracking, goal tracking, integration with AdSense, etc. There are a few BUTs here which make me think twice before I choose this option again:

  1. The service was built in the days when every website page had a unique URL. This is no longer how the internet works – we use several widgets that load asynchronously into one page, and we change the content of the page using hash navigation and AJAX. Especially with mobile content, we don't use the old website concepts anymore – it's becoming ever more popular to have a single-page website that loads a JavaScript application that makes it all tick. An app, in short. Google knows this and allows you to report "manually" on any URL you are loading, so you can report on any action taken on the screen and on any widget loaded by calling the service API with a different fake URL (see the snippet after this list). The problem is that you lose most of the ability to track the flows users take in your application, and the count of page views and time spent on a page gets totally skewed. No remedy here.
  2. You want analytics about more than just page views. You want to know about downloads ppl have made, about clicks on the cancel button on the registration page, etc. All that is not well supported by the Google Analytics APIs. The Events API does not work with the goals feature, and calling the APIs on page events (e.g. clicks) to report fake URLs skews everything – the numbers just don't add up.
  3. The tool is mostly good for product managers to see how the crowds are using the product/website. A few other audiences are not well served: your sales manager would like to get information on sales progress from analytics on individual potential customers – integrate the analytics goals with Salesforce and alert sales staff to contact potential customers. Engineering would like to know about faulty pages (500s), broken links (404s), etc. Security officers would like to know about traffic spikes and login errors to track potential intruders and hackers. You'll need other tools for those tasks. p.s. check out the friends at Totango for an excellent analytics tool specifically for SaaS sales managers.
  4. Much of the analytics data is delayed – most of the statistical views update daily. So you upload a change close to the end of your working day, come in the next morning – still no significant results. You have to wait another day before you get some. And that is IF you've managed to write the fake-URL thing correctly. In many cases you'll need to fix it a bit and repeat the test. This is way too slow for a startup…
  5. GA is a basic website traffic analysis tool. Traffic alone is good for SEO tasks, understanding traffic sources, goal achievement, etc. For Business Intelligence you'll need something much stronger, with access to the business data stored in your database (see below).
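For reference, this is roughly what the fake-URL reporting from point 1 looks like with the classic asynchronous ga.js snippet of that era: you push "virtual" pageviews (or events) for actions that have no URL of their own. The account ID, element ID and virtual paths are placeholders.

    // Assumes the standard asynchronous ga.js loader is already on the page.
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);  // placeholder account ID
    _gaq.push(['_trackPageview']);               // the real page view

    // A widget finished loading asynchronously - report it as a virtual page.
    _gaq.push(['_trackPageview', '/virtual/leaderboard-widget/loaded']);

    // A click with no URL of its own - report it as an event
    // (which, as noted above, could not be used as a goal back then).
    document.getElementById('cancel-btn').onclick = function () {
      _gaq.push(['_trackEvent', 'registration', 'cancel-click']);
    };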

In short – for modern websites and apps, GA is almost useless – it will only give you the big picture. Forget about the details… or check out a better service that was designed for them.

Two insights on the development management side of things: plan the analytics of every feature as part of the design of the feature itself – a feature whose user interaction you cannot analyze and understand is usually worthless. And plan to spend more time than you initially thought supporting the Google Analytics effort (probably true of any analytics).

SiSense Prism

SiSense is a startup developing a very interesting reporting product based on unique columnar data storage technology (as opposed to the "regular" OLAP cubes or other in-memory solutions) that enables analysis of large-scale data sets. The product has an easy-to-use interface that allows creating beautiful web-based reports for business intelligence, website analysis, or anywhere managers need a dashboard with stats. It can connect to multiple data sources, including most common DBs and even cloud services like Google AdWords, Google Analytics and Amazon S3 logs. This means that the cost of creating and operating excellent reports is much lower than with some other popular products by IBM, Omniture, Microsoft, Oracle and so many others.

I liked using their product a lot. In GameGround the product was mostly operated by one of our QA guys (in addition to his QA role) who had some basic knowledge of databases and SQL, assisted occasionally by our DBA.

A few notes for everyone thinking of building a BI suite using SiSense and the like:

  1. Lean start-ups – abort here! Establishing a BI product is lengthy, expensive and has a steep learning curve. In most cases it will involve bringing in an expensive contractor just to help you bootstrap the thing. If your data is a few thousand records, you're much better off with Excel. Excel can connect to most databases, and you can create filters and graphs and send them by email to marketing/sales daily. It may sound "ugly", but the time/cost is a fraction of what it takes to build a proper BI suite. BI reporting suites are not meant for lean start-ups! Start thinking about a BI suite when you have some real customers and Excel's data-crunching abilities fall short (beyond a couple of hundred thousand data rows Excel starts slowing to a crawl!).
  2. In many (read: most) cases the information you want to investigate does not exist in the database. To create a report that shows how many ppl clicked 'like' and how many uploaded a picture every day, you'll need to add code to collect the data. In many cases this involves code on the website and the back-end services, plus changes to the database (see the sketch after this list). No magic here – BI needs come with development costs, even if the BI person is part of the marketing team.
  3. The most important thing with BI is knowing the right questions to ask. In most cases the basic question of "if I give you the data you requested, what would you do with it?" is never asked. Ask questions that get you actionable data. The harsh reality is that in too many cases reports are requested and never used – yet producing them took a lot of effort.
  4. As a manager, if you ask for very detailed reports you'll find that you drown in details and cannot get the whole picture. The whole point of having BI dashboards is that you can get the interesting points in 5 seconds. So start by asking for basic stats that can be visualized over time. Then ask for details on specific things you actually see.
  5. Building a good BI dashboard is thus an evolving process, not a project. The project is to get to the point where you have the first 3 reports in place. Then you'll want to continue developing more reports and improving existing ones.
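As a minimal sketch of the collection code point 2 talks about: a tiny Node endpoint that appends user actions as JSON lines to a file, which a nightly loader (or the BI tool itself) can then import into the database. The path and field names are made up.

    // track.js - fire-and-forget action tracking the BI loader can import later.
    var http = require('http');
    var fs   = require('fs');
    var url  = require('url');

    http.createServer(function (req, res) {
      var parsed = url.parse(req.url, true);
      if (parsed.pathname === '/track') {
        // e.g. GET /track?user=123&action=like   (fields are illustrative)
        var row = {
          ts:     new Date().toISOString(),
          user:   parsed.query.user,
          action: parsed.query.action
        };
        fs.appendFile('events.log', JSON.stringify(row) + '\n', function () {});
        res.writeHead(204);   // no content - the website never waits on this
        res.end();
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8082);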

A few notes specific to SiSense Prism:

  1. They are still a startup themselves. Investing in a BI suite that may not be there in a couple of years is risky. Still, I give the team at SiSense a strong plus – the work they've done with Wix is pretty impressive.
  2. They prefer you to take a monthly paid subscription instead of a one-time payment. This is cool and lets you pay as you go and pay as you consume. It also reduces the cost of bootstrapping a BI solution, which mostly involves buying a strong server, paying for a contractor to help you get the thing to work, etc.
  3. Their pricing plans are a bit problematic. Their most basic feature – viewing the reports in a browser – is only available in the most pricey plan.
  4. Their customer support had some serious issues at times. We got no response to a bug we had with their report viewer software (we did not use the web-based version), and reverted to installing a 500MB server package on each of the managers' laptops…

Community Engagement services review: SendGrid, GetSatisfaction

A Review of Services I’ve used in GameGround – Part II – Marketing tools  

This is the 2nd part in a series of blog posts reviewing several 3rd-party products and services I've used in GameGround and my take on them. The basic approach I'm taking here is the applicability of the product for a lean startup that wants to move fast. In the last post I wrote about A/B/split testing tools for the marketing team. This post is about community mgmt. Next up – web analytics and BI reporting, OPS tools and, of course, development infrastructure.

SendGrid

One of the first features every service needs is sending email to customers. There are 2 basic types of email to send: transactional and mass-mailing. Transactional emails are those produced as a result of a user action, like registration, friend invites, etc. Mass-mailings are those where you invite your registered users to an event, a sale, etc. So why not use your own corporate SMTP server for these emails? Because you are likely to find yourself on one of the many blacklists of spam servers at some point. If spam filters on several servers worldwide decide your emails are spam, or if 2-3% of your users mark your email as spam, you'll be blacklisted and will not be able to send emails from your company at all… bad idea. Other issues you'll have to manage yourself if you don't use a SaaS for this are unsubscribe lists (<1% of users on social networks unsubscribe on average) and email bounce lists (~12% of the email addresses users give on social networks are mistyped or bogus). Managing those lists is mandatory if you don't want to get blacklisted.

We started with MailChimp, probably the largest of several competing services, but quickly found that they would not send our mass-mails, as they were afraid their servers would get blacklisted. We then had the same issue with Constant Contact and Campaign Monitor. It seems that most EMS vendors send all email from a set of about a dozen shared IP addresses; thus, they have to minimize complaints across their entire portfolio. Most EMS vendors require that you give your users either opt-in ("I'm willing to get marketing materials" checkbox on registration) or double opt-in (+ email verification). And if the complaint rate resulting from your service is above a very low threshold, they kick you out. On our first campaign to just 1200 registered users we had a complaint rate of 1.1% and their acceptable limit was 0.2%… For a young company with little history that is running its first campaigns, the demanded ratios were not acceptable. And we wanted an opt-out on sign-up, not an opt-in. We got stuck for a few days till we managed to resolve the mess.

Enter SendGrid! SendGrid is a cloud-based SaaS with technology that seems far more resilient to blacklisting. Their white-label feature allows you to bind your domain's MX records to one of their servers with an IP address in the cloud. This means you do not share IPs with others and do not need to comply with such low complaint rates. If you do get blacklisted, you can change the IP address and/or domain name and get back in business in a matter of minutes. So we set up 2 accounts – one for transactional emails, which are less likely to cause blacklisting, bound to the company's domain name. Then we bought another domain, 'mailer1-mycompany.com', and bound it to the second account. The SendGrid system appends an 'unsubscribe' link to your emails if you don't do it yourself, and they manage the lists for you – they won't send an email to someone who unsubscribed, even if your service did send it. You get a dashboard where you can see stats on your sent mails, bounces, spam reports, etc. and fix your email templates as needed.

The integration with SendGrid's basic SMTP service took us 15 minutes. They also give you APIs to sync user lists, send using predefined templates, etc., but we haven't gotten to use those. Pricing is low for what you get. It's highly recommended to work with them and utilize their APIs to save yourself the need to write email templates and change them every other day according to product needs. Let the product guys edit the email templates in the SendGrid control panel. No code changes are involved unless a radical change is made and different parameters are needed to fill in the template. So much simpler to operate this feature too. Our email system is working fine with a delivery rate of ~95% on transactional emails, which is excellent.

Now, how about some tips on how to avoid getting your emails marked as spam? This is a bit out of scope here – maybe I'll do another post on my quest to work around the minefield of spam filters. Meanwhile, you may want to read here.

GetSatisfaction

This very successful SaaS product allows you to add a popup widget to your website where your customers can write their feedback, good or bad, and make suggestions. It lets your support and product ppl engage in a productive conversation with your customers. The JavaScript integration is simple, and with a bit of extra integration you can also OAuth your logged-in users, so that you work with a common user base and can get back to the users who wrote feedback. The control panel allows you to define your products and settings, administer the feedback, etc. To have the OAuth feature (a must in my opinion) you need to buy the [expensive!] $99/mo plan. The integration went easily and everything was working in a matter of hours. BUT!!! I don't think this product is so great:

  1. The product is a hybrid between an online feedback survey and a forum product, and it has the disadvantages of both: you mostly get to hear only the ppl complaining about your product, and irrelevant questions and remarks bloat the service with historical and irrelevant data.
  2. Their widget loads slowly and causes long delays in page loading time. On some pages the feedback widget was the first thing to show up (WTF???). We ended up writing a script that delay-loads their script (see the sketch after this list) to bypass their mentality of "We're THE product and our users are the websites using it"…
  3. The suggestion engine meant to prevent users from entering the same feedback over and over again is weak.
  4. You already have a Facebook page, a blog and product pages with comments. Why do you need another place where ppl talk about your product?! I would give this service a pass next time around. Instead, just open a feedback page, place a Facebook comments widget on it and you're done.
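The delay-load trick from point 2 is generic and worth keeping around: never let a third-party widget block your own page – inject its script only after your page has finished loading. A minimal sketch (the script URL is a placeholder):

    // Inject the feedback widget's script only after our own page has loaded,
    // so a slow third-party server can never delay our content.
    window.addEventListener('load', function () {
      setTimeout(function () {
        var s = document.createElement('script');
        s.src = 'https://widgets.example-feedback.com/widget.js'; // placeholder
        s.async = true;
        document.body.appendChild(s);
      }, 2000); // give our own page a couple of seconds of head start
    });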

Marketing tools review: Google Website Optimizer, Visual Web Optimizer and Unbounce

A Review of Services I’ve used in GameGround – Part I – A/B/Split testing and landing pages services 

GameGround.com is a service I built during 2010; it was alive till mid-2011. I managed this startup's dev teams, developing a consumer-facing social meta-game. This is a short review of several 3rd-party products and services I've used and my take on them. The basic approach I'm taking here is the applicability of the product to a lean startup that wants to move fast. I started writing it and quickly found out that it's actually too long for one post, so I'm going to make it a series of posts covering marketing tools, community mgmt. tools, OPS tools and, of course, development infrastructure.

Google Website Optimizer

GWO is a simple and free Google service that assists in A/B/split testing. The JavaScript API decides which of the optional views to show, and you get a clear stats view of which of your options is better. I think this product is too simple and not very helpful, as it misses the very basic idea of A/B testing: the whole point is that the designer/product mgmt. can try different views and phrasings to find what works. The problem is that for even very small page variations you need to push code to your production servers, and the marketing team cannot work without a developer assisting them in the process (a sketch of what that hand-rolled variant switching looks like follows). This leads to too many people being involved and the process being too slow. Another problem is that some of the stats are updated only daily. In many cases you'd like to make a decision faster and move on to the next test – why wait a day?
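To illustrate why even small variations meant pushing code: without a WYSIWYG tool, the variant assignment and the variant content both live in your page code, roughly like this (the cookie name and headline copy are made up):

    // Hand-rolled A/B split: assign a variant once, remember it in a cookie,
    // and render the matching headline. Any copy change means another code push.
    function getVariant() {
      var m = document.cookie.match(/ab_variant=(A|B)/);
      if (m) return m[1];
      var v = Math.random() < 0.5 ? 'A' : 'B';
      document.cookie = 'ab_variant=' + v + '; path=/; max-age=2592000';
      return v;
    }

    var headline = getVariant() === 'A'
      ? 'Play with your friends now!'       // variant A (made-up copy)
      : 'Beat your friends to the crown!';  // variant B (made-up copy)

    document.getElementById('hero-title').textContent = headline;
    // ...and report the chosen variant to your analytics to compare conversions.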

Visual Web Optimizer

After giving up on GWO we moved to VWO. This new and relatively cheap SaaS product is great! After a simple JavaScript integration you can create page variations using a WYSIWYG HTML editor on VWO's site: set test goals, alter text, images and CSS, and even replace whole pages at runtime. This means the marketing team can create most A/B/split variations by themselves and run the testing without a developer nearby (well… they will ask some Qs…). The testing stats are displayed instantly in a very clear way (see images) and you can even tell it to automatically stop the test and always show the winning variant. As a bonus you get heat-map views of your tested pages. The couple of issues we had with them were (a) their inability to run tests on our logged-in pages, i.e. to pass the login barrier with their WYSIWYG editor, which required some extra work in the service integration phase; and (b) at one point the testing stopped and we found out that a new version of their JS integration library had been published without informing us (the customers). Anyway, I can say that their support team was fast and gave us a quick remedy. VWO's slogan is "World's easiest A/B testing tool" and I think they are doing a wonderful job at it.

Unbounce

Unbounce is a landing-pages SaaS: "… a self-serve hosted service that provides marketers doing paid search, banner ads, email or social media marketing, the easiest way to create, publish & test promotion specific landing pages without the need for IT or developers." Yes! Landing pages for specific audiences and campaigns are an excellent way to drive traffic to your site. And Unbounce's platform, with its WYSIWYG HTML editor, simplifies the process even further, allowing marketing to create those pages and manage them as part of the campaigns they are running without needing development involvement. They even give you multiple pages per landing page (e.g. a small website), a lead-generation module, A/B/split testing tools and other goodies. So far so good.

BUT! There's a major but here: the SEO scores for those pages on Unbounce are extremely low. Search engines don't like websites and landing pages that have only static content. They also don't like the fact that the landing page is not under your own domain but under Unbounce's, and so they incorrectly treat the landing page as a spam blog. This (among other factors, I'm sure) led to very few impressions of our ads on Google AdWords and very few clicks coming from this major traffic source.

We ended up using a desktop HTML editor to create a single-page site for each landing page. It was then uploaded to our live production servers, under the '/play' folder, using an FTP account we opened for that purpose. This way the marketing team could create their landing pages according to the running campaigns and upload them to production with little or no dev/OPS involvement. This is lean thinking at its best – have as few people as possible involved in each task. People should mostly be able to complete their tasks end-to-end without needing to interface with others.

Node.JS – I Love this Technology!

Beware! A game-changing technology has entered the arena. The internet as we (developers) know it is about to change soon.

I rarely get to see a new technology that sparks my mind and keeps me up late at night, trying to utilize it and do something with it. It happened to me a couple of months ago when I first played the PC version of Angry Birds (two sleepless nights…!) and lately again with Node.JS. But this one is no game! To explain the thing I have to take you back in time to the year 1999…

It all started when I wrote my first server for Exent's Games on Demand platform. It was a large-volume file data server designed to respond to requesting clients very fast and serve thousands of concurrent clients. We wrote the server as a kernel module, and accordingly it was written in a fully asynchronous fashion. This project, led by my first team leader, Amnon Romm, is still the most beautiful piece of code I have seen to date. [We developers can see beauty and ugliness in code. It's a special gift we have that a non-developer will never understand… Code is actually mostly old and ugly. If you write something and a colleague comes to you and tells you your code is beautiful – that is the BEST compliment you can ever get. Really.]

Since then I've seen so many other servers. Some of them, like Check Point's firewall, definitely have a good asynchronous architecture (even if the code is somewhat ugly…). But one thing I could never figure out is how the whole WWW (the browser part of the internet) ended up running on top of badly designed synchronous servers. Maybe it's the basic design of the [wonderful] HTTP protocol, which is request-response based. Maybe it's us developers, who find it harder to design and code asynchronously. Maybe it's because in the early '90s, when the internet standards were written, running a CGI process on a UNIX box was the main way to handle HTTP server requests, and we just never wanted to stop supporting those standards… Anyway, the most common servers – Apache, IIS, Java-based web services, the standard .NET stack, Django/Python, PHP, Ruby and almost every other piece of HTTP server software out there – are written to run in a synchronous environment. Every request is served either by a new thread or by a thread from a big thread pool. Such a thread executes the request and response stacks, waiting for resources from the DB, the disk drive, a memcached service it calls, etc. Each time it goes to sleep waiting for a response from a device, it is context-switched out to give another thread some CPU time. This means that heavy-load servers spend MUCH of their CPU and memory-bus time switching threads. The simplest server written with (the newly designed) .NET/WCF can have up to 100 threads running on a dual-core processor. The result is that a strong server can serve only a few thousand clients/browsers concurrently. So much CPU time and money is wasted.
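To picture what that thread-per-request style looks like, here is a small pseudocode sketch (written in JavaScript syntax just to keep one language across this post; the db/cache/disk objects are made up). Every call below parks the whole thread until the answer arrives, and the OS context-switches to some other request thread in the meantime:

```javascript
// Pseudocode for a classic thread-per-request handler (hypothetical names).
// Each blocking call puts the thread to sleep until the device answers,
// so a loaded server burns CPU and memory-bus time on context switches.
function handleRequest(request, response) {
  var user    = db.query('SELECT * FROM users WHERE id = ?', request.userId); // blocks on the DB
  var profile = cache.get('profile:' + user.id);                              // blocks on memcached
  var avatar  = disk.read('/avatars/' + user.id + '.png');                    // blocks on the disk
  response.write(render(user, profile, avatar));                              // blocks on the socket
}
```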

Another issue is the use of high-level interpreted languages to write the internet, from JSP to PHP to Python. Most of the internet is written in scripting languages because they are easy to write and easy to deploy. But everyone knows the result runs slow. It's a trade-off development managers make – code fast and get it out the door. They say, "We'll have other ways to speed up the beast after it's already out" – clustering, stronger hardware, another caching layer, etc. Anyway, once the version is out, it's now the IT guys' problem to meet the SLA. WTF??? Some real efforts were recently made by Facebook to speed up PHP with their release of HipHop – a server add-on that transforms PHP scripts into highly optimized C++ code and then uses g++ to compile it to machine code before it runs. They say that on average it reduced CPU usage at Facebook by about 50% (!!!) and that WordPress 3.0 runs 2.7x faster under HipHop. Wow!  Impressive!

But what if we could solve the synchronous design problem in a similar way? The problem is we cannot – we would have to throw away all the code that was ever written and start fresh. That is because asynchronous code cannot call code that blocks, and all that code out there is blocking.

Enter Node.JS (Start fresh!)

So what is it? I see it as the next-gen any-software platform. You write the code in JavaScript, which then runs inside Google's blazing-fast V8 JavaScript engine. The engine first compiles the code to native machine code and then executes it. The more important thing is that the core libraries of Node.JS require you to write everything asynchronously. Whether you're accessing a config file, requesting an access token from a Facebook API or running a SQL query on your DB – all the APIs are asynchronous. They also took the event-driven I/O approach and implemented the CommonJS specifications, so it's extremely simple to write servers using Node.JS. There are a few other strong features: a way to package code into reusable modules; a way to have native binary modules (written mostly in C/C++); a package manager called NPM; and an excellent open-source community of devs around it. It was started by Ryan Dahl in 2009, and its growth is sponsored by cloud provider Joyent, which employs Dahl.
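Here is a minimal sketch of what that looks like in practice: a tiny HTTP server using Node's core http and fs modules (assuming an index.html sits next to the script). Even the file read is non-blocking – the response is sent from a callback while the single event loop keeps serving other requests:

```javascript
// A minimal Node.JS HTTP server – every I/O call is asynchronous.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  // readFile does not block; the callback fires when the disk answers,
  // and the event loop keeps handling other requests in the meantime.
  fs.readFile(__dirname + '/index.html', function (err, data) {
    if (err) {
      res.writeHead(500);
      res.end('Oops');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(data);
  });
}).listen(8080);

console.log('Listening on http://localhost:8080/');
```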

So what can you do with it? Well, basically everything you can do with Python, Perl or Java – client code, server code. But the goal is definitely server code: blazing-fast web servers that can handle 10x more traffic and do it much faster. This can serve not only "regular" browser-based traffic, but can also be used to stream music and video, power sharing applications, act as a reverse proxy, etc.

There are a couple of things you need if you want a team of developers to start working on your next big thing. First you want a proper development environment (use Eclipse with Google's V8 plugin); then you need a unit-testing framework (use Expresso); an application server with MVC and templating support (use Express); ORM/Hibernate-like tools to ease coding against the DB (see MongooseJS for MongoDB and SequelizeJS for MySQL); a library of utility modules to copy-paste from for almost every basic need (see the NPM registry with almost 3,000 entries << try searching "facebook"); and a cloud-based app engine to deploy your application on, preferably for free (I found 11 such services, but Nodejitsu and Cloud Foundry seem to be the most advanced). Let's not forget that a strong development community is also very important (Node.JS main newsgroup: ~50 threads/day; ExpressJS group: ~10 threads/day; the LinkedIn Node.JS group has 832 members; StackOverflow Node.JS tag: ~13 questions/day). I think we are good to go!
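To give a taste of the application-server layer, here is a minimal Express sketch (Express's bootstrap API has shifted between versions – older releases used express.createServer() – and the route and payload here are made up for illustration):

```javascript
// Install first:  npm install express
var express = require('express');
var app = express();   // older Express versions used express.createServer()

// A tiny JSON route – hypothetical endpoint and payload, for illustration.
app.get('/api/score/:player', function (req, res) {
  res.send({ player: req.params.player, score: 42 });
});

app.listen(3000);
console.log('Express app listening on port 3000');
```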

[Image: Google Trends – Node.JS vs. Ruby on Rails]

The Node.JS thing is catching fire these days – this Google Trends view clearly shows how fast Node.JS is soaring and that it's now almost as big as Ruby on Rails. Many new startups see this as a great opportunity and are developing on Node.JS. It fits so well with the lean-startup concept: one coding language for both front-end and back-end >> one developer can write a whole feature, end to end. And no more translations between XML and JSON – now everything is JSON in all application layers.
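A tiny illustration of the "JSON everywhere" point – the same object shape goes from the server handler straight into the browser, in the same language, with no XML mapping layer in between (the route and field names are hypothetical, and this toy server answers every path with the same payload):

```javascript
// Server side (Node.JS): answer with a plain object serialized as JSON.
var http = require('http');
http.createServer(function (req, res) {
  var payload = { player: 'ethan', score: 1280, crowns: 3 };
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(payload));
}).listen(3000);

// Browser side: the very same shape, in the very same language.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/score/ethan');
xhr.onload = function () {
  var data = JSON.parse(xhr.responseText);
  console.log(data.player + ' has ' + data.score + ' points');
};
xhr.send();
```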

Some giants have already decided they are joining the party. Microsoft has recently announced it's going to support Node.JS on Azure (and in Visual Studio, for sure). VMware is already supporting Node.JS deployments on their cloud service – Cloud Foundry.

Some interesting blog posts if you want to read further –

Did I say a game-changing technology? I think this is even taking on the HTML5 hype. I'm predicting here that in 5 years Node.JS will be the most prominent coding platform in the world, and servers based on the Node.JS platform will replace most of the ailing JBoss, IIS and Apache servers out there. The LAMP stack is dead, long live JavaScript.

P.S. I guess some readers of this post are saying, "This guy is crazy! He's taking an immature technology and arguing it should be used in production today. The risk is too high, yada yada yada…" I agree, the risk is high. If you don't have strong devs who can master a new technology and face some difficulties, then you should stick with the usual Django/Rails/GWT. If you do have strong devs, the upside of this technology is great, and I think it's mature enough for most tasks – especially if you're starting something new.
