Ethan Ram’s geeky blog on the seam of technology and product management.
A review of what makes Rovio’s Angry Birds so good
Yesterday I found myself buying Facebook Credits for the first time in my life. No App on Facebook has managed to tempt me into buying its goods up until this one. None of the addictive Zynga games or any of Playtika’s slot games made me convert into a paying customer. Not even GameGround’s tournaments. So how come I find myself paying for a virtual earthquake that shakes pig shelters??? Worse than that: I’m re-playing levels I already managed to pass months ago, just to earn a couple of extra level points and beat my friends to the crown, at 3am. This is not like me at all…
These guys must be doing it right if they manage to convert a freebie-lover-non-ad-clicker like myself.
Actually I started thinking about this game several months ago, after a successful session of pirating my way around the registration requirements, when they had just released the PC version on the Intel App Store (Intel must have paid Rovio a fortune to get exclusive rights to this game as a standalone, install-based game on their App store. But I’ll keep that story for another post). It’s the first time in years that I’ve been hacking a game… and it really made me think – what makes it so successful? It wasn’t following any of the recent trends in gaming. In fact, if I’d described its basic nature to any of the VCs I’ve come across, they would have dismissed me, saying I’m old-fashioned, even a dinosaur.
[up until their recent Facebook App launch…]
So what is it? It seems to defy the common knowledge of how to be successful in an always-online world. How do they do it? How do they manage to be the #1 selling App world-wide for months and years? Two years after the game’s release and it’s still growing.
This is the (secret) recipe:
If this all sounds to you like quoting a basic game-developer manual – you are right. Nothing here is big news. The true genius is in taking all the basic elements and combining them into a truly good game. This game is a combination of Pac-Man, Spider-Man comics and the movie Madagascar. 🙂
And now for the Facebook version goods:
I’m now a converted Facebook user and a happy gold-crown holder of Angry Birds Level 1. Let’s see if you can beat me!
p.s. kudos to Jaakko Iisalo, the designer who came up with the Birds’ concept art and then played a key role in developing the early versions of the game.
This post is a bit unusual – I’m going to help you revert the single most annoying feature of Google Chrome – it forces you to use the local Google site when searching from the Omnibox. If you’re living outside of the US I’m sure you’ve seen this thing – almost everyone is using a local version of Google. So, in my case it’s been defaulting to the Hebrew version, google.co.il (right to left!).
I cannot work like that!!!
I see so many ppl who actually gave up on clicking the ‘Google.com in English’ link every time they open their browser and somehow got used to having their search engine talk to them in Hebrew, whatever… Hopefully some day soon someone up there will put this property in a decent checkbox in the options and let it sync to my profile. Pls pls pls. (no chance!)
Recently it’s got worse: Chrome is now trying to convince me to google in Estonian…
Here’s a simple 1-2-3 guide on how to get rid of it.
1- Close your Chrome browser.
2- I know you closed it. But it’s most likely still running. So kill any remains of chrome.exe from Task Manager
3- Open Notepad then open the file Local State under the folder C:\Users\<user>\AppData\Local\Google\Chrome\User Data
It’s there. So if you can’t see it, change your file filter to All Files or search for ‘*’.
4- Now edit the 2 lines at the beginning of the file where you see the local Google site. Change them to read “.com” instead of the “.co.il” or whatever local version they put you on. (If you’d rather script the change, see the sketch right after these steps.)
5- Save and exit
6- Open Chrome. Search.
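For the keyboard-inclined, here’s a minimal sketch of the same edit as a small Node.js script. It assumes Node.js is installed, the default Windows profile path, and that your local domain is .co.il – adjust both to your case. It’s just steps 4-5 scripted, nothing official from Google.

// fix-local-state.js - a sketch, not an official tool. Run it while Chrome is fully closed.
// Assumes the default Windows profile location and that your local Google domain is .co.il.
var fs = require('fs');
var file = process.env.LOCALAPPDATA + '\\Google\\Chrome\\User Data\\Local State';

var text = fs.readFileSync(file, 'utf8');
fs.writeFileSync(file + '.bak', text);                                  // keep a backup, just in case
fs.writeFileSync(file, text.replace(/google\.co\.il/g, 'google.com'));  // the same edit as step 4
console.log('Done. Open Chrome and search.');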
Breathe deeply. Your life just got a whole lot better 🙂 🙂 🙂
p.s. They will offer you once in a while to switch – don’t get tempted…
The last week hasn’t been an easy one… I’ve got a brand new Lenovo X220 – and it’s giving me a hard time. For over 10 years now I’ve always had a ThinkPad laptop (except for a couple of years with a MacBook – but that story is for another post) and I was always very happy with it. But this time…
Boot is stuck for over a minute between login and getting to see the desktop!
My laptop gets stuck forever 2-3 seconds after I go wireless on my workplace Wi-Fi network!!
Changing writing language too often doesn’t work – I’m getting stuck in Hebrew forever!!!
Fingerprint software is dead…
A PCI port is missing a driver and Windows keeps complaining about it…!
The Lenovo System Updater gets stuck forever when I run it to check whether I’m missing some updates… errr.
In general – I get to the point that I have to reboot the computer 2-3 times a day… err err errrrr!
…And I’m a software guy, an expert in Windows internals – right? So I must be able to find the cause of these. But hell – it’s a new computer running the latest software – I don’t feel like spending days resolving these. Or – maybe return the thing to the IT department and let them break their heads on it (but then I’d have to take a day off or work on a temp PC that doesn’t have my configuration… bad idea!)
WOW – after a couple of days I hit some major frustration, losing over half an hour of work. ((Remember this was actually common in the ’90s… but things have surely changed since… or maybe it’s only me??))
Then I remembered the slogan of a young and apparently very successful startup from Tel Aviv called Soluto – “Soluto is bringing an end to PC user frustration …killer technology”. They have millions of installations and got so many positive reviews and prizes. It seems that Yishay Green, Roy Karthy and the other guys there are really managing to create a buzz and get some heavy funding. Actually, I remember I even tried one of their early betas after hearing one of their founders talk about his former startup successes… I had to give it a try. Maybe 15 minutes with this and I could save hours of digging into what’s wrong with my PC.
I’m not going to write a full review of my experience. Just a few bits. They have a smooth installer and the UI looks great, although it is a bit slow to come up with answers. (I even took some time to help them define what some of the more obscure apps I’m running on my laptop are – after all, it’s largely a community project.)
But then I hit this…
So they found that Google Chrome is running in the boot, taking 29 seconds of my boot time. But they cannot do anything about it. WTF?? This is simply wrong. I actually ran a boot profiler myself (SysInternals ProcMon) and Chrome does not run on boot at all… And why do they say they cannot do anything about it? Can’t they help me uninstall it?…
They offer to remove some 10 pieces of software from the boot process, most of them with no clear explanation of what they do, and each taking a full 0.1 seconds of my life… Who cares?! But check out this suggestion – “pause it unless you connect to a network on the internet using an Intel wireless network adapter” – even a pro like me got confused for a minute. This is cryptic Chinese for 99% of the world population. I’m sure that those poor 10% of people who chose to follow their advice and disabled their PC’s wireless really have no frustrations now…
Then it offered to remove 8 of the 18 plugins I have in my Chrome browser. They say I have 2 Chrome toolbars and six more plugins that can safely be removed… WHAT??? Toolbars in Chrome?! In Chrome there are no toolbars like they have in (poor) Internet Explorer; and those 6 plugins I use are very useful and consume somewhere between ZERO and NOTHING in processing time. Why do they suggest I remove them? Why do they tell me 26% of users actually disabled their Multimedia Plugin (and forgot about having media playback capabilities in the process)!?
I have a few more examples but I think the point I’m making here is well understood. So I’ll conclude shortly with this strange behavior: it takes Soluto 6 seconds to open its own About dialog… at least the first time after every boot.
As you can understand it did not find my issues, or help me fix them, so we parted as friends (a somewhat frustrated PC-user friend, though).
OK – OK. I know I’m probably not the average guy and probably I’m not their target customer, and/or my new PC is not their target PC because it’s new. I can even think of some cases, when I was called in to help fix a dysfunctional computer, where this utility could actually do some good. Still, I felt I’d wasted time playing with it.
So here is some advice on what could be excellent features for the next Soluto version that may actually make a difference. And if they don’t add them to Soluto, still, my dear readers, you can follow it and fix your frustrating computer yourself.
Enough said. I still have a couple of issues on my new laptop to resolve today, although most issues I’ve already found and fixed 🙂
One twist I really liked in Soluto: they have this little tray-icon menu where one can click “My PC Just Frustrated Me”. I clicked it a couple of times not knowing what would happen. It seems to do nothing – no dialog opened, no thank-you message. Nothing. Maybe they are just collecting data for their next version or something. Maybe a bug? Strangely, after clicking it I felt somewhat relieved. Like a small steam release.
Well… So much has been written on how to go Agile and what an Agility project looks like. I’m not going to repeat it. I’ll give some general key guidelines – so that when you go on reading about Agile methodologies you can look at them with the right perspective; when you go to your managers to ask for Agility project funding and time you’ll come with a good plan; and when you speak with an Agile coach you can see how well he fits into your Agility project, rather than him dictating to you what it should look like.
First thing to know is that Going Agile is not a one-time project thing. It’s a management philosophy. But – if you haven’t been practicing Agility in your company you should start with an Agile project with set goals – ppl like to have clear goals. Still, you should always remember that being Agile should be a general long-term goal. A goal to move fast, to be more productive, to beat competition.
A working company cannot and should not stop everything (development, sales) to put an Agile project in place. No manager would approve that, even in a startup. Most development groups invest about 25% of their time in building infrastructure and refactoring existing code. The initial Agility project would certainly require more resources, but it should not stop all other functional development.
Agile projects are all different because organizations are different and their Agility focus is different. Setting “standard” ultimate goals to achieve in a 6-month project is not realistic. Your managers will not understand where you are heading and what the urgency is (and you’ll miss your quotas/deadlines). It’s better to educate your staff with the Agility concepts you want to embed in your process and set short-term goals that people understand. Then continue evolving in the Agile directions.
An Agility project should ultimately reduce development time, shorten release cycles, improve product quality and make your development team happier in general. But these are hard to quantify, and it’s hard to “sell” an Agility project to higher management if you talk about these goals. So here is an idea – start by setting project goals where your product lacks the most, in a way that hurts the SALES cycle and OPS. If you manage to improve here the benefit is measurable, immediate and noticeable. You’ll also make some other department managers happy about your Agility project and it’ll be easy to set new Agility goals.
So if it’s taking 5 days of work to configure each new customer environment – focus on resolving that with the Agility thinking and tools in mind – maybe by automating the installation and configuration process. If your QA cycle takes 2 weeks from code freeze to release – that’s an excellent place to start your test automation. If you’re having trouble upgrading customers’ databases every time a new version is released – set fixing this issue as one of your first goals in your Agility project – for instance, by moving your database schema to the main code branch and building it as part of every build.
Introduce your managers (group leaders, team leaders) to the Agility philosophy and tools first. Set clear Agility goals for 2 months ahead and get your managers involved early in planning how to get to those goals in time. Prepare for some resistance – ppl don’t like change, and in many cases they will see it as extra workload, even if they agree with your long-term goals. When you have a plan, gather everyone, explain to them the why of Agility and show them the plan.
Automation… Automation… Automation!
Many of the Agility tools and methodologies involve automating manual work – build servers, automatic unit testing, integration testing and QA automation – all require scripting and batch files. You’ll quickly find that your developers prefer coding in Java/C# over Ant/csh and that only a few QA engineers can script. This means ppl will need to adjust, evolve and learn new stuff as part of an Agility project. I found that I had to replace some of my QA staff with others that had automation and scripting in their background. I had to give my developers some time to learn Ant scripting so that they could contribute to the Agility project in the short term and continue developing in an Agile way later on.
Task force
You’ll surely be creating some new development infrastructure to enable the Agile process to happen: a Continuous Integration server, updated workspaces/projects for your developers that include built-in emulators for unit testing, a new product packaging/installer etc. It’s wise to have a task force in place that builds this infrastructure, which is then used by everyone else. Create the task force from engineers from all teams. This will help make sure that the planning takes all relevant aspects into consideration and that when the initial infrastructure is ready there are already engineers in all teams who know how to use it and can teach the rest.
The new infrastructure soon affects every engineer’s daily work, so it had better be stable and solid enough that it doesn’t break everyone’s work and halt development completely. The task force should build the infrastructure and emulators, then the first few unit tests and scripts on top of it, to see that it actually functions and can be used by everyone else as an example. You can also start using the new infrastructure and test automation in parallel with the older process and throw warnings on unit tests that fail rather than block the process.
In GameGround, after the initial infra was developed, we took 80% of the engineers (everyone but a few left to fix blocker bugs) for a week of writing unit tests – so that everyone had a chance to experience writing unit tests for their own code, use the new infrastructure and understand how development was going to change. This also meant that by the end of that week we had a significant amount of our code tested regularly by our continuous integration server – 10%-20% of functionality. A good start.
Well, I must say, in reality, it was (is) much harder to achieve… I’ll try to write about the hurdles I’ve had in one of my future posts.
p.s. the post really had little to do with “yearly quota” – but it goes to show that an Agile project can be done without derailing the yearly planning of the company…
This is the 4th part in a series of blog posts reviewing several 3rd pty products and services I’ve used in GameGround and my take on them. The basic approach I’m taking here is the applicability of the product for a lean-startup that wants to move fast. In the last post I wrote about Analytics and BI Reporting tools for the marketing team. This post is about Monitoring the health of the system server – for the OPS team. Next in the series – development infrastructure.
Nagios probably is “The Industry Standard in IT Infrastructure Monitoring”, as their slogan says. It’s very popular among IT staff and can be configured to monitor and alert on up to 40-50 servers, so even a medium-size company can use it. It’s free server software – basically a scheduler that executes service checks against installed agents and tests against network devices, reports back the results and raises alerts above predefined thresholds. There’s also a comprehensive list of extensions or plugins written by the community that can be utilized to monitor about anything you’ll ever want.
It’s easy to set up Nagios to watch server disk space, CPU and the existence of certain services. The difficult part is creating checks that alert you when internal parts of the software behave irrationally and users are not seeing what they should. E.g. certain transactions do not end in time, server response time for certain requests is going up, users suddenly cannot see their friends list etc. These are much harder to watch. To monitor these you’ll need to write code on your back-end servers – special functions (REST/WSDL) that do some internal testing and return true/false accordingly. Nagios is able to call such functions periodically and alert if they fail. It’s an evolving process: you’ll see your system fail without Nagios alerting about it, and then add more of those checks till it functions well.
So – it’s wiser to add some testing functionality at design time: plan your server modules to have Nagios testing APIs. You’ll also need to watch that some of your 3rd pty providers are working right: if the A/B testing API you are using is down then your site is probably down too. If your Content Delivery Network is down ppl are not getting to see your website, even though everything is functioning on your side.
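To make the idea concrete, here’s a minimal sketch of such a self-test endpoint, written in Node.js for illustration (the real checks in any system will be more involved). The checkDatabase/checkFriendsFeed functions are hypothetical placeholders for whatever internal tests make sense for your service; Nagios’ standard check_http plugin can then poll the URL and alert on a non-200 response.

// health.js - a sketch of a Nagios-friendly self-test endpoint. The check names are illustrative.
var http = require('http');

function checkDatabase(cb) { cb(true); }      // e.g. run a trivial query and verify it returns in time
function checkFriendsFeed(cb) { cb(true); }   // e.g. fetch a known test user's friends list

http.createServer(function (req, res) {
  if (req.url !== '/health') { res.writeHead(404); return res.end(); }
  checkDatabase(function (dbOk) {
    checkFriendsFeed(function (feedOk) {
      var ok = dbOk && feedOk;
      res.writeHead(ok ? 200 : 500, { 'Content-Type': 'text/plain' });
      res.end(ok ? 'OK' : 'FAIL');            // check_http alerts when the status is not 200
    });
  });
}).listen(8081);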
Nagios – the CONS:
I don’t know of a good alternative. But I would like to see something that combines system health alerts with Syslog analysis and a real-time configurable dashboard. Any ideas?
If you want to have good insight into what’s actually happening in your servers you must check the different servers’ logs. Getting all the logs from all the servers into one place and automating the search for errors, exceptions and irregularities is key to having a healthy, working production environment. The first product we checked, following warm recommendations from friends, was Splunk. It has an excellent, easy-to-use web interface and the setup is very easy (assuming that your servers are written and configured to upload syslog/log4 to a central server…). But Splunk is VERY expensive; even for a small server setup like ours they asked for something like $6000/year. The free version is only good for internal testing and running on top of QA systems. For production you’ll need the enterprise version. It does not make sense to pay that much in a startup… So we checked Kiwi Syslog.
Kiwi Syslog is a relatively small piece of software made by a NZ company. Their main interface is based on a Windows installed client. But they now also have a web-based dashboard that gives you the most important features. It’s easy to setup and work with. It’s cool. And it costs like 2% of Splunk’s cost. Go Kiwi Syslog! Go!
Working with a Content Delivery Network is an important factor in speeding up your page load times. When we tested before-and-after we saw a dramatic decrease in first-time page load, from 3-4 seconds to 2-2.5 seconds, for US-based users. For subsequent widgets and pages the load time was about 30% faster. This is a lot! The other reason you’d like to have a CDN is that it’s going to take a large percentage of the traffic off your servers – so you’ll end up having fewer servers and paying less for traffic.
The basic service a CDN offers is speeding up the delivery of static content (imgs, CSS, JS files). The advanced services CDNs offer are media streaming and something called Whole Site Delivery – out of scope for this blog post. For a small site/service you’re going to pay $1000-$2000/month for the basic CDN – which may not be too bad considering the reduced costs on servers and traffic.
If you know you’re going to use a CDN you can write your code and delivery procedures in a way that starting to use a CDN would just be a flip of a config file entry. If you already have a website/service functioning without a CDN you’ll probably need to do some work to separate and version the static files correctly and add proper configuration everywhere. So, with the right design you should be able to integrate with a CDN, change CDN or stop working with a CDN in a matter of minutes.
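Here’s a minimal sketch of what that “flip of a config file entry” can look like – an asset-URL helper in Node.js for illustration (the names and the /static layout are mine, not a prescription): every static resource goes through one function, so pointing it at a CDN host, switching CDNs or dropping the CDN is a one-line config change.

// assets.js - a sketch of a CDN-aware asset helper. cdnHost and the /static layout are illustrative.
var config = {
  cdnHost: ''   // empty = serve from your own servers; set to e.g. 'http://cdn.example.com' once you sign with a CDN
};

// Every template/page builds its static URLs through this helper.
// The version segment makes each release cache-bust cleanly on the CDN.
function assetUrl(file, version) {
  return config.cdnHost + '/static/' + version + '/' + file;
}

console.log(assetUrl('img/logo.png', 'v42'));   // -> '/static/v42/img/logo.png' (or the CDN-prefixed URL)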
So the story goes like this: we decided we had to have a CDN because every millisecond of page load time is critical. This was before launching our initial service. We went shopping and were surprised – it seems that most of the bigger CDNs were not willing to work with us at this stage at all. Even the rep of the local Cotendo (a startup sharing a VC with GameGround) never returned a phone call… Luckily the local rep of Limelight was willing to take the deal, and after a couple of weeks of negotiations we switched on the config and it was working well (we did have a couple of config issues – minor faults on our side).
Q: Should a small lean-startup deploy a CDN as part of their initial release?
A: NO NO NO. It’s expensive, and signing up with the local representative of a CDN will consume too much of your time.
Q: Should a lean-startup write their code with a CDN in mind?
A: Yes! Sure! This will allow you to speed up your site and offload traffic if and when your site/service is showing some signs of success. Coding with a CDN in mind won’t make it slower anyway.
Q: Can you give some hints on how to design it right to work with a CDN?
A: I promise to have a post about it later on… << but if you have a specific Q – ask it in a comment below
Q: Are there no free/cheap alternatives?
A: There are! Check out this post about using Google App Engine as a free static-data CDN. Also – this post about using Dropbox as a free CDN solution. Note that if the delivery of the resources from those unofficial CDNs is not faster than delivering them from your own site, then adding a CDN configuration might actually slow down your site. Beware!
Of the 14 years I’ve been developing software, 10 years were with companies doing B2B software (intended to be sold to another business, as opposed to B2C – software that is directed at consumers online etc.). In recent years the Agile development methodology has been growing strong, and a recent Forrester study shows that over 40% of development teams in the US now use some sort of Agile development methodology. I’ve heard of Agile projects in some of the larger companies and had a chance to “upgrade” my own development department to work in an Agile environment (we took Kanban as our preferred Agile approach). Now, this blog post is not going to be about my experience with Agile. Instead I’m going to tell you about a talk I had with a friend who told me Agile was not for his (awesome) B2B software company, and my response to that. So these are the reasons why he thought Agile was not for him:
At first this all seemed logical to me, as I knew that the real power of Agile development lies in the quick release cycles (“give something small to your customers often”) and in cases where the software quality sucks. Anyway, on second thought, these are the questions I asked –
Well – you’re expecting this – go Agile. 🙂
I’m not saying it’s magic. You’ll have to invest time to make it happen. You’ll have to give it a chance and believe it can greatly improve your performance. Why “believe”??? – We are engineers (or sales guys) and we have targets and methodologies to work by. Why do we need to believe? Because you’ll have to change the way you work. People don’t like to change. People like to stick with processes they know. They mostly don’t see the flaws. They find it hard to believe that it can be so much better.
This is why I think that the goals for an Agile project must be set by a high-ranking manager. The R&D manager is going to be involved for sure, but also the marketing/product manager and a sales exec, since one of the main goals of an Agility project would ultimately be to improve the sales cycle and time to market. So it seems the division head or CEO is probably the person that should set the goals and allocate the resources for an Agility project.
OK. So you are saying “the above is exactly the problem I see in my company, but I’m not the CEO. I’m merely a team leader in R&D…” – Now what? Well, send this post to your CEO – ask for a meeting to discuss the topic. Come prepared. Bring along an Agile coach/consultant. They are used to talking high mgmt. into investing in Agile.
Next up – some guidelines on how to go Agile without missing your yearly quota / deadlines.
Qs and comments are welcome as always.
This is the 3rd part in a series of blog posts reviewing several 3rd pty products and services I’ve used in GameGround and my take on them. The basic approach I’m taking here is the applicability of the product for a lean-startup that wants to move fast. In the last post I wrote about Community engagement tools for the marketing team: sending emails and engaging customers in a conversation. This post is about Analytics and BI Reporting. Next up – OPS tools and of course, development infrastructure.
This extremely popular free SAAS service by Google has become the de-facto standard in website traffic analysis. 10 years ago I used to download my HTTP server logs and run a simple analysis tool that gave me most of the basic analysis features I needed, but this SAAS has some excellent analytics features like measuring page view time, campaign origin tracking, goal tracking, integration with AdSense etc. There are a few BUTs here, which make me think twice before I choose this option again:
In short – For modern websites and apps GA is almost useless – it will only give you the big picture. Forget about the details …Or check out a better service that was designed for it.
Two insights on the development management side of things: Plan the analytics of every feature as part of the design of the feature itself. Having a feature that one cannot analyze and understand user interaction with is usually worthless. Plan to spend more time than you initially thought to support Google Analytics efforts (probably true with any analytics.)
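As an illustration of “plan the analytics as part of the feature”, here is roughly what wiring an interaction event looked like with the ga.js async syntax of that era – the account id and the category/action/label strings are placeholders you would decide on while designing the feature, not values from GameGround:

// Standard ga.js async setup (the ga.js loader snippet itself is omitted here).
// 'UA-XXXXX-X' is the placeholder for your own GA account id.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-X']);
_gaq.push(['_trackPageview']);

// Inside the feature's client code - report the interaction you'll want to slice later.
// 'Challenges' / 'accept' / 'homepage-widget' are illustrative names chosen at design time.
_gaq.push(['_trackEvent', 'Challenges', 'accept', 'homepage-widget']);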
SiSense is a startup developing a very interesting reporting product based on a unique columnar data-storage technology (as opposed to the “regular” OLAP cubes or other in-memory solutions) that enables large-scale data-set analysis. The product has an easy-to-use interface that allows creating beautiful web-based reports for business intelligence, website analysis, or anywhere managers need a dashboard with stats. It can connect to multiple data sources including most common DBs and even cloud services like Google AdWords, Google Analytics and Amazon S3 logs. This means that the cost of creating and operating excellent reports is much lower than with some other popular products by IBM, Omniture, Microsoft, Oracle and so many others.
I liked using their product a lot. In GameGround the product was mostly operated by one of our QA guys (in addition to his QA roles) who had some basic knowledge of databases and SQL, assisted occasionally by our DBA.
A few notes for everyone thinking of building a BI suite using SiSense and the like:
A few notes specific to SiSense Prism:
This is the 2nd part in a series of blog posts reviewing several 3rd pty products and services I’ve used in GameGround and my take on them. The basic approach I’m taking here is the applicability of the product for a lean-startup that wants to move fast. In the last post I wrote about A/B/Split testing tools for the marketing team. This post is about Community Mgmt. Next up – Web analytics and BI reporting, OPS tools and of course, development infrastructure.
One of the first features every service has is sending email to customers. There are 2 basic types of emails to send: transactional and mass-mailing. Transactional emails are those produced as a result of a user action, like registration, friend invites etc. Mass-mailings are those where you invite your registered users to an event, a sale etc. So why not use your own corporate SMTP server for those emails? Because you are likely to find yourself in one of the many blacklists of spam servers at some point. If spam filters on several servers worldwide find your emails to be spam, or if 2-3% of your users mark your email as spam, you’ll be blacklisted and will not be able to send emails from your company at all… bad idea. Other issues you’ll have to manage yourself if you don’t use a SAAS for this are the unsubscribe list (<1% of users on social networks unsubscribe on average) and the email bounce list (~12% of email addresses users give on social networks are mis-typed or bogus). Managing those lists is mandatory if you don’t want to get blacklisted.
We started with MailChimp, probably the largest of several competing services, but quickly found that they would not send our mass-mails, as they are afraid their servers would get blacklisted. We then had the same issue with Constant Contact and CampaignMonitor. It seems that most EMS vendors send all email from a set of about a dozen shared IP addresses; thus, they have to minimize complaints across their entire portfolio. Most EMS vendors require that you give your users either opt-in (an “I’m willing to get marketing materials” checkbox on registration) or double opt-in (+email verification). And if the complaint rate resulting from your service is above a very low threshold they kick you out. On our first campaign, to just 1200 registered users, we had a complaint rate of 1.1% and their acceptable limit was 0.2%… For a young company with little track record that is running its first campaigns, the demanded ratios were not acceptable. And we wanted to have an opt-out on sign-up, not an opt-in. We got stuck for a few days till we managed to resolve the mess.
Then enter SendGrid! SendGrid is a cloud-based SAAS with a technology that seems to be far more resilient to blacklisting. Their white-label feature allows you to bind your domain’s MX records to one of their servers with an IP address in the cloud. This means you do not share IPs with others and do not need to comply with such low complaint rates. If you get blacklisted you can change the IP address and/or domain name and be back in business in a matter of minutes. So we set up 2 accounts – one for transactional emails, which are less likely to cause blacklisting, and bound it to the company’s domain name. Then we bought another domain, ‘mailer1-mycompany.com’, and bound it to the second account. The SendGrid system appends an ‘unsubscribe’ link to your emails if you don’t do it yourself, and they manage the lists for you – they won’t send an email to someone who unsubscribed, even if your service did ask to send it. You get a dashboard where you can see stats of your sent mails, bounces, spam reports etc. and fix your email templates as needed.
The integration with SendGrid’s basic SMTP service took us 15 minutes. They also give you APIs to sync user lists, send using predefined templates etc., but we haven’t got to use those. Pricing is low for what you get. It’s highly recommended to work with them and utilize their APIs to save you the need to write email templates and change them every other day according to the product’s needs. Let the product guys edit the email templates in the SendGrid control panel. No code changes are involved unless a radical change is made and different parameters are needed to fill in the template. So much simpler to operate this feature, too. Our email system is working fine with a delivery rate of ~95% on the transactional emails, which is excellent.
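For a feel of how small that 15-minute integration is, here’s a minimal sketch of sending a transactional mail through SendGrid’s SMTP endpoint (smtp.sendgrid.net) from Node.js, using the nodemailer module as an example – my choice of module, and all the credentials and addresses below, are placeholders, not GameGround’s actual setup:

// send-welcome.js - a sketch of a transactional email via SendGrid's SMTP service.
var nodemailer = require('nodemailer');

var transporter = nodemailer.createTransport({
  host: 'smtp.sendgrid.net',
  port: 587,
  auth: { user: 'YOUR_SENDGRID_USER', pass: 'YOUR_SENDGRID_PASSWORD' }   // placeholders
});

transporter.sendMail({
  from: 'noreply@mycompany.com',           // the domain you bound to the account
  to: 'new.user@example.com',
  subject: 'Welcome!',
  html: '<p>Thanks for registering!</p>'   // better: a template the product guys edit in SendGrid
}, function (err, info) {
  if (err) return console.error('send failed:', err);
  console.log('queued:', info.response);
});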
Now, how about some tips on how to avoid getting your emails marked as spam? This is a bit out of scope here – maybe I’ll do another post on the quests I’ve had working around the spam-filter minefields. Meanwhile, you may want to read here.
This very successful SAAS product allows you to add a popup widget to your website where your customers can write their feedback, good or bad, and make suggestions. It allows your support and product ppl to engage in a conversation with your customers in a productive way. The JavaScript integration is simple, and with a bit of extra integration you can also OAuth your logged-in users, so that you can work on a common user base and get back to the users who wrote feedback. The control panel allows you to define your products, settings, admin the feedback etc. To have the OAuth feature (a must in my opinion) you need to buy the [expensive!] $99/mo plan. The integration went easily and everything was working in a matter of hours. BUT!!! I don’t think this product is so great:
GameGround.com is a service I built during 2010 that was alive till mid-2011. I managed this startup’s dev teams, developing a consumer-facing social meta-game. This is a short review of several 3rd pty products and services I’ve used and my take on them. The basic approach I’m taking here is the applicability of the product to a lean startup that wants to move fast. I started writing it and quickly found out that it’s actually too long for one post. So I’m going to make it a series of posts covering Marketing tools, Community Mgmt. tools, OPS tools and of course, development infrastructure.
GWO is a simple and free Google service that assists in A/B/Split testing. The JavaScript API makes the decision on which of the optional views to show, and you get a clear stats view of which of your options is better. I think this product is too simple and not very helpful, as it misses the very basic idea of A/B testing: the whole point is that the designer/product mgmt. can run different views and phrasings to find what works. The problem is that for even very small variations of pages you need to push code to your production servers, so the marketing team cannot work without a developer assisting them in the process. This leads to too many people being involved and the process being too slow. Another problem is that some of the stats are updated only on a daily basis. In many cases you’d like to make a decision faster and move to the next test – why wait a day?
VWO – after giving up on GWO we moved to VWO. This new and relatively cheap SAAS-based product is great! After a simple JavaScript integration one can create page variations using a WYSIWYG HTML editor on VWO’s site: set test goals, alter text, images and CSS and even replace whole pages at runtime. This means that the marketing team can create most A/B/Split variations by themselves and run the testing without a developer nearby (well… they will ask some Qs…). The testing stats are displayed instantly in a very clear way (see images) and you can even tell it to automatically stop the testing and always show the winning variant. As a bonus you get heat-map views of your tested pages. The couple of issues we’ve had with them were (a) their inability to run tests on our logged-in pages, e.g. their WYSIWYG editor could not pass the login barrier; this required some extra work in the service integration phase. (b) At one point the testing stopped and we found out that a new version of their JS integration library had been published without informing us (the customers). Anyway, I can tell that their support team was fast and gave us a quick remedy. VWO’s slogan is “World’s easiest A/B testing tool” and I think they are doing a wonderful job at it.
Unbounce is a landing-pages SAAS: “… a self-serve hosted service that provides marketers doing paid search, banner ads, email or social media marketing, the easiest way to create, publish & test promotion specific landing pages without the need for IT or developers.” Yes! Landing pages for specific audiences and campaigns are an excellent way to drive traffic to your site. And Unbounce’s platform, with its WYSIWYG HTML editor, simplifies the process even further, allowing marketing to create those pages and manage them as part of the campaigns they are running without needing development involvement. They even give you multiple pages per landing page (e.g. a small website), a lead-generation module, A/B/Split testing tools and other goodies. So far so good.
BUT! There’s a major but here: the SEO marks for those pages on Unbounce are extremely low. Search engines don’t like websites and landing pages that have only static content. They also don’t like it that the landing page is not under your own domain, but rather on Unbounce’s, and so they incorrectly see the landing page as a spam blog. This (among other things, I’m sure) led us to get very few displays of our ads on Google AdWords and very few clicks coming from this major traffic source.
We ended up using some other desktop HTML editor to create a single-page site for each landing page. It was then uploaded to our live production servers, under the ‘/play’ folder, using an FTP access we opened for it. This way the marketing team could create their landing pages according to the running campaigns and upload them to production with little or no dev/OPS involvement. This is lean thinking at its best – have as few ppl as possible involved in each task. Ppl should mostly be able to complete their tasks end-to-end without needing to interface with others.
I rarely get to see a new technology that sparks my mind and keeps me up late at night, trying to utilize it and do something with it. It happened to me a couple of months ago when I first played the PC version of Angry Birds (2 sleepless nights…!) and lately again with Node.JS. But this one is no game! To explain the thing I have to take you back in time to the year 1999…
It all started when I wrote my first server for Exent’s Games on Demand platform. It was a large-volume file data server designed to respond to requesting clients very fast and serve thousands of concurrent clients. We wrote the server as a kernel module and accordingly it was written in a fully asynchronous fashion. This project, led by my first team leader, Amnon Romm, was certainly the most beautiful piece of code I have seen to date. [We developers can see beauty and ugliness in simple code. It’s a special gift we have that a non-developer will never understand… 🙂 Code is actually mostly old and ugly. If you write something and a colleague comes to you and tells you your code is beautiful – this would be the BEST compliment you can ever get. Really]
Since then I’ve seen so many other servers. Some of them, like Check Point’s Firewall, definitely have a good asynchronous architecture (even if the code is somewhat ugly…). But one thing I could never figure out is how come the whole WWW (the browser part of the internet) runs on top of badly designed synchronous servers. Maybe it’s the basic design of the [wonderful] HTTP protocol, which is request-response based. Maybe it’s us, developers, who find it harder to design and code asynchronously. Maybe it’s because in the early ’90s, when internet standards were written, running a CGI process on a UNIX was the main way to handle HTTP server requests, and we just never wanted to stop supporting those standards… Anyway, I figured out that all the most common servers – Apache, IIS, Java-based web services, the standard .NET stack, Django/Python, PHP, Ruby and almost every other piece of HTTP server out there – are written to run in a synchronous environment. Every request is served either by a new thread or by a thread from a big thread pool. Such a thread executes the request and response stacks, waiting for resources from the DB, from the disk drive, from a memcached service it calls etc. Each time it goes to sleep waiting for a response from the device to arrive, it is context-switched out to give another thread some CPU time. This means that heavy-load servers spend MUCH of their CPU and memory-bus time switching threads. The simplest server written with (the newly designed) .NET/WCF can have up to 100 threads running on a dual-core processor. The result is that a strong server can serve only a few thousand clients/browsers concurrently. So much CPU time and money is wasted.
Another issue is the usage of high-level interpreted languages to write the internet. From JSP to PHP to Python. Most of the internet is written in scripting languages because they are easy to write and easy to deploy. But everyone knows they run slow. It’s a trade-off development managers take – code fast and get it out the door. They say “we’ll have other ways to speed up the beast after it’s already out” – clustering, stronger hardware, another caching layer etc. Anyway – once the version is out it’s now the problem of the IT guys to meet the SLA. WTF??? Some real efforts were recently made by Facebook to speed up PHP with their release of HipHop – a server add-on that transforms PHP scripts into highly optimized C++ code and then uses g++ to compile it to machine code before it’s run. They say that on average it reduced CPU usage at Facebook by about 50% (!!!) and that WordPress 3.0 runs x2.7 faster under HipHop. Wow! Impressive!
But what if we could solve the synchronous design in a similar way? The problem is we cannot – we’ll have to throw away all the code that was ever written and start fresh. That is because asynchronous code cannot call code that blocks and all that code out there is blocking.
So what is it? I see it as the next-gen any-software platform. You write the code in JavaScript, which then gets run inside Google’s blazing-fast V8 JavaScript engine. This engine first compiles the code to binary and then executes it. The more important thing is that the core libraries of Node.JS require you to write everything asynchronously. Whether you’re accessing a config file, requesting an access token from a Facebook API or running a SQL query on your DB – all the APIs are asynchronous. They also took the event-driven I/O approach and implemented the CommonJS specifications, so it’s extremely simple to write servers using Node.JS. There are also a few other strong features: a way to package modules of code into libraries or Modules; a way to have native binary Modules (written mostly in C/C++); a package manager called NPM; and an excellent open-source-spirited community of devs around it. It was started by Ryan Dahl in 2009, and its growth is sponsored by cloud provider Joyent, which employs Dahl.
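To give you a taste, this is more or less the hello-world server the Node.JS homepage shows – a complete, non-blocking HTTP server in half a dozen lines (the port number is arbitrary):

// A complete HTTP server - nothing here blocks; the callback runs per request on a single event loop.
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(8124);

console.log('Server running at http://127.0.0.1:8124/');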
So what can you do with it? Well, basically everything you can do with Python, Perl or Java – client code, server code. But the goal is definitely server code: blazing-fast web servers that can handle x10 more traffic and do it much faster. This can serve not only “regular” browser-based traffic, but can also be utilized to stream music and video, for sharing applications, reverse proxies etc.
There are a couple of things you need if you want to have a team of developers start working on your next big thing. First you want a proper development environment (use Eclipse with Google’s V8 plugin); then you need a unit-testing framework (use Expresso); an application server with MVC and templating support (use Express); ORM/Hibernate-like tools to ease coding on the DB (see MongooseJS for MongoDB and SequelizeJS for MySQL); a library of utility Modules to copy-paste from for almost every basic need (see the NPM Registry with almost 3000 entries << try searching “facebook”); and a cloud-based app engine to deploy your application on, preferably for free (I found 11 such services, but Nodejitsu and Cloud Foundry seem to be the most advanced). Let’s not forget that a strong development community is also very important (NodeJS main newsgroup: ~50 threads/day; ExpressJS group: ~10 threads/day; LinkedIn NodeJS group has 832 members; StackOverflow NodeJS tag: ~13 questions/day). I think we are good to go!
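And just to show how little ceremony the application-server piece adds, here’s a minimal Express sketch (newer Express versions; older releases used express.createServer() instead of express() – check the docs for the version you install):

// app.js - a minimal Express application: routing plus a JSON endpoint, still fully asynchronous.
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from Express');
});

app.get('/api/ping', function (req, res) {
  res.json({ ok: true, time: Date.now() });   // JSON in, JSON out - no XML translation layer
});

app.listen(3000);
console.log('Listening on http://127.0.0.1:3000/');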
The Node.JS thing is catching fire these days – this Google Trends view clearly shows how fast Node.JS is soaring and that it’s now almost as big as Ruby on Rails. Many new startups are seeing this as a great opportunity and are developing on Node.JS. It fits so well with the lean-startup concept: one coding language for both front-end and back-end >> one developer can write the whole feature, end to end. And no more translations between XML and JSON – now everything is JSON in all application layers.
Some giants have already decided they are joining the party. Microsoft has recently announced it’s going to support Node.JS on Azure (and in Visual Studio, for sure). VMware is already supporting Node.JS deployments on their cloud service – Cloud Foundry.
Interesting blog posts if you want to read further –
Did I say a game-changing technology? I think this is taking on the HTML5 hype. I’m predicting here that in 5 years Node.JS will be the most prominent coding language in the world, and servers based on the Node.JS platform will replace most of the ailing JBoss, IIS and Apache servers out there. The LAMP stack is dead, long live JavaScript.
p.s. I guess some of the readers of this post are saying “this guy is crazy! He’s taking an immature technology and arguing it should be used in production today. The risk is too high, yada yada yada…” I agree. The risk is high. If you don’t have strong devs that can master a new technology and face some difficulties, then you should stick with the usual Django/Rails/GWT. If you have strong devs, the upside of this technology is great and I think it’s mature enough for most tasks. Especially if you start something new.