Archive for the ‘development’ Category

Focusing is difficult

Monday, September 10th, 2007

If you are starting out, focusing on a single product is a great idea. Actually, if you can focus on a single product all the time, so much the better for your productivity. I’ve been so overwhelmed by the sheer amount of stuff I have to do, in almost completely unrelated areas, that I’ve been pretty much blocked from advancing significantly in the past week.

Over the weekend, I finally formalized my roadmap for the next few weeks:

First, I have to finish the Codekana release process. I know, I know, it’s already been released for a month, there is a 1.1 version already out there, there are a lot of users, etc… but there are some things I haven’t done yet which I like to consider part of the release effort. One is notifying some people explicitly about it, and possibly contacting both dead-trees and online magazines. Apart from this, I’d like to write one or two articles with the potential to become popular on reddit and other social sites, which can bring some awareness. One has to walk the fine line between promotion and interesting content very carefully to get there, but I think I can pull that off. And it will help a lot with growing traffic to the Codekana web site. There is one additional note here: since I want to publish this article on the Codekana web site for SEO purposes, and since the current web design would make it pretty hard to read, I will have to revamp the design before publishing it. All this is priority #1, as I’m doing Codekana a disservice until I provide some exposure.

Second, I want to address some outstanding issues with the ViEmus and prepare new builds. Continuously improving a product over a long period helps a lot with customer satisfaction and the success of the product. I have only done very minor things to the ViEmus in the past few months, and I want to do this before I embark on a major development burst again.

And finally, only in third place, I will be working on new Codekana features, my kodumi text editor project, and more ambitious marketing/exposure projects, which include setting up new blog(s) and writing several articles I have the themes for but just can’t find the time to write. But it just doesn’t make sense to engage in these activities until the current issues above are properly addressed.

There are two extra tweaks to this “grand plan” which are worth mentioning: first, I am kind of impatient, so I should just wrap my mind around the fact that each of these will easily take weeks. I tend to grow impatient seeing the list of pending things, and to try to finish the current one quickly. It just doesn’t help. And second: while I’m working on each of the items, I should just erase the others from my mind. Important and interesting as the other things may be, they’re just a distraction until their moment arrives.

Oh well, I had to get that off my chest. I’ve felt pretty stressed lately.

As a closing note, I’ve been trying to move this blog over to Feedburner. There are a couple of reasons, the main one being that I’d like to get a more precise count of how many people are reading the blog, and to show it on the sidebar. Since I browse the http logs every once in a while, and since rss aggregators send their subscriber counts in the referer string, I know there are about 200 people subscribing through these services, plus probably at least 100 more subscribing directly. Not bad for a blog I don’t update all that often, and which has often taken second place to actual product and development work. Anyway, I already set up the Feedburner feed, and I also installed a wordpress plug-in that should automatically redirect all subscribers over there, but it seems it’s only working partially. The Feedburner subscriber count is showing only 120 readers now, and the http logs show that many requests (notably those from Bloglines and NewsGator, which many people use) are getting a 304 code (“Not Modified”) instead of the 307 (“Temporary Redirect”) that the Feedburner plugin uses. There are two potential reasons: one is that the transition won’t be complete until I post a new article (which I’m doing right now, so I should find out quickly), and the other is that I should really upgrade my wordpress installation, which is an older version (I’ll probably do this in a couple of days). I’ll update this and let you know which one it was as soon as I find out, in case you ever have to do the same thing.
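
In case it helps anyone debugging the same thing, here is the difference at the http level (a schematic illustration of the standard exchanges, with made-up paths and dates – not a capture from my logs). An aggregator that already has a copy of the feed sends a conditional request, and if the blog answers the “has it changed?” check itself, the aggregator never hears about Feedburner:

GET /feed/ HTTP/1.1
Host: www.ngedit.com
If-Modified-Since: Mon, 03 Sep 2007 10:00:00 GMT

HTTP/1.1 304 Not Modified

What the plugin is supposed to answer instead, so that the aggregator goes on to fetch the Feedburner copy:

HTTP/1.1 307 Temporary Redirect
Location: http://feeds.feedburner.com/…

A new post changes the feed’s last-modified date, so the conditional check stops short-circuiting into a 304 – which would be consistent with the first theory above.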

[UPDATE: A new post seemed to do the trick. The subscriber count is now added to the sidebar.]

Codekana 1.0 released

Tuesday, July 24th, 2007

Here it is:


I have just officially released Codekana 1.0 for Visual Studio. You can visit www.codekana.com for all the details and to download the latest build. If you installed any of the beta builds, you will have to manually uninstall it before installing this one (hopefully for the last time, as post-1.0 builds will sport automatic upgrades).

With regard to the product, its capabilities, and how it can make your code reading and writing experience smoother and more productive, I think the best thing is for you to visit the web site. I’ve made a big effort to convey the usefulness of the product, so the text and illustrations there will probably explain it best.

I have tried to design a more modern look for the website: a colorful design, large fonts, concise copy, etc… Even if the product is good (and, of course, I think it’s very good), nice packaging is always very important. I do plan to put quite some effort into marketing this product. ViEmu is a product for a very small niche, but for that niche, just making sure searches for “vi visual studio” or “vim outlook” reach the right page is the most important thing. For a product like Codekana, where hardly anybody will be searching for “enhanced syntax highlighting visual studio”, it is very important to raise awareness and to present the value of the product properly. Since writing articles has proven to be a very powerful way to get many thousands of developers to my site(s), I will probably do quite some writing about various development-related areas in the near future. It’s very likely I will set up another blog, more development-centric and less oriented towards growing a small business. More news about this coming soon.

I have decided to finally release 1.0 today even if there is still one known issue with Codekana: sometimes, mainly when reinstalling it, Codekana colors and/or Visual Studio colors can get reset to odd values. This only happens occasionally, but it’s annoying, and it gives a certain feeling of instability to an otherwise rock-solid product (even if not a perfect one, of course). I know for sure that a feeling of solidity is important to sales, so it could detract a bit from sales if someone stumbles onto it early. So, why did I decide to release without fixing it? Here is a short list of the relevant reasons:

  • The problem is due to some internal problems in Visual Studio’s color configuration system. You can check this VS forums thread for the details: how the behavior can be isolated and reproduced on a clean VS install without Codekana installed, and how it seems only VS 2008 will fix it. I’ve spent weeks trying to work around this buggy VS behavior with no luck.
  • When it happens, the only effect is that colors can appear wrong, and this is fixed very easily by just going to Tools|Options|Fonts and Colors and clicking “Ok”, or by resetting Codekana colors in Codekana’s settings dialog (the Codekana support page describes this in detail).
  • The rest of the product is rock-solid by now, after well over a month in beta testing, and it’s very useful already.
  • I was already planning to implement a revamped coloring system in a future build, to overcome some of VS’s limitations by doing my own rendering and bypassing its coloring system, and I’ve realized this will be the only way to reliably work around the buggy behavior. Needless to say, this will take quite some work to get right (it’s not a couple days’ hack).

All in all, I decided to release 1.0 today, put a prominent notice in the blog announcement and on the support page, and work from there. Hopefully it won’t be too annoying, it won’t detract too much from sales, and I will be able to have a better solution even before the trial period of the first users expires. Posting about a known issue on the release day is not very satisfying, but I think it’s only fair.

I will keep posting about how Codekana fares, what my next steps will be, my marketing initiatives in the near future, and of course the slow but steady advance towards kodumi 1.0, my always-in-development text editor.

Codekana

Saturday, July 21st, 2007

[Update July 25 – Codekana 1.0 has already been released, so you can go to www.codekana.com for the latest release and all the details]

For the past few months, I’ve been working on a new product: Codekana for Visual Studio. It is a Visual Studio add-in which provides enhanced code visualization for C/C++ and C# code. It enhances the syntax coloring, not for decorative reasons, but in order to provide actually useful information, such as control flow cues; it can draw graphical outlines of the code’s block structure (allowing several One-True-Brace-Styles); it highlights all matches of the last search; it allows you to zoom in and out with the Control key and the mouse wheel; and several other features. Here is a screenshot of how Visual Studio looks with Codekana in action (click for a full-size view):



Some details about what you can see in this screenshot:

  • Blocks are highlighted and outlined according to their function: green for ‘if’ blocks, brown (dark red) for ‘else’ blocks, red for loops and loop control structures, etc… The goal of this is not artistic: it allows you to grasp the control flow of your code without even having to read it. You will see where the code loops, whether there is another way to exit a loop than by looping to the end, what block a condition controls, and more – ‘return’s are also highlighted in orange to show early exits, multi-way conditions (‘switch’ blocks) are colored in blue, etc. Once you get accustomed to the coloring (of course, you can customize it to your own taste), you’ll be able to understand control flow at a glance.
  • Also thanks to the coloring above, when you have several nested blocks and a long list of closing braces, you know which brace closes which construct. If you want to insert some code right before the end of a loop, you can visually tell where to insert it.
  • The name of the function at the top is highlighted too. This is a very quick way of knowing where in the code you are – especially with C-like languages’ brain-dead declaration syntax, the actual defined name can get lost among complex return types, template arguments, and whatnot. All definitions have the defined name highlighted: function definitions, class/struct/union/enum definitions, etc…
  • All matches of the last search are highlighted in yellow. When you are searching, you rarely want to find just the first occurrence. Visual feedback is invaluable for taking in information with high bandwidth.
  • Mismatched braces and parentheses are conveniently highlighted. Also, thanks to some pretty sophisticated incremental parsing technology, and unlike all other tools out there, Codekana is pretty smart about which brace is the mismatched one. Look at the screenshot above: it’s not obvious which brace is the mismatched one, and neither VS’s built-in parser, the compiler, nor any other tool gets it right, but Codekana tells you which one it is (see the minimal sketch right after this list).
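
Here is that sketch: a minimal stack-based brace scan showing the basic, non-incremental mechanics. It is just an illustration of the starting point – not Codekana’s actual algorithm, which works incrementally and uses heuristics to decide which brace is the likely culprit:

#include <cstdio>
#include <string>
#include <vector>

// Minimal brace check: report closing braces with no opener, and any
// openers left unclosed at the end. A smarter tool also has to *guess*
// which brace the programmer actually forgot, e.g. from indentation.
static void check_braces(const std::string &src)
{
  std::vector<size_t> open_stack;   // offsets of '{' not yet closed
  for (size_t i = 0; i < src.size(); ++i) {
    if (src[i] == '{') {
      open_stack.push_back(i);
    } else if (src[i] == '}') {
      if (open_stack.empty())
        std::printf("extra '}' at offset %zu\n", i);
      else
        open_stack.pop_back();
    }
  }
  for (size_t pos : open_stack)
    std::printf("unclosed '{' at offset %zu\n", pos);
}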

Of course, all this smart parsing happens in a background multithreaded processing framework – this way, it won’t slow down your editing even if you have just pasted 1,000 lines of pretty convoluted code.
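
In case you’re curious about the general shape of such a framework, here is a minimal sketch of the pattern (hypothetical names, much simplified compared to the real thing, and written with modern C++ threads for brevity): the UI thread hands buffer snapshots to a worker thread, which does the slow parsing without ever blocking typing.

#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>

// Hypothetical sketch: the UI thread posts buffer snapshots, and a worker
// thread reparses them, so typing never waits for the parser.
class BackgroundParser
{
public:
  BackgroundParser()
    : m_quit(false), m_pending(false),
      m_worker(&BackgroundParser::Run, this) {}

  ~BackgroundParser()
  {
    { std::lock_guard<std::mutex> lock(m_mutex); m_quit = true; }
    m_condition.notify_one();
    m_worker.join();
  }

  // Called from the UI thread on every edit; cheap, never does parse work.
  void OnBufferChanged(const std::string &snapshot)
  {
    std::lock_guard<std::mutex> lock(m_mutex);
    m_snapshot = snapshot;   // a newer snapshot simply replaces an older one
    m_pending = true;
    m_condition.notify_one();
  }

private:
  void Run()
  {
    for (;;) {
      std::string text;
      {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [this] { return m_pending || m_quit; });
        if (m_quit)
          return;
        text = m_snapshot;
        m_pending = false;
      }
      Parse(text);   // the slow work happens outside the lock
    }
  }

  void Parse(const std::string &text)
  {
    // ... build syntax information, then post results back to the UI ...
    (void)text;
  }

  std::mutex m_mutex;
  std::condition_variable m_condition;
  std::string m_snapshot;
  bool m_quit, m_pending;
  std::thread m_worker;
};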

Codekana has several more features, and I could go on talking about them, but since it’s been in closed but intense beta for over a month, it’s very solid, and it’s due out next week, I’ll just furnish you with a link to the documentation if you want to know more, and with a link to download and install the beta version:

Codekana documentation
Download Codekana 0.9

As you might guess, this technology is also part of what will become kodumi, the text editor I’m working on. But, meanwhile, it is already available inside Visual Studio for your development convenience.

I’m now off to get the web site ready. And BTW, I’m going to release this at a very affordable price point ($39), to get as many users as possible on board while I prepare great new features.

ViEmu for Word and Outlook released

Thursday, February 8th, 2007

At long last, ViEmu for Word and Outlook 1.0 is out the door:

ViEmu in Word 2007

Together with the new web design, you can have a look at it at viemu.com.

I’ve had a few sales already after it’s been out for only 12 hours or so, which is some kind of proof that there is interest. Thanks to those who’ve bought it!

After this, I’m going knee-deep into the development of kodumi, my up-and-coming text editor. I’m thrilled to go back to it, and I hope I will be able to reach 1.0 in just a few months. The goals are very ambitious, but getting 1.0 out the door is a priority, even if it offers just a glimpse of what will be coming.

And I’m incredibly happy, not only to get back to working on my editor, but also to *stop* having to fight against the poor and hostile interfaces provided by other apps. It will be refreshing to fight my own interface designs instead of others’.

Thanks to Andrey, Jose, Dennis and Ian for posting about it on their blogs even before I did!

As an aside, I must say I like Word & Outlook 2007’s new interface very much. I think many people will want to have it as soon as they try it out.

First anniversary

Monday, June 19th, 2006

Today is the 1st anniversary of the conception of ViEmu. That is, this very day last year, I came up with the idea of developing a vi/vim emulator for Visual Studio. I had been working for months on the kodumi text editor (back then it was just NGEDIT), and the last stretch had involved developing a scripting language compiler and VM, and implementing a vi/vim emulation module in this language.

It only took me about a month and a half to actually release version 1.0. It was a really hectic time, though. Actually, the short time-to-release was largely thanks to the fact that I already had the basic vi/vim emulation code – even if I had to port it from NGEDIT’s scripting language into C++.

ViEmu is nowadays a very solid product, having gone far beyond what I expected both in functionality and in sales performance. I’m now concentrating on preparing ViEmu 2.0, which will finally integrate the codebase back with kodumi, and provide some pretty advanced features to existing customers. I will also be ending the introductory pricing at the end of this month. I initially planned to introduce the new price at the same time as ViEmu 2.0, even though 2.0 is a free upgrade for existing customers, but the new version will be taking a bit longer than that, and I really think ViEmu is very good value at its full price. Actually, it seems a bit absurd that ViEmu 1.0, which was a much, much more basic product, cost the same as today’s ViEmu.

Working on two projects is a challenging dynamic for me. I am a “depth-over-breadth” type of guy, and I have trouble switching focus. I’ve worked on both kodumi and ViEmu for the past few months, and I expect to keep doing so for a long time to come. It’s even more challenging because of the different nature and status of the two products: one is for a very niche audience, with no competition, while the other is for a large public, with plenty of competition. One is already a selling product, while the other is still in pure development towards 1.0. One has limited potential, while for the other I see the sky as the only limit. One needs development work, while the other needs marketing work. One of them already earns me both a long user request list and a large amount of flattering user feedback, while the other is still something only I have used. One already helps pay the bills, while the other only helps reduce my social life. I always have some trouble setting the priorities, but I think I’m striking some kind of balance between improving ViEmu and advancing towards kodumi 1.0.

Fortunately, most of the codebase of both products will shortly be shared, and that will help with at least the part that is common. Also fortunately, the current customers of ViEmu are potentially also interested in kodumi, so I see the effort of improving and supporting ViEmu as an investment in establishing a good relationship with customers, which can result in a business benefit.

As a summary of the ViEmu marketing week I last posted about, which of course ended up lasting about 10 days, I must say I’m happy that ViEmu sales are breaking new records during June. I can’t be sure whether this is due to the announcement of the new pricing policy, the redesigned web page, the latest maintenance release, the richer trial-period user experience (no nags, just better notices and a welcome screen that provides the most relevant information), or a certain maturity of the product. But I’m sure all of them help. I’m looking forward to seeing how sales figures evolve in July, just after the pricing changes take effect. I’ll let you know during the next few months what the general trend is, both after the pricing change and after 2.0 is released.

Finally, as soon as ViEmu 2.0 is ready, I will be focusing more on kodumi. Actually, part of the work for ViEmu 2.0 will feed directly back into kodumi. Even though I announced that I may release another derived product before kodumi 1.0, the core technology in that product is needed for kodumi, and I’m pretty much an expert by now in building Visual Studio extensions, so it shouldn’t take as long to prepare as ViEmu has. On the other hand, I’m really excited to start working on this part of the code, as I will finally be working in an innovative area (a vi/vim emulator as a Visual Studio add-in is an interesting product, but it can hardly be called innovative). If everything goes well, I will be posting about it on the blog as I start working on it, so it will also bring some interesting technical content to the blog. Well, hopefully I will have the energy to post about it while I’m developing it.

Thanks everyone for your continued support during this year.

Fact sheet May’06

Thursday, May 25th, 2006

Fact #1: I haven’t posted on the blog for well over a month. With all the pending things I have (ViEmu 2.0, the text editor, my day job obligations, support, etc…), I can hardly find time to do so. Promises not kept: the “Friggin’ Darn Tough/Functional Dynamic Template-based C++” series, an article on ViEmu I promised to Keith Casey from CodeSnipers, an article with cool graphical charts on the digg effect as seen from viemu.com (more on this below), etc… Hopefully everything will come along. Until I build the business to the point where it can sustain me, I really just can’t afford to put my available energy into anything other than improving & supporting ViEmu, and preparing the next product.

Fact #2: The final name for the NGEDIT text editor will be kodumi. I wanted a name that sounded good, and which wouldn’t be limiting for the future evolution of the product. It means “hacking” in Esperanto, although Esperanto is not, like, so widespread that the meaning is the important part. I like how it sounds and I can identify with it. It will still work when the product becomes more than a text editor. If the product is really good, which I hope it will be, this should ensure the name sticks. I’m open to feedback and criticism. I’m pretty stubborn and it’s unlikely I’ll change it, though.

Fact #3: The next product I release probably won’t be the kodumi text editor. There is quite some work yet to be done on kodumi before 1.0, and I’ll probably release another product based on another functional part of the editing core, as a VS add-in. Hopefully one with a much larger appeal than a vi emulator. It will actually be based on one of the innovative features I’m planning for kodumi 1.0. It’s nice to have a product that has several offspring before being born.

On the other hand, given that I will probably be releasing this product, it may make sense to have a single site for all VS add-ins instead of a separate one for each product (such as viemu.com). Oh well… this right after moving to viemu.com… so much for my strategy forecast skills.

Fact #4: The amount of traffic you get from a reddit / del.icio.us / digg front page is amazing. I’ve also got thousands of visitors from StumbleUpon.

Here are some graphics that show it, as the graphical vi/vim cheat sheet I released made it (twice!) to those front pages. I apologize for not being able to write a full article on this; it would be worth an entire study.

In order to understand these properly, take into account that ViEmu was originally hosted at ngedit.com, and I moved it to its own domain, viemu.com, together with the release of the cheat sheet. The traffic graphs include both domains, as they’re served from the same account, but the Alexa graphs below show the two domains as separate lines.

I released the cheat sheet on March 28. Here is the traffic for that day (click for a full-sized image):

You can clearly see the moment it picks up to 100kbps sustained. The climb was caused by it reaching reddit’s homepage, which happened about half an hour after I submitted it (people liked it, so they voted for it, making it reach the front page – it’s not against their guidelines to submit your own stuff).

The traffic before the climb was typically low – very nichey product, a few blog readers, etc… enough to result in some sales, but nothing big.

I went to bed as soon as I saw it at the bottom of reddit’s front page. The next day would be crazier.

As a side effect, people started bookmarking it to their del.icio.us account for later reference. This is understandable given the “reference” nature of the cheat sheet. As soon as a fair number of people did this, it also appeared in del.icio.us’ popular page, thus getting more traffic from there.

This is the traffic on the 29th:

I apologize for not presenting higher-resolution sampling; I forgot to save it from my hosting provider, and I can’t generate it again.

Anyway, please take into account that the lowest bar in this graph is as high as the 100kbps peak in the previous one. It was pretty amazing. I first watched it for hours on end on reddit’s and del.icio.us’ homepages, with a lot of traffic coming in. But then I submitted it to digg, and watched it play the voting game in digg’s “sub leagues” (the system is very different from reddit’s). And then the big spike came: it made it to digg’s front page. All hell broke loose, bandwidth requirements grew to 2Mbps sustained, and the number of visitors was amazing. It made reddit and del.icio.us look like a joke.

My hosting provider handled it without a hiccup. On the other hand, that very afternoon after submitting to digg, (1) there was a power outage in my building, (2) when it came back, my DSL service was down and unfixable according to my ISP, (3) I got a flat tire while driving to a friend’s in order to watch the digg effect, and mainly to be on watch in case bandwidth went beyond the monthly limit, which happened, so (4) I had to upgrade my web hosting account. You could say I had all the hiccups web servers usually have in these cases.

Here you can see the traffic for the next two days:



You can see the long tail of the digg effect. Also, the cheat sheet got linked from many places around the web, and StumbleUpon started to pick it up as well.

Here you can see a graph of all of March’s traffic, a nice picture of the reddit, del.icio.us & digg effects:

And here is a glorious graph of all of 2006’s traffic:

I promise that I had traffic before March 29, even if here it’s squashed into oblivion!

Finally, I’ll bring you some captures of what Alexa thinks of my domains (it doesn’t know they are related).

First, here is Alexa’s “Daily Reach” measure, for the last 12 months, 6 months and 3 months (just for your static zooming enjoyment):





I can almost tell you where each spike comes from: the first one, in May last year, comes from Eric Sink’s kind mention of my blog & NGEDIT. The second one comes after the release of ViEmu. The largish one before the digg effect comes from a mention in Bungie’s web newsletter (which, expectedly, led thousands of hardcore gamers to the site, only one of whom was courageous enough to actually download ViEmu), etc…

I chose to show the daily reach above just because it is the Alexa measurement that best shows the evolution of my web presence. Their best-known stat is the “rank”, which ranks the site globally among all websites. They only plot it for the top 100,000 sites, but they give you the number in any case. Here are the graphs of the rank, for the last 12 and 3 months:



Actually, the second large spike you can see earlier this month was due to the cheat sheet making it once again to digg’s and del.icio.us’ front pages, this time as a direct link to the cheat sheet’s GIF file.

Amidst all of this traffic madness, there is another important source of visitors which is often overlooked. I know I overlooked it. Its name is StumbleUpon. This is not a social links site, but a plugin that you install into your browser, with which you both (a) vote sites up or down, and (b) discover sites other stumblers liked. The effect is much slower, but the number of visitors it can bring over a few weeks competes with the likes of reddit and digg.

In order to show this better, I will show some visitor numbers by referrer (only for viemu.com). I’ve decided not to total them by domain, as the distribution of source pages also provides some interesting info. I haven’t included many other sources, generated by bloggers, news sites and site owners discovering the cheat sheet and linking to it.

March
Total unique visitors: 22,901

http://www.digg.com 3910
http://digg.com 3210
http://digg.com/programming/vi_vim_Graphical_Cheat_Sheet_Tutoria… 2665
http://www.digg.com/index/page2 543
http://www.digg.com/index/page3 631
http://www.digg.com/index/page4 238
http://www.digg.com/index/page5 95
http://digg.com/index/page2 398
http://digg.com/index/page3 500
http://digg.com/index/page4 184
http://digg.com/programming 141
http://www.digg.com/programming 141
http://reddit.com 1814
http://del.icio.us/popular/ 1116
http://del.icio.us 112
http://www.stumbleupon.com/refer.html 120
http://popurls.com 392
http://diggdot.us 154

April
Total unique visitors: 20,429

http://www.stumbleupon.com/refer.html 7858
http://digg.com/programming 127
http://www.digg.com/programming 121
http://digg.com/programming/page2 69
http://digg.com/programming/vi_vim_Graphical_Cheat_Sheet_Tutoria… 376
http://www.digg.com/search 68
http://del.icio.us 104
http://del.icio.us/search/ 70
http://hedera.linuxnews.pl/_news/2006/04/03/_long/3795.html 1883
http://www.linuxnews.pl 556
http://linuxnews.pl 536
http://www.wykop.pl 216

May
Total unique visitors: 6,208 (this doesn’t count those coming through the GIF link as that is not considered a “page” by awstats)

http://www.stumbleupon.com/refer.html 805
http://digg.com/programming/vi_vim_Graphical_Cheat_Sheet_Tutoria… 133
http://www.digg.com/programming/vi_vim_Graphical_Cheat_Sheet_Tut… 53
http://digg.com/search 36
http://digg.com/search/page2/ 19
http://del.icio.us/search/ 45

Just for fun, I have included the links from several sites in Poland during April. For some reason it was very popular there during that month. Maybe vi/vim is better suited to heavily accented languages like Polish?

Fact #5: I’d need to sell about 1.5x to 2x as much as I’m selling now to live off the income from ViEmu. Not a big success 10 months after release. It’s OK, as I’ve learned a lot from the experience, and I needed to do most of it for kodumi anyway, which is the main goal. At least for the kodumi I want to develop and release.

Fact #6: vi/vim emulation for VS is not for the masses. I have gotten over 50k visitors to the site in the past two months. This is more than 20x what I was getting beforehand. I guess a product with more general appeal would have seen an enormous spike in sales. I’ve only seen a smallish upwards trend. Even VS users are a minority among vi/vim fans! I’ve sworn not to switch over to a Dvorak keyboard layout until the business really takes off; I could end up targeting an even smaller market!

Fact #7: I don’t understand Google results. I’m on page one for “vim tutorial”, but nowhere to be seen for “vi tutorial”. I was extra careful to write “vi/vim graphical cheat sheet and tutorial” everywhere, so that I would be found by any of the likely keywords, and the result is so bad it’s sick. Searching for “vi emulation visual studio” gets the old page, even though there are links to www.viemu.com all over the place. If there’s a sandbox, I don’t understand why it affects some keywords and not others. Is “vi” too short? Then how did my SEO work before with the ngedit.com address? I’m starting to experiment with creative redirections to the new site, but I’m going to do it the slow way in order to cut the losses in case Google doesn’t like my playing around.

Fact #8: it was cool to have the vi/vim cheat sheet translated into simplified Chinese by Donglu Feng, a nice guy who sent it over to me. It makes regular vi/vim seem a piece of cake:

Rough strategy sketch

Wednesday, March 22nd, 2006

I think I promised a general strategy post & a status report some time ago. Here goes.

Development strategy

I am currently dividing my efforts between two development fronts. One of them, ViEmu, has been available for almost 8 months now. It has improved a lot, and sales have been steadily climbing. Although not a stellar success, it’s working well beyond my realistic forecasts (if not beyond my wildest dreams), and I’m really happy that I decided to do it.

The second one, code-named NGEDIT, has been in development for a bit over a year, and it’s still not ready for release. In the time I’ve been developing it, both my belief in the concept and my disrespect for my own time estimations have grown a lot. I would be very happy to release 1.0 around July or August, one year after the release of ViEmu, but I know even that is optimistic. And that’s after I’ve decided to cut most of the stuff out of version 1.0!

Of course, apart from these clear-cut fronts, and not including my day job, there are other things I have to attend to. Customer support, for example, or this blog, for that matter.

I’ll try to summarize, in a general sense, my current plans for the next few months: what the main goals are, and how I’m planning to achieve them.

The #1 goal, as you can guess, is to release NGEDIT version 1.0. This is a bit trickier than it sounds. The act of releasing it is, in a general sense, more important than the exact functionality it brings. I have come to this conclusion after over a year in development, and after the experience of ViEmu. Emotionally, it’s much better to be working on improving an existing product than on a product heading towards its first release, with no users or customers. As long as you are not too impatient to get a lot of sales, having actual users & feedback is a big boost for motivation. Having a few sales helps as well. And, as long as the product is good and there is a need, sales only get higher as you improve the product.

In order to get this process going, I’ve cut many planned features from 1.0, so that I can release it before long. You might ask: why not release it in its current state, then?

A common answer, though not a very informative one, would be that it’s still too basic, or unusable. Well, that’s not completely true, as I use it myself. A better answer involves some thought about the market I’m getting into. The text editor market is pretty saturated, and most products out there have many man-years of effort built in. There is at least a general perception of things a text editor must have. I think releasing it without these features would be too much of a stretch. Rest assured, I’ve carefully removed everything which isn’t essential for 1.0. As with ViEmu 1.0, the first release will be pretty basic, but it will hopefully be a better tool for at least some people out there, and that should trigger the initial dynamic of usage-feedback-improvement.

Apart from these essential elements, NGEDIT 1.0 will also sport some interesting things that are well outside the minimum requirements list. The very complete vi/vim emulation, for one, or the native management of text in any format (no conversion on load/save). There are a few more, but these are probably the most interesting to talk about. There are two main forces that have resulted in this uncommon feature set. The first is that I’m building NGEDIT 1.0 as the core framework for the really advanced features, which have some unique requirements. And the second is that I’m building it to become my favorite editor first, and only then a commercial product. This results in the need for powerful vi/vim emulation, which is bound not to have much relevance as a commercial feature.
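
As an aside, the “no conversion on load/save” point can be pictured with a toy sketch of the concept (my illustration, not NGEDIT’s actual design): the buffer keeps the file’s original bytes plus an encoding tag, so saving can write back exactly what was loaded, and decoding happens only at the point of use.

#include <string>
#include <vector>

// Toy sketch: text kept in its native encoding, tagged, never converted
// on load/save. Decoding to a uniform representation happens per use.
enum Encoding { ENC_ASCII, ENC_LATIN1, ENC_UTF8, ENC_UTF16LE };

class NativeTextBuffer
{
public:
  NativeTextBuffer(const std::vector<unsigned char> &bytes, Encoding enc)
    : m_bytes(bytes), m_enc(enc) {}

  // Saving is trivially lossless: the original bytes are still here.
  const std::vector<unsigned char> &RawBytes() const { return m_bytes; }
  Encoding GetEncoding() const { return m_enc; }

  // Editing and display code would go through decoding accessors here,
  // converting code units to characters on the fly for the tagged encoding.

private:
  std::vector<unsigned char> m_bytes;
  Encoding m_enc;
};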

So, we could say the road to NGEDIT 1.0 is drawn by three guiding principles, listed in increasing priority:

  • III: Build a good foundation for the future versions of the editor, if not fully realized, at least following a scalable design
  • II: Release the minimum product that makes sense
  • I: Build my favorite editor

This is not a list of principles I try to adhere to. It’s more of a recollection of the kind of decisions I’ve found myself making on intuitive grounds. I’ve seen that I will trade the best design for some functionality in order to be closer to release, and I’ve found that I’ve traded away every sensible business principle by deciding to implement some very complete (and costly) vi/vim emulation. The fact that my sticking to vi/vim emulation has resulted in ViEmu, which is a nice product, (kind of) validates the principles. Actually, I think it validates them because I find myself enjoying the effort, which helps in sustaining the long-term effort, and the business is gaining momentum. Apart from this, the ViEmu experience has been an incredible sandbox in which to learn, and the lessons learned will play a nice role towards the actual release of NGEDIT. For example, on the Google SEO front, and also on the adwords & clickfraud front.

In a general strategic view, I’m meshing my efforts on NGEDIT 1.0 with steadily improving ViEmu. Even if ViEmu doesn’t have the business potential of NGEDIT, I think that making all the customers of ViEmu happy only helps with the later stages of building the business. One thing to which I haven’t paid too much attention is marketing ViEmu. I think I could easily improve ViEmu’s sales performance with some effort, but I also think this effort falls on the other side of the “makes sense over working on NGEDIT” line. So far, a bit of Google-tweaking, a bit of adwords, a bit of word-of-mouth, and a deserted market have been successful in building up sales.

This is very different from what I think I should do if ViEmu were the product on which I wanted to base my business. I would have to be working 100% on promoting it while steadily improving it. But, frankly, I don’t think ViEmu would be a sensible sole-business product. Not everyone is dying for vi/vim emulation.

So, what do all the above principles result in, in practical terms? The first point is that, for the past few months, I’ve been (a) improving ViEmu little by little and releasing new versions, (b) designing and working on the core architecture of NGEDIT, and (c) crossporting ViEmu’s vi/vim core to NGEDIT. The reason for the third point was that, upon using NGEDIT myself, I was sorely missing good vi/vim functionality. It already had some nice vi/vim emulation, written in NGEDIT’s own scripting language, which was the seed for ViEmu, but ViEmu had grown way beyond this seed. Thus, principle (I) kicked in, and I started to crossport ViEmu’s vi/vim engine.

Why do I say crossport? The reason is that I have been rewriting the core in such a way that it can be used both within NGEDIT and within ViEmu. This has imposed some major requirements on the design of ngvi, as I like to call the new core, and it’s one reason it’s taken some serious time to develop. This effort has some nice side effects:

  • I now have a super-flexible vi/vim core that I can integrate in other products, or use to develop vi/vim plugins for other environments (ah, if only solving interaction problems with other plugins weren’t the worst part!).
  • I can now put in work that benefits both products.
  • I’ll talk about it later, but I have come up with some neat new programming tricks due to this effort. The payoff for this will come later on, but it’s there anyway.

The new core is almost finished, with only ex command line emulation left to crossport. For testing, this core is being used in NGEDIT; that way, ViEmu can advance as a separate branch. As soon as ngvi is finished, I will start implementing ViEmu 2.0 based on it. This new core already brings some functionality that ViEmu is lacking, and I will be just plain happy that most of ViEmu is then officially part of NGEDIT.
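
To make the crossporting idea more concrete, here is a sketch of the general shape such a shared core can take. These names are made up for illustration – they are not the actual ngvi interfaces:

#include <cstddef>
#include <string>

// Hypothetical host interface: everything the vi/vim core needs from
// whichever editor embeds it – NGEDIT natively, or Visual Studio via ViEmu.
class IEditorHost
{
public:
  virtual ~IEditorHost() {}
  virtual std::size_t LineCount() const = 0;
  virtual std::string GetLine(std::size_t line) const = 0;
  virtual void ReplaceLine(std::size_t line, const std::string &text) = 0;
  virtual void SetCaret(std::size_t line, std::size_t col) = 0;
};

// The shared core only ever touches the buffer through the interface above,
// so the same emulation code can be compiled into both products.
class ViCore
{
public:
  explicit ViCore(IEditorHost &host) : m_host(host) {}

  void ProcessKey(char key)
  {
    // Interpret 'key' according to the current vi mode (normal, insert,
    // visual...), then act on the buffer through m_host.
    (void)key;
  }

private:
  IEditorHost &m_host;
};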

And after this, I have a couple of major features in NGEDIT that need to be implemented, and a gazillion minor loose ends. If you are an experienced developer, you’ll know it’s those loose ends that put the July/August release date in danger.

Names, names, names

As I mentioned recently, NGEDIT will not be the name of the final product. I already have a candidate for the name, and there’s only one thing pending before it becomes official: I need to check it with a Japanese speaker. I haven’t been very successful asking here on the blog, or asking the Japanese customers of ViEmu. Understandably, I haven’t insisted too much with my Japanese customers – they are customers after all!

I don’t want to reveal the name just yet, as I don’t want even more confusion if it ends up not being the final name. I would also like to have at least a placeholder page ready when I reveal the name.

Apart from this name change, I also intend to do something about the blog’s name. I plan to blog more and more in the future, once the business doesn’t critically require all my energy. I also plan to cover other areas: programming languages, software design, A.I., O.S.S., operating systems – I’d even like to write on things like economics or the psychology of programming! I think a more general name would be a good idea.

Given that the new editor will have its own new name, that I plan to move ViEmu to its own domain (viemu.com, already up with a simple page), and that the blog needs another name, ngedit.com will very likely end up pretty empty.

All that pagerank accumulated for nothing… sigh! In any case, now should be the best moment to do the deep reforms.

I’ll let you know when these names are ready for general exposure.

The blog

If you have been reading long enough, you will probably have noticed that I post less often than I used to. The main reason is that development itself already drains most of my available energy. There is not much I can do about that, except wait for days when I have more energy, and wait for the moment when NGEDIT is finally released. I will feel much better when NGEDIT is out there, and I think I’ll be able to concentrate better on other things. Having put in so much effort so far, and not having it available for download & for sale, puts on a lot of pressure.

But there are also other reasons. For one, I have many interesting topics I’d like to cover, but which I don’t want to cover just yet. I prefer to wait until I have a working product, before bringing up some of these areas. Should be better business-wise.

This ends up meaning that I don’t want to write about the stuff I want to write about. Ahem.

Anyway, I have come up with an area I’d like to cover with a series of posts. It’s about the techniques I have been using for the development of ngvi, which could be described as the application of dynamic & functional programming techniques to C++. Some of the techniques are applicable to C++ only, but many others apply to general imperative/OO programming. Hopefully it will be interesting to (some of) you.

Focusing my development effort

Thursday, November 24th, 2005

Long-time readers of my blog already know about my tendency to get carried away with stuff. I’ve gotten carried away with something in the past, only to have to retract it the following day. The second post mostly deals with this tendency to get carried away. To sum up: I don’t think the lesson I need to learn is “refrain more”, as that takes away a lot of the energy as well – “learn to acknowledge my mistakes happily and as early as possible” seems a much more valuable lesson for me. And it applies in many other fields.

I’ve also talked about my inability to write short blog posts, and have failed almost systematically at doing so in the past.

Anyway, to get to the point, this (of course) also applies to my dedication to development. I tend to drift off too easily, especially when the goal involves developing a complex piece of software like NGEDIT. Although I’ve posted in the past about my strategy in the development of NGEDIT, I find that I have to revisit that topic really often – mostly in the messy and hyperactive context of my thoughts, but I thought I’d post about it, as it may also apply to other fellow developer-entrepreneurs.

I recently posted about how I had found the best way to focus my development efforts on NGEDIT. To sum up: try to use it, and implement features as their need becomes evident (I’m fortunate enough that I am 100% a future user of my own product). As the first point coming out of that, I found myself working on getting NGEDIT to open a file from the command line. That was weeks ago, and I have still only almost implemented it. How come? It should be simple enough to implement! (At least, given that opening a file through the file-open dialog was already functional.)

Well, the thing is that my tendency to drift off, my ambition, and my yearning for beautiful code kicked in. Instead of a simple solution, I found myself implementing the “ultimate” command line (of course). It’s already pretty much fully architected, and about half working (although opening files from the command line ended up being just a small part of the available functionality). As I did this, I also started refactoring the part of the code that handles file loading to use my C++ string class that doesn’t suck, which is great, but is quite an effort by itself. Meanwhile, I found myself whining that I didn’t want all that code written against the non-portable Windows API (as a shortcut I took before summer, NGEDIT code uses the Windows API directly, and uglily, in way too many places), so I started implementing an OS-independence layer (I know, I know, these things are better done from day 1, but you sometimes have to take shortcuts, and that was one of many cases). Of course, with the OS-independence layer using said generic string class for its interface. And establishing a super-flexible application framework for NGEDIT, whose current state was a bit cluttered for my taste. And, sure enough, I started trying to establish the ultimate error-handling policy, which took me to posting about and researching C++ exceptions and some other fundamental problems of computing…

If that’s not getting carried away, then I don’t know what is!

Today’s conclusion, after going out for a coffee and a walk in the cool winter air, is that I should refrain from tackling fundamental problems of computing if I am to have an NGEDIT beta in a few months’ time. The code of NGEDIT 1.0 is bound to have some ugliness to it, and I need to learn to live happily with that. Even if I have to rewrite some code afterwards, business-wise it doesn’t make sense to have the greatest framework, the most beautiful code, and no product to offer!

In any case, I hope I have improved my ShortPostRank score, even if I’m definitely not among world-class short-post bloggers yet, and you can see I’ve had some fun with self-linking. Something nice to do after starting beta testing for ViEmu 1.4, which will probably be out later this week.

The lie of C++ exceptions

Thursday, November 17th, 2005

As part of the ongoing work on NGEDIT, I’m now establishing the error management policy. In the same way that I’m refactoring the existing code to use my new encoding-independent string management classes, I’m also refactoring it towards a more formal error handling policy. Of course, I’m designing along the way.

Probably my most solid program (or, at least, the one in which I felt most confident) was part of a software system I developed for a distribution company about 9 years ago. The system allowed salesmen to connect back to the company headquarters via modem (the internet wasn’t everywhere back then!) and pass on customers’ orders every evening. I developed both the DOS program that ran on their laptops and the server that ran on AIX. I developed the whole system in C++ – gcc on AIX; I can’t remember which compiler on the DOS side. Lots of portable classes to manage things on both sides. As a goodie, I threw in a little e-mail system to communicate between them and with hq, which was out of spec – and I still managed to stay on schedule! It was a once-and-only-once experience, as almost all my other projects have suffered delays – but the project I had done just before was so badly underscheduled and underbudgeted that I spent weeks nailing down the specs so as not to fall into the same trap.

The part I felt was most important to keep solid was the server side – salesmen could always redial or retry, as theirs was an interactive process. The server part was composed of a daemon that served incoming calls on a serial port, and a batch process that ran periodically and exported the received files to an internal database system.

How did I do the error management? I thought through every single line in the process, and provided meaningful behavior. Not based on exceptions, mind you. Typical handling would involve sending a warning to a log file, cleaning up whatever was left (which required its own thinking through), and returning to a well-known state (which was the part that required the most thinking through). I did this for e-v-e-r-y s-i-n-g-l-e high-level statement in the code. This meant: opening a file, reading, writing, closing a file (everyone typically checks file opens, but in such a system I felt a failure in closing a file was important to handle), memory management, all access to the modem, etc…
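
Reconstructing that style from memory (this is a fresh sketch, not the original code), it looked roughly like this: every call checked, failures logged, and everything cleaned up back to a well-known state instead of throwing.

#include <cstdio>

// Sketch of explicit error management: check every call, warn to the log,
// and leave the system in a well-known state on any failure.
bool CopyOrdersFile(const char *src_path, const char *dst_path)
{
  FILE *src = fopen(src_path, "rb");
  if (!src) {
    fprintf(stderr, "warning: cannot open %s\n", src_path);
    return false;                    // well-known state: nothing touched
  }
  FILE *dst = fopen(dst_path, "wb");
  if (!dst) {
    fprintf(stderr, "warning: cannot create %s\n", dst_path);
    fclose(src);
    return false;
  }
  bool ok = true;
  char buf[4096];
  size_t n;
  while ((n = fread(buf, 1, sizeof buf, src)) > 0) {
    if (fwrite(buf, 1, n, dst) != n) {
      fprintf(stderr, "warning: short write to %s\n", dst_path);
      ok = false;
      break;
    }
  }
  if (ferror(src)) {
    fprintf(stderr, "warning: read error on %s\n", src_path);
    ok = false;
  }
  fclose(src);
  if (fclose(dst) != 0) {            // yes, even fclose() gets checked
    fprintf(stderr, "warning: close failed on %s\n", dst_path);
    ok = false;
  }
  if (!ok)
    remove(dst_path);                // clean up the partial output file
  return ok;
}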

C++ brought exceptions. I’m not 100% sure yet, but I think exceptions are another lie of C++ (I believe it has many lies which I haven’t found documented anywhere). They promise error handling with much less effort, and they also promise to let you build rock-solid programs.

The deal is that exceptions are just a mechanism, and this mechanism allows you to implement a sensible error handling policy. You need a rock-solid policy if you really want failproof behavior, and I haven’t seen many examples of such policies. What’s worse, I haven’t yet been able to figure out exactly what one should look like.

Furthermore, exceptions have a runtime cost, but the toughest point is that they force you to write your code in a certain way. All your code has to be written such that, if the stack is unwound, things get back automatically to a well-known state. This means that you need to use the RAII technique: Resource-Acquisition-Is-Initialization. This covers the fact that you have to relinquish the resources you have acquired, so that your program doesn’t leak them. But that is only part of returning to a well-known state! If you are manipulating a complex data structure, it’s quite probable that you will need to allocate several chunks of memory, and any one of the allocations may fail. It can be argued that you can allocate all the memory in advance and only act once it is actually available – but this forces your design around it: either you concentrate resource acquisition in a single place for each complex operation, or you design every single action in two phases – the first one performing all necessary resource acquisition, the second one actually performing the operation.
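
For instance, here is a minimal sketch of that two-phase discipline with standard vectors (my illustration, using reserve() so that the mutating phase cannot fail):

#include <string>
#include <vector>

// Sketch of the "acquire everything first, then mutate" discipline, with
// two parallel vectors that must never get out of sync:
void add_pair(std::vector<std::string> &names, std::vector<int> &ids,
              const std::string &name, int id)
{
  // Phase 1: acquire all the memory up front. Either call may throw,
  // but nothing has been modified yet.
  names.reserve(names.size() + 1);
  ids.reserve(ids.size() + 1);

  // Phase 2: mutate. If copying the string throws, push_back's strong
  // guarantee leaves 'names' unchanged and 'ids' hasn't been touched;
  // pushing the int afterwards cannot throw (its capacity is already
  // reserved), so the two vectors cannot end up out of sync.
  names.push_back(name);
  ids.push_back(id);
}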

This reminds me of something… yeah, it is similar to what transaction-based databases do. Only elevated to the Nth degree, as a database has a quite regular structure, and your code usually doesn’t. There are collections, collections within collections, external resources accessed through different APIs, caches of other data structures, etc…

So, I think that in order to implement a nice exception-based policy, you have to design two-phase access to everything – either that, or have an undo operation available. And you’d better wrap that up as a tentative resource acquisition – which requires a new class with its own name, scope, declaration, etc…

Not to mention interaction between threads, which elevates this to a whole new level…

For an exception-based error-handling policy, I don’t think it is a good design to have and use a simple “void Add()” method to add something to a collection. Why? Because if this operation is part of some larger operation, something else may fail, and the addition will have to be undone. This means either calling a “Remove()” method, which turns into explicit error management, or wrapping it in a “TTentativeAdder” class, so that it can be disguised as a RAII operation. This means any collection should have a “TTentativeAdder” (or, more in line with std C++’s naming conventions, a “tentative_adder”).

I don’t see STL containers having anything like that. They seem to be exception-aware in that they throw when something fails, but that’s the easy part. I would really like to see a failproof system built on top of C++ exceptions.

Code that adds something to a container, among other things, often looks like this:

void function(void)
{
  //... do potentially failing stuff with RAII techniques ...

  m_vector_whatever.push_back(item);

  // ... do other potentially failing stuff with more RAII techniques
}

At first, I thought it should actually look like this:

void function(void)
{
  //... do potentially failing stuff with RAII techniques ...

  std::vector<item>::tentative_adder add_op(m_vector_whatever, item);

  // ... do other potentially failing stuff with more RAII techniques

  add_op.commit();
}
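
For what it’s worth, here is a minimal sketch of what such a class could look like (written as a free-standing template rather than a nested class; this is hypothetical, not an actual STL facility):

#include <vector>

// Sketch of the tentative_adder idea: the constructor performs the
// addition, and the destructor undoes it unless commit() was called.
template <typename T>
class tentative_adder
{
public:
  tentative_adder(std::vector<T> &v, const T &value)
    : m_v(v), m_committed(false)
  {
    m_v.push_back(value);    // may throw; nothing to undo at this point
  }

  void commit() { m_committed = true; }

  ~tentative_adder()
  {
    if (!m_committed)
      m_v.pop_back();        // roll back: some later step must have failed
  }

private:
  std::vector<T> &m_v;
  bool m_committed;
};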

But after thinking a bit more about this, it wouldn’t work either. The caller of this function may throw after it returns, so all the committing would have to be delayed to some controlled final stage. So we would need a system-wide “commit” policy and a way to interact with it…

The other option I see is to split everything into very well defined chunks that affect only controlled areas of the program’s data, such that each one can be tentatively done safely… which I think requires thinking everything through in as much detail as without exceptions.

The only accesses which can be done normally are those guaranteed to touch only local objects, as those will be destroyed if an exception is thrown (or, if we catch the exception, we can explicitly handle the situation).

And all this is apart from how difficult it is to spot exception-correct code. Anyway, if everything has to be done transaction-like, it should at least be easier to spot: suddenly, all code would consist only of a sequence of tentatively-performed object constructions, plus a policy to commit everything at the “end”, whatever the “end” is in a given program.

I may be missing something, and there may be some really good way to write failproof systems based on exceptions – but, to date, I haven’t seen a single example.

I’ll keep trying to think up a good system-wide error handling policy based on exceptions, but for now I’ll keep my explicit management – at least I can write code without adapting everything to transaction-like processing, and be able to explicitly return things to a well-known safe state.

This was my first attempt at a shorter blog entry – and I think I can safely say I failed miserably!