Archive for the ‘ngdev’ Category

October ’06 Status update

Saturday, October 28th, 2006

It’s way past time I updated the blog with some more recent info. I hope you’ll understand: time is the 2nd scarcest resource of my single-person startup, only right after cash, but very close.

For the past month, I’ve been able to do little more than answer support e-mails, respond to customers’ queries, and take note of the bugs/requests I’ve received. My day job has required a lot of time and it has been pretty stressful, so I just gave up on trying to actually achieve anything else.

I had been pretty busy working on ViEmu for the previous couple of months. I took a quiet August, started surfing – which I love as a great summer activity – and worked a lot on ViEmu/VS version 2.0. The worst part of it was a more-than-10-hour symbol-less assembly debugging session through the innards of Visual Studio, in order to find a bug in one of the APIs and implement a feature I badly wanted to offer in 2.0 (automatic keybinding removal/management).

After this, I was able to release ViEmu/VS 2.0 back in mid-September. The keybindings handling feature in particular has caused some trouble, so I will completely change the approach for the next major version (whenever that happens), but, all in all, 2.0 is a heck of an improvement over the previous 1.4 version. Sales have gone up, the feedback has been great, and I’m very satisfied with the result. It also makes use of the completely new ngvi emulation engine, which is also integrated in kodumi (my upcoming text editor), which will hopefully be released in early 2007. Having the engine confirmed to work right by hundreds of users gives me great confidence in it.

I also released ViEmu for SQL Server Management Studio 2005. It has made a modest debut, with not too many sales, but it should be useful to some folks into heavy DB development, and it turns ViEmu into a more rounded offering.

I’ve updated the web site to offer ViEmu/SQL too, but I only made the minimum investment of time in this. The reason is that I still plan to release a third ViEmu product before taking on kodumi development more seriously: ViEmu for Word and Outlook. Quite a few people have asked for it over time, I think integrating the ngvi engine in the Word framework won’t be too much trouble, and the main point is that I expect to extract the maximum ROI from the effort invested so far. Vi/vim emulation will never be a huge market, and implementing it for many other environments wouldn’t be a sensible business decision, but having the triad of ViEmu/VS, ViEmu/SQL and ViEmu/Word+Outlook seems like the best trade-off of effort and potential. ViEmu sales are already at a level where I could live off them, and adding a third product could make for a comfortable position from which to confront the release of kodumi 1.0 and develop the technology I intend to.

I will have to do a pretty complete redesign of the web site to present the 3 products, and presenting multiple products is always much more difficult than presenting a single one. Given that this effort is in the near future, I decided to do the minimum redesign possible for the release of ViEmu/SQL.

Some interesting facts:

  • July and August sales were slow (especially predictable for July, given June had been the last month of the previous price point and I cannibalized a lot of natural July sales), but September managed to catch up with June’s dollar sales (the best selling month ever so far), and October has again broken that record, almost catching up with the maximum ever unit sales in June.
  • Finally, the new ViEmu page has made it to the first page of both the “visual studio vi” and “visual studio vim” Google searches. As soon as I have an afternoon to sort it out, I will finally be redirecting the old “” page to “”. It’s taken 6 months for Google to acknowledge the new location (I didn’t want to redirect straight away and risk losing the ranking, as it had taken many, many months to get that page onto the first page for these very interesting searches).
  • I have a chart and an article almost ready, called “The Ultimate WM_KEYDOWN/WM_CHAR Table from Hell”. I’ve had to delve even more deeply into the broken-ness of the Win32 input model, as ViEmu 2.0 has full keyboard mapping support, and it’s simply amazing how broken it is. The previous article on the subject is, funnily, the 2nd Google result for “WM_CHAR”, right after the MSDN reference page, and the 4th or 5th for WM_KEYDOWN (and brings quite some traffic to my site). I believe the new chart will be very useful and it will be pretty popular on, etc… more exposure is always good.
  • As always, I still plan to blog profusely… in the future :). I certainly enjoy writing and sharing my experience, and it’s definitely useful for the business, but I still have to prepare ViEmu/Word+Outlook and get kodumi 1.0 ready before I can dedicate more time to blogging. Actually, there is some very interesting technology I am preparing for kodumi (and for other projects afterwards), and I’d love to blog about it. But I don’t have time for everything… As soon as I have a released product which appeals to a higher percentage of developers, it will make more sense to invest in blogging as a means to gain awareness.
  • Andrey Butov took the plunge, left his day job, and went full-time into his business. The effects have already been noticeable: a new web site specializing in Wall Street programmers, a new design for his main site, etc… He was even so kind as to feature ViEmu/VS on the front page of the new site! When he released his book So You Want To Be A Wall Street Programmer a few weeks ago, I decided to buy it and read it. The reason is not that I intend to ever work on Wall Street – I am as close to 100% sure as possible that I won’t. But I enjoy his writing style, and I was curious about the development industry over there. I found the book as interesting and entertaining as expected, and I also got a good idea of how internal development in investment firms works. Since my products are and will keep being oriented towards developers, I figured the new knowledge would be useful for better targeting of my upcoming products. I’m familiar with how development works in 2 or 3 different industries, and I’m confident that I can target my products efficiently to those, but I’ve now added another one to the strategy-decisions mixing pot, an industry which can spend a lot of money, so I think I’ll be glad I spent the time to read the book. Recommended.

And a closing note with regards to blogging subjects: I’m doing some core technology development for kodumi. It’s quite probable that the blog will turn towards that subject area: basic computer science, parsers, languages, types, the nature of code and data, etc… I’ll still post about business and other issues, but I plan to blog a lot about the technology – I think it’s pretty groundbreaking and that it will be useful in many areas. So don’t be too surprised if you find a post here talking about really basic stuff (such as “what is a number”, “what is a type”, or “code and data are one and the same thing”).

But if you really, really want to read purely about setting up a small-software-company, you have to head over to Patrick McKenzie’s “MicroISV on a Shoestring” blog. Patrick is a smart guy (“smart” as in “really smart”), and he also writes very well, so his blog is the best account of going from zero to having a working business I’ve found. Recommended, too.

First anniversary

Monday, June 19th, 2006

Today is the 1st anniversary of the conception of ViEmu. That is, this very day last year, I came up with the idea of developing a vi/vim emulator for Visual Studio. I had been working for months on the kodumi text editor (back then it was just ngedit), and the last stretch had involved developing a scripting language compiler and VM, and implementing a vi/vim emulation module in this language.

It would only take me about one month and a half to actually release version 1.0. It was a really hectic month and a half, though. Actually, the short time-to-release was largely thanks to the fact that I already had the basic vi/vim emulation code – even if I had to port it from ngedit’s scripting language into C++.

ViEmu is nowadays a very solid product, having gone far beyond what I expected both in functionality and in sales performance. I’m now concentrating on preparing ViEmu 2.0, which will finally integrate the codebase back with kodumi, and provide some pretty advanced features to existing customers. I will also be ending the introductory pricing at the end of this month. I initially planned to introduce the new price at the same time as ViEmu 2.0, even if 2.0 is a free upgrade for existing customers, but the new version will be taking a bit longer than that, and I really think ViEmu is a very good value at its full price. Actually, it seems a bit absurd that ViEmu 1.0, which was a much, much more basic product, cost the same as today’s ViEmu.

Working on two projects is a challenging dynamic for me. I am a “depth-over-breadth” type of guy, and I have trouble switching focus. I’ve worked both on kodumi and on ViEmu for the past few months, and I expect to keep doing so for a long time to come. It’s even more challenging because of the different nature and status of the two products: one is for a very niche audience, with no competition, while the other is for a large audience, with plenty of competition. One is already a selling product, while the other is still in pure development towards 1.0. One has a limited potential, while for the other one I see the sky as the only limit. One needs development work, while the other needs marketing work. One of them already earns me both a long user request list and a large amount of flattering user feedback, while the other is still something that only I have used. One already helps pay the bills, while the other one only helps reduce my social life. I always have some trouble in setting the priorities, but I think I’m striking some kind of balance between improving ViEmu and advancing towards kodumi 1.0.

Fortunately, most of the codebase of both products will shortly be shared, and that will help with at least the part that is common. Also fortunately, the current customers of ViEmu are potentially also interested in kodumi, so I see the effort in improving and supporting ViEmu as an investment in establishing a good relationship with customers that can result in a business benefit.

As a summary of the ViEmu marketing week I last posted about, which of course ended up lasting about 10 days, I must say I’m happy that ViEmu sales are breaking new records during June. I can’t be sure whether this is due to the announcement of the new pricing policy, to the redesigned web page, to the latest maintenance release, to the richer trial period user experience (no nags, just better notices and a welcome screen that provides the most relevant information), or to a certain maturity of the product. But I’m sure all of them help. I’m looking forward to seeing how sales figures evolve in July, just after the effective pricing changes. I’ll let you know during the next few months what the general trend is, both after the pricing change and after 2.0 is released.

Finally, as soon as ViEmu 2.0 is ready, I will be focusing more on kodumi. Actually, part of the work for ViEmu 2.0 will indeed feed back into kodumi. Even if I announced that I may release another derived product before kodumi 1.0, the core technology in that product is needed for kodumi, and I’m pretty much an expert now in building Visual Studio extensions, so it shouldn’t take as long to prepare as ViEmu has. On the other hand, I’m really excited to start working on this part of the code, as I will finally be working in an innovative area (a vi/vim emulator as a Visual Studio add-in is an interesting product, but it can hardly be called innovative). If everything goes well, I will be posting about it on the blog as I start working on it, so it will also bring some interesting technical content here. Well, I will hopefully have the energy to post about it at the same time as I’m developing it.

Thanks everyone for your continued support during this year.

ViEmu Marketing Week

Monday, May 29th, 2006

As I mentioned in my last post, apart from ViEmu and the kodumi editor, I’m working on another product. I tend to concentrate on development most of the time, rather than on marketing ViEmu. Don’t get me wrong: not only do I think that marketing is second only to product quality as the most important part of this business, but I also enjoy marketing. The reason is that I think ViEmu cannot become a large business, because of the inherently small audience of a vi/vim emulator for Visual Studio. Thus, I think the best way to grow the business is to release a product for a larger audience, rather than trying to squeeze every extra N% of sales out of ViEmu by implementing effective sales techniques.

Anyway, there are two phenomena that push in the other direction. For one, pure coding of a product before it’s released lacks the thrill of direct feedback, so it’s very tiring. At least that’s how I experience it. And second, any improvement in ViEmu sales makes it directly to the monthly bottom line, which is a pretty good motivator.

Thus, after a solid coding Saturday, I decided to dedicate some time to marketing ViEmu better. There were two main things that were irking me:

  • I get pretty good feedback by e-mail and through the forums, but compared to the number of downloads it still looks paltry. Not having a super-easy way to get feedback (esp. criticisms!) leaves me ignorant of why those who don’t buy ViEmu don’t buy it.
  • The main page of ViEmu (also doubling as landing page from Google adwords) was a bit dull. Too much text. Informative for those interested, but I don’t think it really “grabbed” visitors.

So, I dedicated all of Sunday to redesigning the page, adding functionality so that visitors can send feedback from a simple form there, and making it more “catchy”. The redesign ended up centered on an animated demo of Visual Studio running ViEmu. You can see the result here:

New main page

And, just for reference, the old one is still here:

Old main page

I’ll let you know how it turns out to work. I am planning to implement some other marketing “tricks” during this week, as well as releasing ViEmu 1.4.5, and then I’ll go back to more coding and support, coding and support, coding and support…

Rough strategy sketch

Wednesday, March 22nd, 2006

I think I promised a general strategy post & a status report, some time ago. Here goes.

Development strategy

I am currently splitting my efforts between two development fronts. One of them, ViEmu, has been available for almost 8 months now. It has improved a lot, and sales have been steadily climbing. Although not a stellar success, it’s working well beyond my realistic forecasts (though not beyond my wildest dreams), and I’m really happy that I decided to do it.

The second one, code-named NGEDIT, has been in development for a bit over a year, and it’s still not ready for release. In the time I’ve been developing it, both my belief in the concept, and my disrespect for my own time estimations, have grown a lot. I would be very happy to release 1.0 around July or August, one year after the release of ViEmu, but I know it’s still optimistic. And that’s after I’ve decided to cut out most of the stuff for version 1.0!

Of course, apart from these clear-cut fronts, and not including my day job, there are other fronts I have to attend. Customer support, for example, or this blog, for that matter.

I’ll try to summarize, in a general sense, what my current plans for the next few months are: what the main goals are, and how I’m planning to achieve them.

The #1 goal, as you can guess, is to release NGEDIT version 1.0. This is a bit trickier than it sounds. The act of releasing it is, in a general sense, more important than the exact functionality it brings. I have come to this conclusion after over a year in development, and after the experience of ViEmu. Emotionally, it’s much better to be working on improving an existing product than to be working on a product towards its first release, with no users or customers. As long as you are not too impatient to get a lot of sales, having actual users & feedback is a big boost for motivation. Having a few sales helps, as well. And, as long as the product is good and there is a need, sales only get higher as you improve the product.

In order to get this process working, I’ve cut many planned features from 1.0, so as to release it before long. You might ask: why not release it in its current state?

A common, but not too informative, answer would be that it’s still too basic, or unusable. Well, that’s not completely true, as I use it myself. A better answer involves some thought about the market I’m getting into. The text editor market is pretty saturated, and most products out there have many man-years of effort built in. There is at least a general perception of things a text editor must have, and I think releasing it without these features would be too much of a stretch. Rest assured, I’ve carefully removed everything which isn’t essential for 1.0. As with ViEmu 1.0, the first release will be pretty basic, but it will hopefully be a better tool for at least some people out there, and that should trigger the initial dynamic of usage-feedback-improvement.

Apart from these essential elements, NGEDIT 1.0 will also sport some interesting things that are well outside the minimum requirements list. The very complete vi/vim emulation, for one, or the native management of text in any format (no conversion on load/save). There are a few more, but these are probably the most interesting to talk about. There are two main forces that have resulted in this uncommon feature set. The first is that I’m building NGEDIT 1.0 as the core framework for the really advanced features, which have some unique requirements. And the second is that I’m building it to become my favorite editor first, and only then a commercial product. This results in the need for powerful vi/vim emulation, which is bound not to have much relevance as a commercial feature.

So, we could say the road to NGEDIT 1.0 is drawn by three guiding principles, listed in increasing priority:

  • III: Build a good foundation for the future versions of the editor, if not fully realized, at least following a scalable design
  • II: Release the minimum product that makes sense
  • I: Build my favorite editor

This is not a list of principles I try to adhere to; it’s more a recollection of the kind of decisions I’ve found myself taking on intuitive grounds. I’ve seen that I will trade the best design for some functionality in order to be closer to release, and I’ve found that I’ve traded away every sensible business principle by deciding to implement some very complete (and costly) vi/vim emulation. The fact that my sticking to vi/vim emulation has resulted in ViEmu, which is a nice product, (kind of) validates the principles. Actually, I think it validates them because I find myself enjoying the effort, which helps in sustaining the long-term effort, and the business is gaining momentum. Apart from this, the ViEmu experience has been an incredible sandbox in which to learn, and the lessons learned will play a nice role towards the actual release of NGEDIT – for example, on the Google SEO front, and also on the adwords & click-fraud front.

In a general strategic view, I’m meshing my efforts on NGEDIT 1.0 with steadily improving ViEmu. Even if ViEmu doesn’t have the business potential of NGEDIT, I think that making all the customers of ViEmu happy only helps with the later stages of building the business. One thing to which I haven’t paid too much attention is marketing ViEmu. I think I could easily improve the sales performance of ViEmu with some effort, but I also think this effort falls on the wrong side of the line marked “makes sense over working on NGEDIT”. So far, a bit of Google-tweaking, a bit of adwords, a bit of word-of-mouth, and a deserted market have been successful in building up sales.

This is very different from what I think I should do if ViEmu were the product on which I wanted to base my business. I would have to be working 100% in promoting it while steadily improving it. But, frankly, I don’t think ViEmu would be a sensible sole-business product. Not everyone is dying for vi/vim emulation.

So, what do all the above principles result in, as practical acting? The first point is that, for the past few months, I’ve been (a) improving ViEmu little by little and releasing new versions, (b) designing and working on the core architecture of NGEDIT, and (c) crossporting ViEmu’s vi/vim core to NGEDIT. The reason for the third point was that, upon using NGEDIT myself, I was sorely missing good vi/vim functionality. It already had some nice vi/vim emulation, written in NGEDIT’s own scripting language, which was the seed for ViEmu, but ViEmu had grown way beyond this seed. Thus, principle (I) kicked in, and I started to crossport ViEmu’s vi/vim engine.

Why do I say crossport? The reason is that I have been rewriting the core in such a way that it can be used both within NGEDIT and within ViEmu. This has had some major requirements on the design of ngvi, as I like to call the new core, and it’s a reason it’s taken some serious time to develop. This effort has some nice side effects:

  • I now have a super-flexible vi/vim core that I can integrate in other products, or use to develop vi/vim plugins for other environments (ah, if only solving interaction problems with other plugins weren’t the worst part!).
  • I can now put in work that benefits both products.
  • I’ll talk about it later, but I have come up with some neat new programming tricks due to this effort. The payoff for this will come later on, but it’s there anyway.

The new core is almost finished, with only ex command line emulation left to be crossported. For testing, this core is being used in NGEDIT. That way, ViEmu can advance as a separate branch. As soon as ngvi is finished, I will start implementing ViEmu 2.0 based on ngvi. This new core already brings some functionality that ViEmu is lacking, and I will be just plain happy that most of ViEmu is now officially part of NGEDIT.

And after this, I have a couple of major features in NGEDIT that need to be implemented, and a gazillion minor loose ends. If you are an experienced developer, you’ll know it’s those loose ends that put the July/August release date in danger.

Names, names, names

As I mentioned recently, NGEDIT will not be the name of the final product. I already have the candidate for the name, and there’s only one thing pending before it becomes official: I need to check it with a Japanese person. I haven’t been very successful through asking here on the blog, or through asking the Japanese customers of ViEmu. Understandably, I haven’t insisted too much on my Japanese customers – they are customers after all!

I don’t want to reveal the name just yet, as I don’t want even more confusion if it ends up not being the final name. I would also like to have at least a placeholder page ready when I reveal the name.

Apart from this name change, I also intend to do something with the blog’s name. I plan to blog more and more in the future, as the business stops critically requiring all my energy. I also plan to cover other areas: programming languages, software design, A.I., O.S.S., operating systems… I’d even like to write on things like economics or the psychology of programming! I think a more general name would be a good idea.

Given that the new editor will have its own new name, that I plan to move ViEmu to its own domain (, already up with a simple page), and that the blog needs another name, the current domain will very likely end up pretty empty.

All that pagerank accumulated for nothing… sigh! In any case, now should be the best moment to do the deep reforms.

I’ll let you know as these names are ready for general exposure.

The blog

If you have been reading long enough, you will probably have noticed that I post less often than I used to. The main reason is that development itself already drains most of my available energy. There is not much I can do about that, except wait for days when I have more energy, and wait for the moment when NGEDIT is finally released. I will feel much better when NGEDIT is out there, and I think I’ll be able to concentrate better on other things. Having put in so much effort so far, and not having it available for download & for sale, puts a lot of pressure on me.

But there are also other reasons. For one, I have many interesting topics I’d like to cover, but which I don’t want to cover just yet. I prefer to wait until I have a working product, before bringing up some of these areas. Should be better business-wise.

This ends up meaning that I don’t want to write about the stuff I want to write about. Ahem.

Anyway, I have come up with an area I’d like to cover with a series of posts. It’s about the techniques I have been using for the development of ngvi, which could be described as the application of dynamic & functional programming to C++. Part of the techniques will be applicable to C++ only, but many others apply to general imperative/OO programming. Hopefully it will be interesting to (some of) you.

Focusing my development effort

Thursday, November 24th, 2005

Long-time readers of my blog already know about my tendency to get carried away with stuff. I’ve gotten carried away with something in the past, only to have to retract it the following day. The second post mostly deals with this tendency to get carried away. To sum up: I don’t think the lesson I need to learn is “refrain more”, as that takes away a lot of the energy as well – “learn to acknowledge my mistakes happily and as early as possible” seems a much more valuable lesson for me. And it applies in many other fields.

I’ve also talked about my inability to write short blog posts, and have failed almost systematically at writing them in the past.

Anyway, to get to the point, this (of course) also applies to my dedication to development. I tend to drift off too easily, especially when the goal involves developing a complex piece of software like NGEDIT. Although I’ve posted in the past about my strategy in the development of NGEDIT, I find that I have to revisit that topic really often – mostly in the messy and hyperactive context of my own thoughts, but I thought I’d post about it as it may also apply to other fellow developer-entrepreneurs.

I recently posted about how I had found the best way to focus my development efforts on NGEDIT. To sum up: try to use it, and implement the features as their need becomes evident (I’m fortunate enough that I am 100% a future user of my own product). As the first point coming out of that, I found myself working on getting NGEDIT to open a file from the command line. That was weeks ago, and I have only almost implemented it. How come? It should be simple enough to implement! (At least, given that opening a file through the file-open dialog was already functional.)

Well, the thing is that my tendency to drift off, my ambition, and my yearning for beautiful code kicked in. Instead of a simple solution, I found myself implementing the “ultimate” command line (of course). It’s already pretty much fully architected, and about half-working (although opening files from the command line ended up being just a small part of the available functionality). As I did this, I also started refactoring the part of the code that handles file loading to use my C++ string class that doesn’t suck, which is great, but it’s quite an effort by itself. Meanwhile, I found myself whining that I didn’t want to have all that code written against the non-portable Windows API (as a shortcut I took before summer, NGEDIT code uses the Windows API directly in way too many places), so I started implementing an OS-independence layer (I know, I know, these things are better done from day 1, but you sometimes have to take shortcuts and that was one of many cases). Of course, with the OS-independence layer using said generic string class for its interface. And establishing a super-flexible application framework for NGEDIT, whose current structure was a bit cluttered for my taste. And sure, I started trying to establish the ultimate error-handling policy, which led me to posting about and researching C++ exceptions and some other fundamental problems of computing…

If that’s not getting carried away, then I don’t know what is!

Today’s conclusion, after going out for a coffee and a walk in the cool winter air, is that I should refrain from tackling fundamental problems of computing if I am to have an NGEDIT beta in a few months’ time. The code of NGEDIT 1.0 is bound to have some ugliness to it, and I need to learn to live happily with that. Even if I will have to rewrite some code afterwards, business-wise it doesn’t make sense to have the greatest framework, the most beautiful code, and no product to offer!

In any case, I hope I have improved my ShortPostRank score, even if definitely not among world-class short-post bloggers, and you can see I’ve had some fun with self-linking. Something nice to do after starting beta testing for ViEmu 1.4, which will probably be out later this week.

The lie of C++ exceptions

Thursday, November 17th, 2005

As part of the ongoing work on NGEDIT, I’m now establishing the error management policy. The same way that I’m refactoring the existing code to use my new encoding-independent string management classes, I’m also refactoring it to a more formal error handling policy. Of course, I’m designing along the way.

Probably my most solid program (or, the one in which I felt most confident) was part of a software system I developed for a distribution company about 9 years ago. The system allowed salesmen to connect back to the company headquarters via modem (the internet wasn’t everywhere back then!) and pass on customers’ orders every evening. I developed both the DOS program that ran on their laptops, and the server that ran on AIX. I developed the whole system in C++ – gcc on AIX, and I can’t remember what compiler on the DOS side. Lots of portable classes to manage things on both sides. As a goodie, I threw in a little e-mail system to communicate between them and with HQ, which was out of spec – and I still managed to stay on schedule! It was a once-and-only-once experience, as almost all my other projects have suffered delays – but the project I had done just before was so badly underscheduled and underbudgeted that I spent weeks nailing down the specs so as not to fall into the same trap.

The part I felt was most important to keep solid was the server part – salesmen could always redial or retry, as it was an interactive process. The server part was composed of a daemon that served incoming calls on a serial port, and a batch process that was configured to run periodically and export the received files to some internal database system.

How did I do the error management? I thought through every single line in the process, and provided meaningful behavior. Not based on exceptions, mind you. Typical processing would involve sending out a warning to a log file, cleaning up whatever was left (which required its own thinking through), and returning to a well-known state (which was the part that required the most thinking through). I did this for e-v-e-r-y s-i-n-g-l-e high-level statement in the code. This meant: opening a file, reading, writing, closing a file (everyone typically checks file opens, but in such a case I felt a failure in closing a file was important to handle), memory management, all access to the modem, etc…

C++ brought exceptions. I’m not 100% sure yet, but I think exceptions are another lie of C++ (I believe it has many lies which I haven’t found documented anywhere). They promise the ability to handle errors with much less effort, and they also promise to let you build rock-solid programs.

The deal is that exceptions are just a mechanism, and this mechanism merely allows you to implement a sensible error-handling policy. You need a rock-solid policy if you really want failproof behavior, and I haven’t seen many examples of such policies. What’s worse, I haven’t yet been able to figure out exactly what one should look like.

Furthermore, exceptions have a runtime cost, but the toughest point is that they force you to write your code in a certain way. All your code has to be written such that, if the stack is unwound, things automatically get back to a well-known state. This means you need to use the RAII technique: Resource-Acquisition-Is-Initialization. RAII ensures that the resources you have acquired are relinquished, so that the unwinding doesn’t leak them. But that is only part of returning to a well-known state! If you are manipulating a complex data structure, it’s quite probable that you will need to allocate several chunks of memory, and any one of those allocations may fail. It can be argued that you can allocate all memory in advance and only act once all that memory is actually available – but then, this forces your design around it: either you concentrate resource acquisition in a single place for each complex operation, or you design every single action in two phases – a first one to perform all the necessary resource acquisition, and a second one to actually perform the operation.
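For instance, the two-phase shape might look like this when appending a couple of strings to a vector (a minimal sketch of the idea using std::vector; the function name is my own invention):

```cpp
#include <string>
#include <utility>
#include <vector>

// Phase 1 acquires everything that can fail off to the side; phase 2
// performs the actual operation with steps that cannot throw, so an
// exception anywhere leaves 'v' exactly as it was.
void append_two(std::vector<std::string> &v,
                const std::string &a, const std::string &b)
{
    // Phase 1: all allocation happens here; 'v' is untouched on failure.
    std::vector<std::string> staged;
    staged.push_back(a);                  // may throw
    staged.push_back(b);                  // may throw
    v.reserve(v.size() + staged.size());  // may throw, but leaves 'v' intact

    // Phase 2: capacity is guaranteed and string moves don't throw.
    for (std::size_t i = 0; i < staged.size(); ++i)
        v.push_back(std::move(staged[i]));
}
```

Note how even this trivial operation had to be bent into the acquire-first, act-second shape.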

This reminds me of something… yeah, it is similar to what transaction-based databases do. Only elevated to the Nth degree, as a database has quite a regular structure, and your code usually doesn’t. There are collections, collections within collections, external resources accessed through different APIs, caches of other data structures, etc…

So, I think that in order to implement a nice exception-based policy, you have to design two-phase access to everything – either that, or have an undo operation available. And you’d better wrap that up as a tentative resource acquisition – which requires a new class with its own name, scope, declaration, etc…

Not to mention interaction between threads, which takes all of this to a whole new level…

For an exception-based error-handling policy, I don’t think it is good design to have and use a simple “void Add()” method to add something to a collection. Why? Because if this operation is part of some larger operation, something else may fail and the addition will have to be undone. This means either calling a “Remove()” method, which turns into explicit error management, or using a “TTentativeAdder” class wrapped around it, so that it can be disguised as a RAII operation. This means any collection should have a “TTentativeAdder” (or, more in line with std C++’s naming conventions, a “tentative_adder”).
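A minimal sketch of what I mean (improvised here for illustration – no container I know of actually ships with this):

```cpp
#include <vector>

// A RAII wrapper that performs an Add on construction and undoes it on
// destruction unless commit() has been called -- so an exception thrown
// later in the enclosing scope automatically rolls the addition back.
template <class T>
class tentative_adder
{
public:
    tentative_adder(std::vector<T> &v, const T &item)
        : m_v(v), m_committed(false)
    {
        m_v.push_back(item);    // may throw; then there is nothing to undo
    }
    ~tentative_adder()
    {
        if (!m_committed)
            m_v.pop_back();     // roll back the tentative addition
    }
    void commit() { m_committed = true; }
private:
    std::vector<T> &m_v;
    bool m_committed;
};
```

If the scope exits normally, you call commit() and the element stays; if the stack unwinds, the destructor removes it.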

I don’t see STL containers having something like that. They seem to be exception-aware because they throw when something fails, but that’s the easy part. I would really like to see a failproof system built on top of C++ exceptions.

Code to add something to a container among other things often looks like this:

void function(void)
{
  //... do potentially failing stuff with RAII techniques ...

  m_vector_whatever.push_back(item);

  // ... do other potentially failing stuff with more RAII techniques ...
}

At first, I thought it should actually look like this:

void function(void)
{
  //... do potentially failing stuff with RAII techniques ...

  std::vector<item>::tentative_adder add_op(m_vector_whatever, item);

  // ... do other potentially failing stuff with more RAII techniques ...
}


But after thinking a bit about this, this wouldn’t work either. The function calling this one may throw after it returns, so all the committing should be delayed to a controllable final stage. So we would need a system-wide “commit” policy and a way to interact with it…
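Such a policy might look vaguely like this (pure speculation on my part, sketched with closures to keep it short – the class and method names are invented):

```cpp
#include <functional>
#include <vector>

// A hypothetical system-wide commit policy: each operation registers an
// undo action with the current transaction; if the transaction is
// destroyed without commit(), everything is undone in reverse order.
class transaction
{
public:
    void on_rollback(std::function<void()> undo)
    {
        m_undo.push_back(std::move(undo));
    }
    void commit() { m_undo.clear(); }
    ~transaction()
    {
        for (auto it = m_undo.rbegin(); it != m_undo.rend(); ++it)
            (*it)();            // roll back, most recent action first
    }
private:
    std::vector<std::function<void()>> m_undo;
};
```

The transaction would live at whatever the “end” is, and everything below it would only perform tentative operations.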

The other option I see is to split everything into very well defined chunks that affect only controlled areas of the program’s data, such that each one can be done tentatively and safely… which, I think, requires thinking everything through in as much detail as without exceptions.

The only accesses which can be done normally are those guaranteed to only touch local objects, as those will be destroyed if any exception is thrown (or, if we catch the exception, we can explicitly handle the situation).

And all this is apart from how difficult it is to spot exception-correct code. Anyway, if everything has to be done transaction-like, it should at least be easier to spot: suddenly, all code would consist of a sequence of tentatively-performed object constructions, plus a policy to commit everything at the “end”, whatever the “end” is in a given program.

I may be missing something, and there is some really good way to write failproof systems based on exceptions – but, to date, I haven’t seen a single example.

I’ll keep trying to think up a good system-wide error-handling policy based on exceptions, but for now I’ll keep my explicit management – at least I can write code without retrofitting everything for transaction-like processing, and I can explicitly return things to a well-known safe state.

This was my first attempt at a shorter blog entry – and I think I can safely say I failed miserably!


Monday, November 7th, 2005

See, I had an enlightening experience yesterday. I now know exactly what the best roadmap to follow with NGEDIT is.

I’ve just finished ViEmu 1.3 with the powerful regular expressions and ex command line support. I haven’t been able to release it yet, as Microsoft has changed the way you get the PLK (“package load key”) which you have to include in every new version of your product. It used to be automated: you filled in the online form and received the PLK in an e-mail 30 seconds afterwards. But with the release of Visual Studio 2005, they now want to actually approve your product before issuing the key. This means I’m stuck with the code and the whole web info revamp at home. Ah well, I’m a bit angry about that, but then you’re at their mercy – and the important lesson is: I could have filled out forms for ViEmu 1.3, 1.4, 1.5, 1.6, … and even 2.0 last month, and I would have the keys nicely stored on my hard disk. Something to watch out for in the future: anything you use which is a web service is not under your control – remove that dependency if you can.

Anyway, going back to the point: now that ViEmu 1.3 is ready, I am putting ViEmu development in a secondary position and getting my focus back on NGEDIT. ViEmu is now at a more than acceptable level, and although I’ll keep improving it, I’d better focus on NGEDIT, which is the product with the most potential.

I’ve already talked in the past about how to tackle the development of NGEDIT. I give a lot of thought to how I invest my time – not in order to avoid mistakes, as I make many of them and that’s not really a problem. But I like to think and rethink what my goals are, what the best way to reach them is, what the tasks are, and how (and in what order) it makes most sense to work. Once I developed ViEmu, it was quite clear to me that I had to keep putting in a lot of effort until the product reached a “serious” level. Slowing down before that wouldn’t make any sense: even if the vi-lovers audience is a small one, the only way to actually discover its size is to get a decent product out there. A so-so product would leave me thinking, if sales weren’t big, that the problem was the product’s quality.

And now that I’m focusing on NGEDIT, deciding exactly how to work on it is no easy task. I’ve talked about this in the past too: I had already decided that I would focus on an NGEDIT 1.0 which had some of the compelling innovative parts – it makes no sense to release YATE (“yet another text editor”) and hope it will bring a decently sized audience. I actually started designing and coding some of the most interesting parts of NGEDIT, even though many of the core parts are still incomplete.

But now that I’ve started opening the NGEDIT workspace from Visual Studio again, the number of things that I can do (and that I have to do at some point) is mind-boggling. Just to list a very few:

  • Make the menus configurable… they actually are, but the UI is not there. Of course, this is one of a hundred little things that need to be done.
  • Integrate it well with Windows: registry entries, shell extension, etc…
  • Let the user configure the editor window colors – right now all the UI elements can be configured, with the exception of the actual text-editing window itself (funny, as that’s the most important part).
  • Finally finish implementing the DBCS support for Japanese/etc… – it’s designed in, but it’s not implemented. Either that, or properly remove support for it for 1.0.
  • Now that ViEmu is so complete… port the emulation functionality back to NGEDIT. The vi/vim emulation in NGEDIT is written in my scripting language, it’s already 4 months old, and really basic compared to what ViEmu does. While I’m at it, properly structure it so that the vi/vim emulation core is shared code between ViEmu and NGEDIT – once again template-based, text-encoding independent, and supporting integration both with mammoth and invasive environments such as VS (where the vi/vim emu core is just a “slave”), and with a friendlier environment like NGEDIT, which is already designed to host custom editor emulation.
  • Clean up many things in the code base – you know, after you’ve been months away from the code, you come back and see many things which are plain malformed. Sure, you also kind of know about them when you are actually coding, but you have to get the job done and you do take shortcuts. I believe in shortcuts, it’s the way to actually advance, but then you want to properly pave those shortcuts into proper roads.
  • Really study all that maddeningly beautiful Mac software, understand what the heck makes it so incredibly and undeniably beautiful, and try to bring some of that beauty to NGEDIT.
  • Actually work on the most innovative aspects of NGEDIT – the parts with which I hope to create an outstanding product.
  • etc…

You can see… this is just a sample. Not to mention the many things that I have in my todo list, in my various “NGEDIT 1.0 FEATURE LIST” lists, in my handwritten sketches, etc…

It can be quite overwhelming… where do I start? And worse than that, you need motivation to actually be productive. See, it’s one thing to be determined to work on NGEDIT. It’s another to give myself something concrete to focus on, something I feel comfortable with strategy-wise, so that I will put all my energy into it.

Not having a single idea on how to tackle a problem is blocking. Having too many ideas or tasks can be as blocking.

This is a common problem when you are developing – sometimes, the only way out of such a blocking crossroads is to start from the top of the list, even if it is alphabetically sorted, and work on the items one by one. I have found that this works best for me when I am about to finish a release of a product or a development cycle. When, after digging for weeks or months, you see the light at the end of the tunnel but there is still a lot of digging to be done, a similar phenomenon happens. What I usually do is visualize the image of the finished product in my mind, and then just work on the items in sequential order (not sequential by priority – sequential by whatever random order they ended up listed in on the todo list).

But I couldn’t see myself doing this for NGEDIT now – it’s still too far from any end of any tunnel.

I decided to just start browsing around the source code, thinking about code-architecture issues (I have never stopped thinking about many NGEDIT product details even while I was working 100% on ViEmu), and just spending my time with NGEDIT until the mud would clear up.

And it’s happened – I’ve seen exactly the right way to approach it.

What is it? I just need to start actually using NGEDIT. I’m fortunate enough to use a text editor daily, for many hours and for many tasks, and I can just spend that time in NGEDIT – and implement the stuff I need along the way! It’s clear: I just need to make it the best editor for me, and the rest will follow naturally. No need to prioritize features, no need to do heavy code spelunking – just start using it and implement things as I go.

What have been the first tasks to come out of this? Two quite obvious ones. First, I needed to fix a bug in the BOM autodetection code, as I tried to use NGEDIT to edit some registry-export files, and I noticed the bug while using it. And second (and the reason I was working on registry exports), I need to implement associating file extensions with NGEDIT so that I can just double-click on files! And that requires implementing command-line parsing in NGEDIT (which, of course, was buried somewhere around #53 in the todo list!). Why? Because if I am to use NGEDIT for all my tasks, I need to open files efficiently now.

It’s incredible how this path is already starting to work – I’m writing this blog post in NGEDIT, and the most important part is that I already feel confident. Confident that I’m on the right track for the earliest possible release date. And that lets me relax and focus on actual work.

Long time no see

Friday, October 28th, 2005

I’ve been swamped with work in the past few days, so I didn’t have any time to blog. But just yesterday I made available the first alpha version of ViEmu 1.3, which provides a single star feature: regular expressions and ex command line emulation. This means my regular expression engine is working nicely (on top of my encoding-independent C++ string template class!). And that ViEmu is starting to bring the full power of vi/vim to Visual Studio. I hope to iron out the remaining bugs and release it to the public around next week.

I think I will have more time to blog starting next week, and I have a lineup of stuff I want to blog about. Part of it is thanks to Baruch Even, who’s started the very nice Planet uISV blog aggregator (be sure to check it out if you’re interested in other small software shops and start-ups!).

Visual Studio 2005 has just been released (finally!) and I’m downloading it through MSDN subscriptions, but I already have news from some customers that the build I provide for VS2005-beta-2 seems to work nicely with it. I will finally prepare a version of ViEmu that installs dually to both VS.NET 2003 and VS2005. Some customers were already using ViEmu with VS2005 beta versions quite happily, but I had some “false positives” on ViEmu problems where VS2005 was actually the culprit – I hope everything of importance is fixed now and I’ll be able to release ViEmu for VS2005 “officially”.

To finish this post, I’ll extract the information on what ViEmu 1.3 brings – be warned, it is for heavy vi users and regex experts, so skip it as soon as you have a doubt whether you’re interested in it.

Summary of what's contained in ViEmu 1.3-a-1:

  - Regular expression support for '/' and '?' searches
  - Command line editing, with command history
    (use the cursor arrows, BACKSPACE and DEL)
  - '< , '> marks for the last active visual selection
  - gv normal mode command to restore the last visual selection
  - The following ex commands:
    - :set - basic implementation allowing [no]ig[norecase]/[no]ic,
      [no]sm[artcase]/[no]sc, and [no]ma[gic]
    - :d   - :[range]d[elete] [x] [count] to delete (x is the register)
    - :y   - :[range]y[ank] [x] [count] to yank (x is the register)
    - :j   - :[range]j[oin][!] to join the lines in the range,
               or default to the given line (or cursor line) and the next
    - :pu  - :[range]pu[t][!] [dest] to paste after (!=before) the given
               address
    - :co  - :[range]co[py] [dest] to copy the lines in range to the
               destination address (:t is a synonym for this)
    - :m   - :[range]m[ove] [dest] to move the lines in range to the
               destination address
    - :p   - :[range]p[rint] [count] to print the lines (send them to the
               output window) (:P is a synonym for this)
    - :nu  - :[range]nu[mber] [count] to print the lines (send them to the
               output window), w/line number (:# is a synonym for this)
    - :s   - :[range]s[ubstitute]/re/sub/[g] to substitute matches for the
               given regex with sub (do not give 'g' for only 1st match on
               each line)
    - :g   - :[range]g[lobal][!]/re/cmd to run ':cmd' on all lines matching
               the given regex (! = *not* matching)
    - :v   - :[range]v[global]/re/cmd to run ':cmd' on all lines *not*
               matching the given regex

You can now use :g/^/m0 to invert the file, :g/^$/d to remove
all empty lines, :%s/\s\+$// to remove all trailing whitespace,
and use many of your favorite vi/vim tricks.

In implementing the regular expression engine, I've gone through the vim
documentation and implemented nearly everything there. There are a few things
not implemented yet - I plan to add them later on. This is a summary of the
implemented features (for now, you can look at vim's documentation for
the details):
 - Regular matching characters
 - '.' for any character
 - Sets (full vim syntax): [abc], [^1-9a-z], [ab[:digit:]], ...
     (including '\_[' to include newline)
 - Standard repetitions: * for 0-or-more, \+ for 1-or-more, \= or \? for
     0 or 1
 - Counted repetitions: {1,2} for 1-to-2 repetitions, {1,} for 1-to-any,
     {,5} for 5 or less, {-1,} for non-greedy versions
 - Branches: foo\|bar matches either "foo" or "bar"
 - Concats: foobar\&.. matches the first two characters where 'foobar'
     also matches
 - Subexpressions: \( and \) to delimit them ('\%(' to make them
     non-capturing)
 - ^ and $ for start- and end-of-line. (See the note on the limitation
     below.)
 - \_^ and \_$ for s-o-l and eol anywhere in the pattern
 - \_. for any character including newline
 - \zs and \ze to mark the match boundaries
 - \< and \> for beg and end of word
 - Character classes: \s for space, \d for digit, \S for non-space, etc...
     and '\_x' for the '\x' class plus newline (all of them work)
 - Special chars: \n for newline, \e, \t, \r, \b
 - \1..\9 repeat matches
 - Regex control: \c to force ignore case, \C to force check case, \m for
     magic, \M for nomagic, \v for verymagic, \V for verynomagic

Full [very][no]magic is supported.

These are the vim regular expression features not yet implemented by ViEmu:

 - ~ to match last substitute string
 - \@>, \@=, \@!, \@<= and \@<! zero-width and dependent matches/non-matches
 - \%^ (beg-of-file), \%$ (end-of-file), \%# (cursor-pos), \%23l
     (specific line), \%23c (col) and \%23v (vcol)
 - optional tail "wh\%[atever]"
 - *NO PROTECTION* for repetitions of possibly zero-width matches, be
     careful! \zs* or \(a*\)* MAY HANG VIEMU!!!
 - ^ and $ are only detected as special at the very beginning and very end
     of the regular expression string; use \_^ or \_$ elsewhere
 - \Z (ignore differences in unicode combining chars)

Other limitations:

 - The :s replacement string does not yet understand the full vi/vim options,
     and cannot insert multi-line text. Only & and \1..\9 are recognized as
     special, and if any of them matched a multi-line range, only the regular
     characters will be inserted. You can't insert new line breaks by using
     \r either.

 - After-regular-expression displacement strings are not implemented
     ('/abc/+1' to go to the line after the match).

 - Ex-ranges accept everything (%, *, ., $, marks, searches) but not
     references to previous searches (\/, \?, \&) or +n/-n arithmetics.

 - The command line editing at the status bar looks a bit crude, with
     that improvised cursor, but it should make the ex emulation very
     usable.

 - :p and :# output is sent to a "ViEmu" pane on the output window.

Beautiful regular expressions code

Sunday, October 9th, 2005

My regular expression engine is starting to work. I can already compile and match basic regular expressions, and the framework for the most complex features is already there, even if not completely implemented yet. The first use of the engine will be for ViEmu (I might go straight from 1.2 to 1.5, as I feel regex and ex command line support take it to the next level, and Firefox is already making a 1.0 → 1.5 jump, so why should I do any less).

It’s probably the piece of code I’ve been happiest with in a long time. The reason? Not that it is complex and was hard to write (which it was) – several other pieces have been more complex, and many others have taken a lot more work. The actual reason is that it is free of tight bindings to anything else: it uses the generic string template framework I talked about, and so it can handle any variation of string encoding, format, storage, access mechanism… whatever – without losing an ounce of efficiency compared to code that uses straight ‘char’ or ‘wchar_t’.

For one, I feel I will never need to write another regular expression engine, and that is a good feeling.

But, most important, when I look at the code, I get a feeling of beauty. And it is a feeling that I miss most of the time I write code. I don’t know how you feel about it, but it actually hurts me when I write code that is too tightly bound to some specific circumstance. Reusability is great, but the feeling of being right is a separate issue and that’s what makes me happiest.

I’m going to try to post the declaration here so that you can have a look at it. Let’s see if the beauty survives the adjustment to the blog’s width:

// Regular expressions support for NGEDIT and ViEmu, templatized to
//  support text and input in any encoding (ViEmu only uses wchar_t)

#ifndef _NGREGEXP_H_
#define _NGREGEXP_H_

#include "ngbase.h"
#include "vector.h"

namespace nglib
{

template<class TREADSTR>
class TRegExp
{
  public:
    typedef typename TREADSTR::TREF           TSREF;
    typedef typename TREADSTR::TREF::iterator TSITER;
    typedef typename TREADSTR::TPOS           TSPOS;
    typedef typename TREADSTR::TCACHE         TSCACHE;
    typedef typename TREADSTR::TCHAR          TSCHAR;

    enum ECompileError
    {
      // ... (error codes not shown here)
    };

    struct TCompileError
    {
      ECompileError type;
      TSPOS         pos;
    };

    struct TMatchResult
    {
      TSPOS posStart, posAfterEnd;

      struct TSubMatch
      {
        TSPOS posStart, posAfterEnd;
      };

      TVector<TSubMatch> vSubmatches;
    };

    TRegExp() { m_ok = false; }
    //~TRegExp() { }

    TRet Compile (
      TSREF rsRegExp,
      TCompileError *pCompileError = NULL,
      TSCHAR chTerm = TSCHAR::zero
    );

    // Methods for simple string matching
    bool TryToMatch (
      TSREF rsInput,
      TMatchResult *pMatchResult,
      bool bBOL = true,
      bool bEOL = true
    );

    bool Contains (
      TSREF rsInput,
      TMatchResult *pMatchResult,
      bool bBOL = true,
      bool bEOL = true
    );

    // Methods for possibly multi-line regexps
    template <class TTEXTBUF>
    bool TryToMatch (
      TTEXTBUF txtBuf,
      unsigned uStartLine,
      TSPOS pos,
      TMatchResult *pMatchResult
    ); // At a certain line pos

    template <class TTEXTBUF>
    bool Contains (
      TTEXTBUF txtBuf,
      unsigned uStartLine,
      TSPOS pos,
      TMatchResult *pMatchResult
    ); // Starting anywhere in that line

  private:
    enum ENodeType
    {
      NT_MANDATORY_JUMP,      // + 2 bytes signed offset
      NT_OPTIONAL_JUMP,       // + 2 bytes signed offset
      NT_OPTIONAL_JUMP_PREF,  // + 2 bytes signed offset
            // jump with preference (to control greediness)

      NT_MATCH,               // + 1 byte match_type + details
      NT_OPEN_SUBEXPR,        // + 1 byte subexpr #
      NT_CLOSE_SUBEXPR,       // + 1 byte subexpr #

      NT_SAVE_IPOS,    // + 1 byte pos-reg where to save
      NT_JUMPTO_IPOS,  // + 1 byte temp to read
      NT_SET_TEMP,     // + 1 byte temp where to save
      NT_INC_TEMP      // + 1 byte temp to read
    };

    enum EMatchType
    {
      MT_CHAR,          // any literal char (+ ENC_CHAR)
      MT_DOT,           // .
      MT_BOL,           // ^
      MT_EOL,           // $
      MT_NEXTLINE,      // \n
      MT_SET,           // [abc] or [a-zA-Z]
                        //+ (byte)num_chars
                        //+ (byte)num_ranges
                        //+ nc * ENC_CHAR
                        //+ 2 * nr * ENC_CHAR
      MT_NEGSET,        // [^abc] or [^a-zA-Z]  same
      MT_WHITESPACE,    // \s
      MT_WORD,          // \w
      MT_NONWORD        // \W
    };

    // Compiler and matcher classes, not accessible externally
    class TCompiler;
    class TMatcher;

    bool          m_ok;
    TVector<byte> m_vbCompiledExpr;
};

} // end namespace

#include "ngregexp.inl"

#endif // _NGREGEXP_H_


Main things: the class is in a namespace that I use for all my common code, you can see how the interface uses the string’s specific types, the TCompiler and TMatcher classes are just declared and are unnecessary for the user of the class, and the only declarations – apart from the main interface – are for the node types, which need to be accessible both by the TCompiler and the TMatcher.

Even though C++ templates force you to include the definition of the members at the point of use, I usually separate template-based code into a header file with the declarations and an “.inl” (inline) file with the definitions, which helps keep the code sane.
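As a made-up minimal example of the split (the file names and the class are invented for illustration; both parts shown together here so the snippet is self-contained):

```cpp
// --- tbox.h: declarations only ---
template <class T>
class TBox
{
  public:
    explicit TBox(T t);
    T Get() const;
  private:
    T m_t;
};
// The real header would end with:  #include "tbox.inl"

// --- tbox.inl: the member definitions, kept out of the header proper ---
template <class T>
TBox<T>::TBox(T t) : m_t(t) { }

template <class T>
T TBox<T>::Get() const { return m_t; }
```

The compiler still sees everything at the point of use, but readers of the .h get a clean interface.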

The only actual types are bool (which is fair enough to use without loss of generality), byte, which allows generality through the TSCHAR::EncodeToBytes() and TSCHAR::DecodeFromBytes(), and the lonely unsigned to refer to a line number within a multi-line text buffer. I will probably get rid of that one by using an abstract TIXLINE (line-index type) in TTEXTBUF.

The only shortcut I took from my original idea is that both the regex definition string and the target text have to be of the same type, but templatizing on two string types seemed a bit overkill and I can always easily convert at the use point via special-purpose inter-string-type conversion inline functions (possibly even template-based to avoid too much rewriting).

Now that I think of it, if the matched target is a sparse or arbitrarily ordered disk-based text buffer, abstracted away through a very smart TTEXTBUF class, I will probably have to allow specifying the regex itself with another type.

It’s taken a long time to develop this C++ style, but I’m starting to feel really happy with how my code is looking – for the first time after over 10 years of C++ programming!

I haven’t posted much on the blog lately, as I’ve been focusing on development. Bringing ViEmu to maturity is taking quite some work, although the result is satisfying – I’d like to blog more in the future, but we’ll have to see whether development leaves time and energy for it…

ViEmu 1.2 released & next plans

Friday, September 30th, 2005

I released ViEmu 1.2 just a while ago. During the last few weeks I’ve felt fully energized, and I’ve completed quite a lot of work.

I’ve seen almost no traffic coming from the new articles, but then, Google doesn’t like my page much, so it’s understandable. Maybe I am in the dreaded “sandbox”, or maybe I’m missing some key stuff on the page. My consolation is that anyone who searches for “vi keystrokes visual studio” or anything remotely similar will definitely find ViEmu through the many mentions that appear on the first search results page (my own page is nowhere to be seen in the first 20 or so pages).

I thought I’d share my plans for the next steps, especially since I haven’t been working much on NGEDIT lately.

Well, actually a lot of the current work will reflect directly on NGEDIT. I already have the regular expression framework that will power ViEmu, and the good thing is that it’s written using the C++ string classes I talked about a while ago, so it will transplant directly to NGEDIT’s multi-format text processing engine (even if ViEmu only uses UCS-2 two-bytes-per-char support). I will be porting the latest code I wrote for NGEDIT, which is among the most innovative stuff, to this string support, which is evolving within ViEmu.

My intention is to focus on ViEmu for a while longer – until I get it to a level I feel comfortable with. That means, basically, customers’ requests and vi (ex) command line emulation. NGEDIT is a product with much more potential, but I feel I need to give ViEmu enough gas for it to work well. It’s very motivating to work directly on customers’ requests. And the fact that it is already able to generate income is also a great incentive (compared with NGEDIT, which will still take some time to become a product).

I will estimate a time frame, given that I can always explain later why I missed it badly 🙂 I calculate that a bit over one month will be enough to get ViEmu to my desired level, and then I will be able to invest much more effort in NGEDIT while still improving ViEmu.